Simplify performance measurements
The many measurements we injected into the build under test were skewing
the results to the point of making them unreliable or unrealistic. We now
only measure end-to-end build time. Here's a breakdown with the rationale
for removing each other measurement:
- configuration time: can be measured separately with a `gradle help` scenario instead
- execution time: the user does not care whether a long build is stuck in execution or in configuration
- setup/teardown: was ill-defined anyway, basically total - configuration - execution
- JIT compile time: nothing we can influence, and thus pointless to measure
- memory usage: was only measured at one point in the build, which tells us nothing about
  problems at any other point in the build
- GC CPU time: an increase would show up in total execution time anyway
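As a sketch of the first point: the wall time of a `gradle help` invocation approximates configuration time, because `help` configures the build but executes no build work. A minimal, hypothetical harness for timing any scenario end to end might look like this (the command is a placeholder; a trivial no-op stands in so the sketch is self-contained):

```python
import subprocess
import time

def measure_scenario(cmd):
    """Run a build command and return its end-to-end wall time in seconds."""
    start = time.monotonic()
    subprocess.run(cmd, check=True, stdout=subprocess.DEVNULL)
    return time.monotonic() - start

# In a real scenario this would be something like ["./gradlew", "help"];
# a no-op command is used here so the sketch runs anywhere.
elapsed = measure_scenario(["true"])
print(f"end-to-end time: {elapsed:.3f}s")
```

Because only total wall time is recorded, the harness stays out of the build process itself and cannot skew what it measures.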
Generally, looking at the graphs has never pointed us directly at the problem; we always need to
profile anyway. So instead of skewing our measurements with lots of profiling code, we should
use a dedicated profiling job to investigate when we actually see a regression.
Memory usage can be tested indirectly by giving each scenario a reasonable amount of memory.
If memory usage rises above that limit, execution time will rise too, telling us about
the regression. Generally, we do not optimize for the smallest memory usage, but for the fastest
execution with reasonable memory overhead.
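The fixed-memory approach above amounts to capping the heap in each scenario's `gradle.properties`; a sketch, where the 512m value is purely illustrative and not the actual limit used:

```
# gradle.properties for a performance scenario (value is illustrative):
# cap the heap so a memory regression surfaces as longer execution time
org.gradle.jvmargs=-Xmx512m
```

With the cap in place, excessive allocation forces more GC work, which shows up directly in the one metric we still track.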
This change also removes all JVM tweaking and wait periods which we introduced in an attempt to
make tests more predictable and stable. These tweaks have not really helped us achieve more stable
tests and have often done the opposite. They also add a lot of complexity and make our tests less
realistic: a real user will not add all these JVM options to Gradle.
21 Nov 16 54d691d4780b25e9b989ed04356f755d2b70e973
Fix native performance tests
12 Oct 16 e7c93ef9b1e216bfbabfecb480edf09c82e864dc