GradleBuildExperimentSpec.groovy

Allow easier adding of build mutator for cross build tests

  1. … 4 more files in changeset.
Separate Gradle profiler specific options

  1. … 7 more files in changeset.
First working version

  1. … 12 more files in changeset.
Support measuring build operations in the profiler report

  1. … 5 more files in changeset.
Convert JavaUpToDatePerformanceTest to use gradle profiler

  1. … 5 more files in changeset.
Remove YourkitProfiler

This profiler wasn't very useful, as it also measured warmup runs.

We don't use it on CI, and locally one can use the Gradle Profiler with the `--profile yourkit` option.

  1. … 5 more files in changeset.
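For reference, a local invocation might look like the following; the `--profile yourkit` option comes from the commit message above, while the project directory and task are made-up placeholders:

```
# Hypothetical example: profile a build with YourKit via the Gradle Profiler
gradle-profiler --profile yourkit --project-dir my-project assemble
```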
Explicit clean up before performance test measurements (#2640)

Previously we used a workaround where the odd runs were excluded from the measurements and executed a `clean` build instead of the measured build.

Performance tests can now specify `cleanTasks`, similar to how they specify `tasksToRun`. These `cleanTasks` will be executed before each run (warm-up and measurement runs alike).

A new column is added to performance test tables to track this new information. It is a nullable column to allow for test results added by older versions of Gradle.

I've updated the task output caching tests and the Maven vs. Gradle comparisons to declare `cleanTasks` instead of the old hack with the odd-even runs.

  1. … 27 more files in changeset.
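As an illustration, a test declaring clean tasks might look roughly like this; `tasksToRun` and `cleanTasks` are named in the commit message, while the class name, test name, and the surrounding fixture are assumptions:

```groovy
// Hypothetical sketch of a cross-version performance test using cleanTasks
class CleanAssemblePerformanceTest extends AbstractCrossVersionPerformanceTest {
    def "clean assemble"() {
        given:
        runner.tasksToRun = ['assemble']
        runner.cleanTasks = ['clean'] // executed before each warm-up and measurement run

        when:
        def result = runner.run()

        then:
        result.assertCurrentVersionHasNotRegressed()
    }
}
```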
Simplify performance measurements

The many measurements that we injected into the build under test were skewing our measurements to the point of making them unreliable or unrealistic. We now only measure end-to-end build time. Here's a breakdown with the rationale for removing each of the other measurements:

- configuration time: can be measured with a `gradle help` scenario instead (see the sketch after this entry)
- execution time: the user does not care whether a long build is stuck in execution or configuration
- setup/teardown: was ill-defined anyway, basically total - configuration - execution
- JIT compile time: this is nothing we can influence and thus pointless to measure
- memory usage: was only measured at one point in the build, which doesn't tell us anything about problems at any other point in the build
- GC CPU time: if this increased, we'd see it in total execution time

Generally, looking at the graphs has never pointed us directly at the problem; we always need to profile anyway. So instead of skewing our measurements with lots of profiling code, we should instead use a dedicated profiling job to check whether we actually see a regression.

Memory usage can be tested indirectly by giving each scenario a reasonable amount of memory. If memory usage rises above that reasonable limit, we'd see execution time rise, telling us about the regression. Generally, we do not optimize for the smallest memory usage, but for the fastest execution with reasonable memory overhead.

This change also removes all JVM tweaking and wait periods which we introduced in an attempt to make tests more predictable and stable. These tweaks have not really helped us achieve more stable tests and have often done the opposite. They also add lots of complexity and make our tests more unrealistic. Real users will not add all these JVM options to Gradle.

  1. … 59 more files in changeset.
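As referenced in the list above, a minimal Gradle Profiler scenario for tracking configuration time could look like this; the `gradle help` idea is from the commit message, while the scenario name and file name are assumptions:

```
// performance.scenarios: a build that only configures and runs `help`
configurationTime {
    tasks = ["help"]
}
```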
Support customizing invocations in performance tests

- remove previous parameterized generics that led to a dead end

  1. … 13 more files in changeset.
Execute performance test scenarios on a fresh working copy

Reusing whatever state the last test left behind can make performance seem better (because of preexisting caches) or worse (because of lots of output). This makes the results dependent on the order in which the tests are executed. It also prevented us from using incremental build for the project templates.

We now create a fresh copy of the template project for each test run, fixing both of these problems at once.

  1. … 14 more files in changeset.
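A minimal Groovy sketch of that idea (the method name and parameters are hypothetical; the real fixture does more):

```groovy
// Copy the pristine template into a per-run directory so each test
// starts from the same state, regardless of execution order.
def freshWorkingCopy(File templateDir, File runDir) {
    runDir.deleteDir()
    new AntBuilder().copy(todir: runDir.absolutePath) {
        fileset(dir: templateDir.absolutePath)
    }
    return runDir
}
```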
Move all result-related classes to org.gradle.performance.results

  1. … 55 more files in changeset.
Extract performance test fixtures to separate project

    • ./GradleBuildExperimentSpec.groovy: +94, -0
  1. … 248 more files in changeset.