AbstractCrossBuildPerformanceTest.groovy

Extract common code for profiler report generation

    • -29
    • +5
    ./AbstractCrossBuildPerformanceTest.groovy
  1. … 3 more files in changeset.
Rename AbstractCrossBuild{GradleProfiler -> }PerformanceTest

There is only one.

    • -0
    • +105
    ./AbstractCrossBuildPerformanceTest.groovy
  1. … 9 more files in changeset.
Remove AbstractCrossBuildPerformanceTest

    • -78
    • +0
    ./AbstractCrossBuildPerformanceTest.groovy
Rename CrossBuild{ -> GradleInternal}PerformanceTestRunner

    • -3
    • +3
    ./AbstractCrossBuildPerformanceTest.groovy
  1. … 6 more files in changeset.
First working version

    • -4
    • +2
    ./AbstractCrossBuildPerformanceTest.groovy
  1. … 12 more files in changeset.
Rename performance test infrastructure legacy classes

To make clear that they are using the Gradle build internal infrastructure.

    • -2
    • +2
    ./AbstractCrossBuildPerformanceTest.groovy
  1. … 44 more files in changeset.
Allow using Gradle profiler in cross version tests

    • -2
    • +2
    ./AbstractCrossBuildPerformanceTest.groovy
  1. … 13 more files in changeset.
Allow using Gradle profiler in cross version tests

    • -2
    • +2
    ./AbstractCrossBuildPerformanceTest.groovy
  1. … 12 more files in changeset.
Allow using Gradle profiler in cross version tests

    • -2
    • +2
    ./AbstractCrossBuildPerformanceTest.groovy
  1. … 13 more files in changeset.
Allow using Gradle profiler in cross version tests

    • -2
    • +2
    ./AbstractCrossBuildPerformanceTest.groovy
  1. … 13 more files in changeset.
Rerun distributed performance test in RERUNNER step (#8801)

After the improvement of automatically rerunning and tagging, we want to manage performance tests in the same way:

- Only run each performance test scenario once.
- If it fails, `GRADLE_RERUNNER` will kick in and rerun the failed scenario. The good thing is that it might be scheduled to another build agent, which mitigates the effect of a bad agent.

This PR does:

- Remove all `Retry` from performance tests.
- Add `GRADLE_RERUNNER` to performance tests and refactor some code.
- Add tests for `PerformanceTest`.
- Since `GRADLE_RERUNNER` depends on reading the binary test results, write the binary test result file in `RerunableDistributedPerformanceTest`.

    • -5
    • +0
    ./AbstractCrossBuildPerformanceTest.groovy
  1. … 23 more files in changeset.
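
The rerun flow described in the commit above boils down to a two-pass pattern: run every scenario once, collect the failures, then retry only those. The Groovy sketch below is purely illustrative; the class and method names are hypothetical and are not the actual Gradle build internals.

```
// Hypothetical sketch of the rerun pattern: run each scenario once, remember the
// failures, and retry only those in a second (possibly relocated) pass.
class ScenarioRerunner {
    // First pass: run every scenario exactly once and collect the failures.
    List<String> firstRun(List<String> scenarios, Closure<Boolean> runScenario) {
        scenarios.findAll { !runScenario(it) }
    }

    // RERUNNER pass: retry only the failed scenarios, which may be scheduled
    // to another build agent and so mitigates the effect of a bad agent.
    List<String> rerun(List<String> failedScenarios, Closure<Boolean> runScenario) {
        failedScenarios.findAll { !runScenario(it) }
    }
}
```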
Rebase to latest master

    • -5
    • +0
    ./AbstractCrossBuildPerformanceTest.groovy
  1. … 23 more files in changeset.
Run distributed performance test with retry

    • -5
    • +0
    ./AbstractCrossBuildPerformanceTest.groovy
  1. … 9 more files in changeset.
Run distributed performance test with retry

    • -5
    • +0
    ./AbstractCrossBuildPerformanceTest.groovy
  1. … 9 more files in changeset.
Run distributed performance test with retry

    • -5
    • +0
    ./AbstractCrossBuildPerformanceTest.groovy
  1. … 9 more files in changeset.
Run distributed performance test with retry

    • -5
    • +0
    ./AbstractCrossBuildPerformanceTest.groovy
  1. … 8 more files in changeset.
Run distributed performance test with retry

    • -5
    • +0
    ./AbstractCrossBuildPerformanceTest.groovy
  1. … 9 more files in changeset.
Polish PR

    • -5
    • +0
    ./AbstractCrossBuildPerformanceTest.groovy
  1. … 4 more files in changeset.
Polish PR

    • -5
    • +0
    ./AbstractCrossBuildPerformanceTest.groovy
  1. … 4 more files in changeset.
Use Spock's Retry extension instead of RetryRule

    • -7
    • +5
    ./AbstractCrossBuildPerformanceTest.groovy
  1. … 27 more files in changeset.
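
For context on what "Spock's Retry extension" refers to: instead of a hand-rolled JUnit `RetryRule`, Spock ships a `@Retry` annotation in `spock.lang` that re-executes a failing feature method. A minimal usage sketch (the spec name and retry count are illustrative):

```
import spock.lang.Retry
import spock.lang.Specification

// Spock's built-in Retry extension replaces the hand-rolled RetryRule:
// a failing feature method is re-executed up to the given number of times.
@Retry(count = 3)
class ExamplePerformanceSpec extends Specification {
    def "flaky measurement eventually passes"() {
        expect:
        System.currentTimeMillis() > 0
    }
}
```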
Move RetryRule up in AbstractCrossBuildPerformanceTest

Signed-off-by: Paul Merlin <paul@gradle.com>

    • -16
    • +26
    ./AbstractCrossBuildPerformanceTest.groovy
  1. … 1 more file in changeset.
Redirect output of performance tests to disk

The performance tests now no longer use our integration test fixtures. Instead they use a simple ProcessBuilder to be closer to what a user would do. More specifically, we no longer capture output in the test VM, as that can introduce its own flakiness into the measurement. It also reduces the memory needed to run the performance tests. Last but not least, spooling the output to disk makes later analysis easier.

    • -2
    • +4
    ./AbstractCrossBuildPerformanceTest.groovy
  1. … 9 more files in changeset.
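
The switch described above corresponds roughly to the standard JDK `ProcessBuilder` API with file redirection; the following is a generic sketch, not the actual test fixture code, and the command and path are illustrative.

```
// Generic sketch: launch the build as an external process and spool its output
// to a file on disk instead of capturing it in the test VM (paths and command
// are illustrative).
def logFile = new File('build/performance-test-output.log')
logFile.parentFile.mkdirs()
def process = new ProcessBuilder('gradle', 'help')
    .redirectErrorStream(true)   // merge stderr into stdout
    .redirectOutput(logFile)     // write all output straight to disk
    .start()
assert process.waitFor() == 0
```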
Add rule for setting the test method name as performance test id

    • -0
    • +4
    ./AbstractCrossBuildPerformanceTest.groovy
  1. … 10 more files in changeset.
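
The "rule" mentioned here is presumably along the lines of JUnit's `TestName` rule. A hypothetical sketch of deriving a performance test id from the running test method (the class and field names are illustrative, not the actual infrastructure):

```
import org.junit.Rule
import org.junit.rules.TestName

// Illustrative sketch: a JUnit TestName rule exposes the currently running
// test method, which can then serve as the default performance test id.
class ExampleCrossBuildPerformanceTest {
    @Rule
    public final TestName testName = new TestName()

    protected String defaultPerformanceTestId() {
        testName.methodName
    }
}
```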
Clean up performance tests

- remove unused Java software model tests

- remove unused native scenario tests

- move tests into appropriate packages

- remove unused test categories

- give tests more descriptive names

- remove unused test templates

    • -3
    • +0
    ./AbstractCrossBuildPerformanceTest.groovy
  1. … 103 more files in changeset.
Use separate user home for each version under performance test

Until now, each performance test was using a separate test directory, but the different versions under test were all using the same directory. This meant that there was a consistent bias depending on the order in which the versions were executed.

    • -1
    • +1
    ./AbstractCrossBuildPerformanceTest.groovy
  1. … 14 more files in changeset.
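
A minimal illustration of the fix described above, assuming a hypothetical layout helper: each Gradle version under test gets its own user home directory underneath the test directory instead of sharing one.

```
// Hypothetical sketch: each Gradle version under test gets its own user home
// below the test directory, so caches from one version cannot bias another.
File gradleUserHomeFor(File testDirectory, String gradleVersion) {
    def userHome = new File(testDirectory, "user-home-$gradleVersion")
    userHome.mkdirs()
    return userHome
}
```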
Simplify performance measurements

The many measurements that we injected into the build under test were skewing our measurements to the point of making them unreliable or unrealistic. We now only measure end-to-end build time. Here's a breakdown with the rationale for removing each other measurement:

- configuration time: can be done by having a `gradle help` scenario instead
- execution time: the user does not care whether a long build is stuck in execution or configuration
- setup/teardown: was ill-defined anyway, basically total - configuration - execution
- JIT compile time: this is nothing we can influence and thus pointless to measure
- memory usage: was only measured at one point in the build, which doesn't tell us anything about any problems at any other point in the build
- GC CPU time: if this increases we'd see it in total execution time

Generally, looking at the graphs has never pointed us directly at the problem; we always need to profile anyway. So instead of skewing our measurements with lots of profiling code, we should instead use a dedicated profiling job to measure whether we actually see a regression.

Memory usage can be tested indirectly by giving each scenario a reasonable amount of memory. If memory usage rises above that reasonable limit, we'd see execution time rise, telling us about the regression. Generally, we do not optimize for smallest memory usage, but for fastest execution with reasonable memory overhead.

This change also removes all JVM tweaking and wait periods which we introduced in an attempt to make tests more predictable and stable. These tweaks have not really helped us achieve more stable tests and have often done the opposite. They also add lots of complexity and make our tests more unrealistic. A real user will not add all these JVM options to Gradle.

    • -2
    • +1
    ./AbstractCrossBuildPerformanceTest.groovy
  1. … 59 more files in changeset.
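
In the spirit of "we now only measure end-to-end build time", a minimal sketch of such a measurement follows; it is purely illustrative and not the actual harness, with the build command chosen arbitrarily.

```
// Illustrative end-to-end measurement: the only timing taken is wall-clock time
// around the whole build invocation.
def start = System.nanoTime()
def process = new ProcessBuilder('gradle', 'help').inheritIO().start()
process.waitFor()
def totalMillis = (System.nanoTime() - start) / 1000000
println "End-to-end build time: ${totalMillis} ms"
```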
Wire integration test build context instance

- enables using a performance-test-specific build context when an instance is properly wired

    • -1
    • +5
    ./AbstractCrossBuildPerformanceTest.groovy
  1. … 48 more files in changeset.
Allow specifying the list of baselines from command line

This commit introduces the ability to set the list of baseline versions for performance tests through a command-line flag (`--baselines`) or a system property (`org.gradle.performance.baselines`). The list must be specified as a comma-separated or semicolon-separated list of versions.

Two versions are handled specially:

* `last` corresponds to the last release of Gradle

* `nightly` corresponds to the latest build of Gradle (`master` branch)

This commit also removes the "ad-hoc" mode for executing tests. The idea is to replace it with this flag, which can be set to `nightly`. However, the "ad-hoc" mode skipped writing results to the database, in order to avoid polluting the performance results DB with tests. If you want to do this, you need to set the `org.gradle.performance.db.url` property to a local, temporary database:

```
./gradlew cleanSmallOldJava smallOldJava cleanPerformanceTest performanceTest --scenarios 'clean Java build smallOldJava (daemon)' -x prepareSamples --baselines nightly -Porg.gradle.performance.db.url=jdbc:h2:./build/database
```

    • -3
    • +6
    ./AbstractCrossBuildPerformanceTest.groovy
  1. … 10 more files in changeset.
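
A rough sketch of how a comma- or semicolon-separated baseline list with the special `last` and `nightly` values could be interpreted; the method name and the example version strings are hypothetical, not the actual parser.

```
// Hypothetical sketch of parsing the --baselines / org.gradle.performance.baselines value.
List<String> parseBaselines(String raw, String lastRelease, String nightlyVersion) {
    raw.split('[,;]')
        .collect { it.trim() }
        .findAll { !it.empty }
        .collect { version ->
            switch (version) {
                case 'last':    return lastRelease      // last release of Gradle
                case 'nightly': return nightlyVersion   // latest build from master
                default:        return version
            }
        }
}

assert parseBaselines('last;nightly,6.0', '6.8.3', '7.0-milestone-1') ==
    ['6.8.3', '7.0-milestone-1', '6.0']
```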
Fix GC logging location in Maven performance tests

    • -1
    • +1
    ./AbstractCrossBuildPerformanceTest.groovy
  1. … 4 more files in changeset.
Execute performance test scenarios on a fresh working copy

Reusing whatever state the last test left behind can make performance seem better (because of preexisting caches) or worse (because of lots of output). This makes the results dependent on the order in which the tests are executed. It also prevented us from using incremental build for the project templates. We now create a fresh copy of the template project for each test run, fixing both of these problems at once.

    • -0
    • +1
    ./AbstractCrossBuildPerformanceTest.groovy
  1. … 14 more files in changeset.
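
A minimal sketch of "fresh copy of the template project for each test run", assuming a plain recursive copy; the helper below uses only the JDK and is illustrative rather than the actual fixture code.

```
import java.nio.file.Files
import java.nio.file.Path

// Illustrative sketch: copy the project template into a fresh directory for each
// run, so no state left behind by a previous scenario can skew the measurement.
void copyTemplate(Path template, Path freshWorkingCopy) {
    def paths = Files.walk(template)
    try {
        paths.forEach { Path source ->
            Path target = freshWorkingCopy.resolve(template.relativize(source).toString())
            if (Files.isDirectory(source)) {
                Files.createDirectories(target)
            } else {
                Files.copy(source, target)
            }
        }
    } finally {
        paths.close()
    }
}
```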