AbstractCrossVersionPerformanceTest.groovy

Allow using Gradle profiler in cross version tests

    • -2
    • +2
    ./AbstractCrossVersionPerformanceTest.groovy
  1. … 13 more files in changeset.
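For orientation, this is roughly how the standalone Gradle profiler is invoked to benchmark a build on its own; the project path and task name below are placeholders, and the exact flags may vary between gradle-profiler versions:

```
gradle-profiler --benchmark --project-dir /path/to/project help
```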
Allow using Gradle profiler in cross version tests

    • -2
    • +2
    ./AbstractCrossVersionPerformanceTest.groovy
  1. … 12 more files in changeset.
Allow using Gradle profiler in cross version tests

    • -2
    • +2
    ./AbstractCrossVersionPerformanceTest.groovy
  1. … 13 more files in changeset.
Allow using Gradle profiler in cross version tests

    • -2
    • +2
    ./AbstractCrossVersionPerformanceTest.groovy
  1. … 13 more files in changeset.
Rerun distributed performance test in RERUNNER step (#8801)

After the improvement of automatic rerunning and tagging, we want to manage performance tests in the same way:

- Only run each performance test scenario once.

- If a scenario fails, `GRADLE_RERUNNER` kicks in and reruns it. The rerun might be scheduled onto another build agent, which mitigates the effect of a bad agent.

This PR does the following:

- Remove all `Retry` usages from performance tests.

- Add `GRADLE_RERUNNER` to performance tests and refactor some code.

- Add tests for `PerformanceTest`.

- Since `GRADLE_RERUNNER` depends on reading the binary test results, write the binary test result file in `RerunableDistributedPerformanceTest`.

    • -5
    • +0
    ./AbstractCrossVersionPerformanceTest.groovy
  1. … 23 more files in changeset.
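A minimal sketch of the rerun flow described in the PR above, assuming a hypothetical marker-file layout; this is not the actual `GRADLE_RERUNNER` or `RerunableDistributedPerformanceTest` implementation, only an illustration of the control flow:

```
// Hypothetical: the first run writes a "<scenario>.failed" marker per failed scenario;
// the rerun step collects those names and launches them again, possibly on another agent.
class RerunSketch {
    static List<String> collectFailedScenarios(File resultsDir) {
        if (!resultsDir.exists()) {
            return []
        }
        (resultsDir.listFiles() ?: [])
                .findAll { it.name.endsWith('.failed') }   // markers left by the first run
                .collect { it.name - '.failed' }           // scenario names to rerun
    }

    static void main(String[] args) {
        def failed = collectFailedScenarios(new File('build/performance-results'))
        if (failed) {
            println "Rerunning failed scenarios (possibly on another agent): ${failed}"
        } else {
            println 'All scenarios passed on the first run.'
        }
    }
}
```

The key property is that only the failed scenarios are run a second time, and because the rerun is a fresh invocation it can land on a different build agent.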
Rebase to latest master

    • -5
    • +0
    ./AbstractCrossVersionPerformanceTest.groovy
  1. … 23 more files in changeset.
Run distributed performance test with retry

    • -5
    • +0
    ./AbstractCrossVersionPerformanceTest.groovy
  1. … 9 more files in changeset.
Run distributed performance test with retry

    • -5
    • +0
    ./AbstractCrossVersionPerformanceTest.groovy
  1. … 9 more files in changeset.
Run distributed performance test with retry

    • -5
    • +0
    ./AbstractCrossVersionPerformanceTest.groovy
  1. … 9 more files in changeset.
Run distributed performance test with retry

    • -5
    • +0
    ./AbstractCrossVersionPerformanceTest.groovy
  1. … 8 more files in changeset.
Run distributed performance test with retry

    • -5
    • +0
    ./AbstractCrossVersionPerformanceTest.groovy
  1. … 9 more files in changeset.
Improve tagging process

    • -5
    • +0
    ./AbstractCrossVersionPerformanceTest.groovy
  1. … 7 more files in changeset.
Improve tagging process

    • -5
    • +0
    ./AbstractCrossVersionPerformanceTest.groovy
  1. … 7 more files in changeset.
Use Spock's Retry extension instead of RetryRule

    • -5
    • +5
    ./AbstractCrossVersionPerformanceTest.groovy
  1. … 27 more files in changeset.
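For reference, a minimal Spock spec using the built-in `@Retry` extension that this commit switches to; the spec name and assertion are made up:

```
import spock.lang.Retry
import spock.lang.Specification

// Spock's @Retry extension re-executes a failed feature method a configurable
// number of times before the failure is reported.
@Retry(count = 3)
class FlakyPerformanceSpec extends Specification {
    def "scenario that occasionally fails on a bad agent"() {
        expect:
        // Stand-in for a real measurement; trivially true here.
        1 + 1 == 2
    }
}
```

The advantage over the previous `RetryRule` is that the retry behaviour ships with Spock itself instead of being a project-specific rule.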
Report test results to Slack

    • -0
    • +4
    ./AbstractCrossVersionPerformanceTest.groovy
  1. … 8 more files in changeset.
Add a performance test specific to parallel downloads

This commit introduces a performance test measuring the impact of parallel downloads:

- parallel download of artifacts

- parallel download of metadata

The test project is a Spring Boot app (copied from their samples), and we're using a remote repository test fixture to simulate network latency.

    • -1
    • +1
    ./AbstractCrossVersionPerformanceTest.groovy
  1. … 7 more files in changeset.
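As an illustration of how such a scenario is typically declared against `AbstractCrossVersionPerformanceTest`, here is a rough sketch; the runner property names follow the project's performance-test DSL but should be treated as assumptions, and the template and task names are placeholders:

```
class ParallelDownloadsPerformanceTest extends AbstractCrossVersionPerformanceTest {
    def "resolves dependencies with parallel downloads"() {
        given:
        // Placeholder template name for the Spring Boot sample project mentioned above.
        runner.testProject = "springBootApp"
        runner.tasksToRun = ["resolveDependencies"]

        when:
        def result = runner.run()

        then:
        result.assertCurrentVersionHasNotRegressed()
    }
}
```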
Don't retry adhoc performance tests (#1810)

When running adhoc performance tests we currently retry them when they fail. Since we are not interested in the measured result but only in the flame graphs, we should not retry adhoc performance tests.

    • -3
    • +2
    ./AbstractCrossVersionPerformanceTest.groovy
  1. … 6 more files in changeset.
Redirect output of performance tests to disk

The performance tests no longer use our integration test fixtures. Instead they use a simple ProcessBuilder to be closer to what a user would do. More specifically, we no longer capture output in the test VM, as that can introduce its own flakiness into the measurement. It also reduces the memory needed to run the performance tests. Last but not least, spooling the output to disk makes later analysis easier.

    • -3
    • +5
    ./AbstractCrossVersionPerformanceTest.groovy
  1. … 9 more files in changeset.
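A minimal, self-contained sketch of the pattern described above: launch the build with a plain `ProcessBuilder` and redirect its output to a file on disk instead of capturing it in the test VM (paths and arguments are illustrative):

```
// Spool the build output to a log file so nothing is buffered in the test JVM.
def logFile = new File('build/performance-output.log')
logFile.parentFile.mkdirs()

def builder = new ProcessBuilder('./gradlew', 'help')
builder.directory(new File('.'))          // project under test
builder.redirectErrorStream(true)         // merge stderr into stdout
builder.redirectOutput(logFile)           // write output to disk

def process = builder.start()
def exitCode = process.waitFor()
println "Build finished with exit code ${exitCode}; output written to ${logFile}"
```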
Retry performance tests on failure

    • -5
    • +22
    ./AbstractCrossVersionPerformanceTest.groovy
Add rule for setting the test method name as performance test id

    • -0
    • +4
    ./AbstractCrossVersionPerformanceTest.groovy
  1. … 10 more files in changeset.
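The rule in question maps the running test method's name to the performance test id. Illustrative only (this is the generic JUnit mechanism, not the project's actual rule class):

```
import org.junit.Rule
import org.junit.Test
import org.junit.rules.TestName

class PerformanceTestIdExample {
    // TestName exposes the name of the currently executing test method,
    // which can then be used as the performance test id.
    @Rule
    public TestName testName = new TestName()

    @Test
    void usesItsOwnNameAsTestId() {
        def testId = testName.methodName
        assert testId == 'usesItsOwnNameAsTestId'
    }
}
```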
Clean up performance tests

- remove unused Java software model tests

- remove unused native scenario tests

- move tests into appropriate packages

- remove unused test categories

- give tests more descriptive names

- remove unused test templates

    • -2
    • +2
    ./AbstractCrossVersionPerformanceTest.groovy
  1. … 103 more files in changeset.
Use separate user home for each version under performance test

Until now, each performance test was using a separate test directory, but the different versions under test were all using the same directory. This meant that there was a consistent bias depending on the order in which the versions were executed.

    • -1
    • +1
    ./AbstractCrossVersionPerformanceTest.groovy
  1. … 14 more files in changeset.
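A small illustration of the idea, assuming the builds are launched externally; Gradle's `--gradle-user-home` option points each version under test at its own directory (versions and paths are placeholders):

```
// Give each Gradle version under test its own user home so caches populated by
// one version cannot bias the measurements of another.
['4.10.3', '5.0-milestone-1'].each { version ->
    def userHome = new File("build/gradle-user-homes/${version}")
    userHome.mkdirs()
    def command = ['./gradlew', 'help', '--gradle-user-home', userHome.absolutePath]
    println "Would run: ${command.join(' ')}"
}
```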
Simplify performance measurements

The many measurements that we injected into the build under test were skewing our measurements to the point of making them unreliable or unrealistic. We now only measure end-to-end build time. Here's a breakdown with the rationale for removing each of the other measurements:

- configuration time: can be covered by having a `gradle help` scenario instead

- execution time: the user does not care whether a long build is stuck in execution or configuration

- setup/teardown: was ill-defined anyway, basically total - configuration - execution

- JIT compile time: this is nothing we can influence and thus pointless to measure

- memory usage: was only measured at one point in the build, which doesn't tell us anything about problems at any other point in the build

- GC CPU time: if this increases, we'd see it in total execution time

Generally, looking at the graphs has never pointed us directly at the problem; we always need to profile anyway. So instead of skewing our measurements with lots of profiling code, we should instead use a dedicated profiling job when we actually see a regression.

Memory usage can be tested indirectly by giving each scenario a reasonable amount of memory. If memory usage rises above that reasonable limit, we'd see execution time rise, telling us about the regression. Generally, we do not optimize for the smallest memory usage, but for the fastest execution with reasonable memory overhead.

This change also removes all JVM tweaking and wait periods which we introduced in an attempt to make tests more predictable and stable. These tweaks have not really helped us achieve more stable tests and have often done the opposite. They also add a lot of complexity and make our tests more unrealistic. A real user will not add all these JVM options to Gradle.

    • -4
    • +4
    ./AbstractCrossVersionPerformanceTest.groovy
  1. … 59 more files in changeset.
Wire integration test build context instance

- enables using a performance-test-specific build context when an instance is properly wired

    • -2
    • +5
    ./AbstractCrossVersionPerformanceTest.groovy
  1. … 48 more files in changeset.
Add ForkingUnderDevelopmentGradleDistribution and use it for perf tests

- reduces differences between master vs. snapshot versions in performance test execution

    • -2
    • +2
    ./AbstractCrossVersionPerformanceTest.groovy
  1. … 6 more files in changeset.
Adds an Android performance test

This commit introduces a new performance test for Android builds. Unlike traditional performance tests, the "templates" of Android builds are real, external projects checked out from Git. They are tweaked to allow execution on the latest (development) version of Gradle, and to add the traditional measurements (CPU, heap).

Adding an Android performance test requires extending the `AbstractAndroidPerformanceTest` class, which is itself a cross-version performance test.

This first step adds a single "medium" Android build as an experiment. It's worth noting that the performance test will only execute if:

- the Android SDK is installed and its path is set through the `ANDROID_HOME` environment variable

- the `$ANDROID_HOME/licenses` directory contains the accepted license files

    • -1
    • +1
    ./AbstractCrossVersionPerformanceTest.groovy
  1. … 6 more files in changeset.
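A small, hedged sketch of the environment check implied by those requirements; this is not the actual test infrastructure, just an illustration of the precondition:

```
// The Android performance test only makes sense when ANDROID_HOME points at an
// SDK whose licenses have been accepted.
boolean androidSdkAvailable() {
    def androidHome = System.getenv('ANDROID_HOME')
    if (androidHome == null) {
        return false
    }
    new File(androidHome, 'licenses').isDirectory()
}

def message = androidSdkAvailable() ?
        'Android SDK found: the performance test can run.' :
        'ANDROID_HOME or accepted licenses missing: the performance test is skipped.'
println message
```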
Reintroduce gradle/gradle@784d747

- There was previously a concern that this would break the BuildScansPerformanceTest, but that test doesn't actually use the BaselineVersion class at all.

    • -4
    • +0
    ./AbstractCrossVersionPerformanceTest.groovy
  1. … 31 more files in changeset.
Revert "Make strict performance testing default."

This reverts commit 784d7476556e63c8d7f15919ae9875c3b7181aa3.

    • -0
    • +4
    ./AbstractCrossVersionPerformanceTest.groovy
  1. … 31 more files in changeset.
Make strict performance testing default.

- This makes it impossible to set maximum regression limits manually and defaults to using a statistically derived allowable amount of variance.

    • -4
    • +0
    ./AbstractCrossVersionPerformanceTest.groovy
  1. … 31 more files in changeset.
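A toy illustration of the difference, not the project's actual statistics: instead of a hand-picked maximum regression, the allowed slowdown is derived from the spread of the baseline samples (the numbers and the two-standard-deviation threshold are made up for the example):

```
// Baseline build times in milliseconds (made-up sample data).
def baselineMillis = [980, 1010, 995, 1005, 1020, 990]
def mean = baselineMillis.sum() / baselineMillis.size()
def variance = baselineMillis.collect { (it - mean) ** 2 }.sum() / (baselineMillis.size() - 1)
def stdDev = Math.sqrt(variance as double)

// Allow the current version to be slower by up to two standard deviations,
// rather than by a manually chosen fixed limit.
def allowedMean = mean + 2 * stdDev
println "Baseline mean: ${mean} ms, allowed current mean: ${allowedMean} ms"
```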
Allow specifying the list of baselines from command line

This commit introduces the ability to set the list of baseline versions for performance tests through a command-line flag (`--baselines`) or a system property (`org.gradle.performance.baselines`). The list must be specified as a comma-separated or semicolon-separated list of versions.

Two versions are handled specially:

* `last` corresponds to the last release of Gradle

* `nightly` corresponds to the latest build of Gradle (`master` branch)

This commit also removes the "ad-hoc" mode for executing tests. The idea is to replace it with this flag, which can be set to `nightly`. However, the "ad-hoc" mode skipped writing results to the database, in order to avoid polluting the performance results DB with test runs. If you want to do this, you need to set `gradle.performance.db.url` to a local, temporary database:

```
./gradlew cleanSmallOldJava smallOldJava cleanPerformanceTest performanceTest --scenarios 'clean Java build smallOldJava (daemon)' -x prepareSamples --baselines nightly -Porg.gradle.performance.db.url=jdbc:h2:./build/database
```

    • -3
    • +2
    ./AbstractCrossVersionPerformanceTest.groovy
  1. … 10 more files in changeset.
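For completeness, a variation on the example above that passes a comma-separated baseline list combining a fixed version with the special values; the version number is a placeholder:

```
./gradlew performanceTest --scenarios 'clean Java build smallOldJava (daemon)' --baselines 3.5,last,nightly
```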