performanceTest.gradle

Keep properties close to Provider classes

  1. … 8 more files in changeset.
Migrated PerformanceTest plugin to Kotlin

  1. … 3 more files in changeset.
Dogfood JUnit Platform

  1. … 10 more files in changeset.
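Dogfooding the JUnit Platform typically comes down to switching the Test task over to it. A minimal sketch, assuming a task named performanceTest (the task name is an assumption, not taken from the changeset):

```kotlin
// Illustrative only: run the performance tests on the JUnit Platform.
tasks.named<Test>("performanceTest") {
    useJUnitPlatform()
}
```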
Make sure ProjectGeneratorTask is used appropriately

- Add a description for mediumSwiftMulti

- Add a TODO for getting rid of CppMultiProjectGeneratorTask

  1. … 2 more files in changeset.
Make all Gradle features available in performance tests

Signed-off-by: Paul Merlin <paul@gradle.com>

  1. … 2 more files in changeset.
Change intTestImage distribution to only contain dependencies

This assembles a distribution that contains only the dependencies required by the subproject under test. This way we enforce modularization more strictly and increase the cache hits for tests that run against the distribution image.

  1. … 19 more files in changeset.
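A rough sketch of what a trimmed-down test image could look like in the Kotlin DSL; the task name, target directory, and configuration are assumptions for illustration, not the actual build setup:

```kotlin
// Illustrative only: assemble a test image from this subproject's own runtime
// dependencies instead of the full distribution.
tasks.register<Sync>("intTestImage") {
    into(layout.buildDirectory.dir("integ-test-image"))
    from(configurations.named("runtimeClasspath"))   // only what the subproject under test needs
}
```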
Fix task dependencies between test project generation performance tests

  1. … 2 more files in changeset.
Add must-run-after rules to ensure that test projects are created first

Fix dependencies for different types of test project generator tasks

Update to latest nightly

+review REVIEW-6502

  1. … 8 more files in changeset.
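A sketch of the kind of ordering rule described above, assuming a performance test task named performanceTest and generator tasks matched by a naming convention (both are illustrative):

```kotlin
// Illustrative only: the performance tests must not start before the generated
// test projects exist on disk.
tasks.named("performanceTest") {
    mustRunAfter(tasks.matching { it.name.endsWith("TestProject") })
}
```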
Add performance tests to the verification group so they show up in the tasks report

Also delete test projects generated by a simple JavaExec task

Redirect output of performance tests to disk

The performance tests no longer use our integration test fixtures. Instead they use a simple ProcessBuilder to be closer to what a user would do. More specifically, we no longer capture output in the test VM, as that can introduce its own flakiness into the measurement. It also reduces the memory needed to run the performance tests. Last but not least, spooling the output to disk makes later analysis easier.

  1. … 9 more files in changeset.
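A minimal sketch of what running the build under test this way might look like; the wrapper invocation, task name, and file locations are assumptions for the example:

```kotlin
import java.io.File

// Illustrative only: launch the build under test the way a user would and spool
// its output straight to disk instead of capturing it in the test VM.
fun runBuildUnderTest(projectDir: File, outputFile: File): Int {
    val process = ProcessBuilder("./gradlew", "assemble")
        .directory(projectDir)
        .redirectErrorStream(true)                                // merge stderr into stdout
        .redirectOutput(ProcessBuilder.Redirect.to(outputFile))   // write all output to disk
        .start()
    return process.waitFor()                                      // exit code of the build
}
```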
Do not inject commit id from CI

The performance tests can retrieve it themselves from git directly.

  1. … 5 more files in changeset.
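For illustration, retrieving the commit id directly can be as simple as shelling out to git; this is a sketch, not the actual fixture code:

```kotlin
// Illustrative only: ask git for the current commit id instead of relying on a
// value injected by CI.
fun currentCommitId(): String =
    ProcessBuilder("git", "rev-parse", "HEAD")
        .start()
        .inputStream.bufferedReader().readText().trim()
```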
Do not show tests without results for most recent commit in summary

  1. … 4 more files in changeset.
Don't forward system streams during performance tests

These tests generate a huge amount of output, overwhelming TeamCity. Also, this forwarding skews performance test results, as we are then also measuring TeamCity's test output handler.

This also removes the need for the output workaround in the build scan performance tests, which were suffering from this the most.

  1. … 2 more files in changeset.
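A sketch of the kind of test configuration this points at, assuming the performance tests run through a Test task named performanceTest; the task name and the exact setting are assumptions:

```kotlin
// Illustrative only: stop forwarding System.out/System.err from the test workers,
// so TeamCity is not flooded and its output handler is not part of the measurement.
tasks.named<Test>("performanceTest") {
    testLogging {
        showStandardStreams = false
    }
}
```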
Heap dump on performance test OOME

Revert memory for performance test

Giving additional memory did not fix the OOME issue in the build scan tests.

Increase maxHeapSize as some performance builds are failing consistently with a Java heap space exception.
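A sketch of the settings these commits describe, with an assumed task name and illustrative values rather than the ones actually used:

```kotlin
// Illustrative only: larger heap for the test workers plus a heap dump on OOME
// so failures like the ones in the build scan tests can be analyzed afterwards.
tasks.named<Test>("performanceTest") {
    maxHeapSize = "4g"
    jvmArgs(
        "-XX:+HeapDumpOnOutOfMemoryError",
        "-XX:HeapDumpPath=build/heap-dumps"
    )
}
```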

Disable console output for performance tests

The output is verbose to the point of being useless in TeamCity, and depending on how well TeamCity handles that amount of output, it might put backpressure on the build under test, skewing the results. We still have the XML test reports in case we need to analyze the output after the fact.

Clean up performance tests

- remove unused Java software model tests

- remove unused native scenario tests

- move tests into appropriate packages

- remove unused test categories

- give tests more descriptive names

- remove unused test templates

  1. … 103 more files in changeset.
Simplify performance measurements

The many measurements that we injected into the build under test were skewing our results to the point of making them unreliable or unrealistic. We now only measure end-to-end build time. Here's a breakdown with the rationale for removing each of the other measurements:

- configuration time: can be done by having a `gradle help` scenario instead

- execution time: the user does not care whether a long build is stuck in execution or configuration

- setup/teardown: was ill-defined anyway, basically total - configuration - execution

- JIT compile time: this is not something we can influence, so it is pointless to measure

- Memory usage: was only measured at one point in the build, which doesn't tell us anything about problems at any other point in the build

- GC CPU time: If this increases we'd see it in total execution time

Generally, looking at the graphs has never pointed us directly at the problem; we always need to profile anyway. So instead of skewing our measurements with lots of profiling code, we should use a dedicated profiling job to investigate when we actually see a regression.

Memory usage can be tested indirectly by giving each scenario a reasonable amount of memory. If memory usage rises above that reasonable limit, we'd see execution time rise, telling us about the regression. Generally, we do not optimize for the smallest memory usage, but for the fastest execution with reasonable memory overhead.

This change also removes all the JVM tweaking and wait periods which we introduced in an attempt to make tests more predictable and stable. These tweaks have not really helped us achieve more stable tests and have often done the opposite. They also add lots of complexity and make our tests more unrealistic. A real user will not add all these JVM options to Gradle.

  1. … 59 more files in changeset.
Make integration tests cacheable again

All the inputs are now declared with the right path sensitivity.

We made the zip dependencies for integration tests cacheable, since this fixes the problem that we currently do not have reproducible zip files, which would otherwise void caching for all integration tests using the binary distribution.

PR #815

  1. … 10 more files in changeset.
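A minimal sketch of an input declared with explicit path sensitivity; the task and property names are invented for illustration:

```kotlin
import org.gradle.api.DefaultTask
import org.gradle.api.file.ConfigurableFileCollection
import org.gradle.api.file.DirectoryProperty
import org.gradle.api.tasks.*

// Illustrative only: relative path sensitivity keeps the cache key independent
// of the absolute checkout location, so the task output can be reused across machines.
abstract class PrepareTestImage : DefaultTask() {

    @get:InputFiles
    @get:PathSensitive(PathSensitivity.RELATIVE)
    abstract val distributionZips: ConfigurableFileCollection

    @get:OutputDirectory
    abstract val imageDirectory: DirectoryProperty

    @TaskAction
    fun prepare() {
        // unpack the declared inputs into imageDirectory ...
    }
}
```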
Only report performance measurements from current branch

Our performance graphs are currently showing all the results from all branches in one graph, which makes them very confusing to read, because the test configuration itself can vary from branch to branch. It also leads to way too many lines, so the graphs would be hard to decipher even if the results were the same on all branches.

This change assigns a different channel to each branch so we only report results that were measured on that branch.

Read branch name from project property

This makes the performance test workers for branches appear with the right branch name in the TeamCity UI.

  1. … 1 more file in changeset.
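A sketch of how the channel could be derived from a branch name passed in as a project property; the property name, default value, and system property key are assumptions for the example:

```kotlin
// Illustrative only: each branch reports its results into its own channel.
val branchName = (findProperty("branchName") as String?) ?: "master"

tasks.named<Test>("performanceTest") {
    systemProperty("org.gradle.performance.execution.channel", "commits-$branchName")
}
```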
Retain java.io.tmpdir for performance tests

- In CI, the build has java.io.tmpdir set to the TeamCity agent's temp/buildTmp directory, which gets cleaned up after the build completes.

Tune performance test runner JVM args

- optimize for stable performance on the runner to minimize the impact of measurement, monitoring, and execution of the build on the performance test results
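One possible reading of the two items above, sketched with an assumed task name and purely illustrative directory and JVM flags:

```kotlin
// Illustrative only: use a temp directory that outlives the CI build, and pin
// the worker JVM so measurement overhead stays as constant as possible.
tasks.named<Test>("performanceTest") {
    val perfTmpDir = layout.buildDirectory.dir("tmp/performance-tests").get().asFile
    doFirst { perfTmpDir.mkdirs() }
    systemProperty("java.io.tmpdir", perfTmpDir.absolutePath)
    jvmArgs("-Xms2g", "-Xmx2g", "-XX:+UseParallelGC")   // fixed heap to reduce variance
}
```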

Publish only the toolingApi to local archives

It looks like this is the only dependency we are using from there.

+review REVIEW-6331

  1. … 3 more files in changeset.
Add commented-out JVM args for profiling performanceReport
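A sketch of what such commented-out arguments might look like, assuming performanceReport is a JavaExec task; the agent path and flags are examples only:

```kotlin
// Illustrative only: keep profiler flags at hand but disabled by default.
tasks.named<JavaExec>("performanceReport") {
    // jvmArgs("-agentpath:/path/to/libasyncProfiler.so=start,event=cpu,file=profile.html")
    // jvmArgs("-XX:+UnlockDiagnosticVMOptions", "-XX:+DebugNonSafepoints")
}
```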