PerformanceTestJvmOptions.groovy

Remove or replace pre-java-8 MaxPermSize jvm args from the codebase

  1. … 19 more files in changeset.
Remove or replace pre-java-8 MaxPermSize jvm args from the codebase

  1. … 19 more files in changeset.
Remove or replace pre-java-8 MaxPermSize jvm args from the codebase

  1. … 20 more files in changeset.
Remove or replace pre-java-8 MaxPermSize jvm args from the codebase

  1. … 21 more files in changeset.
Remove or replace pre-java-8 MaxPermSize jvm args from the codebase

  1. … 19 more files in changeset.
Remove or replace pre-java-8 MaxPermSize jvm args from the codebase

  1. … 19 more files in changeset.
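
The six changesets above all make the same mechanical substitution. As a rough sketch of what it looks like in a JVM-argument list (the option names are real HotSpot flags; the surrounding Groovy is illustrative, not taken from the changesets):

```groovy
// Illustrative only: PermGen sizing is gone in Java 8, so the flag is either dropped
// or replaced by its Metaspace counterpart.
def oldArgs = ['-Xmx1g', '-XX:MaxPermSize=256m']        // ignored (with a warning) on Java 8+
def newArgs = ['-Xmx1g', '-XX:MaxMetaspaceSize=256m']   // Java 8+ equivalent, or simply omit the cap
```
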
Use client

  1. … 2 more files in changeset.
Revert "Use G1"

This reverts commit 3a2ada0da24f3c8df3c22c58c7e739b9f233ec52.

  1. … 1 more file in changeset.
Use G1

  1. … 1 more file in changeset.
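
The "Use client", "Use G1" and revert entries above each toggle a single argument for the test JVMs. A rough illustrative sketch (the flags are standard HotSpot options; the Groovy lists are not from the changesets):

```groovy
// Illustrative only: the single-flag switches these commits toggle.
def clientVmArgs = ['-client']        // "Use client": prefer the client VM where available
def g1Args       = ['-XX:+UseG1GC']   // "Use G1": select the G1 collector
// The revert simply drops '-XX:+UseG1GC' again, falling back to the JVM's default collector.
```
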
Use repository mirrors in performance tests

  1. … 10 more files in changeset.
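
The changeset listing does not show how the mirrors are wired in; one plausible shape is a Gradle init script along the following lines (the file name and mirror URL are assumptions, not taken from the commit):

```groovy
// init.gradle (hypothetical): route repository access through an internal mirror so
// that external network latency does not leak into performance measurements.
allprojects {
    repositories {
        maven { url 'https://repo-mirror.example.com/maven' } // assumed mirror URL
    }
}
```
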
Cleanup Java performance test projects and scenarios

- Sort tests into packages
- Add new test projects: `largeMonolithicJavaProject`, `largeJavaMultiProject`, `mediumJavaMultiProjectWithTestNG`
- Cleanup template.gradle file
  - Remove "old Java" templates
  - Remove unused Scala and Groovy performance test project configurations
  - Remove large enterprise performance test projects
- Simplify Java scenarios: clean assemble, first use, change test, getting IDE models, dependency report, abi change, non-abi change
- Adjust tests to not use old test projects anymore
- Add file mutators (see the sketch after this changeset entry)

  1. … 96 more files in changeset.
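
The file mutators mentioned in the last bullet are, roughly, small helpers that edit a source file between measured invocations so that the "abi change" and "non-abi change" scenarios have real work to do. A hypothetical sketch (the class and method names are illustrative, not the actual Gradle test fixtures):

```groovy
// Hypothetical sketch of a non-ABI file mutator: appending a comment changes the file
// without changing the class's ABI, so only that class needs to be recompiled.
class NonAbiChangeMutator {
    private final File sourceFile

    NonAbiChangeMutator(File sourceFile) {
        this.sourceFile = sourceFile
    }

    void beforeBuild() {
        sourceFile << "\n// mutation ${System.nanoTime()}\n"
    }
}
```
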
Reduce memory for performance scenarios

The scenarios should only have an amount of memory that is "reasonable" for what they are doing. This serves two purposes. It allows us to detect large memory regressions, as a reasonable upper limit will lead to lots of GC time if that limit is breached. It also makes test results more predictable, as too much memory means that many test runs will not need garbage collection at all while other test runs will have large GC cycles.

  1. … 20 more files in changeset.
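
As a sketch of what a "reasonable" amount of memory means in practice (the numbers are illustrative, not taken from the changeset), each scenario pins a heap size matched to its workload, so a memory regression surfaces as extra GC time rather than disappearing into slack:

```groovy
// Illustrative only: per-scenario heap sizes, pinned so runs behave consistently.
def mediumProjectJvmArgs = ['-Xms256m', '-Xmx256m']
def largeProjectJvmArgs  = ['-Xms1g', '-Xmx1g']
```
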
Simplify performance measurements

The many measurements that we injected into the build under test were skewing our measurements to the point of making them unreliable or unrealistic. We now only measure end-to-end build time. Here's a breakdown with the rationale for removing each other measurement:

- configuration time: can be done by having a `gradle help` scenario instead
- execution time: the user does not care whether a long build is stuck in execution or configuration
- setup/teardown: was ill-defined anyway, basically total - configuration - execution
- JIT compile time: this is nothing we can influence and thus pointless to measure
- memory usage: was only measured at one point in the build, which doesn't tell us anything about problems at any other point in the build
- GC CPU time: if this increases we'd see it in total execution time

Generally, looking at the graphs has never pointed us directly at the problem; we always need to profile anyway. So instead of skewing our measurements with lots of profiling code, we should instead use a dedicated profiling job when we actually see a regression.

Memory usage can be tested indirectly by giving each scenario a reasonable amount of memory. If memory usage rises above that reasonable limit, we'd see execution time rise, telling us about the regression. Generally, we do not optimize for smallest memory usage, but for fastest execution with reasonable memory overhead.

This change also removes all JVM tweaking and wait periods which we introduced in an attempt to make tests more predictable and stable. These tweaks have not really helped us achieve more stable tests and have often done the opposite. They also add lots of complexity and make our tests more unrealistic. A real user will not add all these JVM options to Gradle.

  1. … 59 more files in changeset.
Add JVM options to reduce malloc calls and native heap fragmentation

Enable Class Data Sharing (cds) for non-daemon JVMs in perf tests

- Class Data Sharing requires that -Xshare:dump has been run after JVM installation
- CDS is available on the server JVM at least on Oracle Java 8

  1. … 1 more file in changeset.
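
A sketch of what enabling CDS involves (the -Xshare options are standard HotSpot flags; the surrounding Groovy is illustrative): the shared archive must be generated once per JVM installation, and the non-daemon test JVMs then opt in to it.

```groovy
// One-time step after installing the JVM, run on the command line:
//   java -Xshare:dump
// The test JVMs can then request the shared class data archive:
def cdsArgs = ['-Xshare:auto']   // use the archive if present; '-Xshare:on' fails hard if it is missing
```
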
Set JVM options for daemon client in performance tests

- optimize for stable results

  1. … 2 more files in changeset.
Disable JVM File canonicalization cache in performance tests

- The cache implementation is bad and causes jitter in measurements.
- Some properties of the cache:
  - It uses synchronization
  - It has a hard limit of 200 entries.
  - It gets reaped every 300 calls to the get method.
  - Entries expire in 30 seconds
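
The commit message does not name the exact switches, but the File canonicalization cache is conventionally disabled through the sun.io system properties, so the change presumably looks something like this (illustrative Groovy):

```groovy
// Presumed flags: turn off the JDK's File canonicalization caches so repeated
// canonicalization goes to the filesystem instead of a small, synchronized cache.
def canonCacheArgs = [
    '-Dsun.io.useCanonCaches=false',
    '-Dsun.io.useCanonPrefixCache=false'
]
```
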

Limit number of JIT compiler threads {4=>2} in performance tests

- The goal is to reduce jitter in performance test results
- When the number of compiler threads is set to 2 (default is 4), it's more likely that JIT compiler threads run on a different core than the main application threads
- By default, JIT compiler threads run at NearMaxPriority
  - See source code: https://github.com/dmlloyd/openjdk/blob/95c1d34/hotspot/src/share/vm/compiler/compileBroker.cpp#L1065-L1080
- It would be possible to use -XX:CompilerThreadPriority={N} to reduce the OS thread priority of compiler threads. However, this could have bad consequences since the JIT compiler is designed to run at high priority.
- The JVM adjusts tiered compilation thresholds based on the number of compiler threads (http://www.slideshare.net/maddocig/tiered#7)
- It would be possible to fine-tune tiered compilation by setting different thresholds (-XX:TieredStopAtLevel=1 or -XX:Tier{X}InvocationThreshold={N}) but it's better to first try adjusting the number of compiler threads to reduce jitter.
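
Capping the compiler threads is a single HotSpot flag; a sketch of the change described above (the flag is real, the Groovy list is illustrative):

```groovy
// Illustrative only: limit JIT compilation to 2 threads (down from the default of 4 here)
// so compiler threads compete less with the measured application threads.
def jitArgs = ['-XX:CICompilerCount=2']
```
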

Set -XX:BiasedLockingStartupDelay=0 for performance tests

Control default JVM options for performance tests in one location

    • -0 / +35  ./PerformanceTestJvmOptions.groovy
  1. … 3 more files in changeset.
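
The 35 added lines are not reproduced in this listing. Based only on the options discussed in the commits above, a file consolidating the defaults could plausibly look something like the following hypothetical reconstruction (not the actual contents of PerformanceTestJvmOptions.groovy):

```groovy
// Hypothetical reconstruction: one place that defines the default JVM options
// applied to performance test scenarios, combining the tweaks from the commits above.
class PerformanceTestJvmOptions {
    static List<String> defaultJvmOptions() {
        [
            '-Xms256m', '-Xmx256m',            // a "reasonable", pinned heap per scenario
            '-XX:CICompilerCount=2',           // fewer JIT compiler threads, less jitter
            '-XX:BiasedLockingStartupDelay=0', // no mid-run behaviour change from biased locking
            '-Dsun.io.useCanonCaches=false'    // avoid the File canonicalization cache
        ]
    }
}
```
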