| commit | f3e6cc7777245e8a3a8aeaf6ec48c1247a0136cf |
| --- | --- |
| author | Edwin Kempin <ekempin@google.com> Mon May 27 17:33:16 2019 +0200 |
| committer | Edwin Kempin <ekempin@google.com> Thu Jun 13 12:59:53 2019 +0200 |
| tree | 3224f3e3ee756892fb09b88ae687499008f4328e |
| parent | 1b9244d7f3511756c29feb8db72992c39d598531 |
Add extension point to record execution times in a performance log

Add a new extension point that is invoked for all operations whose execution time is measured. The extension point is not invoked immediately, but only at the end of a request (REST call, SSH call, git push). Implementors can write the execution times into a performance log for further analysis.

In Gerrit there are two ways to measure and record the execution time of an operation:

1. TraceTimer: Opens an autocloseable context in which an operation is executed. On close, the execution time is written to the server logs, but only if the request is traced (or if the log level is set to fine).
2. Timer metrics: Record execution times as a metric. In addition, the execution time is written to the server logs, but only if the request is traced (or if the log level is set to fine).

These are the two places where performance log entries must be captured.

Performance log entries are stored in the LoggingContext, which is based on ThreadLocals. LoggingContextAwareRunnable and LoggingContextAwareCallable, which are used by all executors, ensure that captured performance log entries are properly copied between threads.

At the end of a request (REST calls, SSH calls and git pushes are supported) the captured performance log entries are handed over to the PerformanceLogger implementations. If no PerformanceLogger is registered, or if execution times are measured outside of a request scope, performance log entries are not captured because nobody would consume them.

Storing the performance log records in the LoggingContext and invoking the PerformanceLogger plugins only at the end of a request has some advantages and disadvantages:

1. [advantage] Users of TraceTimer can continue to use the static method calls to create a TraceTimer (TraceContext.newTimer(...)). To invoke the plugins immediately we would need to inject them into TraceTimer, hence callers would need a factory to create a TraceTimer. That would result in quite some boilerplate code, and in addition some places that use TraceTimer (in VersionedMetaData) cannot use injection.
2. [advantage] The metric system is set up very early in the injector chain (in the DB injector, see SiteProgram#createDbInjector(boolean)). To invoke the plugins directly from the timer metrics, the PerformanceLogger plugins would already need to be available at this injector level, which is difficult.
3. [disadvantage] The captured performance log records are kept in memory while a request is processed. This leads to a higher memory footprint, but we think this is OK.

To keep the performance and memory overhead of recording performance data as low as possible, we made some optimizations:

1. Performance log entries are only created if there is a consumer (at least one PerformanceLogger plugin, and the time is measured inside a request context).
2. Performance log entries avoid the instantiation of a Map to record metadata (instead we have dedicated fields for metadata keys and values).

For the timer metrics we use generic names for the metadata keys ("field1", "field2", "field3"), because the actual field names are not available at this place. We may make them available, but that's outside the scope of this change and may be done in a follow-up change.

To be able to write stable acceptance tests that verify that the PerformanceLogger plugins are invoked, it is important that the server calls the plugins before the response is sent back to the client. This is why the scope of the PerformanceLogContext in RestApiServlet is a little smaller than the scope of the TraceContext.

Change-Id: I699db01609a1b4a88cee8959bdd9f1dfbb8dc74e
Signed-off-by: Edwin Kempin <ekempin@google.com>
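The mechanics described above can be sketched in plain Java. The class and method names below (PerfLogSketch, newTimer, contextAware, runRequest) are hypothetical stand-ins for Gerrit's TraceContext.newTimer(...), LoggingContext, and LoggingContextAwareRunnable, not the actual implementation; the sketch only illustrates the pattern of an autocloseable timer recording into a ThreadLocal context and a runnable wrapper copying that context onto another thread:

```java
import java.util.ArrayList;
import java.util.List;

public class PerfLogSketch {
  // Hypothetical stand-in for the ThreadLocal-backed LoggingContext.
  static final ThreadLocal<List<String>> ENTRIES =
      ThreadLocal.withInitial(ArrayList::new);

  // Stand-in for TraceContext.newTimer(...): on close, record the
  // execution time as a performance log entry in the current context.
  static AutoCloseable newTimer(String operation) {
    long start = System.nanoTime();
    return () -> ENTRIES.get().add(operation + " took " + (System.nanoTime() - start) + " ns");
  }

  // Stand-in for LoggingContextAwareRunnable: capture the entry list from
  // the submitting thread and make it visible on the executing thread.
  static Runnable contextAware(Runnable delegate) {
    List<String> captured = ENTRIES.get();
    return () -> {
      ENTRIES.set(captured);
      try {
        delegate.run();
      } finally {
        ENTRIES.remove();
      }
    };
  }

  // Simulates one request: a timer on the request thread, a timer on a
  // background thread, then the "end of request" consuming the entries.
  static int runRequest() throws Exception {
    ENTRIES.remove(); // fresh request scope
    try (AutoCloseable timer = newTimer("listChanges")) {
      // simulated work on the request thread
    }
    Thread worker = new Thread(contextAware(() -> {
      try (AutoCloseable timer = newTimer("loadProject")) {
        // simulated work on a background thread
      } catch (Exception e) {
        throw new RuntimeException(e);
      }
    }));
    worker.start();
    worker.join();
    // In Gerrit this is where the entries would be handed over to the
    // registered PerformanceLogger plugins.
    return ENTRIES.get().size();
  }

  public static void main(String[] args) throws Exception {
    System.out.println("entries=" + runRequest());
  }
}
```

Because the worker's wrapper shares the captured list with the request thread, both timer entries end up in the same context, which is what allows a single end-of-request handover to the consumers.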
Gerrit is a code review and project management tool for Git based projects.
Gerrit makes reviews easier by showing changes in a side-by-side display, and allowing inline comments to be added by any reviewer.
Gerrit simplifies Git based project maintainership by permitting any authorized user to submit changes to the master Git repository, rather than requiring all approved changes to be merged in by hand by the project maintainer.
For information about how to install and use Gerrit, refer to the documentation.
Our canonical Git repository is located on googlesource.com. There is a mirror of the repository on GitHub.
Please report bugs on the issue tracker.
Gerrit is the work of hundreds of contributors. We appreciate your help!
Please read the contribution guidelines.
Note that we do not accept Pull Requests via the GitHub mirror.
The IRC channel on freenode is #gerrit. An archive is available at: echelog.com.
The Developer Mailing list is repo-discuss on Google Groups.
Gerrit is provided under the Apache License 2.0.
Install Bazel and run the following:
git clone --recurse-submodules https://gerrit.googlesource.com/gerrit
cd gerrit && bazel build release
Instructions on how to configure the GerritForge/BinTray repositories are here
On Debian/Ubuntu run:
apt-get update && apt-get install gerrit=<version>-<release>
NOTE: release is a counter that starts at 1 and indicates how many packages have been released with the same version of the software.
On CentOS/RedHat run:
yum clean all && yum install gerrit-<version>[-<release>]
On Fedora run:
dnf clean all && dnf install gerrit-<version>[-<release>]
Docker images of Gerrit are available on Docker Hub.
To run a CentOS 7 based Gerrit image:
docker run -p 8080:8080 gerritforge/gerrit-centos7[:version]
To run an Ubuntu 15.04 based Gerrit image:
docker run -p 8080:8080 gerritforge/gerrit-ubuntu15.04[:version]
NOTE: release is optional. If the release number is omitted, the last released package of the version is installed.