commit 1c93439e98cc4efb3a44880278a9cbbe5628d36b
author: Luca Milanesio <luca.milanesio@gmail.com>  Wed Aug 28 22:36:58 2024 +0100
committer: Luca Milanesio <luca.milanesio@gmail.com>  Wed Aug 28 22:49:33 2024 +0100
tree 3a23dc1a917bfe7e51f95459fcfd375b8c9a3702
parent 19c62e17795603abf3c74a16272c05b1b9e3a3f3
Add missing Singleton to ActiveWorkersCheck and DeadlockCheck

The ActiveWorkersCheck was triggering the creation of a new check for every
incoming REST-API call, causing potential deadlocks because of the creation
of the DropWizard's MetricMaker. As a consequence, the Gerrit instance could
have run out of incoming HTTP threads.

See as example stack trace:

```
HTTP GET /config/server/healthcheck~status
java.lang.Thread.State: BLOCKED (on object monitor)
	at com.google.gerrit.metrics.dropwizard.DropWizardMetricMaker.newCounter(DropWizardMetricMaker.java:142)
	- waiting to lock <0x00007f828e8b80b0> (a com.google.gerrit.metrics.dropwizard.DropWizardMetricMaker)
	at com.google.gerrit.server.plugins.PluginMetricMaker.newCounter(PluginMetricMaker.java:55)
	at com.googlesource.gerrit.plugins.healthcheck.HealthCheckMetrics.getFailureCounterMetric(HealthCheckMetrics.java:33)
	at com.googlesource.gerrit.plugins.healthcheck.check.AbstractHealthCheck.<init>(AbstractHealthCheck.java:54)
	at com.googlesource.gerrit.plugins.healthcheck.check.ActiveWorkersCheck.<init>(ActiveWorkersCheck.java:46)
```

Change-Id: I947b46b58e1d44840b3a44f740627e55dc3194aa
Allow having a single entry point to check the availability of the services that Gerrit exposes.
Clone or link this plugin to the plugins directory of Gerrit's source tree, and then run bazel build on the plugin's directory.
Example:
```
git clone --recursive https://gerrit.googlesource.com/gerrit
git clone https://gerrit.googlesource.com/plugins/healthcheck
pushd gerrit/plugins && ln -s ../../healthcheck . && popd
cd gerrit && bazel build plugins/healthcheck
```
The output plugin jar is created in:
```
bazel-genfiles/plugins/healthcheck/healthcheck.jar
```
Copy the healthcheck.jar into Gerrit's /plugins directory and wait for the plugin to be automatically loaded.

The healthcheck plugin is compatible with both primary Gerrit setups and Gerrit replicas. The only difference to bear in mind is that some checks (e.g. query changes) are automatically disabled on replicas because the associated subsystem is switched off.
The healthcheck plugin exposes a single endpoint under its root URL and provides a JSON output of the Gerrit health status.
The HTTP status code returned indicates whether Gerrit is healthy (HTTP status 200) or has some issues (HTTP status 500).
The HTTP response payload is a JSON output that contains the details of the checks performed.
Each check returns a JSON payload with the following information:
ts: epoch timestamp in millis of the individual check
elapsed: elapsed time in millis to complete the check
result: result of the health check
Example of a healthy Gerrit response:
```
GET /config/server/healthcheck~status

200 OK
Content-Type: application/json

)]}'
{
  "ts": 139402910202,
  "elapsed": 100,
  "querychanges": {
    "ts": 139402910202,
    "elapsed": 20,
    "result": "passed"
  },
  "reviewdb": {
    "ts": 139402910202,
    "elapsed": 50,
    "result": "passed"
  },
  "projectslist": {
    "ts": 139402910202,
    "elapsed": 100,
    "result": "passed"
  },
  "jgit": {
    "ts": 139402910202,
    "elapsed": 80,
    "result": "passed"
  }
}
```
Example of a Gerrit instance with the projects list timing out:
```
GET /config/server/healthcheck~status

500 ERROR
Content-Type: application/json

)]}'
{
  "ts": 139402910202,
  "elapsed": 100,
  "querychanges": {
    "ts": 139402910202,
    "elapsed": 20,
    "result": "passed"
  },
  "reviewdb": {
    "ts": 139402910202,
    "elapsed": 50,
    "result": "passed"
  },
  "projectslist": {
    "ts": 139402910202,
    "elapsed": 100,
    "result": "timeout"
  },
  "jgit": {
    "ts": 139402910202,
    "elapsed": 80,
    "result": "passed"
  }
}
```
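A client consuming this endpoint has to strip Gerrit's XSSI-protection prefix (`)]}'`) before parsing the JSON body. A minimal sketch in Python; the payload below is hard-coded from the timeout example above rather than fetched from a live server:

```python
import json

# Sample body modelled on the timeout example above; a real client would
# read this from GET /config/server/healthcheck~status.
raw = ")]}'\n" + json.dumps({
    "ts": 139402910202,
    "elapsed": 100,
    "projectslist": {"ts": 139402910202, "elapsed": 100, "result": "timeout"},
    "jgit": {"ts": 139402910202, "elapsed": 80, "result": "passed"},
})

# Gerrit prefixes JSON responses with ")]}'" on its own line to prevent
# XSSI attacks; drop that first line before handing the rest to the parser.
payload = json.loads(raw.split("\n", 1)[1])

# Per-check entries are the nested objects; the top-level ts/elapsed
# fields describe the overall healthcheck run.
failed = [name for name, check in payload.items()
          if isinstance(check, dict) and check["result"] != "passed"]
print(failed)  # → ['projectslist']
```

The same loop works for the healthy case, where `failed` comes back empty.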
It's also possible to artificially make the healthcheck fail by placing a file at a configurable path, specified like:

```
[healthcheck]
  failFileFlagPath = data/healthcheck/fail
```
This will make the healthcheck endpoint return 500 even if the node is otherwise healthy. This is useful when a node needs to be removed from the pool of available Gerrit instances while it undergoes maintenance.
NOTE: If the path starts with `/`, even paths outside of Gerrit's home will be checked. If the path does not start with `/`, it is resolved relative to Gerrit's home.
NOTE: The file needs to be a real file rather than a symlink.
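Creating and removing the fail file can be driven by maintenance tooling. A minimal sketch in Python, assuming the sample `failFileFlagPath` from the config above and a hypothetical site directory:

```python
from pathlib import Path

# Hypothetical Gerrit site directory; the failFileFlagPath in the sample
# config is relative, so it resolves against Gerrit's home.
site = Path("/tmp/gerrit-site")
fail_flag = site / "data" / "healthcheck" / "fail"

# Drain the node: healthcheck~status starts returning 500.
fail_flag.parent.mkdir(parents=True, exist_ok=True)
fail_flag.touch()  # must be a real file, not a symlink

# ... perform maintenance on the node ...

# Put the node back in the pool: healthcheck~status returns 200 again.
fail_flag.unlink()
```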
As for all other endpoints in Gerrit, some metrics are automatically emitted when the `/config/server/healthcheck~status` endpoint is hit (thanks to the Dropwizard library).
Some additional metrics are also produced to give extra insight into the results and latency of the healthcheck sub-components, such as jgit, reviewdb, etc.
More information can be found in the metrics.md file.