Allow having a single entry point to check the availability of the services that Gerrit exposes.
Clone or link this plugin to the plugins directory of Gerrit's source tree, and then run bazel build on the plugin's directory.
Example:
git clone --recursive https://gerrit.googlesource.com/gerrit
git clone https://gerrit.googlesource.com/plugins/healthcheck
pushd gerrit/plugins && ln -s ../../healthcheck . && popd
cd gerrit && bazel build plugins/healthcheck
The output plugin jar is created in:
bazel-genfiles/plugins/healthcheck/healthcheck.jar
Copy healthcheck.jar into Gerrit's /plugins directory and wait for the plugin to be automatically loaded.

The healthcheck plugin is compatible with both primary Gerrit setups and Gerrit replicas. The only difference to bear in mind is that some checks (e.g. query changes) are automatically disabled on replicas because the associated subsystem is switched off.
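As an illustration, the jar built in the previous step could be deployed like this (a sketch; $GERRIT_SITE is a placeholder for your Gerrit site directory, not something defined by the plugin):

cp bazel-genfiles/plugins/healthcheck/healthcheck.jar $GERRIT_SITE/plugins/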
The healthcheck plugin exposes a single endpoint under its root URL and provides a JSON output of the Gerrit health status.
The HTTP status code returned indicates whether Gerrit is healthy (HTTP status 200) or has some issues (HTTP status 500).
The HTTP response payload is a JSON output that contains the details of the checks performed.
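For example, a load balancer or monitoring probe will typically look only at the HTTP status code; a minimal sketch (the host name is a placeholder for your Gerrit server):

curl -s -o /dev/null -w '%{http_code}\n' http://gerrit.example.com/config/server/healthcheck~status

This prints 200 when all checks pass and 500 when at least one of them fails.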
Each check returns a JSON payload with the following information:
ts: epoch timestamp in millis of the individual check
elapsed: elapsed time in millis to complete the check
result: result of the health check
Example of a healthy Gerrit response:
GET /config/server/healthcheck~status

200 OK
Content-Type: application/json

)]}'
{
  "ts": 139402910202,
  "elapsed": 100,
  "querychanges": {
    "ts": 139402910202,
    "elapsed": 20,
    "result": "passed"
  },
  "reviewdb": {
    "ts": 139402910202,
    "elapsed": 50,
    "result": "passed"
  },
  "projectslist": {
    "ts": 139402910202,
    "elapsed": 100,
    "result": "passed"
  },
  "jgit": {
    "ts": 139402910202,
    "elapsed": 80,
    "result": "passed"
  }
}
Example of a Gerrit instance with the projects list timing out:
GET /config/server/healthcheck~status

500 ERROR
Content-Type: application/json

)]}'
{
  "ts": 139402910202,
  "elapsed": 100,
  "querychanges": {
    "ts": 139402910202,
    "elapsed": 20,
    "result": "passed"
  },
  "reviewdb": {
    "ts": 139402910202,
    "elapsed": 50,
    "result": "passed"
  },
  "projectslist": {
    "ts": 139402910202,
    "elapsed": 100,
    "result": "timeout"
  },
  "jgit": {
    "ts": 139402910202,
    "elapsed": 80,
    "result": "passed"
  }
}
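Because Gerrit prefixes JSON responses with the )]}' XSSI guard, a script that wants to inspect individual check results needs to strip the first line before parsing. A minimal sketch, assuming jq is available and using a placeholder host name:

curl -s http://gerrit.example.com/config/server/healthcheck~status | sed 1d | jq '.projectslist.result'

In the unhealthy example above this would print "timeout".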
It is also possible to make the healthcheck fail artificially by creating a file at a configurable path, specified as follows:
[healthcheck]
  failFileFlagPath = data/healthcheck/fail
This will make the healthcheck endpoint return 500 even if the node is otherwise healthy. This is useful when a node needs to be removed from the pool of available Gerrit instances while it undergoes maintenance.
NOTE: If the path starts with /, it is treated as absolute, so locations outside of Gerrit's home directory can be checked as well. If the path does not start with /, it is resolved relative to Gerrit's home directory.
NOTE: The file needs to be a real file rather than a symlink.
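As an example, an administrator could take a node out of rotation and later bring it back like this (a sketch, assuming the relative path from the configuration example above and $GERRIT_SITE as a placeholder for Gerrit's home directory):

mkdir -p $GERRIT_SITE/data/healthcheck
touch $GERRIT_SITE/data/healthcheck/fail   # healthcheck starts returning 500
rm $GERRIT_SITE/data/healthcheck/fail      # healthcheck goes back to reporting actual health

touch creates a regular file, which satisfies the note above that the flag must not be a symlink.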
As with all other Gerrit endpoints, some metrics are automatically emitted when the /config/server/healthcheck~status endpoint is hit (thanks to the Dropwizard library).
Some additional metrics are also produced to give extra insight into the results and latency of the individual healthcheck sub-components, such as jgit, reviewdb, etc.
More information can be found in the config.md file.