commit 356ffc3653e4e213c51f9c23f9c0996d159a50fc
author    Marcin Czech <maczech@gmail.com>  Fri Feb 02 09:33:03 2024 +0100
committer Marcin Czech <maczech@gmail.com>  Mon Feb 26 09:49:20 2024 +0100
tree      87044b13b30b877e648021093fe9656fa1bf2135
parent    37ad7ce16bbcd6bca746a6abb0e6f57fb63e9124
Avoid duplicate indexing tasks for the same id

Indexing tasks check whether all necessary data is present on the node by comparing the event timestamp and the event target sha1 with the local repository sha1. For very active repositories, some indexing tasks never pass this check because:

* During the retry backoff, the target branch of the change was updated to a newer sha1
* The consumed event points to an outdated `/meta` NoteDb version

This means that we can end up with multiple indexing tasks trying to reindex the same change, and all of them except the last will fail after the maximum number of retries.

To avoid this situation, make sure that:

* There is only one pending indexing task trying to index a change.
* If any indexing task successfully indexed a change, the previous indexing tasks pending for that change can be discarded.

Bug: Issue 320542020
Change-Id: I0117676ed015209a6b39f05e296adf6caf8b4485
This plugin allows deploying a distributed cluster of multiple Gerrit masters, each using a separate site without sharing any storage. The alignment between the masters happens through the replication plugin and an external message broker.
Requirements for the Gerrit masters are:
NOTE: The multi-site plugin will not start if Gerrit is not yet migrated to NoteDb.
Supports multiple read/write masters across multiple sites across different geographic locations. The Gerrit nodes are kept synchronized between each other using the replication plugin and a global ref-database in order to detect and prevent split-brains.
For more details on the overall multi-site design and roadmap, please refer to the multi-site plugin DESIGN.md document.
This plugin is released under the same Apache 2.0 license and copyright holders as the Gerrit Code Review project.
The multi-site plugin can only be built in in-tree mode, by cloning Gerrit and the multi-site plugin code and checking them out on the desired branch.
Example of cloning Gerrit and multi-site for a stable-2.16 build:
git clone -b stable-2.16 https://gerrit.googlesource.com/gerrit
git clone -b stable-2.16 https://gerrit.googlesource.com/plugins/multi-site
cd gerrit/plugins
ln -s ../../multi-site .
Example of building the multi-site plugin:
cd gerrit
bazel build plugins/multi-site
The multi-site.jar plugin is generated to bazel-bin/plugins/multi-site/multi-site.jar.
Example of testing the multi-site plugin:
cd gerrit
bazel test plugins/multi-site:multi_site_tests
NOTE: The multi-site tests also include the use of Docker containers for instantiating and using a Kafka/Zookeeper broker. Make sure you have a Docker daemon running (/var/run/docker.sock accessible) or a DOCKER_HOST variable pointing to a Docker server.
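The Docker prerequisite from the note above can be checked before running the tests. The function below is a hypothetical helper, not part of the plugin; it only probes the two options the note mentions (the local socket or DOCKER_HOST):

```shell
# Hypothetical helper (not part of the plugin): report whether a Docker
# daemon looks reachable for the Kafka/Zookeeper test containers.
docker_check() {
  if [ -S /var/run/docker.sock ] || [ -n "${DOCKER_HOST}" ]; then
    echo "available"
  else
    echo "unavailable"
  fi
}

docker_check
```

If this prints `unavailable`, the multi-site test suite will fail when it tries to start the broker containers.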
Each Gerrit server of the cluster must be identified with a globally unique instance-id defined in $GERRIT_SITE/etc/gerrit.config. When migrating from a multi-site configuration with Gerrit v3.3 or earlier, you must reuse the instance-id value stored under $GERRIT_SITE/data/multi-site.
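The migration step can be sketched as follows. This is a hedged sketch: the exact file name under data/multi-site that holds the id ("instance-id" here) is an assumption, so check your site, and a scratch directory stands in for the real $GERRIT_SITE. Since gerrit.config uses git-config syntax, `git config -f` can write the value:

```shell
# Sketch of reusing a pre-v3.3 instance-id. Assumption: the old id lives
# in a file named "instance-id" under data/multi-site (verify on your site).
# A scratch directory stands in for the real $GERRIT_SITE.
GERRIT_SITE=$(mktemp -d)
mkdir -p "$GERRIT_SITE/etc" "$GERRIT_SITE/data/multi-site"
echo "758fe5b7-1869-46e6-942a-3ae0ae7e3bd2" \
  > "$GERRIT_SITE/data/multi-site/instance-id"   # pre-existing id from the old site

# gerrit.config is git-config syntax, so `git config -f` can set the value:
git config -f "$GERRIT_SITE/etc/gerrit.config" gerrit.instanceId \
  "$(cat "$GERRIT_SITE/data/multi-site/instance-id")"
```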
Example:
[gerrit]
  instanceId = 758fe5b7-1869-46e6-942a-3ae0ae7e3bd2
Install the multi-site plugin into the $GERRIT_SITE/lib directory of all the Gerrit servers that are part of the multi-site cluster. Create a symbolic link from $GERRIT_SITE/lib/multi-site.jar into the $GERRIT_SITE/plugins directory.
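The install step might look like the following sketch, where a scratch directory stands in for the real $GERRIT_SITE and an empty file stands in for the jar built earlier:

```shell
# Sketch of the install step. Replace the scratch directory with your real
# $GERRIT_SITE, and replace the `touch` with a copy of the built jar, e.g.
# cp bazel-bin/plugins/multi-site/multi-site.jar "$GERRIT_SITE/lib/"
GERRIT_SITE=$(mktemp -d)
mkdir -p "$GERRIT_SITE/lib" "$GERRIT_SITE/plugins"
touch "$GERRIT_SITE/lib/multi-site.jar"    # stand-in for the real jar
ln -s "$GERRIT_SITE/lib/multi-site.jar" "$GERRIT_SITE/plugins/multi-site.jar"
```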
Add the multi-site module to $GERRIT_SITE/etc/gerrit.config as follows:
[gerrit]
  installDbModule = com.googlesource.gerrit.plugins.multisite.GitModule
  installModule = com.googlesource.gerrit.plugins.multisite.Module
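Because gerrit.config uses git-config syntax, the same two settings can also be applied from the command line. A minimal sketch against a scratch directory (substitute your real $GERRIT_SITE):

```shell
# Sketch: apply the installDbModule/installModule settings with `git config -f`.
# A scratch directory stands in for the real $GERRIT_SITE.
GERRIT_SITE=$(mktemp -d)
mkdir -p "$GERRIT_SITE/etc"
git config -f "$GERRIT_SITE/etc/gerrit.config" gerrit.installDbModule \
  com.googlesource.gerrit.plugins.multisite.GitModule
git config -f "$GERRIT_SITE/etc/gerrit.config" gerrit.installModule \
  com.googlesource.gerrit.plugins.multisite.Module
```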
For more details on the configuration settings, please refer to the multi-site configuration documentation.
You also need to set up the Git-level replication between nodes; for more details, please refer to the replication plugin documentation.
For information about available HTTP endpoints, please refer to the documentation.