This plugin allows the deployment of a distributed cluster of multiple Gerrit masters, each using a separate site, without sharing any storage. The alignment between the masters happens through the replication plugin and an external message broker.
Requirements for the Gerrit masters are:
NOTE: The multi-site plugin will not start if Gerrit is not yet migrated to NoteDb.
Supports multiple read/write masters across multiple sites in different geographic locations. The Gerrit nodes are kept synchronized with each other using the replication plugin and a global ref-database in order to detect and prevent split-brain scenarios.
For more details on the overall multi-site design and roadmap, please refer to the multi-site plugin DESIGN.md document.
This plugin is released under the same Apache 2.0 license and copyright holders as the Gerrit Code Review project.
The multi-site plugin can only be built in-tree mode, by cloning Gerrit and the multi-site plugin code and checking them out on the desired branch.
Example of cloning Gerrit and multi-site for a stable-2.16 build:
```
git clone -b stable-2.16 https://gerrit.googlesource.com/gerrit
git clone -b stable-2.16 https://gerrit.googlesource.com/plugins/multi-site
cd gerrit/plugins
ln -s ../../multi-site .
```
Example of building the multi-site plugin:
```
cd gerrit
bazel build plugins/multi-site
```
The multi-site.jar plugin is generated to bazel-bin/plugins/multi-site/multi-site.jar.
Example of testing the multi-site plugin:
```
cd gerrit
bazel test plugins/multi-site:multi_site_tests
```
NOTE: The multi-site tests also use Docker containers to instantiate a Kafka/Zookeeper broker. Make sure you have a Docker daemon running (/var/run/docker.sock accessible) or a DOCKER_HOST pointing to a Docker server.
Each Gerrit server of the cluster must be identified with a globally unique instance-id defined in $GERRIT_SITE/etc/gerrit.config. When migrating from a multi-site configuration with Gerrit v3.3 or earlier, you must reuse the instance-id value stored under $GERRIT_SITE/data/multi-site.
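For migrations, the saved value can be copied into the new configuration with `git config`. The sketch below runs in a throwaway directory for illustration, and the file name `instanceId` under data/multi-site is an assumption — check where your previous version actually stored the value:

```shell
# Demo in a throwaway directory; in production GERRIT_SITE is your real site path
GERRIT_SITE="$(mktemp -d)"
mkdir -p "$GERRIT_SITE/etc" "$GERRIT_SITE/data/multi-site"

# Assumption: the previous version saved the id as plain text in this file;
# the exact file name under data/multi-site may differ in your setup
echo "758fe5b7-1869-46e6-942a-3ae0ae7e3bd2" > "$GERRIT_SITE/data/multi-site/instanceId"

# Copy the saved id into gerrit.config so the node keeps its identity
OLD_ID="$(cat "$GERRIT_SITE/data/multi-site/instanceId")"
git config -f "$GERRIT_SITE/etc/gerrit.config" gerrit.instanceId "$OLD_ID"
```

Reading the key back with `git config -f "$GERRIT_SITE/etc/gerrit.config" gerrit.instanceId` should then print the migrated id.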
Example:
```
[gerrit]
  instanceId = 758fe5b7-1869-46e6-942a-3ae0ae7e3bd2
```
Install the multi-site plugin into the $GERRIT_SITE/lib directory of all the Gerrit servers that are part of the multi-site cluster. Create a symbolic link from $GERRIT_SITE/lib/multi-site.jar into the $GERRIT_SITE/plugins directory.
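The copy-and-symlink step above can be sketched as follows. The snippet uses a throwaway site directory and a stand-in jar so it is self-contained; on a real server, GERRIT_SITE is your actual site path and the jar comes from the Bazel build output:

```shell
# Throwaway site directory for illustration; use your real $GERRIT_SITE in production
GERRIT_SITE="$(mktemp -d)"
mkdir -p "$GERRIT_SITE/lib" "$GERRIT_SITE/plugins"

# Stand-in for the jar built above (bazel-bin/plugins/multi-site/multi-site.jar)
touch "$GERRIT_SITE/multi-site.jar"

# Install into lib/ (the multi-site modules are loaded from there) ...
cp "$GERRIT_SITE/multi-site.jar" "$GERRIT_SITE/lib/"

# ... and symlink into plugins/ so the plugin part is loaded as well
ln -sf "$GERRIT_SITE/lib/multi-site.jar" "$GERRIT_SITE/plugins/multi-site.jar"
```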
Add the multi-site module to $GERRIT_SITE/etc/gerrit.config
as follows:
```
[gerrit]
  installDbModule = com.googlesource.gerrit.plugins.multisite.GitModule
  installModule = com.googlesource.gerrit.plugins.multisite.Module
```
For more details on the configuration settings, please refer to the multi-site configuration documentation.
You also need to set up the Git-level replication between nodes. This can be done with either the pull-replication plugin or the (push) replication plugin.
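As an illustration, a push-based setup with the replication plugin is configured in $GERRIT_SITE/etc/replication.config; the remote name and URL below are placeholders for your own sites:

```
[remote "site-2"]
  url = gerrit@site-2.example.com:/var/gerrit/git/${name}.git
  push = +refs/*:refs/*
  mirror = true
```

Refer to the replication (or pull-replication) plugin documentation for the full set of options.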
For information about the available HTTP endpoints, please refer to the documentation.