Merge "Add multi-site plugin to dockerised environment" into stable-2.16
diff --git a/BUILD b/BUILD
index c2c508b..ee78d2f 100644
--- a/BUILD
+++ b/BUILD
@@ -17,11 +17,10 @@
],
resources = glob(["src/main/resources/**/*"]),
deps = [
- "@commons-lang3//jar",
- "@kafka_client//jar",
+ "@curator-client//jar",
"@curator-framework//jar",
"@curator-recipes//jar",
- "@curator-client//jar",
+ "@kafka-client//jar",
"@zookeeper//jar",
],
)
@@ -45,15 +44,12 @@
visibility = ["//visibility:public"],
exports = PLUGIN_DEPS + PLUGIN_TEST_DEPS + [
":multi-site__plugin",
- "@mockito//jar",
- "@wiremock//jar",
- "@kafka_client//jar",
- "@testcontainers-kafka//jar",
- "//lib/testcontainers",
- "@curator-framework//jar",
- "@curator-recipes//jar",
- "@curator-test//jar",
"@curator-client//jar",
- "@zookeeper//jar",
+ "@curator-framework//jar",
+ "@curator-test//jar",
+ "@mockito//jar",
+ "@testcontainers-kafka//jar",
+ "@wiremock//jar",
+ "//lib/testcontainers",
],
)
diff --git a/DESIGN.md b/DESIGN.md
index 248d8cc..0995a14 100644
--- a/DESIGN.md
+++ b/DESIGN.md
@@ -257,8 +257,18 @@
The multi-site solution described here depends upon the combined use of different
components:
-- **multi-site plugin**: Enables the replication of Gerrit _indexes_, _caches_,
- and _stream events_ across sites.
+- **multi-site libModule**: exports interfaces as DynamicItems so that specific
+implementations of `Broker` and `Global Ref-DB` plugins can be plugged in.
+
+- **broker plugin**: an implementation of the broker interface, which enables the
+replication of Gerrit _indexes_, _caches_, and _stream events_ across sites.
+When no specific implementation is provided, the [Broker Noop implementation](#broker-noop-implementation)
+is used and the libModule interfaces are mapped to internal no-op implementations.
+
+- **Global Ref-DB plugin**: an implementation of the Global Ref-DB interface,
+which enables the detection of out-of-sync refs across Gerrit sites.
+When no specific implementation is provided, the [Global Ref-DB Noop implementation](#global-ref-db-noop-implementation)
+is used and the libModule interfaces are mapped to internal no-op implementations.
- **replication plugin**: enables the replication of the _Git repositories_ across
sites.
@@ -273,14 +283,82 @@
The interactions between these components are illustrated in the following diagram:
-![Initial Multi-Site Plugin Architecture](./images/architecture-first-iteration.png)
+![Initial Multi-Site Plugin Architecture](images/architecture-first-iteration.png)
## Implementation Details
-### Message brokers
-The multi-site plugin adopts an event-sourcing pattern and is based on an
-external message broker. The current implementation uses Apache Kafka.
-It is, however, potentially extensible to others, like RabbitMQ or NATS.
+### Multi-site libModule
+As mentioned earlier, there are different components behind the overarching
+architecture of a distributed multi-site Gerrit installation, each one fulfilling
+a specific goal. However, whilst the goal of each component is well-defined, the
+mechanics of how each single component achieves that goal are not: the choice of a
+specific message broker or Ref-DB can depend on different factors,
+such as scalability, maintainability, business standards and costs, to name a few.
+
+For this reason the multi-site component is designed to be explicitly agnostic to
+the specific choice of broker and Global Ref-DB implementations: it does not
+care how they fulfill their task.
+
+Instead, this component takes on only three responsibilities:
+
+* Wrapping the GitRepositoryManager so that every interaction with Git can be
+verified by the Global Ref-DB plugin.
+
+* Exposing DynamicItem bindings onto which concrete _Broker_ and _Global Ref-DB_
+plugins can register their specific implementations.
+When no such plugins are installed, the initial bindings point to no-ops.
+
+* Detecting out-of-sync refs across multiple Gerrit sites:
+each change attempting to mutate a ref is checked against the Ref-DB to
+guarantee that each node has an up-to-date view of the repository state.
+
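The binding-and-override mechanism described above can be sketched as follows. This is a deliberately simplified stand-in for Gerrit's DynamicItem pattern, not the plugin's actual API; the class and method names here are illustrative only.

```java
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical sketch: the libModule exposes a mutable binding point that
// defaults to a no-op, and a broker plugin may later swap in a real
// implementation without the libModule knowing its concrete type.
public class BindingPoint {
  interface BrokerApi {
    boolean send(String topic, String payload);
  }

  // Default no-op binding: accepts every event without doing anything.
  static final BrokerApi NOOP = (topic, payload) -> true;

  private final AtomicReference<BrokerApi> current = new AtomicReference<>(NOOP);

  // A plugin overrides the binding with its concrete implementation.
  public void set(BrokerApi impl) {
    current.set(impl);
  }

  public BrokerApi get() {
    return current.get();
  }

  public static void main(String[] args) {
    BindingPoint brokers = new BindingPoint();
    // Without any plugin installed, the no-op binding is used.
    System.out.println(brokers.get().send("GERRIT.EVENT.INDEX", "{}"));
    // A broker plugin registers itself and takes over the binding.
    brokers.set((topic, payload) -> topic.startsWith("GERRIT."));
    System.out.println(brokers.get().send("GERRIT.EVENT.INDEX", "{}"));
  }
}
```

The key design choice is that consumers always go through the binding point, so swapping implementations at plugin load time requires no changes in the libModule itself.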
+### Message brokers plugin
+Each Gerrit node in the cluster needs to inform, and be informed by, all other nodes
+about fundamental events, such as indexing of new changes, cache evictions and
+stream events. This component provides a specific pub/sub broker implementation
+that is able to do so.
+
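The fan-out of these event families onto broker topics can be sketched as below. Only the `GERRIT.EVENT.INDEX` topic name appears in the `Configuration` change in this commit; the other topic names, and the router class itself, are illustrative assumptions.

```java
import java.util.EnumMap;
import java.util.Map;

// Hypothetical sketch of topic-per-event-family routing: each family of
// events (index, cache, stream) is published on its own broker topic.
public class EventTopicRouter {
  enum EventFamily { INDEX_EVENT, CACHE_EVENT, STREAM_EVENT }

  private static final Map<EventFamily, String> TOPICS = new EnumMap<>(EventFamily.class);

  static {
    TOPICS.put(EventFamily.INDEX_EVENT, "GERRIT.EVENT.INDEX");   // from this change
    TOPICS.put(EventFamily.CACHE_EVENT, "GERRIT.EVENT.CACHE");   // assumed name
    TOPICS.put(EventFamily.STREAM_EVENT, "GERRIT.EVENT.STREAM"); // assumed name
  }

  public static String topicFor(EventFamily family) {
    return TOPICS.get(family);
  }

  public static void main(String[] args) {
    System.out.println(topicFor(EventFamily.INDEX_EVENT));
  }
}
```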
+When provided, the message broker plugin overrides the DynamicItem binding exposed
+by the multi-site module with a specific implementation, such as Kafka, RabbitMQ, NATS, etc.
+
+#### Broker Noop implementation
+The default `Noop` implementation provided by the `Multi-site` libModule does nothing
+upon publishing and consuming events. This is useful for setting up a test environment
+and allows the multi-site library to be installed independently from any additional
+plugins or the existence of a specific broker installation.
+The Noop implementation can also be useful when there is no need for coordination
+with remote nodes, since it avoids maintaining an external broker altogether:
+for example, using the multi-site plugin purely for the purpose of replicating the Git
+repository to a disaster-recovery site and nothing else.
+
+### Global Ref-DB plugin
+Whilst the replication plugin allows the propagation of the Git repositories across
+sites and the broker plugin provides a mechanism to propagate events, the Global
+Ref-DB ensures correct alignment of refs of the multi-site nodes.
+
+It is the responsibility of this plugin to atomically store key/value pairs of refs
+in order to allow the libModule to detect out-of-sync refs across multiple sites
+(aka split brain). This is achieved by storing the most recent `sha` for each
+specific mutable `ref`, through some sort of atomic _Compare and Set_ operation.
+
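The compare-and-set mechanism can be sketched as follows. The class and method names are hypothetical, not the plugin's API; the point is only to show how an atomic CAS on the most recent `sha` rejects a node holding a stale view of a ref.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch: a Global Ref-DB keeps the most recent sha per
// mutable ref and only advances it through an atomic compare-and-set,
// which is what detects an out-of-sync node before it can acknowledge
// a push.
public class RefDbSketch {
  private final Map<String, String> refs = new ConcurrentHashMap<>();

  // Succeeds only if the caller's expected sha matches the global one.
  public boolean compareAndPut(String ref, String expectedSha, String newSha) {
    if (expectedSha == null) {
      // First write for this ref: create it atomically.
      return refs.putIfAbsent(ref, newSha) == null;
    }
    return refs.replace(ref, expectedSha, newSha);
  }

  public static void main(String[] args) {
    RefDbSketch db = new RefDbSketch();
    db.compareAndPut("refs/heads/master", null, "W0");
    // Instance1 advances W0 -> W1 and wins the CAS.
    System.out.println(db.compareAndPut("refs/heads/master", "W0", "W1")); // true
    // Instance2 still believes the tip is W0, so its CAS is rejected:
    // the stale write is refused instead of creating a split brain.
    System.out.println(db.compareAndPut("refs/heads/master", "W0", "W2")); // false
  }
}
```

A real implementation would back this map with a coordination service (e.g. Zookeeper) so the CAS is atomic across all sites, not just within one JVM.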
+We mentioned earlier the [CAP theorem](https://en.wikipedia.org/wiki/CAP_theorem),
+which in a nutshell states that a distributed system can only provide two of these
+three properties: _Consistency_, _Availability_ and _Partition tolerance_. The Global
+Ref-DB helps achieve _Consistency_ and _Partition tolerance_ (thus sacrificing
+_Availability_).
+
+See [Prevent split brain thanks to Global Ref-DB](#prevent-split-brain-thanks-to-global-ref-db)
+for a thorough example of this.
+
+When provided, the Global Ref-DB plugin overrides the DynamicItem binding
+exposed by the multi-site module with a specific implementation, such as Zookeeper,
+etcd, MySQL, Mongo, etc.
+
+#### Global Ref-DB Noop implementation
+The default `Noop` implementation provided by the `Multi-site` libModule accepts
+any refs without checking for consistency. This is useful for setting up a test environment
+and allows the multi-site library to be installed independently from any additional
+plugins or the existence of a specific Ref-DB installation.
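Based on the `SharedRefDatabase` configuration class introduced in this change (section `ref-database`, subsection `enforcementRules`), enabling the Ref-DB would look roughly like the fragment below. The policy and value shown are hypothetical placeholders; actual policy names come from `EnforcePolicy.values()`.

```
[ref-database]
  enabled = true

[ref-database "enforcementRules"]
  # <EnforcePolicy name> = <value>, e.g. scoping a policy to a project/ref
  SOME_POLICY = AProject:refs/heads/master
```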
### Eventual consistency on Git, indexes, caches, and stream events
@@ -348,7 +426,7 @@
#### The diagram below illustrates the happy path with crash recovery returning the system to a healthy state.
-![Healthy Use Case](src/main/resources/Documentation/git-replication-healthy.png)
+![Healthy Use Case](images/git-replication-healthy.png)
In this case we are considering two different clients each doing a `push` on top of
the same reference. This could be a new commit in a branch or the change of an existing commit.
@@ -376,7 +454,7 @@
#### The Split Brain situation is illustrated in the following diagram.
-![Split Brain Use Case](src/main/resources/Documentation/git-replication-split-brain.png)
+![Split Brain Use Case](images/git-replication-split-brain.png)
In this case the steps are very similar except that `Instance1` fails after acknowledging the
push of `W0 -> W1` but before having replicated the status to `Instance2`.
@@ -408,24 +486,12 @@
**NOTE**: The two options are not exclusive.
-#### Introduce a `DfsRefDatabase`
+#### Prevent split brain thanks to Global Ref-DB
-An implementation of the out-of-sync detection logic could be based on a central
-coordinator holding the _last known status_ of a _mutable ref_ (immutable refs won't
-have to be stored here). This would be, essentially, a DFS base `RefDatabase` or `DfsRefDatabase`.
+The above scenario can be prevented by using an implementation of the Global Ref-DB
+interface, which will operate as follows:
-This component would:
-
-- Contain a subset of the local `RefDatabase` data:
- - Store only _mutable _ `refs`
- - Keep only the most recent `sha` for each specific `ref`
-- Require that atomic _Compare and Set_ operations can be performed on a
-key -> value storage. For example, it could be implemented using `Zookeeper`. (One implementation
-was done by Dave Borowitz some years ago.)
-
-This interaction is illustrated in the diagram below:
-
-![Split Brain Prevented](src/main/resources/Documentation/git-replication-split-brain-detected.png)
+![Split Brain Prevented](images/git-replication-split-brain-detected.png)
The difference, in respect to the split brain use case, is that now, whenever a change of a
_mutable ref_ is requested, the Gerrit server verifies with the central RefDB that its
@@ -469,17 +535,6 @@
able to differentiate the type of traffic and, thus, is forced always to use the
RW site, even though the operation is RO.
-- **Support for different brokers**: Currently, the multi-site plugin supports Kafka.
- More brokers need to be supported in a fashion similar to the
- [ITS-* plugins framework](https://gerrit-review.googlesource.com/admin/repos/q/filter:plugins%252Fits).
- Explicit references to Kafka must be removed from the multi-site plugin. Other plugins may contribute
- implementations to the broker extension point.
-
-- **Split the publishing and subscribing**: Create two separate
- plugins. Combine the generation of the events into the current kafka-
- events plugin. The multi-site plugin will focus on
- consumption of, and sorting of, the replication issues.
-
## Step-2: Move to multi-site Stage #8.
- Auto-reconfigure HAProxy rules based on the projects sharding policy
@@ -487,5 +542,3 @@
- Serve RW/RW traffic based on the project name/ref-name.
- Balance traffic with "locally-aware" policies based on historical data
-
-- Preventing split-brain in case of temporary sites isolation
diff --git a/external_plugin_deps.bzl b/external_plugin_deps.bzl
index a8fabbb..9f69653 100644
--- a/external_plugin_deps.bzl
+++ b/external_plugin_deps.bzl
@@ -3,35 +3,34 @@
def external_plugin_deps():
maven_jar(
name = "wiremock",
- artifact = "com.github.tomakehurst:wiremock-standalone:2.18.0",
- sha1 = "cf7776dc7a0176d4f4a990155d819279078859f9",
+ artifact = "com.github.tomakehurst:wiremock-standalone:2.23.2",
+ sha1 = "4a920d6c04fd2444c7bc94880adc8313f5b31ba3",
)
maven_jar(
name = "mockito",
- artifact = "org.mockito:mockito-core:2.21.0",
- sha1 = "cdd1d0d5b2edbd2a7040735ccf88318c031f458b",
+ artifact = "org.mockito:mockito-core:2.27.0",
+ sha1 = "835fc3283b481f4758b8ef464cd560c649c08b00",
deps = [
- "@byte_buddy//jar",
- "@byte_buddy_agent//jar",
+ "@byte-buddy//jar",
+ "@byte-buddy-agent//jar",
"@objenesis//jar",
],
)
- BYTE_BUDDY_VER = "1.8.15"
+ BYTE_BUDDY_VER = "1.9.10"
CURATOR_VER = "4.2.0"
- CURATOR_TEST_VER = "2.12.0"
maven_jar(
- name = "byte_buddy",
+ name = "byte-buddy",
artifact = "net.bytebuddy:byte-buddy:" + BYTE_BUDDY_VER,
- sha1 = "cb36fe3c70ead5fcd016856a7efff908402d86b8",
+ sha1 = "211a2b4d3df1eeef2a6cacf78d74a1f725e7a840",
)
maven_jar(
- name = "byte_buddy_agent",
+ name = "byte-buddy-agent",
artifact = "net.bytebuddy:byte-buddy-agent:" + BYTE_BUDDY_VER,
- sha1 = "a2dbe3457401f65ad4022617fbb3fc0e5f427c7d",
+ sha1 = "9674aba5ee793e54b864952b001166848da0f26b",
)
maven_jar(
@@ -41,49 +40,43 @@
)
maven_jar(
- name = "kafka_client",
+ name = "kafka-client",
artifact = "org.apache.kafka:kafka-clients:2.1.0",
sha1 = "34d9983705c953b97abb01e1cd04647f47272fe5",
)
maven_jar(
name = "testcontainers-kafka",
- artifact = "org.testcontainers:kafka:1.10.6",
- sha1 = "5984e31306bd6c84a36092cdd19e0ef7e2268d98",
- )
-
- maven_jar(
- name = "commons-lang3",
- artifact = "org.apache.commons:commons-lang3:3.6",
- sha1 = "9d28a6b23650e8a7e9063c04588ace6cf7012c17",
+ artifact = "org.testcontainers:kafka:1.11.3",
+ sha1 = "932d1baa2541f218b1b44a0546ae83d530011468",
)
maven_jar(
name = "curator-test",
- artifact = "org.apache.curator:curator-test:" + CURATOR_TEST_VER,
- sha1 = "0a797be57ba95b67688a7615f7ad41ee6b3ceff0"
+ artifact = "org.apache.curator:curator-test:" + CURATOR_VER,
+ sha1 = "98ac2dd69b8c07dcaab5e5473f93fdb9e320cd73",
)
maven_jar(
name = "curator-framework",
artifact = "org.apache.curator:curator-framework:" + CURATOR_VER,
- sha1 = "5b1cc87e17b8fe4219b057f6025662a693538861"
+ sha1 = "5b1cc87e17b8fe4219b057f6025662a693538861",
)
maven_jar(
name = "curator-recipes",
artifact = "org.apache.curator:curator-recipes:" + CURATOR_VER,
- sha1 = "7f775be5a7062c2477c51533b9d008f70411ba8e"
+ sha1 = "7f775be5a7062c2477c51533b9d008f70411ba8e",
)
maven_jar(
name = "curator-client",
artifact = "org.apache.curator:curator-client:" + CURATOR_VER,
- sha1 = "d5d50930b8dd189f92c40258a6ba97675fea3e15"
- )
+ sha1 = "d5d50930b8dd189f92c40258a6ba97675fea3e15",
+ )
maven_jar(
name = "zookeeper",
- artifact = "org.apache.zookeeper:zookeeper:3.4.8",
- sha1 = "933ea2ed15e6a0e24b788973e3d128ff163c3136"
+ artifact = "org.apache.zookeeper:zookeeper:3.4.14",
+ sha1 = "c114c1e1c8172a7cd3f6ae39209a635f7a06c1a1",
)
diff --git a/images/architecture-first-iteration.png b/images/architecture-first-iteration.png
index 1a9fe36..841ee40 100644
--- a/images/architecture-first-iteration.png
+++ b/images/architecture-first-iteration.png
Binary files differ
diff --git a/src/main/resources/Documentation/git-replication-healthy.png b/images/git-replication-healthy.png
similarity index 100%
rename from src/main/resources/Documentation/git-replication-healthy.png
rename to images/git-replication-healthy.png
Binary files differ
diff --git a/images/git-replication-split-brain-detected.png b/images/git-replication-split-brain-detected.png
new file mode 100644
index 0000000..a49b9be
--- /dev/null
+++ b/images/git-replication-split-brain-detected.png
Binary files differ
diff --git a/src/main/resources/Documentation/git-replication-split-brain.png b/images/git-replication-split-brain.png
similarity index 100%
rename from src/main/resources/Documentation/git-replication-split-brain.png
rename to images/git-replication-split-brain.png
Binary files differ
diff --git a/setup_local_env/setup.sh b/setup_local_env/setup.sh
index 5f4d076..2d615b5 100755
--- a/setup_local_env/setup.sh
+++ b/setup_local_env/setup.sh
@@ -318,10 +318,10 @@
fi
if [ $DOWNLOAD_WEBSESSION_FLATFILE = "true" ];then
echo "Downloading websession-flatfile plugin stable 2.16"
- wget https://gerrit-ci.gerritforge.com/view/Plugins-stable-2.16/job/plugin-websession-flatfile-bazel-master-stable-2.16/lastSuccessfulBuild/artifact/bazel-genfiles/plugins/websession-flatfile/websession-flatfile.jar \
+ wget https://gerrit-ci.gerritforge.com/view/Plugins-stable-2.16/job/plugin-websession-flatfile-bazel-master-stable-2.16/lastSuccessfulBuild/artifact/bazel-bin/plugins/websession-flatfile/websession-flatfile.jar \
-O $DEPLOYMENT_LOCATION/websession-flatfile.jar || { echo >&2 "Cannot download websession-flatfile plugin: Check internet connection. Abort\
ing"; exit 1; }
- wget https://gerrit-ci.gerritforge.com/view/Plugins-stable-2.16/job/plugin-healthcheck-bazel-stable-2.16/lastSuccessfulBuild/artifact/bazel-genfiles/plugins/healthcheck/healthcheck.jar \
+ wget https://gerrit-ci.gerritforge.com/view/Plugins-stable-2.16/job/plugin-healthcheck-bazel-stable-2.16/lastSuccessfulBuild/artifact/bazel-bin/plugins/healthcheck/healthcheck.jar \
-O $DEPLOYMENT_LOCATION/healthcheck.jar || { echo >&2 "Cannot download healthcheck plugin: Check internet connection. Abort\
ing"; exit 1; }
else
diff --git a/src/.DS_Store b/src/.DS_Store
new file mode 100644
index 0000000..d549e6c
--- /dev/null
+++ b/src/.DS_Store
Binary files differ
diff --git a/src/main/.DS_Store b/src/main/.DS_Store
new file mode 100644
index 0000000..c068634
--- /dev/null
+++ b/src/main/.DS_Store
Binary files differ
diff --git a/src/main/java/com/googlesource/gerrit/plugins/multisite/Configuration.java b/src/main/java/com/googlesource/gerrit/plugins/multisite/Configuration.java
index 72fec47..3764013 100644
--- a/src/main/java/com/googlesource/gerrit/plugins/multisite/Configuration.java
+++ b/src/main/java/com/googlesource/gerrit/plugins/multisite/Configuration.java
@@ -14,7 +14,6 @@
package com.googlesource.gerrit.plugins.multisite;
-import static com.google.common.base.Preconditions.checkArgument;
import static com.google.common.base.Suppliers.memoize;
import static com.google.common.base.Suppliers.ofInstance;
@@ -41,11 +40,6 @@
import java.util.Map;
import java.util.Properties;
import java.util.UUID;
-import org.apache.commons.lang.StringUtils;
-import org.apache.curator.RetryPolicy;
-import org.apache.curator.framework.CuratorFramework;
-import org.apache.curator.framework.CuratorFrameworkFactory;
-import org.apache.curator.retry.BoundedExponentialBackoffRetry;
import org.apache.kafka.common.serialization.StringSerializer;
import org.eclipse.jgit.errors.ConfigInvalidException;
import org.eclipse.jgit.lib.Config;
@@ -66,7 +60,6 @@
// common parameters to cache and index sections
static final String THREAD_POOL_SIZE_KEY = "threadPoolSize";
-
static final int DEFAULT_INDEX_MAX_TRIES = 2;
static final int DEFAULT_INDEX_RETRY_INTERVAL = 30000;
private static final int DEFAULT_POLLING_INTERVAL_MS = 1000;
@@ -85,8 +78,9 @@
private final Supplier<Index> index;
private final Supplier<KafkaSubscriber> subscriber;
private final Supplier<Kafka> kafka;
- private final Supplier<ZookeeperConfig> zookeeperConfig;
+ private final Supplier<SharedRefDatabase> sharedRefDb;
private final Supplier<Collection<Message>> replicationConfigValidation;
+ private final Config multiSiteConfig;
@Inject
Configuration(SitePaths sitePaths) {
@@ -95,19 +89,24 @@
@VisibleForTesting
public Configuration(Config multiSiteConfig, Config replicationConfig) {
- Supplier<Config> lazyCfg = lazyLoad(multiSiteConfig);
+ Supplier<Config> lazyMultiSiteCfg = lazyLoad(multiSiteConfig);
+ this.multiSiteConfig = multiSiteConfig;
replicationConfigValidation = lazyValidateReplicatioConfig(replicationConfig);
- kafka = memoize(() -> new Kafka(lazyCfg));
- publisher = memoize(() -> new KafkaPublisher(lazyCfg));
- subscriber = memoize(() -> new KafkaSubscriber(lazyCfg));
- cache = memoize(() -> new Cache(lazyCfg));
- event = memoize(() -> new Event(lazyCfg));
- index = memoize(() -> new Index(lazyCfg));
- zookeeperConfig = memoize(() -> new ZookeeperConfig(lazyCfg));
+ kafka = memoize(() -> new Kafka(lazyMultiSiteCfg));
+ publisher = memoize(() -> new KafkaPublisher(lazyMultiSiteCfg));
+ subscriber = memoize(() -> new KafkaSubscriber(lazyMultiSiteCfg));
+ cache = memoize(() -> new Cache(lazyMultiSiteCfg));
+ event = memoize(() -> new Event(lazyMultiSiteCfg));
+ index = memoize(() -> new Index(lazyMultiSiteCfg));
+ sharedRefDb = memoize(() -> new SharedRefDatabase(lazyMultiSiteCfg));
}
- public ZookeeperConfig getZookeeperConfig() {
- return zookeeperConfig.get();
+ public Config getMultiSiteConfig() {
+ return multiSiteConfig;
+ }
+
+ public SharedRefDatabase getSharedRefDb() {
+ return sharedRefDb.get();
}
public Kafka getKafka() {
@@ -193,17 +192,6 @@
}
}
- private static long getLong(
- Supplier<Config> cfg, String section, String subSection, String name, long defaultValue) {
- try {
- return cfg.get().getLong(section, subSection, name, defaultValue);
- } catch (IllegalArgumentException e) {
- log.error("invalid value for {}; using default value {}", name, defaultValue);
- log.debug("Failed to retrieve long value: {}", e.getMessage(), e);
- return defaultValue;
- }
- }
-
private static String getString(
Supplier<Config> cfg, String section, String subsection, String name, String defaultValue) {
String value = cfg.get().getString(section, subsection, name);
@@ -261,7 +249,7 @@
private final Map<EventFamily, String> eventTopics;
private final String bootstrapServers;
- private static final Map<EventFamily, String> EVENT_TOPICS =
+ private static final ImmutableMap<EventFamily, String> EVENT_TOPICS =
ImmutableMap.of(
EventFamily.INDEX_EVENT,
"GERRIT.EVENT.INDEX",
@@ -340,7 +328,7 @@
}
}
- public class KafkaSubscriber extends Properties {
+ public static class KafkaSubscriber extends Properties {
private static final long serialVersionUID = 1L;
static final String KAFKA_SUBSCRIBER_SUBSECTION = "subscriber";
@@ -400,6 +388,37 @@
}
}
+ public static class SharedRefDatabase {
+ public static final String SECTION = "ref-database";
+ public static final String SUBSECTION_ENFORCEMENT_RULES = "enforcementRules";
+
+ private final boolean enabled;
+ private final Multimap<EnforcePolicy, String> enforcementRules;
+
+ private SharedRefDatabase(Supplier<Config> cfg) {
+ enabled = getBoolean(cfg, SECTION, null, ENABLE_KEY, true);
+
+ enforcementRules = MultimapBuilder.hashKeys().arrayListValues().build();
+ for (EnforcePolicy policy : EnforcePolicy.values()) {
+ enforcementRules.putAll(
+ policy, getList(cfg, SECTION, SUBSECTION_ENFORCEMENT_RULES, policy.name()));
+ }
+ }
+
+ public boolean isEnabled() {
+ return enabled;
+ }
+
+ public Multimap<EnforcePolicy, String> getEnforcementRules() {
+ return enforcementRules;
+ }
+
+ private List<String> getList(
+ Supplier<Config> cfg, String section, String subsection, String name) {
+ return ImmutableList.copyOf(cfg.get().getStringList(section, subsection, name));
+ }
+ }
+
/** Common parameters to cache, event, index */
public abstract static class Forwarding {
static final boolean DEFAULT_SYNCHRONIZE = true;
@@ -487,177 +506,6 @@
}
}
- public static class ZookeeperConfig {
- public static final String SECTION = "ref-database";
- public static final int defaultSessionTimeoutMs;
- public static final int defaultConnectionTimeoutMs;
- public static final String DEFAULT_ZK_CONNECT = "localhost:2181";
- private final int DEFAULT_RETRY_POLICY_BASE_SLEEP_TIME_MS = 1000;
- private final int DEFAULT_RETRY_POLICY_MAX_SLEEP_TIME_MS = 3000;
- private final int DEFAULT_RETRY_POLICY_MAX_RETRIES = 3;
- private final int DEFAULT_CAS_RETRY_POLICY_BASE_SLEEP_TIME_MS = 100;
- private final int DEFAULT_CAS_RETRY_POLICY_MAX_SLEEP_TIME_MS = 300;
- private final int DEFAULT_CAS_RETRY_POLICY_MAX_RETRIES = 3;
- private final int DEFAULT_TRANSACTION_LOCK_TIMEOUT = 1000;
-
- static {
- CuratorFrameworkFactory.Builder b = CuratorFrameworkFactory.builder();
- defaultSessionTimeoutMs = b.getSessionTimeoutMs();
- defaultConnectionTimeoutMs = b.getConnectionTimeoutMs();
- }
-
- public static final String SUBSECTION = "zookeeper";
- public static final String KEY_CONNECT_STRING = "connectString";
- public static final String KEY_SESSION_TIMEOUT_MS = "sessionTimeoutMs";
- public static final String KEY_CONNECTION_TIMEOUT_MS = "connectionTimeoutMs";
- public static final String KEY_RETRY_POLICY_BASE_SLEEP_TIME_MS = "retryPolicyBaseSleepTimeMs";
- public static final String KEY_RETRY_POLICY_MAX_SLEEP_TIME_MS = "retryPolicyMaxSleepTimeMs";
- public static final String KEY_RETRY_POLICY_MAX_RETRIES = "retryPolicyMaxRetries";
- public static final String KEY_LOCK_TIMEOUT_MS = "lockTimeoutMs";
- public static final String KEY_ROOT_NODE = "rootNode";
- public final String KEY_CAS_RETRY_POLICY_BASE_SLEEP_TIME_MS = "casRetryPolicyBaseSleepTimeMs";
- public final String KEY_CAS_RETRY_POLICY_MAX_SLEEP_TIME_MS = "casRetryPolicyMaxSleepTimeMs";
- public final String KEY_CAS_RETRY_POLICY_MAX_RETRIES = "casRetryPolicyMaxRetries";
- public static final String KEY_MIGRATE = "migrate";
- public final String TRANSACTION_LOCK_TIMEOUT_KEY = "transactionLockTimeoutMs";
-
- public static final String SUBSECTION_ENFORCEMENT_RULES = "enforcementRules";
-
- private final String connectionString;
- private final String root;
- private final int sessionTimeoutMs;
- private final int connectionTimeoutMs;
- private final int baseSleepTimeMs;
- private final int maxSleepTimeMs;
- private final int maxRetries;
- private final int casBaseSleepTimeMs;
- private final int casMaxSleepTimeMs;
- private final int casMaxRetries;
- private final boolean enabled;
-
- private final Multimap<EnforcePolicy, String> enforcementRules;
-
- private final Long transactionLockTimeOut;
-
- private CuratorFramework build;
-
- private ZookeeperConfig(Supplier<Config> cfg) {
- connectionString =
- getString(cfg, SECTION, SUBSECTION, KEY_CONNECT_STRING, DEFAULT_ZK_CONNECT);
- root = getString(cfg, SECTION, SUBSECTION, KEY_ROOT_NODE, "gerrit/multi-site");
- sessionTimeoutMs =
- getInt(cfg, SECTION, SUBSECTION, KEY_SESSION_TIMEOUT_MS, defaultSessionTimeoutMs);
- connectionTimeoutMs =
- getInt(cfg, SECTION, SUBSECTION, KEY_CONNECTION_TIMEOUT_MS, defaultConnectionTimeoutMs);
-
- baseSleepTimeMs =
- getInt(
- cfg,
- SECTION,
- SUBSECTION,
- KEY_RETRY_POLICY_BASE_SLEEP_TIME_MS,
- DEFAULT_RETRY_POLICY_BASE_SLEEP_TIME_MS);
-
- maxSleepTimeMs =
- getInt(
- cfg,
- SECTION,
- SUBSECTION,
- KEY_RETRY_POLICY_MAX_SLEEP_TIME_MS,
- DEFAULT_RETRY_POLICY_MAX_SLEEP_TIME_MS);
-
- maxRetries =
- getInt(
- cfg,
- SECTION,
- SUBSECTION,
- KEY_RETRY_POLICY_MAX_RETRIES,
- DEFAULT_RETRY_POLICY_MAX_RETRIES);
-
- casBaseSleepTimeMs =
- getInt(
- cfg,
- SECTION,
- SUBSECTION,
- KEY_CAS_RETRY_POLICY_BASE_SLEEP_TIME_MS,
- DEFAULT_CAS_RETRY_POLICY_BASE_SLEEP_TIME_MS);
-
- casMaxSleepTimeMs =
- getInt(
- cfg,
- SECTION,
- SUBSECTION,
- KEY_CAS_RETRY_POLICY_MAX_SLEEP_TIME_MS,
- DEFAULT_CAS_RETRY_POLICY_MAX_SLEEP_TIME_MS);
-
- casMaxRetries =
- getInt(
- cfg,
- SECTION,
- SUBSECTION,
- KEY_CAS_RETRY_POLICY_MAX_RETRIES,
- DEFAULT_CAS_RETRY_POLICY_MAX_RETRIES);
-
- transactionLockTimeOut =
- getLong(
- cfg,
- SECTION,
- SUBSECTION,
- TRANSACTION_LOCK_TIMEOUT_KEY,
- DEFAULT_TRANSACTION_LOCK_TIMEOUT);
-
- checkArgument(StringUtils.isNotEmpty(connectionString), "zookeeper.%s contains no servers");
-
- enabled = Configuration.getBoolean(cfg, SECTION, null, ENABLE_KEY, true);
-
- enforcementRules = MultimapBuilder.hashKeys().arrayListValues().build();
- for (EnforcePolicy policy : EnforcePolicy.values()) {
- enforcementRules.putAll(
- policy,
- Configuration.getList(cfg, SECTION, SUBSECTION_ENFORCEMENT_RULES, policy.name()));
- }
- }
-
- public CuratorFramework buildCurator() {
- if (build == null) {
- this.build =
- CuratorFrameworkFactory.builder()
- .connectString(connectionString)
- .sessionTimeoutMs(sessionTimeoutMs)
- .connectionTimeoutMs(connectionTimeoutMs)
- .retryPolicy(
- new BoundedExponentialBackoffRetry(baseSleepTimeMs, maxSleepTimeMs, maxRetries))
- .namespace(root)
- .build();
- this.build.start();
- }
-
- return this.build;
- }
-
- public Long getZkInterProcessLockTimeOut() {
- return transactionLockTimeOut;
- }
-
- public RetryPolicy buildCasRetryPolicy() {
- return new BoundedExponentialBackoffRetry(
- casBaseSleepTimeMs, casMaxSleepTimeMs, casMaxRetries);
- }
-
- public boolean isEnabled() {
- return enabled;
- }
-
- public Multimap<EnforcePolicy, String> getEnforcementRules() {
- return enforcementRules;
- }
- }
-
- static List<String> getList(
- Supplier<Config> cfg, String section, String subsection, String name) {
- return ImmutableList.copyOf(cfg.get().getStringList(section, subsection, name));
- }
-
static boolean getBoolean(
Supplier<Config> cfg, String section, String subsection, String name, boolean defaultValue) {
try {
diff --git a/src/main/java/com/googlesource/gerrit/plugins/multisite/Module.java b/src/main/java/com/googlesource/gerrit/plugins/multisite/Module.java
index 5e91c80..6bc7027 100644
--- a/src/main/java/com/googlesource/gerrit/plugins/multisite/Module.java
+++ b/src/main/java/com/googlesource/gerrit/plugins/multisite/Module.java
@@ -34,9 +34,9 @@
import com.googlesource.gerrit.plugins.multisite.kafka.consumer.KafkaConsumerModule;
import com.googlesource.gerrit.plugins.multisite.kafka.router.ForwardedEventRouterModule;
import com.googlesource.gerrit.plugins.multisite.validation.ValidationModule;
+import com.googlesource.gerrit.plugins.multisite.validation.dfsrefdb.zookeeper.ZkValidationModule;
import java.io.BufferedReader;
import java.io.BufferedWriter;
-import java.io.FileReader;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
@@ -110,7 +110,10 @@
install(
new ValidationModule(
- config, disableGitRepositoryValidation || !config.getZookeeperConfig().isEnabled()));
+ config, disableGitRepositoryValidation || !config.getSharedRefDb().isEnabled()));
+
+ install(new ZkValidationModule(config));
+
bind(Gson.class)
.annotatedWith(BrokerGson.class)
.toProvider(GsonProvider.class)
@@ -148,7 +151,7 @@
private UUID tryToLoadSavedInstanceId(String serverIdFile) {
if (Files.exists(Paths.get(serverIdFile))) {
- try (BufferedReader br = new BufferedReader(new FileReader(serverIdFile))) {
+ try (BufferedReader br = Files.newBufferedReader(Paths.get(serverIdFile))) {
return UUID.fromString(br.readLine());
} catch (IOException e) {
log.warn(
diff --git a/src/main/java/com/googlesource/gerrit/plugins/multisite/ZookeeperConfig.java b/src/main/java/com/googlesource/gerrit/plugins/multisite/ZookeeperConfig.java
new file mode 100644
index 0000000..35471fa
--- /dev/null
+++ b/src/main/java/com/googlesource/gerrit/plugins/multisite/ZookeeperConfig.java
@@ -0,0 +1,247 @@
+// Copyright (C) 2019 The Android Open Source Project
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package com.googlesource.gerrit.plugins.multisite;
+
+import static com.google.common.base.Preconditions.checkArgument;
+import static com.google.common.base.Suppliers.memoize;
+import static com.google.common.base.Suppliers.ofInstance;
+
+import com.google.common.base.Strings;
+import com.google.common.base.Supplier;
+import com.google.gerrit.server.config.SitePaths;
+import java.io.IOException;
+import org.apache.commons.lang.StringUtils;
+import org.apache.curator.RetryPolicy;
+import org.apache.curator.framework.CuratorFramework;
+import org.apache.curator.framework.CuratorFrameworkFactory;
+import org.apache.curator.retry.BoundedExponentialBackoffRetry;
+import org.eclipse.jgit.errors.ConfigInvalidException;
+import org.eclipse.jgit.lib.Config;
+import org.eclipse.jgit.storage.file.FileBasedConfig;
+import org.eclipse.jgit.util.FS;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+public class ZookeeperConfig {
+ private static final Logger log = LoggerFactory.getLogger(ZookeeperConfig.class);
+ public static final int defaultSessionTimeoutMs;
+ public static final int defaultConnectionTimeoutMs;
+ public static final String DEFAULT_ZK_CONNECT = "localhost:2181";
+ private final int DEFAULT_RETRY_POLICY_BASE_SLEEP_TIME_MS = 1000;
+ private final int DEFAULT_RETRY_POLICY_MAX_SLEEP_TIME_MS = 3000;
+ private final int DEFAULT_RETRY_POLICY_MAX_RETRIES = 3;
+ private final int DEFAULT_CAS_RETRY_POLICY_BASE_SLEEP_TIME_MS = 100;
+ private final int DEFAULT_CAS_RETRY_POLICY_MAX_SLEEP_TIME_MS = 300;
+ private final int DEFAULT_CAS_RETRY_POLICY_MAX_RETRIES = 3;
+ private final int DEFAULT_TRANSACTION_LOCK_TIMEOUT = 1000;
+
+ static {
+ CuratorFrameworkFactory.Builder b = CuratorFrameworkFactory.builder();
+ defaultSessionTimeoutMs = b.getSessionTimeoutMs();
+ defaultConnectionTimeoutMs = b.getConnectionTimeoutMs();
+ }
+
+ public static final String SUBSECTION = "zookeeper";
+ public static final String KEY_CONNECT_STRING = "connectString";
+ public static final String KEY_SESSION_TIMEOUT_MS = "sessionTimeoutMs";
+ public static final String KEY_CONNECTION_TIMEOUT_MS = "connectionTimeoutMs";
+ public static final String KEY_RETRY_POLICY_BASE_SLEEP_TIME_MS = "retryPolicyBaseSleepTimeMs";
+ public static final String KEY_RETRY_POLICY_MAX_SLEEP_TIME_MS = "retryPolicyMaxSleepTimeMs";
+ public static final String KEY_RETRY_POLICY_MAX_RETRIES = "retryPolicyMaxRetries";
+ public static final String KEY_ROOT_NODE = "rootNode";
+ public static final String KEY_CAS_RETRY_POLICY_BASE_SLEEP_TIME_MS = "casRetryPolicyBaseSleepTimeMs";
+ public static final String KEY_CAS_RETRY_POLICY_MAX_SLEEP_TIME_MS = "casRetryPolicyMaxSleepTimeMs";
+ public static final String KEY_CAS_RETRY_POLICY_MAX_RETRIES = "casRetryPolicyMaxRetries";
+ public static final String TRANSACTION_LOCK_TIMEOUT_KEY = "transactionLockTimeoutMs";
+
+ private final String connectionString;
+ private final String root;
+ private final int sessionTimeoutMs;
+ private final int connectionTimeoutMs;
+ private final int baseSleepTimeMs;
+ private final int maxSleepTimeMs;
+ private final int maxRetries;
+ private final int casBaseSleepTimeMs;
+ private final int casMaxSleepTimeMs;
+ private final int casMaxRetries;
+
+ public static final String SECTION = "ref-database";
+ private final Long transactionLockTimeOut;
+
+ private CuratorFramework build;
+
+ public ZookeeperConfig(Config zkCfg) {
+ Supplier<Config> lazyZkConfig = lazyLoad(zkCfg);
+ connectionString =
+ getString(lazyZkConfig, SECTION, SUBSECTION, KEY_CONNECT_STRING, DEFAULT_ZK_CONNECT);
+ root = getString(lazyZkConfig, SECTION, SUBSECTION, KEY_ROOT_NODE, "gerrit/multi-site");
+ sessionTimeoutMs =
+ getInt(lazyZkConfig, SECTION, SUBSECTION, KEY_SESSION_TIMEOUT_MS, defaultSessionTimeoutMs);
+ connectionTimeoutMs =
+ getInt(
+ lazyZkConfig,
+ SECTION,
+ SUBSECTION,
+ KEY_CONNECTION_TIMEOUT_MS,
+ defaultConnectionTimeoutMs);
+
+ baseSleepTimeMs =
+ getInt(
+ lazyZkConfig,
+ SECTION,
+ SUBSECTION,
+ KEY_RETRY_POLICY_BASE_SLEEP_TIME_MS,
+ DEFAULT_RETRY_POLICY_BASE_SLEEP_TIME_MS);
+
+ maxSleepTimeMs =
+ getInt(
+ lazyZkConfig,
+ SECTION,
+ SUBSECTION,
+ KEY_RETRY_POLICY_MAX_SLEEP_TIME_MS,
+ DEFAULT_RETRY_POLICY_MAX_SLEEP_TIME_MS);
+
+ maxRetries =
+ getInt(
+ lazyZkConfig,
+ SECTION,
+ SUBSECTION,
+ KEY_RETRY_POLICY_MAX_RETRIES,
+ DEFAULT_RETRY_POLICY_MAX_RETRIES);
+
+ casBaseSleepTimeMs =
+ getInt(
+ lazyZkConfig,
+ SECTION,
+ SUBSECTION,
+ KEY_CAS_RETRY_POLICY_BASE_SLEEP_TIME_MS,
+ DEFAULT_CAS_RETRY_POLICY_BASE_SLEEP_TIME_MS);
+
+ casMaxSleepTimeMs =
+ getInt(
+ lazyZkConfig,
+ SECTION,
+ SUBSECTION,
+ KEY_CAS_RETRY_POLICY_MAX_SLEEP_TIME_MS,
+ DEFAULT_CAS_RETRY_POLICY_MAX_SLEEP_TIME_MS);
+
+ casMaxRetries =
+ getInt(
+ lazyZkConfig,
+ SECTION,
+ SUBSECTION,
+ KEY_CAS_RETRY_POLICY_MAX_RETRIES,
+ DEFAULT_CAS_RETRY_POLICY_MAX_RETRIES);
+
+ transactionLockTimeOut =
+ getLong(
+ lazyZkConfig,
+ SECTION,
+ SUBSECTION,
+ TRANSACTION_LOCK_TIMEOUT_KEY,
+ DEFAULT_TRANSACTION_LOCK_TIMEOUT);
+
+ checkArgument(StringUtils.isNotEmpty(connectionString), "zookeeper.%s contains no servers", KEY_CONNECT_STRING);
+ }
+
+ public CuratorFramework buildCurator() {
+ if (build == null) {
+ this.build =
+ CuratorFrameworkFactory.builder()
+ .connectString(connectionString)
+ .sessionTimeoutMs(sessionTimeoutMs)
+ .connectionTimeoutMs(connectionTimeoutMs)
+ .retryPolicy(
+ new BoundedExponentialBackoffRetry(baseSleepTimeMs, maxSleepTimeMs, maxRetries))
+ .namespace(root)
+ .build();
+ this.build.start();
+ }
+
+ return this.build;
+ }
+
+ public Long getZkInterProcessLockTimeOut() {
+ return transactionLockTimeOut;
+ }
+
+ public RetryPolicy buildCasRetryPolicy() {
+ return new BoundedExponentialBackoffRetry(casBaseSleepTimeMs, casMaxSleepTimeMs, casMaxRetries);
+ }
+
+ private static FileBasedConfig getConfigFile(SitePaths sitePaths, String configFileName) {
+ return new FileBasedConfig(sitePaths.etc_dir.resolve(configFileName).toFile(), FS.DETECTED);
+ }
+
+ private long getLong(
+ Supplier<Config> cfg, String section, String subSection, String name, long defaultValue) {
+ try {
+ return cfg.get().getLong(section, subSection, name, defaultValue);
+ } catch (IllegalArgumentException e) {
+ log.error("invalid value for {}; using default value {}", name, defaultValue);
+ log.debug("Failed to retrieve long value: {}", e.getMessage(), e);
+ return defaultValue;
+ }
+ }
+
+ private int getInt(
+ Supplier<Config> cfg, String section, String subSection, String name, int defaultValue) {
+ try {
+ return cfg.get().getInt(section, subSection, name, defaultValue);
+ } catch (IllegalArgumentException e) {
+ log.error("invalid value for {}; using default value {}", name, defaultValue);
+ log.debug("Failed to retrieve integer value: {}", e.getMessage(), e);
+ return defaultValue;
+ }
+ }
+
+ private Supplier<Config> lazyLoad(Config config) {
+ if (config instanceof FileBasedConfig) {
+ return memoize(
+ () -> {
+ FileBasedConfig fileConfig = (FileBasedConfig) config;
+ String fileConfigFileName = fileConfig.getFile().getPath();
+ try {
+ log.info("Loading configuration from {}", fileConfigFileName);
+ fileConfig.load();
+ } catch (IOException | ConfigInvalidException e) {
+ log.error("Unable to load configuration from {}", fileConfigFileName, e);
+ }
+ return fileConfig;
+ });
+ }
+ return ofInstance(config);
+ }
+
+ private boolean getBoolean(
+ Supplier<Config> cfg, String section, String subsection, String name, boolean defaultValue) {
+ try {
+ return cfg.get().getBoolean(section, subsection, name, defaultValue);
+ } catch (IllegalArgumentException e) {
+ log.error("invalid value for {}; using default value {}", name, defaultValue);
+ log.debug("Failed to retrieve boolean value: {}", e.getMessage(), e);
+ return defaultValue;
+ }
+ }
+
+ private String getString(
+ Supplier<Config> cfg, String section, String subsection, String name, String defaultValue) {
+ String value = cfg.get().getString(section, subsection, name);
+ if (!Strings.isNullOrEmpty(value)) {
+ return value;
+ }
+ return defaultValue;
+ }
+}
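The new `ZookeeperConfig` reads every setting from the `ref-database` section, `zookeeper` subsection. A hedged sketch of how the corresponding config fragment might look (file location and values are illustrative; the key names and defaults are taken directly from the constants above):

```ini
[ref-database "zookeeper"]
  connectString = localhost:2181
  rootNode = gerrit/multi-site
  sessionTimeoutMs = 60000
  connectionTimeoutMs = 15000
  retryPolicyBaseSleepTimeMs = 1000
  retryPolicyMaxSleepTimeMs = 3000
  retryPolicyMaxRetries = 3
  casRetryPolicyBaseSleepTimeMs = 100
  casRetryPolicyMaxSleepTimeMs = 300
  casRetryPolicyMaxRetries = 3
  transactionLockTimeoutMs = 1000
```

Any key left out falls back to the default baked into the class, and a malformed numeric value is logged and replaced by the default rather than failing startup.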
diff --git a/src/main/java/com/googlesource/gerrit/plugins/multisite/index/ChangeCheckerImpl.java b/src/main/java/com/googlesource/gerrit/plugins/multisite/index/ChangeCheckerImpl.java
index 29435bc..8cb2fec 100644
--- a/src/main/java/com/googlesource/gerrit/plugins/multisite/index/ChangeCheckerImpl.java
+++ b/src/main/java/com/googlesource/gerrit/plugins/multisite/index/ChangeCheckerImpl.java
@@ -102,8 +102,8 @@
.map(
e ->
(computedChangeTs.get() > e.eventCreatedOn)
- || (computedChangeTs.get() == e.eventCreatedOn)
- && (Objects.equals(getBranchTargetSha(), e.targetSha)))
+ || ((computedChangeTs.get() == e.eventCreatedOn)
+ && (Objects.equals(getBranchTargetSha(), e.targetSha))))
.orElse(true);
}
diff --git a/src/main/java/com/googlesource/gerrit/plugins/multisite/kafka/consumer/AbstractKafkaSubcriber.java b/src/main/java/com/googlesource/gerrit/plugins/multisite/kafka/consumer/AbstractKafkaSubcriber.java
index 25200ad..911471f 100644
--- a/src/main/java/com/googlesource/gerrit/plugins/multisite/kafka/consumer/AbstractKafkaSubcriber.java
+++ b/src/main/java/com/googlesource/gerrit/plugins/multisite/kafka/consumer/AbstractKafkaSubcriber.java
@@ -14,6 +14,8 @@
package com.googlesource.gerrit.plugins.multisite.kafka.consumer;
+import static java.nio.charset.StandardCharsets.UTF_8;
+
import com.google.common.flogger.FluentLogger;
import com.google.gerrit.extensions.registration.DynamicSet;
import com.google.gerrit.server.permissions.PermissionBackendException;
@@ -133,7 +135,7 @@
}
} catch (Exception e) {
logger.atSevere().withCause(e).log(
- "Malformed event '%s': [Exception: %s]", new String(consumerRecord.value()));
+ "Malformed event '%s'", new String(consumerRecord.value(), UTF_8));
}
}
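The hunk above switches from `new String(bytes)` to `new String(bytes, UTF_8)`. A minimal, self-contained sketch of why that matters: decoding with an explicit charset keeps the result independent of the JVM's platform default encoding.

```java
import java.nio.charset.StandardCharsets;

public class CharsetSketch {
  // Decoding with an explicit charset produces the same string on every
  // platform; new String(bytes) silently uses the JVM default encoding.
  public static String decode(byte[] raw) {
    return new String(raw, StandardCharsets.UTF_8);
  }

  public static void main(String[] args) {
    // Round-trip a non-ASCII payload, as a Kafka record value might carry.
    byte[] raw = "évènement".getBytes(StandardCharsets.UTF_8);
    System.out.println(decode(raw));
  }
}
```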
diff --git a/src/main/java/com/googlesource/gerrit/plugins/multisite/kafka/consumer/SourceAwareEventWrapper.java b/src/main/java/com/googlesource/gerrit/plugins/multisite/kafka/consumer/SourceAwareEventWrapper.java
index 09fbe0d..cc638c0 100644
--- a/src/main/java/com/googlesource/gerrit/plugins/multisite/kafka/consumer/SourceAwareEventWrapper.java
+++ b/src/main/java/com/googlesource/gerrit/plugins/multisite/kafka/consumer/SourceAwareEventWrapper.java
@@ -14,11 +14,12 @@
package com.googlesource.gerrit.plugins.multisite.kafka.consumer;
+import static java.util.Objects.requireNonNull;
+
import com.google.gerrit.server.events.Event;
import com.google.gson.Gson;
import com.google.gson.JsonObject;
import java.util.UUID;
-import org.apache.commons.lang3.Validate;
public class SourceAwareEventWrapper {
@@ -67,9 +68,9 @@
}
public void validate() {
- Validate.notNull(eventId, "EventId cannot be null");
- Validate.notNull(eventType, "EventType cannot be null");
- Validate.notNull(sourceInstanceId, "Source Instance ID cannot be null");
+ requireNonNull(eventId, "EventId cannot be null");
+ requireNonNull(eventType, "EventType cannot be null");
+ requireNonNull(sourceInstanceId, "Source Instance ID cannot be null");
}
@Override
@@ -94,8 +95,8 @@
}
public void validate() {
- Validate.notNull(header, "Header cannot be null");
- Validate.notNull(body, "Body cannot be null");
+ requireNonNull(header, "Header cannot be null");
+ requireNonNull(body, "Body cannot be null");
header.validate();
}
}
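The refactor above replaces commons-lang3 `Validate.notNull` with JDK `Objects.requireNonNull`, which is behavior-preserving: both throw a `NullPointerException` carrying the supplied message. A minimal, hypothetical sketch (method name is illustrative, not from the plugin):

```java
import static java.util.Objects.requireNonNull;

public class ValidateSketch {
  // Mirrors the refactored validate() methods: fail fast with a
  // NullPointerException whose message names the missing field.
  static String requireEventId(String eventId) {
    return requireNonNull(eventId, "EventId cannot be null");
  }

  public static void main(String[] args) {
    System.out.println(requireEventId("change-merged"));
    try {
      requireEventId(null);
    } catch (NullPointerException e) {
      System.out.println(e.getMessage());
    }
  }
}
```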
diff --git a/src/main/java/com/googlesource/gerrit/plugins/multisite/validation/MultiSiteGitRepositoryManager.java b/src/main/java/com/googlesource/gerrit/plugins/multisite/validation/MultiSiteGitRepositoryManager.java
index f609c49..8ce1f59 100644
--- a/src/main/java/com/googlesource/gerrit/plugins/multisite/validation/MultiSiteGitRepositoryManager.java
+++ b/src/main/java/com/googlesource/gerrit/plugins/multisite/validation/MultiSiteGitRepositoryManager.java
@@ -14,7 +14,7 @@
package com.googlesource.gerrit.plugins.multisite.validation;
-import com.google.gerrit.reviewdb.client.Project.NameKey;
+import com.google.gerrit.reviewdb.client.Project;
import com.google.gerrit.server.git.GitRepositoryManager;
import com.google.gerrit.server.git.LocalDiskRepositoryManager;
import com.google.gerrit.server.git.RepositoryCaseMismatchException;
@@ -39,22 +39,23 @@
}
@Override
- public Repository openRepository(NameKey name) throws RepositoryNotFoundException, IOException {
+ public Repository openRepository(Project.NameKey name)
+ throws RepositoryNotFoundException, IOException {
return wrap(name, gitRepositoryManager.openRepository(name));
}
@Override
- public Repository createRepository(NameKey name)
+ public Repository createRepository(Project.NameKey name)
throws RepositoryCaseMismatchException, RepositoryNotFoundException, IOException {
return wrap(name, gitRepositoryManager.createRepository(name));
}
@Override
- public SortedSet<NameKey> list() {
+ public SortedSet<Project.NameKey> list() {
return gitRepositoryManager.list();
}
- private Repository wrap(NameKey projectName, Repository projectRepo) {
+ private Repository wrap(Project.NameKey projectName, Repository projectRepo) {
return multiSiteRepoFactory.create(projectName.get(), projectRepo);
}
}
diff --git a/src/main/java/com/googlesource/gerrit/plugins/multisite/validation/ValidationModule.java b/src/main/java/com/googlesource/gerrit/plugins/multisite/validation/ValidationModule.java
index 7d8df66..1dfc3f9 100644
--- a/src/main/java/com/googlesource/gerrit/plugins/multisite/validation/ValidationModule.java
+++ b/src/main/java/com/googlesource/gerrit/plugins/multisite/validation/ValidationModule.java
@@ -15,13 +15,14 @@
package com.googlesource.gerrit.plugins.multisite.validation;
import com.google.gerrit.extensions.config.FactoryModule;
+import com.google.gerrit.extensions.events.ProjectDeletedListener;
+import com.google.gerrit.extensions.registration.DynamicSet;
import com.google.gerrit.server.git.GitRepositoryManager;
import com.google.inject.Scopes;
import com.googlesource.gerrit.plugins.multisite.Configuration;
import com.googlesource.gerrit.plugins.multisite.validation.dfsrefdb.CustomSharedRefEnforcementByProject;
import com.googlesource.gerrit.plugins.multisite.validation.dfsrefdb.DefaultSharedRefEnforcement;
import com.googlesource.gerrit.plugins.multisite.validation.dfsrefdb.SharedRefEnforcement;
-import com.googlesource.gerrit.plugins.multisite.validation.dfsrefdb.zookeeper.ZkValidationModule;
public class ValidationModule extends FactoryModule {
private final Configuration cfg;
@@ -44,14 +45,13 @@
if (!disableGitRepositoryValidation) {
bind(GitRepositoryManager.class).to(MultiSiteGitRepositoryManager.class);
}
- if (cfg.getZookeeperConfig().getEnforcementRules().isEmpty()) {
+ if (cfg.getSharedRefDb().getEnforcementRules().isEmpty()) {
bind(SharedRefEnforcement.class).to(DefaultSharedRefEnforcement.class).in(Scopes.SINGLETON);
} else {
bind(SharedRefEnforcement.class)
.to(CustomSharedRefEnforcementByProject.class)
.in(Scopes.SINGLETON);
}
-
- install(new ZkValidationModule(cfg));
+ DynamicSet.bind(binder(), ProjectDeletedListener.class).to(ProjectDeletedSharedDbCleanup.class);
}
}
diff --git a/src/main/java/com/googlesource/gerrit/plugins/multisite/validation/dfsrefdb/CustomSharedRefEnforcementByProject.java b/src/main/java/com/googlesource/gerrit/plugins/multisite/validation/dfsrefdb/CustomSharedRefEnforcementByProject.java
index 7a806a8..77a0c0b 100644
--- a/src/main/java/com/googlesource/gerrit/plugins/multisite/validation/dfsrefdb/CustomSharedRefEnforcementByProject.java
+++ b/src/main/java/com/googlesource/gerrit/plugins/multisite/validation/dfsrefdb/CustomSharedRefEnforcementByProject.java
@@ -41,7 +41,7 @@
Map<String, Map<String, EnforcePolicy>> enforcementMap = new HashMap<>();
for (Map.Entry<EnforcePolicy, String> enforcementEntry :
- config.getZookeeperConfig().getEnforcementRules().entries()) {
+ config.getSharedRefDb().getEnforcementRules().entries()) {
parseEnforcementEntry(enforcementMap, enforcementEntry);
}
diff --git a/src/main/java/com/googlesource/gerrit/plugins/multisite/validation/dfsrefdb/DefaultSharedRefEnforcement.java b/src/main/java/com/googlesource/gerrit/plugins/multisite/validation/dfsrefdb/DefaultSharedRefEnforcement.java
index 6b495fb..63bab09 100644
--- a/src/main/java/com/googlesource/gerrit/plugins/multisite/validation/dfsrefdb/DefaultSharedRefEnforcement.java
+++ b/src/main/java/com/googlesource/gerrit/plugins/multisite/validation/dfsrefdb/DefaultSharedRefEnforcement.java
@@ -16,11 +16,10 @@
import com.google.common.base.MoreObjects;
import com.google.common.collect.ImmutableMap;
-import java.util.Map;
public class DefaultSharedRefEnforcement implements SharedRefEnforcement {
- private static final Map<String, EnforcePolicy> PREDEF_ENFORCEMENTS =
+ private static final ImmutableMap<String, EnforcePolicy> PREDEF_ENFORCEMENTS =
ImmutableMap.of("All-Users:refs/meta/external-ids", EnforcePolicy.DESIRED);
@Override
diff --git a/src/main/java/com/googlesource/gerrit/plugins/multisite/validation/dfsrefdb/zookeeper/ZkValidationModule.java b/src/main/java/com/googlesource/gerrit/plugins/multisite/validation/dfsrefdb/zookeeper/ZkValidationModule.java
index 3e8f75a..1f41b1f 100644
--- a/src/main/java/com/googlesource/gerrit/plugins/multisite/validation/dfsrefdb/zookeeper/ZkValidationModule.java
+++ b/src/main/java/com/googlesource/gerrit/plugins/multisite/validation/dfsrefdb/zookeeper/ZkValidationModule.java
@@ -14,34 +14,28 @@
package com.googlesource.gerrit.plugins.multisite.validation.dfsrefdb.zookeeper;
-import com.google.gerrit.extensions.events.ProjectDeletedListener;
-import com.google.gerrit.extensions.registration.DynamicSet;
import com.google.inject.AbstractModule;
import com.googlesource.gerrit.plugins.multisite.Configuration;
-import com.googlesource.gerrit.plugins.multisite.validation.ProjectDeletedSharedDbCleanup;
+import com.googlesource.gerrit.plugins.multisite.ZookeeperConfig;
import com.googlesource.gerrit.plugins.multisite.validation.ZkConnectionConfig;
import com.googlesource.gerrit.plugins.multisite.validation.dfsrefdb.SharedRefDatabase;
import org.apache.curator.framework.CuratorFramework;
public class ZkValidationModule extends AbstractModule {
- private Configuration cfg;
+ private ZookeeperConfig cfg;
public ZkValidationModule(Configuration cfg) {
- this.cfg = cfg;
+ this.cfg = new ZookeeperConfig(cfg.getMultiSiteConfig());
}
@Override
protected void configure() {
bind(SharedRefDatabase.class).to(ZkSharedRefDatabase.class);
- bind(CuratorFramework.class).toInstance(cfg.getZookeeperConfig().buildCurator());
+ bind(CuratorFramework.class).toInstance(cfg.buildCurator());
bind(ZkConnectionConfig.class)
.toInstance(
- new ZkConnectionConfig(
- cfg.getZookeeperConfig().buildCasRetryPolicy(),
- cfg.getZookeeperConfig().getZkInterProcessLockTimeOut()));
-
- DynamicSet.bind(binder(), ProjectDeletedListener.class).to(ProjectDeletedSharedDbCleanup.class);
+ new ZkConnectionConfig(cfg.buildCasRetryPolicy(), cfg.getZkInterProcessLockTimeOut()));
}
}
diff --git a/src/main/resources/Documentation/git-replication-split-brain-detected.png b/src/main/resources/Documentation/git-replication-split-brain-detected.png
deleted file mode 100644
index dba5a81..0000000
--- a/src/main/resources/Documentation/git-replication-split-brain-detected.png
+++ /dev/null
Binary files differ
diff --git a/src/main/resources/Documentation/sources/architecture-first-iteration.xml b/src/main/resources/Documentation/sources/architecture-first-iteration.xml
new file mode 100644
index 0000000..c3aa4f9
--- /dev/null
+++ b/src/main/resources/Documentation/sources/architecture-first-iteration.xml
@@ -0,0 +1,2 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<mxfile modified="2019-06-07T17:30:04.207Z" host="www.draw.io" agent="Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.169 Safari/537.36" etag="zx6JNhjWm9CAjpC8XxAK" version="10.7.5" type="google"><diagram id="aLS-0ix4CCXe6ffUliya" name="Page-1">7V1bl6M2Ev41/dg+SOIiHvuSmeSczGY2nezsPGIb2yS08WLc051fv8JGGFTCgI0Ad6tfxshcPFTVV6X6SqUb8vD8+jn2Nqsv0dwPb7Axf70hjzcYY0op+ycdeTuMIGTZh5FlHMyzsePAU/CPnw0a2egumPvb0olJFIVJsCkPzqL12p8lpTEvjqMf5dMWUVh+6sZb+mDgaeaFcPRbME9Wh1FqGcfxn/1gueJPRkb2zdSb/b2Mo906e946WvuHb549fpvs1O3Km0c/CkPkpxvyEEdRcvj0/Prgh+l75W/scN2nim/znxz766TJBc7T9u3lP+Zff5KXnx8/mX/e2d//fYsPd3nxwl32KrIfm7zxd7P/3/npTdANuf+xChL/aePN0m9/MHVgY6vkOcy+XgRh+BCFUcyO9++C3M+97Sq//Dl68ab7O6dHsb8N/ikeR4mXFI6ZnvnFY38eFA8zbSiMwHeSvaYXP07818JQ9o4++9Gzn8Rv7JTs21uuk5kuO1x+P46KgSx+0qqgFW425mXKuMzvfZQK+5AJpoWQiBZSjZAQgUJyTImMLKJKSEgLSRBSLgAuJGxDIdlSIZmKhGQCmfhz5giywyhOVtEyWnvhT8fR+6PUjLKE/Ncg+W86PLGyo++Fbx5fsyv2B2/ZwV9+krxl/s/bJREbOj711yjaSKR/g8liscCzGRvfJnH0t1/4Zm5PbeZs02+8OLlLfSH7YhZ6220w48OfgjD/0Wv2Hg+/Gln8+Pv+2MiPj799f/RWPPrqxwEThh9ng0Pp4TbaxTP/hKxpplnsBSz95JQPzLQiVYWTeh37oZcEL+WYQaaj+0uZLLy3wgmbKFgnW6DC+f0vgB5Dgj12mKTi33hr9nmZfv41YgEPO+3TE/+WPa14AjAOFrJs0o+ztzBgNhDXo9b0YCy/TvOBPED6bZewu3C5jh++bKuEXqbtAvSSgpcyB4OAfC4Ar6ZAdA7IjRYSEGkICRw7OoaE59evKQgcdcxyBCUTlOfwP8ouOupPNbTIH+MgWvbEhjBVEM53rVOnsw+HH9AtiMF4aaS+efTqrVxrDXMiKC6y3Ilb+HOc8k0PFgcUuTPlgVOkn+++xtHrG3d107jSy5WVqMbBXYHfMihwVHkSouipRLDpzlO94zB7KPkrt2nsTHDZpgkVpmEV3gjcyzStCXHEm7EJRuEP94sPFsCH37+x4z9ib8GmSaK2MktLykpYVohsPi6ZonthsFynETOT3z5eTu02YHH3XfbFczCf7/VdhjNlG7iy6X6egxGihlLizLAgDmFlOGRDqf+mpd6p1InjTOTR4oByd4Dc7zabMPC3smjA9p5TKayn2/Sf2cpbL/2tVo5uIEHQDVOWTDdIn7pBISb4TDdmXiJTj1wdaqNIrSDnoIdZ1g+LDq4fLtCPr7tpGGxXbPCnF38/zz5DGT6wkLElOAgeVw4mZAwTpjow6FjqVMwQGDB/2m9YgCFFp6XetcdHWJwFDC52DMT++ABz6VrcZ4jbIiVh2zIaHvcpa5gHvE+FyDy18ct6EXtMprtZsot9Lf9O5F/GeIdKjN3q1dhNoAAyKvRfmgNtFp8LaG4hmFtGVp+5ZQwzeVySi2ifhJzlWVv7f7voIFOyWBD2Vxw6iDv0vVSmR1043KRxWC8KLWPi2NnW/Y31mKLDLom2h0x0cR4Q+otEAhVJmppOVX
UWrJd/7PPUtyZUlRGoBjKcsqMn0PZlmqHO9G0grw5YB/75UCLjcApCTjsci2v4weGyusKapnRFwxofgdWYez5dSFkNe0b96WJAJKov4mnK2HdfxHOZMjqdKiPXK1TWK2fsenWWFY1XGelgyign2mxIxBPHynm8trwddlzxdqAUUzFRh2F6VpeyXeyrTbccxpk2nKf1WsuGYZZVMzSDTOB4eiYvQBs6Ac9TCO+xeKR9jTa1cx9/8J2kxuX76zl/QKbdbCS79+hquEfjSE0beD7iGvlQ6wIYeLt0SLhbhSttW9xpOgK4U+v0j7NOnq+mupN0Wrx8jlVXxdB1c7OSzWYmVTTYSxCDznw5YkyplYppzLGw3dSE6agmZgSyAU9JlOaDDRh6fPOnTz4D6Wito41OmGDHFApCiIQx6DneIEMj0wjijQYxw4dYM8bnSbWoRmwlgUl/a8ZIM6JET7RbARzF5YU2Jl9ENtREm0C65PEBa2/WibBJee48OPtNYL2zJj8vYbhoubzBQpLCtV7ZT9ItraA5rlEEHU05ru6DjsuUkXaqjJrjGoUyNuW4FEXAIF2FiO1AkgudTXIhw8SQ5RIAWzHLRSD9oYPvy/21WI5oWrDhTK/RN48DNM01uGrwQDzP7kPV6DfvZA6eER9B3inPK5XTSjUuvy5fNVo6qzefiQnwcYTaE3ouoSW7oWlbihgt9jQByCk6/fOQRU9doIbTMmEeRXNa18xpuU05LR6/jWQiZsJVD5rT6i9LRBAgtQYPLq6mA8towaBxrzDeLEu5U0dI1nZJiQdu2zCMTb1OXqDIAVvXouXnRrrvwDp66qRHndEaBx3ENpSshmljG8MGmVU0RV395Ggtzh3bJNOlBuzy5doTdG5m1iWwbZhluHDWqjg7ayoh2c6a13EegesvqtHeJjZXyJ4cUzPHBAr6mHM/2tS8uB32VvlTEXEJPVptVNPS1Tl5viInBFfz6Pz3MM1Yyot1h1/mYUIKjPfhapaa+Lgrr4kB8g1AmI6E5lImS6vTJTsD+7CP2wIWuWIHWEzOXgLD0GRi19+uK8bAEDsNYj6g1MFZsKXU5yD9XY9MBzSInVoK4AoCy0GthqxXh2Kd0jnDotjc8unclKEYxVNiv2sUIyARZNvnL4lnoIhw7e06gjHMu0/mT+oBwiB59AtT6VcGIOx+D95sJeeReC9MjXHV1JDQ3VS61kkCccr2KLM6pYUGhrjxVvyqh7jKaOcMgHMmYqtOeLfO8I0KD+oB3+AKmFYU+EeGL7jcgfYakcE+7k+76XYWB9PUJzGouKgls84b1asAEZbAIB4lD5Y4sjpddTCsD/MRC9QdmQ9zbYd47zlMx6YJ8wNn7jdDsHvyPh15L/CYflIMMFG6j88bpcY/rvMSW9MOnl6w31WS1LcrkqSOOzWGJPrU45YtSS+cGXwTgmTZBTXBN+E9eHpMLtgwP1qdUfBfgllK/mhgOw1sZSoPw47MMlxTllOwYf5ot5l7iVzGgXZdtRJ2DZHgM3nRwWBCht1CtJAva7zI5ZcXOkESt18Rw8m2FvGFk2ckmjFMqPQrYzUr9o0J7aQtlJ6MV+hcbVEa6qnKGvEtNY9NMhtFvq35/IrnqA1W4UxbI+CFy66xGMkgyIH1CoG8MEqckUBBHnEvfRs1bRQkaeMrS/UK+ZJ855eaojJ1glK0Ph6fVYaNr6kMe7zZmfoy7KYNGB01Hq+to7Jx2Wqc01XYYhcKpwe3xp/ZsSEVrOi4tKH/Zeof0ozqrYOMwjosodU4d/VV5iGeT8wT9gGvtuVPqwtOOzM0mAnTocU+6BMEQzDkYvqNLdQsj9eQOGZI7LtjsxykXEMogbJKNEzt+SRbndgMEl1MpE/rDRI7XaGvVzcORXqKhMT5lCcyxC0AlBGeJx6lNuCGbZh1HJB+S4XyT8I34xgsDlCz0lvHAWOOAzrfj+Y8cCJI7porgVO8gGSlmM0iAUSGDgXgWnCNilmSXBAs7/I+FC
xSmCP/7MdxurBRZEF+/8bO+2XNoGfNBFRfhV0jzeDZW/p32w17yRmIbQocYZy++/Uy9AvE4RUImJRnv0iy+SYiEvkqFDAsyyotvhfF+DXcLQPJ+AnTNepNd/ySQ2LigpfQ1Vimsv0fKEwxFRezGJ+Yy2KImDKWWmaZFdFmRJZ1wsNfJDNkYCA0IIS04/pmTO8XG+Wrb7LK6Orgvlwcj2xJD0ze87r43qmp6L1T+Nq/7MIkuN0Ge0o/DKZfovkuNZZLLAL2j2B/stDY2P+l36y8+T4qH8SganS03s64XTUzK2VQiAzoxMZjVi3fcmtrc5o5Iqrq7buwVuY+1fc4fQ+Z5xnIqgaZfdaoaL1VDWdGkmA/jKb7jUd+9xe3j/cFkXYbTHw4Ib+VhTmUzF09v+sy4nRccY2sJPTpd4Ln6gles6UWYuoFN0u9qDNNPcFrKzPsQHPrdYLnwg4TGk4vmMCjEeIppPQ0nkqXvIwOUCHtoAG1RmiDIyrViNqlgM0RMhAaUM9aJjo8A6HxtK3MhmcgEJxVACFcPwNhNqIgHPjilVEQCMF185qDqNXS6+EgEAxUxmNYLd/y1XEQ+SJhTUI00tERkxAINp+oJiG0SFuIdCSUA0Iwbvw4SEls2Nyl32IIxDuL60ikieRpheTHG4nwH3BN9lX1lq8vEsknMToSaaKjI45EuNnoSKRrkY4lEsEfIhmCsCWwaJKkfs8xCNbZkBaWdX3ZEHx92ZDKt3yFMYjOhjQR+BVkQ7DOhigS6VhiEAKBse3CY+k6Xcl6XskCZf651QJlY4LscpdPftCmxef17FNXvyF7llGp3zH6cF53C44v1L3Le3JcoHvlbdCNZrqHSnpXuzj+PSlZQx3LuZCh++FxHM2Q1im3q6k9H9EediVAxB4P/DrnNYhw2tnAiJsb19pArtu1VoDH0dvBpoJSu6dbO4jnm24PXWoQubz5yUV+wNGOQIUjUNQT6owdudt5AnBBT67g8pb4Q7uCllZw1a6gcZsfYo7EDISuPXW+AFxg9tH6HhF3EDOo0Gj8foC9AWCridyhZpmWHJE737yh6kGVOi9e0A/08/TOFUP/R5oF4KazANJ5wuVMMzDLWo2NGugXL+Dbfik2A1i7oHdZVsHhVlJHLljwmKWHi1Rtr5ssS4rF/O3WW6YKwUmlS4iH8csEVfjK8loCWVc+S5FMkAOlAqRwoFcb//dbs5y3wnuhHD2K78VCMuJT3XtpUNDRqqdkieQqYYkEt0ZCguXK0YrX7FtUNqwCflqx/9v85shsXoIrY6QnW0gmb7ddjhuwZO9qOfioYywlTa0PknsM4vTi9SL2mKvfzZJdDOt3xhEWnLDraJ3wuP+qVcd2xJbncMtJZEk0R1kogTpost0T1y1mFahhMN8imS0RIiuHqHS79XMb3oSrPltgj4PiwETYxgL3MWOnsDBmFj2nb/xuPX/ykwa4U7Tl/fJ+uBljY8QJvakf3nuzv5d7dZVV0jS2dP4/uzhCSzdfNYQmHIejC/NHt4QI2JJXz/GbRIvF1lfSPRzRESVrGlfMmA4t5mtumWwQrknZyHfGLeJMNRxdgEB8Bd4QCHSZf5F0JHgnoJD3Ye8AFAi1URmxOwEFLOwaSvsDBHItgKDeeLkwlVMIIjlFxS32Otu6R9wdo5/4AmZY3guUdFbnzqAkX0yaCecWdRRg4NJtTSENoRJMBq2JHBmYOD2BidiWitrCyqOuwER8ECZ1REzFL1OLPpK2vO8Ffaoq9M9CH9LVfEZg2wTtUwk3g9aflkuw0Um4KUxmCLKLkxlj4tZVaQwzleFepJ4n7jxqqpjIsMM4ipKiEsXeZvUlmvvpGf8H</diagram></mxfile>
\ No newline at end of file
diff --git a/src/main/resources/Documentation/git-replication-healthy.txt b/src/main/resources/Documentation/sources/git-replication-healthy.txt
similarity index 100%
rename from src/main/resources/Documentation/git-replication-healthy.txt
rename to src/main/resources/Documentation/sources/git-replication-healthy.txt
diff --git a/src/main/resources/Documentation/git-replication-split-brain-detected.txt b/src/main/resources/Documentation/sources/git-replication-split-brain-detected.txt
similarity index 63%
rename from src/main/resources/Documentation/git-replication-split-brain-detected.txt
rename to src/main/resources/Documentation/sources/git-replication-split-brain-detected.txt
index b249b9f..79b0c69 100644
--- a/src/main/resources/Documentation/git-replication-split-brain-detected.txt
+++ b/src/main/resources/Documentation/sources/git-replication-split-brain-detected.txt
@@ -2,16 +2,16 @@
participant Client1
participant Instance1
-participant Ref-DB Coordinator
+participant Global Ref-DB
participant Instance2
participant Client2
state over Client1, Client2, Instance1, Instance2: W0
state over Client1 : W0 -> W1
Client1 -> +Instance1: Push W1
-Instance1 -> +Ref-DB Coordinator: CAS if state == W0 set state W0 -> W1
-state over Ref-DB Coordinator : W0 -> W1
-Ref-DB Coordinator -> -Instance1 : ACK
+Instance1 -> +Global Ref-DB: CAS if state == W0 set state W0 -> W1
+state over Global Ref-DB : W0 -> W1
+Global Ref-DB -> -Instance1 : ACK
state over Instance1 : W0 -> W1
Instance1 -> -Client1: Ack W1
@@ -20,8 +20,8 @@
state over Client2 : W0 -> W2
Client2 -> +Instance2: Push W2
-Instance2 -> +Ref-DB Coordinator: CAS if state == W0 set state W0 -> W2
-Ref-DB Coordinator -> -Instance2 : NACK
+Instance2 -> +Global Ref-DB: CAS if state == W0 set state W0 -> W2
+Global Ref-DB -> -Instance2 : NACK
Instance2 -> -Client2 : Push failed -- RO Mode
@@ -36,9 +36,9 @@
Client2 -> Instance2: Pull W1
state over Client2 : W0 -> W1 -> W2
Client2 -> Instance2: Push W2
-Instance2 -> +Ref-DB Coordinator: CAS if state == W1 set state W1 -> W2
-state over Ref-DB Coordinator: W0 -> W1 -> W2
-Ref-DB Coordinator -> -Instance2 : ACK
+Instance2 -> +Global Ref-DB: CAS if state == W1 set state W1 -> W2
+state over Global Ref-DB: W0 -> W1 -> W2
+Global Ref-DB -> -Instance2 : ACK
state over Instance2: W0 -> W1 -> W2
Instance2 -> -Client2 : ACK
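The compare-and-swap exchange in the diagram above can be sketched with an in-memory stand-in for the Global Ref-DB (a hypothetical illustration, not the plugin's ZooKeeper-backed implementation): an update is accepted (ACK) only when the caller's expected state matches the currently recorded state, so the stale push from Instance2 is rejected until it has pulled W1.

```java
import java.util.concurrent.atomic.AtomicReference;

public class GlobalRefDbSketch {
  // The last ref state the coordinator has accepted.
  private final AtomicReference<String> state = new AtomicReference<>("W0");

  /** Returns true (ACK) if the CAS succeeds, false (NACK) otherwise. */
  public boolean compareAndPut(String expected, String next) {
    return state.compareAndSet(expected, next);
  }

  public String current() {
    return state.get();
  }

  public static void main(String[] args) {
    GlobalRefDbSketch refDb = new GlobalRefDbSketch();
    // Instance1 pushes W1 on top of W0: accepted.
    System.out.println(refDb.compareAndPut("W0", "W1")); // true
    // Instance2 pushes W2 still based on W0: rejected, split brain avoided.
    System.out.println(refDb.compareAndPut("W0", "W2")); // false
    // After Instance2 pulls W1, its push of W2 is accepted.
    System.out.println(refDb.compareAndPut("W1", "W2")); // true
  }
}
```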
diff --git a/src/main/resources/Documentation/git-replication-split-brain.txt b/src/main/resources/Documentation/sources/git-replication-split-brain.txt
similarity index 100%
rename from src/main/resources/Documentation/git-replication-split-brain.txt
rename to src/main/resources/Documentation/sources/git-replication-split-brain.txt
diff --git a/src/test/java/com/googlesource/gerrit/plugins/multisite/broker/kafka/BrokerPublisherTest.java b/src/test/java/com/googlesource/gerrit/plugins/multisite/broker/kafka/BrokerPublisherTest.java
index fe73065..8131a3f 100644
--- a/src/test/java/com/googlesource/gerrit/plugins/multisite/broker/kafka/BrokerPublisherTest.java
+++ b/src/test/java/com/googlesource/gerrit/plugins/multisite/broker/kafka/BrokerPublisherTest.java
@@ -140,7 +140,7 @@
assertThat(publisher.eventToJson(event).equals(expectedCommentEventJsonObject)).isTrue();
}
- private class TestBrokerSession implements BrokerSession {
+ private static class TestBrokerSession implements BrokerSession {
@Override
public boolean isOpen() {
diff --git a/src/test/java/com/googlesource/gerrit/plugins/multisite/kafka/consumer/EventConsumerIT.java b/src/test/java/com/googlesource/gerrit/plugins/multisite/kafka/consumer/EventConsumerIT.java
index 3e8b3ca..2cc7c16 100644
--- a/src/test/java/com/googlesource/gerrit/plugins/multisite/kafka/consumer/EventConsumerIT.java
+++ b/src/test/java/com/googlesource/gerrit/plugins/multisite/kafka/consumer/EventConsumerIT.java
@@ -75,7 +75,7 @@
public static class KafkaTestContainerModule extends LifecycleModule {
- public class KafkaStopAtShutdown implements LifecycleListener {
+ public static class KafkaStopAtShutdown implements LifecycleListener {
private final KafkaContainer kafka;
public KafkaStopAtShutdown(KafkaContainer kafka) {
diff --git a/src/test/java/com/googlesource/gerrit/plugins/multisite/kafka/consumer/KafkaEventDeserializerTest.java b/src/test/java/com/googlesource/gerrit/plugins/multisite/kafka/consumer/KafkaEventDeserializerTest.java
index ce958cf..2da0c64 100644
--- a/src/test/java/com/googlesource/gerrit/plugins/multisite/kafka/consumer/KafkaEventDeserializerTest.java
+++ b/src/test/java/com/googlesource/gerrit/plugins/multisite/kafka/consumer/KafkaEventDeserializerTest.java
@@ -15,6 +15,7 @@
package com.googlesource.gerrit.plugins.multisite.kafka.consumer;
import static com.google.common.truth.Truth.assertThat;
+import static java.nio.charset.StandardCharsets.UTF_8;
import com.google.gson.Gson;
import com.googlesource.gerrit.plugins.multisite.broker.GsonProvider;
@@ -44,7 +45,8 @@
+ "\"body\": {}"
+ "}",
eventId, eventType, sourceInstanceId, eventCreatedOn);
- final SourceAwareEventWrapper event = deserializer.deserialize("ignored", eventJson.getBytes());
+ final SourceAwareEventWrapper event =
+ deserializer.deserialize("ignored", eventJson.getBytes(UTF_8));
assertThat(event.getBody().entrySet()).isEmpty();
assertThat(event.getHeader().getEventId()).isEqualTo(eventId);
@@ -55,11 +57,11 @@
@Test(expected = RuntimeException.class)
public void kafkaEventDeserializerShouldFailForInvalidJson() {
- deserializer.deserialize("ignored", "this is not a JSON string".getBytes());
+ deserializer.deserialize("ignored", "this is not a JSON string".getBytes(UTF_8));
}
@Test(expected = RuntimeException.class)
public void kafkaEventDeserializerShouldFailForInvalidObjectButValidJSON() {
- deserializer.deserialize("ignored", "{}".getBytes());
+ deserializer.deserialize("ignored", "{}".getBytes(UTF_8));
}
}
diff --git a/src/test/java/com/googlesource/gerrit/plugins/multisite/validation/dfsrefdb/zookeeper/CustomSharedRefEnforcementByProjectTest.java b/src/test/java/com/googlesource/gerrit/plugins/multisite/validation/dfsrefdb/zookeeper/CustomSharedRefEnforcementByProjectTest.java
index e4d7861..d63436c 100644
--- a/src/test/java/com/googlesource/gerrit/plugins/multisite/validation/dfsrefdb/zookeeper/CustomSharedRefEnforcementByProjectTest.java
+++ b/src/test/java/com/googlesource/gerrit/plugins/multisite/validation/dfsrefdb/zookeeper/CustomSharedRefEnforcementByProjectTest.java
@@ -18,7 +18,7 @@
import static com.googlesource.gerrit.plugins.multisite.validation.dfsrefdb.SharedRefDatabase.newRef;
import com.googlesource.gerrit.plugins.multisite.Configuration;
-import com.googlesource.gerrit.plugins.multisite.Configuration.ZookeeperConfig;
+import com.googlesource.gerrit.plugins.multisite.Configuration.SharedRefDatabase;
import com.googlesource.gerrit.plugins.multisite.validation.dfsrefdb.CustomSharedRefEnforcementByProject;
import com.googlesource.gerrit.plugins.multisite.validation.dfsrefdb.SharedRefEnforcement;
import com.googlesource.gerrit.plugins.multisite.validation.dfsrefdb.SharedRefEnforcement.EnforcePolicy;
@@ -34,22 +34,22 @@
@Before
public void setUp() {
- Config multiSiteConfig = new Config();
- multiSiteConfig.setStringList(
- ZookeeperConfig.SECTION,
- ZookeeperConfig.SUBSECTION_ENFORCEMENT_RULES,
+ Config sharedRefDbConfig = new Config();
+ sharedRefDbConfig.setStringList(
+ SharedRefDatabase.SECTION,
+ SharedRefDatabase.SUBSECTION_ENFORCEMENT_RULES,
EnforcePolicy.DESIRED.name(),
Arrays.asList(
"ProjectOne",
"ProjectTwo:refs/heads/master/test",
"ProjectTwo:refs/heads/master/test2"));
- multiSiteConfig.setString(
- ZookeeperConfig.SECTION,
- ZookeeperConfig.SUBSECTION_ENFORCEMENT_RULES,
+ sharedRefDbConfig.setString(
+ SharedRefDatabase.SECTION,
+ SharedRefDatabase.SUBSECTION_ENFORCEMENT_RULES,
EnforcePolicy.IGNORED.name(),
":refs/heads/master/test");
- refEnforcement = newCustomRefEnforcement(multiSiteConfig);
+ refEnforcement = newCustomRefEnforcement(sharedRefDbConfig);
}
@Test
@@ -138,18 +138,18 @@
private SharedRefEnforcement newCustomRefEnforcementWithValue(
EnforcePolicy policy, String... projectAndRefs) {
- Config multiSiteConfig = new Config();
- multiSiteConfig.setStringList(
- ZookeeperConfig.SECTION,
- ZookeeperConfig.SUBSECTION_ENFORCEMENT_RULES,
+ Config sharedRefDbConfiguration = new Config();
+ sharedRefDbConfiguration.setStringList(
+ SharedRefDatabase.SECTION,
+ SharedRefDatabase.SUBSECTION_ENFORCEMENT_RULES,
policy.name(),
Arrays.asList(projectAndRefs));
- return newCustomRefEnforcement(multiSiteConfig);
+ return newCustomRefEnforcement(sharedRefDbConfiguration);
}

- private SharedRefEnforcement newCustomRefEnforcement(Config multiSiteConfig) {
+ private SharedRefEnforcement newCustomRefEnforcement(Config sharedRefDbConfig) {
return new CustomSharedRefEnforcementByProject(
- new Configuration(multiSiteConfig, new Config()));
+ new Configuration(sharedRefDbConfig, new Config()));
}

@Override
diff --git a/src/test/java/com/googlesource/gerrit/plugins/multisite/validation/dfsrefdb/zookeeper/RefFixture.java b/src/test/java/com/googlesource/gerrit/plugins/multisite/validation/dfsrefdb/zookeeper/RefFixture.java
index 126efa1..72ea236 100644
--- a/src/test/java/com/googlesource/gerrit/plugins/multisite/validation/dfsrefdb/zookeeper/RefFixture.java
+++ b/src/test/java/com/googlesource/gerrit/plugins/multisite/validation/dfsrefdb/zookeeper/RefFixture.java
@@ -14,7 +14,7 @@
package com.googlesource.gerrit.plugins.multisite.validation.dfsrefdb.zookeeper;

-import com.google.gerrit.reviewdb.client.Project.NameKey;
+import com.google.gerrit.reviewdb.client.Project;
import com.google.gerrit.reviewdb.client.RefNames;
import org.eclipse.jgit.lib.ObjectId;
import org.junit.Ignore;
@@ -27,7 +27,7 @@
static final String ALLOWED_NAME_CHARS =
ALLOWED_CHARS + ALLOWED_CHARS.toUpperCase() + ALLOWED_DIGITS;
static final String A_TEST_PROJECT_NAME = "A_TEST_PROJECT_NAME";
- static final NameKey A_TEST_PROJECT_NAME_KEY = new NameKey(A_TEST_PROJECT_NAME);
+ static final Project.NameKey A_TEST_PROJECT_NAME_KEY = new Project.NameKey(A_TEST_PROJECT_NAME);
static final ObjectId AN_OBJECT_ID_1 = new ObjectId(1, 2, 3, 4, 5);
static final ObjectId AN_OBJECT_ID_2 = new ObjectId(1, 2, 3, 4, 6);
static final ObjectId AN_OBJECT_ID_3 = new ObjectId(1, 2, 3, 4, 7);
diff --git a/src/test/java/com/googlesource/gerrit/plugins/multisite/validation/dfsrefdb/zookeeper/ZookeeperTestContainerSupport.java b/src/test/java/com/googlesource/gerrit/plugins/multisite/validation/dfsrefdb/zookeeper/ZookeeperTestContainerSupport.java
index 70f6428..3237e3a 100644
--- a/src/test/java/com/googlesource/gerrit/plugins/multisite/validation/dfsrefdb/zookeeper/ZookeeperTestContainerSupport.java
+++ b/src/test/java/com/googlesource/gerrit/plugins/multisite/validation/dfsrefdb/zookeeper/ZookeeperTestContainerSupport.java
@@ -17,7 +17,7 @@
import static com.googlesource.gerrit.plugins.multisite.validation.dfsrefdb.zookeeper.ZkSharedRefDatabase.pathFor;
import static com.googlesource.gerrit.plugins.multisite.validation.dfsrefdb.zookeeper.ZkSharedRefDatabase.writeObjectId;

-import com.googlesource.gerrit.plugins.multisite.Configuration;
+import com.googlesource.gerrit.plugins.multisite.ZookeeperConfig;
import org.apache.curator.framework.CuratorFramework;
import org.eclipse.jgit.lib.Config;
import org.eclipse.jgit.lib.ObjectId;
@@ -38,7 +38,7 @@
}

private ZookeeperContainer container;
- private Configuration configuration;
+ private ZookeeperConfig configuration;
private CuratorFramework curator;

public CuratorFramework getCurator() {
@@ -49,32 +49,23 @@
return container;
}

- public Configuration getConfig() {
- return configuration;
- }
-
@SuppressWarnings("resource")
public ZookeeperTestContainerSupport(boolean migrationMode) {
container = new ZookeeperContainer().withExposedPorts(2181).waitingFor(Wait.forListeningPort());
container.start();
Integer zkHostPort = container.getMappedPort(2181);
- Config splitBrainconfig = new Config();
+ Config sharedRefDbConfig = new Config();
String connectString = container.getContainerIpAddress() + ":" + zkHostPort;
- splitBrainconfig.setBoolean("ref-database", null, "enabled", true);
- splitBrainconfig.setString("ref-database", "zookeeper", "connectString", connectString);
- splitBrainconfig.setString(
+ sharedRefDbConfig.setBoolean("ref-database", null, "enabled", true);
+ sharedRefDbConfig.setString("ref-database", "zookeeper", "connectString", connectString);
+ sharedRefDbConfig.setString(
"ref-database",
- Configuration.ZookeeperConfig.SUBSECTION,
- Configuration.ZookeeperConfig.KEY_CONNECT_STRING,
+ ZookeeperConfig.SUBSECTION,
+ ZookeeperConfig.KEY_CONNECT_STRING,
connectString);
- splitBrainconfig.setBoolean(
- "ref-database",
- Configuration.ZookeeperConfig.SUBSECTION,
- Configuration.ZookeeperConfig.KEY_MIGRATE,
- migrationMode);
- configuration = new Configuration(splitBrainconfig, new Config());
- this.curator = configuration.getZookeeperConfig().buildCurator();
+ configuration = new ZookeeperConfig(sharedRefDbConfig);
+ this.curator = configuration.buildCurator();
}

public void cleanup() {