Gerrit replica on Kubernetes

Gerrit is a web-based code review tool, which acts as a Git server. In large setups Gerrit servers can see a sizable amount of traffic from git operations performed by developers and build servers. The majority of requests are read-only (e.g. git fetch operations). To take some load off the Gerrit server, Gerrit replicas can be deployed to serve these read-only requests.

This helm chart provides a Gerrit replica setup that can be deployed on Kubernetes. The Gerrit replica is capable of receiving replicated git repositories from a primary Gerrit and can then serve authenticated read-only requests.

Gerrit versions before 3.0 are no longer supported, since support for ReviewDB was removed.

Prerequisites

  • Helm (>= version 3.0)

    (Check out this guide on how to install and use helm.)

  • Access to a provisioner for persistent volumes with Read-Write-Many (RWM) capability.

    A list of applicable volume types can be found here. This project was developed using the NFS-server-provisioner helm chart, an NFS provisioner deployed in the Kubernetes cluster itself. Refer to this guide for how to deploy it in the context of this project.

  • A domain name that is configured to point to the IP address of the node running the Ingress controller on the Kubernetes cluster (as described here).

  • (Optional; required if SSL is configured) A Java keystore to be used by Gerrit.

Installing the Chart

ATTENTION: The value for ingress.host is required for rendering the chart's templates and has no sensible default. Thus, a custom values.yaml file setting this value is required!
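
For example, a minimal custom values.yaml could look like the following sketch (gerrit-replica.example.com is a placeholder for your own domain):

ingress:
  enabled: true
  host: gerrit-replica.example.com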

To install the chart with the release name gerrit-replica, execute:

cd $(git rev-parse --show-toplevel)/helm-charts
helm install \
  gerrit-replica \
  ./gerrit-replica \
  -f <path-to-custom-values>.yaml

The command deploys the Gerrit replica on the current Kubernetes cluster. The configuration section lists the parameters that can be configured during installation.

The Gerrit replica requires the replicated All-Projects.git and All-Users.git repositories to be present in the /var/gerrit/git directory. The gerrit-init init container will wait for this to be the case. One way to achieve this is to access the Gerrit replica pod and clone the repositories from the primary Gerrit (make sure that you have the access rights to do so):

kubectl exec -it <gerrit-replica-pod> -c gerrit-init -- ash
gerrit@<gerrit-replica-pod>:/var/tools$ cd /var/gerrit/git
gerrit@<gerrit-replica-pod>:/var/gerrit/git$ git clone "http://gerrit.com/All-Projects" --mirror
Cloning into bare repository 'All-Projects.git'...
gerrit@<gerrit-replica-pod>:/var/gerrit/git$ git clone "http://gerrit.com/All-Users" --mirror
Cloning into bare repository 'All-Users.git'...

Configuration

The following sections list the configurable values in values.yaml. To configure a Gerrit replica setup, make a copy of the values.yaml file and change the parameters as needed. The configuration can be applied by installing the chart as described above.

In addition, single options can be set without creating a custom values.yaml:

cd $(git rev-parse --show-toplevel)/helm-charts
helm install \
  gerrit-replica \
  ./gerrit-replica \
  --set=gitRepositoryStorage.size=100Gi,gitBackend.replicas=2

Container images

| Parameter | Description | Default |
|-----------|-------------|---------|
| images.busybox.registry | The registry to pull the busybox container images from | docker.io |
| images.busybox.tag | The busybox image tag to use | latest |
| images.registry.name | The image registry to pull the container images from | `` |
| images.registry.ImagePullSecret.name | Name of the ImagePullSecret | image-pull-secret (if empty, no image pull secret will be deployed) |
| images.registry.ImagePullSecret.create | Whether to create an ImagePullSecret | false |
| images.registry.ImagePullSecret.username | The image registry username | nil |
| images.registry.ImagePullSecret.password | The image registry password | nil |
| images.version | The image version (image tag) to use | latest |
| images.imagePullPolicy | Image pull policy | Always |
| images.additionalImagePullSecrets | Additional image pull secrets that pods should use | [] |

Labels

| Parameter | Description | Default |
|-----------|-------------|---------|
| additionalLabels | Additional labels for resources managed by this Helm chart | {} |

Storage classes

For information on how a StorageClass is configured in Kubernetes, read the official documentation.

| Parameter | Description | Default |
|-----------|-------------|---------|
| storageClasses.default.name | The name of the default StorageClass (RWO) | default |
| storageClasses.default.create | Whether to create the StorageClass | false |
| storageClasses.default.provisioner | Provisioner of the StorageClass | kubernetes.io/aws-ebs |
| storageClasses.default.reclaimPolicy | Whether to Retain or Delete volumes when they become unbound | Delete |
| storageClasses.default.parameters | Parameters for the provisioner | parameters.type: gp2, parameters.fsType: ext4 |
| storageClasses.default.mountOptions | The mount options of the default StorageClass | [] |
| storageClasses.default.allowVolumeExpansion | Whether to allow volume expansion | false |
| storageClasses.shared.name | The name of the shared StorageClass (RWM) | shared-storage |
| storageClasses.shared.create | Whether to create the StorageClass | false |
| storageClasses.shared.provisioner | Provisioner of the StorageClass | nfs |
| storageClasses.shared.reclaimPolicy | Whether to Retain or Delete volumes when they become unbound | Delete |
| storageClasses.shared.parameters | Parameters for the provisioner | parameters.mountOptions: vers=4.1 |
| storageClasses.shared.mountOptions | The mount options of the shared StorageClass | [] |
| storageClasses.shared.allowVolumeExpansion | Whether to allow volume expansion | false |

CA certificate

Some applications may require TLS verification. If the default CAs built into the containers are not enough, a custom CA certificate can be given to the deployment. Note that Gerrit will require its CA in a JKS keystore, which is described below.

| Parameter | Description | Default |
|-----------|-------------|---------|
| caCert | CA certificate for TLS verification (if not set, the default will be used) | None |
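
A custom CA certificate could, for example, be passed in a custom values.yaml like this (certificate content abbreviated):

caCert: |
  -----BEGIN CERTIFICATE-----
  ...
  -----END CERTIFICATE-----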

Workaround for NFS

Kubernetes is not always able to adapt the ownership of the files within NFS volumes. Thus, a workaround exists that adds init containers to adapt file ownership. Note that only the ownership of the root directory of the volume will be changed; all data contained within is expected to already be owned by the user used by Gerrit. In addition, the ID-domain will be configured to ensure correct ID-mapping.

| Parameter | Description | Default |
|-----------|-------------|---------|
| nfsWorkaround.enabled | Whether the volume used is an NFS volume | false |
| nfsWorkaround.chownOnStartup | Whether to chown the volume on pod startup | false |
| nfsWorkaround.idDomain | The ID-domain that should be used to map user-/group-IDs for the NFS mount | localdomain.com |
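
A sketch of enabling the workaround in a custom values.yaml; the ID-domain typically has to match the domain configured on the NFS server:

nfsWorkaround:
  enabled: true
  chownOnStartup: true
  idDomain: localdomain.com  # domain used for NFSv4 ID-mapping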

Network policies

| Parameter | Description | Default |
|-----------|-------------|---------|
| networkPolicies.enabled | Whether to enable preconfigured NetworkPolicies | false |
| networkPolicies.dnsPorts | List of ports used by the DNS service (e.g. KubeDNS) | [53, 8053] |

The NetworkPolicies provided here are quite strict and do not account for all possible scenarios. Thus, custom NetworkPolicies have to be added, e.g. for connecting to a database. On the other hand, some defaults may not be restrictive enough. By default, the ingress traffic of the git-backend pod is not restricted, so every source (with the right credentials) could push to the git-backend. To add an additional layer of security, the ingress rule can be defined in a more fine-grained way. The chart provides the possibility to define custom rules for ingress traffic of the git-backend pod under gitBackend.networkPolicy.ingress. Depending on the scenario, there are different ways to restrict the incoming connections.

If the replicator (e.g. Gerrit) is running in a pod on the same cluster, a podSelector (and namespaceSelector, if the pod is running in a different namespace) can be used to whitelist the traffic:

gitBackend:
  networkPolicy:
    ingress:
    - from:
      - podSelector:
          matchLabels:
            app: gerrit

If the replicator is outside the cluster, the IP of the replicator can also be whitelisted, e.g.:

gitBackend:
  networkPolicy:
    ingress:
    - from:
      - ipBlock:
          cidr: xxx.xxx.0.0/16

The same principle also applies to other use cases, e.g. connecting to a database. For more information about the NetworkPolicy resource refer to the Kubernetes documentation.

Storage for Git repositories

| Parameter | Description | Default |
|-----------|-------------|---------|
| gitRepositoryStorage.externalPVC.use | Whether to use a PVC deployed outside the chart | false |
| gitRepositoryStorage.externalPVC.name | Name of the external PVC | git-repositories-pvc |
| gitRepositoryStorage.size | Size of the volume storing the Git repositories | 5Gi |

If the git repositories should be persisted even if the chart is deleted, and in a way that allows the volume containing them to be mounted by a reinstalled chart, the PVC claiming the volume has to be created independently of the chart. To use such an external PVC, set gitRepositoryStorage.externalPVC.use to true and give the name of the PVC under gitRepositoryStorage.externalPVC.name.
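
A minimal sketch of the corresponding values (the PVC name is an example and has to match the independently created PVC):

gitRepositoryStorage:
  externalPVC:
    use: true
    name: git-repositories-pvc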

Storage for Logs

In addition to collecting logs with a log collection tool like Promtail, the logs can also be stored in a persistent volume. This has to be a read-write-many volume, since it is used by multiple pods.

| Parameter | Description | Default |
|-----------|-------------|---------|
| logStorage.enabled | Whether to enable persistence of logs | false |
| logStorage.externalPVC.use | Whether to use a PVC deployed outside the chart | false |
| logStorage.externalPVC.name | Name of the external PVC | gerrit-logs-pvc |
| logStorage.size | Size of the volume | 5Gi |
| logStorage.cleanup.enabled | Whether to regularly delete old logs | false |
| logStorage.cleanup.schedule | Cron schedule defining when to run the cleanup job | 0 0 * * * |
| logStorage.cleanup.retentionDays | Number of days to retain the logs | 14 |
| logStorage.cleanup.resources | Resources the container is allowed to use | requests.cpu: 100m, requests.memory: 256Mi, limits.cpu: 100m, limits.memory: 256Mi |
| logStorage.cleanup.additionalPodLabels | Additional labels for pods | {} |

Each pod will create a separate folder for its logs, making it possible to trace logs back to the respective pod.
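
A sketch of a custom values.yaml enabling persisted logs with a daily cleanup job (size and retention are example values):

logStorage:
  enabled: true
  size: 10Gi
  cleanup:
    enabled: true
    schedule: "0 0 * * *"   # daily at midnight
    retentionDays: 14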

Istio

Istio can be used as an alternative to Kubernetes Ingresses to manage the traffic into and inside the cluster. This requires Istio to be installed beforehand. Some guidance on how to set up Istio can be found here. The helm chart expects istio-injection to be enabled in the namespace in which it will be installed.

If Istio is used, all configuration for Ingresses in the chart will be ignored.

| Parameter | Description | Default |
|-----------|-------------|---------|
| istio.enabled | Whether Istio should be used (requires Istio to be installed) | false |
| istio.host | Hostname (the CNAME must point to the Istio ingress gateway LoadBalancer service) | nil |
| istio.tls.enabled | Whether to enable TLS | false |
| istio.tls.secret.create | Whether to create the TLS certificate secret | true |
| istio.tls.secret.name | Name of an external secret containing the TLS certificates | nil |
| istio.tls.cert | TLS certificate | -----BEGIN CERTIFICATE----- |
| istio.tls.key | TLS key | -----BEGIN RSA PRIVATE KEY----- |
| istio.ssh.enabled | Whether to enable SSH | false |
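
A sketch of using Istio with TLS in a custom values.yaml (host name and certificate material are placeholders):

istio:
  enabled: true
  host: gerrit-replica.example.com
  tls:
    enabled: true
    secret:
      create: true
    cert: |
      -----BEGIN CERTIFICATE-----
      ...
    key: |
      -----BEGIN RSA PRIVATE KEY-----
      ...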

Ingress

As an alternative to Istio, the Nginx Ingress controller can be used to manage ingress traffic.

| Parameter | Description | Default |
|-----------|-------------|---------|
| ingress.enabled | Whether to deploy an Ingress | false |
| ingress.host | Host name to use for the Ingress (required for the Ingress) | nil |
| ingress.maxBodySize | Maximum request body size allowed (set to 0 for an unlimited request body size) | 50m |
| ingress.additionalAnnotations | Additional annotations for the Ingress | nil |
| ingress.tls.enabled | Whether to enable TLS termination in the Ingress | false |
| ingress.tls.secret.create | Whether to create a TLS secret | true |
| ingress.tls.secret.name | Name of an external secret that will be used as a TLS secret | nil |
| ingress.tls.cert | Public SSL server certificate | -----BEGIN CERTIFICATE----- |
| ingress.tls.key | Private SSL server key | -----BEGIN RSA PRIVATE KEY----- |

For graceful shutdown to work with an ingress, the ingress controller has to be configured to gracefully close the connections as well.
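
A sketch of an Ingress configuration with TLS termination (host name and certificate material are placeholders):

ingress:
  enabled: true
  host: gerrit-replica.example.com
  tls:
    enabled: true
    secret:
      create: true
    cert: |
      -----BEGIN CERTIFICATE-----
      ...
    key: |
      -----BEGIN RSA PRIVATE KEY-----
      ...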

Promtail Sidecar

To collect Gerrit logs, a Promtail sidecar can be deployed into the Gerrit replica pods. This can, for example, be used together with the gerrit-monitoring project.

| Parameter | Description | Default |
|-----------|-------------|---------|
| promtailSidecar.enabled | Whether to install the Promtail sidecar container | false |
| promtailSidecar.image | The promtail container image to use | grafana/promtail |
| promtailSidecar.version | The promtail container image version | 1.3.0 |
| promtailSidecar.resources | Configure the amount of resources the container requests/is allowed | requests.cpu: 100m, requests.memory: 128Mi, limits.cpu: 200m, limits.memory: 128Mi |
| promtailSidecar.tls.skipverify | Whether to skip TLS verification | true |
| promtailSidecar.tls.caCert | CA certificate for TLS verification | -----BEGIN CERTIFICATE----- |
| promtailSidecar.loki.url | URL to reach Loki | loki.example.com |
| promtailSidecar.loki.user | Loki user | admin |
| promtailSidecar.loki.password | Loki password | secret |
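
A sketch enabling the sidecar in a custom values.yaml (the Loki URL, credentials, and CA certificate are placeholders):

promtailSidecar:
  enabled: true
  tls:
    skipverify: false
    caCert: |
      -----BEGIN CERTIFICATE-----
      ...
  loki:
    url: loki.example.com
    user: admin
    password: secret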

Apache-Git-HTTP-Backend (Git-Backend)

| Parameter | Description | Default |
|-----------|-------------|---------|
| gitBackend.image | Image name of the Apache-git-http-backend container image | k8sgerrit/apache-git-http-backend |
| gitBackend.additionalPodLabels | Additional labels for Pods | {} |
| gitBackend.tolerations | Taints and tolerations work together to ensure that pods are not scheduled onto inappropriate nodes (see Taints and Tolerations) | [] |
| gitBackend.topologySpreadConstraints | Control how Pods are spread across your cluster among failure-domains (see Pod Topology Spread Constraints) | {} |
| gitBackend.nodeSelector | Assigns a Pod to the specified Nodes (see Assigning Pods to Nodes) | {} |
| gitBackend.affinity | Assigns a Pod to the specified Nodes | podAntiAffinity preferring (weight: 100) that pods with the label app: git-backend are spread across topology.kubernetes.io/zone |
| gitBackend.replicas | Number of pod replicas to deploy | 1 |
| gitBackend.maxSurge | Max. percentage or number of pods allowed to be scheduled above the desired number | 25% |
| gitBackend.maxUnavailable | Max. percentage or number of pods allowed to be unavailable at a time | 100% |
| gitBackend.networkPolicy.ingress | Custom ingress network policy for git-backend pods | [{}] (allow all) |
| gitBackend.networkPolicy.egress | Custom egress network policy for git-backend pods | nil |
| gitBackend.resources | Configure the amount of resources the pod requests/is allowed | requests.cpu: 100m, requests.memory: 256Mi, limits.cpu: 100m, limits.memory: 256Mi |
| gitBackend.livenessProbe | Configuration of the liveness probe timings | {initialDelaySeconds: 10, periodSeconds: 5} |
| gitBackend.readinessProbe | Configuration of the readiness probe timings | {initialDelaySeconds: 5, periodSeconds: 1} |
| gitBackend.credentials.htpasswd | htpasswd file containing username/password credentials for accessing git | git:$apr1$O/LbLKC7$Q60GWE7OcqSEMSfe/K8xU. (user: git, password: secret) |
| gitBackend.service.additionalAnnotations | Additional annotations for the Service | {} |
| gitBackend.service.loadBalancerSourceRanges | The list of allowed IPs for the Service | [] |
| gitBackend.service.type | Which kind of Service to deploy | LoadBalancer |
| gitBackend.service.externalTrafficPolicy | Specify how external traffic is handled | Cluster |
| gitBackend.service.http.enabled | Whether to serve HTTP requests (needed for the Ingress) | true |
| gitBackend.service.http.port | Port over which to expose HTTP | 80 |
| gitBackend.service.https.enabled | Whether to serve HTTPS requests | false |
| gitBackend.service.https.port | Port over which to expose HTTPS | 443 |

At least one endpoint (HTTP and/or HTTPS) has to be enabled in the service!

Project creation, project deletion, and HEAD updates can also be replicated. To enable this feature, configure the replication plugin to use an adminUrl in the format gerrit+https://<apache-git-http-backend host>.
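
As a sketch, the corresponding remote section in the replication plugin's replication.config on the primary Gerrit could look like the following (the remote name and URL path are illustrative; refer to the replication plugin's documentation for the full set of options):

[remote "replica"]
  url = https://<apache-git-http-backend host>/git/${name}.git
  adminUrl = gerrit+https://<apache-git-http-backend host>
  createMissingRepositories = true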

Git garbage collection

| Parameter | Description | Default |
|-----------|-------------|---------|
| gitGC.image | Image name of the Git-GC container image | k8sgerrit/git-gc |
| gitGC.schedule | Cron-formatted schedule with which to run Git garbage collection | 0 6,18 * * * |
| gitGC.resources | Configure the amount of resources the pod requests/is allowed | requests.cpu: 100m, requests.memory: 256Mi, limits.cpu: 100m, limits.memory: 256Mi |
| gitGC.tolerations | Taints and tolerations work together to ensure that pods are not scheduled onto inappropriate nodes (see Taints and Tolerations) | [] |
| gitGC.nodeSelector | Assigns a Pod to the specified Nodes (see Assigning Pods to Nodes) | {} |
| gitGC.affinity | Assigns a Pod to the specified Nodes (see Assign Pods to Nodes using Node Affinity) | {} |
| gitGC.additionalPodLabels | Additional labels for Pods | {} |

Gerrit replica

Due to the way the Jetty servlet used by Gerrit works, the Gerrit replica component of the gerrit-replica chart requires the URL to be known when the chart is installed. The suggested way to do that is to use the provided Ingress resource. This requires that a URL is available and that the DNS is configured to point the URL to the IP of the node the Ingress controller is running on!

Setting the canonical web URL in the gerrit.config to the host used for the Ingress is mandatory if access to the Gerrit replica is required!
| Parameter | Description | Default |
|-----------|-------------|---------|
| gerritReplica.images.gerritInit | Image name of the Gerrit init container image | k8sgerrit/gerrit-init |
| gerritReplica.images.gerritReplica | Image name of the Gerrit replica container image | k8sgerrit/gerrit |
| gerritReplica.tolerations | Taints and tolerations work together to ensure that pods are not scheduled onto inappropriate nodes (see Taints and Tolerations) | [] |
| gerritReplica.topologySpreadConstraints | Control how Pods are spread across your cluster among failure-domains (see Pod Topology Spread Constraints) | {} |
| gerritReplica.nodeSelector | Assigns a Pod to the specified Nodes (see Assigning Pods to Nodes) | {} |
| gerritReplica.affinity | Assigns a Pod to the specified Nodes. By default, gerrit-replica pods are evenly distributed over topology.kubernetes.io/zone (see Assign Pods to Nodes using Node Affinity) | podAntiAffinity preferring (weight: 100) that pods with the label app: gerrit-replica are spread across topology.kubernetes.io/zone |
| gerritReplica.replicas | Number of pod replicas to deploy | 1 |
| gerritReplica.additionalAnnotations | Additional annotations for the Pods | {} |
| gerritReplica.additionalPodLabels | Additional labels for the Pods | {} |
| gerritReplica.maxSurge | Max. percentage or number of pods allowed to be scheduled above the desired number | 25% |
| gerritReplica.maxUnavailable | Max. percentage or number of pods allowed to be unavailable at a time | 100% |
| gerritReplica.livenessProbe | Configuration of the liveness probe timings | {initialDelaySeconds: 60, periodSeconds: 5} |
| gerritReplica.probeScheme | Scheme for probes, for example HTTPS | nil |
| gerritReplica.readinessProbe | Configuration of the readiness probe timings | {initialDelaySeconds: 10, periodSeconds: 10} |
| gerritReplica.startupProbe | Configuration of the startup probe timings | {initialDelaySeconds: 10, periodSeconds: 5} |
| gerritReplica.gracefulStopTimeout | Time in seconds Kubernetes will wait until killing the pod during termination (has to be longer than Gerrit's httpd.gracefulStopTimeout to allow graceful shutdown of Gerrit) | 90 |
| gerritReplica.resources | Configure the amount of resources the pod requests/is allowed | requests.cpu: 1, requests.memory: 5Gi, limits.cpu: 1, limits.memory: 6Gi |
| gerritReplica.networkPolicy.ingress | Custom ingress network policy for gerrit-replica pods | nil |
| gerritReplica.networkPolicy.egress | Custom egress network policy for gerrit-replica pods | nil |
| gerritReplica.service.additionalAnnotations | Additional annotations for the Service | {} |
| gerritReplica.service.loadBalancerSourceRanges | The list of allowed IPs for the Service | [] |
| gerritReplica.service.type | Which kind of Service to deploy | NodePort |
| gerritReplica.service.externalTrafficPolicy | Specify how external traffic is handled | Cluster |
| gerritReplica.service.http.port | Port over which to expose HTTP | 80 |
| gerritReplica.service.ssh.enabled | Whether to enable SSH for the Gerrit replica | false |
| gerritReplica.service.ssh.port | Port for SSH | 29418 |
| gerritReplica.keystore | base64-encoded Java keystore (cat keystore.jks \| base64) to be used by Gerrit when using SSL | nil |
| gerritReplica.pluginManagement.plugins | List of Gerrit plugins to install | [] |
| gerritReplica.pluginManagement.plugins[0].name | Name of the plugin | nil |
| gerritReplica.pluginManagement.plugins[0].url | Download URL of the plugin. If given, the plugin will be downloaded; otherwise it will be installed from the gerrit.war file | nil |
| gerritReplica.pluginManagement.plugins[0].sha1 | SHA1 sum of the plugin jar, used to ensure file integrity and version (optional) | nil |
| gerritReplica.pluginManagement.plugins[0].installAsLibrary | Whether the plugin should be symlinked to the lib dir in the Gerrit site | nil |
| gerritReplica.pluginManagement.libs | List of Gerrit library modules to install | [] |
| gerritReplica.pluginManagement.libs[0].name | Name of the lib module | nil |
| gerritReplica.pluginManagement.libs[0].url | Download URL of the lib module | nil |
| gerritReplica.pluginManagement.libs[0].sha1 | SHA1 sum of the lib module jar, used to ensure file integrity and version | nil |
| gerritReplica.pluginManagement.cache.enabled | Whether to cache downloaded plugins | false |
| gerritReplica.pluginManagement.cache.size | Size of the volume used to store cached plugins | 1Gi |
| gerritReplica.priorityClassName | Name of the PriorityClass to apply to replica pods | nil |
| gerritReplica.etc.config | Map of config files (e.g. gerrit.config) that will be mounted to $GERRIT_SITE/etc by a ConfigMap | {gerrit.config: ..., replication.config: ...} (see here) |
| gerritReplica.etc.secret | Map of config files (e.g. secure.config) that will be mounted to $GERRIT_SITE/etc by a Secret | {secure.config: ...} (see here) |
| gerritReplica.additionalConfigMaps | Allows mounting additional ConfigMaps into a subdirectory of $SITE/data | [] |
| gerritReplica.additionalConfigMaps[*].name | Name of the ConfigMap | nil |
| gerritReplica.additionalConfigMaps[*].subDir | Subdirectory under $SITE/data into which the files should be symlinked | nil |
| gerritReplica.additionalConfigMaps[*].data | Data of the ConfigMap. If not set, the ConfigMap has to be created manually | nil |

Gerrit config files

The gerrit-replica chart provides a ConfigMap containing the configuration files used by Gerrit, e.g. gerrit.config, and a Secret containing sensitive configuration like the secure.config, to configure the Gerrit installation in the Gerrit component. The content of the config files can be set in the values.yaml under the keys gerritReplica.etc.config and gerritReplica.etc.secret respectively. The key has to be the filename (e.g. gerrit.config) and the value the file's contents. This way an arbitrary number of configuration files can be loaded into the $GERRIT_SITE/etc directory, e.g. for plugins. All configuration options for Gerrit are described in detail in the official documentation of Gerrit. Some options, however, have to be set in a specific way for Gerrit to work as intended with the chart (a combined example follows the list):

  • gerrit.basePath

    Path to the directory containing the repositories. The chart mounts this directory from a persistent volume to /var/gerrit/git in the container. For Gerrit to find the correct directory, this has to be set to git.

  • gerrit.serverId

    In Gerrit versions higher than 2.14, Gerrit needs a server ID, which is used by NoteDB. Gerrit would usually generate a random ID on startup, but since the gerrit.config file is read-only when mounted as a ConfigMap, this fails. Thus, the server ID has to be set manually!

  • gerrit.canonicalWebUrl

    The canonical web URL has to be set to the Ingress host.

  • httpd.listenUrl

    This has to be set to proxy-http://*:8080/ or proxy-https://*:8080/, depending on whether TLS is enabled in the Ingress; otherwise the Jetty servlet will run into an endless redirect loop.

  • httpd.gracefulStopTimeout / sshd.gracefulStopTimeout

    To enable graceful shutdown of the embedded Jetty server and SSHD, a timeout has to be set with these options. This will be the maximum time Gerrit will wait for HTTP requests to finish before shutdown.

  • container.user

    The technical user in the Gerrit replica container is called gerrit. Thus, this value is required to be gerrit.

  • container.replica

    Since this chart is meant to install a Gerrit replica, this naturally has to be true.

  • container.javaHome

    This has to be set to /usr/lib/jvm/java-11-openjdk-amd64, since this is the path of the Java installation in the container.

  • container.javaOptions

    The maximum heap size has to be set, and its value has to be lower than the memory resource limit set for the container (e.g. -Xmx4g). In your calculation, allow memory for other components running in the container.
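
Putting the options above together, a sketch of the corresponding gerritReplica.etc.config entry in a custom values.yaml could look like this (server ID, host name, timeouts, and heap size are placeholder values):

gerritReplica:
  etc:
    config:
      gerrit.config: |
        [gerrit]
          basePath = git
          serverId = <id-of-the-primary-gerrit>
          canonicalWebUrl = https://gerrit-replica.example.com
        [httpd]
          listenUrl = proxy-https://*:8080/
          gracefulStopTimeout = 1m
        [sshd]
          gracefulStopTimeout = 1m
        [container]
          user = gerrit
          replica = true
          javaHome = /usr/lib/jvm/java-11-openjdk-amd64
          javaOptions = -Xmx4g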

To enable liveness and readiness probes, the healthcheck plugin is installed by default. Note that, if a packaged or downloaded version of the healthcheck plugin is configured, that version will take precedence over the default version. By default, the plugin is configured to disable the querychanges and auth healthchecks, since the Gerrit replica does not index changes and a new Gerrit server will not necessarily have a user to validate authentication with yet.

The default configuration can be overwritten by adding a healthcheck.config file as a key-value pair to gerritReplica.etc.config, just like any other configuration file.
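
For example, assuming a user for authentication already exists, the auth healthcheck could be re-enabled with a sketch like the following (check names follow the healthcheck plugin's configuration format):

gerritReplica:
  etc:
    config:
      healthcheck.config: |
        [healthcheck "auth"]
          enabled = true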

SSH keys should be configured via the helm chart using the gerritReplica.etc.secret map. Gerrit will create its own keys if none are present in the site, but if multiple Gerrit pods are running, each Gerrit instance would have its own keys. Users accessing Gerrit via a load balancer would then run into issues due to changing host keys.
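
A sketch of providing SSH host keys via gerritReplica.etc.secret (the file names assume Gerrit's default host key naming; the key material is abbreviated):

gerritReplica:
  etc:
    secret:
      ssh_host_ed25519_key: |
        -----BEGIN OPENSSH PRIVATE KEY-----
        ...
      ssh_host_ed25519_key.pub: |
        ssh-ed25519 AAAA... gerrit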

Upgrading the Chart

To upgrade an existing installation of the gerrit-replica chart, e.g. to install a newer chart version or to use an updated custom values.yaml file, execute the following command:

cd $(git rev-parse --show-toplevel)/helm-charts
helm upgrade \
  <release-name> \
  ./gerrit-replica \
  -f <path-to-custom-values>.yaml

Uninstalling the Chart

To delete the chart from the cluster, use:

helm delete <release-name>