For this step, you need Java 11 and Maven installed.
To build all components of the operator, run:
cd operator
mvn clean install
This step compiles the Java source code into .class bytecode files in a newly generated operator/target folder. A gerrit-operator image is also created locally. Moreover, the CRD helm chart is updated with the latest CRDs as part of this build step.
The jar version and container image tag can be set using the revision property:
mvn clean install -Drevision=$(git describe --always --dirty)
The Gerrit Operator is still in an early state of development and is thus not yet semantically versioned. The CustomResources are currently versioned independently, i.e. the GerritCluster resource can have a different version than the GitGarbageCollection resource, although they are in the same group. At the moment, only the current version is supported by the operator, i.e. there won't be a migration path. As soon as the API reaches some stability, this will change.
Currently, no container image for the operator exists in the docker.io/k8sgerrit registry. You must build your own image in order to run the operator in your cluster. To publish the container image of the Gerrit Operator:
Update the docker.registry and docker.org tags in the operator/pom.xml file to point to your own Docker registry and org that you have permissions to push to:

<docker.registry>my-registry</docker.registry>
<docker.org>my-org</docker.org>

Then run:

cd operator
mvn clean install -P publish
This will build the operator source code, create an image out of the built artifacts, and publish this image to the registry specified in the pom.xml file. The built image is multi-platform: it will run on both amd64 and arm64 architectures. It is okay to run this build command from an ARM Mac.
Executing the E2E tests has a few infrastructure requirements that have to be provided:
A secret containing a valid TLS certificate has to be provided. For Istio, the secret has to be named tls-secret and be present in the istio-system namespace. For the Ingress controller, the secret has to be either set as the default secret to be used or somehow automatically be provided in the namespaces created by the tests and named tls-secret, e.g. by using Gardener to manage DNS and certificates.

A sample setup for the components required in the cluster is provided under $REPO_ROOT/supplements/test-cluster. Some configuration has to be done manually (marked by #TODO), and the deploy.sh script can be used to install/update all components.
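For example, assuming the repository root as the working directory (the #TODO placeholders have to be filled in first):

cd supplements/test-cluster
./deploy.sh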
In addition, some properties have to be set to configure the tests:

- rwmStorageClass: Name of the StorageClass providing RWM-access (default: nfs-client)
- registry: Registry to pull container images from
- RegistryOrg: Organization of the container images
- tag: Container tag
- registryUser: User for the container registry
- registryPwd: Password for the container registry
- ingressDomain: Domain to be used for the ingress
- istioDomain: Domain to be used for istio
- ldapAdminPwd: Admin password for LDAP server
- gerritUser: Username of a user in LDAP
- gerritPwd: The password of gerritUser
The properties should be set in the test.properties file. Alternatively, a path to a properties file can be configured by using the -Dproperties=<path to properties file> option.
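For illustration, a test.properties file could look like this (all values are placeholders):

rwmStorageClass=nfs-client
registry=my-registry.example.com
RegistryOrg=my-org
tag=latest
registryUser=<registry user>
registryPwd=<registry password>
ingressDomain=ingress.example.com
istioDomain=istio.example.com
ldapAdminPwd=<LDAP admin password>
gerritUser=gerrit-admin
gerritPwd=<password of gerritUser>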
To run all E2E tests, use:
cd operator
mvn clean install -P integration-test -Dproperties=<path to properties file>
Note that running the E2E tests will also involve pushing the container image to the repository configured in the properties file.
Deploying Gerrit using the operator requires some additional prerequisites to be fulfilled:
Gerrit instances share the repositories and other data using shared volumes. Thus, a StorageClass and a suitable provisioner have to be available in the cluster. An example of such a provisioner is the NFS-subdir-external-provisioner.
The Gerrit Operator will also set up network routing rules and an ingress point for the Gerrit instances it manages. The network routing rules ensure that requests will be routed to the intended GerritCluster component, e.g. in case a primary Gerrit and a Gerrit Replica exist in the cluster, git fetch/clone requests will be sent to the Gerrit Replica and all other requests to the primary Gerrit.
You may specify the ingress provider by setting the INGRESS
environment variable in the operator Deployment manifest. That is, the choice of an ingress provider is an operator-level setting. However, you may specify some ingress configuration options (host, tls, etc) at the GerritCluster
level, via GerritClusterIngressConfig.
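As a rough sketch, the ingress options in a GerritCluster might look like the following (spec.ingress.enabled is taken from this document; the host and tls field names are illustrative, so consult the GerritClusterIngressConfig API reference for the exact schema):

spec:
  ingress:
    enabled: true
    host: gerrit.example.com  # hostname used as the ingress point
    tls:
      enabled: true           # whether TLS is terminated at the ingress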
The Gerrit Operator currently supports the following Ingress providers:
NONE
The operator will install no Ingress components. Services will still be available. No prerequisites are required for this case.
If spec.ingress.enabled is set to true in GerritCluster, the operator will still configure network-related options like httpd.listenUrl in Gerrit based on the other options in spec.ingress.
INGRESS
The operator will install an Ingress. Currently only the Nginx-Ingress-Controller is supported, which will have to be installed in the cluster and has to be configured to allow snippet configurations. An example of a working deployment can be found here.
SSH support is not fully managed by the operator, since it has to be enabled and configured in the nginx ingress controller itself.
ISTIO
The operator supports the use of Istio as a service mesh. An example on how to set up Istio can be found here.
AMBASSADOR
The operator also supports Ambassador for setting up ingress to the Gerrits deployed by the operator. If you use Ambassador's “Edge Stack” or “Emissary Ingress” to provide ingress to your k8s services, you should set INGRESS=AMBASSADOR. Currently, SSH is not directly supported when using INGRESS=AMBASSADOR.
You will need to have admin privileges for your k8s cluster in order to be able to deploy the following resources.
You may choose to deploy the operator resources using helm, or directly via kubectl apply.
Make sure you have helm installed in your environment.
There are two relevant helm charts.
The first chart, gerrit-operator-crds, installs the CRDs (k8s API extensions) to your k8s cluster. No chart values need to be modified. The build initiated by the mvn install command from the Publish section includes a step that updates the CRDs in this helm chart to reflect any changes made to them in the operator source code. The CRDs installed are: GerritCluster, Gerrit, GitGarbageCollection, Receiver.
You do not need to manually helm install this chart; it is installed as a dependency of the gerrit-operator helm chart described in the next subheading.
The second chart, gerrit-operator, installs the gerrit-operator-crds chart as a dependency, along with the k8s resources needed to run the operator. The operator itself creates a Service resource and a ValidatingWebhookConfiguration resource behind the scenes.
You will need to modify the values in helm-charts/gerrit-operator/values.yaml to point the chart to the registry/org that is hosting the Docker container image for the operator (from the Publish step earlier). Now, run:
# Create a namespace for the gerrit-operator
kubectl create ns gerrit-operator

# Build the gerrit-operator-crds chart and store it in the charts/ subdirectory
helm dependency build helm-charts/gerrit-operator/

# Install the gerrit-operator-crds chart and the gerrit-operator chart
helm -n gerrit-operator install gerrit-operator helm-charts/gerrit-operator/
The chart itself, and all the bundled namespaced resources, are installed in the gerrit-operator
namespace, as per the -n
option in the helm command.
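To verify that the operator came up (a quick sanity check, not a required step):

helm -n gerrit-operator list
kubectl -n gerrit-operator get pods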
First, all CustomResourceDefinitions have to be deployed:
kubectl apply -f operator/target/classes/META-INF/fabric8/*-v1.yml
Note that these do not include the -v1beta1.yaml files, as those are for old Kubernetes versions.
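Note also that -f takes a single path, so if your shell expands the glob into multiple files the command above may fail; in that case, apply each file individually, e.g.:

for f in operator/target/classes/META-INF/fabric8/*-v1.yml; do
  kubectl apply -f "$f"
done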
The operator requires a Java Keystore with a keypair inside to allow TLS verification for Kubernetes Admission Webhooks. To create a keystore and encode it with base64, run:
keytool \
  -genkeypair \
  -alias operator \
  -keystore keystore \
  -keyalg RSA \
  -keysize 2048 \
  -validity 3650

cat keystore | base64 -b 0

Note that -b 0 is the macOS syntax for disabling line wrapping; on GNU/Linux, use base64 -w 0 instead.
Add the result to the Secret in k8s/operator.yaml
(see comments in the file) and also add the base64-encoded password for the keystore to the secret.
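As a purely hypothetical sketch of the result (the actual Secret name and key names are defined in k8s/operator.yaml, so follow the comments there):

apiVersion: v1
kind: Secret
metadata:
  name: keystore              # hypothetical name, see operator.yaml
type: Opaque
data:
  keystore: <base64-encoded keystore>           # output of the base64 command above
  keystore-password: <base64-encoded password>  # hypothetical key name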
Then the operator and associated RBAC rules can be deployed:
kubectl apply -f operator/k8s/rbac.yaml
kubectl apply -f operator/k8s/operator.yaml
k8s/operator.yaml contains a basic deployment of the operator. Resources, docker image name etc. might have to be adapted. For example, the ingress provider has to be configured by setting the INGRESS environment variable in operator/k8s/operator.yaml to either NONE, INGRESS, ISTIO, or AMBASSADOR.
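In the Deployment manifest this is a plain environment variable on the operator container, along these lines:

env:
  - name: INGRESS
    value: "ISTIO"  # or NONE, INGRESS, AMBASSADOR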
The operator manages several CustomResources that are described in more detail below.
The API reference for all CustomResources can be found here.
The GerritCluster CustomResource installs one or multiple Gerrit instances. The operator takes over managing the state of all Gerrit instances within the cluster and ensures that the state stays in sync. To this end, it manages additional resources that are shared between Gerrit instances or are required to synchronize the state between them.
Installing Gerrit with the GerritCluster resource is highly recommended over using the Gerrit CustomResource directly, even if only a single deployment is installed, since this reduces the requirements that have to be managed manually. The same holds true for the Receiver CustomResource, which provides little value without a Gerrit instance using the same site.
For now, only a single Gerrit CustomResource using each mode can be deployed in a GerritCluster, e.g. one primary Gerrit and one Gerrit Replica. The reason is that sharding is not yet implemented, so multiple deployments do not provide any more value than scaling the existing deployment. Instead of a primary Gerrit, a Receiver can also be installed.
The Gerrit CustomResource deploys a Gerrit, which can run in multiple modes.
The Gerrit-CustomResource is mainly meant to be used by the GerritCluster-reconciler to install Gerrit-instances managed by a GerritCluster. Gerrit-CustomResources can however also be applied separately. Note that the Gerrit operator will then not create any storage resources or set up any network resources in addition to the service.
The GitGarbageCollection-CustomResource is used by the operator to set up CronJobs that regularly run Git garbage collection on the git repositories that are served by a GerritCluster.
A GitGarbageCollection can either handle all repositories, if no specific repository is configured, or a selected set of repositories. Multiple GitGarbageCollections can exist as part of the same GerritCluster, but no two GitGarbageCollections can work on the same project; the operator prevents such conflicts.
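Since each GitGarbageCollection materializes as a CronJob, the resources the operator created can be inspected with standard tooling (assuming the default lowercase plural resource name):

kubectl -n <gerrit-namespace> get gitgarbagecollections
kubectl -n <gerrit-namespace> get cronjobs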
NOTE: A Receiver should never be installed for a GerritCluster that is already managing a primary Gerrit to avoid conflicts when writing into repositories.
The Receiver-CustomResource installs a Deployment running Apache with a git-http-backend that is meant to receive pushes performed by Gerrit's replication plugin. It can only be installed into a GerritCluster that does not include a primary Gerrit, but only Gerrit Replicas.
The Receiver-CustomResource is mainly meant to be used by the GerritCluster-reconciler to install a Receiver-instance managed by a GerritCluster. Receiver-CustomResources can however also be applied separately. Note that the Gerrit operator will then not create any storage resources or set up any network resources in addition to the service.
The GerritNetwork CustomResource deploys network components depending on the configured ingress provider to enable ingress traffic to GerritCluster components.
The GerritNetwork CustomResource is not meant to be installed manually, but will be created by the Gerrit Operator based on the GerritCluster CustomResource.
The operator takes care of all configuration in Gerrit that depends on the infrastructure, i.e. Kubernetes and the GerritCluster. This avoids duplicated configuration and misconfiguration.

This means that some options in the gerrit.config are not allowed to be changed. If these values are set and do not match the expected value, a ValidationWebhook will reject the resource creation/update. Thus, it is best not to set these values at all. To see which values the operator assigned, check the ConfigMap created by the operator for the respective Gerrit.
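For example (the ConfigMap name is generated by the operator, so list first and then inspect):

kubectl -n <gerrit-namespace> get configmaps
kubectl -n <gerrit-namespace> get configmap <configmap-name> -o yaml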
These options are:
- cache.directory: This should stay in the volume mounted to contain the Gerrit site and will thus be set to cache.
- container.javaHome: This has to be set to /usr/lib/jvm/java-11-openjdk-amd64, since this is the path of the Java installation in the container.
- container.javaOptions = -Djavax.net.ssl.trustStore: The keystore will be mounted to /var/gerrit/etc/keystore.
- container.replica: This has to be set in the Gerrit-CustomResource under spec.isReplica.
- container.user: The technical user in the Gerrit container is called gerrit.
- gerrit.basePath: The git repositories are mounted to /var/gerrit/git in the container.
- gerrit.canonicalWebUrl: The canonical web URL has to be set to the hostname used by the Ingress/Istio.
- httpd.listenURL: This has to be set to proxy-http://*:8080/ or proxy-https://*:8080/, depending on whether TLS is enabled in the Ingress; otherwise the Jetty servlet will run into an endless redirect loop.
- sshd.advertisedAddress: This is only enforced if Istio is enabled. It can be configured otherwise.
- sshd.listenAddress: Since the container port for SSH is fixed, this will be set automatically. If no SSH port is configured in the service, the SSHD is disabled.
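Put together, the operator-managed portion of a gerrit.config would look roughly like this (a sketch for illustration only; the authoritative values are in the operator-generated ConfigMap, and the SSH port shown is Gerrit's default):

[gerrit]
  basePath = git
  canonicalWebUrl = https://gerrit.example.com  # hostname used by the Ingress/Istio
[container]
  user = gerrit
  javaHome = /usr/lib/jvm/java-11-openjdk-amd64
  javaOptions = -Djavax.net.ssl.trustStore=/var/gerrit/etc/keystore
  replica = false  # mirrors spec.isReplica
[httpd]
  listenUrl = proxy-https://*:8080/  # proxy-http://*:8080/ if TLS is disabled at the ingress
[sshd]
  listenAddress = *:29418  # set automatically; SSHD is disabled if the service has no SSH port
[cache]
  directory = cache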