[Operator] Add support for Ambassador ingress type

This change adds an Ambassador-based GerritNetworkReconciler, which is
used when the `INGRESS` environment variable for the operator is set to
"ambassador". In this case, the Ambassador CRDs must already be
deployed in the k8s cluster. This change uses the CRD version
`getambassador.io/v2`.
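
For illustration, a minimal sketch of how this selection could look
when registering reconcilers with the josdk `Operator`; the surrounding
class, the reconciler constructors, and GerritIngressReconciler are
hypothetical names, not the actual code of this change:

import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClientBuilder;
import io.javaoperatorsdk.operator.Operator;

public class OperatorMain {
  public static void main(String[] args) {
    KubernetesClient client = new KubernetesClientBuilder().build();
    Operator operator = new Operator();
    // Select the network reconciler based on the INGRESS environment
    // variable; "ambassador" enables the Ambassador-based reconciler.
    if ("ambassador".equalsIgnoreCase(System.getenv("INGRESS"))) {
      operator.register(new GerritAmbassadorReconciler(client));
    } else {
      operator.register(new GerritIngressReconciler(client));
    }
    operator.start();
  }
}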

Ambassador (also known as "Emissary") is an open-source ingress
provider that sets up ingresses for services via Custom Resources such
as `Mapping`, `TLSContext`, etc. The newly created
GerritAmbassadorReconciler creates and manages these resources as josdk
"dependent resources". Mappings are created to direct traffic to the
Primary Gerrit, the Replica Gerrit, and/or the Receivers in the
GerritCluster. If a GerritCluster has both a Primary and a Replica,
then all read traffic (git fetch/clone requests) is directed to the
Replica, and all write traffic (git push) is directed to the Primary.
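
As a rough sketch of what one such dependent resource could look like,
using josdk's CRUDKubernetesDependentResource and the generated
Mapping/MappingSpec POJOs described further below; the class name,
field names, service name, and prefix regex are illustrative
assumptions, not the actual code of this change:

import io.fabric8.kubernetes.api.model.ObjectMetaBuilder;
import io.javaoperatorsdk.operator.api.reconciler.Context;
import io.javaoperatorsdk.operator.processing.dependent.kubernetes.CRUDKubernetesDependentResource;

public class ReceivePackMappingDependentResource
    extends CRUDKubernetesDependentResource<Mapping, GerritCluster> {

  public ReceivePackMappingDependentResource() {
    super(Mapping.class);
  }

  @Override
  protected Mapping desired(GerritCluster gerritCluster, Context<GerritCluster> context) {
    Mapping mapping = new Mapping();
    mapping.setMetadata(
        new ObjectMetaBuilder()
            .withName("gerrit-receive-pack")
            .withNamespace(gerritCluster.getMetadata().getNamespace())
            .build());
    MappingSpec spec = new MappingSpec();
    // Route write traffic (git push, i.e. receive-pack requests) to the
    // Primary Gerrit service.
    spec.setPrefix("/.*/git-receive-pack");
    spec.setPrefixRegex(true);
    spec.setService("gerrit-primary:8080");
    mapping.setSpec(spec);
    return mapping;
  }
}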

Because no fabric8 extension exists for Ambassador custom resources,
managing Ambassador CRs in the operator is a little tricky. Two options
were considered:
1. Use the fabric8 k8s client's `GenericKubernetesResource` class.
`GenericKubernetesResource` implements the `HasMetadata` interface,
just like the `CustomResource` parent class that is used to define
custom resources like GerritCluster. `GenericKubernetesResource` can be
used to create custom resources by defining the resource spec as a Java
`Map<String, Object>` (see the sketch after this list). However, with
this option we would need to subclass `GenericKubernetesResource` (e.g.
`OperatorMappingWrapper`) to be able to provide an apiVersion and/or
group (expected by josdk). This would introduce an unnecessary CRD to
the operator, which is not desirable.
2. Generate Ambassador custom resource POJOs from the CRD yaml using
the `java-generator-maven-plugin`. With this approach, the Ambassador
resources appear to the operator as if they had been manually defined
in the source code as Java classes, just like the other resources
(GerritCluster, Gerrit, etc.).

We went with option 2.
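
For illustration of option 1 (not taken), a hypothetical sketch of
building a Mapping with `GenericKubernetesResource`; since josdk
expects the apiVersion/group to be derivable from the resource class
itself, this approach would additionally require the wrapper subclass
mentioned above:

import java.util.Map;

import io.fabric8.kubernetes.api.model.GenericKubernetesResource;
import io.fabric8.kubernetes.api.model.ObjectMetaBuilder;

public class GenericMappingExample {
  static GenericKubernetesResource exampleMapping() {
    GenericKubernetesResource mapping = new GenericKubernetesResource();
    mapping.setApiVersion("getambassador.io/v2");
    mapping.setKind("Mapping");
    mapping.setMetadata(new ObjectMetaBuilder().withName("gerrit-mapping").build());
    // The spec is an untyped map instead of a typed POJO.
    mapping.setAdditionalProperty(
        "spec", Map.of("prefix", "/", "service", "gerrit-primary:8080"));
    return mapping;
  }
}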

Ambassador CRDs are fetched from
https://github.com/emissary-ingress/emissary/blob/master/manifests/emissary/emissary-crds.yaml.in
and stored in the repo as a yaml file. The `java-generator-maven-plugin`
(a fabric8 project) is used to generate POJOs from this CRD yaml. These
POJOs represent the Ambassador CRs as Java classes.
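
Roughly, the generated code has the following shape (a sketch; the
exact package, annotations, and class members depend on the plugin
version and configuration):

import io.fabric8.kubernetes.api.model.Namespaced;
import io.fabric8.kubernetes.client.CustomResource;
import io.fabric8.kubernetes.model.annotation.Group;
import io.fabric8.kubernetes.model.annotation.Version;

// Generated from the Mapping CRD, apiVersion getambassador.io/v2.
@Group("getambassador.io")
@Version("v2")
public class Mapping extends CustomResource<MappingSpec, MappingStatus>
    implements Namespaced {}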

Manual edits to the Ambassador CRDs
- The yaml file defines many CRDs, but we only need `Mapping` and
`TLSContext` for this change, so the rest of the CRDs are deleted from
the file
- The generator plugin has a bug when converting enum types
(https://github.com/fabric8io/kubernetes-client/issues/5457). To avoid
hitting this bug, the `v2ExplicitTLS` field in the `v3alpha1` version
of the Mapping CRD is commented out
- Emissary CRD apiVersions are not self-contained. The `ambassador_id`
field in the `Mapping` and `TLSContext` CRDs is not defined in
apiVersion v2, but users are still able to create v2 Mappings with an
ambassador_id field (Emissary converts v2 resources to v3 via webhooks
defined in the emissary service). To be able to set `ambassador_id` on
the Mapping/TLSContext CRs created via the operator, we manually add
`ambassador_id` to the v2 Mapping and TLSContext CRDs (see the snippet
after this list)
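
With that manual edit, the generated v2 spec classes expose the field.
A minimal illustration, assuming the generator maps `ambassador_id` to
a `List<String>` property and using a made-up Ambassador ID:

import java.util.List;

class AmbassadorIdExample {
  static MappingSpec specWithAmbassadorId() {
    MappingSpec spec = new MappingSpec();
    // Only settable because ambassador_id was manually added to the v2 CRD.
    spec.setAmbassadorId(List.of("gerrit-ambassador"));
    return spec;
  }
}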

Currently, the operator watches for changes to resources in all
namespaces. If Ambassador resources that use both the v1 and v2
apiVersions are deployed in your k8s cluster, you might run into
deserialization errors as the operator attempts to deserialize the
existing v1 resources into the v2 POJOs.

Change-Id: I23d446e21da87e33c71fe3dad481ec34cd963bbe
README.md

Gerrit Deployment on Kubernetes

Container images, configurations, helm charts and a Kubernetes Operator for installing Gerrit on Kubernetes.

Deploying Gerrit on Kubernetes

This project provides helm-charts to install Gerrit either as a primary instance or a replica on Kubernetes.

The helm-charts are located in the ./helm-charts directory. Currently, the charts are not published in a registry and have to be deployed from local sources.

For a detailed guide on how to install the helm-charts, refer to the respective READMEs in the helm-charts directories:

These READMEs detail the prerequisites required by the charts as well as all configuration options currently provided by the charts.

To evaluate and test the helm-charts, they can be installed on a local machine running Minikube. Follow this guide for a detailed description of how to set up the Minikube cluster and install the charts.

Alternatively, a Gerrit Operator can be used to install and operate Gerrit in a Kubernetes cluster. The documentation describes how to build and deploy the Gerrit Operator and how to use it to install Gerrit.

Docker images

This project provides the sources for docker images used by the helm-charts. The images are also provided on Dockerhub.

The project also provides scripts to build and publish the images so that custom versions can be used by the helm-charts. However, this requires a docker registry that can be accessed from the Kubernetes cluster on which Gerrit will be deployed. The functionality of the scripts is described in the following subsections.

Building images

To build all images, the build-script in the root directory of the project can be used:

./build

If a specific image should be built, the image name can be specified as an argument. Multiple images can be specified at once:

./build gerrit git-gc

By default, the build-script tags the images with the latest tag. A custom tag can be defined by using the --tag TAG option:

./build --tag test

The version of Gerrit built into the images can be changed by providing a download URL for a .war file containing Gerrit:

./build --gerrit-url https://example.com/gerrit.war

The version of a health-check plugin built into the images can be changed by providing a download URL for a .jar file containing the plugin:

./build --healthcheck-jar-url https://example.com/healthcheck.jar

The build script will in addition tag the image with the output of git describe --dirty.

The single component images inherit from a base image. The Dockerfile for the base image can be found in the ./base directory. It will be built automatically by the ./build script. If the component images are built manually, the base image has to be built first with the target base:latest, since it is not available in a registry and thus has to exist locally.

Publishing images

The publish script in the root directory of the project can be used to push the built images to the configured registry. To do so, log in to the registry before executing the script:

docker login <registry>

The registry and image version can be configured via the env variables REGISTRY and TAG. In addition, these values can also be passed as the command line options --registry and --tag, in which case they override the values from the env variables:

./publish <component-name>

The <component-name> is one of: apache-git-http-backend, git-gc, gerrit or gerrit-init.

Adding the --update-latest flag will also update the images tagged latest in the repository:

./publish --update-latest <component-name>

Running images in Docker

The container images are meant to be used by the helm-charts provided in this project. The images are thus not designed to be used in a standalone setup. To run Gerrit on Docker use the docker-gerrit project.

Running tests

The tests are implemented using Python and pytest. To ensure a well-defined test-environment, pipenv is meant to be used to install packages and provide a virtual environment in which to run the tests. To install pipenv, use brew:

brew install pipenv

More detailed information can be found in the pipenv GitHub repo.

To create the virtual environment with all required packages, run:

pipenv install

To run all tests, execute:

pipenv run pytest -m "not smoke"

The -m "not smoke" option excludes the smoke tests, which would fail since no Gerrit instance is running when they are executed.

Some tests need to create files in a temporary directory, and some of these files will be mounted into docker containers by the tests. For this to work, either make sure that the system temporary directory is accessible by the Docker daemon, or set the base temporary directory to a directory accessible by Docker by executing:

pipenv run pytest --basetemp=/tmp/k8sgerrit -m "not smoke"

By default, the tests will build all images from scratch, which greatly increases the time needed for testing. To use already existing container images, a tag can be provided as follows:

pipenv run pytest --tag=v0.1 -m "not smoke"

The tests will then use the existing images with the provided tag. If an image does not exist, it will still be built by the tests.

By default the build of the container images will not use the build cache created by docker. To enable the cache, execute:

pipenv run pytest --build-cache -m "not smoke"

Slow tests may be marked with the decorator @pytest.mark.slow. These tests may then be skipped as follows:

pipenv run pytest --skip-slow -m "not smoke"

There are also other marks that allow selecting subsets of tests (refer to this section).

To run specific tests, execute one of the following:

# Run all tests in a directory (including subdirectories)
pipenv run pytest tests/container-images/base

# Run all tests in a file
pipenv run pytest tests/container-images/base/test_container_build_base.py

# Run a specific test
pipenv run \
  pytest tests/container-images/base/test_container_build_base.py::test_build_base

# Run tests with a specific marker
pipenv run pytest -m "docker"

For a more detailed description of how to use pytest, refer to the official documentation.

Test marks

docker

Marks tests which start up docker containers. These tests will interact with the containers, either by using docker exec or by sending HTTP requests. Make sure that your system supports this kind of interaction.

incremental

Marks test classes in which the contained test functions have to run incrementally.

integration

Marks integration tests. These tests test interactions between containers, between outside clients and containers and between the components installed by a helm chart.

kubernetes

Marks tests that require a Kubernetes cluster. These tests are used to test the functionality of the helm charts in this project and the interaction of the components installed by them. The cluster should not be used for other purposes to minimize unforeseen interactions.

These tests require a storage class with ReadWriteMany access mode within the cluster. The name of the storage class has to be provided with the --rwm-storageclass option (default: shared-storage).

slow

Marks tests that need an above average time to run.

structure

Marks structure tests. These tests are meant to test whether certain components exist in a container. They ensure that components expected by the users of the container, e.g. the helm charts, are present in the containers.

Running smoke tests

To run smoke tests, use the following command:

pipenv run pytest \
  -m "smoke" \
  --basetemp="<tmp-dir for tests>" \
  --ingress-url="<Gerrit URL>" \
  --gerrit-user="<Gerrit user>" \
  --gerrit-pwd

The smoke tests require a Gerrit user that is allowed to create and delete projects. The username has to be given via --gerrit-user. Setting the --gerrit-pwd flag will cause a password prompt for the password of the Gerrit user.

Contributing

Contributions to this project are welcome. If you are new to the Gerrit workflow, refer to the Gerrit documentation for guidance on how to contribute changes.

The contribution guidelines for this project can be found here.

Roadmap

The roadmap of this project can be found here.

Feature requests can be made by pushing a change for the roadmap. This can also be done to announce/discuss features that you would like to provide.

Contact

The Gerrit Mailing List can be used to post questions and comments on this project or Gerrit in general.