Author:    Antonio Barone <firstname.lastname@example.org>, Tue Mar 02 21:38:50 2021 +0100
Committer: Antonio Barone <email@example.com>, Thu Mar 04 17:35:51 2021 +0100
Fix kafka-events replay messages feature

When requesting to reset the offset and consume messages from the beginning, the subscriber first has to wait for the assignment of partitions. Failing to do so causes the subscriber to consume zero records, since no partitions have yet been assigned.

Make an explicit poll() call before seekToBeginning() to ensure that the consumer heartbeat is sent to Kafka and a partition is therefore assigned.

Bug: Issue 14136
Change-Id: Ibc6a66507ebfc9bb6c67df9e576114bed8973e74
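The pattern described in the commit message can be sketched with the standard Kafka Java consumer API. This is an illustrative sketch only, not the plugin's actual code: the topic name, group id, and timeout values are assumptions, and it requires a running broker.

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ReplayFromBeginning {
  public static void main(String[] args) {
    Properties props = new Properties();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "gerrit-replay"); // illustrative group id
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
        "org.apache.kafka.common.serialization.StringDeserializer");
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
        "org.apache.kafka.common.serialization.StringDeserializer");

    try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
      consumer.subscribe(Collections.singletonList("gerrit")); // illustrative topic name

      // Poll once so the group coordinator assigns partitions to this consumer.
      // Without this, consumer.assignment() below would be empty, the seek would
      // be a no-op, and the subsequent poll would return zero records.
      consumer.poll(Duration.ofMillis(1000));

      // The assignment is now non-empty, so the seek actually takes effect.
      consumer.seekToBeginning(consumer.assignment());

      consumer.poll(Duration.ofSeconds(5))
          .forEach(r -> System.out.println(r.value()));
    }
  }
}
```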
This plugin allows defining a distributed stream of events published by Gerrit.
Events can be anything, from traditional stream events to Gerrit metrics.
This plugin requires Gerrit 2.13 or later.
The Kafka plugin can be built as a regular 'in-tree' plugin. This means cloning the Gerrit source tree first and placing the Kafka plugin source directory under the /plugins path. Additionally, the plugins/external_plugin_deps.bzl file needs to be updated to match the Kafka plugin's one.
git clone --recursive https://gerrit.googlesource.com/gerrit
git clone https://gerrit.googlesource.com/plugins/kafka-events gerrit/plugins/kafka-events
cd gerrit
rm plugins/external_plugin_deps.bzl
ln -s ./kafka-events/external_plugin_deps.bzl plugins/.
To build the kafka-events plugin, issue the following command from the Gerrit source path:
bazel build plugins/kafka-events
The output is created in
Assuming a running Kafka broker on the same Gerrit host, add the following settings to gerrit.config:
[plugin "kafka-events"]
  bootstrapServers = localhost:9092
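With a broker configured as above, a small consumer can be used to check that events are flowing. The sketch below uses the standard Kafka Java client; the topic name and group id are illustrative assumptions, not values defined by the plugin, and a running broker is required.

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class TailGerritEvents {
  public static void main(String[] args) {
    Properties props = new Properties();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "gerrit-events-check"); // illustrative
    // Start from the earliest available record when no committed offset exists.
    props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
        "org.apache.kafka.common.serialization.StringDeserializer");
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
        "org.apache.kafka.common.serialization.StringDeserializer");

    try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
      consumer.subscribe(Collections.singletonList("gerrit")); // illustrative topic name
      while (true) {
        for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofSeconds(1))) {
          // Each record value carries one serialized Gerrit event.
          System.out.println(record.value());
        }
      }
    }
  }
}
```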