Fix kafka-events replay messages feature

When requesting to reset the offset and consume messages from the
beginning, the subscriber first has to wait for the assignment of its
partitions.

Failing to do so will cause the subscriber to consume zero records,
since no partitions have yet been assigned.

Make an explicit poll() call before the seekToBeginning() to ensure that
the consumer heartbeat is sent to Kafka and thus partitions are
assigned.
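The reasoning behind the fix can be illustrated with a small mock. This is not the plugin's actual code; the real change uses the Kafka consumer API's poll(), assignment() and seekToBeginning(), whose relevant behavior the mock imitates: partitions are only assigned during a poll, and seeking an empty assignment is a silent no-op.

```java
import java.util.ArrayList;
import java.util.List;

// Simplified mock of a Kafka consumer, illustrating the bug described
// above: partitions are only assigned during poll(), so a seek issued
// before the first poll() operates on an empty assignment and is lost.
class MockConsumer {
  private final List<Integer> assignment = new ArrayList<>();
  private int position = 5; // committed offset, past the records to replay

  List<Integer> assignment() {
    return assignment;
  }

  void poll() {
    // The first poll() joins the consumer group and receives partition 0.
    if (assignment.isEmpty()) {
      assignment.add(0);
    }
  }

  void seekToBeginning(List<Integer> partitions) {
    // Seeking with no partitions is a silent no-op, as in the real client.
    if (!partitions.isEmpty()) {
      position = 0;
    }
  }

  int position() {
    return position;
  }
}

public class ReplayDemo {
  public static void main(String[] args) {
    // Buggy order: seek before any poll(); nothing is assigned yet,
    // so the consumer stays at its committed offset.
    MockConsumer broken = new MockConsumer();
    broken.seekToBeginning(broken.assignment());
    System.out.println("seek before poll, position = " + broken.position());

    // Fixed order: poll() first so partitions get assigned, then seek.
    MockConsumer fixed = new MockConsumer();
    fixed.poll();
    fixed.seekToBeginning(fixed.assignment());
    System.out.println("poll then seek, position = " + fixed.position());
  }
}
```

Running the buggy order leaves the position at the committed offset, while poll-then-seek rewinds it to 0, which is exactly why the extra poll() makes replay-from-beginning work.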

Bug: Issue 14136
Change-Id: Ibc6a66507ebfc9bb6c67df9e576114bed8973e74

Kafka: Gerrit event producer for Apache Kafka



This plugin allows defining a distributed stream of events published by Gerrit.

Events can be anything, from the traditional stream events to the Gerrit metrics.

This plugin requires Gerrit 2.13 or later.


Prerequisites:

  • linux
  • java-1.8
  • Bazel


The Kafka plugin can be built as a regular ‘in-tree’ plugin. This requires cloning a Gerrit source tree first and then placing the Kafka plugin source directory under the /plugins path. Additionally, the plugins/external_plugin_deps.bzl file needs to be replaced with the Kafka plugin’s one.

git clone --recursive
git clone gerrit/plugins/kafka-events
cd gerrit
rm plugins/external_plugin_deps.bzl
ln -s ./kafka-events/external_plugin_deps.bzl plugins/.

To build the kafka-events plugin, run this command from the root of the Gerrit source tree:

bazel build plugins/kafka-events

The output is created in


Minimum Configuration

Assuming a running Kafka broker on the same Gerrit host, add the following settings to gerrit.config:

  [plugin "kafka-events"]
    bootstrapServers = localhost:9092
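For a broker running on another machine, or a multi-node cluster, bootstrapServers takes the usual Kafka comma-separated host:port list; the host names below are placeholders, not defaults of the plugin:

  [plugin "kafka-events"]
    bootstrapServers = kafka-1.example.com:9092,kafka-2.example.com:9092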