
Gerrit Analytics ETL

Spark ETL to extract analytics data from Gerrit projects.

Requires Gerrit 2.13.x or later with the analytics plugin installed, and Apache Spark 2.11 or later.

The job can be launched with the following parameters:

bin/spark-submit \
    --conf spark.es.nodes=<elasticsearch host> \
    $JARS/SparkAnalytics-assembly-1.0.jar \
    --since 2000-06-01 \
    --aggregate email_hour \
    --url <gerrit url> \
    -e gerrit/analytics

Should Elasticsearch require authentication (i.e. if X-Pack is enabled), credentials can be passed through the username and password parameters.
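The launch command, including the optional credentials, can also be assembled programmatically. Below is a minimal Python sketch; the --username/--password flag names are an assumption for illustration, so check the job's help output for the exact spelling:

```python
def build_spark_submit(jar, since, aggregate, url, elastic_index,
                       username=None, password=None):
    """Assemble a spark-submit argument list for the analytics ETL job.

    The --username/--password flag names are assumptions for illustration;
    verify them against the job's actual CLI before use.
    """
    args = ["bin/spark-submit", jar,
            "--since", since,
            "--aggregate", aggregate,
            "--url", url,
            "-e", elastic_index]
    if username and password:
        args += ["--username", username, "--password", password]
    return args

cmd = build_spark_submit("SparkAnalytics-assembly-1.0.jar",
                         "2000-06-01", "email_hour",
                         "http://gerrit.example.com", "gerrit/analytics",
                         username="admin", password="secret")
```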


  • since, until, aggregate have the same meaning as defined in the Gerrit Analytics plugin documentation

  • -u --url Gerrit server URL with the analytics plugin installed

  • -p --prefix (optional) Projects prefix. Limit the results to those projects that start with the specified prefix.

  • -e --elasticIndex (optional) Elasticsearch index to load the data into, specified as index/type. If not provided, no ES export will be performed.

  • -o --out (optional) folder location for storing the output as JSON files. If not provided, data is saved to an analytics- prefixed folder under the system temporary directory.

  • -a --email-aliases (optional) “emails to author alias” input data path.

    CSV files with 3 columns (author, email, organization) are expected as input.

    Here is an example of the required file structure (the email values are illustrative placeholders):

    author,email,organization
    John Smith,john@example.com,John's Company
    John Smith,jsmith@example.com,John's Company
    David Smith,david@example.com,Independent
    David Smith,dsmith@example.com,Independent

    You can use the following command to quickly extract the list of authors and emails to create part of an input CSV file:

    echo -e "author,email\n$(git log --pretty="%an,%ae%n%cn,%ce"|sort |uniq )" > /tmp/my_aliases.csv

    Once you have it, you just have to add the organization column.


    • organization will be extracted from the committer email if not specified
    • author will default to the committer name if not specified
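The defaulting rules above can be sketched as follows. This is an illustrative Python helper, not the plugin's actual (Scala) code, and it assumes the organization is taken from the domain part of the email:

```python
def fill_alias_defaults(row, committer_name):
    """Apply the alias defaulting rules: a missing author falls back to the
    committer name, and a missing organization falls back to the email's
    domain part. (Illustrative sketch; the real ETL implements this in Scala.)
    """
    author, email, organization = row
    if not author:
        author = committer_name
    if not organization:
        # Derive the organization from the domain part of the email address.
        organization = email.split("@", 1)[1] if "@" in email else ""
    return author, email, organization

row = fill_alias_defaults(("", "jane@acme.org", ""), "Jane Doe")
```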

Development environment

A docker compose file is provided to spin up an instance of Elasticsearch with Kibana locally. Just run docker-compose up.

Kibana will run on port 5601 and Elasticsearch on port 9200.


If Elasticsearch dies with exit code 137, you might have to give Docker more memory.
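After docker-compose up, the services take a few seconds to become reachable. A small polling helper like the sketch below can wait for Elasticsearch before submitting the job; the probe is injected (in practice it would issue an HTTP GET against http://localhost:9200) so the helper stays testable without a live cluster:

```python
import time

def wait_for_service(probe, timeout=60.0, interval=0.1):
    """Poll `probe` until it returns True or the timeout elapses.

    `probe` is a zero-argument callable; in real use it would check
    http://localhost:9200 (Elasticsearch) or :5601 (Kibana).
    Returns True if the service came up, False on timeout.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if probe():
            return True
        time.sleep(interval)
    return False

# Example with a fake probe that succeeds on its third call.
calls = {"n": 0}
def fake_probe():
    calls["n"] += 1
    return calls["n"] >= 3

ok = wait_for_service(fake_probe, timeout=5.0, interval=0.01)
```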