Allow running ETL into a Docker container

Use GerritForge's simplified Spark setup to ease
the deployment and execution of the ETL

Change-Id: I673ff55fe2745bd91f8cf60f429cd100362068f8
3 files changed
tree: 5b26b01d477d2d60abddad40e9cc0b9b0ac9d993
  dashboard-importer/
  kibana/
  project/
  src/
  .gitignore
  analytics-etl.yaml
  build.sbt
  docker-compose.yaml
  Dockerfile
  LICENSE
  README.md
README.md

Gerrit Analytics ETL

Spark ETL to extract analytics data from Gerrit projects.

Requires Gerrit 2.13.x or later with the analytics plugin installed, and Apache Spark 2.11 or later.

The job can be launched with the following parameters:

bin/spark-submit \
    --conf spark.es.nodes=es.mycompany.com \
    $JARS/SparkAnalytics-assembly-1.0.jar \
    --since 2000-06-01 \
    --aggregate email_hour \
    --url http://gerrit.mycompany.com \
    -e gerrit/analytics
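
Once the job completes, a quick way to verify that documents reached Elasticsearch (a minimal sketch, assuming the gerrit/analytics index/type and the host from the example above, on the default HTTP port 9200):

# count the documents loaded into the gerrit index, analytics type
curl -s 'http://es.mycompany.com:9200/gerrit/analytics/_count'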

Should Elasticsearch need authentication (e.g. if X-Pack is enabled), credentials can be passed through the spark.es.net.http.auth.user and spark.es.net.http.auth.pass parameters.
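
For example (the user and password values here are placeholders):

bin/spark-submit \
    --conf spark.es.nodes=es.mycompany.com \
    --conf spark.es.net.http.auth.user=myuser \
    --conf spark.es.net.http.auth.pass=mypassword \
    $JARS/SparkAnalytics-assembly-1.0.jar \
    --since 2000-06-01 \
    --aggregate email_hour \
    --url http://gerrit.mycompany.com \
    -e gerrit/analytics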

Parameters

  • since, until, aggregate are the same as those defined in the Gerrit Analytics plugin; see: https://gerrit.googlesource.com/plugins/analytics/+/master/README.md

  • -u --url Gerrit server URL with the analytics plugin installed

  • -p --prefix (optional) Projects prefix. Limit the results to those projects that start with the specified prefix.

  • -e --elasticIndex specify as <index>/<type> the Elasticsearch index and type to be loaded; if not provided, no ES export will be performed

  • -o --out folder location for storing the output as JSON files; if not provided, data is saved to <tmp>/analytics- where <tmp> is the system temporary directory

  • -a --email-aliases (optional) "emails to author alias" input data path (a full invocation sketch follows this list).

    CSV files with 3 columns are expected as input.

    Here is an example of the required file structure:

    author,email,organization
    John Smith,john@email.com,John's Company
    John Smith,john@anotheremail.com,John's Company
    David Smith,david.smith@email.com,Independent
    David Smith,david@myemail.com,Independent
    

    You can use the following command to quickly extract the list of authors and emails to create part of an input CSV file:

    echo -e "author,email\n$(git log --pretty="%an,%ae%n%cn,%ce" | sort | uniq)" > /tmp/my_aliases.csv
    

    Once you have it, you just need to add the organization column, for example as sketched below.
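
    A quick way to append the column with a single organization for every row (a sketch; the organization value is a placeholder and can be edited per author afterwards):

    # append an organization column to the header, and a placeholder value to each data row
    awk -F, 'NR==1 {print $0",organization"; next} {print $0",My Company"}' \
        /tmp/my_aliases.csv > /tmp/my_aliases_with_org.csv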

    NOTE:

    • organization will be extracted from the committer email if not specified
    • author defaults to the committer name if not specified
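
    Putting it together, the aliases file is passed to the job through the -a option (a sketch reusing the example invocation above):

    bin/spark-submit \
        --conf spark.es.nodes=es.mycompany.com \
        $JARS/SparkAnalytics-assembly-1.0.jar \
        --since 2000-06-01 \
        --aggregate email_hour \
        --url http://gerrit.mycompany.com \
        -a /tmp/my_aliases_with_org.csv \
        -e gerrit/analytics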

Development environment

A Docker compose file is provided to spin up an instance of Elasticsearch with Kibana locally. Just run docker-compose up.

Kibana will run on port 5601 and Elasticsearch on port 9200.
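
To check that both services are up (a minimal sketch against the default local ports):

# Elasticsearch answers with its cluster banner
curl -s http://localhost:9200
# Kibana should return an HTTP status code once started
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:5601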

Caveats

If Elasticsearch dies with exit code 137 you might have to give Docker more memory.
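
To confirm the exit code of a stopped container (the container name below is a placeholder; the real one is listed by docker ps -a):

docker inspect --format '{{.State.ExitCode}}' <container-name>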

Distribute as Docker Container

To distribute the gerritforge/spark-gerrit-analytics-etl Docker container just run:

sbt clean assembly
docker-compose -f analytics-etl.yaml build
docker push gerritforge/spark-gerrit-analytics-etl:1.0
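
Once pushed, the container can be run with the same ETL parameters (a sketch, assuming the image entrypoint forwards its arguments to spark-submit):

# hypothetical invocation; the flag names are the ETL's own, the argument forwarding is assumed
docker run -it gerritforge/spark-gerrit-analytics-etl:1.0 \
    --since 2000-06-01 \
    --url http://gerrit.mycompany.com \
    -e gerrit/analytics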