More meaningful example in README.md

Include a more real-life example using spark-submit,
including the ES credentials and a more real-life Gerrit URL

Change-Id: I6366152203c24bd3cc42a9081dcfb30297fa578f
1 file changed
README.md

spark-gerrit-analytics-etl

Spark ETL to extract analytics data from Gerrit projects.

The job can be launched with the following parameters:

bin/spark-submit \
    --conf spark.es.nodes=es.mycompany.com \
    --conf spark.es.net.http.auth.user=elastic \
    --conf spark.es.net.http.auth.pass=changeme \
    $JARS/SparkAnalytics-assembly-1.0.jar \
    --since 2000-06-01 \
    --aggregate email_hour \
    --url http://gerrit.mycompany.com \
    -e gerrit/analytics
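
Once the job has finished, the export can be spot-checked by querying the target index directly. This is a minimal sketch, assuming the cluster exposes the default Elasticsearch HTTP port 9200 and uses the same credentials passed to spark-submit above:

# Retrieve one document from the gerrit index to confirm data was loaded
curl -u elastic:changeme \
    "http://es.mycompany.com:9200/gerrit/_search?size=1&pretty"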

Parameters

  • since, until, aggregate are the same as those defined in the Gerrit Analytics plugin; see: https://gerrit.googlesource.com/plugins/analytics/+/master/README.md
  • -u --url Gerrit server URL with the analytics plugin installed
  • -e --elasticIndex Elasticsearch index to load the data into, in the form index/type (e.g. gerrit/analytics); if not provided, no ES export will be performed
  • -o --out folder location for storing the output as JSON files; if not provided, data is saved to an analytics- prefixed folder under the system temporary directory (see the sketch after this list)
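
For example, to write the aggregated data to local JSON files instead of exporting to Elasticsearch, the -o option can be combined with a since/until date range. This is a minimal sketch based on the parameter list above; the --until long form and the /tmp/gerrit-analytics output path are illustrative assumptions, not taken verbatim from the project:

# Aggregate contributions per email and hour for 2017 and store them as JSON files
bin/spark-submit \
    $JARS/SparkAnalytics-assembly-1.0.jar \
    --since 2017-01-01 \
    --until 2018-01-01 \
    --aggregate email_hour \
    --url http://gerrit.mycompany.com \
    -o /tmp/gerrit-analytics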