commit | 243eb5c48719b3409b784d42e17f0c05416c833f | [log] [tgz]
---|---|---
author | Fabio Ponciroli <ponch78@gmail.com> | Wed Dec 06 13:04:14 2017 +0000
committer | Fabio Ponciroli <ponch78@gmail.com> | Thu Dec 14 14:20:04 2017 +0100
tree | 32bf30c290a8e8a833665a268613adde75ae7d0f |
parent | 785e93bd7ce571c34d8465d1e0466e18132a9117 | [diff]
Automatically import Kibana dashboards

Facilitate spinning up the environment by importing the Kibana dashboard and creating the empty gerrit Elasticsearch index.

Change-Id: I921c9c469d740d79a3f75bd54e7221bfd81c694b
Spark ETL to extract analytics data from Gerrit projects.

The job can be launched with the following parameters:
```bash
bin/spark-submit \
  --conf spark.es.nodes=es.mycompany.com \
  --conf spark.es.net.http.auth.user=elastic \
  --conf spark.es.net.http.auth.pass=changeme \
  $JARS/SparkAnalytics-assembly-1.0.jar \
  --since 2000-06-01 \
  --aggregate email_hour \
  --url http://gerrit.mycompany.com \
  -e gerrit/analytics
```
{"author": "John", "emails": ["john@email.com", "john@anotheremail.com"]} {"author": "David", "emails": ["david.smith@email.com", "david@myemail.com"]}
A docker compose file is provided to spin up an instance of Elasticsearch with Kibana locally. Just run `docker-compose up`.

Kibana will run on port 5601 and Elasticsearch on port 9200. The Elasticsearch default user is `elastic` and the default password is `changeme`.

If Elasticsearch dies with exit code 137, you might have to give Docker more memory (check this article for more details).
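For reference, a minimal sketch of what such a `docker-compose.yml` could look like; the image versions and memory settings below are assumptions, not the file shipped with the project:

```yaml
# Sketch only: versions and JVM options are assumptions
version: '2'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:5.6.4  # assumed version
    ports:
      - "9200:9200"
    environment:
      # Cap the heap so the container fits in Docker's default memory limit
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
  kibana:
    image: docker.elastic.co/kibana/kibana:5.6.4  # assumed version
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch
```

The 5.x images ship with X-Pack security enabled, which is where the `elastic`/`changeme` default credentials mentioned above come from.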