README: Improve markdown formatting

Several code blocks near the bottom and the tables near the top were
missing formatting or used invalid formatting. Fix them and apply
consistent code block formatting, with shell syntax highlighting where
appropriate.

The three blocks with example log messages were left as indent-style
code blocks, since those seem to read better when viewing the raw text.
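
For illustration, using one of the log lines quoted in the README, the
indent-style version

    2021-06-11 09:29:01,019 INFO zuul.Pipeline.gerrit.check: [e: ...]

stays readable as plain text, whereas a fenced version would wrap the
same line in ``` markers.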

Change-Id: Ib26ceab6d7b384d67e2c9007426c6396a9f8b4cc
diff --git a/README.md b/README.md
index fec861b..369a2f0 100644
--- a/README.md
+++ b/README.md
@@ -4,9 +4,9 @@
 be automatically deployed by Zuul after they merge.  This is the
 preferred way to make changes.
 
-To add new projects, edit zuul/main.yaml.
+To add new projects, edit `zuul/main.yaml`.
 
-To add new node types, edit nodepool/nodepool.yaml.
+To add new node types, edit `nodepool/nodepool.yaml`.
 
 # Debugging
 
@@ -14,29 +14,32 @@
 and correct the problem.
 
 Zuul consists of several related processes:
 
-|zuul-scheduler   | the main decision-making process; 
+|Process          | Description
+|-------          | -----------
+|zuul-scheduler   | the main decision-making process;
 |                 | listens to events and dispatches jobs
 |zuul-executor    | runs jobs; this runs the Ansible processes which do the
 |                 | work, but the actual work happens on ephemeral cloud nodes
 |zuul-web         | serves the web interface
-|nodepool-launcher| creates and deletes cloud resources as needed for
-|                 | test nodes.
+|nodepool-launcher| creates and deletes cloud resources as needed for test nodes
 
 And this installation has one extra component not normally used in Zuul:
 
+|Component        | Description
+|---------------- | -----------
 |gcloud-authdaemon| keeps updated credentials available to
-|                 | zuul-executor for storing logs in Google Cloud
-|                 | Storage
+|                 | zuul-executor for storing logs in Google Cloud Storage
 
 To obtain a `.kube/config` file suitable for using with `kubectl` run:
-
-    gcloud container clusters get-credentials --project gerritcodereview-ci \
-	       --zone us-central1-a zuul-control-plane
+```sh
+gcloud container clusters get-credentials --project gerritcodereview-ci \
+    --zone us-central1-a zuul-control-plane
+```
 
 After that, verify it works by listing the Zuul pods:
 
-```
+```sh
 $ kubectl -n zuul get pod
 NAME                                     READY   STATUS    RESTARTS   AGE
 gcloud-authdaemon-4klk5                  1/1     Running   0          23d
@@ -51,8 +54,9 @@
 ## Logs
 
 All components log to stderr, so to see the logs, run something like:
-
-    kubectl -n zuul logs zuul-scheduler-0
+```sh
+kubectl -n zuul logs zuul-scheduler-0
+```
 
 To trace events through the various components, there are two helpful
 identifiers: the event ID and, once builds have been started, the
@@ -66,7 +70,7 @@
 
     2021-06-11 09:29:01,019 INFO zuul.Pipeline.gerrit.check: [e: a5d9669ace3b438782a05f82faa63daa] Adding change <Change 0x7ff3040a94c0 plugins/code-owners 309042,2> to queue <ChangeQueue check: plugins/code-owners> in <Pipeline check>
 
-The event ID is in brackets and prefixed with "e:" for brevity.
+The event ID is in brackets and prefixed with `e:` for brevity.
 
 An event ID may cause multiple builds of jobs to run.  You can narrow
 build-related log entries down by using the build UUID.  Here's an example with both kinds of IDs:
@@ -80,27 +84,31 @@
 If you are debugging a problem with a job, the executor has the
 ability to enable verbose logs from Ansible (equivalent to passing
 -vvv to the ansible-playbook command).  To turn this on run:
+```sh
+kubectl -n zuul exec zuul-executor-0 -- zuul-executor verbose
+```
 
-    kubectl -n zuul exec zuul-executor-0 -- zuul-executor verbose
-	
 To disable it, run:
-
-    kubectl -n zuul exec zuul-executor-0 -- zuul-executor unverbose
+```sh
+kubectl -n zuul exec zuul-executor-0 -- zuul-executor unverbose
+```
 
 ## Restarting
 
 To perform a complete hard-restart of the system, run the following commands:
-
-    kubectl -n zuul delete pod -l app.kubernetes.io/name=nodepool
-    kubectl -n zuul delete pod -l app.kubernetes.io/name=zuul
+```sh
+kubectl -n zuul delete pod -l app.kubernetes.io/name=nodepool
+kubectl -n zuul delete pod -l app.kubernetes.io/name=zuul
+```
 
 This will delete all of the running pods, and Kubernetes will recreate
 them from the deployment configuration.  Note that the scheduler takes
 some time to become ready, and during this time the web interface may
 be available but contain no data (it may say "Something went wrong" or
 display error toasts).  You can monitor the progress with:
-
-    kubectl -n zuul logs -f zuul-scheduler-0
+```sh
+kubectl -n zuul logs -f zuul-scheduler-0
+```
 
 The log line that indicates success is:
 
@@ -119,8 +127,9 @@
 updates its configuration.  However, if something goes wrong and Zuul
 somehow misses such an event, you can tell it to reload its config
 from scratch with this command:
-
-    kubectl -n zuul exec zuul-scheduler-0 zuul-scheduler full-reconfigure
+```sh
+kubectl -n zuul exec zuul-scheduler-0 -- zuul-scheduler full-reconfigure
+```
 
 This will pause Zuul's processing while it performs the
 reconfiguration, but it will not miss any events and will pick up
@@ -128,7 +137,7 @@
 
 Note that there is currently a bug where Zuul will not see changes to
 its config files if those changes are due to merge commits (unless the
-.zuul.yaml file is changed as content in the merge commit).
+`.zuul.yaml` file is changed as content in the merge commit).
 
 ## Deleting ZooKeeper State
 
@@ -136,13 +145,15 @@
 Zuul or some other source of data corruption, it may be necessary to
 delete the state from ZooKeeper and restart the cluster.  To do this,
 stop all of the Pods and then run:
-
-    kubectl -n zuul apply -f k8s/delete-state.yaml
+```sh
+kubectl -n zuul apply -f k8s/delete-state.yaml
+```
 
 This will run a k8s job to delete the state.  Once that pod has
 exited, run the following command to clean it up:
-
-    kubectl -n zuul delete job zuul-delete-state
+```sh
+kubectl -n zuul delete job zuul-delete-state
+```
 
 Then restart all of the Zuul services.
 
@@ -160,30 +171,37 @@
 Zuul.
 
 ## Install certmanager
-
+```sh
 kubectl create namespace cert-manager
 kubectl label namespace cert-manager certmanager.k8s.io/disable-validation=true
 kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v0.12.0/cert-manager.yaml
 kubectl apply -n cert-manager -f letsencrypt.yaml
+```
 
 ## Install mariadb
-
+```sh
 kubectl create namespace mariadb
+```
 
 Use Google cloud click to deploy
-TODO: find a better HA sql database operator
 
+**TODO**: find a better HA SQL database operator
+
+```sh
 kubectl port-forward svc/mariadb-mariadb --namespace mariadb 3306
 mysql -h 127.0.0.1 -P 3306 -u root -p
 create database zuul;
 GRANT ALL PRIVILEGES ON zuul.* TO 'zuul'@'%' identified by '<password>' WITH GRANT OPTION;
+```
 
 ## Install Zuul
-
+```sh
 gcloud compute addresses create zuul-static-ip --global
 kubectl create namespace zuul
+```
 
 ## Bind k8s service accounts to gcp service accounts
+```sh
 kubectl create serviceaccount --namespace zuul logs
 kubectl create serviceaccount --namespace zuul nodepool
 kubectl create serviceaccount --namespace zuul zuul
@@ -214,10 +232,12 @@
 kubectl annotate serviceaccount \
   --namespace zuul zuul \
   iam.gke.io/gcp-service-account=zuul-63@gerritcodereview-ci.iam.gserviceaccount.com
+```
 
 ## Create a service account for self-deployment
-
+```sh
 kubectl -n zuul create serviceaccount zuul-deployment
 kubectl create clusterrolebinding zuul-deployment-cluster-admin-binding \
   --clusterrole cluster-admin \
   --user system:serviceaccount:zuul:zuul-deployment
+```
\ No newline at end of file