Make the indexing operation fail upon StorageException(s)

Change 270450 caused the blanking of the Lucene document
upon reindexing if any field caused a StorageException.

Whilst the overall intention was good, the implementation
caused the Lucene index replace operation to continue with
a Document without any fields instead of failing the whole
operation.

StorageExceptions are thrown when the underlying storage,
be it a filesystem or anything else, returns an error.
Whether the failure is permanent or temporary (e.g. concurrent
GCs, repacking or pruning may cause sporadic StorageExceptions),
returning a blank Lucene document was incorrect: instead
of failing the operation, it caused the change entry to be
completely removed from the index.

Let the StorageException fail the indexing operation, so that
existing entries are preserved and the caller can retry the
operation.
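A caller-side retry could then look like the following minimal
sketch (the retry helper, the simulated exception class and all
names below are illustrative, not part of Gerrit's API):

```java
import java.util.concurrent.Callable;

public class RetryExample {
  // Illustrative stand-in for Gerrit's StorageException (an unchecked exception).
  static class StorageException extends RuntimeException {
    StorageException(String msg) {
      super(msg);
    }
  }

  // Retry an operation a bounded number of times. Because the indexing
  // operation now fails instead of writing a blank document, the existing
  // index entry is left intact and retrying is safe.
  static <T> T retry(Callable<T> op, int maxAttempts) throws Exception {
    StorageException last = null;
    for (int i = 0; i < maxAttempts; i++) {
      try {
        return op.call();
      } catch (StorageException e) {
        last = e; // sporadic storage error: try again
      }
    }
    throw last;
  }

  public static void main(String[] args) throws Exception {
    int[] attempts = {0};
    // Simulated indexing operation: fails twice, then succeeds.
    String result =
        retry(
            () -> {
              if (++attempts[0] < 3) {
                throw new StorageException("Simulated storage exception");
              }
              return "indexed";
            },
            5);
    System.out.println(result + " after " + attempts[0] + " attempts");
    // prints "indexed after 3 attempts"
  }
}
```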

The previous implementation, returning an empty Document, did
not allow any retry: once the change entry was removed from
the index, it could no longer be discovered and reindexed,
for example by a `gerrit index changes <changenum>`.

Tested manually by injecting a random StorageException thrown
during the fetching of the ChangeField extensions:

public static Set<String> getExtensions(ChangeData cd) {
  if (new Random().nextBoolean()) {
    throw new StorageException("Simulated storage exception");
  }
  return extensions(cd).collect(toSet());
}

Before this change, every time the indexing of a change threw
a StorageException, the change disappeared from the index.
Eventually, all changes would be eliminated, and only an
off-line reindex or a reindex of all changes in the project
would recover them.

After this change, some indexing operations succeed and others
fail; however, retrying the failed operations eventually
reindexes all of them.

Even if the above test case looks strange at first sight,
it simulates a real-life scenario where a low percentage
of indexing operations (0.1%) may fail because of sporadic
StorageExceptions. Before this change, some index entries
went missing on a daily basis (5 to 10 changes per day);
after this change, all failed indexing operations can be
retried and no index entries are lost.
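The propagation behaviour introduced by the diff below can be
reproduced in isolation: an exception thrown while mapping a
field is no longer swallowed but escapes the stream collection
to the caller. The classes and field names in this sketch are
standalone stand-ins, not Gerrit's actual types:

```java
import java.util.List;
import java.util.Objects;
import java.util.stream.Collectors;

public class PropagationExample {
  // Illustrative stand-in for Gerrit's StorageException.
  static class StorageException extends RuntimeException {
    StorageException(String msg) {
      super(msg);
    }
  }

  // Stand-in for Schema.buildFields: an exception while mapping a field
  // now propagates out of collect() instead of yielding an empty document.
  static List<String> buildFields(List<String> fieldNames) {
    return fieldNames.stream()
        .map(
            name -> {
              if (name.equals("broken")) {
                throw new StorageException("Simulated storage exception");
              }
              return name + "-value";
            })
        .filter(Objects::nonNull)
        .collect(Collectors.toList());
  }

  public static void main(String[] args) {
    System.out.println(buildFields(List.of("a", "b"))); // [a-value, b-value]
    try {
      buildFields(List.of("a", "broken"));
    } catch (StorageException e) {
      // The caller sees the failure and can retry, keeping the old entry.
      System.out.println("propagated: " + e.getMessage());
    }
  }
}
```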

Bug: Issue 314113030
Release-Notes: Fail the change reindex operation upon StorageException(s)
Change-Id: Ia121f47f7a68c290849a22dea657804743a26b0d
diff --git a/java/com/google/gerrit/index/Schema.java b/java/com/google/gerrit/index/Schema.java
index 3aa9de0..ec14a15 100644
--- a/java/com/google/gerrit/index/Schema.java
+++ b/java/com/google/gerrit/index/Schema.java
@@ -207,21 +207,19 @@
   /**
    * Build all fields in the schema from an input object.
    *
-   * <p>Null values are omitted, as are fields which cause errors, which are logged.
+   * <p>Null values are omitted, as are fields which cause errors, which are logged. If any of the
+   * fields cause a StorageException, the whole operation fails and the exception is propagated to
+   * the caller.
    *
    * @param obj input object.
    * @param skipFields set of field names to skip when indexing the document
    * @return all non-null field values from the object.
    */
   public final Iterable<Values<T>> buildFields(T obj, ImmutableSet<String> skipFields) {
-    try {
-      return fields.values().stream()
-          .map(f -> fieldValues(obj, f, skipFields))
-          .filter(Objects::nonNull)
-          .collect(toImmutableList());
-    } catch (StorageException e) {
-      return ImmutableList.of();
-    }
+    return fields.values().stream()
+        .map(f -> fieldValues(obj, f, skipFields))
+        .filter(Objects::nonNull)
+        .collect(toImmutableList());
   }
 
   @Override