Refactor LuceneFailureTest to use the DataModel, not LuceneTestBase #3010
ohadzeliger wants to merge 2 commits into FoundationDB:main from
Conversation
Result of fdb-record-layer-pr on Linux CentOS 7
recordStore.saveRecord(createComplexDocument(8888L, WAYLON, 2, Instant.now().plus(1, ChronoUnit.DAYS).toEpochMilli()));
recordStore.saveRecord(createComplexDocument(9999L, "hello world!", 1, Instant.now().plus(2, ChronoUnit.DAYS).toEpochMilli()));
injectedFailures.addFailure(LUCENE_GET_FILE_REFERENCE_CACHE_ASYNC,
        new FDBExceptions.FDBStoreTransactionIsTooOldException("Blah", new FDBException("Blah", 7)),
Why change the exception being thrown?
The new method replaces two test methods: one with AsyncToSyncTimeoutException and one with FDBStoreTransactionIsTooOldException. The new test method has an additional parameter (isGrouped) and throws FDBStoreTransactionIsTooOldException.
try (FDBRecordContext context = openContext(contextProps)) {
    rebuildIndexMetaData(context, COMPLEX_DOC, COMPLEX_PARTITIONED_NOGROUP);
    final LuceneScanBounds scanBounds = fullTextSearch(COMPLEX_PARTITIONED_NOGROUP, "text:propose");
    injectedFailures.addFailure(LUCENE_GET_FDB_LUCENE_FILE_REFERENCE_ASYNC,
Before this change, the failure was injected before saving any records, but it wouldn't fire until the search ran. Is that because the fullTextSearch above would have already loaded the file reference, and it wouldn't need to do so again until it got to the search below?
(Just trying to make sure I understand what's going on; I think it's better to start injecting after the save.)
Before, there were two test methods (one for grouped and one for ungrouped search). I don't remember exactly why the injection was added at the top, and I believe your reasoning is correct: the bind call for the creation of the scan bounds loaded the cache.
Once I refactored the code and consolidated the two methods, it made sense to move the injection below the save, as it has a more direct relationship to the action being carried out.
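The caching behavior discussed above can be illustrated with a minimal, self-contained sketch (the `ReferenceCache` class and its method names here are hypothetical stand-ins, not the record layer's actual file-reference cache): once a key has been resolved through `computeIfAbsent`, a failure injected afterwards never fires for that key, because the loader only runs for keys that still need a real load.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Hypothetical stand-in for a file-reference cache; the real
// Lucene index maintainer classes are not reproduced here.
class ReferenceCache {
    private final Map<String, String> cache = new HashMap<>();
    // Injected failure: when non-null, every *real* load throws.
    private Supplier<RuntimeException> injectedFailure = null;

    void injectFailure(Supplier<RuntimeException> failure) {
        this.injectedFailure = failure;
    }

    String getReference(String key) {
        // The loader lambda only runs on a cache miss, so a failure
        // injected after the first load never fires for a cached key.
        return cache.computeIfAbsent(key, k -> {
            if (injectedFailure != null) {
                throw injectedFailure.get();
            }
            return "ref-for-" + k;
        });
    }
}

public class CacheDemo {
    public static void main(String[] args) {
        ReferenceCache cache = new ReferenceCache();
        cache.getReference("segment_1"); // real load, populates the cache
        cache.injectFailure(() -> new IllegalStateException("transaction_too_old"));
        // Cached key: the loader never runs, so the injected failure does not fire.
        System.out.println(cache.getReference("segment_1"));
        // Fresh key: a real load is needed, so the failure fires.
        try {
            cache.getReference("segment_2");
        } catch (IllegalStateException e) {
            System.out.println("failed: " + e.getMessage());
        }
    }
}
```

This mirrors why injecting before the first `fullTextSearch` would be masked by the cache, and why moving the injection after the save ties it directly to the operation under test.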
        .build();

// create/save documents
long docGroupFieldValue = groupingKey.isEmpty() ? 0L : groupingKey.getLong(0);
groupingKey is always Tuple.from(1L); would it make sense to reorder the assignments?

long docGroupFieldValue = 1L;
Tuple groupingKey = Tuple.from(docGroupFieldValue);
final LuceneIndexTestDataModel dataModel = new LuceneIndexTestDataModel.Builder(seed, this::getStoreBuilderWithRegistry, pathManager)
        .setIsGrouped(true)
        .setPartitionHighWatermark(10)
It might be worth extracting this into a variable and making totalDocCount a function of it, e.g.:

final int partitionHighWatermark = 10;
final int totalDocCount = partitionHighWatermark * 2;

@@ -273,106 +272,122 @@ void repartitionAndMerge(boolean useLegacyAsyncToSync) throws IOException {
        segmentCounts);

try (FDBRecordContext context = openContext(contextProps)) {
You could probably just call:

dataModel.validate(() -> openContext(contextProps));

It will assert that the documents align, and that the partitions are all an acceptable size. It is less strict than the assertions here, but it is probably less brittle in the long run.
Do you mean to replace the entire try block with both validateDocsInPartition calls in it? In that case, it would probably also make sense to change the original in LuceneIndexTest?
final LuceneIndexTestDataModel dataModel = new LuceneIndexTestDataModel.Builder(seed, this::getStoreBuilderWithRegistry, pathManager)
        .setIsGrouped(true)
        .setPartitionHighWatermark(10)
Based on the comment above, should this be:

-        .setPartitionHighWatermark(10)
+        .setPartitionHighWatermark(totalDocCount)
        .setPartitionHighWatermark(10)
        .build();
int docGroupFieldValue = groupingKey.isEmpty() ? 0 : (int)groupingKey.getLong(0);
Here, also, would it be worth rearranging so that the groupingKey is derived from docGroupFieldValue?
for (int i = 0; i < 20; i++) {
    recordStore.saveRecord(createComplexDocument(1000L + totalDocCount + i, ENGINEER_JOKE, docGroupFieldValue, start - i - 1));
for (int i = 0; i < 10; i++) {
    dataModel.saveRecord(start - i - 1, store, docGroupFieldValue);
Why are you decrementing the start here?
        5);
// this should fail with injected exception
recordStore.saveRecord(createComplexDocument(1000L, ENGINEER_JOKE, docGroupFieldValue, 2));
dataModel.saveRecord(2, store, docGroupFieldValue);
This is not updating a record, it is saving a new record. You'll want to use:

dataModel.recordsUnderTest().forEach(record -> record.updateOtherValue(store));
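The save-vs-update distinction the comment draws can be sketched with a toy record store keyed by primary key (the `ToyStore` name and structure here are hypothetical illustrations, not the data model's real API): saving under a fresh primary key inserts a second document, while re-saving under an existing key rewrites the document already under test.

```java
import java.util.HashMap;
import java.util.Map;

// Toy stand-in for a record store keyed by primary key; the real
// FDBRecordStore / LuceneIndexTestDataModel APIs are not reproduced here.
class ToyStore {
    final Map<Long, String> records = new HashMap<>();

    void saveRecord(long primaryKey, String text) {
        // Upsert semantics: same key overwrites, new key inserts.
        records.put(primaryKey, text);
    }
}

public class SaveVsUpdate {
    public static void main(String[] args) {
        ToyStore store = new ToyStore();
        store.saveRecord(1000L, "original text");

        // Saving under a fresh primary key creates a *second* record...
        store.saveRecord(2000L, "new text");
        System.out.println(store.records.size()); // 2

        // ...whereas re-saving under the existing key updates it in place.
        store.saveRecord(1000L, "updated text");
        System.out.println(store.records.size());        // still 2
        System.out.println(store.records.get(1000L));    // updated text
    }
}
```

This is why a test meant to exercise an update path needs to target a record that already exists, rather than saving a document under a new key.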
    }
}

private void setupExceptionMapping(boolean useExceptionMapping) {
Why have the two different methods?