Refactor LuceneFailreTest to use the DataModel, not LuceneTestBase #3010
base: main
Conversation
Result of fdb-record-layer-pr on Linux CentOS 7
```java
recordStore.saveRecord(createComplexDocument(8888L, WAYLON, 2, Instant.now().plus(1, ChronoUnit.DAYS).toEpochMilli()));
recordStore.saveRecord(createComplexDocument(9999L, "hello world!", 1, Instant.now().plus(2, ChronoUnit.DAYS).toEpochMilli()));
injectedFailures.addFailure(LUCENE_GET_FILE_REFERENCE_CACHE_ASYNC,
        new FDBExceptions.FDBStoreTransactionIsTooOldException("Blah", new FDBException("Blah", 7)),
```
Why change the exception being thrown?
The new method replaces two test methods: one with `AsyncToSyncTimeoutException` and one with `FDBStoreTransactionIsTooOldException`. The new test method has an additional parameter (`isGrouped`) and throws `FDBStoreTransactionIsTooOldException`.
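For readers following along, the consolidated test might look roughly like this. This is a sketch only: the failure key and exception come from the snippet quoted above, `@ParameterizedTest`/`@ValueSource` are standard JUnit 5, and everything else (method name, setup, assertions) is assumed rather than taken from the PR:

```java
// Sketch, not the actual PR code: one parameterized test replacing the two
// old methods, always injecting FDBStoreTransactionIsTooOldException.
@ParameterizedTest
@ValueSource(booleans = {false, true})
void luceneGetFileReferenceFailure(boolean isGrouped) {  // hypothetical name
    try (FDBRecordContext context = openContext(contextProps)) {
        // ... set up the index and save documents, grouped or ungrouped per isGrouped ...
        injectedFailures.addFailure(LUCENE_GET_FILE_REFERENCE_CACHE_ASYNC,
                new FDBExceptions.FDBStoreTransactionIsTooOldException("Blah", new FDBException("Blah", 7)),
                0);
        // ... run the search and assert the injected exception surfaces ...
    }
}
```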
```java
try (FDBRecordContext context = openContext(contextProps)) {
    rebuildIndexMetaData(context, COMPLEX_DOC, COMPLEX_PARTITIONED_NOGROUP);
    final LuceneScanBounds scanBounds = fullTextSearch(COMPLEX_PARTITIONED_NOGROUP, "text:propose");
    injectedFailures.addFailure(LUCENE_GET_FDB_LUCENE_FILE_REFERENCE_ASYNC,
```
Before, this would inject the failure before saving any records, but the injected failure wouldn't fire until it did the search. Is that because the `fullTextSearch` above would have already loaded the file reference, and it wouldn't need to do so again until it got to the search below?
(Just trying to make sure I understand what's going on; I think it's better to start injecting after the save.)
Before, there were two test methods (one for grouped and one for ungrouped search). I don't remember exactly why the injection was added at the top, and I believe your reasoning is correct: the `bind` call for the creation of the scan bounds loaded the cache.
Once I refactored the code and consolidated the two methods, it made sense to move the injection below the save, as it has a more direct relationship to the action being carried out.
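Schematically, the ordering being discussed might look like this (a sketch only; `dataModel.saveRecord` and the failure key are quoted elsewhere in this thread, while the search call and assertion are placeholders, not the PR's actual code):

```java
// Sketch of the ordering, not the actual PR code.
dataModel.saveRecord(start, store, docGroupFieldValue);          // save completes normally
injectedFailures.addFailure(LUCENE_GET_FDB_LUCENE_FILE_REFERENCE_ASYNC,
        /* exception to throw */ null, 0);                       // inject only after the save
// The search is now the first operation to hit the injected failure,
// so the assertion below is tied directly to the search, not the save.
// assertThrows(..., () -> /* run the search against scanBounds */);
```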
```java
// create/save documents
long docGroupFieldValue = groupingKey.isEmpty() ? 0L : groupingKey.getLong(0);
```
`groupingKey` is always `Tuple.from(1L)`; would it make sense to reorder the assignments:
```java
long docGroupFieldValue = 1L;
Tuple groupingKey = Tuple.from(docGroupFieldValue);
```
```java
long docGroupFieldValue = groupingKey.isEmpty() ? 0L : groupingKey.getLong(0);
final LuceneIndexTestDataModel dataModel = new LuceneIndexTestDataModel.Builder(seed, this::getStoreBuilderWithRegistry, pathManager)
        .setIsGrouped(true)
        .setPartitionHighWatermark(10)
```
It might be worth extracting this into a variable, and making `totalDocCount` a function of it, e.g.:
```java
final int partitionHighWatermark = 10;
final int totalDocCount = partitionHighWatermark * 2;
```
```java
@@ -273,106 +272,122 @@ void repartitionAndMerge(boolean useLegacyAsyncToSync) throws IOException {
            segmentCounts);
    try (FDBRecordContext context = openContext(contextProps)) {
```
You could probably just call:
```java
dataModel.validate(() -> openContext(contextProps));
```
It will assert that the documents align, and that the partitions are all an acceptable size. It is less strict than the assertions here, but it is probably less brittle in the long run.
Do you mean to replace the entire `try` block, including both `validateDocsInPartition` calls? In that case, it would probably also make sense to change the original in `LuceneIndexTest`?
```java
final LuceneIndexTestDataModel dataModel = new LuceneIndexTestDataModel.Builder(seed, this::getStoreBuilderWithRegistry, pathManager)
        .setIsGrouped(true)
        .setPartitionHighWatermark(10)
```
Based on the comment above, should this be:
```diff
-.setPartitionHighWatermark(10)
+.setPartitionHighWatermark(totalDocCount)
```
```java
        .setPartitionHighWatermark(10)
        .build();
```
```java
int docGroupFieldValue = groupingKey.isEmpty() ? 0 : (int) groupingKey.getLong(0);
```
Here, also, would it be worth rearranging so that the `groupingKey` is derived from `docGroupFieldValue`?
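Concretely, mirroring the earlier suggestion but for the `int` case, that rearrangement might look like (a sketch only, assuming the same `Tuple` usage as the snippets above):

```java
// Sketch: derive the grouping key from the field value, not the other way around.
int docGroupFieldValue = 1;
Tuple groupingKey = Tuple.from((long) docGroupFieldValue);
```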
```diff
-for (int i = 0; i < 20; i++) {
-    recordStore.saveRecord(createComplexDocument(1000L + totalDocCount + i, ENGINEER_JOKE, docGroupFieldValue, start - i - 1));
+for (int i = 0; i < 10; i++) {
+    dataModel.saveRecord(start - i - 1, store, docGroupFieldValue);
```
Why are you decrementing the `start` here?
```diff
 // Inject failures (using partition 0)
 injectedFailures.addFailure(LUCENE_READ_BLOCK,
         new LuceneConcurrency.AsyncToSyncTimeoutException("Blah", new TimeoutException("Blah")),
         5);
 // this should fail with injected exception
-recordStore.saveRecord(createComplexDocument(1000L, ENGINEER_JOKE, docGroupFieldValue, 2));
+dataModel.saveRecord(2, store, docGroupFieldValue);
```
This is not updating a record, it is saving a new record. You'll want to use:
```java
dataModel.recordsUnderTest().forEach(record -> record.updateOtherValue(store));
```
```java
        context.getDatabase().setAsyncToSyncExceptionMapper(this::mapExceptions);
    } else {
        context.getDatabase().setAsyncToSyncExceptionMapper((ex, ev) -> FDBExceptions.wrapException(ex));
    }
}
```

```java
private void setupExceptionMapping(boolean useExceptionMapping) {
```
Why have the two different methods?
No description provided.