Conversation

@ohadzeliger
Contributor

No description provided.

@ohadzeliger ohadzeliger self-assigned this Dec 20, 2024
@foundationdb-ci
Contributor

Result of fdb-record-layer-pr on Linux CentOS 7

  • Commit ID: b591aa5
  • Duration 0:36:26
  • Result: ❌ FAILED
  • Error: Error while executing command: ./gradlew --no-daemon --console=plain -b ./build.gradle build destructiveTest -PcoreNotStrict -PreleaseBuild=false -PpublishBuild=false -PspotbugsEnableHtmlReport. Reason: exit status 1
  • Build Log terminal output (available for 30 days)
  • Build Workspace zip file of the working directory (available for 30 days)

@foundationdb-ci
Contributor

Result of fdb-record-layer-pr on Linux CentOS 7

  • Commit ID: e3cdd50
  • Duration 0:47:27
  • Result: ✅ SUCCEEDED
  • Error: N/A
  • Build Log terminal output (available for 30 days)
  • Build Workspace zip file of the working directory (available for 30 days)

@ohadzeliger ohadzeliger marked this pull request as ready for review January 2, 2025 20:57
@ohadzeliger ohadzeliger requested a review from ScottDugas January 2, 2025 20:57
recordStore.saveRecord(createComplexDocument(8888L, WAYLON, 2, Instant.now().plus(1, ChronoUnit.DAYS).toEpochMilli()));
recordStore.saveRecord(createComplexDocument(9999L, "hello world!", 1, Instant.now().plus(2, ChronoUnit.DAYS).toEpochMilli()));
injectedFailures.addFailure(LUCENE_GET_FILE_REFERENCE_CACHE_ASYNC,
new FDBExceptions.FDBStoreTransactionIsTooOldException("Blah", new FDBException("Blah", 7)),
Collaborator

Why change the exception being thrown?

Contributor Author

The new method replaces two test methods: one with AsyncToSyncTimeoutException and one with FDBStoreTransactionIsTooOldException. The new test method has an additional parameter (isGrouped) and throws FDBStoreTransactionIsTooOldException.
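For context, the consolidation described here can be sketched roughly as follows. All names in this sketch are illustrative stand-ins rather than the record layer's real API; the actual test injects FDBStoreTransactionIsTooOldException through the test harness's injectedFailures registry.

```java
import java.util.List;

// Illustrative sketch: two near-duplicate failure tests collapsed into one
// body parameterized by isGrouped. In the real code this is a JUnit
// parameterized test; here a plain loop stands in for the parameterization.
public class ConsolidatedFailureSketch {
    static String searchUnderInjectedFailure(boolean isGrouped) {
        // Pick the index variant based on the parameter; previously each
        // variant had its own test method, each throwing a different exception.
        String index = isGrouped ? "COMPLEX_PARTITIONED" : "COMPLEX_PARTITIONED_NOGROUP";
        // ... inject FDBStoreTransactionIsTooOldException, run the search,
        // and assert that the injected failure surfaces ...
        return index;
    }

    public static void main(String[] args) {
        for (boolean isGrouped : List.of(false, true)) {
            System.out.println(searchUnderInjectedFailure(isGrouped));
        }
    }
}
```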

try (FDBRecordContext context = openContext(contextProps)) {
rebuildIndexMetaData(context, COMPLEX_DOC, COMPLEX_PARTITIONED_NOGROUP);
final LuceneScanBounds scanBounds = fullTextSearch(COMPLEX_PARTITIONED_NOGROUP, "text:propose");
injectedFailures.addFailure(LUCENE_GET_FDB_LUCENE_FILE_REFERENCE_ASYNC,
Collaborator

Before, this would inject the failure before saving any records, but the injected failure wouldn't fire until it did the search. Is that because the fullTextSearch above would have already loaded the file reference, and it wouldn't need to do so again until it got to the search below?
(Just trying to make sure I understand what's going on; I think it's better to start injecting after the save.)

Contributor Author

Before, there were two test methods (one for grouped and one for ungrouped search). I don't remember exactly why the injection was added at the top, but I believe your reasoning is correct: the bind call for the creation of the scan bounds loaded the cache.
Once I refactored the code and consolidated the two methods, it made sense to move the injection below the save, since that gives it a more direct relationship to the action being carried out.
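The cache-warming effect being discussed can be modeled with a toy sketch (hypothetical names, not the record layer's API): a failure armed after the initial load never fires on cached references, only on the next fresh load.

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of why the injection point matters: once a file reference is in
// the cache, an armed failure on the load path never fires for it; only a
// load of a not-yet-cached reference trips the injected failure.
public class InjectionOrderSketch {
    private final Map<String, String> fileReferenceCache = new HashMap<>();
    boolean failureArmed = false;

    String loadFileReference(String name) {
        if (fileReferenceCache.containsKey(name)) {
            return fileReferenceCache.get(name); // cache hit: failure never fires
        }
        if (failureArmed) {
            throw new IllegalStateException("injected: transaction_too_old");
        }
        String ref = "ref:" + name;
        fileReferenceCache.put(name, ref);
        return ref;
    }

    public static void main(String[] args) {
        InjectionOrderSketch store = new InjectionOrderSketch();
        store.loadFileReference("segment_1"); // save path warms the cache
        store.failureArmed = true;            // inject only after the save
        store.loadFileReference("segment_1"); // still succeeds from cache
        try {
            store.loadFileReference("segment_2"); // fresh load trips the failure
        } catch (IllegalStateException e) {
            System.out.println("failed as expected: " + e.getMessage());
        }
    }
}
```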

.build();

// create/save documents
long docGroupFieldValue = groupingKey.isEmpty() ? 0L : groupingKey.getLong(0);
Collaborator

groupingKey is always Tuple.from(1L); would it make sense to reorder the assignments:

long docGroupFieldValue = 1L;
Tuple groupingKey = Tuple.from(docGroupFieldValue);

long docGroupFieldValue = groupingKey.isEmpty() ? 0L : groupingKey.getLong(0);
final LuceneIndexTestDataModel dataModel = new LuceneIndexTestDataModel.Builder(seed, this::getStoreBuilderWithRegistry, pathManager)
.setIsGrouped(true)
.setPartitionHighWatermark(10)
Collaborator

It might be worth extracting this into a variable, and making totalDocCount a function of it, e.g.:

final int partitionHighWatermark = 10;
final int totalDocCount = partitionHighWatermark * 2;

.collect(Collectors.toMap(Function.identity(), partitionId -> 2)),
segmentCounts);

try (FDBRecordContext context = openContext(contextProps)) {
Collaborator

You could probably just call:

dataModel.validate(() -> openContext(contextProps));

It will assert that the documents align, and that the partitions are all an acceptable size. It is less strict than the assertions here, but it is probably less brittle in the long run.

Contributor Author

Do you mean to replace the entire try block, with both validateDocsInPartition calls in it? In that case, it would probably also make sense to change the original in LuceneIndexTest?


final LuceneIndexTestDataModel dataModel = new LuceneIndexTestDataModel.Builder(seed, this::getStoreBuilderWithRegistry, pathManager)
.setIsGrouped(true)
.setPartitionHighWatermark(10)
Collaborator

Based on the comment above, should this be:

Suggested change
.setPartitionHighWatermark(10)
.setPartitionHighWatermark(totalDocCount)

.setPartitionHighWatermark(10)
.build();

int docGroupFieldValue = groupingKey.isEmpty() ? 0 : (int)groupingKey.getLong(0);
Collaborator

Here, also, would it be worth rearranging so that the groupingKey is derived from docGroupFieldValue?
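The rearrangement being suggested can be sketched as follows, with a plain List standing in for the FDB Tuple since the snippet is only illustrative; the field value becomes the single source of truth and the isEmpty fallback disappears.

```java
import java.util.List;

public class GroupingKeyOrder {
    public static void main(String[] args) {
        // Derive the grouping key from the field value, not the other way
        // around; the ternary fallback then becomes unnecessary.
        long docGroupFieldValue = 1L;
        List<Long> groupingKey = List.of(docGroupFieldValue); // stands in for Tuple.from(...)
        System.out.println(groupingKey.isEmpty() ? 0L : groupingKey.get(0));
    }
}
```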

for (int i = 0; i < 20; i++) {
recordStore.saveRecord(createComplexDocument(1000L + totalDocCount + i, ENGINEER_JOKE, docGroupFieldValue, start - i - 1));
for (int i = 0; i < 10; i++) {
dataModel.saveRecord(start - i - 1, store, docGroupFieldValue);
Collaborator

Why are you decrementing the start here?

5);
// this should fail with injected exception
recordStore.saveRecord(createComplexDocument(1000L , ENGINEER_JOKE, docGroupFieldValue, 2));
dataModel.saveRecord(2, store, docGroupFieldValue);
Collaborator

This is not updating a record, it is saving a new record. You'll want to use

dataModel.recordsUnderTest().forEach(record -> record.updateOtherValue(store));

}
}

private void setupExceptionMapping(boolean useExceptionMapping) {
Collaborator

Why have the two different methods?

@ScottDugas ScottDugas changed the title Refactor LuceneFailreTest to use the DataModel, not LuceneTestBase Refactor LuceneFailureTest to use the DataModel, not LuceneTestBase Feb 12, 2025