Effort to fix testDataStreamLifecycleDownsampleRollingRestart #123769 (#125478)

Merged: 8 commits merged into elastic:main on Mar 28, 2025

Conversation

@gmarouli (Contributor) commented Mar 24, 2025:

In this PR we try to improve DataStreamLifecycleDownsampleDisruptionIT.testDataStreamLifecycleDownsampleRollingRestart by applying the following changes:

  • The test had a Thread.sleep to time the disruption during the downsampling. We replaced it with a cluster state listener that waits until it sees the target index in the cluster state with a downsample status other than UNKNOWN, meaning the downsampling has started or finished (see the sketch after this list).
  • We also used the same technique to detect when the downsampling has finished.
  • We reduced the number of indexed docs from 50_000 to 25_000, which, when running locally, still appears to be enough for the disruption to happen. We did not observe a big drop in the test's run time, but since it is less data than before, it should make the test more stable when it comes to shard relocation.
  • Last but not least, we introduced a bit more logging during test execution, in an effort to remove the trace logging.
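
For illustration, here is a minimal sketch of what the listener-based wait could look like. This is not the exact code from the PR: the helper name ensureDownsamplingStatus appears in the review below, but the method body, the use of ClusterService.addListener, and reading the status through the IndexMetadata.INDEX_DOWNSAMPLE_STATUS index setting are assumptions.

```java
import java.util.Set;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.function.Predicate;

import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.ClusterStateListener;
import org.elasticsearch.cluster.metadata.IndexMetadata;
import org.elasticsearch.cluster.service.ClusterService;
import org.elasticsearch.core.TimeValue;

// Inside the ESIntegTestCase subclass:
private void ensureDownsamplingStatus(
    String targetIndex,
    Set<IndexMetadata.DownsampleTaskStatus> expectedStatuses,
    TimeValue timeout
) throws InterruptedException {
    ClusterService clusterService = internalCluster().getCurrentMasterNodeInstance(ClusterService.class);
    Predicate<ClusterState> matches = state -> {
        IndexMetadata indexMetadata = state.metadata().index(targetIndex);
        return indexMetadata != null
            && expectedStatuses.contains(IndexMetadata.INDEX_DOWNSAMPLE_STATUS.get(indexMetadata.getSettings()));
    };
    CountDownLatch latch = new CountDownLatch(1);
    ClusterStateListener listener = event -> {
        if (matches.test(event.state())) {
            latch.countDown();
        }
    };
    clusterService.addListener(listener);
    try {
        // Also check the current state, so we do not miss a status change
        // that was applied before the listener was registered.
        if (matches.test(clusterService.state()) == false) {
            assertTrue(
                "timed out waiting for downsample status " + expectedStatuses + " on " + targetIndex,
                latch.await(timeout.millis(), TimeUnit.MILLISECONDS)
            );
        }
    } finally {
        clusterService.removeListener(listener);
    }
}
```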

Fixes: #123769

@gmarouli added the >test, :Data Management/Data streams, auto-backport, v8.19.0, and v9.0.1 labels on Mar 24, 2025
@elasticsearchmachine (Collaborator) commented:

Pinging @elastic/es-data-management (Team:Data Management)

@elasticsearchmachine added the Team:Data Management and v9.1.0 labels on Mar 24, 2025
@masseyke masseyke self-requested a review March 24, 2025 13:29

```diff
 @ESIntegTestCase.ClusterScope(scope = ESIntegTestCase.Scope.TEST, numDataNodes = 0, numClientNodes = 4)
 public class DataStreamLifecycleDownsampleDisruptionIT extends ESIntegTestCase {
     private static final Logger logger = LogManager.getLogger(DataStreamLifecycleDownsampleDisruptionIT.class);
-    public static final int DOC_COUNT = 50_000;
+    public static final int DOC_COUNT = 25_000;
```
Contributor:

Do you know why we need so many docs in the first place? Is the reason purely that we want the downsample operation to take some time so we have a chance to disrupt the cluster during the downsampling? If so, I feel like a more targeted approach would be better. For instance, we could delay some actions by intercepting them - that's a fairly common practice in internal cluster tests. It's going to require some more complexity, but I think it'll have a higher value as we'll be more in control of when the disruption happens. What do you think?
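
For reference, the interception pattern being suggested usually looks roughly like the following. This is only a sketch: the downsample-related action name is a hypothetical placeholder, and it assumes MockTransportService is enabled via the test's node plugins.

```java
import java.util.concurrent.CountDownLatch;

import org.elasticsearch.test.transport.MockTransportService;
import org.elasticsearch.transport.TransportService;

// Inside the ESIntegTestCase subclass. Registers a handler that holds a
// downsample-related transport action until the returned latch is counted
// down, making the timing of the disruption deterministic.
private CountDownLatch delayDownsampleAction() {
    CountDownLatch release = new CountDownLatch(1);
    for (String node : internalCluster().getNodeNames()) {
        MockTransportService mockTransportService =
            (MockTransportService) internalCluster().getInstance(TransportService.class, node);
        mockTransportService.addRequestHandlingBehavior(
            "indices:admin/downsample/shard", // hypothetical action name, for illustration only
            (handler, request, channel, task) -> {
                release.await(); // hold the request; blocking like this is only acceptable in tests
                handler.messageReceived(request, channel, task);
            }
        );
    }
    return release;
}

// Usage: trigger downsampling, start the rolling restart while the action is
// held, then call release.countDown() to let downsampling proceed.
```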

Contributor Author:

I am not sure about that. Reducing the number of indexed documents too much, and relying on internal state to decide when to introduce the disruption, can also reduce the value of the test because it becomes more staged.

On the other hand, I do not know what the lowest count that makes sense in this test is, or whether there is an internal task or something similar that we could leverage to better position the rolling restart. That is why this PR is on the conservative side. But I can follow up on it and see where it goes.

Contributor:

You mentioned that you ran the test locally to check which doc counts still caused the disruption to happen. Do we have an idea of whether the disruption even happens in CI, with either the 50k docs or the 25k docs? Because the test also handles the situation where the downsampling has already completed before we start the disruption, I feel we have no proof that the test actually tests what it's supposed to.

Contributor Author:

Kind of; I sampled it, but that does not guarantee it will always be like that. I checked that the status after the disruption was STARTED at least once.

Contributor:

Did you check on CI as well? If it runs a certain way on our laptops, based on timing, that doesn't prove it runs the same way on CI.

Contributor Author:

No, I haven't checked, because I do not know how; as far as I know, we do not have test logs from successful builds. So unless I make the test fail, I do not know how to get that data. How would you test it?

Contributor:

The only way I can think of is to change the first ensureDownsamplingStatus to expect only the STARTED status. That will cause the test to fail if the downsampling has already completed (which would make the test worthless).

Contributor Author:

Done. We can monitor whether it starts failing with this assertion and evaluate how to change the test. I will keep an eye on it:

```java
ensureDownsamplingStatus(
    targetIndex,
    Set.of(IndexMetadata.DownsampleTaskStatus.STARTED, IndexMetadata.DownsampleTaskStatus.SUCCESS),
    TimeValue.timeValueMillis(4500)
);
```
Contributor:

I'd personally be inclined to up this to TimeValue.timeValueSeconds(10) to allow for even more leniency. In happy flows, that doesn't have a (negative) impact as you've already optimized to use a cluster state listener instead of an exponential backoff. If there is an actual bug, waiting a few seconds won't have a negative impact either. If there's just a timing issue (i.e. slow CI server or w/e), waiting a few seconds more can have a positive impact. What do you think?
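
Concretely, that suggestion amounts to something like this (the same call as in the snippet above, with the longer timeout):

```java
ensureDownsamplingStatus(
    targetIndex,
    Set.of(IndexMetadata.DownsampleTaskStatus.STARTED, IndexMetadata.DownsampleTaskStatus.SUCCESS),
    TimeValue.timeValueSeconds(10)
);
```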

@gmarouli gmarouli requested a review from nielsbauman March 28, 2025 11:40
@nielsbauman (Contributor) left a comment:

LGTM, thanks for the iterations and discussions, Mary!

@gmarouli merged commit 1943844 into elastic:main on Mar 28, 2025
17 checks passed
@gmarouli deleted the test-fix-123769 branch on March 28, 2025 at 13:26
@elasticsearchmachine (Collaborator) commented:

💔 Backport failed

| Branch | Result |
| ------ | ------ |
| 8.x | Commit could not be cherrypicked due to conflicts |
| 9.0 | Commit could not be cherrypicked due to conflicts |

You can use sqren/backport to manually backport by running `backport --upstream elastic/elasticsearch --pr 125478`.

gmarouli added a commit to gmarouli/elasticsearch that referenced this pull request Mar 28, 2025
…ic#123769 (elastic#125478)

(cherry picked from commit 1943844)

# Conflicts:
#	muted-tests.yml
#	x-pack/plugin/downsample/src/internalClusterTest/java/org/elasticsearch/xpack/downsample/DataStreamLifecycleDownsampleDisruptionIT.java
@gmarouli (Contributor Author) commented Mar 28, 2025:

💚

| Branch | Result |
| ------ | ------ |
| 8.x | |
| 9.0 | |

elasticsearchmachine pushed a commit that referenced this pull request Mar 28, 2025
… (#125478) (#125845)

(cherry picked from commit 1943844)

# Conflicts:
#	muted-tests.yml
#	x-pack/plugin/downsample/src/internalClusterTest/java/org/elasticsearch/xpack/downsample/DataStreamLifecycleDownsampleDisruptionIT.java
Labels
auto-backport, :Data Management/Data streams, Team:Data Management, >test, v8.19.0, v9.0.1, v9.1.0
Development

Successfully merging this pull request may close these issues:

[CI] DataStreamLifecycleDownsampleDisruptionIT testDataStreamLifecycleDownsampleRollingRestart failing
4 participants