Conversation

@joe-redpanda (Contributor) commented Feb 2, 2026

"Why?"
This was prompted by the magic number '7' in partition balancer.
I added validation on the relative sizes of time spans in the partition balancer when updating configs.
Generally node status < node unresponsiveness < node drain < auto decommission
Problem: node unresponsiveness is defined as 7 * node status interval
This number lives (lived) hardcoded in partition_balancer_backend
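For intuition, the new check amounts to something like this sketch (parameter names are illustrative, not the real config keys; the actual validator lives in src/v/config/validators.cc and reads the real cluster config properties):

#include <chrono>

// Sketch only; the real validation reads the cluster config properties.
bool balancer_spans_are_consistent(
  std::chrono::milliseconds node_status_interval,
  std::chrono::milliseconds node_drain_timeout,
  std::chrono::milliseconds auto_decommission_timeout) {
    // Unresponsiveness is derived rather than configured directly:
    // 7 missed node statuses in a row.
    constexpr int missed_statuses_until_unresponsive = 7;
    const auto unresponsive_timeout
      = missed_statuses_until_unresponsive * node_status_interval;
    return node_status_interval < unresponsive_timeout
        && unresponsive_timeout < node_drain_timeout
        && node_drain_timeout < auto_decommission_timeout;
}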

Should this multiplier be a configuration?
Maybe?

Is it okay to have hardcoded constants?
IMO yes, so long as there's ONE definition.

So, to have one definition usable by both config validation and the partition balancer, I could:

  1. have validation ingest partition balancer (no)
  2. have partition balancer ingest validation (no)
  3. have a common module for magic numbers (yes; see the sketch below)
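
To give a feel for option 3, here is a rough sketch of what the new headers could look like (the real files are src/v/constants/common.h and src/v/constants/balancer_constants.h; their exact layout in the PR may differ):

#include <cstddef>
#include <cstdint>

// common.h: codebase-wide defaults (sketch)
namespace constants::common {
// Default fan-out for concurrent operations; replaces the bare 32s.
inline constexpr std::size_t default_concurrency = 32;
} // namespace constants::common

// balancer_constants.h: balancer-specific constants (sketch)
namespace constants {
struct balancer {
    // The partition balancer will declare a node unresponsive if it
    // misses this many node statuses in a row.
    static constexpr std::uint8_t missed_statuses_until_unresponsive = 7u;
};
} // namespace constants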

I also scraped together the usages of 32, mostly as a demonstration that this folder can be more widely useful than just the partition balancer.

Backports Required

  • none - not a bug fix
  • none - this is a backport
  • none - issue does not exist in previous branches
  • none - papercut/not impactful enough to backport
  • v25.3.x
  • v25.2.x
  • v25.1.x

Release Notes

  • none

@joe-redpanda (Contributor, Author):

rebase

@joe-redpanda marked this pull request as ready for review February 3, 2026 16:22
Copilot AI review requested due to automatic review settings February 3, 2026 16:22

Copilot AI (Contributor) left a comment

Pull request overview

This PR introduces a new constants module to centralize commonly used magic numbers across the Redpanda codebase. The change replaces hardcoded concurrency limits (32) and a partition balancer constant (7) with named constants defined in new header files.

Changes:

  • Created constants::common::default_concurrency to replace hardcoded 32 values used in concurrent operation limits
  • Created constants::balancer::missed_statuses_until_unresponsive to replace hardcoded 7 in partition balancer logic
  • Updated all usages across multiple modules to reference these constants

Reviewed changes

Copilot reviewed 23 out of 23 changed files in this pull request and generated 2 comments.

Summary per file:

File | Description
src/v/constants/common.h | Defines the default_concurrency constant (32) for concurrent operations
src/v/constants/balancer_constants.h | Defines the missed_statuses_until_unresponsive constant (7) for the partition balancer
src/v/constants/BUILD | Bazel build definitions for the new constants libraries
src/v/redpanda/admin/partition.cc | Uses default_concurrency in a max_concurrent_for_each call
src/v/kafka/server/handlers/describe_transactions.cc | Uses default_concurrency for transaction description concurrency
src/v/kafka/data/rpc/service.cc | Replaces a hardcoded concurrency limit with the constant
src/v/datalake/translation/scheduling.cc | Uses default_concurrency in translator cleanup
src/v/datalake/translation/tests/scheduler_fixture.h | Uses the constant for test fixture configuration
src/v/cluster_link/replication/link_replication_mgr.h | Uses the constant for semaphore initialization
src/v/cluster_link/group_mirroring_task.h | Uses the constant for a concurrent request limit
src/v/cluster/topics_frontend.cc | Uses default_concurrency for partition move cancellation
src/v/cluster/rm_stm.cc | Uses the constant in multiple producer management operations
src/v/cluster/partition_balancer_backend.cc | Uses both the concurrency and balancer constants
src/v/config/validators.cc | Uses the balancer constant in validation logic and error messages
Multiple BUILD files | Add dependencies on the new constants libraries
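
For illustration, a typical call-site change looks roughly like the following (the function, element type, and handler here are hypothetical; ss::max_concurrent_for_each is Seastar's bounded-concurrency loop helper referenced in the table above):

#include <seastar/core/loop.hh>
#include <vector>

#include "constants/common.h"

namespace ss = seastar;

// Hypothetical call site; the real ones live in partition.cc,
// topics_frontend.cc, rm_stm.cc, and friends. 'item' and
// 'handle_one' are made up for the example.
ss::future<> process_all(std::vector<item>& items) {
    // Before this PR: ss::max_concurrent_for_each(items, 32, handle_one);
    co_await ss::max_concurrent_for_each(
      items, constants::common::default_concurrency, handle_one);
}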

public:
    // The partition balancer will declare a node unresponsive if it misses
    // this many node statuses in a row
    static constexpr uint8_t missed_statuses_until_unresponsive = 7u;

Copilot AI commented Feb 3, 2026

The type uint8_t is unnecessarily restrictive for a counter value. Consider using size_t or int for consistency with typical integer constants and to avoid potential overflow issues if the value needs to increase in the future.

@joe-redpanda (Contributor, Author):

If I want more than 255 I'll just change the type : )

Adds a bazel module for capturing codebase-wide constants called
common_constants. To this, adds a constant called "default_concurrency"
set to 32. This value is widely used as a magic number in the cluster. A
future commit will replace usages of the magic number with this constant.

Replaces all usages of the magic number 32 with the common constant
'default_concurrency'.

Adds balancer constants to encapsulate the constants used in the various
balancers (leader and partition). Extracts a constant to represent the
number of node statuses that may be missed before the balancer considers
a node unresponsive (7). Uses that constant in the partition balancer
planner.

Use the balancer constant rather than a magic number.
@joe-redpanda (Contributor, Author):

[23,557 / 23,559] 895 / 900 tests; Testing //src/v/cloud_topics/level_one/domain/tests:db_domain_manager_test; 1200s remote-cache, linux-sandbox ... (2 actions running)
ERROR: /var/lib/buildkite-agent/builds/redpanda-bk-agent-v6-core-m8gd12xlarge-unpartitioned-i-0e6f579c536c548bc-1/redpanda/redpanda/src/v/cloud_topics/level_one/domain/tests/BUILD:3:18: Testing //src/v/cloud_topics/level_one/domain/tests:db_domain_manager_test failed: (Segmentation fault): generate-xml.sh failed: error executing TestRunner command (from target //src/v/cloud_topics/level_one/domain/tests:db_domain_manager_test) external/bazel_tools/tools/test/generate-xml.sh bazel-out/aarch64-opt/testlogs/src/v/cloud_topics/level_one/domain/tests/db_domain_manager_test/test.log ... (remaining 3 arguments skipped)

Probably unrelated, but I'll take a peek.

@joe-redpanda (Contributor, Author):

TRACE 2026-02-03 16:43:45,948 [shard 0:main] s3_imposter_fixture - s3_imposter.cc:79 - Comparing ceb562e7-9069-42ab-b5c4-cfcb4d84a6b8/00000000000000002378-00000000000000000001.sst to prefix ceb562e7-9069-42ab-b5c4-cfcb4d84a6b8/MANIFEST.
TRACE 2026-02-03 16:43:45,948 [shard 0:main] s3_imposter_fixture - s3_imposter.cc:79 - Comparing ceb562e7-9069-42ab-b5c4-cfcb4d84a6b8/00000000000000002379-00000000000000000001.sst to prefix ceb562e7-9069-42ab-b5c4-cfcb4d84a6b8/MANIFEST.
TRACE 2026-02-03 16:43:45,948 [shard 0:main] s3_imposter_fixture - s3_imposter.cc:79 - Comparing ceb562e7-9069-42ab-b5c4-cfcb4d84a6b8/00000000000000002380-00000000000000000001.sst to prefix ceb562e7-9069-42ab-b5c4-cfcb4d84a6b8/MANIFEST.
TRACE 2026-02-03 16:43:45,948 [shard 0:main] s3_imposter_fixture - s3_imposter.cc:79 - Comparing ceb562e7-9069-42ab-b5c4-cfcb4d84a6b8/MANIFEST.00000000000000000001 to prefix ceb562e7-9069-42ab-b5c4-cfcb4d84a6b8/MANIFEST.
TRACE 2026-02-03 16:43:45,948 [shard 0:main] s3_imposter_fixture - s3_imposter.cc:86 - ceb562e7-9069-42ab-b5c4-cfcb4d84a6b8/MANIFEST.00000000000000000001 matches prefix ceb562e7-9069-42ab-b5c4-cfcb4d84a6b8/MANIFEST.
TRACE 2026-02-03 16:43:45,948 [shard 0:main] raft - [group_id:123, {kafka/node_2/0}] replicate_entries_stm.cc:146 - Self append entries - {group: 123, commit_index: 1974, term: 2, prev_log_index: 1974, prev_log_term: 2, last_visible_index: 1974, dirty_offset: 1974, prev_log_delta: 0}
TRACE 2026-02-03 16:43:45,948 [shard 0:main] storage - disk_log_impl.cc:2112 - creating log appender for: {kafka/node_2/0}, next offset: 1975, log offsets: {start_offset:1929, committed_offset:1974, committed_offset_term:2, dirty_offset:1974, dirty_offset_term:2}
TRACE 2026-02-03 16:43:45,948 [shard 0:main] storage-resources - storage_resources.cc:224 - stm_take_bytes 268140 += 410 (current 10737150100)
TRACE 2026-02-03 16:43:45,948 [shard 0:main] storage-resources - storage_resources.cc:211 - configuration_manager_take_bytes 808132 += 410 (current 10736610108)
TRACE 2026-02-03 16:43:45,948 [shard 0:main] raft - [group_id:123, {kafka/node_2/0}] replicate_entries_stm.cc:155 - Leader append result: {time_since_append: 0ms, base_offset: 1975, last_offset: 1975, last_term: 2, byte_size: 410}
TRACE 2026-02-03 16:43:45,948 [shard 0:main] storage - disk_log_impl.cc:2123 - flush on segment with offsets {term:2, base_offset:1321, committed_offset:1974, stable_offset:1974, dirty_offset:1975}
TRACE 2026-02-03 16:43:45,948 [shard 0:main] raft - [group_id:123, {kafka/node_2/0}] replicate_entries_stm.cc:70 - Sending append entries request {group: 123, commit_index: 1974, term: 2, prev_log_index: 1974, prev_log_term: 2, last_visible_index: 1974, dirty_offset: 1974, prev_log_delta: 0} to {id: 0, revision: 0}
TRACE 2026-02-03 16:43:45,948 [shard 0:main] raft - [group_id:123, {kafka/node_2/0}] replicate_entries_stm.cc:70 - Sending append entries request {group: 123, commit_index: 1974, term: 2, prev_log_index: 1974, prev_log_term: 2, last_visible_index: 1974, dirty_offset: 1974, prev_log_delta: 0} to {id: 1, revision: 0}
DEBUG 2026-02-03 16:43:45,948 [shard 0:main] cloud_topics - db_domain_manager.cc:973 - Not opening database, no longer term 1: 2
TRACE 2026-02-03 16:43:45,948 [shard 0:main] http - GET http://localhost:4442/?list-type=2&prefix=ceb562e7-9069-42ab-b5c4-cfcb4d84a6b8%2FMANIFEST. - client.cc:437 - chunk received, chunk length 770
DEBUG 2026-02-03 16:43:45,948 [shard 0:main] http - iobuf_body.cc:81 - reader - finish called
TRACE 2026-02-03 16:43:45,948 [shard 0:main] raft - [group_id:123, {kafka/node_2/0}] consensus.cc:2893 - flushed offset updated: 1975
TRACE 2026-02-03 16:43:45,948 [shard 0:main] raft - [group_id:123, {kafka/node_2/0}] consensus.cc:3846 - update majority_replicated_index, new offset: 1974, current: 1974
TRACE 2026-02-03 16:43:45,948 [shard 0:main] raft - [group_id:123, {kafka/node_0/0}] consensus.cc:2005 - Received append entries request: node_id: {id: 2, revision: 0}, target_node_id: {id: 0, revision: 0}, protocol metadata: {group: 123, commit_index: 1974, term: 2, prev_log_index: 1974, prev_log_term: 2, last_visible_index: 1974, dirty_offset: 1974, prev_log_delta: 0}, batch count: 1, offset range: [0,0]
TRACE 2026-02-03 16:43:45,948 [shard 0:main] storage - disk_log_impl.cc:2112 - creating log appender for: {kafka/node_0/0}, next offset: 1975, log offsets: {start_offset:1928, committed_offset:1974, committed_offset_term:2, dirty_offset:1974, dirty_offset_term:2}
TRACE 2026-02-03 16:43:45,949 [shard 0:main] storage-resources - storage_resources.cc:224 - stm_take_bytes 804985 += 410 (current 10736613255)
TRACE 2026-02-03 16:43:45,949 [shard 0:main] storage-resources - storage_resources.cc:211 - configuration_manager_take_bytes 808132 += 410 (current 10736610108)
TRACE 2026-02-03 16:43:45,949 [shard 0:main] raft - [group_id:123, {kafka/node_0/0}] consensus.cc:3846 - update majority_replicated_index, new offset: 1974, current: 1973
TRACE 2026-02-03 16:43:45,949 [shard 0:main] raft - [group_id:123, {kafka/node_0/0}] consensus.cc:3275 - Follower commit index updated 1974
TRACE 2026-02-03 16:43:45,949 [shard 0:main] storage - disk_log_impl.cc:2123 - flush on segment with offsets {term:2, base_offset:11, committed_offset:1974, stable_offset:1974, dirty_offset:1975}
TRACE 2026-02-03 16:43:45,949 [shard 0:main] raft - [group_id:123, {kafka/node_1/0}] consensus.cc:2005 - Received append entries request: node_id: {id: 2, revision: 0}, target_node_id: {id: 1, revision: 0}, protocol metadata: {group: 123, commit_index: 1974, term: 2, prev_log_index: 1974, prev_log_term: 2, last_visible_index: 1974, dirty_offset: 1974, prev_log_delta: 0}, batch count: 1, offset range: [0,0]
TRACE 2026-02-03 16:43:45,949 [shard 0:main] storage - disk_log_impl.cc:2112 - creating log appender for: {kafka/node_1/0}, next offset: 1975, log offsets: {start_offset:1928, committed_offset:1974, committed_offset_term:2, dirty_offset:1974, dirty_offset_term:2}
TRACE 2026-02-03 16:43:45,949 [shard 0:main] storage-resources - storage_resources.cc:224 - stm_take_bytes 804985 += 410 (current 10736613255)
TRACE 2026-02-03 16:43:45,949 [shard 0:main] storage-resources - storage_resources.cc:211 - configuration_manager_take_bytes 808132 += 410 (current 10736610108)
TRACE 2026-02-03 16:43:45,949 [shard 0:main] raft - [group_id:123, {kafka/node_1/0}] consensus.cc:3846 - update majority_replicated_index, new offset: 1974, current: 1973
TRACE 2026-02-03 16:43:45,949 [shard 0:main] raft - [group_id:123, {kafka/node_1/0}] consensus.cc:3275 - Follower commit index updated 1974
TRACE 2026-02-03 16:43:45,949 [shard 0:main] storage - disk_log_impl.cc:2123 - flush on segment with offsets {term:2, base_offset:11, committed_offset:1974, stable_offset:1974, dirty_offset:1975}
DEBUG 2026-02-03 16:43:45,949 [shard 0:main] client_pool - client_pool.cc:556 - releasing a client, pool size: 9, capacity: 10
TRACE 2026-02-03 16:43:45,949 [shard 0:main] raft - [group_id:123, {kafka/node_0/0}] state_machine_manager.cc:532 - reading batches in range [1974, 1974]
TRACE 2026-02-03 16:43:45,949 [shard 0:main] storage - readers_cache.cc:113 - {kafka/node_0/0} - trying to get reader for: start_offset:1974, max_offset:1974, max_bytes:18446744073709551615, strict_max_bytes:false, type_filter: {nullopt}, first_timestamp:{nullopt}, bytes_consumed:0, over_budget:false, skip_batch_cache:false, skip_readers_cache:false, abortable:false, client_address:{nullopt}
TRACE 2026-02-03 16:43:45,949 [shard 0:main] storage - readers_cache.cc:146 - {kafka/node_0/0} - reader cache hit for: start_offset:1974, max_offset:1974, max_bytes:18446744073709551615, strict_max_bytes:false, type_filter: {nullopt}, first_timestamp:{nullopt}, bytes_consumed:0, over_budget:false, skip_batch_cache:false, skip_readers_cache:false, abortable:false, client_address:{nullopt}
TRACE 2026-02-03 16:43:45,949 [shard 0:main] raft - [group_id:123, {kafka/node_0/0}] state_machine_manager.cc:132 - [default][l1_lsm_stm] applying batch with base 1974 and last 1974 offsets
TRACE 2026-02-03 16:43:45,949 [shard 0:main] raft - [group_id:123, {kafka/node_0/0}] state_machine_manager.cc:561 - updating _next offset with: 1975
TRACE 2026-02-03 16:43:45,949 [shard 0:main] raft - [group_id:123, {kafka/node_1/0}] state_machine_manager.cc:532 - reading batches in range [1974, 1974]
TRACE 2026-02-03 16:43:45,949 [shard 0:main] storage - readers_cache.cc:113 - {kafka/node_1/0} - trying to get reader for: start_offset:1974, max_offset:1974, max_bytes:18446744073709551615, strict_max_bytes:false, type_filter: {nullopt}, first_timestamp:{nullopt}, bytes_consumed:0, over_budget:false, skip_batch_cache:false, skip_readers_cache:false, abortable:false, client_address:{nullopt}
TRACE 2026-02-03 16:43:45,949 [shard 0:main] storage - readers_cache.cc:146 - {kafka/node_1/0} - reader cache hit for: start_offset:1974, max_offset:1974, max_bytes:18446744073709551615, strict_max_bytes:false, type_filter: {nullopt}, first_timestamp:{nullopt}, bytes_consumed:0, over_budget:false, skip_batch_cache:false, skip_readers_cache:false, abortable:false, client_address:{nullopt}
TRACE 2026-02-03 16:43:45,949 [shard 0:main] raft - [group_id:123, {kafka/node_1/0}] state_machine_manager.cc:132 - [default][l1_lsm_stm] applying batch with base 1974 and last 1974 offsets
TRACE 2026-02-03 16:43:45,949 [shard 0:main] raft - [group_id:123, {kafka/node_1/0}] state_machine_manager.cc:561 - updating _next offset with: 1975
INFO  2026-02-03 16:43:45,949 [shard 0:main] cloud_io - [fiber24465|0|9990ms] - remote.cc:751 - No keys to delete, returning
WARN  2026-02-03 16:43:45,949 [shard 0:main] lsm - impl.cc:333 - apply_edits_end error="lsm::io_error_exception (Replication error after persisting manifest: 1)"
WARN  2026-02-03 16:43:45,949 [shard 0:main] lsm - impl.cc:285 - flush_task_end error="lsm::io_error_exception (Replication error after persisting manifest: 1)"
TRACE 2026-02-03 16:43:45,949 [shard 0:main] lsm - impl.cc:281 - flush_task_start
TRACE 2026-02-03 16:43:45,949 [shard 0:main] lsm - flush_task.cc:48 - flush_memtable_start level=2 file_id=2381 mem_bytes=1072
TRACE 2026-02-03 16:43:45,949 [shard 0:main] raft - [group_id:123, {kafka/node_0/0}] consensus.cc:2893 - flushed offset updated: 1975
TRACE 2026-02-03 16:43:45,949 [shard 0:main] raft - [group_id:123, {kafka/node_0/0}] consensus.cc:3846 - update majority_replicated_index, new offset: -9223372036854775808, current: 1974
TRACE 2026-02-03 16:43:45,949 [shard 0:main] raft - [group_id:123, {kafka/node_1/0}] consensus.cc:2893 - flushed offset updated: 1975
TRACE 2026-02-03 16:43:45,949 [shard 0:main] raft - [group_id:123, {kafka/node_1/0}] consensus.cc:3846 - update majority_replicated_index, new offset: -9223372036854775808, current: 1974
TRACE 2026-02-03 16:43:45,949 [shard 0:main] raft - [group_id:123, {kafka/node_2/0}] consensus.cc:382 - Append entries response: {node_id: {id: 0, revision: 0}, target_node_id: {id: 2, revision: 0}, group: 123, term: 2, last_dirty_log_index: 1975, last_flushed_log_index: 1975, last_term_base_offset: -9223372036854775808, result: success, may_recover: false}
TRACE 2026-02-03 16:43:45,949 [shard 0:main] raft - [group_id:123, {kafka/node_2/0}] consensus.cc:476 - Updated node {id: 0, revision: 0} last committed log index: 1975
TRACE 2026-02-03 16:43:45,949 [shard 0:main] raft - [group_id:123, {kafka/node_2/0}] consensus.cc:617 - Updated node {id: 0, revision: 0} match 1975 and next 1976 indices
TRACE 2026-02-03 16:43:45,949 [shard 0:main] raft - [group_id:123, {kafka/node_2/0}] consensus.cc:3846 - update majority_replicated_index, new offset: 1975, current: 1974
TRACE 2026-02-03 16:43:45,949 [shard 0:main] raft - [group_id:123, {kafka/node_2/0}] consensus.cc:3244 - Leader commit index updated 1975
TRACE 2026-02-03 16:43:45,949 [shard 0:main] raft - [group_id:123, {kafka/node_2/0}] consensus.cc:3846 - update majority_replicated_index, new offset: 1975, current: 1975
TRACE 2026-02-03 16:43:45,949 [shard 0:main] raft - [group_id:123, {kafka/node_2/0}] consensus.cc:3846 - update majority_replicated_index, new offset: 1975, current: 1975
TRACE 2026-02-03 16:43:45,949 [shard 0:main] raft - [group_id:123, {kafka/node_2/0}] consensus.cc:382 - Append entries response: {node_id: {id: 1, revision: 0}, target_node_id: {id: 2, revision: 0}, group: 123, term: 2, last_dirty_log_index: 1975, last_flushed_log_index: 1975, last_term_base_offset: -9223372036854775808, result: success, may_recover: false}
TRACE 2026-02-03 16:43:45,949 [shard 0:main] raft - [group_id:123, {kafka/node_2/0}] consensus.cc:476 - Updated node {id: 1, revision: 0} last committed log index: 1975
TRACE 2026-02-03 16:43:45,949 [shard 0:main] raft - [group_id:123, {kafka/node_2/0}] consensus.cc:617 - Updated node {id: 1, revision: 0} match 1975 and next 1976 indices
TRACE 2026-02-03 16:43:45,949 [shard 0:main] raft - [group_id:123, {kafka/node_2/0}] consensus.cc:3846 - update majority_replicated_index, new offset: 1975, current: 1975
TRACE 2026-02-03 16:43:45,949 [shard 0:main] raft - [group_id:123, {kafka/node_2/0}] state_machine_manager.cc:532 - reading batches in range [1975, 1975]
TRACE 2026-02-03 16:43:45,949 [shard 0:main] storage - readers_cache.cc:113 - {kafka/node_2/0} - trying to get reader for: start_offset:1975, max_offset:1975, max_bytes:18446744073709551615, strict_max_bytes:false, type_filter: {nullopt}, first_timestamp:{nullopt}, bytes_consumed:0, over_budget:false, skip_batch_cache:false, skip_readers_cache:false, abortable:false, client_address:{nullopt}
TRACE 2026-02-03 16:43:45,949 [shard 0:main] storage - readers_cache.cc:142 - {kafka/node_2/0} - reader cache miss for: start_offset:1975, max_offset:1975, max_bytes:18446744073709551615, strict_max_bytes:false, type_filter: {nullopt}, first_timestamp:{nullopt}, bytes_consumed:0, over_budget:false, skip_batch_cache:false, skip_readers_cache:false, abortable:false, client_address:{nullopt}
TRACE 2026-02-03 16:43:45,949 [shard 0:main] storage - readers_cache.cc:81 - {kafka/node_2/0} - adding reader [1321,1975]
TRACE 2026-02-03 16:43:45,949 [shard 0:main] raft - [group_id:123, {kafka/node_2/0}] state_machine_manager.cc:132 - [default][l1_lsm_stm] applying batch with base 1975 and last 1975 offsets
TRACE 2026-02-03 16:43:45,949 [shard 0:main] storage - readers_cache.cc:305 - {kafka/node_2/0} - removing reader (reason: not reusable in ~entry_guard): [1321,1975] lower_bound: 1976
TRACE 2026-02-03 16:43:45,949 [shard 0:main] raft - [group_id:123, {kafka/node_2/0}] state_machine_manager.cc:561 - updating _next offset with: 1976
TRACE 2026-02-03 16:43:45,949 [shard 0:main] raft - [group_id:123, {kafka/node_2/0}] replicate_entries_stm.cc:342 - Replication result [offset: 1975, term: 2, commit_idx: 1975, current_term: 2], flushed: true, result: raft::errc:0
TRACE 2026-02-03 16:43:45,949 [shard 0:main] cloud_topics - replicated_db.cc:370 - Applying at seqno: 1975, key: 010edbc8494aa0410ea67dbd541b438292000000000000000000000000
TRACE 2026-02-03 16:43:45,949 [shard 0:main] cloud_topics - replicated_db.cc:370 - Applying at seqno: 1975, key: 04eb3baec59e714dbfa83674c23861d9fd
TRACE 2026-02-03 16:43:45,949 [shard 0:main] cloud_topics - replicated_db.cc:370 - Applying at seqno: 1975, key: 04e4dd93659aae42188ea8058cee439a17
TRACE 2026-02-03 16:43:45,949 [shard 0:main] cloud_topics - replicated_db.cc:388 - Applied write batch at seqno: 1975
DEBUG 2026-02-03 16:43:45,950 [shard 0:main] cloud_io - [fiber24467~0|1|10000ms] - remote.cc:206 - Uploading SST file upload to path "ceb562e7-9069-42ab-b5c4-cfcb4d84a6b8/00000000000000002381-00000000000000000001.sst", length 973
DEBUG 2026-02-03 16:43:45,950 [shard 0:main] client_pool - client_pool.cc:358 - client lease is acquired, own usage stat: 10, is-borrowed: false
TRACE 2026-02-03 16:43:45,950 [shard 0:main] raft - [group_id:123, {kafka/node_2/0}] replicate_entries_stm.cc:146 - Self append entries - {group: 123, commit_index: 1975, term: 2, prev_log_index: 1975, prev_log_term: 2, last_visible_index: 1975, dirty_offset: 1975, prev_log_delta: 0}
TRACE 2026-02-03 16:43:45,950 [shard 0:main] storage - disk_log_impl.cc:2112 - creating log appender for: {kafka/node_2/0}, next offset: 1976, log offsets: {start_offset:1929, committed_offset:1975, committed_offset_term:2, dirty_offset:1975, dirty_offset_term:2}
TRACE 2026-02-03 16:43:45,950 [shard 0:main] storage-resources - storage_resources.cc:224 - stm_take_bytes 268550 += 410 (current 10737149690)
TRACE 2026-02-03 16:43:45,950 [shard 0:main] storage-resources - storage_resources.cc:211 - configuration_manager_take_bytes 808542 += 410 (current 10736609698)
TRACE 2026-02-03 16:43:45,950 [shard 0:main] raft - [group_id:123, {kafka/node_2/0}] replicate_entries_stm.cc:155 - Leader append result: {time_since_append: 0ms, base_offset: 1976, last_offset: 1976, last_term: 2, byte_size: 410}
TRACE 2026-02-03 16:43:45,950 [shard 0:main] storage - disk_log_impl.cc:2123 - flush on segment with offsets {term:2, base_offset:1321, committed_offset:1975, stable_offset:1975, dirty_offset:1976}
TRACE 2026-02-03 16:43:45,950 [shard 0:main] raft - [group_id:123, {kafka/node_2/0}] replicate_entries_stm.cc:70 - Sending append entries request {group: 123, commit_index: 1975, term: 2, prev_log_index: 1975, prev_log_term: 2, last_visible_index: 1975, dirty_offset: 1975, prev_log_delta: 0} to {id: 0, revision: 0}
TRACE 2026-02-03 16:43:45,950 [shard 0:main] raft - [group_id:123, {kafka/node_2/0}] replicate_entries_stm.cc:70 - Sending append entries request {group: 123, commit_index: 1975, term: 2, prev_log_index: 1975, prev_log_term: 2, last_visible_index: 1975, dirty_offset: 1975, prev_log_delta: 0} to {id: 1, revision: 0}
TRACE 2026-02-03 16:43:45,950 [shard 0:main] cloud_roles - signature.cc:394 - Credentials updated:
[scope]
20260203/us-east-1/s3/aws4_request

TRACE 2026-02-03 16:43:45,950 [shard 0:main] cloud_roles - signature.cc:411 - 
[canonical-request]
PUT
/ceb562e7-9069-42ab-b5c4-cfcb4d84a6b8/00000000000000002381-00000000000000000001.sst

content-length:973
content-type:text/plain
host:test-bucket-949aa59d-fbaa-4b81-9ea8-302169879cd9.localhost
user-agent:redpanda.vectorized.io
x-amz-content-sha256:[secret]
x-amz-date:20260203T164345Z

content-length;content-type;host;user-agent;x-amz-content-sha256;x-amz-date
UNSIGNED-PAYLOAD

TRACE 2026-02-03 16:43:45,950 [shard 0:main] cloud_roles - signature.cc:425 - 
[signed-header]

PUT /ceb562e7-9069-42ab-b5c4-cfcb4d84a6b8/00000000000000002381-00000000000000000001.sst HTTP/1.1
User-Agent: redpanda.vectorized.io
Host: test-bucket-949aa59d-fbaa-4b81-9ea8-302169879cd9.localhost
Content-Type: text/plain
Content-Length: 973
x-amz-date: 20260203T164345Z
x-amz-content-sha256: [secret]
Authorization: [secret]


TRACE 2026-02-03 16:43:45,950 [shard 0:main] s3 - s3_client.cc:1188 - send https request:
PUT /ceb562e7-9069-42ab-b5c4-cfcb4d84a6b8/00000000000000002381-00000000000000000001.sst HTTP/1.1
User-Agent: redpanda.vectorized.io
Host: test-bucket-949aa59d-fbaa-4b81-9ea8-302169879cd9.localhost
Content-Type: text/plain
Content-Length: 973
x-amz-date: 20260203T164345Z
x-amz-content-sha256: [secret]
Authorization: [secret]


TRACE 2026-02-03 16:43:45,950 [shard 0:main] http - PUT http://localhost:4442/ceb562e7-9069-42ab-b5c4-cfcb4d84a6b8/00000000000000002381-00000000000000000001.sst - client.cc:148 - client.make_request PUT /ceb562e7-9069-42ab-b5c4-cfcb4d84a6b8/00000000000000002381-00000000000000000001.sst HTTP/1.1
User-Agent: redpanda.vectorized.io
Host: test-bucket-949aa59d-fbaa-4b81-9ea8-302169879cd9.localhost
Content-Type: text/plain
Content-Length: 973
x-amz-date: 20260203T164345Z
x-amz-content-sha256: [secret]
Authorization: [secret]


DEBUG 2026-02-03 16:43:45,951 [shard 0:main] http - PUT http://localhost:4442/ceb562e7-9069-42ab-b5c4-cfcb4d84a6b8/00000000000000002381-00000000000000000001.sst - client.cc:172 - shutdown connection, age 1968279, max idle time 0
DEBUG 2026-02-03 16:43:45,951 [shard 0:main] http - PUT http://localhost:4442/ceb562e7-9069-42ab-b5c4-cfcb4d84a6b8/00000000000000002381-00000000000000000001.sst - client.cc:221 - about to start connecting, is_valid: false, connect gate closed: false, dispatch gate closed: false
TRACE 2026-02-03 16:43:45,951 [shard 0:main] raft - [group_id:123, {kafka/node_1/0}] consensus.cc:2005 - Received append entries request: node_id: {id: 2, revision: 0}, target_node_id: {id: 1, revision: 0}, protocol metadata: {group: 123, commit_index: 1975, term: 2, prev_log_index: 1975, prev_log_term: 2, last_visible_index: 1975, dirty_offset: 1975, prev_log_delta: 0}, batch count: 1, offset range: [0,0]
TRACE 2026-02-03 16:43:45,951 [shard 0:main] storage - disk_log_impl.cc:2112 - creating log appender for: {kafka/node_1/0}, next offset: 1976, log offsets: {start_offset:1928, committed_offset:1975, committed_offset_term:2, dirty_offset:1975, dirty_offset_term:2}
TRACE 2026-02-03 16:43:45,951 [shard 0:main] storage-resources - storage_resources.cc:224 - stm_take_bytes 805395 += 410 (current 10736612845)
TRACE 2026-02-03 16:43:45,951 [shard 0:main] storage-resources - storage_resources.cc:211 - configuration_manager_take_bytes 808542 += 410 (current 10736609698)
TRACE 2026-02-03 16:43:45,951 [shard 0:main] raft - [group_id:123, {kafka/node_1/0}] consensus.cc:3846 - update majority_replicated_index, new offset: 1975, current: 1974
TRACE 2026-02-03 16:43:45,951 [shard 0:main] raft - [group_id:123, {kafka/node_1/0}] consensus.cc:3275 - Follower commit index updated 1975
TRACE 2026-02-03 16:43:45,951 [shard 0:main] storage - disk_log_impl.cc:2123 - flush on segment with offsets {term:2, base_offset:11, committed_offset:1975, stable_offset:1975, dirty_offset:1976}
DEBUG 2026-02-03 16:43:45,951 [shard 0:main] dns_resolver - Query name localhost (ANY)
TRACE 2026-02-03 16:43:45,951 [shard 0:main] dns_resolver - Created udp socket 1
TRACE 2026-02-03 16:43:45,951 [shard 0:main] dns_resolver - Connect 1(2)->127.0.0.1:0
TRACE 2026-02-03 16:43:45,951 [shard 0:main] dns_resolver - Close socket 1
TRACE 2026-02-03 16:43:45,951 [shard 0:main] dns_resolver - Release socket 1 -> -1
TRACE 2026-02-03 16:43:45,951 [shard 0:main] dns_resolver - Released socket 1
DEBUG 2026-02-03 16:43:45,951 [shard 0:main] dns_resolver - Query success: localhost/127.0.0.1
TRACE 2026-02-03 16:43:45,951 [shard 0:main] dns_resolver - Poll sockets
TRACE 2026-02-03 16:43:45,951 [shard 0:main] http - transport.cc:78 - Resolved address 127.0.0.1:4442
TRACE 2026-02-03 16:43:45,951 [shard 0:main] raft - [group_id:123, {kafka/node_0/0}] consensus.cc:2005 - Received append entries request: node_id: {id: 2, revision: 0}, target_node_id: {id: 0, revision: 0}, protocol metadata: {group: 123, commit_index: 1975, term: 2, prev_log_index: 1975, prev_log_term: 2, last_visible_index: 1975, dirty_offset: 1975, prev_log_delta: 0}, batch count: 1, offset range: [0,0]
TRACE 2026-02-03 16:43:45,951 [shard 0:main] storage - disk_log_impl.cc:2112 - creating log appender for: {kafka/node_0/0}, next offset: 1976, log offsets: {start_offset:1928, committed_offset:1975, committed_offset_term:2, dirty_offset:1975, dirty_offset_term:2}
TRACE 2026-02-03 16:43:45,951 [shard 0:main] raft - [group_id:123, {kafka/node_2/0}] consensus.cc:2893 - flushed offset updated: 1976
TRACE 2026-02-03 16:43:45,951 [shard 0:main] raft - [group_id:123, {kafka/node_2/0}] consensus.cc:3846 - update majority_replicated_index, new offset: 1975, current: 1975
TRACE 2026-02-03 16:43:45,951 [shard 0:main] raft - [group_id:123, {kafka/node_1/0}] state_machine_manager.cc:532 - reading batches in range [1975, 1975]
TRACE 2026-02-03 16:43:45,951 [shard 0:main] storage - readers_cache.cc:113 - {kafka/node_1/0} - trying to get reader for: start_offset:1975, max_offset:1975, max_bytes:18446744073709551615, strict_max_bytes:false, type_filter: {nullopt}, first_timestamp:{nullopt}, bytes_consumed:0, over_budget:false, skip_batch_cache:false, skip_readers_cache:false, abortable:false, client_address:{nullopt}
TRACE 2026-02-03 16:43:45,951 [shard 0:main] storage - readers_cache.cc:146 - {kafka/node_1/0} - reader cache hit for: start_offset:1975, max_offset:1975, max_bytes:18446744073709551615, strict_max_bytes:false, type_filter: {nullopt}, first_timestamp:{nullopt}, bytes_consumed:0, over_budget:false, skip_batch_cache:false, skip_readers_cache:false, abortable:false, client_address:{nullopt}
TRACE 2026-02-03 16:43:45,951 [shard 0:main] raft - [group_id:123, {kafka/node_1/0}] state_machine_manager.cc:132 - [default][l1_lsm_stm] applying batch with base 1975 and last 1975 offsets
TRACE 2026-02-03 16:43:45,951 [shard 0:main] raft - [group_id:123, {kafka/node_1/0}] state_machine_manager.cc:561 - updating _next offset with: 1976
TRACE 2026-02-03 16:43:45,951 [shard 0:main] storage-resources - storage_resources.cc:224 - stm_take_bytes 805395 += 410 (current 10736612845)
TRACE 2026-02-03 16:43:45,951 [shard 0:main] storage-resources - storage_resources.cc:211 - configuration_manager_take_bytes 808542 += 410 (current 10736609698)
TRACE 2026-02-03 16:43:45,951 [shard 0:main] raft - [group_id:123, {kafka/node_0/0}] consensus.cc:3846 - update majority_replicated_index, new offset: 1975, current: 1974
TRACE 2026-02-03 16:43:45,951 [shard 0:main] raft - [group_id:123, {kafka/node_0/0}] consensus.cc:3275 - Follower commit index updated 1975
TRACE 2026-02-03 16:43:45,951 [shard 0:main] storage - disk_log_impl.cc:2123 - flush on segment with offsets {term:2, base_offset:11, committed_offset:1975, stable_offset:1975, dirty_offset:1976}
TRACE 2026-02-03 16:43:45,951 [shard 0:main] raft - [group_id:123, {kafka/node_0/0}] state_machine_manager.cc:532 - reading batches in range [1975, 1975]
TRACE 2026-02-03 16:43:45,951 [shard 0:main] storage - readers_cache.cc:113 - {kafka/node_0/0} - trying to get reader for: start_offset:1975, max_offset:1975, max_bytes:18446744073709551615, strict_max_bytes:false, type_filter: {nullopt}, first_timestamp:{nullopt}, bytes_consumed:0, over_budget:false, skip_batch_cache:false, skip_readers_cache:false, abortable:false, client_address:{nullopt}
TRACE 2026-02-03 16:43:45,951 [shard 0:main] storage - readers_cache.cc:146 - {kafka/node_0/0} - reader cache hit for: start_offset:1975, max_offset:1975, max_bytes:18446744073709551615, strict_max_bytes:false, type_filter: {nullopt}, first_timestamp:{nullopt}, bytes_consumed:0, over_budget:false, skip_batch_cache:false, skip_readers_cache:false, abortable:false, client_address:{nullopt}
TRACE 2026-02-03 16:43:45,951 [shard 0:main] raft - [group_id:123, {kafka/node_0/0}] state_machine_manager.cc:132 - [default][l1_lsm_stm] applying batch with base 1975 and last 1975 offsets
TRACE 2026-02-03 16:43:45,951 [shard 0:main] raft - [group_id:123, {kafka/node_0/0}] state_machine_manager.cc:561 - updating _next offset with: 1976
DEBUG 2026-02-03 16:43:45,951 [shard 0:main] http - PUT http://localhost:4442/ceb562e7-9069-42ab-b5c4-cfcb4d84a6b8/00000000000000002381-00000000000000000001.sst - client.cc:267 - connected, true
TRACE 2026-02-03 16:43:45,951 [shard 0:main] raft - [group_id:123, {kafka/node_1/0}] consensus.cc:2893 - flushed offset updated: 1976
TRACE 2026-02-03 16:43:45,951 [shard 0:main] raft - [group_id:123, {kafka/node_1/0}] consensus.cc:3846 - update majority_replicated_index, new offset: -9223372036854775808, current: 1975
TRACE 2026-02-03 16:43:45,951 [shard 0:main] raft - [group_id:123, {kafka/node_2/0}] consensus.cc:382 - Append entries response: {node_id: {id: 1, revision: 0}, target_node_id: {id: 2, revision: 0}, group: 123, term: 2, last_dirty_log_index: 1976, last_flushed_log_index: 1976, last_term_base_offset: -9223372036854775808, result: success, may_recover: false}
TRACE 2026-02-03 16:43:45,951 [shard 0:main] raft - [group_id:123, {kafka/node_2/0}] consensus.cc:476 - Updated node {id: 1, revision: 0} last committed log index: 1976
TRACE 2026-02-03 16:43:45,951 [shard 0:main] raft - [group_id:123, {kafka/node_2/0}] consensus.cc:617 - Updated node {id: 1, revision: 0} match 1976 and next 1977 indices
TRACE 2026-02-03 16:43:45,951 [shard 0:main] raft - [group_id:123, {kafka/node_2/0}] consensus.cc:3846 - update majority_replicated_index, new offset: 1976, current: 1975
TRACE 2026-02-03 16:43:45,951 [shard 0:main] raft - [group_id:123, {kafka/node_2/0}] consensus.cc:3244 - Leader commit index updated 1976
TRACE 2026-02-03 16:43:45,951 [shard 0:main] raft - [group_id:123, {kafka/node_2/0}] consensus.cc:3846 - update majority_replicated_index, new offset: 1976, current: 1976
TRACE 2026-02-03 16:43:45,951 [shard 0:main] raft - [group_id:123, {kafka/node_2/0}] consensus.cc:3846 - update majority_replicated_index, new offset: 1976, current: 1976
TRACE 2026-02-03 16:43:45,951 [shard 0:main] raft - [group_id:123, {kafka/node_2/0}] state_machine_manager.cc:532 - reading batches in range [1976, 1976]
TRACE 2026-02-03 16:43:45,951 [shard 0:main] storage - readers_cache.cc:113 - {kafka/node_2/0} - trying to get reader for: start_offset:1976, max_offset:1976, max_bytes:18446744073709551615, strict_max_bytes:false, type_filter: {nullopt}, first_timestamp:{nullopt}, bytes_consumed:0, over_budget:false, skip_batch_cache:false, skip_readers_cache:false, abortable:false, client_address:{nullopt}
TRACE 2026-02-03 16:43:45,951 [shard 0:main] storage - readers_cache.cc:142 - {kafka/node_2/0} - reader cache miss for: start_offset:1976, max_offset:1976, max_bytes:18446744073709551615, strict_max_bytes:false, type_filter: {nullopt}, first_timestamp:{nullopt}, bytes_consumed:0, over_budget:false, skip_batch_cache:false, skip_readers_cache:false, abortable:false, client_address:{nullopt}
TRACE 2026-02-03 16:43:45,951 [shard 0:main] storage - readers_cache.cc:81 - {kafka/node_2/0} - adding reader [1321,1976]
TRACE 2026-02-03 16:43:45,951 [shard 0:main] raft - [group_id:123, {kafka/node_2/0}] state_machine_manager.cc:132 - [default][l1_lsm_stm] applying batch with base 1976 and last 1976 offsets
TRACE 2026-02-03 16:43:45,951 [shard 0:main] storage - readers_cache.cc:305 - {kafka/node_2/0} - removing reader (reason: not reusable in ~entry_guard): [1321,1976] lower_bound: 1977
TRACE 2026-02-03 16:43:45,951 [shard 0:main] raft - [group_id:123, {kafka/node_2/0}] state_machine_manager.cc:561 - updating _next offset with: 1977
TRACE 2026-02-03 16:43:45,952 [shard 0:main] raft - [group_id:123, {kafka/node_2/0}] replicate_entries_stm.cc:342 - Replication result [offset: 1976, term: 2, commit_idx: 1976, current_term: 2], flushed: true, result: raft::errc:0
TRACE 2026-02-03 16:43:45,952 [shard 0:main] cloud_topics - replicated_db.cc:370 - Applying at seqno: 1976, key: 010edbc8494aa0410ea67dbd541b438292000000000000000000000000
TRACE 2026-02-03 16:43:45,952 [shard 0:main] cloud_topics - replicated_db.cc:370 - Applying at seqno: 1976, key: 04e4dd93659aae42188ea8058cee439a17
TRACE 2026-02-03 16:43:45,952 [shard 0:main] cloud_topics - replicated_db.cc:370 - Applying at seqno: 1976, key: 04778dccd6e1c94c55a09c9f28d925c390
TRACE 2026-02-03 16:43:45,952 [shard 0:main] cloud_topics - replicated_db.cc:388 - Applied write batch at seqno: 1976
TRACE 2026-02-03 16:43:45,952 [shard 0:main] raft - [group_id:123, {kafka/node_0/0}] consensus.cc:2893 - flushed offset updated: 1976
TRACE 2026-02-03 16:43:45,952 [shard 0:main] raft - [group_id:123, {kafka/node_0/0}] consensus.cc:3846 - update majority_replicated_index, new offset: -9223372036854775808, current: 1975
TRACE 2026-02-03 16:43:45,952 [shard 0:main] raft - [group_id:123, {kafka/node_2/0}] consensus.cc:382 - Append entries response: {node_id: {id: 0, revision: 0}, target_node_id: {id: 2, revision: 0}, group: 123, term: 2, last_dirty_log_index: 1976, last_flushed_log_index: 1976, last_term_base_offset: -9223372036854775808, result: success, may_recover: false}
TRACE 2026-02-03 16:43:45,952 [shard 0:main] raft - [group_id:123, {kafka/node_2/0}] consensus.cc:476 - Updated node {id: 0, revision: 0} last committed log index: 1976
TRACE 2026-02-03 16:43:45,952 [shard 0:main] raft - [group_id:123, {kafka/node_2/0}] consensus.cc:617 - Updated node {id: 0, revision: 0} match 1976 and next 1977 indices
TRACE 2026-02-03 16:43:45,952 [shard 0:main] raft - [group_id:123, {kafka/node_2/0}] consensus.cc:3846 - update majority_replicated_index, new offset: 1976, current: 1976
TRACE 2026-02-03 16:43:45,952 [shard 0:main] http - PUT http://localhost:4442/ceb562e7-9069-42ab-b5c4-cfcb4d84a6b8/00000000000000002381-00000000000000000001.sst - client.cc:576 - request_stream.send_some 973
TRACE 2026-02-03 16:43:45,952 [shard 0:main] s3_imposter_fixture - s3_imposter.cc:226 - S3 imposter request /ceb562e7-9069-42ab-b5c4-cfcb4d84a6b8/00000000000000002381-00000000000000000001.sst - 973 - PUT
TRACE 2026-02-03 16:43:45,952 [shard 0:main] s3_imposter_fixture - s3_imposter.cc:297 - Received PUT request to /ceb562e7-9069-42ab-b5c4-cfcb4d84a6b8/00000000000000002381-00000000000000000001.sst
TRACE 2026-02-03 16:43:45,952 [shard 0:main] http - PUT http://localhost:4442/ceb562e7-9069-42ab-b5c4-cfcb4d84a6b8/00000000000000002381-00000000000000000001.sst - client.cc:437 - chunk received, chunk length 129
DEBUG 2026-02-03 16:43:45,952 [shard 0:main] http - iobuf_body.cc:81 - reader - finish called
DEBUG 2026-02-03 16:43:45,952 [shard 0:main] client_pool - client_pool.cc:556 - releasing a client, pool size: 9, capacity: 10
TRACE 2026-02-03 16:43:45,952 [shard 0:main] lsm - flush_task.cc:82 - flush_memtable_end level=2 file_id=2381 file_bytes=973
TRACE 2026-02-03 16:43:45,952 [shard 0:main] lsm - impl.cc:328 - apply_edits_start
DEBUG 2026-02-03 16:43:45,952 [shard 0:main] client_pool - client_pool.cc:358 - client lease is acquired, own usage stat: 10, is-borrowed: false
DEBUG 2026-02-03 16:43:45,952 [shard 0:main] cloud_io - [fiber24468~0|1|10000ms] - remote.cc:1196 - Uploading LSM Manifest upload to path "ceb562e7-9069-42ab-b5c4-cfcb4d84a6b8/MANIFEST.00000000000000000001", length 150
TRACE 2026-02-03 16:43:45,952 [shard 0:main] cloud_roles - signature.cc:394 - Credentials updated:
[scope]
20260203/us-east-1/s3/aws4_request

TRACE 2026-02-03 16:43:45,952 [shard 0:main] cloud_roles - signature.cc:411 - 
[canonical-request]
PUT
/ceb562e7-9069-42ab-b5c4-cfcb4d84a6b8/MANIFEST.00000000000000000001

content-length:150
content-type:text/plain
host:test-bucket-949aa59d-fbaa-4b81-9ea8-302169879cd9.localhost
user-agent:redpanda.vectorized.io
x-amz-content-sha256:[secret]
x-amz-date:20260203T164345Z

content-length;content-type;host;user-agent;x-amz-content-sha256;x-amz-date
UNSIGNED-PAYLOAD

TRACE 2026-02-03 16:43:45,952 [shard 0:main] cloud_roles - signature.cc:425 - 
[signed-header]

PUT /ceb562e7-9069-42ab-b5c4-cfcb4d84a6b8/MANIFEST.00000000000000000001 HTTP/1.1
User-Agent: redpanda.vectorized.io
Host: test-bucket-949aa59d-fbaa-4b81-9ea8-302169879cd9.localhost
Content-Type: text/plain
Content-Length: 150
x-amz-date: 20260203T164345Z
x-amz-content-sha256: [secret]
Authorization: [secret]


TRACE 2026-02-03 16:43:45,952 [shard 0:main] s3 - s3_client.cc:1188 - send https request:
PUT /ceb562e7-9069-42ab-b5c4-cfcb4d84a6b8/MANIFEST.00000000000000000001 HTTP/1.1
User-Agent: redpanda.vectorized.io
Host: test-bucket-949aa59d-fbaa-4b81-9ea8-302169879cd9.localhost
Content-Type: text/plain
Content-Length: 150
x-amz-date: 20260203T164345Z
x-amz-content-sha256: [secret]
Authorization: [secret]


TRACE 2026-02-03 16:43:45,952 [shard 0:main] http - PUT http://localhost:4442/ceb562e7-9069-42ab-b5c4-cfcb4d84a6b8/MANIFEST.00000000000000000001 - client.cc:148 - client.make_request PUT /ceb562e7-9069-42ab-b5c4-cfcb4d84a6b8/MANIFEST.00000000000000000001 HTTP/1.1
User-Agent: redpanda.vectorized.io
Host: test-bucket-949aa59d-fbaa-4b81-9ea8-302169879cd9.localhost
Content-Type: text/plain
Content-Length: 150
x-amz-date: 20260203T164345Z
x-amz-content-sha256: [secret]
Authorization: [secret]


DEBUG 2026-02-03 16:43:45,952 [shard 0:main] http - PUT http://localhost:4442/ceb562e7-9069-42ab-b5c4-cfcb4d84a6b8/MANIFEST.00000000000000000001 - client.cc:172 - shutdown connection, age 0, max idle time 0
DEBUG 2026-02-03 16:43:45,952 [shard 0:main] http - PUT http://localhost:4442/ceb562e7-9069-42ab-b5c4-cfcb4d84a6b8/MANIFEST.00000000000000000001 - client.cc:221 - about to start connecting, is_valid: false, connect gate closed: false, dispatch gate closed: false
DEBUG 2026-02-03 16:43:45,952 [shard 0:main] dns_resolver - Query name localhost (ANY)
TRACE 2026-02-03 16:43:45,952 [shard 0:main] dns_resolver - Created udp socket 1
TRACE 2026-02-03 16:43:45,952 [shard 0:main] dns_resolver - Connect 1(2)->127.0.0.1:0
TRACE 2026-02-03 16:43:45,952 [shard 0:main] dns_resolver - Close socket 1
TRACE 2026-02-03 16:43:45,952 [shard 0:main] dns_resolver - Release socket 1 -> -1
TRACE 2026-02-03 16:43:45,952 [shard 0:main] dns_resolver - Released socket 1
DEBUG 2026-02-03 16:43:45,952 [shard 0:main] dns_resolver - Query success: localhost/127.0.0.1
TRACE 2026-02-03 16:43:45,952 [shard 0:main] dns_resolver - Poll sockets
TRACE 2026-02-03 16:43:45,952 [shard 0:main] http - transport.cc:78 - Resolved address 127.0.0.1:4442
DEBUG 2026-02-03 16:43:45,952 [shard 0:main] http - PUT http://localhost:4442/ceb562e7-9069-42ab-b5c4-cfcb4d84a6b8/MANIFEST.00000000000000000001 - client.cc:267 - connected, true
TRACE 2026-02-03 16:43:45,952 [shard 0:main] http - PUT http://localhost:4442/ceb562e7-9069-42ab-b5c4-cfcb4d84a6b8/MANIFEST.00000000000000000001 - client.cc:576 - request_stream.send_some 150
TRACE 2026-02-03 16:43:45,952 [shard 0:main] s3_imposter_fixture - s3_imposter.cc:226 - S3 imposter request /ceb562e7-9069-42ab-b5c4-cfcb4d84a6b8/MANIFEST.00000000000000000001 - 150 - PUT
TRACE 2026-02-03 16:43:45,952 [shard 0:main] s3_imposter_fixture - s3_imposter.cc:297 - Received PUT request to /ceb562e7-9069-42ab-b5c4-cfcb4d84a6b8/MANIFEST.00000000000000000001
TRACE 2026-02-03 16:43:45,953 [shard 0:main] http - PUT http://localhost:4442/ceb562e7-9069-42ab-b5c4-cfcb4d84a6b8/MANIFEST.00000000000000000001 - client.cc:437 - chunk received, chunk length 129
DEBUG 2026-02-03 16:43:45,953 [shard 0:main] http - iobuf_body.cc:81 - reader - finish called
DEBUG 2026-02-03 16:43:45,953 [shard 0:main] client_pool - client_pool.cc:556 - releasing a client, pool size: 9, capacity: 10
DEBUG 2026-02-03 16:43:45,953 [shard 0:main] client_pool - client_pool.cc:358 - client lease is acquired, own usage stat: 10, is-borrowed: false
DEBUG 2026-02-03 16:43:45,953 [shard 0:main] cloud_io - [fiber24469~0|1|10000ms] - remote.cc:1049 - List objects test-bucket-949aa59d-fbaa-4b81-9ea8-302169879cd9
TRACE 2026-02-03 16:43:45,953 [shard 0:main] cloud_roles - signature.cc:394 - Credentials updated:
[scope]
20260203/us-east-1/s3/aws4_request

TRACE 2026-02-03 16:43:45,953 [shard 0:main] cloud_roles - signature.cc:411 - 
[canonical-request]
GET
/
list-type=2&prefix=ceb562e7-9069-42ab-b5c4-cfcb4d84a6b8%2FMANIFEST.
content-length:0
host:test-bucket-949aa59d-fbaa-4b81-9ea8-302169879cd9.localhost
user-agent:redpanda.vectorized.io
x-amz-content-sha256:[secret]
x-amz-date:20260203T164345Z

content-length;host;user-agent;x-amz-content-sha256;x-amz-date
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855

TRACE 2026-02-03 16:43:45,953 [shard 0:main] cloud_roles - signature.cc:425 - 
[signed-header]

GET /?list-type=2&prefix=ceb562e7-9069-42ab-b5c4-cfcb4d84a6b8%2FMANIFEST. HTTP/1.1
User-Agent: redpanda.vectorized.io
Host: test-bucket-949aa59d-fbaa-4b81-9ea8-302169879cd9.localhost
Content-Length: 0
x-amz-date: 20260203T164345Z
x-amz-content-sha256: [secret]
Authorization: [secret]


TRACE 2026-02-03 16:43:45,953 [shard 0:main] s3 - s3_client.cc:259 - ListObjectsV2:
 GET /?list-type=2&prefix=ceb562e7-9069-42ab-b5c4-cfcb4d84a6b8%2FMANIFEST. HTTP/1.1
User-Agent: redpanda.vectorized.io
Host: test-bucket-949aa59d-fbaa-4b81-9ea8-302169879cd9.localhost
Content-Length: 0
x-amz-date: 20260203T164345Z
x-amz-content-sha256: [secret]
Authorization: [secret]


TRACE 2026-02-03 16:43:45,953 [shard 0:main] s3 - s3_client.cc:1293 - send https request:
GET /?list-type=2&prefix=ceb562e7-9069-42ab-b5c4-cfcb4d84a6b8%2FMANIFEST. HTTP/1.1
User-Agent: redpanda.vectorized.io
Host: test-bucket-949aa59d-fbaa-4b81-9ea8-302169879cd9.localhost
Content-Length: 0
x-amz-date: 20260203T164345Z
x-amz-content-sha256: [secret]
Authorization: [secret]


TRACE 2026-02-03 16:43:45,953 [shard 0:main] http - GET http://localhost:4442/?list-type=2&prefix=ceb562e7-9069-42ab-b5c4-cfcb4d84a6b8%2FMANIFEST. - client.cc:148 - client.make_request GET /?list-type=2&prefix=ceb562e7-9069-42ab-b5c4-cfcb4d84a6b8%2FMANIFEST. HTTP/1.1
User-Agent: redpanda.vectorized.io
Host: test-bucket-949aa59d-fbaa-4b81-9ea8-302169879cd9.localhost
Content-Length: 0
x-amz-date: 20260203T164345Z
x-amz-content-sha256: [secret]
Authorization: [secret]


DEBUG 2026-02-03 16:43:45,953 [shard 0:main] http - GET http://localhost:4442/?list-type=2&prefix=ceb562e7-9069-42ab-b5c4-cfcb4d84a6b8%2FMANIFEST. - client.cc:172 - shutdown connection, age 0, max idle time 0
DEBUG 2026-02-03 16:43:45,953 [shard 0:main] http - GET http://localhost:4442/?list-type=2&prefix=ceb562e7-9069-42ab-b5c4-cfcb4d84a6b8%2FMANIFEST. - client.cc:221 - about to start connecting, is_valid: false, connect gate closed: false, dispatch gate closed: false
DEBUG 2026-02-03 16:43:45,953 [shard 0:main] dns_resolver - Query name localhost (ANY)
TRACE 2026-02-03 16:43:45,953 [shard 0:main] dns_resolver - Created udp socket 1
TRACE 2026-02-03 16:43:45,953 [shard 0:main] dns_resolver - Connect 1(2)->127.0.0.1:0
TRACE 2026-02-03 16:43:45,953 [shard 0:main] dns_resolver - Close socket 1
TRACE 2026-02-03 16:43:45,953 [shard 0:main] dns_resolver - Release socket 1 -> -1
TRACE 2026-02-03 16:43:45,953 [shard 0:main] dns_resolver - Released socket 1
DEBUG 2026-02-03 16:43:45,953 [shard 0:main] dns_resolver - Query success: localhost/127.0.0.1
TRACE 2026-02-03 16:43:45,953 [shard 0:main] dns_resolver - Poll sockets
TRACE 2026-02-03 16:43:45,953 [shard 0:main] http - transport.cc:78 - Resolved address 127.0.0.1:4442
DEBUG 2026-02-03 16:43:45,953 [shard 0:main] http - GET http://localhost:4442/?list-type=2&prefix=ceb562e7-9069-42ab-b5c4-cfcb4d84a6b8%2FMANIFEST. - client.cc:267 - connected, true
TRACE 2026-02-03 16:43:45,953 [shard 0:main] http - GET http://localhost:4442/?list-type=2&prefix=ceb562e7-9069-42ab-b5c4-cfcb4d84a6b8%2FMANIFEST. - client.cc:576 - request_stream.send_some 0
TRACE 2026-02-03 16:43:45,953 [shard 0:main] s3_imposter_fixture - s3_imposter.cc:226 - S3 imposter request /?list-type=2&prefix=ceb562e7-9069-42ab-b5c4-cfcb4d84a6b8%2FMANIFEST. - 0 - GET
TRACE 2026-02-03 16:43:45,953 [shard 0:main] s3_imposter_fixture - s3_imposter.cc:272 - S3 imposter list request ceb562e7-9069-42ab-b5c4-cfcb4d84a6b8/MANIFEST. -  - {nullopt} - {nullopt} - GET
TRACE 2026-02-03 16:43:45,953 [shard 0:main] s3_imposter_fixture - s3_imposter.cc:79 - Comparing ceb562e7-9069-42ab-b5c4-cfcb4d84a6b8/00000000000000000002-00000000000000000001.sst to prefix ceb562e7-9069-42ab-b5c4-cfcb4d84a6b8/MANIFEST.
[the same "Comparing ... to prefix ceb562e7-9069-42ab-b5c4-cfcb4d84a6b8/MANIFEST." TRACE line repeats for .sst files 00000000000000000003 through 00000000000000000056]

@joe-redpanda (Contributor, Author):

Bazel test failure was unrelated; requesting feedback.
