Conversation

@inikep (Collaborator) commented May 26, 2025

Revert the commit that causes the issue in MyRocks.

@inikep requested a review from percona-ysorokin on May 26, 2025 07:24
@inikep (Collaborator, Author) commented May 26, 2025

@percona-ysorokin (Collaborator) left a comment

LGTM

@inikep force-pushed the PS-9680-8.0-GCA branch from 2bbcdc8 to 6567230 on May 26, 2025 13:11
gkodinov and others added 26 commits June 12, 2025 15:11
Introduced a new --commands option to mysql (default: ON) to
disable mysql client side commands in non-interactive mode.

Change-Id: Ie3e1f059bc79e87dbdab8161d7d6455f216d1a0c
Approved by: Erlend Dahl <[email protected]>

Change-Id: Iaf1f6a6885aefdddfd14e942381369010886987c
…ient

Problem

A race condition between Server table_share invalidation and schema
changes results in inconsistent schema views across MySQLDs.

Scenario

A participant MySQLD has a local session holding a Metadata lock (MDL)
which conflicts with an MDL required to participate in schema
distribution.
The Ndb Binlog Injector (BI) invalidates the Server's table_share,
and then waits for the MDL to become available.
If the session then attempts to access the table again, it can create
a new table_share, referring to the old schema.
If the BI later obtains the MDL and updates the DD then the Server
is left with :
  - New Ndb table
  - New DD Content
  - Old table_share state

This can cause additional problems if further DDLs are executed based
on this hybrid state.

The ndb_ddl_open_trans and ndb_rpl_ddl_open_trans testcases are
extended to trigger these situations with :
 - Inplace ALTER add column @ Server X, MDL @ Server Y
 - Inplace ALTER add index @ Server X, MDL @ Server Y
 - Inplace ALTER add column @ Server Y, MDL @ Server X
 - Inplace ALTER add index @ Server Y, MDL @ Server X

This results in schema view divergence, and eventually an
assertion failure due to divergence between the Server DD + NdbApi
views.

ha_ndbcluster is improved with extra self-checking of alignment
between DD and table_share (DEBUG only).

Extra table_share invalidation is added :
 - Under protection of MDL Exclusive lock
 - Executed after commit of MySQL DD schema transaction

This should ensure that
 - There are no concurrent sessions accessing the table
   (due to the MDL Exclusive lock)
 - The old table_share is discarded
 - Any concurrent sessions attempting to create a new
   table_share can only do so after the commit of the
   DD changes, so will use the new schema content.
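
A minimal sketch of such an invalidation step, assuming the server's
table definition cache API (tdc_remove_table() is a real server entry
point, but its use here is illustrative, not the actual patch):

```cpp
// Illustrative only -- not the actual ha_ndbcluster change.
// Precondition: exclusive MDL held on (db, table_name) and the DD
// schema transaction already committed.
static void ndb_invalidate_table_share(THD *thd, const char *db,
                                       const char *table_name) {
  // Flush every cached table_share for this table; the exclusive MDL
  // guarantees no concurrent user, and any session rebuilding the
  // share afterwards will read the committed DD content.
  tdc_remove_table(thd, TDC_RT_REMOVE_ALL, db, table_name, false);
}
```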

Change-Id: I7af2cff2929e07d07f84c12df4c2596c23dd9823
Change-Id: I02aaa73c419e96fbab1a25a431440ad546cf9983
Change-Id: Ie718f23a02989f73311ffb67f2d6c3c93aa76996
Change-Id: I0f920f655fa82b1d21a868a9f2978ca8dfbe0e79
Change-Id: Ie34598222bfb4debaad3f08fee83443b97de0d4f
Change-Id: I5caf6165d8731e8c1b6142f0182abecf15bbea02
Approved-by: Balasubramanian Kandasamy <[email protected]>
Change-Id: I538542e710b217ead2ca4e39cb5e01cae48f6537
Approved-by: Balasubramanian Kandasamy <[email protected]>
Change-Id: Iefe1067404e684046389ce7004c749d707ba2183
Change-Id: Ib67c929a81668552559080792876e59836c06e5b
Approved-by: Balasubramanian Kandasamy <[email protected]>
Change-Id: I9a838cbba47c500d23d26d4b962e96c775639537
Change-Id: I7ac39b0167f0c565a7f06c1899eaefadf5a578d2
Approved-by: Balasubramanian Kandasamy <[email protected]>
Change-Id: I9a838cbba47c500d23d26d4b962e96c775639537
(cherry picked from commit f0100d5395d160bea4274d4e43a1cdcc97c20f01)
In consume_optimizer_hints(): check for end-of-buffer before doing yyPeek().
End-of-buffer simply means that there is no hint to be consumed.
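
A minimal sketch of the guard, assuming the usual Lex_input_stream
helpers (eof(), yyPeek(), yySkip()); the real
consume_optimizer_hints() differs in detail:

```cpp
// Skip whitespace without peeking past the end of the buffer;
// reaching end-of-buffer just means there is no hint to consume.
while (!lip->eof() && my_isspace(cs, lip->yyPeek())) lip->yySkip();
if (lip->eof()) return;  // nothing to consume
// ... the existing hint-detection path can use yyPeek() safely here ...
```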

Change-Id: Ide42fcf5f1c08be182bda4ecb6c8bb2d5d0a248c
(cherry picked from commit 4f19b9a4d99fc087b93679dbdb6d0c7efcb67e51)
Change-Id: Ie17095a0b71529d88773982c241393ba36fba648
Running

  $ ./runtime_output_directory/routertest_component_logging \
    --gtest_filter=*RouterLoggingTestConfigFilenameLoggingFolder*

leaks a temp-dir in the current workdir named 'router-....' containing
one file.

The test wants to remove the temp-dir in ~TempRelativeDirectory and
assumes that mysql_harness::get_tmp_dir() returns a relative
directory, but get_tmp_dir() returns an absolute directory instead.

Change
======

- Ensure TempRelativeDirectory really creates relative temp-dir.
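
A sketch of the idea, with mysql_harness names used loosely (the
actual helper lives in the router test code):

```cpp
// Derive a relative temp-dir name from the absolute one that
// get_tmp_dir() hands back, so removal-by-relative-name works later.
std::string abs_dir = mysql_harness::get_tmp_dir("router");  // absolute
std::string rel_dir = mysql_harness::Path(abs_dir).basename().str();
mysql_harness::delete_dir(abs_dir);    // drop the absolute directory...
mysql_harness::mkdir(rel_dir, 0700);   // ...recreate it relative to cwd
```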

Change-Id: I3401ffe0bef2005df1411fbd73366b51e4f80baa
$ ./runtime_output_directory/routertest_harness_keyring_manager \
  --gtest_filter=KeyringManager.symlink_dir

leaks a tempdir containing the folders:

- subdir
- symlink

Background
==========

When the TemporaryDirectory is cleaned up, it calls

  delete_dir_recursive()

which deletes all the contents of a directory and then the directory
itself.

If the directory being deleted is a symlink-to-a-directory, the wrong
function is used to remove it; each kind needs its own call:

- rmdir() for directories (delete_dir())
- unlink() for symlinks-to-directories (delete_file())

Change
======

- call delete_file() if delete_dir() fails with "not-a-directory"
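
In POSIX terms the fallback looks roughly like this (the router code
goes through mysql_harness::delete_dir()/delete_file(); this
standalone sketch uses the raw syscalls):

```cpp
#include <cerrno>
#include <unistd.h>  // rmdir(), unlink()

// Remove an empty directory or a symlink-to-a-directory.
int remove_dir_or_symlink(const char *path) {
  if (::rmdir(path) == 0) return 0;             // plain directory
  if (errno == ENOTDIR) return ::unlink(path);  // symlink: remove link
  return -1;                                    // some other failure
}
```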

Change-Id: If1ee1c8dcfa467b9ffe94f2041cc4dc1f8fccd2e
When a routertest is interrupted with ctrl-c it may leave mysqlrouter or
mysql_server_mock processes behind.

Background
==========

If the ctrl-c is received between spawn-signal-handler ...

  193     signal_handler_.spawn_signal_handler_thread();

and add-sig-handler:

  294       signal_handler_.add_sig_handler(...

the process can no longer be shut down cleanly, requiring a SIGKILL.

Change
======

- register the signal-handler before starting the signal-handler-thread.
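
A sketch of the reordering (method names as quoted above; the handler
name is hypothetical):

```cpp
// Register first, spawn second: a ctrl-c delivered at any point after
// registration now has a handler to run.
signal_handler_.add_sig_handler(SIGINT, on_stop_requested);   // hypothetical
signal_handler_.add_sig_handler(SIGTERM, on_stop_requested);  // handler
signal_handler_.spawn_signal_handler_thread();  // previously called first
```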

Change-Id: I52a9fef3ae9381b22e4b7ccd8f6c125c30ae120a
               after adding it ..."

Symptoms:
---------
MySQL checks whether the maximum row size is within the permissible
limit whenever it opens a table, and prints a warning that says
"Cannot add field". This occurs with all statements like CREATE TABLE,
ALTER TABLE ADD COLUMN, ALTER TABLE DROP COLUMN, USE database and
SELECT. This warning leads to confusion in the case of ALTER TABLE
DROP COLUMN, USE database and SELECT statements, as they do not try to
add any field.

Also, when innodb_strict_mode=OFF, say we create a table with 5
columns (c1 - c5), where the row size is exceeded while adding c5.
MySQL prints "Cannot add field `c5`", but the table with column c5 is
created successfully. Now, if we try to do ALTER TABLE ADD COLUMN c6,
we still print the "Cannot add field `c5`" warning while trying to add
c6.

In the case of an ALTER TABLE command, MySQL prints 2 warnings, one
with the temp table name ( #sql-ibxxx-xxx ) and one with the actual
table name. This may also lead to confusion.

Root cause:
-----------
The method dict_index_validate_max_rec_size(), which checks the row
size, is called when we create, alter, or open a table, and prints the
same warning in all these cases.

Fix:
----
* Removed the warning/error from dict_index_validate_max_rec_size()
  and modified the caller methods to print the relevant error.
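
Roughly, the validator now only returns a status and each caller
prints a message matching its own context (a sketch; names beyond
dict_index_validate_max_rec_size() are assumptions and error numbers
are omitted):

```cpp
dberr_t err = dict_index_validate_max_rec_size(table, index, strict);
if (err != DB_SUCCESS) {
  if (adding_column != nullptr) {  // CREATE TABLE / ADD COLUMN path
    ib::warn(/* errcode */) << "Cannot add field '" << adding_column
                            << "' in table '" << table_name << "'";
  } else {  // open / DROP COLUMN / SELECT: no field is being added
    ib::warn(/* errcode */) << "Row size too large for table '"
                            << table_name << "'";
  }
}
```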

Change-Id: Idf13972744c827ea16311ca69cecfa041c470440
This patch is for 5.7, which is EOL, but we would still like to be
able to build it, to compare features/results in 5.7 vs newer
releases.

Disable -Wold-style-definition warnings in a few places

Change sig_return typedef in my_global.h to fix:
my_lock.c:200:19: error: assignment to 'sig_return' ...
-Wincompatible-pointer-types

One more -Wincompatible-pointer-types fix in a unit test.

Change-Id: Ia59e1c4b8d24ba8e678fa92ff2a1c67dae115866
ram1048 and others added 28 commits September 11, 2025 05:43
Description:
  The buffer pool of size innodb_buffer_pool_size is divided into
innodb_buffer_pool_instances instances and each instance is subdivided
into chunks of size innodb_buffer_pool_chunk_size. The chunks of each
instance are maintained using the buf_pool->n_chunks list, which is
allocated when initializing the buffer pool instance. The buffer pool
size, buffer pool chunk size and number of buffer pool instances are
configurable by the user.

Issue:
  When the buffer pool size is very large, it can lead to a large number
of chunks required by each buffer pool instance. The allocation of
buf_pool->n_chunks list may fail if the operating system is unable to
allocate the requested memory. This failure was not checked, as the list
of chunks is generally small compared to the buffer pool.

Fix:
  This commit introduces checks to ensure that n_chunks list is used
only if it has been allocated successfully. Furthermore, it is verified
that allocation failures at other places are already handled.
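
The added check has roughly this shape (a sketch; the allocation call
follows InnoDB's ut:: allocator style, but the member and error
handling details are approximations):

```cpp
// The chunk *list* allocation can itself fail on huge buffer pools,
// so test it before use instead of assuming success.
buf_pool->chunks = static_cast<buf_chunk_t *>(
    ut::zalloc_withkey(UT_NEW_THIS_FILE_PSI_KEY,
                       buf_pool->n_chunks * sizeof(buf_chunk_t)));
if (buf_pool->chunks == nullptr) {
  /* report the failure upwards instead of dereferencing a null list */
  return DB_OUT_OF_MEMORY;
}
```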

Change-Id: I1407d3d6d24a0cb6634876b59cae62e4a313c538
Problem:

When running CREATE/INSERT statements with the `sql_log_bin = 0`
session variable on an Ndb Binlog server, replication would fail due
to Table <X> not existing.

Analysis:

By running some checks that map inserted Table_map entries to the
corresponding row changes, it was found that some statements cause the
checks to fail.
the table had any changes to be logged. This is seen from the
any-value carried on the data signal from SUMA.

If some changes in the table result from more than one NdbOperation,
the multiple any-values are merged in the NdbApi and can be filtered.
In the Ndb plugin this filtering concludes, from the any-value
carried, that the changes are either a REPLICA (applied) UPDATE or NOT
a REPLICA UPDATE, and that either the NO-LOGGING flag was set (prior
to the statement) or NOT NO-LOGGING (i.e. some other any-value flag).

Statements that include BLOB (and the like) columns result in
subsequent NdbOperations to handle the BLOBs (reads/writes/deletes),
and these operations must include the same any-value as the main
NdbOperation of the statement. It was found that these NdbOperations,
created on-the-fly, do not have the any-value set on the Attribute
Info sections of the signal sent to the NDB Kernel.

Solution:

Apply setAnyValue on every subsequent Blob NdbOperation that inserts,
updates or deletes, by using the NdbBlob handle as a receiver of the
value from the main NdbOperation at the preExecution phase.
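
Conceptually, the pre-execution hook forwards the main operation's
any-value to each blob-part operation (a sketch; setAnyValue() is
public NdbApi, but the accessor and the list of part operations are
assumptions):

```cpp
#include <vector>
#include <NdbApi.hpp>

// At preExecute time, before signals are built, copy the main
// operation's any-value onto each blob-part operation so SUMA sees
// the same no-logging / replica flags for every part operation.
void forward_any_value(NdbOperation *main_op,
                       const std::vector<NdbOperation *> &blob_part_ops) {
  const Uint32 any_value = main_op->getAnyValue();  // assumed accessor
  for (NdbOperation *op : blob_part_ops)  // blob reads/writes/deletes
    op->setAnyValue(any_value);
}
```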

Change-Id: I730dcd84a406898dc28bfea993d91a49745624c4
Fixes bad binlog signalling, i.e., an event with anyValue != 0 but
with no clear special word, which would cause data events to not be
logged.

The issue was a missing initialization of m_any_value on the
NdbOperation that ultimately produced that data event, since it wasn't
used as it is now used (previously OF_FLAGS protected the assignment,
but with the NdbRecAttr API these cannot be used, so the NdbBlob
handle must rely on m_any_value != 0).

Change-Id: I16c620f8b2e7dbf6535adfd1e3c87df1048cc810
Problem:

ndb.ndb_restart_restore fails on solaris/sparc

The test takes a backup including a four-row mysql.ndb_sql_metadata.
Then it restores only mysql.ndb_sql_metadata from the node 1 backup.

On x86, node 2 happens to have only the snapshot row, which will be
recreated when the mysql server restarts.

mysql.ndb_sql_metadata will be seen to have 3 rows after restoring
node 1, and 4 rows after the server has started and the snapshot row
is created.

On sparc, node 2 happens to have the "GRANT NDB_STORED_USER" row,
which will not be recreated when the server starts.

So mysql.ndb_sql_metadata will have only 3 rows unless the node 2
backup is also restored.

Fix:

Also restore grants from node 2 backup.

Change-Id: Ib2d535ef2ecdc888462ff0e1d72cf49e393685ba
                happens when thread creation fails for parallel scan

Description:
When thread spawning fails during a parallel scan, the code falls back to single-threaded mode to read the data. However, it crashes when degrading to single-threaded mode.

Investigation:
When we encounter `DB_OUT_OF_RESOURCES` during parallel processing
(scan/check/count operations), we initiate `reader.run(0)`, which has
the following key steps:
   a) set `m_sync` to `true`
   b) release the unused threads (the configured max thread count
      minus the number of worker threads that will be spawned)
   c) create a new thread and start the read operation.

However, in `ddl0par-scan.cc`, before executing `reader.run(0)`, we
have already freed the unused threads with
`reader.release_threads(n_threads)`. As a result, when `reader.run(0)`
is executed, step (b) calls `release_threads` again and the assertion
`ut_a(active >= n_threads)` fails.

Fix:
Removing the duplicate call to `reader.release_threads(n_threads)`
before `reader.run(0)` solves the issue.
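
In `ddl0par-scan.cc` terms the fix is simply dropping the first of the
two releases (sketch):

```cpp
// before: the unused threads were released twice
//   reader.release_threads(n_threads);  // explicit release ...
//   err = reader.run(0);   // ... then run(0) releases again and
//                          //     ut_a(active >= n_threads) fires
// after: let run(0) perform the single release itself
err = reader.run(0);
```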

Change-Id: I4ea790cff6992587de2e1ba6b6a7f315b45e8bf0
…bject

When new MTR Router tests are run on pb2 they fail with the following error:

Error: Loading plugin for config-section '[metadata_cache:my_cluster]' failed:
libmysqlrouter_metadata_cache.so.1: cannot open shared object file: No such file or directory

Looking at the logs, one can see that the relevant file layout is as follows:
usr/lib/mysqlrouter/private/libmysqlrouter_routing.so
usr/lib/mysqlrouter/private/libmysqlrouter_metadata_cache.so.1
usr/lib/mysqlrouter/plugin/metadata_cache.so

The install script does this to set up rpath:
-- Set runtime path of "../debian/tmp/usr/lib/mysqlrouter/plugin/metadata_cache.so" to
"/usr/lib/mysqlrouter/private/:$ORIGIN/private:/usr/lib/mysqlrouter/plugin/:$ORIGIN/
:/usr/lib/mysqlrouter/private:$ORIGIN/../lib/mysqlrouter/private/"
-- Set runtime path of "../debian/tmp/usr/bin/mysqlrouter" to
"/usr/lib/mysqlrouter/private/:$ORIGIN/private:/usr/lib/mysqlrouter/plugin/:$ORIGIN/
:/usr/lib/mysqlrouter/private:$ORIGIN/../lib/mysqlrouter/private/"

You can see that no proper path is set from the plugin directory to
the libs in the private dir.

This patch uses the proper rpath setting that was already done, but
only conditionally; now it is always done for the LINUX platform.

Change-Id: I128eb757733a35235801049387beb958ad5f074d
(cherry picked from commit cb9f081a816b2b7f0da3ab3f79cc5385deeeaafc)
RPM builds on modern Linuxes are stricter about RPATH values for
binaries.  Any values containing $ORIGIN should come *before* absolute
paths.

The RPATH values of installed binaries are set according to the cmake
PROPERTY INSTALL_RPATH for each target.

Rewrite the cmake function ADD_INSTALL_RPATH to eliminate duplicate
entries in INSTALL_RPATH, and to always keep it sorted, so that
$ORIGIN entries come before entries starting with '/'.

Router cmake code sets CMAKE_INSTALL_RPATH, and this value will be
used to initialize INSTALL_RPATH for all router targets.  The cmake
list ROUTER_INSTALL_RPATH was identical to CMAKE_INSTALL_RPATH, but is
not needed, so remove it.

Also ensure that all paths in INSTALL_RPATH are stripped of the
trailing '/'.

Change-Id: Ib521e5ff5872c025383393177505f566e88ccd01
(cherry picked from commit 495e2988378620b934fc9784e5b1552d7b557368)
Additional patch for mysqld server:
Add an absolute path to RPATH so that the binary can run as setuid root.

Change-Id: I9111b9c173b1215775e505cd490d8452fe91bc7e
(cherry picked from commit 5bf432a57e2015267aed97590f0dfd810d63fd65)
- Add local logs for data node join/leave events

  Each data node logs when another data node joins/leaves
  the cluster, with the reason and resulting set of data
  nodes from their point of view.

  These logs help understand the causation and timeline
  of failure handling across the cluster.

- Improve API disconnect handling logging

  Enhance data node local logging of reasons for handling
  API failure.

  Reasons include :
  - Data node itself detected API HB failure
  - Data node itself detected API transporter disconnect
  - Data node requested to fail node by local block
    (SUMA, TC, CMVMI)
  - Data node requested to fail node by another data node
    (Failure propagation)

  These logs help understand the causation + timeline of
  API failure handling across the cluster.

Change-Id: Id7ac4433e9265a90feba3e88764b75822ebedb0f
The connection control component/plugin introduces a delay if the
number of failed login attempts for a given authID goes above the
threshold configured by the administrator.

This worklog implements a way for the following legal usecase to be
exempted from the delay:
 - connections that do not use the MySQL protocol, just probing if the
   server is up and running (for example, originating from a load
   balancer)

This commit ports the existing changes from connection_control
component to the connection_control plugin.

There are the following differences compared to the component code:
- system variable renamed from
  "component_connection_control.exempt_unknown_users" to
  "connection_control_exempt_unknown_users"
- status variable renamed from
 "Component_connection_control_exempted_unknown_users" to
 "Connection_control_exempted_unknown_users"

Change-Id: I189125f423c3a91df1096f344cd943df20f53964
Homebrew on macOS currently has protoc 6.32.1. It will generate
functions returning std::string_view, or absl::string_view, rather
than std::string. See
https://protobuf.dev/editions/features/#string_type

We have application code which assumes std::string. The fix is to
convert string_view to string where necessary.

Our .proto files should probably be upgraded at some point,
but for now, fix the application code instead.

Note that our "bundled" protobuf will likely be upgraded to something
similar to what Homebrew has now.
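
The conversion pattern at each call site is a one-liner; a generic
sketch (the message type is a stand-in, not a real generated class):

```cpp
#include <string>

// Works whether Msg::name() returns std::string, std::string_view or
// absl::string_view: explicit std::string construction converts all.
template <typename Msg>
std::string name_as_string(const Msg &msg) {
  return std::string{msg.name()};  // was: const std::string &name = msg.name();
}
```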

Change-Id: Ib2a39c69afd0e37c1e38dd20a8a9b1ece6d73903
(cherry picked from commit 61cbee439476c65a7fb9f7b79871e638f64b39e9)
Summary:

Prevent errors from lock corruption during TRUNCATE by not closing
and reopening the NDB shared table handler while the table is
recreated.

Problem:

When running continuous TRUNCATE SQL commands while concurrently
performing an NDB Backup, the MySQL Server could hang or ultimately
crash. The hang and crash occurred around the same code point, at
thr_lock(..), with `wait_for_lock` and `has_old_lock` respectively.

Analysis:

The order of operations of the TRUNCATE command, at the NDB plugin
layer, can be narrowed to three main operations:

1. CLOSE table (ha_ndbcluster::close)
2. CREATE TABLE (TRUNCATE <=> DROP AND CREATE with ha_ndbcluster::create)
3. OPEN table (ha_ndbcluster::open)

By observing the patterns, it is seen that if the Table undergoing
TRUNCATE is still under BACKUP then the DROP fails and thus step (2)
returns early. This causes the NDB_SHARE, a table handle similar to
TABLE_SHARE, to continue to exist. In the success case, DROP would
cause the NDB_SHARE to be marked as dropped (ref count = 0) and CREATE
would create a new NDB_SHARE. In the failed DROP case, the original
NDB_SHARE is not marked as dropped.

At this point, the OPEN step (3) is run and one of the operations is
to initialize the thr_lock data for the handler, linking it with the
NDB_SHARE thr_lock, and setting it to TL_UNLOCK. But the NDB_SHARE
and the handler's thr_lock data were locked (specifically
TL_WRITE_ALLOW_WRITE), therefore the aforementioned operation (setting
TL_UNLOCK) effectively tramples the current lock data.

The result is a badly cleaned and "unlocked" lock. Subsequent
operations on that Table that require SQL locks might hit two
problematic scenarios:

1. They find an old lock with an owner and wait for it, but that owner
has already exited and trampled the lock.

2. They find an old lock with bad data and SIGSEGV when reading the
owner.

Solution:

Going back through the call trace of the TRUNCATE command, the
following list is an approximate order of operations:

- OPEN table (SQL code)
- LOCK table (1) (SQL code)
- CLOSE table (NDB code)
- TRUNCATE table, i.e., DROP/CREATE (2) (NDB code)
- OPEN table (3) (NDB code)
- UNLOCK table (4) (SQL code)
- CLOSE TABLE (SQL code)

The Table is already opened before the SE's TRUNCATE is called.
Therefore, if the CLOSE and OPEN around TRUNCATE
(ha_ndbcluster::create) can be skipped, the LOCK structure remains
untouched. The Table is already CLOSED when UNLOCKED, so the resources
the HANDLER established when OPEN (buffers, etc.) are expected to be
freed.

More specifically:

On success:
- DROP will mark the existing (locked) NDB_SHARE as Dropped (not yet physically released as the session has a reference to it)
- OPEN will create a new NDB_SHARE (2 shares will exist)
- UNLOCK will remove locks from old NDB_SHARE
- CLOSE will reduce ref count on old NDB_SHARE decrementing to
zero + freeing it (1 share exists)

On failure:
- DROP will do nothing
- UNLOCK will remove locks from old NDB_SHARE
- CLOSE will reduce ref count on old NDB_SHARE, but it will be
retained (1 share exists)
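
A sketch of the "skip CLOSE/OPEN around TRUNCATE" idea (the flag and
its plumbing are assumptions, not the actual patch):

```cpp
// While a TRUNCATE is in flight, close() becomes a no-op so the
// NDB_SHARE and its thr_lock data are never re-initialized underneath
// an existing TL_WRITE_ALLOW_WRITE lock; open() is skipped likewise.
int ha_ndbcluster::close() {
  if (m_in_truncate) return 0;  // hypothetical flag set by TRUNCATE path
  // ... normal close path ...
}
```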

Change-Id: Ib2cbb3cf49ceac6ad2717e000b5b89beb026e4e4
…orrect

Description:
------------
In a source–replica setup, it was observed that the replica’s relay log
contained an incorrect logical clock when executing
'CREATE TABLE ... AS SELECT'.

Analysis:
------------
Currently, in replication transaction tracking, 'CREATE TABLE ... AS SELECT' is
treated as a DML statement. In reality, it is both DDL and DML, but the
write sets do not capture all dependencies correctly. As a result, a logical
clock mismatch occurs when processing 'CREATE TABLE ... AS SELECT', since
dependency tracking relies on write sets, which are inadequate for this case.

Fix:
----
'CREATE TABLE ... AS SELECT' mixes both DDL and DML.
Treat 'CREATE TABLE ... AS SELECT' as a DDL-like transaction to ensure correct
logical clock handling.

Change-Id: I8d393b647523b04fa422cfe468487da79bd329f3
…va [noclose]

Improve debuggability of clusterj tests by moving the JVM error log
to a location that will be captured as part of PB2's
vardirs-failures.

This makes it easier to determine the nature of
a JVM crash when analysing test results.

Change-Id: I55de95bed6c7cbc14cea29eefde200d67bea15d4
…a [noclose]

noclose is used because a further fix for this bug is expected.

Enhance ndb_desc with an --embedded-metadata / -m option which
will show the content of a table's embedded metadata, which is
normally either :
 v1 : Binary FRM content (< MySQL 8.0)
 v2 : Text SDI (JSON) content (>= MySQL 8.0)

Binary content is shown as hex digits + printable chars.
Text content is shown as text.

Example binary output

...
-- Indexes --
PRIMARY KEY(server_id) - UniqueHashIndex
-- Embedded metadata
Packed len : 346
Metadata version : 1
Unpacked length : 8716
Metadata begin
0x00000000:  0xfe 0x01 0x0a 0x0e 0x03 0x00 0x00 0x10
0x00000008:  0x01 0x00 0x00 0x30 0x00 0x00 0x69 0x01        0  i
...
0x000021f8:  0xff 0x73 0x74 0x61 0x72 0x74 0x5f 0x70      start_p
0x00002200:  0x6f 0x73 0xff 0x65 0x6e 0x64 0x5f 0x70     os end_p
0x00002208:  0x6f 0x73 0xff 0x00                         os

Metadata end

Example text output

...
-- Indexes --
PRIMARY KEY(a) - UniqueHashIndex
PRIMARY(a) - OrderedIndex
-- Embedded metadata
Packed len : 934
Metadata version : 2
Unpacked length : 6750
Metadata begin
{"mysqld_version_id":90500,"dd_version":90200,"sdi_version":80019,"dd_object_type":"Table","dd_object":{"name":"blah","mysql_version_id":90500,"created":20250908211905,"last_altered":20250908211905,
...
"engine":"ndbcluster","comment":"","options":"secondary_load=0;","se_private_data":"","values":[],"indexes":[{"options":"","se_private_data":"","index_opx":0}],"subpartitions":[]}],"collation_id":255}}
Metadata end

This allows ndb_desc to be used to understand how the embedded
metadata affects the behaviour of the cluster, especially around
e.g. version compatibility limits.

MTR test ndb_desc_extra is enhanced to give some coverage of
this new behaviour.

Change-Id: Ic51ab3b18aed6dac536eecf3b0f806f0342cf686
Approved by: Erlend Dahl <[email protected]>

Change-Id: I6c3a7ae6517b686bb67d4c75c10f9f05fd051677
(cherry picked from commit 4ecf182a4c39cfd7ebf92b98a408d470b85f80a4)
Approved by: Erlend Dahl <[email protected]>

Change-Id: Ica8d31df8a0bcda2f94c98dea65266f1a2b8d06f
(cherry picked from commit 3ec252a6414cd5b60f0fa9d1fc157ad9e6096797)
…_timeout`

Fix the following issue:

```
CURRENT_TEST: group_replication.gr_ssl_socket_timeout
--- /tmp/work/extract/mysql-test/suite/group_replication/r/gr_ssl_socket_timeout.result	2025-10-29 13:32:19.000000000 +0300
+++ /tmp/work/extract/mysql-test/var/6/log/gr_ssl_socket_timeout.reject	2025-10-29 14:18:42.397243291 +0300
@@ -37,7 +37,7 @@
 SET @@GLOBAL.group_replication_communication_debug_options= @group_replication_communication_debug_options_save;
 include/assert_grep.inc [Assert that the mysql connection has been ended by the server]
 include/assert_grep.inc [Assert that message about aborting the connection has been logged to GCS_DEBUG_TRACE file]
-ERROR 2013 (HY000): Lost connection to MySQL server at 'reading initial communication packet', system error: 2
+ERROR 2013 (HY000): Lost connection to MySQL server at 'reading initial communication packet', system error: 0
```
Revert the commit that causes the issue in MyRocks.
@inikep closed this Oct 30, 2025