Concurrent Immix #311

Draft · tianleq wants to merge 22 commits into master

Conversation
tianleq (Collaborator, Author) commented Jul 28, 2025

Draft PR for concurrent immix

@@ -0,0 +1,536 @@
#define private public // too lazy to change openjdk...
tianleq (Collaborator, Author) commented:

This is copied from the lxr branch. Basically, when code needs to be patched, we have to call a private method. A proper solution would be to declare a friend class in OpenJDK.
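For reference, a minimal sketch of the friend-class alternative (the class and member names below are hypothetical, not the actual OpenJDK declarations):

// Instead of redefining `private`, declare the MMTk patching helper as a
// friend of the class whose private method must be called:
class NativeInstruction {
  friend class MMTkCodePatcher;      // hypothetical MMTk-side helper class
 private:
  void patch_code_at(address addr);  // hypothetical private patching method
};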

tianleq changed the title from WIP to Concurrent Immix on Jul 28, 2025
#define __ ideal.

void MMTkSATBBarrierSetC2::object_reference_write_pre(GraphKit* kit, Node* src, Node* slot, Node* pre_val, Node* val) const {
if (can_remove_barrier(kit, &kit->gvn(), src, slot, val, /* skip_const_null */ false)) return;
tianleq (Collaborator, Author) commented:

Be careful with barrier elimination when implementing this in the binding: target == null does not mean the barrier can be eliminated. This bug took me a while to find.
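To spell out why: a SATB (deletion) pre barrier must log the value being overwritten, not the value being written, so a null new value gives no grounds for elimination. A hedged pseudo-C++ sketch (helper names are illustrative):

void satb_pre_write(oop* slot, oop new_val) {
  oop old_val = *slot;                       // the value SATB must remember
  if (old_val != NULL) log_old_value(old_val);
  // new_val == NULL does not allow skipping this, unlike a generational
  // post barrier, where storing null creates no old->young edge.
}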

tianleq (Collaborator, Author) commented:

So this mmtkSATBBarrier is also an object-remembering barrier. In theory we should be able to reuse the existing mmtkObjectBarrier, but due to the current barrier API design, the mmtkObjectBarrier has the generational semantics baked in.

_pre_barrier_c1_runtime_code_blob = Runtime1::generate_blob(buffer_blob, -1, "mmtk_pre_write_code_gen_cl", false, &pre_write_code_gen_cl);
MMTkPostBarrierCodeGenClosure post_write_code_gen_cl;
_post_barrier_c1_runtime_code_blob = Runtime1::generate_blob(buffer_blob, -1, "mmtk_post_write_code_gen_cl", false, &post_write_code_gen_cl);
// MMTkBarrierCodeGenClosure write_code_gen_cl_patch_fix(true);
tianleq (Collaborator, Author) commented:

This was in the lxr branch. After discussing it with Wenyu, we believe it is redundant: the code patching has already been dealt with, so there is no need to do anything special here.

@@ -70,13 +72,16 @@ class MMTkBarrierSetRuntime: public CHeapObj<mtGC> {
static void object_reference_array_copy_pre_call(void* src, void* dst, size_t count);
tianleq (Collaborator, Author) commented Jul 29, 2025:

In OpenJDK, array copy does not provide the base addresses of the src and dst arrays, so nothing can be remembered. I have a patch in Iso that passes down the base addresses of both the src and dst arrays (they are required by the publication semantics). I am not sure whether it is worthwhile to have that in our OpenJDK fork.
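For illustration, a hedged sketch of what the extended hook from the Iso patch might look like (the extra base-address parameters are hypothetical):

// Current hook: only the slot ranges are visible, so the array objects
// themselves cannot be remembered.
static void object_reference_array_copy_pre_call(void* src, void* dst, size_t count);
// Hypothetical extension passing the array base addresses as well:
static void object_reference_array_copy_pre_call(void* src_base, void* src,
                                                 void* dst_base, void* dst,
                                                 size_t count);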

qinsoon (Member) commented Jul 29, 2025

All the tests failed. @tianleq Is there anything we need to do for running ConcurrentImmix?

tianleq (Collaborator, Author) commented Jul 29, 2025

> All the tests failed. @tianleq Is there anything we need to do for running ConcurrentImmix?

I did not test compressed pointers, as they are not supported in Iso yet. Other than that, the min heap is much larger because this is non-moving Immix.

qinsoon (Member) commented Jul 29, 2025

> > All the tests failed. @tianleq Is there anything we need to do for running ConcurrentImmix?
>
> I did not test compressed pointers, as they are not supported in Iso yet. Other than that, the min heap is much larger because this is non-moving Immix.

Thanks. I temporarily disabled compressed pointers for concurrent Immix. We should get compressed pointers working before merging the PR.

constexpr int kUnloggedValue = 1;

static inline intptr_t side_metadata_base_address() {
return UseCompressedOops ? SATB_METADATA_BASE_ADDRESS : SATB_METADATA_BASE_ADDRESS;
tianleq (Collaborator, Author) commented Jul 29, 2025:

This line is problematic, and I believe the crash is due to it. If compressed pointers are enabled, we probably need a different base address.

wks (Contributor) commented:

The address of the unlog bits is unrelated to whether we use compressed oops or not. It should be just SATB_METADATA_BASE_ADDRESS.

qinsoon (Member) commented Jul 29, 2025

OOM is expected, as concurrent Immix is non-moving.

Tianle and Wenyu clarified the issue about compressed pointers. Currently the SATB barrier computes the metadata address differently depending on whether compressed pointers are in use, which is incorrect. We should compute the metadata address the same way the object barrier does. The current implementation of the SATB barrier is derived from lxr, which uses a field barrier and has to deal with the different field slot sizes with and without compressed pointers.

qinsoon force-pushed the concurrent-immix branch 5 times, most recently from ea4c968 to fc9d143 on July 30, 2025 07:20
qinsoon force-pushed the concurrent-immix branch from fc9d143 to 5018b70 on July 30, 2025 08:00
qinsoon (Member) commented Jul 31, 2025

Current CI runs concurrent Immix with 4x min heap. There are still some benchmarks that ran out of memory. I will increase it to 5x.

There are a few correctness issues.

Segfault in mmtk_openjdk::abi::OopDesc::size: fastdebug h2, fastdebug jython

#
# A fatal error has been detected by the Java Runtime Environment:
#
#  SIGSEGV (0xb) at pc=0x00007ff207be7228, pid=3100, tid=3110
#
# JRE version: OpenJDK Runtime Environment (11.0.19) (fastdebug build 11.0.19-internal+0-adhoc.runner.openjdk)
# Java VM: OpenJDK 64-Bit Server VM (fastdebug 11.0.19-internal+0-adhoc.runner.openjdk, mixed mode, tiered, third-party gc, linux-amd64)
# Problematic frame:
# C  [libmmtk_openjdk.so+0x1e7228]  mmtk_openjdk::abi::OopDesc::size::h58760be027304211+0x18
#
# Core dump will be written. Default location: Core dumps may be processed with "/lib/systemd/systemd-coredump %P %u %g %s %t 9223372036854775808 %h %d" (or dumping to /tmp/runbms-kt4q31ff/core.3100)
#
# An error report file with more information is saved as:
# /tmp/runbms-kt4q31ff/hs_err_pid3100.log
#
# If you would like to submit a bug report, please visit:
#   https://bugreport.java.com/bugreport/crash.jsp
#
Current thread is 3110
Dumping core ...

old_queue is not empty: fastdebug pmd

thread '<unnamed>' panicked at /home/runner/.cargo/git/checkouts/mmtk-core-783748a1e19f117d/100c049/src/scheduler/scheduler.rs:719:13:
assertion failed: old_queue.is_empty()
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
thread '<unnamed>' panicked at core/src/panicking.rs:221:5:
panic in a function that cannot unwind
stack backtrace:
   0:     0x7f441a685fca - std::backtrace_rs::backtrace::libunwind::trace::h5a5b8284f2d0c266
                               at /rustc/90b35a6239c3d8bdabc530a6a0816f7ff89a0aaf/library/std/src/../../backtrace/src/backtrace/libunwind.rs:116:5
   1:     0x7f441a685fca - std::backtrace_rs::backtrace::trace_unsynchronized::h76d4f1c9b0b875e3
                               at /rustc/90b35a6239c3d8bdabc530a6a0816f7ff89a0aaf/library/std/src/../../backtrace/src/backtrace/mod.rs:66:5
   2:     0x7f441a685fca - std::sys::backtrace::_print_fmt::hc4546b8364a537c6
                               at /rustc/90b35a6239c3d8bdabc530a6a0816f7ff89a0aaf/library/std/src/sys/backtrace.rs:66:9
   3:     0x7f441a685fca - <std::sys::backtrace::BacktraceLock::print::DisplayBacktrace as core::fmt::Display>::fmt::h5b6bd5631a6d1f6b
                               at /rustc/90b35a6239c3d8bdabc530a6a0816f7ff89a0aaf/library/std/src/sys/backtrace.rs:39:26
   4:     0x7f441a6aa7d3 - core::fmt::rt::Argument::fmt::h270f6602a2b96f62
                               at /rustc/90b35a6239c3d8bdabc530a6a0816f7ff89a0aaf/library/core/src/fmt/rt.rs:177:76
   5:     0x7f441a6aa7d3 - core::fmt::write::h7550c97b06c86515
                               at /rustc/90b35a6239c3d8bdabc530a6a0816f7ff89a0aaf/library/core/src/fmt/mod.rs:1186:21
   6:     0x7f441a683693 - std::io::Write::write_fmt::h7b09c64fe0be9c84
                               at /rustc/90b35a6239c3d8bdabc530a6a0816f7ff89a0aaf/library/std/src/io/mod.rs:1839:15
   7:     0x7f441a685e12 - std::sys::backtrace::BacktraceLock::print::h2395ccd2c84ba3aa
                               at /rustc/90b35a6239c3d8bdabc530a6a0816f7ff89a0aaf/library/std/src/sys/backtrace.rs:42:9
   8:     0x7f441a686efc - std::panicking::default_hook::{{closure}}::he19d4c7230e07961
                               at /rustc/90b35a6239c3d8bdabc530a6a0816f7ff89a0aaf/library/std/src/panicking.rs:268:22
   9:     0x7f441a686d42 - std::panicking::default_hook::hf614597d3c67bbdb
                               at /rustc/90b35a6239c3d8bdabc530a6a0816f7ff89a0aaf/library/std/src/panicking.rs:295:9
  10:     0x7f441a6874d7 - std::panicking::rust_panic_with_hook::h8942133a8b252070
                               at /rustc/90b35a6239c3d8bdabc530a6a0816f7ff89a0aaf/library/std/src/panicking.rs:801:13
  11:     0x7f441a687336 - std::panicking::begin_panic_handler::{{closure}}::hb5f5963570096b29
                               at /rustc/90b35a6239c3d8bdabc530a6a0816f7ff89a0aaf/library/std/src/panicking.rs:667:13
  12:     0x7f441a6864a9 - std::sys::backtrace::__rust_end_short_backtrace::h6208cedc1922feda
                               at /rustc/90b35a6239c3d8bdabc530a6a0816f7ff89a0aaf/library/std/src/sys/backtrace.rs:170:18
  13:     0x7f441a686ffc - rust_begin_unwind
                               at /rustc/90b35a6239c3d8bdabc530a6a0816f7ff89a0aaf/library/std/src/panicking.rs:665:5
  14:     0x7f4419ec8e2d - core::panicking::panic_nounwind_fmt::runtime::h1f507a806003dfb2
                               at /rustc/90b35a6239c3d8bdabc530a6a0816f7ff89a0aaf/library/core/src/panicking.rs:112:18
  15:     0x7f4419ec8e2d - core::panicking::panic_nounwind_fmt::h357fc035dc231634
                               at /rustc/90b35a6239c3d8bdabc530a6a0816f7ff89a0aaf/library/core/src/panicking.rs:122:5
  16:     0x7f4419ec8ec2 - core::panicking::panic_nounwind::hd0dad372654c389a
                               at /rustc/90b35a6239c3d8bdabc530a6a0816f7ff89a0aaf/library/core/src/panicking.rs:221:5
  17:     0x7f4419ec9025 - core::panicking::panic_cannot_unwind::h65aefd062253eb19
                               at /rustc/90b35a6239c3d8bdabc530a6a0816f7ff89a0aaf/library/core/src/panicking.rs:310:5
  18:     0x7f441a2a81ec - start_worker
                               at /home/runner/work/mmtk-openjdk/mmtk-openjdk/git/mmtk-openjdk/mmtk/src/api.rs:210:1
  19:     0x7f441c33492b - _ZN6Thread8call_runEv
  20:     0x7f441c0356c6 - _ZL19thread_native_entryP6Thread
  21:     0x7f441cc94ac3 - <unknown>
  22:     0x7f441cd26850 - <unknown>
  23:                0x0 - <unknown>

Segfault in mark_lines_for_object: release h2, release jython

This could be the same issue as the first one (mmtk_openjdk::abi::OopDesc::size): mark_lines_for_object needs the object size, and getting it is what segfaults.

#
# A fatal error has been detected by the Java Runtime Environment:
#
#  SIGSEGV (0xb) at pc=0x00007f4edd5e4d4d, pid=2945, tid=2952
#
# JRE version: OpenJDK Runtime Environment (11.0.19) (build 11.0.19-internal+0-adhoc.runner.openjdk)
# Java VM: OpenJDK 64-Bit Server VM (11.0.19-internal+0-adhoc.runner.openjdk, mixed mode, tiered, third-party gc, linux-amd64)
# Problematic frame:
# C  [libmmtk_openjdk.so+0x1e4d4d]  mmtk::policy::immix::line::Line::mark_lines_for_object::h6c322a8017999525+0xd
#
# Core dump will be written. Default location: Core dumps may be processed with "/lib/systemd/systemd-coredump %P %u %g %s %t 9223372036854775808 %h %d" (or dumping to /tmp/runbms-k75bcqcl/core.2945)
#
# An error report file with more information is saved as:
# /tmp/runbms-k75bcqcl/hs_err_pid2945.log
#
# If you would like to submit a bug report, please visit:
#   https://bugreport.java.com/bugreport/crash.jsp
#

tianleq (Collaborator, Author) commented Jul 31, 2025

> Current CI runs concurrent Immix with 4x min heap. There are still some benchmarks that ran out of memory. I will increase it to 5x.
>
> There are a few correctness issues. [...]

I am aware that 4x is not enough; jython OOMs even with 5x, and this is also true for the stop-the-world non-moving Immix. The jython crash is what I saw when the barrier was incorrectly eliminated; I fixed that and never saw it again. Looking at my test run, I only see OOMs on jython. I will try to reproduce those locally.

wks (Contributor) requested changes on Jul 31, 2025:

I guess the code was copied from the field-logging barrier in the lxr branch. When not using compressed oops, each field is 64 bits, and the granularity of the field-logging bits is one bit per 8 bytes. But when using compressed oops, fields become 32 bits, and the field-logging bits become one bit per 4 bytes. That's why the shift changes between 5 and 6 depending on whether compressed oops is enabled.

But here we are working on the object-remembering barrier, using the regular global unlog bits. Their granularity is only related to the object alignment, not the field size. On 64-bit machines, objects are always 64-bit aligned, so we should always shift by 6 when computing the address of the unlog bits; similarly, when computing the in-byte shift, we should always shift by 3.

Change this and the segmentation fault disappears, even when using compressed oops.
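Concretely, a sketch of the fixed computation under those assumptions (one unlog bit per 8 bytes of heap, hence one metadata byte per 64 heap bytes; the bit ordering within the byte is assumed, and the helper name is illustrative):

static inline bool is_unlogged(void* obj) {
  intptr_t addr = (intptr_t) obj;
  // One metadata byte covers 64 bytes of heap: always shift by 6.
  uint8_t meta = *(uint8_t*) (SATB_METADATA_BASE_ADDRESS + (addr >> 6));
  // One bit per 8 bytes: the in-byte bit index comes from address bits 3..5.
  int shift = (addr >> 3) & 7;
  return ((meta >> shift) & 1) == kUnloggedValue;
}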


tianleq (Collaborator, Author) commented Jul 31, 2025

> I guess the code was copied from the field-logging barrier in the lxr branch. [...] Change this and the segmentation fault disappears, even when using compressed oops.

Yes, @qinsoon and I were aware of this.

wks (Contributor) commented Jul 31, 2025

I also observed a crash during a full GC (after patching the shifting operations for unlog bits as I suggested above).

[2025-07-31T03:38:33Z INFO  mmtk::util::heap::gc_trigger] [POLL] immix: Triggering collection (20484/20480 pages)
[2025-07-31T03:38:33Z INFO  mmtk::util::heap::gc_trigger] [POLL] immix: Triggering collection (20484/20480 pages)
[2025-07-31T03:38:33Z INFO  mmtk::policy::immix::defrag] Defrag: false
Full start
#
# A fatal error has been detected by the Java Runtime Environment:
#
#  SIGSEGV (0xb) at pc=0x00007ffff4ffca18, pid=103107, tid=103139
#
# JRE version: OpenJDK Runtime Environment (11.0.19) (fastdebug build 11.0.19-internal+0-adhoc.wks.openjdk)
# Java VM: OpenJDK 64-Bit Server VM (fastdebug 11.0.19-internal+0-adhoc.wks.openjdk, mixed mode, tiered, compressed oops, third-party gc, linux-amd64)
# Problematic frame:
# C  [libmmtk_openjdk.so+0x1fca18]  mmtk_openjdk::abi::OopDesc::size::hacb6815c8bcd90cb+0x18
#
...
Thread 27 "MMTk Collector " received signal SIGABRT, Aborted.
[Switching to Thread 0x7fff82be36c0 (LWP 103139)]
0x00007ffff7d6e5df in abort () from /usr/lib/libc.so.6
(gdb) bt
#28 0x00007ffff70f60e2 in signalHandler (sig=11, info=0x7fff82be10b0, uc=0x7fff82be0f80)
    at /home/wks/projects/mmtk-github/parallels/review/concurrent-immix/openjdk/src/hotspot/os/linux/os_linux.cpp:4917
#29 <signal handler called>
#30 mmtk_openjdk::abi::OopDesc::size<true> (self=0x401e93a8) at src/abi.rs:390
#31 0x00007ffff4ff1682 in mmtk_openjdk::object_model::{impl#0}::get_current_size<true> (object=...) at src/object_model.rs:66
#32 0x00007ffff4f4f578 in mmtk::policy::immix::line::Line::mark_lines_for_object<mmtk_openjdk::OpenJDK<true>> (object=..., state=92)
    at /home/wks/projects/mmtk-github/parallels/review/concurrent-immix/mmtk-core/src/policy/immix/line.rs:70
#33 0x00007ffff4f5a0a6 in mmtk::policy::immix::immixspace::ImmixSpace<mmtk_openjdk::OpenJDK<true>>::mark_lines<mmtk_openjdk::OpenJDK<true>> (self=0x7ffff00b11c0, object=...)
    at /home/wks/projects/mmtk-github/parallels/review/concurrent-immix/mmtk-core/src/policy/immix/immixspace.rs:790
#34 0x00007ffff4f5cb50 in mmtk::policy::immix::immixspace::ImmixSpace<mmtk_openjdk::OpenJDK<true>>::trace_object_without_moving<mmtk_openjdk::OpenJDK<true>, mmtk::plan::tracing::VectorQueue<mmtk::util::address::ObjectReference>> (self=0x7ffff00b11c0, queue=0x7fff82be2518, object=...) at /home/wks/projects/mmtk-github/parallels/review/concurrent-immix/mmtk-core/src/policy/immix/immixspace.rs:654
#35 0x00007ffff4f59656 in mmtk::policy::immix::immixspace::{impl#3}::trace_object<mmtk_openjdk::OpenJDK<true>, mmtk::plan::tracing::VectorQueue<mmtk::util::address::ObjectReference>, 0> (
    self=0x7ffff00b11c0, queue=0x7fff82be2518, object=..., copy=..., worker=0x7ffff020db70)
    at /home/wks/projects/mmtk-github/parallels/review/concurrent-immix/mmtk-core/src/policy/immix/immixspace.rs:244
#36 0x00007ffff52943d2 in mmtk::plan::concurrent::immix::global::{impl#10}::trace_object<mmtk_openjdk::OpenJDK<true>, mmtk::plan::tracing::VectorQueue<mmtk::util::address::ObjectReference>, 0> (
    self=0x7ffff00b11c0, __mmtk_queue=0x7fff82be2518, __mmtk_objref=..., __mmtk_worker=0x7ffff020db70)
    at /home/wks/projects/mmtk-github/parallels/review/concurrent-immix/mmtk-core/src/plan/concurrent/immix/global.rs:45
#37 0x00007ffff51c6f62 in mmtk::scheduler::gc_work::{impl#39}::trace_object<mmtk_openjdk::OpenJDK<true>, mmtk::plan::concurrent::immix::global::ConcurrentImmix<mmtk_openjdk::OpenJDK<true>>, 0> (
    self=0x7fff82be2500, object=...) at /home/wks/projects/mmtk-github/parallels/review/concurrent-immix/mmtk-core/src/scheduler/gc_work.rs:997
#38 0x00007ffff4fd39e8 in mmtk::vm::reference_glue::{impl#0}::keep_alive<mmtk::scheduler::gc_work::PlanProcessEdges<mmtk_openjdk::OpenJDK<true>, mmtk::plan::concurrent::immix::global::ConcurrentImmix<mmtk_openjdk::OpenJDK<true>>, 0>> (self=0x7fff2c018b60, trace=0x7fff82be2500) at /home/wks/projects/mmtk-github/parallels/review/concurrent-immix/mmtk-core/src/vm/reference_glue.rs:81
#39 0x00007ffff502c66e in mmtk::util::finalizable_processor::FinalizableProcessor<mmtk::util::address::ObjectReference>::forward_finalizable_reference<mmtk::util::address::ObjectReference, mmtk::scheduler::gc_work::PlanProcessEdges<mmtk_openjdk::OpenJDK<true>, mmtk::plan::concurrent::immix::global::ConcurrentImmix<mmtk_openjdk::OpenJDK<true>>, 0>> (e=0x7fff82be2500, finalizable=0x7fff2c018b60)
    at /home/wks/projects/mmtk-github/parallels/review/concurrent-immix/mmtk-core/src/util/finalizable_processor.rs:41
#40 0x00007ffff502c3a6 in mmtk::util::finalizable_processor::{impl#0}::forward_finalizable::{closure#0}<mmtk::util::address::ObjectReference, mmtk::scheduler::gc_work::PlanProcessEdges<mmtk_openjdk::OpenJDK<true>, mmtk::plan::concurrent::immix::global::ConcurrentImmix<mmtk_openjdk::OpenJDK<true>>, 0>> (f=0x7fff2c018b60)
    at /home/wks/projects/mmtk-github/parallels/review/concurrent-immix/mmtk-core/src/util/finalizable_processor.rs:92
#41 0x00007ffff50f7656 in core::slice::iter::{impl#190}::for_each<mmtk::util::address::ObjectReference, mmtk::util::finalizable_processor::{impl#0}::forward_finalizable::{closure_env#0}<mmtk::util::address::ObjectReference, mmtk::scheduler::gc_work::PlanProcessEdges<mmtk_openjdk::OpenJDK<true>, mmtk::plan::concurrent::immix::global::ConcurrentImmix<mmtk_openjdk::OpenJDK<true>>, 0>>> (self=..., f=...)
    at /home/wks/.rustup/toolchains/1.83.0-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/slice/iter/macros.rs:254
#42 0x00007ffff502bdaf in mmtk::util::finalizable_processor::FinalizableProcessor<mmtk::util::address::ObjectReference>::forward_finalizable<mmtk::util::address::ObjectReference, mmtk::scheduler::gc_work::PlanProcessEdges<mmtk_openjdk::OpenJDK<true>, mmtk::plan::concurrent::immix::global::ConcurrentImmix<mmtk_openjdk::OpenJDK<true>>, 0>> (
    self=0x7ffff5950448 <<mmtk_openjdk::SINGLETON_COMPRESSED as core::ops::deref::Deref>::deref::__stability::LAZY+8>, e=0x7fff82be2500, _nursery=false)
    at /home/wks/projects/mmtk-github/parallels/review/concurrent-immix/mmtk-core/src/util/finalizable_processor.rs:90
#43 0x00007ffff502e8a9 in mmtk::util::finalizable_processor::FinalizableProcessor<mmtk::util::address::ObjectReference>::scan<mmtk::util::address::ObjectReference, mmtk::scheduler::gc_work::PlanProcessEdges<mmtk_openjdk::OpenJDK<true>, mmtk::plan::concurrent::immix::global::ConcurrentImmix<mmtk_openjdk::OpenJDK<true>>, 0>> (
    self=0x7ffff5950448 <<mmtk_openjdk::SINGLETON_COMPRESSED as core::ops::deref::Deref>::deref::__stability::LAZY+8>, tls=..., e=0x7fff82be2500, nursery=false)
    at /home/wks/projects/mmtk-github/parallels/review/concurrent-immix/mmtk-core/src/util/finalizable_processor.rs:74
#44 0x00007ffff5012e54 in mmtk::util::finalizable_processor::{impl#1}::do_work<mmtk::scheduler::gc_work::PlanProcessEdges<mmtk_openjdk::OpenJDK<true>, mmtk::plan::concurrent::immix::global::ConcurrentImmix<mmtk_openjdk::OpenJDK<true>>, 0>> (self=0x1, worker=0x7ffff020db70, mmtk=0x7ffff5950440 <<mmtk_openjdk::SINGLETON_COMPRESSED as core::ops::deref::Deref>::deref::__stability::LAZY>)
    at /home/wks/projects/mmtk-github/parallels/review/concurrent-immix/mmtk-core/src/util/finalizable_processor.rs:164
#45 0x00007ffff50427ad in mmtk::scheduler::work::GCWork::do_work_with_stat<mmtk::util::finalizable_processor::Finalization<mmtk::scheduler::gc_work::PlanProcessEdges<mmtk_openjdk::OpenJDK<true>, mmtk::plan::concurrent::immix::global::ConcurrentImmix<mmtk_openjdk::OpenJDK<true>>, 0>>, mmtk_openjdk::OpenJDK<true>> (self=0x1, worker=0x7ffff020db70, 
    mmtk=0x7ffff5950440 <<mmtk_openjdk::SINGLETON_COMPRESSED as core::ops::deref::Deref>::deref::__stability::LAZY>)
    at /home/wks/projects/mmtk-github/parallels/review/concurrent-immix/mmtk-core/src/scheduler/work.rs:45
#46 0x00007ffff505733f in mmtk::scheduler::worker::GCWorker<mmtk_openjdk::OpenJDK<true>>::run<mmtk_openjdk::OpenJDK<true>> (self=0x7ffff020db70, tls=..., 
    mmtk=0x7ffff5950440 <<mmtk_openjdk::SINGLETON_COMPRESSED as core::ops::deref::Deref>::deref::__stability::LAZY>)
    at /home/wks/projects/mmtk-github/parallels/review/concurrent-immix/mmtk-core/src/scheduler/worker.rs:257
#47 0x00007ffff5273065 in mmtk::memory_manager::start_worker<mmtk_openjdk::OpenJDK<true>> (
    mmtk=0x7ffff5950440 <<mmtk_openjdk::SINGLETON_COMPRESSED as core::ops::deref::Deref>::deref::__stability::LAZY>, tls=..., worker=0x7ffff020db70)
    at /home/wks/projects/mmtk-github/parallels/review/concurrent-immix/mmtk-core/src/memory_manager.rs:559
#48 0x00007ffff4fb6df3 in mmtk_openjdk::api::start_worker (tls=..., worker=0x7ffff020db70) at src/api.rs:213
#49 0x00007ffff7439130 in Thread::call_run (this=0x7ffff0239000) at /home/wks/projects/mmtk-github/parallels/review/concurrent-immix/openjdk/src/hotspot/share/runtime/thread.cpp:402
#50 0x00007ffff7105a2e in thread_native_entry (thread=0x7ffff0239000) at /home/wks/projects/mmtk-github/parallels/review/concurrent-immix/openjdk/src/hotspot/os/linux/os_linux.cpp:826
#51 0x00007ffff7dde7eb in ?? () from /usr/lib/libc.so.6
#52 0x00007ffff7e6220c in ?? () from /usr/lib/libc.so.6

It has something to do with finalizers.

qinsoon (Member) commented Jul 31, 2025

> I also observed a crash during a full GC (after patching the shifting operations for unlog bits as I suggested above).
>
> It has something to do with finalizers.

The initial implementation did not schedule the finalization-related packets in the final pause. This commit should have fixed that: mmtk/mmtk-core@2bbb200

qinsoon (Member) commented Jul 31, 2025

Two tests are still failing with mmtk/mmtk-core@2bbb200.

old_queue is not empty: fastdebug pmd

thread '<unnamed>' panicked at /home/runner/.cargo/git/checkouts/mmtk-core-783748a1e19f117d/100c049/src/scheduler/scheduler.rs:719:13:
assertion failed: old_queue.is_empty()
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
thread '<unnamed>' panicked at core/src/panicking.rs:221:5:
panic in a function that cannot unwind
stack backtrace:
   0:     0x7f441a685fca - std::backtrace_rs::backtrace::libunwind::trace::h5a5b8284f2d0c266
                               at /rustc/90b35a6239c3d8bdabc530a6a0816f7ff89a0aaf/library/std/src/../../backtrace/src/backtrace/libunwind.rs:116:5
   1:     0x7f441a685fca - std::backtrace_rs::backtrace::trace_unsynchronized::h76d4f1c9b0b875e3
                               at /rustc/90b35a6239c3d8bdabc530a6a0816f7ff89a0aaf/library/std/src/../../backtrace/src/backtrace/mod.rs:66:5
   2:     0x7f441a685fca - std::sys::backtrace::_print_fmt::hc4546b8364a537c6
                               at /rustc/90b35a6239c3d8bdabc530a6a0816f7ff89a0aaf/library/std/src/sys/backtrace.rs:66:9
   3:     0x7f441a685fca - <std::sys::backtrace::BacktraceLock::print::DisplayBacktrace as core::fmt::Display>::fmt::h5b6bd5631a6d1f6b
                               at /rustc/90b35a6239c3d8bdabc530a6a0816f7ff89a0aaf/library/std/src/sys/backtrace.rs:39:26
   4:     0x7f441a6aa7d3 - core::fmt::rt::Argument::fmt::h270f6602a2b96f62
                               at /rustc/90b35a6239c3d8bdabc530a6a0816f7ff89a0aaf/library/core/src/fmt/rt.rs:177:76
   5:     0x7f441a6aa7d3 - core::fmt::write::h7550c97b06c86515
                               at /rustc/90b35a6239c3d8bdabc530a6a0816f7ff89a0aaf/library/core/src/fmt/mod.rs:1186:21
   6:     0x7f441a683693 - std::io::Write::write_fmt::h7b09c64fe0be9c84
                               at /rustc/90b35a6239c3d8bdabc530a6a0816f7ff89a0aaf/library/std/src/io/mod.rs:1839:15
   7:     0x7f441a685e12 - std::sys::backtrace::BacktraceLock::print::h2395ccd2c84ba3aa
                               at /rustc/90b35a6239c3d8bdabc530a6a0816f7ff89a0aaf/library/std/src/sys/backtrace.rs:42:9
   8:     0x7f441a686efc - std::panicking::default_hook::{{closure}}::he19d4c7230e07961
                               at /rustc/90b35a6239c3d8bdabc530a6a0816f7ff89a0aaf/library/std/src/panicking.rs:268:22
   9:     0x7f441a686d42 - std::panicking::default_hook::hf614597d3c67bbdb
                               at /rustc/90b35a6239c3d8bdabc530a6a0816f7ff89a0aaf/library/std/src/panicking.rs:295:9
  10:     0x7f441a6874d7 - std::panicking::rust_panic_with_hook::h8942133a8b252070
                               at /rustc/90b35a6239c3d8bdabc530a6a0816f7ff89a0aaf/library/std/src/panicking.rs:801:13
  11:     0x7f441a687336 - std::panicking::begin_panic_handler::{{closure}}::hb5f5963570096b29
                               at /rustc/90b35a6239c3d8bdabc530a6a0816f7ff89a0aaf/library/std/src/panicking.rs:667:13
  12:     0x7f441a6864a9 - std::sys::backtrace::__rust_end_short_backtrace::h6208cedc1922feda
                               at /rustc/90b35a6239c3d8bdabc530a6a0816f7ff89a0aaf/library/std/src/sys/backtrace.rs:170:18
  13:     0x7f441a686ffc - rust_begin_unwind
                               at /rustc/90b35a6239c3d8bdabc530a6a0816f7ff89a0aaf/library/std/src/panicking.rs:665:5
  14:     0x7f4419ec8e2d - core::panicking::panic_nounwind_fmt::runtime::h1f507a806003dfb2
                               at /rustc/90b35a6239c3d8bdabc530a6a0816f7ff89a0aaf/library/core/src/panicking.rs:112:18
  15:     0x7f4419ec8e2d - core::panicking::panic_nounwind_fmt::h357fc035dc231634
                               at /rustc/90b35a6239c3d8bdabc530a6a0816f7ff89a0aaf/library/core/src/panicking.rs:122:5
  16:     0x7f4419ec8ec2 - core::panicking::panic_nounwind::hd0dad372654c389a
                               at /rustc/90b35a6239c3d8bdabc530a6a0816f7ff89a0aaf/library/core/src/panicking.rs:221:5
  17:     0x7f4419ec9025 - core::panicking::panic_cannot_unwind::h65aefd062253eb19
                               at /rustc/90b35a6239c3d8bdabc530a6a0816f7ff89a0aaf/library/core/src/panicking.rs:310:5
  18:     0x7f441a2a81ec - start_worker
                               at /home/runner/work/mmtk-openjdk/mmtk-openjdk/git/mmtk-openjdk/mmtk/src/api.rs:210:1
  19:     0x7f441c33492b - _ZN6Thread8call_runEv
  20:     0x7f441c0356c6 - _ZL19thread_native_entryP6Thread
  21:     0x7f441cc94ac3 - <unknown>
  22:     0x7f441cd26850 - <unknown>
  23:                0x0 - <unknown>

Segfault in ConcurrentTraceObjects: release h2

# A fatal error has been detected by the Java Runtime Environment:
#
#  SIGSEGV (0xb) at pc=0x00007efc7b3b5fae, pid=3135, tid=3145
#
# JRE version: OpenJDK Runtime Environment (11.0.19) (build 11.0.19-internal+0-adhoc.runner.openjdk)
# Java VM: OpenJDK 64-Bit Server VM (11.0.19-internal+0-adhoc.runner.openjdk, mixed mode, tiered, third-party gc, linux-amd64)
# Problematic frame:
# C  [libmmtk_openjdk.so+0x1b5fae]  _$LT$mmtk..plan..concurrent..concurrent_marking_work..ConcurrentTraceObjects$LT$VM$GT$$u20$as$u20$mmtk..scheduler..work..GCWork$LT$VM$GT$$GT$::do_work::h164fad0e16f7cb14+0x25e
#
# Core dump will be written. Default location: Core dumps may be processed with "/lib/systemd/systemd-coredump %P %u %g %s %t 9223372036854775808 %h %d" (or dumping to /tmp/runbms-0y88hzri/core.3135)
#
# An error report file with more information is saved as:
# /tmp/runbms-0y88hzri/hs_err_pid3135.log
[thread 3142 also had an error]
#
# If you would like to submit a bug report, please visit:
#   https://bugreport.java.com/bugreport/crash.jsp
#

wks and others added 3 commits July 31, 2025 12:38
Because this is an object-remembering barrier, the alignment is only related to object alignment, not field alignment. So we don't need to test whether compressed oops is enabled.
wks dismissed their stale review on July 31, 2025 07:01: "I have pushed related changes."

wks (Contributor) commented Jul 31, 2025

qinsoon (Member) commented Jul 31, 2025

h2 failed with 6x min heap (OOM). Now I have changed the heap size to 7x and enabled compressed pointers for the ConcurrentImmix tests.

qinsoon (Member) commented Aug 6, 2025

If we enable weak reference processing, GC threads get stuck during reference enqueueing.

MMTk's reference processor will be holding a lock during reference enqueueing, and will call the binding's mmtk_enqueue_reference while holding the lock.
https://github.com/tianleq/mmtk-openjdk/blob/b120bae7814c5a4ccc2228476d0b17d8879be554/openjdk/mmtkUpcalls.cpp#L300

mmtk_enqueue_reference calls oop_store_at, which triggers the write barrier here:
https://github.com/tianleq/mmtk-openjdk/blob/b120bae7814c5a4ccc2228476d0b17d8879be554/openjdk/barriers/mmtkSATBBarrier.cpp#L36
The write barrier slow path eventually calls object_probable_write_slow which scans this object.
https://github.com/tianleq/mmtk-core/blob/028bd0e109fb747867db83d1782872576292deef/src/plan/concurrent/barrier.rs#L150

During the scanning, it encounters a field that points to itself (a weak reference), and will try to add that as a weak reference to MMTk.
https://github.com/tianleq/mmtk-openjdk/blob/b120bae7814c5a4ccc2228476d0b17d8879be554/mmtk/src/object_scanning.rs#L126

MMTk currently holds the lock on the weak reference processor, and tries to acquire the same lock again to add the weak reference. This causes a deadlock.


I am not sure what needs to be fixed here.

Clearly, by the time we are enqueueing references, weak reference processing should have been done. Thus we should not push another weak reference, because it will not be processed in this GC. This could potentially be fixed by setting the reference processor to disallow new references before reference enqueueing.

But I am not sure whether it is reasonable for the write barrier to be triggered here, or for the object to be scanned here.

@tianleq

k-sareen (Collaborator) commented Aug 6, 2025

I think the fix is to check whether we are currently marking concurrently, and only fire the write barrier if we are. If the concurrent marker is not active, the write barrier is not required, since that means either the write is happening during a pause (where the barrier is unnecessary) or no GC is currently in progress, in which case we don't need to keep track of old references.

If, however, we end up making this into a generational GC somehow (using sticky mark bits, I guess), then we may always need a barrier active, in which case the above solution will not work.
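A minimal sketch of that guard, assuming a hypothetical query for the concurrent-marking phase (the class, method, and helper names are illustrative, not the actual binding API):

void MMTkSATBBarrierSetRuntime::object_reference_write_pre(oop src, oop* slot, oop target) {
  // Hypothetical check: the SATB barrier only needs to remember old
  // references while concurrent marking is active.
  if (!mmtk_concurrent_marking_in_progress()) return;
  object_reference_write_pre_call((void*) src, (void*) slot, (void*) target);
}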

tianleq (Collaborator, Author) commented Aug 6, 2025

> If we enable weak reference processing, GC threads get stuck during reference enqueueing. [...] But I am not sure whether it is reasonable for the write barrier to be triggered here, or for the object to be scanned here.

This seems to be an issue with the object-remembering barrier: it requires us to remember all of an object's reference fields when the object is first encountered. In this particular case, there is no need to keep track of this reference.
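For context, a hedged sketch of what an object-remembering SATB slow path does, which is why enqueueing a Reference ends up scanning it (helper names are hypothetical; the real logic lives in mmtk-core's src/plan/concurrent/barrier.rs):

// On the first write to an object since marking began, remember ALL of its
// outgoing references, then mark it logged so later writes take the fast path.
void object_probable_write_slow(oop obj) {
  if (attempt_to_log(obj)) {               // flips the unlog bit exactly once
    for_each_reference_field(obj, [](oop* slot) {
      satb_enqueue(*slot);                 // includes the weak referent field,
    });                                    // which re-enters the reference
  }                                        // processor while its lock is held
}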

qinsoon (Member) commented Aug 6, 2025

This commit follows what G1 does for the pre-write barrier: https://github.com/mmtk/openjdk/blob/28e56ee32525c32c5a88391d0b01f24e5cd16c0f/src/hotspot/share/gc/g1/g1BarrierSet.inline.hpp#L38. Just skip the barrier for AS_NO_KEEPALIVE.

qinsoon (Member) commented Aug 6, 2025

> This commit follows what G1 does for the pre-write barrier: [...] Just skip the barrier for AS_NO_KEEPALIVE.

This doesn't work due to #313. I just changed MMTk core to work around the issue in #311 (comment).
