Conversation

@kpd-daemon kpd-daemon bot commented Aug 12, 2025

Pull request for series with
subject: md/raid1,raid10: don't broken array on failfast metadata write fails
version: 1
url: https://patchwork.kernel.org/project/linux-raid/list/?series=990464

@kpd-daemon kpd-daemon bot commented Aug 12, 2025

Upstream branch: c17fb54
series: https://patchwork.kernel.org/project/linux-raid/list/?series=990464
version: 1

A super_write IO failure with MD_FAILFAST must not cause the array
to fail.

A failfast bio may fail even when the rdev is not broken, so the IO
must be retried rather than the array being failed when a metadata
write with MD_FAILFAST fails on the last rdev.

A metadata write with MD_FAILFAST is retried after failure as
follows (a model of this flow follows the list):

1. In super_written, MD_SB_NEED_REWRITE is set in sb_flags.

2. In md_super_wait, which the caller of md_super_write uses to
wait for completion, -EAGAIN is returned because
MD_SB_NEED_REWRITE is set.

3. The caller of md_super_wait (such as md_update_sb)
receives a negative return value and then retries md_super_write.

4. The md_super_write function, which is called to perform
the same metadata write, issues a write bio without MD_FAILFAST
this time.
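
To make these steps concrete, here is a minimal userspace C model of
the retry flow, not kernel code: the names mirror super_written,
md_super_write, and md_super_wait in drivers/md/md.c, but the bodies,
the simulate_transient_failure parameter, and the hard-coded -11
(-EAGAIN) are illustrative assumptions.

/* Model of the MD_FAILFAST metadata-write retry flow (steps 1-4). */
#include <stdbool.h>
#include <stdio.h>

#define MD_FAILFAST (1u << 0)   /* stand-in for the failfast bio flag */

static bool need_rewrite;       /* models MD_SB_NEED_REWRITE in sb_flags */
static bool last_dev;           /* models the per-rdev LastDev flag */

/* Step 1: a failed failfast write requests a rewrite instead of
 * failing the array. */
static void super_written(bool write_failed, unsigned int opf)
{
	if (write_failed && (opf & MD_FAILFAST)) {
		need_rewrite = true;
		last_dev = true;        /* next attempt must not use failfast */
	}
}

/* Step 4: failfast is only used while LastDev is clear, so the retry
 * is issued without MD_FAILFAST. */
static void md_super_write(bool simulate_transient_failure)
{
	unsigned int opf = last_dev ? 0 : MD_FAILFAST;

	printf("issue sb write, failfast=%d\n", !!(opf & MD_FAILFAST));
	/* a failfast bio may fail even though the rdev is healthy */
	super_written(simulate_transient_failure && (opf & MD_FAILFAST), opf);
}

/* Step 2: -EAGAIN tells the caller that a rewrite is needed. */
static int md_super_wait(void)
{
	if (need_rewrite) {
		need_rewrite = false;
		return -11;             /* -EAGAIN */
	}
	return 0;
}

int main(void)
{
	/* Step 3: the caller (md_update_sb in the kernel) retries on a
	 * negative return value. */
	do {
		md_super_write(true);
	} while (md_super_wait() < 0);

	puts("superblock written; the array was not failed");
	return 0;
}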

When a write from super_written without MD_FAILFAST fails,
the array may be broken, and MD_BROKEN should be set.

After commit 9631abd ("md: Set MD_BROKEN for RAID1 and RAID10"),
calling md_error on the last rdev in RAID1/10 always sets
the MD_BROKEN flag on the array.
As a result, when a failfast IO fails on the last rdev, the array
immediately fails.

This commit prevents MD_BROKEN from being set when a super_write with
MD_FAILFAST fails on the last rdev, ensuring that the array does
not fail because of failfast IO failures.

A failfast IO failure on any rdev other than the last one is not
retried; the rdev is marked Faulty immediately. This minimizes array
IO latency when an rdev fails.

Fixes: 9631abd ("md: Set MD_BROKEN for RAID1 and RAID10")
Signed-off-by: Kenta Akagi <[email protected]>
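
The fix itself is in the series linked above; as a rough model of the
intended behavior (not the actual diff), the error path can be read
as follows. The structs and the on_super_write_error helper are
hypothetical stand-ins introduced for illustration.

/* Model: a failed metadata write only breaks the array when it hits
 * the last rdev AND the write was not failfast. */
#include <stdbool.h>
#include <stdio.h>

struct rdev  { bool faulty; bool failfast_bio; };
struct array { int working; bool broken; bool need_rewrite; };

static void on_super_write_error(struct array *a, struct rdev *r)
{
	if (a->working == 1 && r->failfast_bio) {
		/* last rdev, failfast write: retry without failfast */
		a->need_rewrite = true;
		return;
	}
	if (a->working == 1) {
		/* last rdev, non-failfast write: the array really is gone */
		a->broken = true;
		return;
	}
	/* any other rdev: mark it Faulty at once to keep IO latency low */
	r->faulty = true;
	a->working--;
}

int main(void)
{
	struct array a = { .working = 1 };
	struct rdev  r = { .failfast_bio = true };

	on_super_write_error(&a, &r);
	printf("broken=%d need_rewrite=%d\n", a.broken, a.need_rewrite);
	return 0;
}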
@kpd-daemon kpd-daemon bot commented Aug 17, 2025

Upstream branch: c17fb54
series: https://patchwork.kernel.org/project/linux-raid/list/?series=992296
version: 2

mgmlme added 2 commits August 17, 2025 17:32
Once MD_BROKEN is set on an array, no further writes can be
performed to it.
The user must be informed that the array cannot continue operation.

Signed-off-by: Kenta Akagi <[email protected]>
Since commit 9a56784 ("md: allow last device to be forcibly
removed from RAID1/RAID10."), RAID1/10 arrays can lose all rdevs.

Before that commit, losing the array's last rdev, or reaching the end
of raid{1,10}_error without an early return, never occurred. Both
situations can occur in the current implementation.

As a result, when mddev->fail_last_dev is set, a spurious pr_crit
message can be printed.

This patch prevents the "Operation continuing" message from being
printed when the array is not operational.

root@fedora:~# mdadm --create --verbose /dev/md0 --level=1 \
--raid-devices=2  /dev/loop0 /dev/loop1
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
mdadm: size set to 1046528K
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
root@fedora:~# echo 1 > /sys/block/md0/md/fail_last_dev
root@fedora:~# mdadm --fail /dev/md0 loop0
mdadm: set loop0 faulty in /dev/md0
root@fedora:~# mdadm --fail /dev/md0 loop1
mdadm: set device faulty failed for loop1:  Device or resource busy
root@fedora:~# dmesg | tail -n 4
[ 1314.359674] md/raid1:md0: Disk failure on loop0, disabling device.
               md/raid1:md0: Operation continuing on 1 devices.
[ 1315.506633] md/raid1:md0: Disk failure on loop1, disabling device.
               md/raid1:md0: Operation continuing on 0 devices.
root@fedora:~#

Fixes: 9a56784 ("md: allow last device to be forcibly removed from RAID1/RAID10.")
Signed-off-by: Kenta Akagi <[email protected]>
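
As with the previous patch, the real change is in the series; the
sketch below only illustrates the message selection described above.
The report_disk_failure helper and the "no devices remain" wording
are hypothetical, not taken from the patch.

/* Print "Operation continuing" only while the array is operational. */
#include <stdio.h>

static void report_disk_failure(const char *md, const char *dev, int working)
{
	printf("md/raid1:%s: Disk failure on %s, disabling device.\n", md, dev);
	if (working > 0)
		printf("md/raid1:%s: Operation continuing on %d devices.\n",
		       md, working);
	else
		printf("md/raid1:%s: Array is failed, no devices remain.\n", md);
}

int main(void)
{
	report_disk_failure("md0", "loop0", 1); /* normal failure */
	report_disk_failure("md0", "loop1", 0); /* fail_last_dev on last rdev */
	return 0;
}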
@kpd-daemon kpd-daemon bot added V2 and removed V1 V1-ci-pass labels Aug 17, 2025
@kpd-daemon kpd-daemon bot force-pushed the series/990464=>md-6.16 branch from 17a3273 to 7b74857 on August 17, 2025 17:32