Replies: 2 comments
-
There should be no difference if both pools use full stripes. In your case the inefficiency kicks in when a file block consists of an odd number of 1<<ashift-sized sectors: the overhead becomes 3x instead of 2x. The layout you have exhibits the worst case scenario for raidz. A large record size should minimize the number of such inefficiencies, but small files, files with blocks smaller than the record size, and compressed blocks can cause more of them.
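To make the 3x-vs-2x point concrete, here is a rough model of how raidz pads each block, assuming ashift=12 (4 KiB sectors) and the usual rule of one parity set per stripe row plus padding to a multiple of nparity+1 sectors. The helper name and the loop are only illustrative, an approximation of the allocation logic rather than the actual ZFS code:

```python
# Rough model of per-block raidz allocation (approximates the padding rule
# used by raidz; treat it as a sketch, not a reference implementation).
import math

def raidz_alloc_sectors(data_sectors, ndisks, nparity):
    """Sectors allocated for a block spanning `data_sectors` 1<<ashift sectors."""
    parity = nparity * math.ceil(data_sectors / (ndisks - nparity))
    total = data_sectors + parity
    return total + (-total) % (nparity + 1)   # pad to a multiple of nparity+1

# 256 x 4 KiB sectors == one full 1M record at ashift=12
for d in (1, 2, 3, 256):
    z1 = raidz_alloc_sectors(d, ndisks=3, nparity=1)
    z2 = raidz_alloc_sectors(d, ndisks=4, nparity=2)
    print(f"{d:4d} data sectors: raidz1 {z1:4d} ({z1/d:.2f}x), "
          f"raidz2 {z2:4d} ({z2/d:.2f}x)")
```

Under that model, a block of 1 to 3 data sectors costs 3x its size on the 4-wide raidz2 but 2x on the 3-wide raidz1, while a full 1M record costs about 2x and 1.5x respectively.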
-
Sure, I know about the inefficiencies of odd file blocks. The question is about the indicated available space even *before* anything has been written.
But I just noticed something even weirder. Despite no compression, every file on the new pool reports an "ls -s" size of around 97.2% of the equivalent file on the raidz1 pool, so maybe the whole space calculation differs by that amount. I'll let the copy progress and see whether the reported 9.6 TiB used on the raidz1 pool fits in the reported 9.3 TiB available on the raidz2.
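A possible explanation for both numbers, assuming (to be verified) that ZFS derives its pool-wide space accounting from how a 128 KiB reference block allocates on the vdev, with ashift=12. The helper below repeats the same illustrative padding approximation sketched in the previous comment:

```python
# Back-of-the-envelope sketch; the 128 KiB reference block and ashift=12 are
# assumptions, and raidz_alloc_sectors() is the same rough approximation of
# raidz padding as above, not the real allocation code.
import math

def raidz_alloc_sectors(data_sectors, ndisks, nparity):
    parity = nparity * math.ceil(data_sectors / (ndisks - nparity))
    total = data_sectors + parity
    return total + (-total) % (nparity + 1)   # pad to a multiple of nparity+1

SECT = 4096                                   # 1 << ashift
ref = (128 * 1024) // SECT                    # 32 sectors in a 128 KiB block

z1_ref = raidz_alloc_sectors(ref, 3, 1)       # 48 sectors on 3-wide raidz1
z2_ref = raidz_alloc_sectors(ref, 4, 2)       # 66 sectors on 4-wide raidz2

# Usable fraction of raw space implied by the 128 KiB reference block:
z1_deflate = ref / z1_ref                     # 32/48 = 0.667
z2_deflate = ref / z2_ref                     # 32/66 = 0.485

print(3 * z1_deflate, 4 * z2_deflate)         # 2.000 vs 1.939 disks -> ~97.0%

# Space a 1 MiB record is charged, relative to its logical 1 MiB:
rec = (1024 * 1024) // SECT                   # 256 sectors
print(raidz_alloc_sectors(rec, 3, 1) * z1_deflate / rec)   # 1.000
print(raidz_alloc_sectors(rec, 4, 2) * z2_deflate / rec)   # ~0.972, cf. ls -s
```

If that model holds, the ~3% smaller pool size and the ~97.2% "ls -s" sizes are two views of the same rounding: the 4-wide raidz2 needs 66 sectors for the 128 KiB reference block where its two data disks would suggest 64, while a 1M record needs 513 sectors versus the 528 that accounting assumes.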
-
I had a 3-drive RAIDZ1 setup and wanted to move the content onto a new 4-drive RAIDZ2 pool. All 7 drives are identical. I had naively assumed that the usable space on both setups would be the same, as the "stripe" width should be matched well to the drive count. But I get:
Filesystem 1K-blocks Used Available Use% Mounted on
raidz1pool/media 9620538368 9604131840 16406528 100% /raidz1pool/media
Filesystem 1K-blocks Used Available Use% Mounted on
raidz2pool/media 9324984320 64188416 9260795904 1% /raidz2pool/media
Obviously I won't be able to fit the "9604131840 used" of the old pool into the 9324984320 of space in the new pool. Why is there such a huge difference between the two pools (close to 300 GB)? I use 1M recordsize on both datasets (though that should not even come into play for the basic availability calculation).
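For reference, both layouts keep two data disks per stripe (2 data + 1 parity versus 2 data + 2 parity), so the naive expectation is indeed equal usable space; the df output above instead shows roughly a 3% gap. A quick arithmetic check, using only the figures printed above:

```python
# Pure arithmetic on the df figures above; no ZFS internals assumed.
raidz1_size = 9620538368      # 1K-blocks reported for raidz1pool/media
raidz2_size = 9324984320      # 1K-blocks reported for raidz2pool/media

print(raidz2_size / raidz1_size)               # ~0.969 -> about 3% smaller
print((raidz1_size - raidz2_size) / 2**20)     # ~282 GiB (~300 GB) gap
```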