Caution: Kernel 5.3.4 and RAID0 default_layout

If you have a RAID0 array made up of two or more devices that are not the same size, you will have trouble booting, at least on Arch Linux since kernel version 5.3.4.
You will get a message along these lines in the kernel log (the md device name will differ):
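
    md/raid0:md126: cannot assemble multi-zone RAID0 with default_layout setting
    md/raid0: please set raid0.default_layout to 1 or 2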


The explanation in the commit message is:
raid0 default_layout commit

md/raid0: avoid RAID0 data corruption due to layout confusion.

If the drives in a RAID0 are not all the same size, the array is
divided into zones.
The first zone covers all drives, to the size of the smallest.
The second zone covers all drives larger than the smallest, up to
the size of the second smallest – etc.

A change in Linux 3.14 unintentionally changed the layout for the
second and subsequent zones. All the correct data is still stored, but
each chunk may be assigned to a different device than in pre-3.14 kernels.
This can lead to data corruption.

It is not possible to determine what layout to use – it depends which
kernel the data was written by.
So we add a module parameter to allow the old (0) or new (1) layout to be
specified, and refused to assemble an affected array if that parameter is
not set.
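
As a concrete illustration: with three member drives of 1 TB, 2 TB and 3 TB, the first zone stripes over all three drives (the first 1 TB of each), the second zone stripes over the two larger drives, and the third zone lives on the largest drive alone. In the second zone, a pre-3.14 kernel and a post-3.14 kernel may place the same chunk on different drives, which is exactly the confusion the refusal to assemble protects you from.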

In order to know which layout to use, you will have to check when the RAID was created.
In the case of Arch Linux you'll be dropped to an emergency shell, where you're able to run the following commands:
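
For example (a sketch, assuming mdadm is available in the emergency shell; replace /dev/sda2 with one of your RAID member partitions):

    cat /proc/mdstat
    mdadm --examine /dev/sda2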

And receive results similar to the following
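
(illustrative, trimmed output; the line that matters is Creation Time)

    /dev/sda2:
              Magic : a92b4efc
            Version : 1.2
      Creation Time : Sat Mar  2 14:05:01 2013
         Raid Level : raid0
       Raid Devices : 2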

There you see the date the RAID was created and can backtrack which kernel was current when you created it.
You can see the kernel release history at the following link: https://kernelnewbies.org/LinuxVersions

Linux 3.14: released 30 March 2014 (70 days)

If your RAID's creation date is significantly later than that, say by a year or more, you can be sure to set default_layout to 2.

The million-dollar question is: how do you set it to 1 or 2?

Via a kernel command line parameter. Your grub.cfg probably looks something like this:
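
(a trimmed sketch; the kernel image paths and the UUID are placeholders for your own)

    linux /boot/vmlinuz-linux root=UUID=<your-root-uuid> rw quiet
    initrd /boot/initramfs-linux.img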

To be able to boot again, press the E key when GRUB lets you select which kernel to boot and append the following to the kernel command line parameters:
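
    raid0.default_layout=2

(or raid0.default_layout=1 if the array was created on a kernel older than 3.14)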

so that the line looks similar to this:
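
    linux /boot/vmlinuz-linux root=UUID=<your-root-uuid> rw quiet raid0.default_layout=2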

Once you have booted you can make the change permanent by editing
/etc/default/grub
to hold
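
(keep whatever options are already in that variable and just add the parameter; "quiet" here is only an example)

    GRUB_CMDLINE_LINUX_DEFAULT="quiet raid0.default_layout=2"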

As Rodney Ricks pointed out in the comments,
you have to run something along the lines of
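
(on Arch Linux; other distributions use grub2-mkconfig and a different output path, see the openSUSE comment below)

    grub-mkconfig -o /boot/grub/grub.cfg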

or similar, to write the updated GRUB configuration to disk.

Alternative

While you're still in the emergency shell, type
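
(a sketch, assuming the raid0 module parameter is exposed writable under sysfs; pick the value according to the legend below)

    echo 2 > /sys/module/raid0/parameters/default_layout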

0 = not set
1 = original, pre 3.14
2 = new, post 3.14

then
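
(roughly; re-attempt the assembly and leave the emergency shell so the boot can continue, the exact steps depend on your initramfs)

    mdadm --assemble --scan
    exit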

and see if everything seems to be good. Check thoroughly.

16 Replies to “Caution: Kernel 5.3.4 and RAID0 default_layout”

  1. Thank you!! I spent more than two days searching for a solution.
    After updating my Arch nothing worked anymore. Because of the error message I thought LVM was the problem. Obviously it wasn’t…

    1. Great to hear 🙂 Absolutely awful how they handled that. No warning or anything, just bam: unbootable system.
      What if you’re not dual booting or do not have a smartphone?
      What if you have more than 1 array with different creation dates and layouts?
      I backed up all my important data and will have to re-install in any case. That’s another day or 2 wasted on the OS instead of doing something productive.
      Ah well, thanks for feedback and enjoy 🙂

  2. Thanks!

    In openSUSE Leap, the alternative version didn’t work, and you have to run the following after adding “raid0.default_layout=2” to the end of GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub, as pointed out in the first line of the file:

    grub2-mkconfig -o /boot/grub2/grub.cfg

  3. 110% perfect guide!
    Thank you so much!!

    This helped me get a remote server (using RAID0) fixed when it failed to boot due to the missing kernel parameter after a Debian 9 to 10 dist-upgrade.

  4. Thanks a million, Icod. There we go. Had to get a bunch of n-device RAID0 arrays straight after that silent change.

    Kind regards

  5. Thanks! You saved me. After I upgraded my bootloader and kernel, my RAID0 device was unreadable. Setting the layout to 1 fixed the problem for me (this RAID device was built on an old, old kernel).

  6. This worked nicely on a server with a RAID that was set up in 2011, forgotten about, reused in 2020 with a Debian Buster install, forgotten again, upgraded to Debian Bullseye in 2022, and after rebooting, voilà! Couldn’t mount the RAID.

    Thank you for your post!
