1TB software RAID 5 with one hot spare.

Discussion in 'Linux, BSD and Other OS's' started by Impotence, Jul 23, 2008.

  1. Impotence

    I have been putting this off for a month because I've been worrying too much about doing it wrong... I would appreciate it if someone would take me through this step by step.

    Distribution
    • Kubuntu 8.04 KDE4 remix (x86)

    Hardware
    • 3 x 500GB SATA drives (sdc,sdd and sde) [unformatted, unmounted] to be RAID 5'd.
    • 1 x 500GB SATA drive (sdb) [ext3, /home, 99% full] for a hot spare.

    The hot spare drive cannot be added until the array has been built (and tested to death; I'm not wanting to lose 430GB of data!), as I do not have anywhere else to store what is currently on it.

    What I have done so far
    • Tied up the SATA data & power cables to stop them from moving
    • Used hot glue to prevent the SATA data & power connectors from coming loose from the HDDs / motherboard
    • Installed mdadm

    What I need to do (help!)
    • Create the RAID (rough command sketch after this list)
    • Set up RAID monitoring [audible alarm if a disk dies]
    • Create an LVM volume on it
    • Format the LVM volume (suggestions? something journalled?)
    • Create test data on the RAID (a copy of /home)
    • Selectively drop individual disks out of the array (I can't unplug them; DBAN them?) and learn to rebuild it
    • When I am confident that it is 'stable', mount it as /home
    • Add the old /home disk (sdb) to the array as a hot spare
    • Possibly use sdb to make a 1.5TB RAID once I have filled 1TB
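
    From what I've read so far, the commands behind these steps look roughly like the sketch below, but I'd like someone to confirm. /dev/md0, the volume group name vg_raid, the logical volume name lv_home and the alert script path are just names I've made up, and none of this has been run yet.

    Code:
    # Sketch only - device names follow the hardware list above; /dev/md0,
    # vg_raid, lv_home and the alert script path are placeholders.

    # 1. Create the 3-disk RAID 5 array
    sudo mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdc /dev/sdd /dev/sde

    # 2. Monitoring: mdadm can watch the array and run a program on failure
    sudo mdadm --monitor --scan --daemonise --program=/usr/local/bin/raid-alarm.sh

    # 3. LVM on top of the array
    sudo pvcreate /dev/md0
    sudo vgcreate vg_raid /dev/md0
    sudo lvcreate -l 100%FREE -n lv_home vg_raid

    # 4. Format the logical volume (ext3 shown as one option)
    sudo mkfs.ext3 /dev/vg_raid/lv_home

    # 5. Rebuild practice: mark a disk failed, remove it, then re-add it
    sudo mdadm /dev/md0 --fail /dev/sdc --remove /dev/sdc
    sudo mdadm /dev/md0 --add /dev/sdc

    # 6. Later on, add the old /home disk as a hot spare
    sudo mdadm /dev/md0 --add /dev/sdb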

    Questions
    • If I change distribution / when I do a clean install, how do I mount the RAID? What files (if any) do I need to keep a copy of to do it (such as the mdadm config file)?
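
    From what I can tell so far (again, unconfirmed), the array metadata lives in superblocks on the member disks themselves, so a new install should be able to find the array by scanning, and the config file mostly just preserves the array name; something like:

    Code:
    # Record the array definition on the current install (unconfirmed sketch):
    sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf

    # On a fresh install with mdadm present, assemble and re-activate LVM:
    sudo mdadm --assemble --scan
    sudo vgchange -ay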
     
  2. Impotence

    I have managed to do it myself. My thanks to anyone who took the time to read my post but was unable to help.

    I am currently writing up what I have learned in the form of an idiot-proof, distribution-independent tutorial... I will post a link when it is complete.

    I also plan to fill in https://wiki.ubuntu.com/LVMOnRaid when I have finished!
     
  3. Anti-Trend

    Sorry, didn't see this thread until just now. Glad you got it figured out though. On the bright side, I'm sure you learned more from researching it yourself anyway. :O
     
  4. Impotence

    Yeah, just one thing that I want to double-check though.

    When creating the LVM volume, I used the formula below (and then rounded up to 2, 4, 8, 16, 32 or 64) to calculate the physical extent size; for a 2TB RAID I worked out that I should set it to 32MB (rounded up from 30.048)... is this right?
    Code:
    RAID-SIZE*1000000000/1024/1024/65000
    
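    That is with RAID-SIZE in GB; plugging in 2048GB gives the 30.048 figure above. The rounded value would then be passed to vgcreate, roughly like this (vg_raid is just a placeholder name):

    Code:
    # Sanity check of the formula for a 2048GB array (integer maths, prints 30):
    echo $((2048 * 1000000000 / 1024 / 1024 / 65000))

    # Round up to 32 and hand it to vgcreate as the physical extent size:
    sudo vgcreate -s 32M vg_raid /dev/md0
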
     
  5. Anti-Trend

    Looks right to me. For simplicity's sake, this is more or less the process I used for my own software RAIDs in Linux: http://linuxhelp.blogspot.com/2005/04/creating-lvm-in-linux.html

    One thing to note however is that there are advantages and disadvantages to running an LVM on top of a RAID rather than simply formatting the RAID directly. If you run an LVM on top of the RAID, you will be unable to use the journalling capabilities of your filesystem. In other words, I hope you have a UPS. :) That said, LVM makes it quite a simple matter to grow or shrink your filesystem should you want to change the number of devices allocated for this purpose. So the bottom line is this: if you want to preserve the robustness of your journalled filesystem, don't do LVM at all, just format the RAID and mount it directly. If scalability is more important, go with LVM.
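
    To give an idea of what the grow case looks like in practice (the names are placeholders, and you'd want a backup and to check online-resize support before trying it on a mounted filesystem), it's roughly:

    Code:
    # Grow the logical volume by 500GB, then grow the filesystem to match
    # (placeholder names; shrinking is the same steps in the reverse order,
    # with the filesystem unmounted).
    sudo lvextend -L +500G /dev/vg_raid/lv_home
    sudo resize2fs /dev/vg_raid/lv_home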
     
  6. Impotence

    I spent nearly an hour in #LVM on freenode today trying to get this figured out...

    Turns out the PE limit was 65,535 (not 65,000) and that LVM2 has no limit on the number of PEs! So now I'm left trying to decide on a good size without a lot to base it on.
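
    For scale, the old 65,535-extent ceiling is what made the PE size matter in the first place; at 32MB per extent it works out to just under 2TB:

    Code:
    # Max volume size under a 65,535-extent limit with 32MB extents
    # (integer maths, prints 2047, i.e. roughly 2TB):
    echo $((65535 * 32 / 1024))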

    Could you explain why LVM renders journals useless? I'm having a hard time trying to work out why... and unfortunately nobody I have asked knows enough about it to have heard of that problem :/

    Saying that, I have been thinking about a UPS for a while; I just can't afford the damn things. I only just finished paying off a £500/$1000 overdraft that I blew on beer and pizza... perhaps I could spend it on something worthwhile next time the cash goes to my head!
     
  7. Anti-Trend

    On closer look, this was a problem specific to ext3, and it seems to have been fixed in 2.6.18 kernels and later, and possibly even backported into earlier kernels. I ran into that problem on RHEL 4, as it used a 2.6.9 kernel. XFS seems to be unaffected by this issue in either case, though it will automatically disable write barriers if it detects that it's running on LVM. See here:
    SGI - Developer Central Open Source | XFS
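
    If you want to see whether that is happening on a given setup, the kernel log is the place to look after mounting (the exact message wording varies between kernel versions):

    Code:
    # Check whether the kernel reported disabling write barriers at mount time:
    dmesg | grep -i barrier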
     
  8. Impotence

    ok, good to know :)

    I formatted it as ext3, and I also set the reserved blocks to 0% as it's going to be /home (reserving blocks for the superuser seemed a bit stupid, and the default 5% would have taken 45GB :eek:).
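
    For anyone following along, that reserved-block percentage is just the -m option at format time (the device path below is a placeholder for whatever your logical volume ends up being called):

    Code:
    # Format with 0% of blocks reserved for the superuser (placeholder path):
    sudo mkfs.ext3 -m 0 /dev/vg_raid/lv_home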

    Once this is done and documented, all I have left to do is copy everything across and mount it as /home... I'm not looking forward to cleaning up the formatting, spelling & grammar in the tutorial, but I'm glad it's almost finished :)
     
  9. Anti-Trend

    One thing to keep in mind with Ext3 is that without reserved blocks, it will fragment pretty badly when you get close to full capacity. That's one reason why I use XFS for my own 1.5TB array: I get the full capacity, and have a live defragmentation utility which I can cron to run monthly or so.
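
    The defragmenter would be xfs_fsr; a crontab entry along these lines is the sort of thing I mean (the schedule, time limit and path are just illustrative):

    Code:
    # Run xfs_fsr against /home at 3am on the 1st of each month,
    # capped at two hours of reorganising (illustrative schedule and path):
    0 3 1 * * /usr/sbin/xfs_fsr -t 7200 /home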
     
  10. Impotence

    If I remember correctly, fragmentation starts to become a problem on an ext3 partition when it is over 90% full. But I'm not too worried about it yet; I plan to add another TB to my RAID, and hopefully by the time I have filled that, ext4 will be stable (it includes an online defragmentation utility and is backwards compatible with ext3).

    I will put the reserved blocks at 1% in the tutorial, which should(?) give Linux enough to play with when people run out of disk space, without reserving too much of it (percentages are evil on large disks / RAIDs).
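
    For an already-formatted filesystem that is just a tune2fs one-liner (placeholder device path again):

    Code:
    # Change the reserved-block percentage to 1% on an existing ext3 filesystem:
    sudo tune2fs -m 1 /dev/vg_raid/lv_home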
     
