Saturday, January 29, 2011

soft/fake raid in linux

original
I don't have much experience with Linux, but this is a good opportunity to learn.

I'm setting up a simple database server and I would like to know if Ubuntu Server 9.10 (what do you guys recommend for a beginner server distribution?) would work with hardware RAID-1 on this motherboard (there is no Linux RAID driver listed on the vendor's download page):

http://www.foxconnchannel.com/product/Motherboards/detail_spec.aspx?ID=en-us0000346

edit

After some tips I discovered that what I was calling RAID is actually fake RAID. I also found some articles about running Linux on fake RAID using dmraid, and software RAID was suggested as well. Since the performance/capabilities are almost the same, I need help with another question:

Which one is easier to set up and will automatically recover and/or boot from one disk after a failure?

Keep in mind that I'm no expert, so if something is very hard to configure I prefer to stay away, at least for now.

Thanks in advance

Arthur

  • You may want to familiarize yourself with the UbuntuHCL (Hardware Compatibility List). Specifically, the motherboard list and the storage controller list.

    arthurprs : Thanks for the quick reply, I found the board listed at http://www.ubuntuhcl.org/browse/product+foxconn-a6vmx?id=1674 but there's no info there :/
  • Normally you wouldn't need any drivers, as a hardware RAID controller presents the RAID device as a single physical device to your operating system. So you'd see /dev/sda, but in fact it is made of two or more disks (see the sketch after this answer).

    Mirroring parameters etc. are all controlled from the RAID controller firmware, which you can access during the server POST (this is when you hit keys to get into the BIOS, etc.). Check the MB manual for how to configure the RAID device. Alternatively, just pay attention to the boot messages printed on the screen.

    With regards to your question about the server OS, I'd recommend looking at CentOS, which is basically a recompiled Red Hat Enterprise Linux. This is what the "big guys" are using... :)

    arthurprs : So I shouldn't have any problems? This is great news :D!
    pulegium : yeah, you should be OK, as the majority of hardware RAID controllers operate that way. Having said that, I've downloaded the MB manual and haven't seen a RAID configuration section. It does talk about the standard MB BIOS stuff and doesn't mention RAID at all. Normally that's OK, as the RAID controller utility is separate from the MB BIOS tool. Found this http://forum.egypt.com/enforum/hardware-networking-f198/problem-foxconn-raid-setup-19303.html which indicates that the hw RAID configuration does indeed exist (and you can get there with Ctrl-F), so yeah, all should be fine :)
    Wesley 'Nonapeptide' : @pulegium concerning your statement: "Normally you wouldn't need any drivers" Unless I'm misunderstanding something, you need drivers to access the RAID controller itself to be able to see the volume that it creates from multiple drives. Thus when installing a new server OS you usually need some kind of disk with the RAID controller's drivers on it for the installation to proceed because in many cases it won't even see a hard drive unless it has some compatible RAID controller drivers already in the installation files.
    pulegium : @Wesley, what I meant here is that you normally don't need any additional, controller-specific drivers to enable the h/w controller. I wanted to say that the SAS drivers supplied by the majority of Linux distributions would do the trick.
    From pulegium
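
    A minimal sketch of how you could check what pulegium describes, assuming you can boot any Linux live environment on the box: if the controller really is hardware RAID, the kernel should see one disk rather than the two mirror members. It just reads /proc/partitions; the device names it prints are whatever your system reports.

        #!/usr/bin/env python
        # Sketch: list the whole-disk block devices the kernel sees.
        # With true hardware RAID-1 you would expect a single disk (e.g. sda);
        # with fake RAID the two member disks show up individually.

        def list_disks(path="/proc/partitions"):
            disks = []
            with open(path) as f:
                for line in f:
                    fields = line.split()
                    # keep data rows only, and skip partition rows (names
                    # ending in a digit, e.g. sda1) -- a rough heuristic
                    if len(fields) == 4 and fields[0].isdigit() \
                            and not fields[3][-1].isdigit():
                        disks.append(fields[3])
            return disks

        if __name__ == "__main__":
            for name in list_disks():
                print(name)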
  • If you mean the RAID controller built into the motherboard, I'd AVOID IT. It's not true hardware RAID.

    Motherboard RAID is regarded as the worst kind of RAID: it is motherboard specific, and there are several accounts online of the motherboard simply losing the RAID configuration and hosing volumes. In the end, if you're trying to get RAID on the less expensive but capable side, use the software RAID built into Linux.

    True hardware RAID is cached and will cost you in the wallet, but it costs more for a reason. Motherboard RAID is often just software RAID in firmware, except that it can make the volume specific to that machine. Drive dies or hardware issue? You can't necessarily recover the data by moving the disks to another system, since the motherboard may have done something odd to the formatting of the disk volume.

    If you're looking for hardware RAID with Linux, I've had good luck with 3Ware controllers, and if you don't want to spend the cash, use software RAID. Comes free with Linux.

    Bart Silverstrim : For more info, google "Fake RAID" to see what pops up.
    arthurprs : Thanks for the clarification, please see the edited question.
    Bart Silverstrim : If you run software RAID, I'll mention that (through this site) there have been questions about locating failed drives; there can be many headaches when you get a logged failure of a drive but don't know which physical disk is which. You'll want to google a bit or search this site for questions about failed software RAID drives, and see some tips about labeling your drives with serial numbers and software identification so you can tell which is which when you have a failure (see the sketch below)! :-) Hardware RAID will often have LEDs and labeled cables that tell you which is which without guessing.
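
    Following up on Bart's labelling tip, here is a minimal sketch (assuming a udev-based distro that populates /dev/disk/by-id) that maps each kernel disk name to its by-id links, which embed the drive model and serial number. Printing this out and taping it next to the drives makes it much easier to pull the right disk when software RAID logs a failure.

        #!/usr/bin/env python
        # Map kernel disk names (sda, sdb, ...) to their /dev/disk/by-id
        # entries, which include the drive model and serial number.
        import os

        BY_ID = "/dev/disk/by-id"

        def serial_map():
            mapping = {}
            for entry in os.listdir(BY_ID):
                if "-part" in entry:   # skip partition links, keep whole disks
                    continue
                target = os.path.realpath(os.path.join(BY_ID, entry))
                mapping.setdefault(os.path.basename(target), []).append(entry)
            return mapping

        if __name__ == "__main__":
            for dev, ids in sorted(serial_map().items()):
                print("%s -> %s" % (dev, ", ".join(ids)))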
  • I have always stayed away from onboard desktop board controllers (integrated server ones are a different ballpark); horror stories of incremental data corruption, shoddy drivers, etc. have had an effect. I would go for either an Adaptec (or similar) card that starts at around £100, or go for software RAID.

    If this is a small deployment, I would choose software RAID: it is pretty easy to manage and you have the flexibility of being able to mount half of a RAID mirror on virtually any Linux machine. Plus it is free, works out of the box, and is relatively well battle-tested. The main selling point for me is being able to manage it completely from inside the OS, no reboot required (see the sketch after this answer).

    In terms of OS, Ubuntu Server is pretty good and lightweight; however, I would recommend going for an LTS version. Alternatively, as suggested, CentOS is a great server OS; it will have slightly older package sets, but you get a thoroughly tested product as a result.

    Wesley 'Nonapeptide' : In my opinion, integrated server controllers are just as poor and should be avoided when possible. ::casts contemptuous glance at an HP ML115::
    Frenchie : If you're giving consideration to Ubuntu LTS, also give consideration to Debian which has naturally longer release cycles and you can roll from one release to the next (which is something Ubuntu LTS releases have lacked but is promised for future releases)
    From Tom Werner
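
    To give a feel for the "manage it completely from inside the OS" point above, here is a rough sketch of creating and inspecting a Linux software RAID-1 mirror with mdadm. The device names (/dev/sda1, /dev/sdb1, /dev/md0) are placeholders, and --create will destroy whatever is on those partitions, so treat this as an illustration rather than something to paste onto a live server.

        #!/usr/bin/env python
        # Sketch of the basic mdadm workflow for a two-disk RAID-1 mirror.
        import subprocess

        def run(cmd):
            print("+ " + " ".join(cmd))
            subprocess.check_call(cmd)

        if __name__ == "__main__":
            # Build the mirror from two equally sized partitions (placeholders).
            run(["mdadm", "--create", "/dev/md0", "--level=1",
                 "--raid-devices=2", "/dev/sda1", "/dev/sdb1"])

            # Inspect the array; /proc/mdstat also shows resync progress and
            # whether the array is running degraded on one disk.
            run(["mdadm", "--detail", "/dev/md0"])
            with open("/proc/mdstat") as f:
                print(f.read())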
  • Don't use FOXCONN RAID with Linux!

    They are Linux-hostile. You should get rid of that motherboard and buy something better.

    Don't use Motherboard/Software RAID!

    Motherboard/software RAID isn't very reliable, and you can easily end up with two bad copies instead of one good copy. It's very hard to recover from motherboard failures (unless you have more of the same motherboard), and it can be very hard to recover from disk failures (since disks tend not to be labelled well).

    Don't even use RAID!

    RAID is very slow, and doesn't protect against the problems that you think it does. It's not a replacement for a backup system, and it makes testing backups very hard, which means that in the wrong hands, your data is less safe in a RAID setup than on a single disk.

    RAID should add a few hundred dollars to the cost of your server; it can protect you from certain kinds of physical disk defects and a (small) number of data-corruption problems. It won't protect you from:

    • Fire, Flood, or Lightning
    • Operating system errors
    • Sudden power loss
    • Stupidity

    A continuous backup system or a replicated+distributed storage system is always cheaper, and much more reliable. Depending on what you're doing, it may be harder to set up than a RAID system, but it is more obvious what you're protected against. That said, a proper RAID setup will include:

    • A standard disk layout
    • A battery-backup unit
    • Lots of on-board memory
    • Lots of cooling
    • Regular testing

    RAID systems lacking these things will silently corrupt your data and smash your hopes when you need it most: after a catastrophic failure. And even the best RAID systems won't protect against the thing that actually happens.

    Wesley 'Nonapeptide' : I was with you on points one and two, but you lost me on point three. =)
    arthurprs : I'm confused. I need a database server for an application that is as reliable as possible and cheap, mostly because it will be running in another town, and I thought RAID would keep the software running until I contact the owner to get a replacement.
    pulegium : yeah, #3 is not entirely true. RAID is not a backup, but still a very important protection mechanism. We've got some 2k servers running and disk failure is not an unusual thing. All servers have h/w RAID1, and although we need to shut them down to replace the disk, never ever have we had to reinstall the OS/app. Saves some time.
    arthurprs : @pulegium - Do you use soft raid on those?
    Zypher : You had a good post going ... until you rant against raid ... there is a reason that it stands for _REDUNDENT_ array of inexpensive|independant disks. It's for Redundency and HA, not backups.
    geocar : @Zypher: Redundant, not redundent.
    geocar : @arthurprs - RAID isn't cheap. As I said, it adds a few hundred dollars to the cost of the server. If you don't have a BBU (battery backup), you can't safely enable the disk cache, which makes it really slow (see the sketch after this answer).
    geocar : @pulegium - disk failures are rare in my network center. I burn in all disks for three days, then off for three days, then on again before using them, and I use continuous backups, so even when I lose a disk I don't lose any data. Switchover is done with heartbeat to the last clone, and machines get recycled after 5 years.
    arthurprs : @geocar - Thanks for the tip on the battery backup
    pulegium : @arthurprs nope, all h/w RAIDs there.
    pulegium : @geocar so when the disk goes you lose your server? until it's restored from backup? and that can happen anytime, like during the peak? we normally schedule server downtime for off-peak hrs... makes our customers happy :)
    geocar : @pulegium: nope. there's no "off-peak" for web hosting. heartbeat brings the last clone up automatically (well, within 30 seconds). It's also easy to check: I can simply test the clones periodically.
    Evan Anderson : I agree that motherboard RAID is junk. Software RAID isn't motherboard RAID, though. I just can't get behind the "don't use RAID" argument. Simple software RAID-1 for small installations protects against common hard disk failures. It's nice that disk failures for you are rare-- they've been common enough in my life to make RAID-1 worth it on many occasions. Implementing a "continuous backup system or a replicated+distributed storage" is much more costly than adding a 2nd hard disk drive to a server computer in every case I can think of. I think you're living in a dream world.
    geocar : @Evan Anderson - You do not know what you're talking about. To get reliability with software RAID (or otherwise without battery-backup) means disabling the disk cache, and it needs your operating system to *carefully* order writes, or you can end up losing both copies. Buying two 1000$ servers is cheaper than one 3000$ server with the much faster disks (to overcome RAID's slowness).
    Evan Anderson : @geocar: I'll take software RAID-1 with a journaled filesystem on a server w/ a UPS over JBOD and this mystical "free" clustering / failover you speak of any day. Access time on RAID-1 is just as fast as JBOD, and RAID-10 blows JBOD out of the water w/ any number of spindles, so I don't buy your "RAID's slowness" argument. You're just making up numbers (re: "1000$" and "3000$") and living in a fantasy world where clustering and failover capability doesn't cost money in licensing and TCO (not to mention complicating systems and creating additional points of failure).
    From geocar
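
    Regarding geocar's point about the disk cache and a BBU, here is a small sketch (placeholder device name, hdparm assumed to be installed) that queries and, if you choose, disables the drive's volatile write cache. Without a battery-backed controller cache, leaving the drive cache on trades safety for speed, which is the trade-off being argued about above.

        #!/usr/bin/env python
        # Query (and optionally disable) the write cache on a SATA/IDE drive
        # using hdparm. /dev/sda is a placeholder.
        import subprocess

        DISK = "/dev/sda"

        def run(cmd):
            print("+ " + " ".join(cmd))
            subprocess.check_call(cmd)

        if __name__ == "__main__":
            run(["hdparm", "-W", DISK])   # show the current write-cache setting
            # Uncomment to turn the drive write cache off (slower, but safer
            # without a battery-backed RAID cache):
            # run(["hdparm", "-W0", DISK])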
  • Just to start screwing around with RAID-1, I used Ubuntu 9.10 and it works wonderfully. I had one big problem that I'll repost here just in case you run into it; it was really killing me.

    Installing a new Ubuntu setup with RAID-1 as part of the install is the easiest way to go. If you're trying to turn an existing drive into a RAID array, it's a little harder.

    Basically you have to make the new drive a RAID drive, copy all of your old drive's contents to it, then repartition/reformat the old drive to be part of the array and tell the RAID to update it, and it will mirror over the data from the good RAID drive (see the sketch after this answer).

    And here was my big problem: you have to add the GRUB config by hand (to both drives), and what GRUB reports as hd0 and hd1 when you run grub from the command line on the running machine can differ from what GRUB reports if you drop to the GRUB command prompt at bootup.

    And it's the values it perceives at bootup that need to go into the GRUB config, not the ones you get from grub after the machine has booted.

    From Stu
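
    Stu's conversion procedure, sketched very roughly below as Python driving the usual shell tools. All device names are placeholders (old system assumed on /dev/sda1, new disk partitioned as /dev/sdb1), ext4 is just an example filesystem, the data-copy step is only indicated, and mdadm --create is destructive, so this is an outline of the steps rather than a turnkey migration script.

        #!/usr/bin/env python
        # Sketch of converting an existing single-disk install to RAID-1.
        import subprocess

        def run(cmd):
            print("+ " + " ".join(cmd))
            subprocess.check_call(cmd)

        if __name__ == "__main__":
            # 1. Build a degraded mirror on the new disk only ("missing"
            #    reserves the second slot for the old disk later).
            run(["mdadm", "--create", "/dev/md0", "--level=1",
                 "--raid-devices=2", "/dev/sdb1", "missing"])
            run(["mkfs.ext4", "/dev/md0"])

            # 2. Copy the old drive's contents onto the mirror (mount /dev/md0
            #    somewhere and rsync the old filesystem across -- omitted here).

            # 3. Once booted from the mirror, repartition the old disk and add
            #    it; mdadm then syncs the data over from the good member.
            run(["mdadm", "--manage", "/dev/md0", "--add", "/dev/sda1"])

            # 4. Install the boot loader on both disks so either one can boot
            #    alone. Stu's hd0/hd1 caveat about GRUB's device numbering at
            #    boot time still applies.
            for disk in ("/dev/sda", "/dev/sdb"):
                run(["grub-install", disk])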
  • I've had good luck with ZFS under Solaris. It's not Linux, but it's just as easy to install (and just as much hassle if your hardware isn't supported...) and tends to have fewer exploits, if you worry about such things. ZFS offers excellent performance and allows you to create an array using whatever disks are handy (all your disks don't have to be the same size or speed); see the sketch after this answer. All your standard OSS software is available (Apache, PostgreSQL, MySQL, PHP, Perl, Python, etc.), and the standard desktop is GNOME, so there isn't a long learning curve.

    From TMN
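
    A minimal sketch of the ZFS route TMN mentions: create a two-way mirror and check its health. The pool name ("tank") and the Solaris device names are placeholders; substitute whatever your own system reports for its disks.

        #!/usr/bin/env python
        # Create a two-way ZFS mirror and show its status (placeholder names).
        import subprocess

        def run(cmd):
            print("+ " + " ".join(cmd))
            subprocess.check_call(cmd)

        if __name__ == "__main__":
            run(["zpool", "create", "tank", "mirror", "c1t0d0", "c1t1d0"])
            run(["zpool", "status", "tank"])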
