Not every IT organization has enough money to buy a storage area network (SAN) appliance. Yet most companies need a SAN, because – in my opinion – it's probably the best way to avoid losing critical data when server storage crashes. You don't need to pay for a proprietary SAN appliance, though, because you can build your own SAN using open source software. In this four-part tip, you'll learn how to set up such an appliance.
Before starting to build your own SAN appliances, you should determine what exactly you need. Typically, a SAN consists of several components, of which highly available storage is the core. If your SAN consists of just one box, you are playing with fire: if that box fails, your data will be unavailable, and that will cost you.
With proprietary systems, it takes two boxes to build a SAN, and the same is true when building an open source SAN. That is where the Distributed Replicated Block Device (DRBD), a distributed storage system for Linux, comes in. Using DRBD, you can build your own RAID 1 across the network, so that if one of the servers providing storage goes down, the other takes over its role immediately. We will discuss DRBD in more detail in part two of this tip.
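To give you an idea of what this network RAID 1 looks like, here is a minimal sketch of a DRBD resource definition. The hostnames (san1, san2), the IP addresses on the storage network, and the backing device /dev/sdb are assumptions for illustration only; the real configuration is covered in part two.

    # /etc/drbd.conf - minimal sketch of one DRBD resource (details in part two)
    resource r0 {
      protocol C;                    # synchronous replication between both boxes
      on san1 {
        device    /dev/drbd0;
        disk      /dev/sdb;          # the storage device dedicated to the SAN
        address   192.168.10.1:7788; # address on the dedicated storage network
        meta-disk internal;
      }
      on san2 {
        device    /dev/drbd0;
        disk      /dev/sdb;
        address   192.168.10.2:7788;
        meta-disk internal;
      }
    }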
In a highly redundant SAN, you'll need one unique point of access. Some SAN setups work with Fibre Channel, but because that involves an expensive fiber-optic infrastructure, it is not the best option for an affordable open source SAN. Instead of Fibre Channel, we'll use iSCSI: an iSCSI target will provide access to the storage device.
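The iSCSI target configuration is the subject of part four, but as a preview, the sketch below shows how a LUN backed by the DRBD device could be exported with the iSCSI Enterprise Target that ships with SLES. The IQN is an example name only.

    # /etc/ietd.conf - sketch of an iSCSI target definition (details in part four)
    Target iqn.2008-10.com.example:opensource-san.lun0
        Lun 0 Path=/dev/drbd0,Type=fileio   # export the DRBD device as LUN 0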
One of the challenges of building such a SAN is making it as resilient as possible. That means a failing component has to be taken over automatically: if the SAN box that currently hosts the DRBD storage as well as iSCSI access goes down, the other box in the SAN needs to take over its role without manual intervention. To do this, we'll use Heartbeat -- a portable, high-availability cluster management program -- to monitor and ensure the availability of all critical resources in the SAN.
We will discuss Heartbeat in part three of this tip, and iSCSI in part four. Also, check out my Heartbeat how-to on this site.

Image 1: Schematic overview of the open source SAN
Required Software
To build such SAN appliances, you need the appropriate software. Typically, any Linux distribution will do, but in this article we'll use SUSE Linux Enterprise Server 10 SP2, which is available as a free download. One advantage of SUSE is that it is the distribution used by the Heartbeat developers, and Heartbeat plays a critical role in this setup, so you are assured of optimal integration between the operating system and the cluster software. You could use the open source version of SUSE Linux as well, but to build a stable and reliable setup, I would always recommend working with the enterprise version.
When choosing the software to install, keep it to a minimum. After all, you are building SAN appliances here, and typically you wouldn't run any other services on a SAN. From the list of installable software patterns, select the following:
- Server Base System
- High Availability
- Documentation
- GNOME Desktop Environment for Server
- X Window System
Apart from that, select the iSCSI target components as well; you don't need anything else. In fact, you could even do with less and choose not to select the GNOME environment and X Window System, but since SUSE relies heavily on YaST to configure the box, I'd recommend - at least for as long as you are still working on the installation - that you keep the graphical environment at hand. Once the installation is finished, you can still disable it. Of course, you're also free to start all graphical components from a desktop if you prefer that.
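After the initial installation, you can check from the command line whether the key packages made it onto the box, and add them if necessary. This is a sketch that assumes the SLES 10 package names iscsitarget (the iSCSI Enterprise Target) and heartbeat (from the High Availability pattern):

    # Check whether the iSCSI target and Heartbeat packages are installed
    rpm -q iscsitarget heartbeat
    # If either is missing, install it with YaST from the command line
    yast -i iscsitarget heartbeat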
In addition to the software requirements, you should also think about the hard disk layout to use when building SAN appliances like this. Whether you are setting up server hardware or two laptops as a test environment, keep in mind that a storage device is needed for the SAN storage. In figure 1, I have used /dev/sdb as an example storage device, but this assumes that you have two (or more) hard drives available. If you don't, a good option is to create an LVM setup and make a large dedicated volume available for the SAN storage.
The LVM setup also allows you to work with snapshots - more about that later in this tip. On my test setup, where I use two laptops, each with a 160 GB hard disk, I've used the following disk layout:
- /dev/sda1: a 100 MB Ext2 formatted partition that is mounted on /boot.
- /dev/sda2: the rest of all available disk space, marked as type 8e for use in an LVM environment.
- /dev/system: the LVM volume group that uses the disk space available from /dev/sda2.
- /dev/system/root: a 10 GB Ext3 formatted logical volume for use as the root of the server file system.
- /dev/system/swap: a 2 GB logical volume used as swap space.
- /dev/system/DRBD: a 100 GB logical volume that is only allocated, not formatted.
To create such a disk layout, use the integrated YaST partitioning module while performing the installation.
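If you would rather create the LVM part of this layout by hand, or add the DRBD volume after installation, the equivalent commands look roughly like the sketch below. It assumes /dev/sda2 has already been created as a partition of type 8e, as described above.

    # Initialize the partition for LVM and create the volume group
    pvcreate /dev/sda2
    vgcreate system /dev/sda2

    # Create the logical volumes from the layout above
    lvcreate -L 10G  -n root system
    lvcreate -L 2G   -n swap system
    lvcreate -L 100G -n DRBD system

    # Format root and swap; /dev/system/DRBD stays unformatted for DRBD
    mkfs.ext3 /dev/system/root
    mkswap /dev/system/swap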
Image 2: To configure the disk layout of your SAN, YaST can be of great help.

The last consideration when setting up your server is networking. I recommend putting your SAN on a network that is separated from normal user traffic. You don't want synchronization between the storage devices in the DRBD setup to be interrupted by a large file transfer initiated by an end user, so if possible, create a dedicated storage network.
Normally, you would also want to configure a separate network for Heartbeat traffic, so that a node doesn't get cast off the cluster when network traffic is temporarily elevated. In this situation, however, I prefer not to do that. If Heartbeat packets don't come through over the normal network connection, it is likely that your DRBD device has also stopped communicating. You wouldn't want Heartbeat to ignore a failure in the communications link while your SAN is in a disconnected state just because a redundant link is still replying, would you?
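Part three covers Heartbeat in detail, but the idea of using only the storage network link for cluster communication translates into a very small /etc/ha.d/ha.cf. The interface and node names below are assumptions for illustration:

    # /etc/ha.d/ha.cf - sketch: Heartbeat over the storage network only
    bcast eth1        # the dedicated storage network interface
    keepalive 2       # send a heartbeat every 2 seconds
    deadtime 30       # declare a node dead after 30 seconds of silence
    node san1
    node san2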
You now have the basis of your SAN solution available: the two servers that are going to offer the iSCSI storage services. In the next part of this article, you'll learn how to set up a DRBD shared storage device on top of this configuration.
About the author: Sander van Vugt is an author and independent technical trainer, specializing in Linux since 1994. Van Vugt is also a technical consultant for high-availability (HA) clustering and performance optimization, as well as an expert on SUSE Linux Enterprise Desktop 10 (SLED 10) administration.