IBM’s XIV

22 08 2008

I finally got a chance to learn about XIV. I was dragged into an IBM product presentation recently, so I figured I would summarize the one thing not covered by the NDA here :)

What is XIV?

Essentially, it’s a disk storage device that uses only SATA drives but gets a high number of IO/s out of them by spreading the reads and writes across all disks. Every LUN you create will be stretched across every disk in the array. Instead of using standard RAID to do this, XIV has a non-standard algorithm that accomplishes the same thing on a larger scale.

They build every system exactly the same way- each system contains a set of nodes with 12 drives each, plus their own processors and memory. It's all off-the-shelf hardware in a node- Pentium processors, regular RAM, and SATA drives. None of it is enterprise class on its own, but because of the distribution scheme they've worked out, you get the combined performance of all the drives for all your reads and writes.
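To make the idea concrete, here's a minimal sketch of the general technique- pseudo-randomly hashing each chunk of a LUN onto one of the drives so every LUN ends up spread across the whole box. This is just my own illustration (the node and drive counts are made up to match the 120-drive figure below), not XIV's actual algorithm:

```python
import hashlib

NUM_NODES = 10           # illustrative: 10 nodes of 12 drives = 120 drives
DRIVES_PER_NODE = 12
ALL_DRIVES = [(n, d) for n in range(NUM_NODES) for d in range(DRIVES_PER_NODE)]

def place_chunk(lun_id: int, chunk_no: int) -> tuple:
    """Map a (LUN, chunk) pair to a drive pseudo-randomly but deterministically.

    Because the hash spreads chunks evenly, every LUN ends up striped
    across every drive in the array, so every read and write gets the
    benefit of all the spindles at once.
    """
    key = f"{lun_id}:{chunk_no}".encode()
    digest = int.from_bytes(hashlib.sha1(key).digest()[:8], "big")
    return ALL_DRIVES[digest % len(ALL_DRIVES)]

# A single 1000-chunk LUN touches (nearly) every drive in the box:
drives_used = {place_chunk(lun_id=7, chunk_no=c) for c in range(1000)}
print(f"chunks landed on {len(drives_used)} of {len(ALL_DRIVES)} drives")
```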

Scaling is done by hooking new systems to existing ones through the 10Gb switch interlink ports. They say that as newer interconnect technology becomes available, this will follow along (so eventually they will support InfiniBand). Also, when you add a system to the cluster, the rebalancing of data is automatic.

How is this different?

The big change here is in the way they put data on their disks. They’ve re-invented the wheel a bit, but for a reason. The performance you can get out of low cost low end drives in parallel is very good. Normally, I would never tell people that SATA is appropriate for databases or email, but XIV claims to be fast enough. I imagine we’ll see some benchmarks soon.

The first thing I asked about was parity space. XIV puts parity info over the whole array, so with 120 1TB drives, you get 80TB addressable space. Also, because rebuilding a 1TB drive from parity is normally a really intensive operation that generates many reads across the RAID, I asked about how they handle rebuilds. They claim that they can rebuild a 1TB drive from parity in about half an hour because all the parity data is being read from all the other heads simultaneously.
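Some back-of-the-envelope arithmetic (my own, based only on the figures above) shows both the capacity overhead and why a distributed rebuild can be that fast:

```python
# Rough rebuild math using the figures quoted above: 120 x 1TB drives,
# 80TB addressable, and a claimed 1TB rebuild in ~30 minutes.
drives = 120
drive_tb = 1.0
usable_tb = 80.0

overhead = 1 - usable_tb / (drives * drive_tb)
print(f"capacity overhead for protection: {overhead:.0%}")      # ~33%

rebuild_minutes = 30
rebuild_mb_s = (drive_tb * 1_000_000) / (rebuild_minutes * 60)
print(f"aggregate rebuild rate needed: ~{rebuild_mb_s:.0f} MB/s")

# Spread over the ~119 surviving drives, each drive only has to deliver
# a few MB/s of its data - easy even for SATA, which is the whole point.
per_drive = rebuild_mb_s / (drives - 1)
print(f"per-drive share of the rebuild: ~{per_drive:.1f} MB/s")
```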

This sounds good, but I wonder whether a failure and rebuild will slow down your entire production environment instead of only the RAID where the drive failed. Also, in the event of an entire 12-drive node failing, would that mean a 6-hour rebuild that affects the whole production array? If they have some way of prioritizing production IO, then I am satisfied. I don't know whether they do, though.

Snapshots

Normal "copy on write" snapshots create extra write traffic- every write to a snapshotted volume triggers an additional copy that must be committed to disk before the acknowledgment is sent to the host. XIV uses a snapshot algorithm called "redirect on write" to avoid this problem and allow larger numbers of readable/writable snapshots.

They create a snapshot LUN that initially points to the real data, and when a change is made to the source, they write the new data to unused space and point the production LUN there while leaving the snapshot pointed at the old data. NetApp used a different algorithm to solve the same problems inherent in traditional "copy on write" snapshots, and that helped launch them into success in the enterprise storage market years ago.
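Here's a minimal toy model of redirect-on-write- not XIV's or NetApp's implementation, just an illustration of why a snapshotted write costs only one write: the new data goes to free space and the live volume's pointer moves, while the snapshot keeps pointing at the old block.

```python
class RedirectOnWriteVolume:
    """Toy model of redirect-on-write snapshots.

    blocks[] is the pool of physical blocks; the live volume and each
    snapshot are just tables mapping logical block -> physical block index.
    """

    def __init__(self, size):
        self.blocks = [None] * size      # physical pool (grows as needed)
        self.map = list(range(size))     # live volume's logical->physical map
        self.snapshots = []

    def snapshot(self):
        # A snapshot is only a copy of the mapping table - no data is copied.
        self.snapshots.append(list(self.map))

    def write(self, logical, data):
        # One write: put the new data in a fresh physical block and repoint
        # the live volume. Snapshots keep pointing at the old block, so no
        # extra copy-out write is needed (unlike copy-on-write).
        self.blocks.append(data)
        self.map[logical] = len(self.blocks) - 1

    def read(self, logical, snap=None):
        table = self.snapshots[snap] if snap is not None else self.map
        return self.blocks[table[logical]]


vol = RedirectOnWriteVolume(size=4)
vol.write(0, "v1")
vol.snapshot()                            # snapshot 0 still sees "v1"
vol.write(0, "v2")                        # a single write, no copy of the old block
print(vol.read(0), vol.read(0, snap=0))   # prints: v2 v1
```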

Other advanced features

The box is delivered with all functionality enabled, which is an interesting move considering that every other vendor I've dealt with makes most of their money from software. They include mirroring, thin provisioning, and a one-time migration flavor of virtualization that sits between the hosts and your old storage, reading all the data off it while continuing to pass the IO through transparently.

Questions

If someone from XIV (or more likely IBM) is reading this, I want to know more details about your mirroring and your workload prioritization:

  • Do you support synchronous, asynchronous, and asynchronous with consistency group mirroring? What about one to one, one to many, and many to one configurations?
  • Do you have a way to prevent disk rebuilds from taking disk resources that are needed by production apps?




Oracle RAC ASM on a JBOD- Mike’s question

21 07 2008

I had Oracle RAC with ASM running in a RAW configuration on dual 32 bit servers running RedHat 4. I upgraded to dual qla2200 HBAs on each server and they connect through a Brocade 2250 switch to two JBOD disk arrays, a NexStor 18f and a NexStor 8f. I have set up multipath in the multibus configuration and can see the drives as multipath devices in an active-active configuration. I am using OCFS2 for my CRS voting and config disks and that runs fine. When I try to start ASM I can get proper connectivity on the first server that starts up; the second server hangs until it eventually errors out with an access issue.

I have verified all required permissions are set and can see the disks on both sides using the oracleasm utility. It appears DM is only allowing a single host to access the ASM disks at one time, so when node A starts up and acquires the ASM disks as its ASM instance starts, node B is left hung, and vice versa if node B starts first.

I was told it may be a SCSI reservation issue, but I can't seem to find any information on this. I know people are using this type of configuration with RAID controllers, but is the JBOD causing issues? How can I get both instances seeing the ASM disks?

Thanks!

Hi Mike! To preface my answer, I’ll start by saying that my Oracle knowledge was purely acquired through osmosis, and that I’m primarily a platform guy. I’ve never sat in front of an Oracle server and done anything, but I do understand a fair bit about how they interact with storage :)

First, when you say “I upgraded to dual qla2200 HBAs on each server”, do you mean that you had a working system using ASM and RAC before you changed the HBA hardware? If so, without even going into the rest of the story, I would start by checking Oracle’s and Nexstor’s support for that card and seeing if they have any known issues with the firmware level you’re using.

Second, it really sounds like your main issue is a multipath one. A “SCSI reservation issue” is another way of saying that a server is locking the devices to itself, which is exactly what multipathing software is supposed to fix. There are several places that it could break down: the application, the OS, the hardware, or the firmware. The only way to see which level your problem comes from is to try to eliminate them by swapping them out. I’d start with Oracle- ASM is supposed to really get down and directly control the disks as raw devices, so they might have a compatibility matrix that contains the whole stack. Maybe it’s as simple as Oracle not supporting your firmware…

If not, you'll have to do some troubleshooting and vendor support calls until you find out where in your config the error is. I am fairly certain that Oracle can fix this for you though.
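One quick sanity check before opening tickets: take Oracle out of the picture entirely and confirm that both nodes can read the same multipath device at the same time. Here's a rough sketch (the device path is a placeholder- substitute your own multipath device, and run it as a user with read access to it). If the second node's plain reads also hang, the lock is happening below Oracle- in multipath, the driver, or a reservation held on the JBOD.

```python
#!/usr/bin/env python
"""Run this on both RAC nodes at the same time.

If node B's reads hang or fail while node A has the device open, the
lock/reservation is happening below Oracle (multipath, HBA driver, or
the JBOD itself). /dev/mapper/asmdisk1 is a made-up name - substitute
your own multipath device.
"""
import os
import time

DEVICE = "/dev/mapper/asmdisk1"   # hypothetical device path

fd = os.open(DEVICE, os.O_RDONLY)
try:
    for i in range(30):
        os.lseek(fd, 0, os.SEEK_SET)
        block = os.read(fd, 4096)          # re-read the first 4K once a second
        print(f"read {len(block)} bytes OK ({i})")
        time.sleep(1)
finally:
    os.close(fd)
```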





Linux sharing a JBOD- Paul's question

23 05 2008

“Question about multiple servers accessing same disks through a SAN switch

I’m trying to set up a Linux system for server failover where two servers (with SAS HBA) are accessing the same set of disks (jbod) through a SAN switch. First – will this work? If so, what software do I need to run on the servers to keep the two servers from stepping on each other? Do I need multipath support?”

It depends. A JBOD does not do any RAID management- it leaves that to the servers. If you have two servers trying to operate on the same disks, they have to be running a cluster-aware operating system or clustered file system to avoid overwriting each other's data. Linux can probably do that, and I know Windows has a cluster edition, but it's certainly not the simplest way to get servers sharing data.
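To make the "stepping on each other" problem concrete, here's a toy illustration (a local file stands in for the shared LUN, and the counts are my own) of what happens when two uncoordinated writers do read-modify-write cycles against the same storage:

```python
"""Toy demo of why two servers can't share raw disks without cluster-aware
locking: uncoordinated read-modify-write cycles lose updates."""
import multiprocessing
import os
import tempfile

def server(path):
    # Each "server" bumps a shared counter 2000 times with no locking at
    # all - the same pattern two hosts writing to a plain JBOD would use.
    for _ in range(2000):
        with open(path, "r+") as f:
            n = int(f.read())
            f.seek(0)
            f.write(str(n + 1).zfill(8))

if __name__ == "__main__":
    fd, path = tempfile.mkstemp()          # local file standing in for the shared LUN
    os.write(fd, b"00000000")
    os.close(fd)

    procs = [multiprocessing.Process(target=server, args=(path,)) for _ in range(2)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()

    # With proper cluster locking this would always print 4000; without it,
    # one "server" routinely overwrites the other's updates and the total is lower.
    print("final counter:", int(open(path).read()))
    os.unlink(path)
```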

This brings me to my first question: what are you trying to do? Do you want them to run the same application so if one fails, the other will pick up the slack without losing anything? Or are you trying to process the same data twice as fast by using two server “heads”?

Secondly, when you say "SAN switch", do you mean fibre channel switches? If you have SAS HBAs, those cannot plug into an FC network.

Thirdly, multipath support means allowing a server to see a single LUN through more than one path (for example, through two fabrics). If you have more than one path between each server and the drives, multipathing software would indeed be recommended.





Barry’s question

7 01 2008

Via email:

"When you are thinking about Disaster Recovery, CDP, do you assume that Tier3 is adequate, mainly because this is backup only, or maybe DR so hopefully not needed? How does your thinking proceed? Do you think about your primary data at the same time?

I ask this as a loaded question, knowing that anything that has to copy to, snap to, or mirror with, secondary, backup or DR or CDP storage now has a definite tie with the primary.

Barry Whyte

SVC Performance Architect
IBM Systems & Technology Group”

This is a loaded question! To start with, I’ll note some assumptions and concept clarifications to ensure we’re talking about the same thing- if I’m off on anything, let me know ;)

  • CDP: continuous data protection, an IBM backup software algorithm- small changes sent to a central server continuously
  • Tier 3: low price random access storage media- not tape, usually cheap SATA drives
    • Note: there’s been discussion about these tier definitions before, and I hold that tier 3 means different things to different companies.

To your question- I would have to decide based on the company's current architecture. If they have a storage solution that does synchronous mirroring between two sites, then using low-performance drives on either side will slow production. If they're doing asynchronous replication (or a server-based rather than storage-based DR solution), I would probably be fine with SATA/tier 3.

To explain my reasoning, I must first say that I cannot decide without a specific case and an IT person to question- my advice would be based on risk tolerance versus capital expenditure tolerance. Secondly, SATA has an undeserved bad rap- the drives are about as reliable as other enterprise ones (according to Google's drive study). SATA drives are certainly not fast for random access loads, but for sequential and low-urgency loads like backups, they will do the job.

Low performance media will always be part of a healthy storage balance- the most bang for most companies’ bucks will be in prioritizing their applications (or even their data), and using the media that makes the most sense. Need an Oracle server to stop freezing up your warehouse management app? Put that baby on 15,000 RPM FC hard drives- lots of them. Need to keep a backup copy of a file server on site in case of a server outage? SATA will do the job. Need to keep nightly point in time backups of your entire storage infrastructure for years? You probably can’t afford to put that on drives at all- use tape.
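If it helps, here's the same rule of thumb written out as a toy lookup table (my own labels, nothing vendor-specific):

```python
# A compact way to capture the workload-to-media matching above.
TIER_GUIDE = {
    # (access pattern, urgency)           -> suggested media
    ("random", "latency critical"):          "lots of 15,000 RPM FC spindles",
    ("sequential", "onsite backup"):         "SATA",
    ("sequential", "long-term archive"):     "tape",
}

def suggest_media(access_pattern: str, urgency: str) -> str:
    return TIER_GUIDE.get((access_pattern, urgency), "profile the workload first")

print(suggest_media("random", "latency critical"))       # the Oracle warehouse app
print(suggest_media("sequential", "onsite backup"))      # the file server copy
print(suggest_media("sequential", "long-term archive"))  # years of nightly backups
```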

That said, most companies whose yearly storage spending hasn't reached a boiling point won't bother to do much of this. Face it, tiering your applications for storage takes operator time, and gear just seems to feel cheaper to management than IT man-hours. That, plus the explosive growth of media density in the last 5 years, has kept tiered storage adoption limited to the ridiculously large data producers who have no other choice (like large banks) and to the more forward-thinking smaller shops.





Gene’s question

21 11 2007

Gene writes:

question about SAN interoperability

…2 windows 2003 server sp2 servers, running on HP proliant dl380 g4 each with one single port fiber hba’s. Servers will be clustered to run sql 2005. HBA’s are hp branded- emulex fc2143’s (Emulex id- is lp1150)…SUN SAN has both 6130’s disk array and 3510 array…we want to use disk from both arrays…(better disks in 6130, slower stuff in 3510)

Do we actually need multipath drivers? (SUN has come out with DSMs for both these arrays)…any issue using multiple DSMs if they are required?

Any known issues with the type of device drivers for the HBAs? Storport versus scsiport…

any help is appreciated

In general, if you only have one FC port per server, you don’t need a multipath driver. I am not sure if this holds true with multiple subsystems that aren’t under some sort of virtualization umbrella though… you might need a device driver that understands how to work with multiple subsystems. This would not be a multipath driver though- those are for multiple paths to the same LUN.

Regarding scsiport versus storport, I found an excellent whitepaper detailing the differences here. The way I read this is that these layers of the storage stack replace the proprietary device and multipath drivers provided by Sun- if they support it, then you should take storport, the more recent version. Unfortunately, I can’t give you very specific caveats with this technology because every system I’ve worked on used the vendor’s device and multi-path drivers, or a virtualization head to combine multiple physical subsystems into a logical one.

The disk subsystems have been withdrawn from Sun's marketing- have you asked your Sun contact whether they'll support the setup you're considering?





Defining tiers for storage

17 08 2007

There's a good series going on over at the Storage Anarchist's page about defining storage tiers- if you're trying to get some insight into better organizing your own data, it's worth following. Here's the link to the first of four entries.





Storage and fabric virtualization

7 08 2007

Aloha Open Systems Storage Guy,

What’s your take on virtualization? VSAN from Cisco, SVC from IBM? What other virtualization products are available from other vendors?

Thanks,
John

Cisco VSANs and IBM’s SVC are different things for certain :)

The VSAN allows you to create multiple logical fabrics within the same switch- you tell it what ports are part of what SAN, and you can manage the fabrics individually. It’s especially useful if you’re bridging two locations’ fabrics together for replication or something because it allows you to do “inter VSAN routing” if you have the right enterprise software feature. That would allow you to have two separate fabrics whose devices can see each other, but if the link between the sites fails (which is more likely than a switch failure), you won’t have the management nightmare of having to rebuild the original fabric out of two separated fabrics when the link comes back. VSANs are also commonly used to isolate groups of devices for the purpose of keeping those devices logically separated from parts of the network they’ll never need to interact with.
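A rough mental model (my own sketch, nothing to do with Cisco's actual implementation): a VSAN is just a named partition of the switch's ports, and two devices can only talk if their ports share a partition or an inter-VSAN route has been explicitly configured.

```python
# Toy model of VSANs as port partitions on one physical switch.
vsan_membership = {
    "fc1/1": 10, "fc1/2": 10,      # production fabric
    "fc1/3": 20, "fc1/4": 20,      # replication fabric for the remote site
}
ivr_routes = {(10, 20)}            # "inter VSAN routing" explicitly allowed

def can_talk(port_a: str, port_b: str) -> bool:
    a, b = vsan_membership[port_a], vsan_membership[port_b]
    return a == b or (a, b) in ivr_routes or (b, a) in ivr_routes

print(can_talk("fc1/1", "fc1/2"))   # True  - same VSAN
print(can_talk("fc1/1", "fc1/3"))   # True  - only because an IVR route exists
```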

IBM's SVC is a different technology that is supposed to consolidate multiple islands of FC storage. It's essentially a Linux server cluster that you place between your application servers and the storage. It allows you to take all the storage behind it and create what they call "virtual disks"- essentially a LUN that's passed to a server but is built from multiple RAID arrays (possibly from multiple controllers). This gives you the option of striping your data across more spindles than you normally could, and allows you to do dynamic thin provisioning when your datasets grow.
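The underlying idea is extent-level striping: the virtual disk is assembled from fixed-size extents taken round-robin from the managed back-end arrays. Here's a minimal sketch of that concept (my own illustration with made-up array names, not SVC code):

```python
# Toy sketch of building a "virtual disk" out of extents taken round-robin
# from several back-end RAID arrays (possibly on different controllers).
EXTENT_MB = 256

class BackendRaid:
    def __init__(self, name, size_mb):
        self.name = name
        self.free_extents = list(range(size_mb // EXTENT_MB))

def create_virtual_disk(size_mb, backends):
    """Allocate extents round-robin so the virtual disk ends up striped
    across every back-end array - more spindles behind every LUN."""
    needed = -(-size_mb // EXTENT_MB)             # ceiling division
    mapping = []                                  # vdisk extent -> (array, extent)
    i = 0
    while len(mapping) < needed:
        if not any(b.free_extents for b in backends):
            raise RuntimeError("out of back-end capacity")
        raid = backends[i % len(backends)]
        if raid.free_extents:
            mapping.append((raid.name, raid.free_extents.pop(0)))
        i += 1
    return mapping

# Made-up array names, purely for illustration:
backends = [BackendRaid("ds4300_array1", 4096), BackendRaid("cx300_array1", 4096)]
vdisk = create_virtual_disk(size_mb=2048, backends=backends)
print(vdisk)        # extents alternate between the two back-end arrays
```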

The only downside of the Cisco VSAN technology I can think of is its cost- it's bloody expensive compared to a cheap low-end solution, and for anything less than a 50-device FC fabric, I would question whether it's worth it. There is an alternative from Brocade/McData called LSAN, but I am not as familiar with it. I have been told that it's slightly less complicated, but harder to manage, and that it doesn't have Cisco's full feature set.

The downside to the IBM SVC is that you add latency to all your disk IO- every read and write has to go through the Linux cluster first. It has a much larger cache than most controllers, so there's a better chance that the data you're looking for is already there, but if it's not, your read performance might suffer a little because of the extra few milliseconds. The advantage is that you can now use incredibly cheap controllers with tiny amounts of cache, and it allows you to migrate data from any manufacturer's device to any other manufacturer's device without interrupting your servers. Under a virtualized environment like this, an older DS4300 like you have will perform pretty much on the same level as a more expensive DS4800 or EMC CX3-80 (assuming the same number of drives) because you don't really use the cache of the underlying system. Another advantage of the SVC is that most FC storage controllers charge you either one time or over time for the number of servers you're planning to connect to them. IBM charges a "partition license" fee for LUN masking, and EMC charges a "multipath maintenance" tax. Either way, the multipath drivers for SVC are free, and it only needs one partition from the controller, so you might be able to save money that way.

Did you have any specific questions about these topics you want more detail on?

Also, one of the new bloggers in the storage world- Barry Whyte- focuses on IBM SVC. He just started, but his blog will hopefully become a real resource for people with IBM storage virtualization on their mind.





Another question from John- multipath drivers

2 08 2007

Aloha Open Systems Guy, can you take another question from me? I've got some questions about OS drivers for disk subsystems…What's up with all the RDAC, MPIO/DSM, and SDD? I'll try and keep things consistent by limiting my question to one OS (Windows Server 2003).

I’ve heard talk about SDD being superior for the ESS / DS8000 line of storage. It’s apparently not even available in an active/passive array. However, I’ve got a mid-range disk subsystem from IBM, the DS4300 Turbo model.

Until tonight I thought there was only a single choice of multi-pathing driver for me, RDAC. However, when I went about installing my first Windows OS to be SAN-connected I ran into all kinds of new information like SCSIport and STORport and now MPIO / DSM.

Can you help de-mystify this enigma for me?

Mahalo nui loa,
John

Certainly! Always happy to get more questions. I'm a chronic sufferer of writer's block, so your questions help by providing material ;)

Each vendor dictates the support they provide for multi-path drivers, and going outside these constraints is possible, but will usually void the warranty. My experience with IBM is that they usually support something out of the box if it works, or in special cases if it can be made to work. Since they only support RDAC with the DS4000 series, I’ll bet that nothing else would work. Whether through design or technical limitation, I do not know, but I suggest that you stick with the driver they recommend.

The only limitation to RDAC is that it does not dynamically load balance- however in terms of failover protection, it’s bullet-proof.

edited to add: The other drivers you mention are supported on other IBM systems, by the way.





John- hardware vs. software RAID, RAID 5 or 10?

23 07 2007

“Aloha Open Systems Storage Guy,

I’m a recent convert to storage administration. I’m having a hard time cutting through the cruft to find the truth. Could you answer some of these questions?

1 – Which is faster, software-based RAID (e.g. Linux md, Windows Dynamic Disks) or hardware-based RAID? One person said that software-based RAID is faster because it has a faster processor and more RAM/cache (something like a Xeon 3.0 GHz w/ 4GB of RAM would be typical in my environment). But how could that stack up against my (little bit old) IBM DS4300 Turbo (2GB cache)?

2 – Which is faster, RAID-5 or RAID-10 (or is that RAID-01?) I know everybody says RAID-10, but what about those fancy XOR engines? Or have I fallen prey to marketing?

Thanks for taking a moment to listen to my questions.
Mahalo (Thank you),
John”

Hi John, and welcome to the blog!

To answer your questions, I’m first going to give a bit of background info. If any of my statements don’t make sense, please reply and I’ll answer :).

The term “faster” can mean different things to different people. Each type of storage has its strengths and weaknesses, and different applications perform differently on the same storage systems. There are two primary application workloads- those that do random IOs, and those that do sequential IOs.

The random workloads are the hardest ones to provide storage for because it’s very difficult to “read ahead” by predicting where the next read will fall. An example of an application that has a random workload would be a database or email server.

The sequential workloads are easier to provide storage for- pre-fetching the next block means that, most of the time, the next read is already in cache. An example of an application like this would be a backup server or certain file servers.

Another general bit of info is that in a RAID, reads (not writes) are usually the bottleneck. Writes are usually fed into the cache and acknowledged to the host server immediately. Reads, however, are typically 70% of the IO being done by a system, and as we discussed are often impossible to “pre-cache”.

When you're calculating performance, the two stats you'll want to know are IOs per second for random loads and MB per second for sequential loads (abbreviated IOPS and MBPS). When you're trying to tune a system to be quick for your applications, you need to know the different levels of your system and which one is the bottleneck. Normally, on a decent controller, the number of spindles you have in the RAID will determine the IOPS- you should get a linear increase in performance as you add drives to a RAID. Cache is important for the 30% of writes you can expect (your mileage may vary), however everything goes to disk eventually, and most people experiencing slow performance on their disk controllers simply don't have enough disks.
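To make that concrete, here's the kind of back-of-the-envelope spindle sizing I mean. The per-drive IOPS figures are rough rules of thumb I use, not benchmarks:

```python
# Rough spindle-count sizing: random IOPS scale roughly linearly with drive count.
PER_DRIVE_IOPS = {"15k_fc": 180, "10k_fc": 140, "sata_7200": 80}   # rules of thumb

def drives_needed(target_iops: int, drive_type: str) -> int:
    per_drive = PER_DRIVE_IOPS[drive_type]
    return -(-target_iops // per_drive)        # ceiling division

# A random-IO database that needs roughly 5000 IOPS:
for dtype in PER_DRIVE_IOPS:
    print(f"{dtype}: ~{drives_needed(5000, dtype)} drives")
```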

Onto the specifics of your question:

1- Software or Hardware RAID: For most workloads, a dedicated hardware RAID controller is faster. Software RAIDs have to share resources with the operating system, which is usually not optimized for sharing on that level. The IBM DS4300 you have is actually an LSI box, and has a very powerful RAID controller for its price. Don’t let your sales rep try to replace your controller! Those boxes may be a little old, but the only major difference between that and the newer IBMs is that the newer ones use 4 gig fiber and more cache. It’s very rare that a workload can max out 2 gig fiber on the front end, and even more rare that the controller can fully utilize all the bandwidth on the disk side. The extra cache can be useful, but you will experience diminishing returns- the benefit of going from 2 to 4 GB is way less than from 1 to 2 GB. The controller should not be your bottleneck for anything under 80 FC drives on the system you have, so unless you want to go beyond that, keep your box until the maintenance costs more than the replacement. Add more drives if you need IOPS or MBPS, but don’t throw it out. These boxes are supposed to be like houses- only buy a bigger one when you need it. Not because the last one is obsolete.

2- RAID 5 or RAID 10: I will compare them on reliability and performance. RAID 5 uses the space of one disk for parity, and RAID 10 uses the space of half the disks. Reliability-wise, RAID 10 is the obvious winner. You can lose up to half your disks before you lose data (assuming you don't lose both drives of the same mirrored pair). If you lose a second drive while rebuilding a critical RAID 5 array, you will always have to go back to your last backup. Generally, this is more of a worry for large SATA drives than it is for the smaller and faster FC drives- SATA RAIDs take far longer to rebuild because of the larger amount of data combined with the lower performance per spindle.

Speaking of performance, the per-drive performance is better on RAID 5. Most people put two RAID 5s on each enclosure, and have 4 to 6 RAIDs per hot spare. The XOR engine you speak of performs the parity calculations for RAID 5; it is not needed for RAID 10 or any other non-parity type of RAID. Since you do have a fairly fast controller, RAID 5 is attractive, but you'll have to balance performance against reliability when you make the call.
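The capacity side of the trade-off is easy to put numbers on (plain arithmetic, nothing vendor-specific):

```python
def usable_capacity(num_drives, drive_gb, raid_level):
    """Usable space for a single array: RAID 5 loses one drive's worth to
    parity, RAID 10 loses half the drives to mirroring."""
    if raid_level == 5:
        return (num_drives - 1) * drive_gb
    if raid_level == 10:
        return (num_drives // 2) * drive_gb
    raise ValueError("only RAID 5 and RAID 10 are modeled here")

# Example: 12 x 300GB FC drives in one array
print("RAID 5 :", usable_capacity(12, 300, 5), "GB")    # 3300 GB
print("RAID 10:", usable_capacity(12, 300, 10), "GB")   # 1800 GB
```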





Inaugural post

17 07 2007

Welcome to the newest storage blog in the blogosphere. Storage technology can be complex, and this is the place to come and ask questions to reduce that complexity. I can answer most architecture and design questions (like what is the difference between iSCSI and fibre channel?), and I can find the answers to most usage and best practices questions (like how can I script the CLI to take a snapshot?).

I am also looking for a co-writer who would be able to help answer the more technical questions- any takers? Please email me.

I am happy to join the community of excellent bloggers who write about this technology- I will add them to my blogroll as I find them.







