IBM’s XIV

22 08 2008

I finally got a chance to learn about XIV. I was dragged into an IBM product presentation recently, so I figured I would summarize the one thing not covered by the NDA here :)

What is XIV?

Essentially, it’s a disk storage device that uses only SATA drives but gets a high number of IO/s out of them by spreading the reads and writes across all disks. Every LUN you create will be stretched across every disk in the array. Instead of using standard RAID to do this, XIV has a non-standard algorithm that accomplishes the same thing on a larger scale.

They build every system exactly the same way- each system contains a set of nodes with 12 drives each, plus their own processors and memory. It’s all off-the-shelf hardware in a node- Pentium processors, regular RAM, and SATA drives. None of it is enterprise class on its own, but because of the distribution scheme they’ve worked out, you get the combined performance of all the drives for all your reads and writes.
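
To make the distribution idea concrete, here’s a minimal Python sketch of wide striping: every chunk of every LUN gets hashed onto one of the drives, so any LUN’s IO ends up spread over the whole array. This is only an illustration of the concept- XIV’s actual algorithm isn’t public, and the drive count and chunk size below are my own assumptions.

    import hashlib

    NUM_DRIVES = 120          # hypothetical array size
    CHUNK_SIZE = 1 * 2**20    # assume 1MB distribution chunks

    def drive_for_chunk(lun_id, chunk_index):
        """Pick a drive for a chunk by hashing (LUN, chunk index) to a drive number."""
        key = f"{lun_id}:{chunk_index}".encode()
        digest = hashlib.sha256(key).digest()
        return int.from_bytes(digest[:4], "big") % NUM_DRIVES

    def drives_touched(lun_id, lun_size_bytes):
        """Which drives hold at least one chunk of this LUN?"""
        chunks = lun_size_bytes // CHUNK_SIZE
        return {drive_for_chunk(lun_id, i) for i in range(chunks)}

    # Even a small 10GB LUN ends up spread over essentially every drive in the box.
    print(len(drives_touched("lun0", 10 * 2**30)), "of", NUM_DRIVES, "drives used")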

Scaling is done by hooking new systems to existing ones through the 10GB switch interlink ports. They say that as newer interconnect technology becomes available, this will follow along (so eventually they will support InfiniBand). Also, when you add a system to the cluster, the rebalancing of data is automatic.

How is this different?

The big change here is in the way they put data on their disks. They’ve re-invented the wheel a bit, but for a reason: the performance you can get out of low-cost, low-end drives in parallel is very good. Normally I would never tell people that SATA is appropriate for databases or email, but XIV claims to be fast enough. I imagine we’ll see some benchmarks soon.

The first thing I asked about was parity space. XIV spreads parity information over the whole array, so with 120 1TB drives you get 80TB of addressable space. Also, because rebuilding a 1TB drive from parity is normally a very intensive operation that generates many reads across the RAID, I asked how they handle rebuilds. They claim they can rebuild a 1TB drive from parity in about half an hour, because the parity data is read from all the other spindles simultaneously.
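
A quick back-of-the-envelope calculation shows why a distributed rebuild can plausibly be that fast. The per-drive rebuild rate below is my own assumption, not an XIV figure- the point is only that each surviving drive has to supply a small slice of the lost terabyte, and they all work in parallel.

    # Illustrative only: rough arithmetic for a distributed rebuild.
    drive_capacity_gb = 1000      # the failed 1TB drive
    surviving_drives = 119        # the rest of a 120-drive array participates
    per_drive_rebuild_mbps = 20   # assumed modest background rate per SATA drive

    # Each surviving drive only supplies its small share of the lost data...
    share_per_drive_gb = drive_capacity_gb / surviving_drives

    # ...and all drives work in parallel, so the rebuild takes roughly the time
    # for one drive to stream its share (ignoring overhead and prioritization).
    rebuild_minutes = (share_per_drive_gb * 1024 / per_drive_rebuild_mbps) / 60

    print(f"~{share_per_drive_gb:.1f} GB per surviving drive, "
          f"roughly {rebuild_minutes:.0f} minutes at that rate")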

This sounds good, but I wonder if a failure and rebuild will slow down your entire production environment instead of only the RAID where the drive failed. Also, in the event of an entire node failing with 12 drives, would that mean a six-hour rebuild that affects the whole production array? If they have some way of prioritizing production IO, then I am satisfied. I don’t know if they do, though.

Snapshots

Normal “copy on write” snapshots create extra write traffic- every write to a snapshotted block triggers an additional copy of the old data that must be committed to disk before the acknowledgment is sent to the host. XIV uses a snapshot algorithm called “redirect on write” to avoid this problem and allow larger numbers of readable/writable snapshots.

They create a snapshot LUN that initially points to the real data; when a change is made to the source, they write the new data to unused space and point the production LUN there, while leaving the snapshot pointed at the old data. NetApp solved the same problems inherent in traditional “copy on write” snapshots with a different algorithm, and it helped launch them into success in the enterprise storage market years ago.
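
Here’s a small Python sketch of that redirect-on-write idea, reduced to dictionaries- a volume is just a map from logical blocks to physical locations, and a snapshot is a frozen copy of that map. This is purely conceptual; the class and names are mine, not anything from XIV.

    class RedirectOnWriteVolume:
        """Toy model: a volume is a map from logical block to physical location."""

        def __init__(self):
            self.store = {}       # physical location -> data ("the disks")
            self.next_loc = 0     # next unused physical location
            self.live_map = {}    # production LUN: logical block -> location
            self.snapshots = []   # each snapshot is a frozen copy of a map

        def write(self, lbn, data):
            # Redirect on write: new data lands in unused space and only the
            # production map is repointed. The old data stays where it was, so
            # any snapshot pointing at it is untouched- no extra copy, no extra write.
            loc = self.next_loc
            self.next_loc += 1
            self.store[loc] = data
            self.live_map[lbn] = loc

        def read(self, lbn, snapshot=None):
            mapping = snapshot if snapshot is not None else self.live_map
            return self.store[mapping[lbn]]

        def take_snapshot(self):
            snap = dict(self.live_map)   # copies pointers only, no data is moved
            self.snapshots.append(snap)
            return snap

    vol = RedirectOnWriteVolume()
    vol.write(0, "old data")
    snap = vol.take_snapshot()
    vol.write(0, "new data")                     # one write, no copy of the old block
    print(vol.read(0), "/", vol.read(0, snap))   # -> new data / old data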

Other advanced features

The box is delivered with all functionality enabled, which is an interesting move considering every other vendor I’ve dealt with makes most of their money from software. They include mirroring, thin provisioning, and an unusual one-time migration style of virtualization that sits between the hosts and your old storage, passing the IO through transparently while it reads all the data off the old array.
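
As I understand it, that migration feature works something like the following sketch: a layer in the data path answers host IO immediately, serving blocks from the new array once they have been copied and from the old array until then, while a background task drains the old array. The class and method names here are mine, purely for illustration.

    class MigrationProxy:
        def __init__(self, old_storage, new_storage):
            self.old = old_storage      # block -> data on the legacy array
            self.new = new_storage      # block -> data on the new array
            self.migrated = set()       # blocks already copied across

        def read(self, block):
            # Hosts keep reading through the proxy transparently.
            return self.new[block] if block in self.migrated else self.old[block]

        def write(self, block, data):
            # New writes land on the new array only; that block is now migrated.
            self.new[block] = data
            self.migrated.add(block)

        def migrate_one(self, block):
            # Background task: copy one not-yet-migrated block across.
            if block not in self.migrated:
                self.new[block] = self.old[block]
                self.migrated.add(block)

    proxy = MigrationProxy({0: "a", 1: "b"}, {})
    print(proxy.read(0))      # served from the old array
    proxy.migrate_one(0)
    print(proxy.read(0))      # now served from the new array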

Questions

If someone from XIV (or more likely IBM) is reading this, I want to know more details about your mirroring and your workload prioritization:

  • Do you support synchronous, asynchronous, and asynchronous with consistency group mirroring? What about one to one, one to many, and many to one configurations?
  • Do you have a way to prevent disk rebuilds from taking disk resources that are needed by production apps?




Barry’s question

7 01 2008

Via email:

“When you are thinking about Disaster Recovery, CDP, do you assume that Tier 3 is adequate, mainly because this is backup only, or maybe DR so hopefully not needed? How does your thinking proceed? Do you think about your primary data at the same time?

I ask this as a loaded question, knowing that anything that has to copy to, snap to, or mirror with, secondary, backup or DR or CDP storage now has a definite tie with the primary.

Barry Whyte

SVC Performance Architect
IBM Systems & Technology Group”

This is a loaded question! To start with, I’ll note some assumptions and concept clarifications to ensure we’re talking about the same thing- if I’m off on anything, let me know ;)

  • CDP: continuous data protection, an IBM backup software algorithm- small changes sent to a central server continuously
  • Tier 3: low price random access storage media- not tape, usually cheap SATA drives
    • Note: there’s been discussion about these tier definitions before, and I hold that tier 3 means different things to different companies.

To your question- I would have to decide based on the company’s current architecture. If they have a storage solution with synchronous mirroring between two sites, then using low-performance drives on either side will slow down production. If they’re doing asynchronous replication (or a server-based rather than storage-based DR solution), I would probably be fine with SATA/tier 3.

To explain my reasoning, I must first say that I cannot decide without a specific case and an IT person to question; my advice would be based on risk tolerance versus capital expenditure tolerance. Secondly, SATA has an undeserved bad rap- the drives are about as reliable as enterprise drives (according to Google’s drive reliability study). SATA drives are certainly not fast for random access loads, but for sequential and low-urgency loads like backups, they will do the job.

Low performance media will always be part of a healthy storage balance- the most bang for most companies’ bucks will be in prioritizing their applications (or even their data), and using the media that makes the most sense. Need an Oracle server to stop freezing up your warehouse management app? Put that baby on 15,000 RPM FC hard drives- lots of them. Need to keep a backup copy of a file server on site in case of a server outage? SATA will do the job. Need to keep nightly point in time backups of your entire storage infrastructure for years? You probably can’t afford to put that on drives at all- use tape.

That said, most companies that haven’t reached a boiling point in their yearly storage spending won’t bother to do much of this. Face it, tiering your applications for storage takes operator time, and gear just seems to feel cheaper to management than IT man-hours. That, plus the explosive growth of media density over the last 5 years, has kept tiered storage adoption limited to the ridiculously large data producers who have no other choice (like large banks) and to more forward-thinking smaller shops.





Storage and fabric virtualization

7 08 2007

Aloha Open Systems Storage Guy,

What’s your take on virtualization? VSAN from Cisco, SVC from IBM? What other virtualization products are available from other vendors?

Thanks,
John

Cisco VSANs and IBM’s SVC are different things for certain :)

The VSAN allows you to create multiple logical fabrics within the same switch- you tell it which ports are part of which SAN, and you can manage the fabrics individually. It’s especially useful if you’re bridging two locations’ fabrics together for replication, because with the right enterprise software feature it allows you to do “inter-VSAN routing”. That lets you keep two separate fabrics whose devices can see each other, so if the link between the sites fails (which is more likely than a switch failure), you won’t have the management nightmare of rebuilding the original fabric out of two separated fabrics when the link comes back. VSANs are also commonly used to isolate groups of devices from parts of the network they’ll never need to interact with.
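
The isolation itself is easy to picture with a toy model: ports belong to a logical fabric, and two devices only see each other if they share a VSAN or an explicit inter-VSAN route exists. The little Python sketch below is purely conceptual- it’s not Cisco’s implementation or CLI, and the names are mine.

    # Ports are assigned to a VSAN (a logical fabric) on the same physical switch.
    port_to_vsan = {"host1": 10, "array1": 10, "host2": 20, "array2": 20}

    # Inter-VSAN routing explicitly allows traffic between two logical fabrics.
    ivr_routes = {(10, 20)}

    def can_see(a, b):
        va, vb = port_to_vsan[a], port_to_vsan[b]
        return va == vb or (va, vb) in ivr_routes or (vb, va) in ivr_routes

    print(can_see("host1", "array1"))  # True: same VSAN
    print(can_see("host1", "array2"))  # True only because IVR links 10 and 20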

IBM’s SVC is a different technology, meant to consolidate multiple islands of FC storage. It’s essentially a Linux server cluster that you place between your application servers and the storage. It allows you to take all the storage behind it and create what they call “virtual disks”- essentially a LUN that’s passed to a server but is built from multiple RAIDs (possibly from multiple controllers). This gives you the option of striping your data across more spindles than you normally could, and allows you to do dynamic thin provisioning as your datasets grow.
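
Conceptually, a virtual disk is just a mapping from virtual extents to chunks of the back-end RAIDs, striped round-robin so the data lands on as many spindles as possible. Here’s a rough Python sketch of that mapping; the extent size and the disk names are assumptions for illustration, not SVC internals.

    EXTENT_MB = 256   # assumed extent size

    class VirtualDisk:
        def __init__(self, managed_disks, size_mb):
            # virtual extent index -> (back-end RAID, extent number on that RAID)
            self.extent_map = []
            extents_needed = -(-size_mb // EXTENT_MB)   # ceiling division
            used = {m: 0 for m in managed_disks}
            for i in range(extents_needed):
                mdisk = managed_disks[i % len(managed_disks)]   # round-robin stripe
                self.extent_map.append((mdisk, used[mdisk]))
                used[mdisk] += 1

        def locate(self, offset_mb):
            """Which back-end RAID and extent does this virtual offset live on?"""
            mdisk, extent = self.extent_map[offset_mb // EXTENT_MB]
            return mdisk, extent, offset_mb % EXTENT_MB

    # A 1TB virtual disk striped across RAIDs from two different controllers.
    vdisk = VirtualDisk(["ds4300_raid1", "cx3_raid7"], size_mb=1024 * 1024)
    print(vdisk.locate(0))      # ('ds4300_raid1', 0, 0)
    print(vdisk.locate(300))    # ('cx3_raid7', 0, 44)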

The only downside of the Cisco VSAN technology I can think of is its cost- it’s bloody expensive compared to a cheap low-end solution, and for anything less than a 50-device FC fabric, I would question whether it’s worth it. There is an alternative from Brocade/McData called LSAN, though I am not as familiar with it. I have been told that it’s slightly less complicated, but harder to manage, and doesn’t have the full feature set of Cisco’s.

The downside to the IBM SVC is that you add latency to all your disk IO- every read and write has to go through the Linux cluster first. It has a much larger cache than most controllers, so there’s a better chance that the data you’re looking for is already there, but if it’s not, your read performance might suffer a little because of the extra few milliseconds. The advantage is that you can now use incredibly cheap controllers with tiny amounts of cache, and you can migrate data from any manufacturer’s device to any other manufacturer’s device without interrupting your servers. Under a virtualized environment like this, an older DS4300 like you have will perform pretty much on the same level as a more expensive DS4800 or EMC CX3-80 (assuming the same number of drives), because you don’t really use the cache of the underlying system. Another advantage of the SVC is licensing: most FC storage controllers charge you, either up front or over time, for the number of servers you’re planning to connect to them- IBM charges a “partition license” fee for LUN masking, and EMC charges a “multipath maintenance” tax. The multipath drivers for SVC are free, and it only needs one partition from the controller, so you might be able to save money that way.

Did you have any specific questions about these topics you want more detail on?

Also, one of the new bloggers in the storage world, Barry Whyte, focuses on IBM SVC. He just started, but his blog will hopefully become a real resource for people with IBM storage virtualization on their minds.





John- hardware vs. software RAID, RAID 5 or 10?

23 07 2007

“Aloha Open Systems Storage Guy,

I’m a recent convert to storage administration. I’m having a hard time cutting through the cruft to find the truth. Could you answer some of these questions?

1 – Which is faster, software-based RAID (e.g. Linux md, Windows Dynamic Disks) or hardware-based RAID? One person said that software-based RAID is faster because it has a faster processor and more RAM/cache (something like a Xeon 3.0 GHz w/ 4GB of RAM would be typical in my environment). But how could that stack up against my (little bit old) IBM DS4300 Turbo (2GB cache)?

2 – Which is faster, RAID-5 or RAID-10 (or is that RAID-01?) I know everybody says RAID-10, but what about those fancy XOR engines? Or have I fallen prey to marketing?

Thanks for taking a moment to listen to my questions.
Mahalo (Thank you),
John”

Hi John, and welcome to the blog!

To answer your questions, I’m first going to give a bit of background info. If any of my statements don’t make sense, please reply and I’ll answer :).

The term “faster” can mean different things to different people. Each type of storage has its strengths and weaknesses, and different applications perform differently on the same storage systems. There are two primary application workloads- those that do random IOs, and those that do sequential IOs.

The random workloads are the hardest ones to provide storage for because it’s very difficult to “read ahead” by predicting where the next read will fall. An example of an application that has a random workload would be a database or email server.

The sequential workloads are easier to provide storage for- pre-fetching the next blocks means that, most of the time, a read is already in cache by the time it’s requested. An example of an application like this would be a backup server or certain file servers.

Another general bit of info is that in a RAID, reads (not writes) are usually the bottleneck. Writes are usually fed into the cache and acknowledged to the host server immediately. Reads, however, are typically 70% of the IO being done by a system, and as we discussed are often impossible to “pre-cache”.

When you’re calculating performance, the two stats you’ll want to know are IOs per second for random loads and MB per second for sequential loads (abbreviated IOPS and MBps). When you’re trying to tune a system to be quick for your applications, you need to know the different levels of your system and which one is the bottleneck. Normally, on a decent controller, the number of spindles you have in the RAID will determine the IOPS, and you should get a roughly linear increase in performance as you add drives to a RAID. Cache is important for the roughly 30% of IO that is writes (your mileage may vary), but everything goes to disk eventually, and most people experiencing slow performance on their disk controllers simply don’t have enough disks.
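
To make that arithmetic concrete, here’s a small sketch of the sizing math, assuming random IOPS scale linearly with spindle count. The per-drive numbers are generic rules of thumb I’m plugging in for illustration, not vendor figures.

    # Rough per-drive random IOPS rules of thumb (assumptions, not vendor specs).
    PER_DRIVE_IOPS = {"15k_fc": 180, "10k_fc": 140, "7.2k_sata": 80}

    def spindles_needed(target_iops, drive_type):
        """How many drives a random workload needs, assuming linear scaling."""
        per_drive = PER_DRIVE_IOPS[drive_type]
        return -(-target_iops // per_drive)   # ceiling division

    # A database doing ~5,000 random IOPS:
    print(spindles_needed(5000, "15k_fc"))     # ~28 FC drives
    print(spindles_needed(5000, "7.2k_sata"))  # ~63 SATA drives for the same load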

Onto the specifics of your question:

1- Software or Hardware RAID: For most workloads, a dedicated hardware RAID controller is faster. Software RAID has to share resources with the operating system, which is usually not optimized for sharing on that level. The IBM DS4300 you have is actually an LSI box, and has a very powerful RAID controller for its price. Don’t let your sales rep talk you into replacing it! Those boxes may be a little old, but the only major differences between that and the newer IBMs are that the newer ones use 4Gb fiber and have more cache. It’s very rare that a workload can max out 2Gb fiber on the front end, and even more rare that the controller can fully utilize all the bandwidth on the disk side. The extra cache can be useful, but you will see diminishing returns- the benefit of going from 2 to 4GB is far less than going from 1 to 2GB. The controller should not be your bottleneck for anything under 80 FC drives on the system you have, so unless you want to go beyond that, keep your box until the maintenance costs more than a replacement. Add more drives if you need IOPS or MBps, but don’t throw it out. These boxes are supposed to be like houses- only buy a bigger one when you need it, not because the last one is obsolete.

2- RAID 5 or RAID 10: I’ll compare them on reliability and performance. RAID 5 uses the space of one disk for parity, while RAID 10 uses the space of half the disks for mirroring. Reliability-wise, RAID 10 is the obvious winner: you can lose up to half your disks before you lose data (as long as you don’t lose both drives of the same mirrored pair). If you lose a second drive while rebuilding a RAID 5 array, you will have to go back to your last backup. Generally, this is more of a worry for large SATA drives than for the smaller and faster FC drives- SATA RAIDs take far longer to rebuild because of the larger amount of data combined with the lower performance per spindle.

Speaking of performance, the performance you get per drive is better with RAID 5. Most people put two RAID 5s in each enclosure and have 4 to 6 RAIDs per hot spare. The XOR engine you mention performs the parity calculations for RAID 5; it’s not needed for RAID 10 or any other non-parity RAID level. Since you do have a fairly fast controller, RAID 5 is attractive, but you have to balance your decision between performance and reliability.
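
For the curious, the XOR math that engine performs is simple enough to show in a few lines of Python: parity is the XOR of the data blocks in a stripe, and any single lost block can be rebuilt by XORing the survivors back together. This is just a toy demonstration of the principle, not how a real controller implements it.

    from functools import reduce

    def xor_blocks(blocks):
        """XOR a list of equal-length byte blocks together, column by column."""
        return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

    data = [b"AAAA", b"BBBB", b"CCCC"]   # three data blocks in one stripe
    parity = xor_blocks(data)            # what the XOR engine writes to the parity position

    # Simulate losing the second block and rebuilding it from the survivors:
    rebuilt = xor_blocks([data[0], data[2], parity])
    print(rebuilt == data[1])            # True- the lost block is recovered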