cod·i·fy (kŏd′ĭ-fī′, kō′dĭ-)
tr.v. cod·i·fied, cod·i·fy·ing, cod·i·fies
1. To reduce to a code: codify laws.
2. To arrange or systematize.
Pay attention to number 2 there. Chris Mellor of The Register got some words from Steve Legg, IBM UK’s Chief Technology Officer for Storage.
These words made it quite clear that there's an intent to codify stupidity within IBM Storage UK. He said simplify, but this is me, and I don’t like lies and obfuscation. What he actually meant is “collapse the offerings, and then make some patently ridiculous and arguably false statements to the press.” The word choices he made were exceptionally poor, but the choices made in "collapsing" are far worse.
And here comes the hatemail because me, Mister I-Love-SVC and I-Love-DS8K is calling IBM Storage “stupid” and “ridiculous” and thus I must now be a shill for $MostHatedVendor or whatever. Except I’m STILL not employed or representing anybody but myself. Seriously, if I was shilling, I would have built myself a Dragon 20w with dual 5970’s. Or I would have at least put 16GB in my ESXi box instead of 8GB.
Anyways, let’s be honest and start with the good. I like honest, and I like good. Who doesn’t? SONAS – forget IBM’s acronym of Scale-Out NAS. I demand they change the acronym to Seriously Ossum NAS. It’s a brilliant design in its overall simplicity, combined with absolutely ridiculous density. If anyone’s going to get this right, it’s not Sun – I mean Oracle, it’s going to be IBM. They have the budget and resources. And SONAS delivers, if the order is NAS. I am a little dubious of some aspects of SONAS, but these are software issues and not hardware issues. Software issues should be fixable without needing to forklift the hardware.
What software issues am I concerned about? SONAS is going up against not just Oracle, but NetApp, EMC, HP, Dell and so on inevitably. In that regard, it’s lacking the snapshot-to-application integration NetApp and others have. At the price points IBM’s talking on SONAS? Integrating with applications for snapshots is pretty much expected. There are a lot of other software integration and capability questions that IBM has so far left unanswered (without NDA), so it’s very much a wait and see. The hardware has the potential; it’s up to the software to execute. But at least they’ve solved the back end portion already with GPFS.
The good while being less than brilliant: "VDS." This ‘offering’ is almost insulting to the capabilities of the IBM SVC. The VDS product cripples the SVC by chaining it to IBM’s low and midrange storage, the DS3k and DS5k. Look, you’re not likely to sell any business who’s had a DS5k another DS5k. The architecture is positively ancient, and is still incapable of performing anything beyond the most basic maintenance online. Any firmware maintenance absolutely requires hours of downtime. The DS3k doesn’t even attempt to fake online maintenance capabilities – it just can’t, and it’s not meant to.
But this is a channel play. Why? Beats me – IBM could certainly use more solutions as opposed to just products. My opinion is that it would be a lot smarter to keep VDS close to the chest, and offer it with DS3k, DS5k and DS8k. Seriously folks, the DS3k and DS5k can produce great performance numbers, but they have not been and will not be true enterprise arrays. You have a minimum 2 hours of downtime per year – that’s minimum, not typical – for mandatory firmware upgrades. Why? DS3k and DS5k require stopping all IO to do controller, ESM and disk firmware updates. So the SVC’s high availability ends up somewhat wasted here. Only the DS8k is on par with the SVC for high availability while servicing.
And the patently ridiculous and arguably false, otherwise known as codifying stupidity. I’m going to give you a quote, and you’re not going to believe it, but it’s a very real quote.
"XIV can reach up quite a long way and run parallel to the DS8000.” –Steve Legg, IBM UK Storage CTO
Yes, that’s Steve Legg of IBM UK saying that the XIV is the equal of the DS8000. Now Steve, the horse is out of the barn, and you can damn well believe I’m going to call IBM out on this load of manure. That statement has absolutely no basis in fact by IBM's own published case studies and reference sites, and even a cursory review of the specifications of the two arrays reveals it to be obviously disingenuous at best.
But let’s have a refresher of those spec sheet contents, shall we?
XIV comprises 15 modules totaling 180 1TB 7200RPM SATA disks with 120GB of cache and over 7kVA of power draw at idle and a peak of 8.5kVA at 29,000BTU/hr. The only RAID type is mirroring, reducing actual capacity to 79TB before snapshots – this is also the maximum capacity of the XIV, 79TB – it is not possible to span frames except to mirror them. You cannot grow past 79TB and there is no intent to move to 2TB disks in the next generation XIV hardware. Disk interface is 12x SATA over Gigabit Ethernet, changing to SATA over InfiniBand in the next hardware release (forklift upgrade required). Protocols spoken are Fibre Channel 1/2/4Gbit and iSCSI over Gigabit Ethernet, with a maximum of 24 FC ports and 6 iSCSI ports, and host ports removed for Mirroring HA (the only HA method available). Major component maintenance is limited and customers may perform absolutely no service on XIV whatsoever. And I do mean NONE; even a simple disk replacement must be performed by a specially trained CE. IBM shipped the 1000th XIV in November of 2009.
DS8000 is now four generations old, comprising the DS8100, DS8300, DS8300 Turbo and recently introduced DS8700. Based on the IBM POWER architecture as a controller and using custom ASICs, the DS8000 family doesn’t just hold but absolutely owns the SPC-1 and SPC-2 benchmarks. Two processor complexes provide from 32GB to 384GB of combined cache and NVS. The DS8700 ranges from 16 to 1024 disks using any combination of 73/146GB SSD, 146/300/450GB 15K RPM, and 1TB 7200RPM disks in packs of four or sixteen, with a maximum capacity of 1024TB. RAID levels supported are 5, 6 and 10. Disk interface is FC-AL via multiple GX2-connected IO complexes. The frame ranges from a single wide cabinet to 5 frames (base plus four expansions) with a minimum power draw of 3.9kVA base and 2.2kVA per expansion, and a maximum of 7.8kVA and 6.5kVA respectively. The thermal min/max is 13,400/26,500BTU/hr and 7,540/22,200BTU/hr respectively. Protocols spoken are Fibre Channel 1/2/4Gbit and FICON 4Gbit with a maximum host port count of 128 in any combination of FC and FICON. Almost all major component maintenance can be performed without needing to shut down the DS8000, and all prior models can be field upgraded to the current DS8700 941/94E. Customers may opt to perform most DS8000 maintenance tasks themselves and some hardware repair, including disk replacement.
As you can see, these two systems are not even remotely similar or comparable. The absolute maximum disk IOPS an XIV is capable of, being as generous as we can be at 180 IOPS per disk, is 32,400 IOPS. The DS8700, using FC disks and the same 180 IOPS per disk as a conservative number, is capable of 184,320 IOPS. This is ignoring all buffering, caching and advanced queuing. The DS8700 is proven to be capable of well over 200,000 IOPS with a high number of hosts. IBM refuses to submit XIV to an audited benchmark, and their most detailed case study, with Gerber Scientific, shows XIV handling a total of only 6 systems (claiming 26 LPARs, which is still ridiculously tiny) and using less than 50% of its available capacity.
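If you want to sanity-check that spindle math yourself, it's trivial. Here's a back-of-the-envelope calculation (Python, purely illustrative; the 180 IOPS-per-disk figure is the same generous/conservative assumption as above, and cache is ignored entirely):

    # Raw spindle IOPS only - no cache, no NVS, no queuing tricks.
    IOPS_PER_DISK = 180            # generous for 7200RPM SATA, conservative for 15K FC

    xiv_disks = 180                # fixed: 15 modules x 12 SATA disks
    ds8700_disks = 1024            # maximum DS8700 configuration

    print("XIV raw disk IOPS:   ", xiv_disks * IOPS_PER_DISK)      # 32,400
    print("DS8700 raw disk IOPS:", ds8700_disks * IOPS_PER_DISK)   # 184,320
    print("Ratio: %.1fx" % (ds8700_disks / float(xiv_disks)))      # ~5.7x

Same per-disk assumption for both boxes, and the DS8700 still comes out ahead by a factor of almost six on spindles alone.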
For IBM to even insinuate that the XIV is “parallel” to even the DS8100 first generation hardware is to basically call their customers idiots; it is the same as telling MotorTrend that your 1985 Yugo 45 can keep pace with a 2004 Ferrari Enzo. It’s only true as long as they’re both doing 25MPH and you’re willfully ignoring everything other than the fact that they both can do 25MPH. Anybody who spends more than 10 seconds reviewing the specification sheets for these two systems or cars will immediately be able to tell that they are not in the same class. Yet IBM would very much like you to believe that their Yugo 45 is just as fast as their Ferrari Enzo. Perhaps a more apt comparison would be that Steve is currently telling you that IBM's Renault Twingo can totally hold at least as many people as their London Double Decker Bus.
Am I calling Steve Legg an idiot? Absolutely not. Steve just made an amazingly bad word choice. Steve Legg is a well respected guy, and not someone who's going to call you daft, customers least of all. But he’s basically said that IBM’s organizational stance is that customers aren't smart enough to spend a few moments reviewing a spec sheet and seeing the obvious disparity between the two arrays. He’s saying that IBM believes customers are too stupid to see the inefficiency of the XIV as compared to its “green” claims, too stupid to see the raw horsepower of the DS8700, too stupid to tell the difference between 7200RPM and 15000RPM, too stupid to understand that 3.9+2.2/7.8+6.5 kVA is more efficient than 7+7/8.5+8.5 kVA. The problem with this is that the special XIV people will latch onto these words, yet again, and continue to use them while they treat customers like idiots. (To those who claim they don't: I've had them tell me to my face that the numbers they were putting up on the screen as gospel didn't mean anything. Among other things.)
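And since I brought up the power figures: the arithmetic behind that last comparison is just as simple. A minimal sketch, assuming my reading of the configurations is right (a base-plus-one-expansion DS8700 versus a mirrored pair of XIV frames, kVA figures from the spec paragraphs above):

    # Idle and maximum power draw, straight from the quoted spec figures.
    ds8700_idle = 3.9 + 2.2        # base frame + one expansion, kVA
    ds8700_max = 7.8 + 6.5
    xiv_idle = 7.0 + 7.0           # two frames, since mirroring is the only HA option
    xiv_max = 8.5 + 8.5

    print("DS8700 base+expansion: %.1f kVA idle, %.1f kVA max" % (ds8700_idle, ds8700_max))  # 6.1 / 14.3
    print("Mirrored XIV pair:     %.1f kVA idle, %.1f kVA max" % (xiv_idle, xiv_max))        # 14.0 / 17.0

The XIV pair idles at more than double the DS8700 combination's idle draw, and nearly matches its maximum.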
Yet again, this does not mean XIV does not meet some needs. What it does mean is that XIV is still not equal to nor does it offer performance comparable to the DS8000.
His statements show that IBM’s offerings have codified stupidity: “we now sell on the basis that customers are too stupid to read or question us.” When customers push back on the high cost of DS8000, just whip out the significantly cheaper and far less capable XIV without mentioning anything other than "it can run parallel to the DS8000!" All of which only goes to further support my argument that you should be questioning your vendor at length, demanding hands-on testing, and refusing to take their word for it on any statements of suitability or performance. The choice is yours – you can challenge your vendor, or you can enjoy the challenge of finding new employment. And you should be extra careful about what exactly you say to the press, especially when you have a fiefdom that doesn't answer to you, just itching to abuse your words.
Update:
I'm sorry about the VERY poor wording on my own part, and I want to extend my sincerest apologies to Steve Legg if I caused any offense. (I should not be writing so late, obviously.) Steve is by all accounts a great guy, and I'm sure that it wasn't his intent to imply that customers are idiots. The problem is that he made a bad choice of words and phrasing, and that's how it came out. I'm quite positive he knows better, especially since IBM UK is the home of the SVC.
The problem is that's how the words went and how the offerings are now aligned, and what it says to me as a customer. But these aren't decisions that are made by just one person at IBM, and Steve is just the messenger in this case. He certainly doesn't deserve it, and I certainly would not rain my wrath down upon Steve specifically. If you ever get a chance to meet Steve Legg, be sure to shake his hand and thank him for SVC. ;)
This comment might seem a bit strange, considering that I'm Marc Farley, otherwise known as 3parfarley, the blogger from 3PAR. We compete against the XIV, DS8000 and DS5000 products.
I know Steve Legg and I consider him to be one of the finest people I have met in the industry. I understand your frustration, but I can assure you Steve did not mean to imply that IBM customers are idiots.
The dishing you give him would be more appropriate for somebody like me or EMC's Chuck Hollis or Barry Burke or any of the other vendor bloggers, but Steve did not make statements bashing anybody, not even IBM's competitors - nor did he engage in the usual mud slinging that the rest of us practice with such zeal.
However, I definitely DO WONDER why IBM's customers would be so crazy as to buy any of their kit, considering the cost and the issues you outlined.
You mention SPC-1 bragging rights. Yes, their numbers top the chart, but at what ridiculous expense? Here are some comparison numbers between SVC + DS8K and 3PAR's most recent SPC-1 benchmarks. (this content comes from the Executive Summaries published by the SPC. Links are provided at the end of this comment)
SVC 6-node cluster was $18.83/SPC-1 IOPS
With a response time of 7.64 ms
@ 100% load, generating 380,489 IOPS
Priced at $7,165,323.19

SVC 4-node cluster was $22.65/SPC-1 IOPS
With a response time of 28.13 ms
@ 100% load, generating 315,043 IOPS
Priced at $7,134,842.39
Note: at 7.19 ms this config was running at 80% load and generated 252,000 IOPS

3PAR's enterprise T-Class was $9.30/SPC-1 IOPS
With a response time of 7.22 ms
@ 100% load, generating 224,989 IOPS
Priced at $2,091,667

3PAR's midrange F-Class was $5.89/SPC-1 IOPS
With a response time of 8.85 ms
@ 100% load, generating 93,050 IOPS
Priced at $548,432
The SVC configurations were both priced at over $7 million, while 3PAR’s T-Class was priced at just over $2 million. The 6-node SVC cluster cost nearly 3.5x as much as our T-Class and generated just under 70% more IOPS. The 4-node SVC cluster cost nearly 3.5x as much as our T-Class and generated only 40% more IOPS.
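If anyone wants to check Marc's arithmetic against those Executive Summaries, the quoted figures do work out (illustrative Python; prices and IOPS copied straight from the comment above):

    # Price and 100%-load IOPS as quoted above, from the SPC-1 Executive Summaries.
    configs = {
        "SVC 6-node cluster": (7165323.19, 380489),
        "SVC 4-node cluster": (7134842.39, 315043),
        "3PAR T-Class":       (2091667.00, 224989),
        "3PAR F-Class":       (548432.00,   93050),
    }

    for name, (price, iops) in configs.items():
        print("%-20s $%5.2f per SPC-1 IOPS" % (name, price / iops))

    t_price, t_iops = configs["3PAR T-Class"]
    for name in ("SVC 6-node cluster", "SVC 4-node cluster"):
        price, iops = configs[name]
        print("%s: %.1fx the T-Class price, %.0f%% more IOPS"
              % (name, price / t_price, 100.0 * (float(iops) / t_iops - 1.0)))

That gives $18.83, $22.65, $9.30 and $5.89 per SPC-1 IOPS respectively, with the SVC configs at roughly 3.4x the T-Class price for 69% and 40% more IOPS.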
Here are shortened URLs for the SPC-1 summaries:
http://bit.ly/bO8CDB
http://bit.ly/dDtTcI
http://bit.ly/9lETHE
http://bit.ly/a6nKco
I have worked with Steve for many years, and I'm not quite sure why you have ranted quite so much here - it feels like you've been burned by something and are taking it out here.
With the proportion of cache to disk in XIV, you can scale to many more IOPS in certain workloads than the basic disks themselves provide, taking some workloads up to over 100K IOPS - thus scaling into the same domain as DS8000. (Two DS8700s on SPC-1 gave ~160K IOPS - the extra on the 6-node test coming from the additional SVC cache and processing capability, since it was the same two DS8700s - which in itself shows how much additional performance can be gained from adding just 48GB of additional cache.)
I was also going to correct your SPC-1 comments: it was two DS8700s we used for these tests, not one.
Both these tests were disk limited; that is, the SVC could sustain more IOPS if there were more spindle IOPS available.
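To put a rough number on the cache argument: with a big enough cache in front of slow spindles, the misses are what limit you, not the raw disk IOPS. A crude model (every hit rate and figure below is my own illustrative assumption, not a measured XIV number):

    # Crude read model: hits come from cache, only the misses reach the spindles.
    DISK_IOPS_TOTAL = 180 * 180          # 180 spindles at ~180 IOPS each = 32,400

    def effective_iops(hit_rate, cache_limit=500000.0):
        # The spindles can only absorb the miss fraction of the total load.
        return min(cache_limit, DISK_IOPS_TOTAL / (1.0 - hit_rate))

    for hit in (0.0, 0.5, 0.7, 0.8, 0.9):
        print("hit rate %2.0f%% -> ~%6.0f IOPS" % (hit * 100, effective_iops(hit)))

At around a 70% hit rate the same 180 SATA spindles are good for over 100K IOPS, which is the sort of result Barry describes - and also exactly why it only holds for cache-friendly workloads.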
As for Marc, if you want a better $/IOPS, then use cheaper storage to get the IOPS. I doubt 3PAR's arrays are tracking to the same enterprise-class availability as SVC + DS8700, and for that, people are willing to pay a lot. In addition, imagine what we could do if you put an 8-node SVC in front of a collection of your boxes... that's true clustering ;)
Hey guys, glad to see you here. :)
Okay, first and foremost, I'll have to go back and edit the blog. The point I'm trying to make is that Steve isn't stupid, but he made an EXCEPTIONALLY poor choice of words. Remember, these are published quotes. It's not necessarily what Steve thinks - I know Steve is a brilliant guy, and has the utmost respect for IBM's customers - but he said what he said.
I will go back and attempt to clarify that better. People need to pay attention to what they say when they speak for their employer, and I'll be honest - I've seen XIV sales. And they treat customers like idiots. Maybe it was just the folks I got, but XIV remains their own little fiefdom in IBM land.
Marc, to address cost versus benefit, let's be honest here. How many people do you know that got fired for buying IBM (other than DS4k/DS5k)? IBM sells kit not purely on "our stuff is better" but "we're IBM." They provide a certain comfort factor to organizations from age and size. Customers aren't just buying kit; they're buying a company that's been around forever, that practically invented half the stuff they use today, that has a huge and well proven service organization. (Call it the "Safety Blanket" effect.) Also, as Barry mentioned, the DS8k's are nigh on unkillable. I think the ONLY way I could bring one down unplanned is by yanking all power, and I'm pretty sure even that's not enough to get it down dirty unless I hit the EPO.
As Barry pointed out before I could, yep, it was a pair of DS8700's in mirroring as I recall. He'd have to speak to the exact details of how that was setup and what access pattern SVC used. I've been told by IBMers what the theoretical max IOPS for the old 4F2 nodes is, but I don't have approval to share the number.
Continued, because of the 4096 character limit...
The problem with XIV is that it's unproven, unverifiable, etcetera ad infinitum. All the case studies in the world are effectively worthless in the face of the SPC numbers we've just mentioned. Can it do 100K IOPS? Nobody can absolutely prove it can. No offense Barry, but without public, verifiable numbers, it's just claims. It's the same as me saying that a single SVC node can do 1M IOPS all by its lonesome - maybe it can, maybe it can't, but either way nobody can verify one way or another.
And this isn't a new thing, either. IBM has been refusing to provide verifiable, audited performance numbers for XIV since day one. Moshe has been stonewalling benchmarks since the first unit was sold. Frankly, the XIV reeks of Sun E10k to me. (Sorry, vague thing, but I couldn't care less about the claim of perpetual NDA.) In fact, it's almost exactly parallel to the E10k in many, many ways. How exactly? Let's see:
1) NO BENCHMARKS ALLOWED! NO NO NO!
Early customers were not even allowed to run benchmarks on their own E10k, internally. XIV has spent a great deal of time decrying the validity of benchmarks like SPC. NDA be damned, I have no qualms about sharing that the XIV TECHNICAL presenters I got told me to my face that SPC wasn't a valid benchmark for XIV and didn't apply at all to them because they were special.
2) Critical show-stopper problems abound, requiring major re-engineering to actually resolve.
For the E10k this was the 400MHz/8MB chips; for XIV it's the ATA-over-Ethernet interconnect.
3) You Can't Say That - Or Anything!
I was warned a year and a half ago that all XIV presentations were under a blanket NDA. We could not discuss anything we saw in them with anyone except IBM and an XIV-approved VAR. For the E10k, the official stance was pretty much Fight Club rules 1 and 2.
4) Look, Don't Touch, under penalty of death.
When you bought an E10k, it came with Sun engineers. (I'll get to that in a sec.) They added six digits each to the annual cost of your E10k, and were mandatory. You were not allowed to manage the operating system or the hardware - only they could touch it.
When you buy an XIV, you don't even get the engineers, but you get the restrictions. You're not allowed to do anything other than create LUNs and snapshots. You're not allowed to do basic repairs like swapping disks, or even update the firmware. Everything must be done by a specially trained IBM CE who has gone to XIV school.
In both cases, if you touched something you weren't allowed to touch, you just voided your support contract in its entirety.
5) It's Made By Us But It's Not Made By Us.
The E10k was the result of an acquisition Sun made. That says it all right there, pretty much - the people who designed, built, and maintained E10k's were Sun in name only. Most of their organization was not integrated, so you had to call the E10k-specific numbers, use E10k-specific engineers, etcetera. It was a long time before any meaningful integration. And even then, engineering was their own area unto themselves.
XIV is the same deal; your regular IBM technical staff can't make XIV presentations. Those have to be made by people from XIV. Your regular VAR couldn't just sell XIV until mid-'09; they had to be a special partner and go through special XIV training. Your regular CE can't touch your XIV till he goes to special XIV training.
And then there's the parts side: made by us but not even remotely made by us. For the Sun E10k, the Sun part was the UltraSPARC processor (and the Ultra 5 boot system). For XIV, it's... maybe some of the cables?
So, as I said, XIV is okay for some workloads. But it's no equal to the DS8k. Barry, you even made that point clearer yourself. The XIV has 120GB of splintered-access, slow DRAM cache; the DS8000 has 384GB of mirrored, high-bandwidth DRAM cache. There's just no way XIV can scale up to DS8000.
Guys, unplug power, pull controllers, pull out disk magazines - our systems keep running. We introduced a technology called persistent cache that makes new redundant copies of cache data should a node or its cache fail - the result is high performance after a node failure. FWIW, that is available on our 4-node midrange F-Class arrays too.
Barry, I'm not sure what the advantages would be of putting SVC in front of our arrays. Would two layers of cache give much of a boost? Management would be more complicated.
Yes 3PAR is a much smaller company than IBM. Our financials are excellent (no debt and $108 Million in cash). As one of the giants IBM can do things as a company that we can't. All we do is storage.
And blog :)
Careful, Marc; my ability to break things is well known and documented. (BRB, finding somebody else's F-class to do "maintenance" on. ;)
Barry's going to be able to explain it way better than me, but it goes a little something like this:
SVC has a minimum of 8 ports, will use all available storage ports, and stripes data across the LUNs in an MDisk group.
Thus, if we have a single F-Class with four 8-spindle LUNs across 4 nodes, traditionally we only use 8 spindles per LUN.
With SVC involved, you now have all 4 nodes busy, and the host-presented LUN now uses all 32 spindles instead of just 8. (Or you could use only 16, or only 8, and so on.) It also centralizes management and makes it simpler, because all your host and disk management, regardless of backend storage, is done at the SVC.
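To make the striping idea concrete, here's a rough sketch (hypothetical extent size and names, Python pseudologic, not actual SVC internals) of how a VDisk's extents get spread round-robin across the MDisks in a group, so every backend LUN - and every spindle behind it - ends up in play:

    # Toy model of SVC-style extent striping across an MDisk group.
    # Extent size and names are made up for illustration.
    EXTENT_MB = 256

    def stripe_vdisk(vdisk_mb, mdisks):
        # Round-robin each extent of the VDisk across the MDisks in the group.
        layout = dict((m, []) for m in mdisks)
        for ext in range(vdisk_mb // EXTENT_MB):
            layout[mdisks[ext % len(mdisks)]].append(ext)
        return layout

    # One F-Class presenting four 8-spindle LUNs as four MDisks, one 100GB VDisk:
    group = ["mdisk0", "mdisk1", "mdisk2", "mdisk3"]
    layout = stripe_vdisk(100 * 1024, group)
    for mdisk in group:
        print("%s holds %d extents" % (mdisk, len(layout[mdisk])))

Every MDisk - and so all 32 spindles - ends up servicing a quarter of that host LUN, instead of the whole thing living on a single 8-spindle array LUN.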
Make sense? If not, I'm sure Barry can help explain it better. That's also ignoring cache involvement - he'll have to explain CF8 Vdisk mirror abuse, as I call it. ;)
3Par's definitely got a leg up on IBM in some areas, while IBM has a leg up in others. I certainly consider both competitors in a wide range of areas. Unfortunately, nobody is ever going to let me decide purchases purely on technical merit.
LUNs in our systems typically stripe across all disk drives (of the same class) in the system. We don't group them by controller or shelf. Admins don't create hypers or metas and combine them to make bigger entities. There is a hell of a lot of cross-shipping across the backplane, but it's big and fast and includes special ASIC code for managing it all as part of a cluster. The same cluster design also gives us a lot of resiliency if things start failing.
Time has shown that the quote "You cannot grow past 79TB and there is no intent to move to 2TB disks in the next generation XIV hardware" turned out not to be true. Yesterday the Gen3 was announced, with 2TB disks, but then again the latest versions of Gen2 already have them with some clients. And who knows what disk sizes the system will support by the end of next year; the disk vendors sure are pushing the larger sizes.
ReplyDeleteYou have really helped several of individuals like me, who have been searching internet from past quite a long time to find detailed information on this particular topic.
ReplyDeleteibm storage controller