There’s been quite a lot of hype about the so-called “Server SAN”, which basically denotes a software-defined storage solution built on commodity server hardware carrying a significant number of disks and SSDs. There’s an excellent in-depth description by Stuart Miniman on Wikibon.
Analysts see a high market potential for this type of storage, with some expecting it to largely replace traditional enterprise storage within about 15 years. Will this really happen? Do we all need to change our storage paradigms and switch to Server SAN?
No, I do not think so.
Why? Let me point out a few things, and rest assured this is not about bashing a new technology. We’ve already seen Server SAN grab its market share, and I’m not surprised, because it obviously has very strong advantages and benefits. I suppose the most notable are that
- it provides very low read latency and high performance, since it is essentially hybrid local storage;
- it’s software-defined and usually very easy to manage;
- it eliminates the need for specialized, expensive storage systems by utilizing commodity hardware;
- it’s highly scalable.
One of the main characteristics of Server SAN is that it combines compute and storage resources. I do not list this as a benefit, because to me it is the most significant, indeed crucial, disadvantage.
First reason: the convergence of compute and storage only works out properly if your capacity requirements for both correlate closely. It’s no big wonder these technologies originate from Facebook, Google and other web-scale enterprises; for them, I think, it is a perfect match.
It’s just that most enterprises do not consist of IT business only, but need IT to run their real business, and this Enterprise IT is different. They will not need to scale out the compute part as much as the storage capacity, and would be wasting money on unused resources and licenses. Extending only the compute capacity (which nowadays more likely means memory) may be feasible if the product supports nodes without storage or with just a small amount of disk space, but the other way around it gets quite hard. And I think the storage demand of Enterprise IT will grow considerably faster than its computing power requirements. In that case the convergence of compute and storage will increase the costs instead of reducing them.
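A quick back-of-envelope calculation illustrates the point. All prices, node sizes and license fees below are hypothetical placeholders of my own, not vendor figures; the shape of the comparison is what matters, not the numbers.

```python
import math

def converged_cost(extra_storage_tb, node_storage_tb=20, node_price=25_000,
                   license_per_node=5_000):
    """Growing capacity with full converged nodes: every new node also
    brings compute you may not need, plus another hypervisor license.
    All parameters are assumed example values."""
    nodes = math.ceil(extra_storage_tb / node_storage_tb)
    return nodes * (node_price + license_per_node)

def dedicated_cost(extra_storage_tb, shelf_storage_tb=60, shelf_price=30_000):
    """Growing capacity with disk shelves on a dedicated array:
    no extra compute, no extra hypervisor licenses."""
    shelves = math.ceil(extra_storage_tb / shelf_storage_tb)
    return shelves * shelf_price

extra = 120  # TB of additional storage needed; compute demand unchanged
print(converged_cost(extra))  # 6 nodes  -> 180000
print(dedicated_cost(extra))  # 2 shelves -> 60000
```

With these (made-up) figures, growing storage-only demand through converged nodes costs three times as much, and the gap widens the further storage growth outpaces compute growth.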
The key success factor of virtualization, besides cost reduction of course, was the flexibility and agility it provides, especially to adjust the IT to business requirements. So do we really want to mostly dismiss this advantage?
But there are more reasons why I’m quite sceptical about the Server SAN hype, mainly because we create dependencies I do not consider desirable in Enterprise IT. If I have to put some servers of a well-designed virtualization platform into maintenance mode, or want to add additional servers, I simply do so: move the VMs around and all is good. With Server SAN I have to think carefully about side effects like automatic reconfiguration or rebuild activities, any of which will impact the performance of otherwise unaffected nodes. Enterprise IT services and applications are usually far from cloud-ready, which means they don’t easily tolerate outages, hiccups or significant delays. So one has to understand the inner workings of the Server SAN product, which puts the claimed ease of operation into perspective. You may not need a full team of storage experts, but you should not go without at least a few.
Speaking of maintenance: from my experience, the patch and update frequency of hypervisors is considerably higher than that of dedicated storage systems, whose internal redundancy often allows rolling updates without downtime.
One last thing, which I have found is not widely covered: support contract cost. Properly designed [larger] virtualization platforms have enough spare or failover servers to reduce the support contract to next-business-day. I would recommend this to customers. Really. Buy decent hardware with internal redundancy (especially power supplies and mirrored storage for the hypervisor) and extra servers for sufficient failover capacity, as you would most likely do anyway, and reduce the cost with a basic support level. This works fine with separate compute and storage systems, but if you go the Server SAN way you have to buy 24×7 support, at least for your production platform. That could be a huge amount of money, depending on your design. At the very least, don’t forget these costs in your ROI calculation.
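To make the ROI point concrete, here is a minimal sketch of the support-cost delta. The per-server rates and the fleet size are assumptions for illustration only; plug in your own quotes.

```python
def annual_support_cost(servers, rate_per_server):
    """Total yearly support contract cost for a fleet of identical servers."""
    return servers * rate_per_server

NBD_RATE  = 800    # next-business-day support per server per year (assumed)
RATE_24X7 = 2_500  # 24x7 mission-critical support per server per year (assumed)

servers, years = 20, 5
delta = (annual_support_cost(servers, RATE_24X7)
         - annual_support_cost(servers, NBD_RATE)) * years
print(delta)  # 170000 extra over a five-year lifetime
```

Even with these modest placeholder rates, being forced onto 24×7 support for a twenty-node production platform adds a six-figure sum over a typical hardware lifetime, which is exactly the kind of item that gets lost in a Server SAN ROI calculation.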
There are use cases where Server SAN perfectly meets the requirements, maybe better than dedicated compute and storage resources. But I would strongly recommend not falling for the hype. In Enterprise IT, or generally in large environments with mixed and potentially changing requirements, Server SAN may bring more drawbacks than advantages. Dismiss FC-based SAN, go for a converged network, and check out the improvements of “traditionally” connected hybrid storage. New vendors with fascinating products have entered the scene, with amazing improvements in manageability as well as high IOPS. I’m also quite sure the vendors of conventional storage systems will come up with something addressing the Server SAN advantages. And no, I don’t think they have done so yet: All-Flash Arrays are to me just some kind of panic reaction, not products I would take seriously. I mean, come on. It’s overkill. To me hybrid storage is the future. Time will tell.
Let me know what you think!