Thursday, November 5, 2009

Virtual Computing Environment Coalition

I was wondering how long it would take before these three decided to get together and try to push out the competition. For a customer that hasn’t really done anything with virtualization because of the perceived implementation risks, this would be a tempting way to go, since it would appear to reduce those risks. There are really three main parts to this announcement.

The Vblock Infrastructure Packages
This is the combined hardware and software offered by Cisco, EMC, and VMware: pre-packaged VMware, UCS, MDS, and EMC storage solutions, collectively called a Vblock. As part of the announcement, three Vblock solutions were introduced:
  • Vblock 0 (available mid-year 2010) is an entry level system which includes Cisco compute and networking, EMC Unified Storage, and VMware vSphere software.
  • Vblock 1 is a mid-sized configuration that includes Cisco UCS, Nexus 1000v and MDS switches, EMC CLARiiON CX-4 storage, and VMware vSphere.
  • Vblock 2 is a high-end solution that includes Cisco UCS, Nexus 1000v and MDS switches, EMC Symmetrix V-Max storage, and VMware vSphere.
VCE Services
There are a number of "pre-packaged" services offerings, ranging from high-level strategy to actual implementation services for the Vblock. These services will be delivered by a new company called Acadia, which is a joint venture between Cisco and EMC. Note that Acadia will not be fully functional until 2010.

VCE Seamless Support
The three vendors have created a virtual support center staffed by a combination of people from each of the three. They have also created joint test labs, cooperative engineering groups, etc. to try to provide the customer with a single point of contact for support.

My experience has been that support for VMware solutions tends to break down around the interfaces between the components. For example, where the storage meets the server and VMware itself. When there is a problem, who do you call first? As it turns out, managing the “interfaces” tends to fall on the customer's infrastructure team, who in turn need the skills and the knowledge to fill those gaps. THAT is a lot of risk to take on, especially in the beginning. The consolidated support part of this coalition would mitigate those risks, since there is only one virtual support organization. The question in my mind as a customer would be: how up to speed are these support guys going to be with regard to those “gaps” on day one?

You also have to wonder how much leverage a customer ends up having when they are buying this kind of unified solution. Since everything is unified, and there is only one game in town, you have to wonder how expensive something like this is going to be. Sure, EMC, Cisco, and VMware are going to tell you that all of the goodness of the unification means that the solution is going to be a little more expensive ... But look at what you’re getting for that extra money! ;-)

What I would be interested in knowing is what other vendors like Microsoft, Brocade, IBM, NetApp, and Hitachi are planning as a response, if anything. Perhaps a coalition between VMware, Dell, Brocade, and NetApp to offer similar pre-packaged solutions? Would VMware refuse to join such a coalition and turn away business? Even if they didn’t join, there’s still nothing preventing the rest of those vendors from forming a coalition and just buying VMware as they need it, or joining with Microsoft and offering something based on Hyper-V instead of VMware. Could this be the start of “Coalition Wars”?

Sunday, April 5, 2009

The real cost of storage

What's the real cost of storage? I get asked this question all the time, and it's so difficult to answer because it really does depend on so many factors from storage team to storage team. What's really surprising to me is that I'm being asked the question at all. You would think that everyone who runs a storage organization would know exactly what that number is. Some people simply look at their budget and say "here you go, this is what it costs". But can you break it down? Do you know where all of that money is going, and why? I think that's really what people are asking. They know how much they are spending, but they want to know why and how they can save money. Certainly in these economic times storage managers are asked to do more with less while the data continues to grow. So that leaves them asking, how? How do I manage to address this growing pile of data with fewer people, less CAPEX budget, and more demands from the business around things like disaster recovery?

So how do you address the question? How do you do more with less? A lot of storage managers are looking at the cost per GB of their disks and asking, can I get this number down? I think that they can, but it may mean doing some things in different ways than they have in the past. Specifically, here are some things to look at.

Tiered Storage

Yup, I'm recycling that idea again. Getting data off expensive spinning disk and onto cheaper disk saves money; I think that's been well established, and taking another look at how you are classifying your data is a worthwhile endeavor at this point in time. Why? Because things have changed in the last year or two, and those changes might have an impact on your data classification policies, so a review might be in order. For example, a few years ago when I was classifying data, I used SATA disk pretty much just for dev/test and archive data. But things have changed, and now there's technology out there that will allow you to use SATA disk for some of your production workload. Arrays like IBM's new XIV will even allow you to use SATA drives for all but your most demanding workloads. So another look at your tiering policies, and at the SATA technology that's available today, is probably a good use of your time if you're looking to save some money.

Cost of Managing Your Storage

What does it really cost to manage your storage on a per GB basis? This is really the age old question of "how many TB of storage can a single storage admin administer?" that we have been asking for a long time. The answer to this question is critical since you probably aren't getting a whole lot more headcount right now, and you might even be asked to give some up. So how do you manage more disk space with the same or fewer people? First, you have to keep in mind all of the things that go into managing a TB of space. There's a lot more to it than just provisioning a TB to an application and then walking away, right? Here are a few examples of the kinds of things that go into managing a TB of space based on my experience:

  1. Provisioning – This one is obvious, right? But you would be surprised how many people have immature processes and procedures around disk provisioning, and how many still manage their disks with spreadsheets and command-line scripts, making the process time consuming and error prone.
  2. Backup/recovery – You have to make sure that your data is protected, and that you can get it back should the need arise. This can be a time-consuming effort, and one place where you can look for efficiencies that will save you money. It's also a place that people sometimes forget to account for when they are buying more disks. Don't forget that as you add disk capacity, you also have to add backup/restore capacity; that means more tape, backup disk, etc., but it also means that you have to account for the increased load on your backup admins as well.
  3. Disaster recovery – All of the same things I talked about above with backup/recovery also apply to DR.
  4. Data migration – Sooner or later you're going to have to move this data around. Whether it's because the lease is up on an array or because you need to re-tier the data doesn't matter; what matters is that this can be a costly process in terms of people time, and sooner or later you're going to have to do it.
  5. Performance management – At some point you always get that call "hey, our database is slow and we've looked at everything else and haven't found the problem, can you look and see if it's the disks?" Unless you have some very mature performance management processes in place, this tends to turn into a huge people time sink.
  6. Capacity management – We all know that our data is growing, that's a given, so that means that we need to spend some time planning how we are going to address that growth. When are we going to have to make those new disk purchases, when will we have to buy a whole new array? What about the switches? Are we going to need to expand that environment when we bring in that new array as well?
  7. Documentation – Yes, that's right, I said it: documentation is an important part of managing your storage, and it can take up quite a bit of a storage admin's time, but it has to be done.

So the question I always ask is, "how mature and efficient are your processes?" Do you have a high degree of automation around all of the above? What use are you making of technology to help you manage the processes above? If you have very mature processes, employ a high degree of automation, and make good use of technology to help you automate as many of those processes as possible, then you probably have done everything you can to drive down the cost of managing your storage. But now is a good time to take a look and see if you can improve any of those areas. For example, does my disk vendor really provide tools to make managing my disk arrays easier? Not just from a provisioning standpoint, but from the standpoint of all of the above. If not, maybe it's time to consider looking at another vendor, one that has better tools.

Let me leave you with a final thought in this area based on my experience. What I found when I was managing storage was that the cost of managing a TB of disk could easily meet or exceed the cost of buying that disk over the 3-4 year life of that disk. So, a myopic focus on who has the cheapest disks on a per GB basis may not make much sense. Perhaps what we should focus on is how much it costs to manage a TB of a particular vendor's disk. In other words, the 3-4 year TCO for any storage acquisition needs to include the cost of management, not just the per GB cost of the space.
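The point above can be put in rough numbers. Here's a minimal sketch; all of the dollar figures are illustrative assumptions (not vendor quotes), but the shape of the comparison is the one the text describes: the cheapest disk per TB is not necessarily the cheapest disk to own.

```python
# Hypothetical 4-year TCO comparison: acquisition cost vs. management cost.
# All dollar figures are illustrative assumptions, not real vendor pricing.

def four_year_tco(capacity_tb, price_per_tb, mgmt_cost_per_tb_per_year, years=4):
    """TCO = purchase price + ongoing cost of managing the space."""
    acquisition = capacity_tb * price_per_tb
    management = capacity_tb * mgmt_cost_per_tb_per_year * years
    return acquisition, management, acquisition + management

# Vendor A: cheaper disk, weaker management tooling (assumed numbers)
acq_a, mgmt_a, tco_a = four_year_tco(100, price_per_tb=3000,
                                     mgmt_cost_per_tb_per_year=1000)

# Vendor B: pricier disk, better automation (assumed numbers)
acq_b, mgmt_b, tco_b = four_year_tco(100, price_per_tb=4000,
                                     mgmt_cost_per_tb_per_year=500)

print(f"Vendor A: acquisition ${acq_a:,}, management ${mgmt_a:,}, TCO ${tco_a:,}")
print(f"Vendor B: acquisition ${acq_b:,}, management ${mgmt_b:,}, TCO ${tco_b:,}")
```

With these assumed numbers the "cheaper" array costs $700,000 over four years while the "expensive" one costs $600,000, because the management cost meets or exceeds the purchase price, exactly the trap a per-GB-only comparison hides.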

SSD vs. Wide Striping

So, what's this got to do with the topic at hand? Well, I think that a lot of the argument around this is really an argument around the cost of managing disks. Both technologies have their place, both can help you address certain performance issues, and both can help you save money. The difference is that SSDs only help in a very small percentage of cases, whereas wide striping can help you in the vast majority of cases. What's more, wide striping can help you address those management costs and drive down that 3-4 year TCO I keep talking about, whereas SSDs really don't help there at all; in a lot of cases, I believe the 3-4 year TCO goes way up with SSDs. That's not to say that for those cases where you need the performance, using SSDs in a targeted way isn't a good idea. But just keep in mind what I said about the cost of managing a TB of storage perhaps exceeding the cost of purchasing it in the first place. In the end, I think we need both, but the bulk of your storage should be on a wide-striped array, where your storage admins don't have to spend a lot of time trying to figure out exactly where to place the data so that new LUNs will perform and the added load doesn't negatively impact existing applications.

My vision

So, ideally, I think that the storage team should have the vast majority of their data on an array that does wide striping, manage that space through some kind of virtualization engine, and purchase SSDs very tactically to address specific performance issues, again managing everything through the virtualization engine. That allows re-tiering of the data should it be necessary, and makes migrations, when they are needed, quicker, easier, and less impactful to the business. You also need to deploy software to help you with performance management as well as capacity management, and something to help automate the documentation process. This means that there is very likely not a single vendor that can provide all of the technology; rather, you will need to put together a "best of breed" approach to your storage environment. Here's an example of one set of technologies that I think can help get you to where you want to be.

IBM XIV storage – The XIV provides wide-striped storage on SATA disks and makes it all very easy to manage. This is where I would put the bulk of my data, since my admins wouldn't have to sit there and try to figure out where to place the data, etc.

EMC CLARiiON – Put some flash drives in a CLARiiON and I think you have a great platform for those few LUNs that require the kind of performance SSDs offer, if you have that kind of need.

Datacore SANSymphony - A software approach to SAN virtualization which allows you to move data around to different arrays without the users being aware that it's going on. This is the way that you address things like re-tiering of your data as well.

Akorri – This is a software tool that helps you manage your entire storage infrastructure, find the bottlenecks, and generally free up storage admin time.

Quantum DXi 7500 – This is a deduplicating VTL that will help you reduce the amount of time that your backup admins spend troubleshooting failed backups.

Aptare Storage Console – This is software that will help you manage your backups. It will report on things like what backups failed, which of those were on SOX systems, etc.


The above are just a few examples of what's available out there to help you create a more mature, automated, easier-to-manage storage environment. They certainly aren't the only ones, just some good examples of what's available and why you should be looking at that kind of technology. In the end, whatever you choose, making sure that you are truly addressing the 3-4 year TCO of your environment is the key to getting those management costs under control and allowing your storage/backup admins to manage larger and larger environments.




Monday, February 23, 2009

Storage Shangri-La

Cloud Computing

I don't know about you, but I've spent a lot of time reading about "Cloud Computing" lately. A lot of space has been devoted to the topic in the blogosphere, that's for sure. Some people think it's the "next big thing", others say not on your life. But don't worry; I'm not going to bore you with another prediction. Personally, I think that the truth lies somewhere in the middle. By the end of this year, or the beginning of next I think we will see some people adopting "Cloud Computing", mostly in the SMB space. The enterprise customers will pretty much stick to their data centers, with a few exceptions for certain applications.

Ok, so now that I've bored you with a prediction after I said I wouldn't, here's why I did it. If I'm right, and enterprise customers do stick to their internal data centers, it raises the question: what are those data centers going to look like? How are these companies going to address the simultaneous issues that confront them: an uncertain economy and increasing demands on IT, and on storage in particular? For now, I'll stick with the storage team, since I think they have a particularly difficult task. Data volumes continue to grow, no matter what is happening with the economy. Maybe those volumes won't grow quite as fast as they did when things were booming, but they will continue to grow. This means that the issues of increased capacity will continue to challenge the storage team. What will be new is that they will have to address those challenges with fewer dollars. As I indicated in my last blog, entitled Storage Efficiency, that means an ever more myopic focus on "storage efficiency" for most companies. But as I said, this can also present an opportunity for forward-thinking leaders to implement changes in IT, and in storage in particular, that will provide not only long-term cost savings but also better service to the business.

Everyone into the pool!

So, what is my vision for the storage team that will do these amazing things? It's as simple as applying to storage what seems to be working for the server team: virtualization. Actually, it's a bit more than that. It's creating a pool of storage which can be managed as a single entity and delivered in different ways (NAS, SAN, FCoE, etc.), easily backed up, and protected with a proper DR solution. I realize that some of you reading this are saying "he's talking about storage Shangri-La"! Well, maybe I am, but I think it's something that today's technology might just allow me to do. It won't necessarily come from a single vendor, but I think it's doable. It does mean some changes to the way that organizations purchase storage, and to the kind of storage that they purchase. It also means that some money will need to be expended in order to create that Shangri-La. It's because of those expenditures that it's going to take forward-looking leadership. The fearful and the visionless need not apply.

If you are going to use heterogeneous storage in your storage pool (and I think you should at least be able to), then you need some way to do things like SNAPs, replication, and DR which is not vendor dependent. Personally, I think that the virtualization engine itself should provide those features, but you could use a third-party tool to perform those functions as well. The key point here is that you separate these functions from the storage array so that you aren't dependent on what's available from a single storage vendor, or a single storage vendor's array, for this functionality. That is, unless you pick a storage vendor who provides virtualization in the array itself as your virtualization engine. For example, the NetApp V-Series of virtualization engines, or the Hitachi USP or USP-VM, provide you with the ability to use the vendor's tools for replication, etc. with many other vendors' storage. The key is to find a virtualization engine which allows you to perform storage moves in a manner completely transparent to the hosts that consume that storage. This is important not only for reducing the impact of changes in your storage vendor, for example, but also when you want to re-tier your data. We often take data for certain applications which we consider borderline and put it on tier-1 storage just to be safe. Now we can put that data on tier-2 storage (SATA), and if the performance turns out not to be what we need, we can move it to tier-1 without any disruption to the application, saving the organization CAPEX costs as well as OPEX costs.

This also means that there would be a change over time in the kind of storage I would buy. I would prefer to buy storage arrays that have little in the way of the kinds of features I describe above: basically, just something that lets me configure different protection levels and presents the storage out of more than one port so that I can provide some high availability. All this should save on the per GB cost of the storage, and since I can use any vendor I want, my ability to negotiate price is enhanced. Again, more savings on CAPEX costs.

Storage Delivery

Once we have this pool of disk available, we need to make sure that we can deliver this storage in different ways. We need to make sure that the storage network is flexible enough to deliver the storage using iSCSI, Fibre Channel, and NAS (NFS or CIFS). Again, if you can get this from a single source, like NetApp, that's one way to go. However, if you go a different route with your virtualization engine, then you need to make sure that your NAS engines are gateways, not appliances, so that you can deliver any vendor's storage out of the pool. The same is true for any other storage consumers besides your application hosts. For example, if you want to do backup to disk using something like a Data Domain box, then, again, make sure that you are using their gateway so that you can utilize any kind of disk from the storage pool with your Data Domain solution.

Backup and DR

Finally, backups and DR need to be addressed. As I mentioned above, these services need to be available in the pool regardless of the mix of storage vendors used. But at some point you may need to take things to tape, and that's OK, as long as the tape management system you use will play well with the virtual disk pool you have created. More importantly, I recommend that daily backups be done to disk. The cost is within reason when you consider some of the deduplicating devices available today. This relegates tape to just an offsite (DR) role. You can even replicate some of these deduplicating devices, potentially eliminating tape entirely and saving yourself a lot of OPEX costs.


So, I really believe that now is the time, under the guise of cost savings, to introduce things like storage virtualization, backup to disk, SNAPs, etc. If you have forward-thinking leadership, they will recognize that the payback period for those costs is reasonably short, and when it's done, the storage team's ability to manage more storage, provision storage more quickly, and reduce the cost of a managed GB of storage will be greatly enhanced going into the future. It will also position the storage team to handle the onslaught of storage growth that we are going to see once the economy turns around.


I just want to mention that this blog is now being syndicated on GestaltIT, and I want to say what an honor it is for me to be associated with GestaltIT. Stephen and all the other authors are much better known and much smarter folks, so I'm hoping to be able to provide some content that doesn't embarrass. Take a look if you get a chance; there's some great stuff there.


Saturday, January 31, 2009

Storage Efficiency

So, I've been sitting here thinking that with the current economic distress everyone is looking to save money. In the storage business, this means an almost myopic focus on something called "storage efficiency". Everyone wants to get the most "bang for the buck" that they can right now, and they really don't want to talk about much else, and that's really too bad.

I say it's too bad, because for those few who are bigger thinkers, people who are willing to go out on a limb and take a more strategic view of things, right now is a great time to make some changes that will, at the end of all this, leave their business with a stronger, better, more sustainable storage infrastructure. Or better yet, should those at the top of the IT pyramid actually have magically found some stones, they could create an entire IT organization that's better, stronger, and faster than it is now and one that even operates more efficiently than the one they have today.

Unfortunately, what I'm seeing is fear, and the result is that people are pulling back. They are dragging out or postponing projects, turning the screws on their vendors to reduce costs, and some are laying off people or even going so far as to outsource. I won't even go into why I think that anyone who outsources today is both a fool and a traitor to this country; that's for another time/post.

To those few who have the courage to build instead of tear down, and who recognize opportunity in the current economic climate, I say bravo. To the rest, I give the Bronx Cheer.

But back to the topic at hand. What I find interesting is this myopic focus on "Storage Efficiency" on the part of the consumers of storage, and the resulting response from the vendors of storage. All of the big storage vendors have some kind of "Storage Efficiency" marketing strategy going. The blogosphere is full of arguments about how vendor A's storage is very inefficient, and of vendor A's supporters defending its storage efficiency. In the end, I don't think that any vendor's storage hardware is inherently more efficient, or less efficient, than any other vendor's. It's all about how you lay out your applications on that array, how well you manage the space, and how well you tier the data. In other words, in the end, it's about people: in this case, the Storage Architects and Storage Admins who do the grunt work of managing a company's storage infrastructure on a day-to-day basis. If they are good and are allowed to obtain the tools that they need, you get efficient storage utilization. Otherwise, you end up with very low utilization rates. My fear, however, is that with all of this focus on "Storage Efficiency" from a hardware perspective, those folks in the trenches won't be allowed to get what they need in order to truly make a company's storage more efficient than it is today. Management will fall prey to all that marketing hype and think that if they just switch from vendor A to vendor B, all of their problems will be solved. Oh, and to pay for that switch, and since it's going to be soooo much easier to manage vendor B's storage, let's lay off a couple of those Storage Admins we aren't going to need anymore. Again, for those folks I have no sympathy, and they deserve the disaster that's waiting for them just around the corner.

In the end, I think that given the opportunity to do some storage virtualization in conjunction with server virtualization and network virtualization, storage could become very efficient. When you do all three together, you end up with a very efficient data center, as well as a very green data center. Yes, that's right, I said green data center. I fully realize that green is soooo 2008 and no one wants to talk about it anymore (back to that myopic focus on "Storage Efficiency"). But I think that if you look at the big picture, the more efficient your storage/servers/networks are, the greener they are. That means real dollar savings, folks, so let's not stop talking about "green" just yet.

So, in my opinion, for those that are willing to invest in the future, I say build a "virtual datacenter". Some call it "Unified Computing", some call it "Cloud Computing", and some have other names for it. But as I see it, it's just creating an environment in which business users can run the applications they need in order to operate the business. I think that the "virtual datacenter" would allow for containerized applications. This means that the user's applications, including the code and the data, would be in some kind of portable container that could be easily moved, expanded, shrunk, spun up, or spun down, depending on the needs of the business. Add to this a way for business users to deploy their own applications into the environment, and you completely change the relationship between IT and the business.

Yes, I know this concept isn't for the faint of heart, especially in today's economic climate. But in the end I truly believe what you would have is a much more efficient, flexible, responsive IT organization which has a much better relationship with the business. Heck you might even end up with IT being viewed by the business as something other than just a cost center which needs to be controlled! Yeah, I know, fat chance, but I can dream, can't I?


Wednesday, January 28, 2009

Wide striping is a two edged sword

I have spent a lot of time lately talking with some of my coworkers, friends, etc. on the topic of wide striping. This topic keeps coming up since there are now a number of vendors selling storage arrays with SATA drives that claim to have "the same performance as fiber channel". Some of the sales folks I work with keep asking how we are supposed to dissuade people from that idea, or whether it's true. One of the prime offenders in this regard is IBM with their new XIV array. The XIV uses wide striping and SATA drives, and IBM claims "enterprise performance" at a very low price point. But they aren't the only ones; you have Dell telling people the same thing about their EqualLogic line of storage, and there are others too.

What I usually tell them is that the statement is true; you can get fiber channel performance by striping across a large number of SATA drives. The only problem is that you have to give up a lot of usable disk space in order to keep it that way. A quick example usually illustrates the point quite well. Let's say that, for the sake of easy math, the average application in your environment uses about 5TB of space (I'm sure some use a lot more and some a lot less, but we are talking average here). Let's also say that you need about 2,000 IOPS per application in order to maintain the 20ms max response time you need to meet the SLAs you have with your customers. Finally, let's also assume that your SATA array has about 90TB of usable space from 180 750GB SATA drives, and that you can get about 20,000 IOPS in total from the array. So, let's do some basic math here. At 2,000 IOPS apiece, those 20,000 IOPS mean you can run about 10 applications, which at 5TB apiece will take up about 50TB. So, your array will perform well right up until you cross the half-full barrier. After that, performance will slowly decline as you add more applications/data to the array.
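The back-of-the-envelope math above can be sketched in a few lines. The numbers mirror the example in the text (90TB usable, roughly 20,000 total IOPS, 5TB and 2,000 IOPS per application); the point is that the array runs out of IOPS long before it runs out of space.

```python
# How much of a wide-striped SATA array can you fill before IOPS,
# not capacity, becomes the limit? Numbers from the example above.

usable_tb = 90      # usable space on the array
array_iops = 20_000 # total IOPS the array can deliver
app_tb = 5          # space per average application
app_iops = 2_000    # IOPS per application to hold the 20ms SLA

apps_by_capacity = usable_tb // app_tb   # 18 apps if space were the only limit
apps_by_iops = array_iops // app_iops    # 10 apps before the IOPS run out

apps = min(apps_by_capacity, apps_by_iops)
used_tb = apps * app_tb
print(f"IOPS-limited to {apps} applications, using {used_tb}TB "
      f"of {usable_tb}TB ({100 * used_tb // usable_tb}% of capacity)")
```

In other words, the IOPS budget caps you at 10 applications and roughly 50 of the 90 usable TB, which is exactly the "half-full barrier" described above.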

So, what does this mean? It means that the cost per GB of these arrays is really about twice what the vendors would have you believe. OK, but considering how much cheaper SATA drives are than 15K fiber channel drives, that's still OK, right? Sure, as long as you are willing to run your XIV at half capacity. In today's economic climate, that's going to be tough to do. I can just imagine the conversation between your typical CIO and his Storage Manager.

Storage Manager – "I need to buy some more disk space."

CIO – "What are you talking about? You're only at 50% used in these capacity reports you send me, and we didn't budget for a storage expansion in the first year after purchase!"

Storage Manager – "Well, you know all that money we are saving by using SATA drives? Well, it means I can't fill up the array; I have to add space once I reach 50% or performance will suffer."

CIO – "So let performance suffer! We don't have budget for more disk this year. Why didn't you tell me this when you came to me with that 'great idea' of replacing our 'enterprise' arrays with a XIV?!?!"

Storage Manager – "Ahhh … ummmmm … gee, I didn't know, IBM didn't tell me! But we had some performance issues early on, and figured this out. Do you really want to tell the SAP folks that their response time is going to double over the next year?"

CIO – "WHAT! We can't let that happen, we have an SLA with the SAP folks and my bonus is tied to keeping our SLAs! How could you let something like this happen! Maybe I should use the money for your raise to pay for the disks!"

Storage Manager – "Um, well, actually, we need to buy an entire new XIV, the one we have is already full."

OK, enough fun, you get the idea … make sure you understand what wide striping really buys you, and if you decide that the TCO and ROI make sense, make sure you communicate that up the management tree in the clearest possible terms. Look at the applications that you currently run and see how much space they require, but don't base the sizing of your EqualLogic (see, I'm not just bashing the XIV) on your space requirements alone. Base it more on your IOPS requirements. With SATA drives, chances are pretty good that if you size for IOPS, you'll have more than enough space.
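Sizing by IOPS rather than by space can be sketched like this. The per-drive IOPS figure and drive size here are assumptions for illustration (roughly 75 IOPS per 750GB SATA drive, in line with the earlier 180-drive/20,000-IOPS example); plug in your own measured numbers.

```python
# Sketch of sizing a wide-striped SATA array for IOPS instead of capacity.
# iops_per_drive and tb_per_drive are illustrative assumptions.
import math

def drives_needed(total_app_iops, total_app_tb,
                  iops_per_drive=75, tb_per_drive=0.75):
    """Return the drive count that satisfies BOTH the IOPS and capacity needs."""
    by_iops = math.ceil(total_app_iops / iops_per_drive)
    by_capacity = math.ceil(total_app_tb / tb_per_drive)
    return max(by_iops, by_capacity)

# Ten applications like the earlier example: 20,000 IOPS and 50TB total
n = drives_needed(20_000, 50)
print(f"{n} drives -> {n * 0.75:.0f}TB raw for a 50TB, 20,000 IOPS workload")
```

With these assumed numbers, the IOPS requirement, not the 50TB of data, drives the purchase, and you end up with roughly four times the raw space you strictly need, which is the "more than enough space" point above.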


Tuesday, January 27, 2009

2009 Outlook

Like everyone else I'm looking at the business climate in 2009, and it makes me nervous. I listen to the news reports of more layoffs and cutbacks that come almost nightly, and wonder what that means to me and to the storage business. I have coworkers who suggest that storage is recession-proof. That no matter what the economy is doing, that data will continue to grow, and thus companies will have to continue to grow their storage infrastructure. I'm not sure that I buy it, but that just might be my nerves talking. Perhaps it's just that I tend to believe that the truth typically lies somewhere in the middle. So, I thought I'd take a minute and describe what I think is going to happen this year. No guarantees, I can't predict the future, but a little speculation is always fun.

Storage will continue to grow just not as fast
Yup, I do believe that the amount of data that companies keep on hand will continue to grow, just not at the same rate it has in the past. Depending on whom you want to believe, storage has been growing at a 40-60% CAGR, or even more. I'm guessing that in 2009 we are not going to see that kind of growth. With the reduced sales volume that most companies will see in the recession, there's got to be an attendant reduction in the amount of data that gets created. How much is the $64,000 question. I suspect that the growth rate might be cut in half, or even more. Add to this the fact that budgets are getting slashed and storage managers are going to be looking to extend the useful life of the storage they have on hand, and it makes me think that this year the average growth rate for storage is going to sit somewhere between 5-10%. So, overall, I believe that the volume of raw disk sales is going to drop dramatically. I'm probably not the only one looking at things that way; look at the major storage vendors, they are all cutting forecasts, laying off people, and generally cutting back.

It's not all doom and gloom
I think that in this situation, however, there is some opportunity. Storage providers that can help the storage managers at their clients deal with budget reductions and find ways to do more with less will get quite a bit of business. I also think that companies, like the one I work for, that can package best of breed hardware and software into very cost effective solutions will also do well. Vendor loyalty, however, is going to go out the window, and companies that were once locked into a single vendor will look at other vendors if they perceive those vendors as more cost effective. Again, this means opportunity for vendors to get into companies that they had previously been locked out of. I predict that we are going to see some of the major storage users leave the "big four" (EMC, NetApp, Hitachi, IBM) and move to storage from smaller players in an effort to reduce both CAPEX and OPEX costs.

The year of storage efficiency and virtualization
Finally, this year it will all be about efficiency and virtualization. I'm betting that CIOs will actually accelerate any server virtualization projects that they currently have in the works in order to get those reduced costs as quickly as they can get them. However, what they will find is that unless they are quite careful, their server virtualization project might result in increased spending on storage, backup/recovery, and DR that they hadn't planned for. This can be overcome to some extent by partnering with storage suppliers that understand the issues involved when dealing in a virtualized world. I also predict that sales of things like data deduplication, and thin provisioning are going to accelerate this year. Again, all of this is in an effort to "do more with less" on the part of storage consumers.
So, overall, I'm cautiously optimistic that for those who can show their customers how to "do more with less," this year will be a challenge, but in the end they will survive. For those who continue to try to do business as usual, well, they may find this year to be much more difficult.


Tuesday, January 6, 2009

IBM XIV Could Be Hazardous to Your Career

So, I haven't blogged in a while. I guess I should make all of the usual excuses about being busy (which is true), etc. But the fact of the matter is that I really haven't had a whole heck of a lot that I thought would be of interest, certainly there wasn't a lot that interested me!

But now I have something that really gets my juices flowing: the new IBM XIV. I don't know if you've heard about this wonderful new storage platform from the folks at IBM, but I'm starting to bump into a lot of folks that are either looking seriously at one, or have one, or more, on the floor now. It's got some great pluses:

  • It's dirt cheap. On top of that, I heard that IBM is willing to do whatever it takes on price to get you to buy one of these boxes, to the point that they are practically giving them away. And, as someone I know and love once said, "what part of free isn't free?"
  • Fibre Channel performance from a SATA box. I guess that's one of the ways that they are keeping the price so low.
  • Tier 1 performance and reliability at a significantly lower price point.

So, that's the deal, but like with everything in this world, there's no free lunch. Yes, that's right, I hate to break it to you folks, but you really can't get something for nothing. The question to ask yourself is, is the XIV really too good to be true? The answer is yes, it is.

But the title of this blog is pretty harsh, don't you think? Well, I think that once you understand that the real price you are paying for the "almost free" XIV could be your career, or at least your job, then you might start to understand where I'm coming from. How can that be? Well, I think that in most shops, if you are the person who brought in a storage array that eventually causes a multi-day outage in your most critical systems, your job is going to be in jeopardy. And that's what could happen to you if you buy into all of the above from IBM regarding the XIV.

What are you talking about Joerg?!? IBM says that the XIV is "self healing," and that it can rebuild the lost data on a failed drive in 30 minutes or less. So how can what you said be true? Well folks, here's the dirty little secret that IBM doesn't want you to know about the XIV. Due to its architecture, if you ever lose two drives anywhere in the entire box (not a shelf, not a RAID group, the whole box, all 180 drives) within 30 minutes of each other, you lose all of the data on the entire array. Yup, that's right, all your Tier 1 applications are now down, and you will be reloading them from tape. This is a process that could take you quite some time, I'm betting days if not weeks, to complete. That's right, SAP down for a week, Exchange down for 3 days, etc. Again, if you were the one who brought that box in, do you really think that after something like that your career at this company wouldn't be limited?

So, IBM will tell you that the likelihood of that happening is very small, almost infinitesimal. And they are right, but it's not zero, so you are the one taking on that risk. Here's another thing to keep in mind. Studies done at large data centers have shown that disk drives don't fail in a completely random way. They actually fail in clusters, so the chances of a second drive failing within the 30-minute window after that first drive failed are actually a lot higher than IBM would like you to believe. But hey, let's keep in mind that we play the risk game all the time with RAID-protected arrays, right? The big difference here is that the scope of the data loss is so much greater. If I have a double failure in a 4+1 RAID-5 group, I'm going to lose some LUNs, and I'm going to have to reload that data from tape. However, it's not the entire array! So I've had a much smaller impact across my Tier 1 applications, and the recovery from that should be much quicker. With the XIV, all my Tier 1 applications are down, and they all have to be reloaded from tape.
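Here's a back-of-the-envelope way to think about that risk. All the numbers below (the annual failure rate, the rebuild window, and especially the clustering multiplier) are my own assumptions for illustration, not IBM specs:

```python
# Rough sketch of the "second drive dies during the rebuild window" risk.
# Assumed numbers for illustration only: 180 drives, 3% annual failure
# rate, a 30-minute (0.5 hour) rebuild window.

def p_second_failure_in_window(n_drives, annual_fail_rate, window_hours,
                               cluster_factor=1.0):
    """Probability that at least one of the remaining drives fails
    within the rebuild window after a first drive has failed.
    cluster_factor > 1 crudely models correlated (clustered) failures."""
    hours_per_year = 24 * 365
    p_hour = annual_fail_rate / hours_per_year * cluster_factor
    p_window = p_hour * window_hours
    survivors = n_drives - 1
    return 1 - (1 - p_window) ** survivors

independent = p_second_failure_in_window(180, 0.03, 0.5)
clustered = p_second_failure_in_window(180, 0.03, 0.5, cluster_factor=10)

print(independent)  # small, but not zero
print(clustered)    # an order of magnitude worse if failures cluster
```

With independent failures the per-incident number is tiny, which is the story IBM tells. But crank up the clustering factor to reflect correlated failures, multiply by the number of first-drive failures you'll see over the array's life, and remember that the downside is the whole array, not one RAID group, and the expected pain starts to look very different.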

Just so you don't think that I'm entirely negative about the XIV, let me say that what I really object to here is the use of a XIV with Tier 1 applications, or even Tier 2 applications. If you want to use one for Tier 3 applications (i.e. archive data), I think that makes a lot of sense. Having your archive down for a week or two won't have much in the way of a negative impact on your business, unlike having your Tier 1 or Tier 2 applications down. The one exception to that I can think of is VTL. I would never use a XIV as the disks behind a VTL. Can you imagine what would happen if you lost all of the data in your VTL? Let's hope that you have second copies of the data!

Finally, one of the responses from IBM to all of this is "just replicate the XIV if you're that worried." They're right, but that doubles the cost of the storage, right?