Friday, August 17, 2012
ILM/HSM part 2, Return of ILM/HSM
Folks, sorry it's been so long since my last posting! Time flies when you're having fun, and I've been having a lot of fun over the last year. What have I been doing, you ask? Among other things, I've been trying to help our customers sort through a changing storage environment, and I've learned a few things in the process. What's all this change I'm referring to? Well, Flash/SSD has really started to take off, and that has a lot of implications for the storage team. So I have spent a lot of time helping our customers sort through the different options, and I discovered some things along the way that I would like to share with you.

But first, a quick review of what's up with storage and Flash/SSD. As I indicated above, Flash/SSD is really beginning to make inroads into the data center, and it comes in basically three different flavors.

First, Flash/SSD can be used in something that looks like a traditional storage array. There are a couple of variations of this type of array: some use SSD drives in place of traditional disk drives, and some use Flash memory directly. Typically, the arrays that use SSDs provide many of the same features as other traditional storage arrays, such as snapshots, replication, etc. Arrays based on Flash memory, on the other hand, typically provide better performance than arrays that use SSD drives, mainly because they avoid all of the overhead involved with the SCSI protocol. However, these arrays often don't have all of the features we need in the data center, such as snaps and replication. In both cases, from a storage management perspective you would manage them much like any other storage array in your data center.

Second, there are traditional storage arrays with Flash/SSD added to them. Again, these arrays come in basically two flavors.
In both cases, an effort is made to use the Flash/SSD only for data that is currently "in use" or "hot," in an effort to keep costs down. With the first flavor, SSD drives hold the "hot" blocks of data, with "cool" blocks stored on traditional disk drives. This requires sophisticated software that monitors how "hot" the data is and moves it appropriately. With the second flavor, Flash is added to the controller and used to extend the cache. This has the advantage that the software is a simple extension of the existing controller software, and, as I mentioned above, the overhead of the SCSI protocol is avoided. The downside is that this only provides a performance boost for the read half of the equation.

Finally, there is the ability to add Flash memory to the servers that run your applications. Once again, there are two flavors here. The first, and simplest, is to use the Flash memory as an extended disk cache. The advantage is that it accelerates I/O to and from any disk arrays you may already own; the downside is that it is often limited in which OSes it supports. The second flavor makes the Flash memory appear to the OS on the server as a disk drive. This gives you very high performance, but it is limited in size. It is also limited in that you can't use features like server clustering, since the data can't be shared among a group of servers.

So what's the lesson learned from all of the above? I think there are a couple. One is that if we are going to use some or all of this technology in the data center, we are really looking at bringing back the old ILM/HSM days. Because of their cost, most data centers aren't going to bring in "Flash/SSD only" arrays to replace all of their traditional storage array capacity. So some way to move data from the expensive storage to the less expensive storage needs to be found if costs are going to be kept under control.
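To make the "hot block" monitoring idea concrete, here's a toy sketch of the kind of temperature tracking a tiering engine might do. Everything here is invented for illustration (the class, the block numbers, the tiny flash capacity); real arrays track heat at much finer granularity, over rolling time windows, and move the data asynchronously:

```python
from collections import Counter

# Toy sketch of "hot"/"cool" block tiering for a hybrid flash/disk array.
# All names, block numbers, and the workload below are hypothetical.

class TieringEngine:
    def __init__(self, flash_capacity):
        self.flash_capacity = flash_capacity  # how many blocks the flash tier holds
        self.access_counts = Counter()        # per-block I/O tally (the "temperature")
        self.flash_tier = set()               # blocks currently promoted to flash

    def record_io(self, block):
        # Called on every read/write to bump the block's temperature.
        self.access_counts[block] += 1

    def rebalance(self):
        # Periodically promote the hottest blocks; everything else stays on
        # (or is demoted back to) spinning disk.
        hottest = {b for b, _ in self.access_counts.most_common(self.flash_capacity)}
        promoted = hottest - self.flash_tier
        demoted = self.flash_tier - hottest
        self.flash_tier = hottest
        return promoted, demoted

engine = TieringEngine(flash_capacity=3)
for block in [1, 1, 1, 2, 2, 3, 4, 4, 4, 4, 5, 2]:
    engine.record_io(block)
promoted, demoted = engine.rebalance()
print(sorted(engine.flash_tier))  # -> [1, 2, 4]
```

Even in this toy form you can see the open question: how often rebalance() runs, and over what window the counts are kept, determines whether the tier can keep up when the data's temperature changes quickly.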
With the second type of array, software to move the data is supplied, but there are questions about how effective that software is, particularly at keeping up with data whose "temperature" changes quickly. The third type of Flash/SSD certainly improves performance, but it increases the number of "storage islands" in your data center unless some kind of ILM/HSM software can be applied.

Where this leaves us is with many of the same issues that ultimately derailed ILM the last time around. The main issue at the time was the classification of the data. Getting the business to classify their data was very difficult, and in the end we often threw up our hands and just moved data based on "last access date." While that works for file-based data, it doesn't work at all for database data, for example.
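For what it's worth, the old "last access date" policy is simple enough to sketch. This is just an illustration, not any vendor's actual HSM engine: it walks a directory tree and flags files not accessed in N days as migration candidates. A real HSM would also leave a stub file behind and recall the data transparently on access.

```python
import os
import time

# Sketch of a "last access date" HSM scan. The 90-day cutoff is an
# arbitrary example; what to do with the candidates (migrate, stub,
# report) is left out entirely.

def find_migration_candidates(root, age_limit_days=90):
    cutoff = time.time() - age_limit_days * 86400
    candidates = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                if os.stat(path).st_atime < cutoff:
                    candidates.append(path)
            except OSError:
                pass  # file vanished or is unreadable; skip it
    return candidates
```

Note that last access time is the only signal here, which is exactly why this falls apart for databases: the tablespace file is "accessed" constantly even when most of the rows inside it are stone cold.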