Being an old server guy, a common rule I live by is: “It’s not the bandwidth, it’s the latency that gets you.” How appropriate for storage today. Applications put a tremendous demand on accessing data when and where you need it, and neither users nor customers are willing to wait long for their data. With Web 2.0 services spanning a multitude of needs, response time is critical. For certain real-time applications, fast interrupt response and minimal latency are a must: a real-time data feed arrives in an instant, and you have to be ready to respond to the telemetry as that satellite passes over the receiving station.
Data needs to be stored and retrieved for devices that span from the small mp3 player to the large cloud you provide and/or utilize. When it comes to storing your data, performance fundamentally comes down to how you manage your reads and writes. File and block serving needs to be tuned and staged so that no stage of the pipeline from disk to client is left waiting. Today that requires a lot of intimate knowledge of processor caching, storage controllers, I/O software stacks and much more. And that knowledge is only part of the solution, because the whole application topology is further obscured by unknown bottlenecks, resource hogs and just plain alchemy.
We try to turn those unknowns into knowns with expensive analyzers, network sniffers and debug tools. But what could you do if a visual, dynamic analysis tool were available to you? The “you” being the novice with limited knowledge as well as the “you” with all the intimate knowledge of hardware, kernel, drivers, application software, cache coherency, round-robin scheduling, relational databases, etc. The investor world has Cramer’s Mad Money; I’d like to introduce you to Gregg’s Mad Storage. Brendan Gregg has a great post explaining how a hybrid storage pool of solid state disks and cheap SATA disks can significantly outperform traditional storage. It’s no longer just RAM and disk; it’s RAM, SSDs, cheap disks and the ZFS file system. The heat maps of storage latency from Analytics are just so visual. Using Analytics (built on DTrace) in the Unified Storage Server 7000 Appliance is intuitive and straightforward. No clumsy log files to comb through. No debug points to capture state. Just a few mouse clicks and loads of visual histograms of data for your eyes. Brendan does an awesome job of breaking down fundamental performance problems using the analytics built into this storage appliance.
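Brendan’s heat maps make the point visually; a quick back-of-envelope calculation makes it numerically. A sketch of why an SSD tier between RAM and disk wins on average latency, using illustrative hit rates and latencies (my assumptions, not measured figures from the appliance):

```python
# Back-of-envelope model of a storage pool's average read latency.
# Tier names, hit rates and latencies below are illustrative assumptions.

def avg_read_latency_us(tiers):
    """Expected read latency: hit-rate-weighted sum over storage tiers.

    tiers: list of (name, hit_rate, latency_us); hit rates must sum to 1.
    """
    assert abs(sum(rate for _, rate, _ in tiers) - 1.0) < 1e-9
    return sum(rate * lat for _, rate, lat in tiers)

# Disk-only pool: every read that misses DRAM pays a mechanical seek (~7 ms).
disk_only = [("DRAM", 0.30, 0.1),
             ("7200 rpm disk", 0.70, 7000.0)]

# Hybrid pool: an SSD read cache absorbs most of the DRAM misses,
# so only a small fraction of reads ever touch the slow disks.
hybrid = [("DRAM", 0.30, 0.1),
          ("SSD cache", 0.60, 100.0),
          ("7200 rpm disk", 0.10, 7000.0)]

print(f"disk-only: {avg_read_latency_us(disk_only):.0f} us average")
print(f"hybrid:    {avg_read_latency_us(hybrid):.0f} us average")
```

Even with most reads still missing DRAM, shifting them to SSD instead of spinning disk cuts the average latency by a large multiple, which is exactly the effect the latency heat maps show.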
There is even a Unified Storage Server 7000 emulator available for VMware. Check it out for yourself and see what commodity hardware, an open-source operating system, innovation and differentiation can do for your storage needs. You may also want to bookmark Brendan’s blog, as his posts on performance for hybrid storage appliances are just as passionate as the technology. Stay tuned for more on solid state disk technology, where we’d rather lead than follow.