Managing storage at the corporate level can be a tedious, time-consuming job for information technology specialists. Perhaps the most tedious aspect is storage provisioning, which Margaret Rouse describes as “the process of assigning storage, usually in the form of server disk drive space, in order to optimize the performance of a storage area network (SAN). Traditionally, this has been done by the SAN administrator.” With the advent of hyper-converged storage, software that will do this job for the SAN administrator now exists.
The Three Layers of Converged Storage
Conventional converged storage aims to combine three separate aspects of a data center: compute, network, and storage.
- Compute: the processing work (the actual computation) performed by the data center’s servers.
- Network: Tech Target defines a network as a combination of “the physical (cabling, hub, bridge, switch, router, and so forth), the selection and use of telecommunication protocol… and the establishment of operation policies and procedures.”
- Storage: the location where information is saved.
Converged storage tries to smooth these layers together by “providing dedicated hardware nodes that can perform compute, networking, and storage all on a single layer,” according to network specialist George Crump. Software is the key to the storage component of converged storage. This ends up creating a “shared pool of storage that all the virtual machines in the cluster can access.”
Converged storage offers exceptional speed, allowing rapid deployment of resources. The drawback, however, is that it limits storage flexibility.
Hyper-converged storage is the cutting edge of converged storage. “Hyper-converged architectures take the converged concept to the next level in that they are provided as software and they can run on any vendor’s server hardware,” says Crump. “This will appeal to organizations that have a long lasting relationship with a particular server vendor or to an organization that is looking to drive as much potential cost out of their storage infrastructure.” Hyper-converged storage takes the speed of conventional converged storage and adds flexibility by putting the storage under software control.
One of the biggest benefits of hyper-converged storage is that it gives SAN administrators unparalleled control and quick access to storage provisioning. Hyper-converged storage works in a virtualized server environment, where storage is decoupled from any particular piece of physical hardware.
Another benefit of hyper-converged storage is the overall view it gives IT managers. Margaret Rouse says, “In addition to providing administrators with single pane of glass management capabilities, hyper-converged storage nodes can be connected and scale out horizontally. This allows administrators to create a distributed storage infrastructure in which direct-attached storage (DAS) components from each physical server are combined to create a logical pool of disk capacity.”
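The scale-out pooling Rouse describes can be sketched in a few lines. The sketch below is illustrative only: the class and node names are hypothetical, and real hyper-converged software makes far more sophisticated placement decisions (replication, locality, rebalancing). It shows the core idea, though: each node’s direct-attached capacity joins one logical pool, and a provisioned volume is carved out of whichever nodes have free space.

```python
# Hypothetical sketch of pooling direct-attached storage (DAS) from
# several nodes into one logical capacity pool. Names and sizes are
# made up for illustration; this is not any vendor's actual API.

class StoragePool:
    def __init__(self):
        self.nodes = {}  # node name -> free capacity (in GB)

    def add_node(self, name: str, capacity_gb: int) -> None:
        """Scale out horizontally: each new node grows the pool."""
        self.nodes[name] = capacity_gb

    def total_free(self) -> int:
        """The 'single pane of glass' view: one logical capacity figure."""
        return sum(self.nodes.values())

    def provision(self, size_gb: int) -> dict:
        """Carve a logical volume from the pool, drawing greedily from
        whichever nodes have free space. Returns {node: GB taken}."""
        if size_gb > self.total_free():
            raise ValueError("not enough capacity in the pool")
        placement, remaining = {}, size_gb
        for name, free in self.nodes.items():
            take = min(free, remaining)
            if take:
                placement[name] = take
                self.nodes[name] -= take
                remaining -= take
            if remaining == 0:
                break
        return placement
```

With two 100 GB nodes, provisioning a 150 GB volume spans both nodes transparently; the administrator sees only the logical pool, not the individual disks.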
Overall, hyper-converged storage can be automated, giving the IT department less to worry about. “It makes the storage invisible to people running VMs on top,” says Jan Ursi, senior director for channel sales and marketing at Nutanix EMEA. “Traditional storage had to be hooked up to servers and managed separately, creating LUNs, zones, masks and so on. With hyper-convergence, the VMs use internal hooks via software that presents the storage as an automated service.”
How Hyper-converged Storage Affects You
Hyper-converged storage lets the user deduplicate data efficiently, eliminating redundant copies to reduce the space needed to store it. With storage device capacities growing at a breakneck pace, the ability to access data quickly has become more important than ever. Doron Kempel, founder of Diligent Technologies, gives a succinct example in an interview about data capacity and mobility issues:
“We used to ship 18 GB drives; today we ship 3, 4, 5, 6 TB drives, which means the density of the drive increased about 300 times. But the RPM, the performance of the drive, increased 45% or 50%. It’s as if we used to drink with a straw out of a cup, but now, with the same straw we’re drinking out of a swimming pool. It doesn’t work. There is a major IOPS problem. So what do we do? We throw SSD [solid-state drives] at the problem. But SSD is very expensive. So if we dedupe the data before it ever hits the disk, we reduce the number of IOPS.”
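The dedupe-before-write idea Kempel describes can be sketched with content hashing. This is a minimal, hypothetical illustration, not a real product’s implementation: data is split into fixed-size chunks, each chunk is identified by its SHA-256 digest, and only chunks the store has never seen are written. A duplicate chunk costs one hash lookup instead of a disk write, which is where the IOPS savings come from.

```python
import hashlib

# Illustrative sketch of inline deduplication: only unique chunks are
# stored; duplicates are replaced by a reference to an existing chunk.
# Chunk size is arbitrary here; real systems tune it carefully.

CHUNK_SIZE = 4096

def dedupe_write(data: bytes, store: dict) -> list:
    """Write data into `store`; return a recipe of chunk digests."""
    recipe = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in store:   # only new content "hits the disk"
            store[digest] = chunk
        recipe.append(digest)
    return recipe

def dedupe_read(recipe: list, store: dict) -> bytes:
    """Reassemble the original data from its chunk recipe."""
    return b"".join(store[d] for d in recipe)
```

Writing the same data twice stores its unique chunks only once; the second write generates no new stored chunks at all.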
Essentially, Kempel suggests solving the problem before it arises: deduplicate the data before it is ever written, and fewer IOPS are needed in the first place. Hyper-converged storage is the future of deduplication and of speedy storage solutions.