As discussed in the previous episode, “How to Select Mid-Range Storage?”, selecting a Fibre Channel storage array is a critical process that requires proper sizing in order to arrive at the best-fit solution and gain the intended benefits of a SAN storage solution.
What if you find that your storage infrastructure no longer meets your performance needs, or that you have an old storage array you want to integrate into your new storage infrastructure without any application downtime? What about data migration, can it be done online? What if, over the years, you bought from different vendors and your infrastructure now has multiple boxes from multiple vendors? The answer to all of these questions, and the way past these storage complications, lies in two words: “Storage Virtualization”.
Storage virtualization has come a long way over the past seven years. After a false start in 2001, several vendors disappeared, many others repositioned themselves to focus on the Small and Medium Business (SMB) space, and still others reinvented themselves with completely different products. With the SAN Volume Controller (SVC), launched in July 2003, IBM nurtured the market at a time when many players did not even want to say the V-word anymore. IBM persisted and successfully demonstrated the potential of storage virtualization and the real Return on Investment (ROI) it delivers to customers. IBM’s gains have been substantial: SVC has been deployed at sites worldwide where customers were suffering from performance problems or storage complexity, and it has solved those problems.
SVC is a mature, enterprise-proven product that has demonstrated investment protection to its customers. Moreover, SVC and its in-band architecture can scale to handle the largest, most stringent enterprise SAN environments. In doing so, IBM has led the market where others have only slowly followed. The company’s efforts have in fact changed the market, which is now filled with storage virtualization solutions. But a casual glance at the competition is telling: HDS has successfully brought the USP to market; HP and Sun resell the controller-based HDS solution; EMC offers Invista; and Dell still has no offering. Measured by customer adoption, all of these solutions remain in their infancy. The bottom line is that IBM paved the way in showing customers the value of virtualization, to the point that the V-word is back in the vocabulary of every storage vendor, all of whom have rushed solutions to market over the past few years. The truth, however, is that none of these solutions comes close to the success and maturity demonstrated by IBM’s SVC.
The value of storage virtualization is unquestioned. It provides a consistent way to perform storage management even when the underlying physical storage is heterogeneous, and it is a key building block for the next-generation data center, which will focus on delivering a variety of services. From our experience, we believe IBM’s gains to date are only a shadow of what is to come as IBM ties storage virtualization to other efforts, such as server blades and server virtualization, to deliver the value propositions of the coming decade.
The Storage Management Nightmare
It is no secret that the storage administrator’s job has gotten a lot harder over the past decade. Much of the reason can be traced back to five fundamental challenges that exist in most enterprise data centers.
Challenge-1: Rapid Capacity Growth
IT departments are being asked to store more information for longer. One response is to add low-performance, high-capacity SATA disks or to put more disks on Fibre Channel loops, both of which work against system stability and proper performance. Another is to add a new storage array to the infrastructure, which ends in multiple arrays that require more professional IT staff to manage while still carrying high risk. Storage virtualization is the only practical way to keep up with ever-increasing capacity requirements in an always-online infrastructure. Storage virtualization will not be optional in the next-generation data center.
Challenge-2: Poor Storage Utilization
Compounding this data growth is the fact that deployed storage capacity is not readily accessible to the hosts that need it. Current storage practices over-provision disk, because the cost of running out of capacity is high and over-provisioning reduces the need for repeated provisioning in the future. As a result, typical storage utilization rates in most enterprises run in the 25-40% range. Today, low utilization creates more burden than ever before, consuming precious, expensive power and generating unnecessary heat. With SVC, storage provisioning becomes a routine operation.
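The arithmetic behind this point can be made concrete. The sketch below uses hypothetical figures (the arrays, capacities, and the 50% headroom factor are illustrative assumptions, not measurements from any real site) to show why per-array over-provisioning drives utilization into the 25-40% band, and how pooling the same disks behind a virtualization layer reduces the total capacity that must be set aside:

```python
# Illustrative only: all capacity figures below are hypothetical assumptions.
# Each array is recorded as (used_tb, allocated_tb).

def utilization(used_tb: float, allocated_tb: float) -> float:
    """Fraction of allocated capacity actually holding data."""
    return used_tb / allocated_tb

# Three over-provisioned arrays, each sitting in the 25-40% band noted above.
arrays = [(10.0, 40.0), (12.0, 30.0), (8.0, 30.0)]

per_array = [round(utilization(u, a), 2) for u, a in arrays]
print(per_array)                        # -> [0.25, 0.4, 0.27]

# Virtualizing the same disks into one pool lets free space be shared,
# so headroom is provisioned once for the pool, not once per array.
total_used = sum(u for u, _ in arrays)  # 30 TB of real data
pooled_allocation = total_used * 1.5    # a single shared 50% headroom
print(pooled_allocation)                # -> 45.0 TB, versus 100 TB before
```

The same 30 TB of data needs 100 TB of allocated disk when every array carries its own safety margin, but only 45 TB when the margin is shared across one virtualized pool.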
Challenge-3: Tiered Storage
Storage administrators are being asked to wring costs out of their infrastructure by ensuring that data is stored on the most cost-efficient media possible. Typically, the value of data decays over time, so it makes no sense to keep seldom-accessed information on the highest-cost storage systems and media. To cut costs, storage administrators must create tiers of different types of storage based on performance and cost-per-capacity ($/TB) metrics. They must continually ensure that data sits on the most efficient storage available, redistribute data among storage types (FC disk, SATA disk, and tape, i.e. on-line, near-line, and off-line storage), and ensure that protection practices such as replication are consistently maintained across tiers. In practice, migrating data is disruptive and breaks many of the complex relationships between replicas and data-protection systems. This makes tiered storage extremely complex, if not impossible, unless you have SVC, in which case none of these problems arise.
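The tiering decision described above can be sketched as a simple placement rule. The tier catalogue below is a hedged illustration: the $/TB figures and access-frequency thresholds are invented for the example, not vendor data, and a real policy engine would weigh many more factors:

```python
# Hypothetical tier catalogue: the $/TB costs and the access-per-day
# thresholds are illustrative assumptions, not real vendor figures.
TIERS = [
    # (tier name, media, cost per TB in USD, min accesses/day to justify it)
    ("On-Line",   "FC disk",   5000, 100),
    ("Near-Line", "SATA disk", 1500,   1),
    ("Off-Line",  "Tape",       200,   0),
]

def place(accesses_per_day: int) -> str:
    """Walk from fastest/most expensive to cheapest; the first tier whose
    activity threshold the data still meets is where it belongs."""
    for name, _media, _cost_per_tb, threshold in TIERS:
        if accesses_per_day >= threshold:
            return name
    return TIERS[-1][0]

print(place(500))  # hot data    -> "On-Line"
print(place(20))   # warm data   -> "Near-Line"
print(place(0))    # cold data   -> "Off-Line"
```

Because the catalogue is ordered from most to least expensive with descending thresholds, the first match is the cheapest tier whose performance the data's activity level actually warrants.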
Challenge-4: Non-Disruptive Data Migration
In today’s world, IT systems are expected to be constantly operational. However, storage administrators are often required to take storage offline in order to migrate data between arrays or change the storage infrastructure. In fact, storage administrators are expected to perform technology refreshes, vendor/equipment swap outs, and re-configuration activities as part of routine data center and storage maintenance. These actions prevent applications from accessing data and thus increase application downtime. The cost of downtime can dramatically impact a corporation’s bottom line and its reputation. Therefore, storage administrators need a way to perform storage changes and data migrations between arrays and different types of storage media while still maintaining continuous availability for the applications and their data.
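The mechanism that makes such online migration possible is the extra level of indirection a virtualization layer adds: hosts address virtual extents, the layer owns the virtual-to-physical map, so moving data is a copy followed by an atomic remap. The toy model below is a deliberately simplified sketch of that idea (the class, method names, and data are invented for illustration; real SVC extent handling is far more involved):

```python
# Toy model of migration behind a virtualization layer. Everything here
# (class names, extent numbering, payload) is a hypothetical illustration.

class VirtualDisk:
    def __init__(self, mapping, arrays):
        self.mapping = mapping  # virtual extent -> (array name, physical extent)
        self.arrays = arrays    # array name -> {physical extent: data}

    def read(self, vext):
        # Hosts only ever see the virtual extent; the indirection is hidden.
        array, pext = self.mapping[vext]
        return self.arrays[array][pext]

    def migrate(self, vext, dst_array, dst_pext):
        src_array, src_pext = self.mapping[vext]
        # 1. Copy the extent to the destination array.
        self.arrays[dst_array][dst_pext] = self.arrays[src_array][src_pext]
        # 2. Atomically switch the map; host I/O keeps flowing throughout.
        self.mapping[vext] = (dst_array, dst_pext)
        # 3. Free the old extent on the array being retired.
        del self.arrays[src_array][src_pext]

# Old array "A" is being swapped out for new array "B":
vdisk = VirtualDisk({0: ("A", 7)}, {"A": {7: b"payroll"}, "B": {}})
assert vdisk.read(0) == b"payroll"
vdisk.migrate(0, "B", 3)
assert vdisk.read(0) == b"payroll"  # same virtual address, new physical home
```

The application's view, virtual extent 0, never changes, which is why the migration needs no downtime; only the layer's private map is updated.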
Challenge-5: Data Protection and Disaster Recovery
Managing a disaster recovery solution is not an easy job. The management of snapshots, backup, replication, and mirroring technologies imposes a tremendous level of administrative complexity on the storage organization. Storage administrators must protect each application and its data while coping with the subtle differences between the various heterogeneous storage array vendors and products. DR and data protection compound the already critical storage management problem: an administrator must now manage two copies of the same data across two locations while ensuring their consistency.
By: Mohamed El Mofty
Storage Networking Solutions Expert
IBM Systems and Technology Group