Achieve cost-effective storage as easy as A, B, C
Author: Deepa | December 18, 2012
BANGALORE, INDIA: The challenges facing today’s CIOs go far beyond shrinking IT budgets. They must also manage the continuous emergence of new technologies and applications, as well as the increasingly stringent demands of customers and end-users.
In the constant struggle to balance these onerous demands while also ensuring performance and quality of service, many IT managers rapidly expand their corporate infrastructure, unintentionally creating disparate information silos of multi-vendor hardware and software. This ripple effect significantly complicates storage management: it increases the burden on workloads and capacity, dramatically drives up storage costs, and places new demands on floor space, power and cooling.
To put this situation in context, recent research shows that 47 per cent of total storage management costs are now spent on file management alone. That is why simplifying file and content management is crucial to achieving the sustainability and scalability that organizations need to accommodate future storage growth, thereby minimizing long-term total cost of ownership (TCO).
Of course, simplifying a complex, inefficient and chaotic heterogeneous environment while keeping TCO low is no easy task – unless companies have a reliable, well-developed blueprint that can be implemented as easily as A, B, C.
In data management terms, ‘A, B, C’ is an easy way to explain the key requirements of successful storage management: A stands for Archive First, B for Backup Less, and C for Consolidate More. By managing these ‘A, B, Cs’ properly and smartly, organizations can not only streamline operations and improve performance and efficiency, but also ensure robust data security at drastically reduced storage costs.
Intelligent Data Archives
Industry studies suggest that only 20-30 per cent of the data on most networks is active, while the other 70-80 per cent is inactive or accessed only infrequently. Indeed, 51 per cent of open-system data is generally unnecessary, 22 per cent is duplicated, and 68 per cent has not been accessed for 90 days or more. Ironically, this inactive, stagnant data consumes the majority of storage resources, not only making data storage expensive but also leading to gross inefficiencies in system operations.
By applying an intelligent content-archiving approach, file and content management can be aligned with precise business needs via a service-oriented storage repository. In a complex heterogeneous environment consisting of various classes of storage devices and platforms, intelligent file tiering can be used to maximize storage utilization and increase the ROI of storage assets. For example, intelligent file tiering can dynamically move inactive data from primary disk – normally an expensive, high-tier level of storage – to easily accessible, less expensive low-tier storage such as SATA drives, freeing the high-performance drives such as Fibre Channel for business-critical, active datasets.
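As a rough illustration of the tiering policy described above – and only an illustration, not any vendor's actual implementation – a script could demote files untouched for a set number of days from an expensive primary tier to a cheaper archive tier. The mount points and 90-day threshold here are hypothetical placeholders:

```python
import os
import shutil
import time

# Hypothetical tier locations -- substitute your own mount points.
PRIMARY_TIER = "/mnt/fc_primary"    # expensive high-tier (e.g. Fibre Channel) storage
ARCHIVE_TIER = "/mnt/sata_archive"  # cheaper low-tier (e.g. SATA) storage
IDLE_DAYS = 90                      # demote files not accessed in 90 days

def demote_inactive_files(primary=PRIMARY_TIER, archive=ARCHIVE_TIER,
                          idle_days=IDLE_DAYS):
    """Move files whose last access time exceeds idle_days to the archive tier."""
    cutoff = time.time() - idle_days * 86400
    moved = []
    for root, _dirs, files in os.walk(primary):
        for name in files:
            src = os.path.join(root, name)
            if os.stat(src).st_atime < cutoff:
                # Mirror the directory layout on the archive tier.
                rel = os.path.relpath(src, primary)
                dst = os.path.join(archive, rel)
                os.makedirs(os.path.dirname(dst), exist_ok=True)
                shutil.move(src, dst)
                moved.append(rel)
    return moved
```

A real tiering engine works below the file system and leaves a stub or link behind so users still see one namespace; this sketch only captures the access-age policy itself. Note that access times may be unreliable on volumes mounted with `noatime`.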
This automated data placement makes data storage significantly more cost-efficient while also requiring dramatically less investment in new capacity. In fact, it is estimated that intelligent archiving can reduce overall storage costs by more than 25 per cent while simultaneously meeting security and compliance requirements by providing low-cost, long-term data protection for all inactive data.
Backup Less

By implementing intelligent file tiering, less data is left in primary storage, which also reduces backup loads. Instead, inactive data is compressed and de-duplicated, then stored in a content repository for long-term retention. Once in the archive environment, advanced features such as multiple data-protection copies, integrity checking and advanced replication accelerate system restores and can eliminate the need for backup altogether. If a disaster-recovery copy is required, this inactive data can quickly and easily be protected simply by creating a second replica, either local or remote.
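The de-duplication step mentioned above can be sketched in miniature: if each archived file is stored under a digest of its contents, identical copies map to the same name and occupy space only once. This is a toy content-addressed store for illustration, not any vendor's actual repository; the `archive_file` helper and its arguments are invented for this sketch:

```python
import hashlib
import os
import shutil

def archive_file(path, repo):
    """Copy a file into repo, named by the SHA-256 digest of its contents.

    Duplicate files hash to the same name, so each unique piece of
    content is stored exactly once (whole-file de-duplication).
    """
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    digest = h.hexdigest()
    dst = os.path.join(repo, digest)
    if not os.path.exists(dst):  # duplicate content: skip the copy
        shutil.copy2(path, dst)
    return digest
```

In this scheme, the disaster-recovery replica described above amounts to copying the repository directory to a second (local or remote) location; production archives additionally de-duplicate at sub-file granularity and compress the stored chunks.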
In this way, the entire backup process is improved to achieve unprecedented levels of efficiency and scalability, with a significant decrease in backup windows and workloads. Reduced backups also slow the need for new backup media, along with the associated floor space, power and cooling costs. In fact, the total backup cost of unstructured data can be reduced by as much as 60 per cent. More importantly, with less data to back up, organizations can fit the backup process within the backup window without deploying fast, expensive backup technology.
For multi-location or branch office operations, the traditional backup process is particularly costly and complicated. Edge-to-core storage technology offers the ideal solution to streamline backup of remote sites because it enables complicated storage management and regular backups to be centralized and automated at the core, which in turn means that remote offices and branches can focus their resources on business growth.
Consolidate More

In the quest to achieve cost-efficient storage management, it is vital for organizations to consolidate their many storage systems (containing both structured and unstructured data) into a single cluster. With fewer silos, smaller footprints and fewer administration points to manage and control, storage CAPEX and OPEX, management overheads, power, cooling and floor-space costs all fall dramatically.
A virtualized, integrated storage infrastructure that converges block, file and content data also offers a single platform to cost-effectively manage datasets across dispersed storage islands. By creating pools of storage in a heterogeneous environment, organizations can simplify management, improve resource utilization, and reinvigorate or extend the useful life of existing assets.
The author is VP & GM, India, at HDS.