
How smart are IBM's new storage products?

Deepa

BANGALORE, INDIA: Faced with the challenge of managing massive volumes of data, organisations are looking for affordable storage options. With storage a fundamental part of cloud computing, IBM offers a solution that is not just about storing data, but about storing it in the right place.


IBM has revamped its storage product portfolio as part of its Smarter Computing initiative. Arpita Sengupta, product manager, Storage, IBM India, says the company has invested billions of dollars in research and development.

In an interaction with Deepa Damodaran of CIOL, she said: "In India we launched Smarter Storage, which is a part of the Smarter Computing platform and is focused on making storage systems more efficient." She also spoke about what makes it affordable and efficient.

Excerpts:


CIOL: How is IBM Smart Storage different from what IBM has been doing so far and how is it more efficient?

Arpita Sengupta: Most organisations, and competing vendors, talk about managing capacity growth by adding another SAN or more storage devices, but we say: 'stop storing so much, move data to the right place, and store more with what is already on the floor'.

 


IBM Smart Storage has three pillars:

Design Efficient Systems. This is about managing cost and capacity. We do it with the help of a smart algorithm in our systems called real-time compression.

This is different from our competitors' approach because once the system is integrated into a customer's infrastructure, it works on its own. That is, it is efficient by design: the customer does not have to worry about implementing or installing it, as it integrates automatically in half the time it used to take earlier. Moreover, we run this process on active, production data.


For unstructured data, we commit a compression rate of up to 50 per cent. For structured data, which is mostly relational, tabular data such as an Oracle database, we usually commit a compression rate of up to almost 85 per cent.
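As a back-of-the-envelope illustration of what those compression rates mean for usable capacity (the function and figures here are illustrative, not taken from IBM's tooling):

```python
def effective_capacity_tb(physical_tb: float, compression_rate: float) -> float:
    """Logical data that fits on physical_tb of disk if the system removes
    compression_rate (a fraction, e.g. 0.50 for 50 per cent) of the bytes."""
    return physical_tb / (1.0 - compression_rate)

# 10 TB of physical disk at the quoted rates:
print(effective_capacity_tb(10, 0.50))  # unstructured data -> 20.0 TB
print(effective_capacity_tb(10, 0.85))  # structured data   -> ~66.7 TB
```

In other words, the higher rate quoted for structured data lets roughly six to seven times the physical capacity be stored.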

Self Optimization: This is about improving performance and productivity. Customers usually invest in different types of storage environments, or tiers: SATA disks are the most economical, then SAS, then solid state drives (SSD). We offer a self-optimizing algorithm based on Easy Tier, which automatically promotes the busiest data to the fastest tier so that it can be called up much quicker, and pushes it back out once its peak in performance demand has passed.

It uses only three per cent of the solid state drive capacity and gives up to 300 per cent improvement in performance.
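A minimal sketch of the idea behind such heat-based tiering, with the few hottest extents assigned to the small SSD tier and the rest left on SAS. This is not IBM's Easy Tier algorithm; the names and counts are illustrative:

```python
# Sketch of heat-based extent placement: rank extents by access count and
# give only the hottest ones the scarce SSD slots (illustrative, not Easy Tier).
def place_extents(access_counts: dict, ssd_slots: int) -> dict:
    """Map each extent id to a tier, hottest extents first onto the SSD."""
    by_heat = sorted(access_counts, key=access_counts.get, reverse=True)
    return {ext: ("ssd" if rank < ssd_slots else "sas")
            for rank, ext in enumerate(by_heat)}

tiers = place_extents({"e1": 900, "e2": 40, "e3": 310, "e4": 5}, ssd_slots=1)
print(tiers)  # {'e1': 'ssd', 'e3': 'sas', 'e2': 'sas', 'e4': 'sas'}
```

This mirrors the claim above: a small SSD fraction absorbs the hottest I/O while the bulk of the data stays on cheaper disk.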


Cloud Agile Systems. This is based on the concept of virtualization, and it has a couple of layers:

The first layer addresses the return on investment for customers: it improves application availability by about 30 per cent, improves storage deployment speed by 25 per cent, and can provision storage immediately.

The product is completely vendor agnostic. You can use the V7000 as a virtualization head: it can sit in an environment that also contains competitive storage arrays, and it will virtualize the whole storage estate and make it look like one single pool.


This used to be a feature available only in our enterprise systems, in the form of the SAN Volume Controller (SVC). But since the code is proprietary to us, we have been able to bring it to the mid-range systems as well.
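The "single pool" idea can be sketched as follows: a virtualization head aggregates capacity from backend arrays of any vendor and carves volumes out of the combined pool. The class and method names here are illustrative, not V7000 or SVC interfaces:

```python
# Illustrative sketch of vendor-agnostic storage virtualization: arrays from
# any vendor join one pool, and volumes are allocated from the combined total.
class VirtualPool:
    def __init__(self):
        self.backends = []        # list of (vendor, capacity_tb) tuples
        self.allocated_tb = 0.0

    def add_backend(self, vendor: str, capacity_tb: float) -> None:
        self.backends.append((vendor, capacity_tb))

    def free_tb(self) -> float:
        return sum(cap for _, cap in self.backends) - self.allocated_tb

    def create_volume(self, size_tb: float) -> bool:
        """Allocate a volume if the pool has room, regardless of which
        vendor's array the capacity physically comes from."""
        if size_tb > self.free_tb():
            return False
        self.allocated_tb += size_tb
        return True

pool = VirtualPool()
pool.add_backend("IBM V7000", 50)
pool.add_backend("OtherVendor", 30)   # vendor agnostic: any array joins
print(pool.create_volume(60), pool.free_tb())  # True 20.0
```

Note that the 60 TB volume succeeds even though no single backend holds 60 TB, which is the point of pooling.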

The second layer is an algorithm around Active Cloud Engine. If data on the SAS tier has been idle for more than, say, 30 days and is not worth keeping on expensive SAS disk, one can set up a policy that pushes it down to the SATA tier; and if it lies idle even on the SATA layer, the policy can push it out to external storage in the form of a tape library, so that disk space is not wasted when it is not required.
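The two-step demotion policy described above can be sketched like this. The thresholds and tier names are illustrative assumptions, not Active Cloud Engine's actual policy syntax:

```python
# Sketch of an idle-age demotion policy: idle > 30 days on SAS -> SATA,
# and idle well beyond that on SATA -> tape (thresholds are illustrative).
def next_tier(current_tier: str, idle_days: int) -> str:
    """Return the tier a file should live on under the demotion policy."""
    if current_tier == "sas" and idle_days > 30:
        current_tier = "sata"
    if current_tier == "sata" and idle_days > 60:
        current_tier = "tape"
    return current_tier

print(next_tier("sas", 10))   # sas  (still active, stays put)
print(next_tier("sas", 45))   # sata (idle past 30 days, demoted off SAS)
print(next_tier("sas", 90))   # tape (idle long enough to leave disk entirely)
```

The policy is evaluated per file, so expensive SAS disk holds only data that has actually been touched recently.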

The Active Cloud Engine also works across locations: from different cities across the globe, users can upload, access and modify the same file.


CIOL: However, do customers have to replace their existing infrastructure in order to install this system? And moreover, isn't cloud a better option?

Arpita Sengupta: Most of the time, customers will go for the V7000, our flagship product, which can be used by SMBs as well as large enterprises. Most customers are currently running the 4000 and 5000 series and are already due for an upgrade, because those systems have been around for three to four years.

Now, if they want to buy the V7000, they do not have to do away with their old systems, because the V7000 can virtualize the whole environment, acting as the head, and customers can migrate in a phased manner.

Storage is also a fundamental part of cloud computing, which we at IBM define by the three Vs: Velocity, Variety and Volume. Storage systems fit well into cloud infrastructure because, irrespective of whether it is public, private or hybrid, you still have to use disks.

And that is why, be it a cloud computing infrastructure or a locally hosted data centre site, they will anyway have to think through sizing when it comes to disk utilisation.

CIOL: How affordable are these systems?

Arpita Sengupta: We have met the Gartner and IDC benchmarks. We bundle software features along with the product itself, unlike competitor products where you have to pay an additional licence cost for features such as Easy Tier, or when you add an enclosure or want to expand.

We ship the whole box from the plant. Moreover, basic protocols such as NFS, CIFS and FCP, for Unix and Windows, which usually come at an additional cost, are part of the bundle.

CIOL: What is the capacity of this system?

Arpita Sengupta: You are looking at 32 petabytes of addressable capacity, from a virtualized point of view. The starting specification is usually between 10 and 15 terabytes, and mid-range customers can even start from a 5 terabyte box. So the V7000 supports a huge range of requirements, based on the kind of environment the customer asks for.
