Watch out for revolution in SSD: Hitachi

CIOL Bureau

Hubert Yoshida, VP and CTO, Hitachi Data Systems

BANGALORE, INDIA: On the sidelines of CIOL C-Change 2011, CIOL got an opportunity to chat with Hubert Yoshida, vice president and chief technology officer, Hitachi Data Systems, about the company, its offerings, the market trends and its road map.

Yoshida is responsible for defining the technical direction of Hitachi Data Systems and currently leads the company's effort to help customers address their data life cycle requirements, covering compliance, governance and operational risk issues.

CIOL: Why is virtualization adoption still in its infancy stage after all these years? Gartner points out that only 16 per cent of enterprise workloads are running on virtual machines today.

Over the last two years, server virtualization has taken off quite widely, but storage virtualization got off to a bad start. The first implementations of storage virtualization were done in the network with appliances. Since these appliances do not talk to each other easily, the only solution in that scenario is a mapping layer, which becomes a single point of failure and creates vendor lock-in, requiring massive changes for migration. Another issue is the complexity of the SAN (Storage Area Network). SANs are the third major cause of application failure in the environment, after user error and software error. Storage virtualization thus gained a perception of being very complicated and risky.

But things have improved since those early stages. Server virtualization was also difficult until VMware came into the picture with its products. What we did is very similar. We said customers do not require mapping or any additional setup. Our concept is as simple as plug and play. If customers want to switch, they can just disconnect the wire and plug it in somewhere else.

CIOL: So had there been confusion between storage and server virtualization? Did you need extra effort to educate customers about storage virtualization?

Yes, the customers were confusing the whole issue with appliances, and they were confusing the terms server virtualization and storage virtualization. Many customers who claimed to have adopted virtualization were largely talking about server virtualization, and had little idea about the storage side.


So after the first job of educating companies about virtualization in general, we undertook a second round of awareness building, telling them about storage virtualization and removing the fears regarding complexity and risk.

Now people have become more aware of the two terms, and the downturn helped increase their understanding.

CIOL: How is the adoption trend at the SMB level? How different are their requirements from those of large enterprises?

In terms of requirements, there is no difference between SMEs and large enterprises. All of them have critical applications, and all of them have to compete with each other in terms of availability and agility. So they all need storage.

Some time back, when we started storage virtualization, one of our first customers was a university. Though they were in the education space, they also did a lot of work in healthcare. They were left with 120 GB. They adopted storage virtualization, spending millions of dollars to meet their business requirements. Now they work in terabytes.

So, for SMEs, given their investment constraints, we have to have a lower entry point. For example, we can make use of an existing storage box rather than a new one.


CIOL: Apart from virtualization, what are the trends that you see catching on this year?

One of the things that will see uptake is the communication between applications and storage. In other words, storage will need to scale up and offload more than before.

The second is the movement of applications from storage. People talk about the explosion of data, but now applications are growing at an even faster pace, creating a need for storage optimization to make them more scalable.


The third need, which comes out of necessity, is power and cooling. Today data centers work on reducing their power footprint, but they also need cooling. Cloud providers are building efficient data centers using the relevant technology, and that's why the cloud becomes attractive from an infrastructure point of view.

CIOL: What is the next step in virtualization 5.0 in terms of technology?

There is a need to distribute infrastructure across geographies, so that even though the infrastructure is geographically distributed, it looks like one common network. The clustering model is the way to scale cache, but it is not good for applications. We couple the internal switch, and our model is to be open like VMware rather than follow a proprietary approach.

CIOL: As the CTO of HDS, what, according to you, is the next wave of technology innovation and adoption that you are setting for the company?


Frankly speaking, the world of technology is very dynamic. I belong to Hitachi, which is an R&D company. We are the only vertically integrated storage company, so we need not wait for off-the-shelf products; we can build our own.

But one of the areas where innovation will come is in disks. Storage still relies on mechanical disks, which have a limited road map in this world of electronics. So there is a great future in the area of Solid State Disks (SSDs). Also, today's NAND technology is not what will continue into the future. Today there are two types of NAND: one for the enterprise (single-level cell), and a second for commodity products.

In the commodity space, the new products (phones, tablets) are going in the SSD direction. This is not visible to most of us because it is happening only in media and entertainment products. We have seen things following the law of doubling capacity every 18 months. If this law (Moore's law) slows down to 25 per cent, it will have a major impact on the cost of IT. What we are doing is separating the intelligence from the media products.
