It’s Time for a Composite Architecture: Hyper-converged and converged

Which is the better choice: converged, hyper-converged, or composite infrastructure? Today's IT infrastructure challenges point toward composite.

CIOL Bureau

In the past, customers bought infrastructure for key applications such as VMware, SAP HANA, and Oracle by buying servers from a vendor such as Lenovo, storage from a vendor like Hitachi, and network switches from vendors like Cisco or Juniper. All three would serve the same application, and if something broke, the customer had to deal with multiple vendors.

This led to multiple hand-offs between vendors, resulting in delayed problem identification and resolution - and the operational efficiency of a DIY environment was low, because each infrastructure component required its own management console, with no deployment automation when the infrastructure was onboarded.

The Need for Converged

The solution put forward for this problem was Converged, which simply meant that for a commonly used application like SAP, Oracle or VMware, you didn’t have to worry about putting the infrastructure together.

A single vendor could bundle it together for you in one stock-keeping unit (SKU), with certified network, server, and storage components. If something broke, you could go to the vendor to sort it out for you.

The management plane was unified to provide visibility across the inventory, health, and monitoring aspects of the entire stack. Automation capabilities were incorporated for initial deployment, as well as for performing frequent tasks on the infrastructure.

A lot of customers preferred to walk down this path: they dealt with a single vendor, deployed faster because the entire stack was pre-tested and validated, and gained better operational efficiency through single-pane management across the different infrastructure elements, all resulting in quicker resolution when things broke.

Birth of Hyper-converged Infrastructure

A desire to further reduce the number and complexity of vendors gave birth to hyper-convergence. Hyper-converged simply meant putting the storage disks inside the server, with software that binds those servers into a cluster and manages them. There would be no external storage.

The customer liked this even more, as they had fewer components to deal with. The entire storage lifecycle was managed at the software layer, and they had resiliency against hardware failures. In some cases, even the underlying hardware lifecycle was capable of being managed by the software either through plug-ins, or natively.

Hyper-converged also brought flexibility: you could incrementally scale your compute and storage together by adding nodes in a modular fashion, avoiding the guesstimates that resulted in big upfront purchases of heavy iron.

Why Hyper-converged?

Customers go for hyper-converged because it is easy to use, easy to manage, and easy to scale. A server brings compute power, performance, and storage capacity in one box, because the disks live inside the server. If more is needed, nodes are added, which makes it very easy to scale in a linear manner. It is truly software-defined, in that the underlying hardware matters far less. Customers like this.

If the application the customer is running scales linearly, compute and storage requirements grow together, and that is a perfect workload for hyper-converged.
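As an illustrative sketch (the per-node specs below are hypothetical, not from any vendor), linear scale-out means cluster compute and storage always grow in lockstep:

```python
# Illustrative model of hyper-converged scale-out.
# Node specs are hypothetical; the point is that each node contributes a
# fixed slice of BOTH compute and storage, so capacity grows in lockstep.

CORES_PER_NODE = 32        # hypothetical cores per HCI node
STORAGE_TB_PER_NODE = 20   # hypothetical usable TB per HCI node

def cluster_capacity(nodes: int) -> dict:
    """Total compute and storage for an HCI cluster of `nodes` servers."""
    return {
        "cores": nodes * CORES_PER_NODE,
        "storage_tb": nodes * STORAGE_TB_PER_NODE,
    }

# Adding nodes scales both dimensions together; you cannot grow one alone.
for n in (4, 6, 8):
    cap = cluster_capacity(n)
    print(f"{n} nodes -> {cap['cores']} cores, {cap['storage_tb']} TB")
```

For workloads whose compute and storage needs really do grow in the same ratio, this coupling is exactly what you want.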

But hyper-converged has some limitations.

If an application says, 'I don't need more storage capacity – I just need more compute power to serve a growing number of user requests,' hyper-converged cannot do that, because every time you add a server you add both compute and storage. It is not a good fit for use cases where storage and compute must scale independently. Some vendors offer storage-only nodes, but in my view this is more of a workaround.

Let's look at the opposite scenario, and that is where I have seen it put a serious dent in a customer's budget. When a high-investment application such as Oracle Database - which is priced based on the number of CPUs the customer has - runs on hyper-converged and the customer only needs more storage, adding capacity also means adding CPU power. The result is additional Oracle software licence costs as the CPU count goes up, even though the extra compute was never required.
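A rough sketch of that cost effect, with entirely hypothetical node sizes and licence prices (the shape of the cost is the point, not the numbers): on hyper-converged, extra storage arrives only in whole nodes, and every node drags cores and per-core licences along with it, while external storage adds capacity with zero new cores:

```python
# Hypothetical illustration of the licensing side effect described above.
# All numbers are made up for the sketch; none come from Oracle price lists.
import math

CORES_PER_NODE = 32        # hypothetical cores added per HCI node
TB_PER_NODE = 20           # hypothetical usable TB added per HCI node
LICENCE_PER_CORE = 1_000   # hypothetical per-core database licence cost

def extra_licence_cost_hci(extra_tb_needed: float) -> int:
    """On HCI, storage grows only in whole nodes, each bringing cores
    that must also be licensed."""
    nodes_added = math.ceil(extra_tb_needed / TB_PER_NODE)
    return nodes_added * CORES_PER_NODE * LICENCE_PER_CORE

def extra_licence_cost_external(extra_tb_needed: float) -> int:
    """With external (converged) storage, capacity grows with no new cores,
    so no new database licences."""
    return 0

# Needing 50 TB more storage forces 3 new HCI nodes (96 unwanted cores):
print(extra_licence_cost_hci(50))       # licence cost for the extra cores
print(extra_licence_cost_external(50))  # 0 - no CPUs were added
```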

Converged

With Converged, storage is external and one can add storage and compute independently of each other. For the Oracle Database, the customer can simply add storage capacity without beefing up CPU cores.

Converged also does well when latency is an important consideration, and environments requiring zero recovery-point objective (RPO) and recovery-time objective (RTO) - such as core banking applications - prefer to use external storage systems.

And in some storage-intensive workloads like archive, converged scores well on data centre footprint as compared to hyper-converged.

Even though converged infrastructure offers some unique benefits, it is not truly software-defined, and management is not as easy as with hyper-converged.

Converged or Hyper-converged or Both?

There is clearly a huge adoption wave for hyper-converged, with spending on it growing 40-70% year-on-year in Asian countries.

Most customers I see today have both converged and hyper-converged environments running in separate silos. Depending on the workload that comes in, they evaluate it and deploy it on one or the other.

But the more islands one has, the weaker the utilisation. There is cost leakage simply from maintaining two separate environments. And sometimes workload behaviour changes: when the workload first arrived it was a good fit for hyper-converged, and two years later its nature changed and it became a better fit for converged. How do you move an application from one environment to the other?

Yet today, customers are more-or-less forced to run two separate environments and they are forced to decide when a workload comes in, where they should put it.

Therefore, the best answer to whether to choose hyper-converged or converged is ‘have both integrated.’

Best of hyper-converged and converged, together in one architecture

If you need the best of both worlds, a composite architecture is the answer, so that the customer doesn't have to choose.

A composite architecture can be built by taking hyper-converged nodes and network switches (IP and SAN) and connecting an external storage system to them. Robust software unifies the management and lifecycle of all the solution elements, along with a rich set of ecosystem integrations.

If you are using VMware vSAN, you will find composite architecture very useful.

An administrator will have different storage pools available: one pool comes from external storage, another from vSAN. Depending on whether hyper-converged or converged is required, you assign storage from the respective pool. In the VMware environment you could have 100 VMs, for example, some of which get storage from the external SAN because they run a workload optimised for converged, while others get storage from internal disks because they are a good fit for hyper-converged.
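A toy model of that placement decision (the pool names and the placement rule are illustrative assumptions, not a vendor API): each VM is mapped to whichever pool suits its workload profile, while both pools live under one environment:

```python
# Toy model of storage placement in a composite architecture.
# Pool names and the placement rule are illustrative assumptions only;
# this is not a real vSAN or storage-array API.

POOLS = {"vsan": [], "external_san": []}

def place_vm(name: str, scales_compute_and_storage_together: bool) -> str:
    """Workloads whose compute and storage grow together fit the vSAN pool;
    workloads needing independent storage growth go to the external SAN."""
    pool = "vsan" if scales_compute_and_storage_together else "external_san"
    POOLS[pool].append(name)
    return pool

# Hypothetical VMs from the 100-VM example above:
place_vm("web-frontend-01", True)    # linear workload -> hyper-converged pool
place_vm("oracle-db-01", False)      # storage-heavy -> external SAN pool

print(POOLS)
```

The key point the sketch captures is that both pools sit in one environment, so a workload whose behaviour changes can simply be re-pointed at the other pool rather than migrated across silos.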

Composite architecture is now available. We can offer the best of both worlds in one architecture, help the customer avoid silos, and continue to provide a common management interface. vCenter remains the software through which both internal and external storage are managed, which is what many customers like.

Other vendors either don't offer this capability or, if they have it, don't actively recommend it. So the majority of customers have been led to believe they must choose between converged and hyper-converged.

Going with Composite Architecture

A large insurance organisation in India wanted to optimise the customer experience for their policyholders and reach new customers. To do both, they wanted to enhance their offerings with improved digital services. This, of course, would require increased agility across their IT landscape and a refresh of their legacy hardware, which had previously restricted the scope for innovation.

To develop a highly flexible, shared infrastructure, they identified private cloud and software-defined data centre (SDDC) technologies as the solutions they’d like to pursue.

They deployed a complete private cloud stack.

This environment was backed up to an external SAN using modern data protection software, and they also ordered an integrated hyper-converged appliance with FC HBAs to connect to the external SAN, gaining the flexibility of provisioning storage from both vSAN and the external SAN.

By Pratyush Khare, APAC CTO, Hitachi Vantara