Spelling out NFV and learning from ‘Frederick the Great’

May 30, 2016
For years, telcos have relied on purpose-built hardware designed to meet performance and service-level agreements. Now the switch to NFV brings many advantages, implications and questions

INDIA: Playing Whack-A-Mole? Confusing NFV and NV? Not treating security attacks like diseases? Underestimating network’s role there? B S Nagarajan, Sr. Director, Systems Engineering, VMware India underscores micro-segmentation, SDDC approach, strategic vs. tactical military and some other elements as he dissects the excitement around NFV or Network Function Virtualisation. In this interview, we also try to capture some ringside concerns around security, provisioning, maintenance and CIO-friendliness.

How do you explain NFV’s rise and relevance, especially in terms of key adoption factors?

NFV is often mistaken for network virtualisation, or sometimes for SDN (Software Defined Networking). Network virtualisation is only one part of an NFV solution.


Telcos have traditionally delivered network functions using proprietary hardware, where a specific server—built solely to deliver that service or function—provides it. This purpose-built hardware is designed to meet the performance and service-level agreements required of that function, and does so very well. It is designed and built by network equipment providers (NEPs) and sold to the telcos, who then sell these network functions to their customers.

This model has worked successfully for years, but it carries some challenges around hardware expenses, management and skills.

How? Can you elaborate?

The proprietary hardware brings several pressures:
• CAPEX (capital expenditure): the hardware is very expensive.
• Capacity management: each box fulfils only a single function.
• Support silos: the skills needed to support the hardware are highly specialised, so silos of support and high maintenance costs emerge.
• OPEX (operating expenditure): bringing new services on board is expensive and time-consuming. In other words, there is a lack of agility and responsiveness to business requirements.

When the telco industry looked at what cloud computing promises the enterprise, it found that cloud addressed many of the challenges telcos were struggling with (like cost, agility, and silos). These factors led to the adoption of NFV, which moves network functions off their proprietary hardware appliances and onto a shared cloud platform by virtualising them.

What is different about VMware’s play here with NSX? Also, anything about its Nicira lineage?

VMware NSX is the world’s leading network and security virtualisation platform, providing a full-service, programmatic, and mobile virtual network for virtual machines, deployed on top of any general-purpose IP network hardware. The platform brings together the best of Nicira NVP and VMware vCloud Networking and Security (vCNS) into one unified platform. VMware NSX exposes a complete suite of simplified logical networking elements and services, including logical switches, routers, firewalls, load balancers, VPN, QoS, monitoring, and security. These can be arranged in any topology, with isolation and multi-tenancy, through programmable APIs. The platform is deployed on top of any physical IP network fabric, resides with any compute hypervisor, connects to any external network, and can be consumed by any cloud management platform (e.g. vCloud, OpenStack, CloudStack).
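To make the idea of software-defined logical networking elements concrete, here is a minimal sketch in Python. It is purely illustrative—the class and method names are hypothetical and are not the real NSX API—but it shows the shape of the model: switches, routers, and workload attachments composed programmatically, independent of the physical fabric.

```python
# Illustrative only: a toy object model of logical network elements,
# in the spirit of what NSX exposes via its APIs. All names here are
# hypothetical, not the actual NSX API.

class LogicalSwitch:
    def __init__(self, name):
        self.name = name
        self.ports = []          # workloads attached to this logical switch

    def attach(self, vm):
        self.ports.append(vm)

class LogicalRouter:
    def __init__(self, name):
        self.name = name
        self.interfaces = []     # logical switches this router connects

    def connect(self, switch):
        self.interfaces.append(switch)

# Build a two-tier topology entirely in software, with no change to
# the underlying physical IP network.
web_tier = LogicalSwitch("web-tier")
db_tier = LogicalSwitch("db-tier")
web_tier.attach("web-vm-01")
db_tier.attach("db-vm-01")

edge = LogicalRouter("tenant-router")
edge.connect(web_tier)
edge.connect(db_tier)

print([s.name for s in edge.interfaces])   # the router now joins both tiers
```

A cloud management platform would drive the equivalent operations through the platform’s programmable APIs rather than an in-process object model.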

Does it take into account emerging trends like containers and cloud today?

Yes. To fully understand the value and benefits of NSX, it’s important to look at how NSX addresses three of the most important strategies—and challenges—in networking today: containers, public clouds, and infrastructure management. NSX uniquely takes advantage of all the major shifts in networking that virtualisation has unleashed. Start with containers as the application management layer of choice.

One of the most significant is the emergence of containers as the application management layer of choice. Depending on the type of application deployed, containers offer a number of structural advantages over virtual machines (VMs). One of the most significant is that containers allow you to share the base operating system. You don’t need to make multiple copies of the OS for each different app as you do with VMs.

How does micro-segmentation align with end-to-end encryption or provisioning goals?

The challenge is that this lack of isolation also becomes a container’s most vulnerable security liability. Once inside a container, an attacker can move from app to app and do tremendous damage. Only NSX with micro-segmentation can provide the bullet-proof security solution that containers need. NSX allows you to put the equivalent of mini-firewalls between each container, and to set rules to deny access to the apps inside. NSX also provides visibility into the containers, making container communities easier to monitor and manage.
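The “mini-firewalls between each container” idea can be sketched very simply: a default-deny rule set in which only explicitly permitted flows pass. This is a simplified model for illustration, not the NSX rule engine; the tier names are hypothetical.

```python
# Minimal sketch of micro-segmentation (simplified model, not the NSX
# rule engine): every pairwise flow is denied unless explicitly allowed,
# which blocks lateral movement between workloads.

allow_rules = {
    ("web", "app"),   # web tier may reach the app tier
    ("app", "db"),    # app tier may reach the database
}

def is_allowed(src, dst):
    # Deny by default: an attacker inside "web" cannot hop straight
    # to "db", even though both run on the same shared platform.
    return (src, dst) in allow_rules

print(is_allowed("web", "app"))  # True  - explicitly permitted
print(is_allowed("web", "db"))   # False - lateral movement denied
```

The point of the sketch is the default: with no matching allow rule, traffic between any two workloads is dropped, which is exactly the isolation that plain container co-residency lacks.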

What about Public Clouds and complexity that is inherent there?

Each public cloud comes with its own unique collection of services, including storage, load balancing, and firewall. The challenge arises because modern apps are extremely complex. NSX solves this by treating every app the same, no matter how different or complex it is. That lets you deploy, configure, and secure apps consistently from one cloud to another, quickly and easily. It’s a capability that allows you to run all your apps the same way and deploy them in just minutes, regardless of their number.

How secure is network virtualization? And why would micro-segmentation play a role there?

NSX network virtualization brings an SDDC approach to network security. Its network virtualization capabilities enable the three key functions of micro-segmentation: isolation (no communication across unrelated networks); segmentation (controlled communication within a network); and security with advanced services (tight integration with leading third-party security solutions).

Key benefits of micro-segmentation include:
• Network security inside the data center: Fine-grained policies enable firewall controls and advanced security down to the level of the virtual NIC.
• Automated security for speed and agility in the data center: Security policies are automatically applied when a virtual machine spins up, are moved when a virtual machine is migrated and are removed when a virtual machine is deprovisioned—eliminating the problem of stale firewall rules.
• Integration with the industry’s leading security products: NSX provides a platform for technology partners to bring their solutions to the SDDC. With NSX security tags, these solutions can adapt to constantly changing conditions in the data center for enhanced security.
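The second benefit above—policy that follows the workload through its lifecycle—can be sketched as follows. This is a hedged illustration of the concept only (a toy in-memory store, hypothetical function names), not how NSX implements it.

```python
# Illustrative sketch of lifecycle-bound security policy: a rule is
# keyed to the workload itself, applied at spin-up and removed at
# deprovisioning, so stale firewall rules never accumulate.

firewall_rules = {}   # workload id -> security policy

def provision(vm_id, policy):
    firewall_rules[vm_id] = policy        # policy applied when the VM spins up

def migrate(vm_id, new_host):
    # Because the rule is keyed to the workload, not to a host or an
    # IP address, migration requires no firewall change at all.
    pass

def deprovision(vm_id):
    firewall_rules.pop(vm_id, None)       # the rule leaves with the workload

provision("vm-42", {"allow": ["tcp/443"]})
migrate("vm-42", "host-07")
deprovision("vm-42")
print(firewall_rules)   # {} - nothing stale left behind
```

Contrast this with perimeter firewalls keyed to IP addresses, where rules routinely outlive the machines they were written for.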

What are some of the main security concerns for CIOs? Why has the network suddenly become such a crucial area for security work? Why is a granular approach better? And do abstraction and network programmability always favour better security, or can they sometimes work the other way?

According to PwC, the volume of cyber attacks grew 38 per cent between 2014 and 2015. Even more alarming is the fact that these attacks are consistently becoming more sophisticated and more successful.

Determining how best to defend against this avalanche of cyber attacks is a key priority for every organization moving forward. But, perhaps surprisingly, many organizations are finding it challenging to develop a coherent strategy for cyber security. A recent global survey of C-suite business executives (CEOs, COOs, CFOs), and leading security executives (CIOs and CISOs), revealed that while business leaders tend to think strategically and long-term, security leaders prefer a tactical approach to security, one that focuses on individual solutions to each possible attack.

I think of this as playing Whack-A-Mole.

Explain.

The problem with this tactical approach is that the number and types of attacks are continually growing and evolving. By trying to defend against attacks on all fronts individually, cyber security teams find themselves in the unhappy place Frederick the Great warned his generals against: he who defends everything defends nothing. Cyber security becomes a game of Whack-A-Mole, in which corporate defences cannot be proactive and must simply react to the newest and biggest threat. The sheer number of successful cyber attacks is proof that this reactive, tactical approach to security has reached the limits of its effectiveness. It’s time for a new approach.

So, what’s the answer?

We need a strategic, architectural approach to security, one that aligns a firm’s security strategy with its most important security priorities. For most firms, their most precious asset, according to the EIU survey, is the trust of their customers. Any holistic, strategic cyber security plan begins there.

A flexible, architecture-based defence allows your IT department—once notification of an attack has taken place—to identify, mitigate, and contain the attack. Data breaches are like diseases; if you can spot and treat them early, you can reduce the gravity of the effects. VMware NSX offers organizations the new, architecture-based security solution they need to defend themselves against the growing number and types of cyber threats. The micro-segmentation made possible by these NSX capabilities transforms security by creating the proactive, strategic defence needed to protect an organization’s most valuable assets.
