
C-Special 2: Latency: Are we barking up the wrong tree?

CIOL Bureau

One fine morning, shaken to the core while sipping his coffee, the head of a geriatric postal department in a sleepy state decided that things needed turning around. Private courier companies had started descending like hawks, and the writing on the wall was quite legible in the newspapers.


So he got up and stirred everything around. Soon the railway partners were wheeled off on a hearse and fresh deals were being struck with airline majors. Speed was of the essence in a service industry like this, and he realized that if the time lost in logistics was taken care of, his department would survive and might even wrestle its private rivals out of this squeaker. The supply chain was overhauled with savvier, faster and nimbler distribution points.

Runways instead of roads. Computers instead of typewriters. Stylish stamps instead of museum pieces.

Yet the newspaper headlines never changed. The private players were weaning postal users off the old service faster than ever.


Discombobulated, the Head set up a special committee, and ordered another report.

Typical again. But the answer is not that hard to guess. He had committed the good old mistake of shooting the messenger.

He had hired cargo planes to ferry the letters, but he missed the fact that his liveried postman still delivered them on a bicycle. The sky might have been covered, but the last mile still meant the one dreaded word: delay.

Delivering letters or delivering applications, the pipeline matters till the packet is received.

It’s all about conquering latency.


In network parlance, latency is akin to ‘delay’, and is usually an expression of how much time it takes for a packet of data to get from one defined point to another.

An engineer would tell you that, put simply, it is the time delay between the moment something is initiated and the moment its effects begin.
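To make the definition concrete, here is a minimal sketch of measuring that delay yourself by timing a TCP handshake to a server. The hostname is a placeholder, not something named in this story.

```python
# Time a TCP handshake as a rough proxy for network latency.
# The host below is a placeholder; substitute any reachable server.
import socket
import time

def tcp_connect_latency(host: str, port: int = 443) -> float:
    """Return the seconds taken to establish a TCP connection."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=5):
        pass  # connection opened; the elapsed time is our latency sample
    return time.perf_counter() - start

if __name__ == "__main__":
    delay = tcp_connect_latency("example.com")
    print(f"TCP connect latency: {delay * 1000:.1f} ms")
```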

And a network engineer would tell you how this perennial clog keeps baffling IT users at varying rates and across varying technologies. FTP, or File Transfer Protocol, for instance, deals with it in a painfully slow, unreliable and not-so-suave manner, many say. That could be because the complete packet is transferred every time one resends, with long distances adding a double whammy.
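A rough back-of-envelope sketch shows why chatty, stop-and-wait transfers suffer over long distances: every request-and-acknowledgement exchange pays the full round-trip time. The figures below are illustrative assumptions, not measurements.

```python
# Illustrative numbers: a long-haul link with a 200 ms round trip
# and a 10 Mbit/s pipe, moving a 10 MB file in chunks, where each
# chunk costs one full round trip before the next can be sent.
rtt = 0.200            # seconds per round trip (assumed)
bandwidth = 10e6 / 8   # 10 Mbit/s expressed in bytes per second
file_size = 10e6       # 10 MB

for chunk_size in (64e3, 1e6, 10e6):
    round_trips = file_size / chunk_size
    total = round_trips * rtt + file_size / bandwidth
    print(f"{chunk_size / 1e3:>6.0f} KB chunks -> {total:5.1f} s total")
```

With tiny chunks the round trips dominate; with one large transfer the same link finishes in a fraction of the time, which is the double whammy distance adds to chatty protocols.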


Now, ask any Cloud exponent and you will be told how the arrival of next-generation digital transfer technology, along with the scalability potential and cost savings offered by cloud services, will be the long-awaited Superman. Latency addressed, they say.

But can the Superman fly?

What they claim is not eyewash. But there is more to the story. The damsel in distress is usually the end-user, who for ages has been struggling with latency attacks on the network. Irrespective of how fast and agile a Cloud environment is, the clincher is, after all, a simple question: does the end-user get what s/he wants at a finger’s snap, or is s/he left tapping her/his fingers and cursing the network in frustration?


Cloud means the Internet. And the Internet, contrary to what many still believe, is no field devoid of latency.

Latency across the Internet is typically the culprit behind slow or unresponsive applications and websites, and represents a major issue for cloud computing, agrees Kartik Shahani, Country Manager, RSA India & SAARC.

As he further outlines, geography and network distance play a key role in determining latency: the further the cloud environment is from your internal network systems or the end user, the greater the latency across the network.
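Geography sets a physical floor on that delay: light in optical fibre travels at roughly 200,000 km/s, so distance alone imposes a minimum round-trip time before any processing happens. The sketch below uses approximate great-circle distances as illustrative assumptions.

```python
# Best-case propagation delay over fibre, ignoring routing detours,
# queuing and processing. Distances are rough approximations.
SPEED_IN_FIBRE_KM_PER_S = 200_000

routes_km = {
    "Mumbai -> Singapore": 3_900,
    "Mumbai -> London": 7_200,
    "Mumbai -> US East Coast": 12_500,
}

for route, km in routes_km.items():
    one_way_ms = km / SPEED_IN_FIBRE_KM_PER_S * 1000
    print(f"{route}: ~{one_way_ms:.0f} ms one way, "
          f"~{2 * one_way_ms:.0f} ms round trip, at best")
```

Real paths add routing detours, queuing and processing on top, so measured figures usually sit well above these floors.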


“Cloud infrastructure performance and the ‘network’ must be given equal consideration. They are two sides of the same coin, and the ultimate success of any application deployment in the cloud relies on both aspects performing reliably at a level acceptable to the end-user.”

Dennis Drogseth, Vice President, Enterprise Management Associates, Inc, advises in the same vein: “IT should evaluate all cloud deployments in terms of user experience at the end point.”

Everyone, from IT managers, vendors and SLA sculptors to CIOs, tends to stay blinkered by the word ‘performance’ within the cloud environment when deploying applications to the cloud. The moot point is: can the application’s overall reliability, availability and uptime (or, better still, zero downtime) be guaranteed across the entire delivery chain?


And yes, the end-user’s side matters. To be precise, it matters more than the Cloud’s inside.

Unfortunately, as Shahani seconds, the focus on performance and availability within the cloud environment ignores the aspect of the "network" path by which latency and jitter affect the performance of application content delivery to end users.

What is important to note is that both aspects, the cloud computing platform and the network or "Internet," have the potential to adversely impact the end-user experience, and their combined latency or degraded performance can manifest itself at the user’s end.

Latency, to sum it up, is an aggregate of intra-cloud latency and network or Internet latency. Everything boils down to their total: total system, or systemic, latency.
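In code that aggregation is almost trivially simple, which is exactly why it is so easy to overlook one of the terms. The numbers below are illustrative assumptions, not measurements.

```python
# Systemic latency is the sum of the delay inside the cloud platform
# and the delay on the network path to the end user. Optimising only
# the first term leaves the second untouched. (Illustrative figures.)
intra_cloud_ms = 20    # processing, queuing and storage in the cloud
internet_ms = 150      # Internet path between cloud and end user

systemic_ms = intra_cloud_ms + internet_ms
print(f"Total systemic latency: {systemic_ms} ms")   # 170 ms
```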

Shahani explains how a serious miscalculation can happen, for two reasons:

First, the cloud provider's choice of network carrier shouldn't penalize the cloud user when network performance is degraded.

Second, end users will abandon applications and websites based on the smallest performance delays or downtime, jeopardizing the perceived value of the cloud initiative.

“For these reasons, it is critical that the discussion around cloud latency shift away from IT- or business-unit-defined acceptable levels of latency to end-user behavior judgments as to what level of latency is acceptable,” he stresses.


And latency can cost heavily, very heavily

Consider a study by Aberdeen Group on the performance of web applications: it underlines how a one-second delay reduces customer conversions by seven per cent.
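A quick worked example shows how that percentage turns into money. The traffic, conversion rate and order value below are illustrative assumptions, with the seven per cent cut compounded per second of delay for simplicity.

```python
# How a per-second conversion penalty compounds into lost revenue.
# All baseline figures are assumptions for illustration only.
visitors = 100_000
baseline_conversion = 0.03    # 3% of visitors convert with no delay
avg_order_value = 50.0        # revenue per conversion, in dollars

for delay_s in (0, 1, 2, 3):
    rate = baseline_conversion * (1 - 0.07) ** delay_s
    revenue = visitors * rate * avg_order_value
    print(f"{delay_s} s delay: {rate:.2%} conversion, ${revenue:,.0f}")
```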

These insights are not one-off.

If you happen to spot some interesting findings by Equation Research, you would note that application ‘unavailability’ means more than just a few minutes lost or wasted. It actually means ‘revenue’ down the drain (or pipeline, if you please).

Peak online traffic periods are critical, since more Web visitors mean more revenue opportunities. Consumers’ expectations during peak traffic times, and how they behave if and when they experience poor Web performance, can mean a lot.

What this study (commissioned by Gomez on consumer Internet usage experiences during peak traffic times) rightly highlighted was that a not-so-good application experience can translate into a lost revenue opportunity, a lower customer perception of your company and, if you can face it, even a boost to your competitor's bottom line.

Well, it doesn’t come as a surprise that 78 per cent of site visitors have gone to a competitor's site due to sub-par performance during peak times. And 88 per cent are less likely to return to a site after a poor user experience.

When it comes to end-user requirements, application and website performance, every millisecond is important, emphasizes Shahani.

“End-users expect results fast enough, otherwise they will click away, which has a direct impact on customer satisfaction and, in turn, on revenues.”

But that’s not all there is to latency and how it seeps into balance sheets.

It can sharply impact many other figures.

If you think your SLAs cover all availability metrics adequately and that you are reaping unprecedented savings just because you hopped onto the talk-of-the-town bandwagon called Cloud, think again.

Latency can be trapping you inside an invisible lock-in.

Lock-in. The same dreaded word that used to plague Cloud’s forefathers in the licensed-software era. What’s interestingly impish about this new avatar of lock-in (whether by design or by oversight) is that no one can be blamed for actively creating it. In fact, as one observer nails it accurately: every vendor gains by ignoring it.

To rub salt in the wound, considerable bandwidth and latency degradation results in overcharging the customer, all the while making him feel unscathed, since intra-cloud data exchange is essentially free.

Things remain relatively cheap and fast if one sticks to a single vendor; connecting between one vendor and another is harder on the network front, the two being at a distance from each other, which exacerbates latency.

Cracking the code

Some possible answers to the problem could be a dynamic, intelligent traffic-routing mechanism, improved reliability of IP traffic and better mechanisms to measure performance across the entire Cloud delivery chain.
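As a flavour of what dynamic, intelligent routing could look like, here is a minimal sketch that probes a set of candidate endpoints and sends traffic to whichever answers fastest. The endpoint names are hypothetical placeholders.

```python
# Pick the lowest-latency endpoint by timing a TCP handshake to each.
# Endpoint hostnames are hypothetical, for illustration only.
import socket
import time

ENDPOINTS = ["eu.example-cloud.com", "us.example-cloud.com",
             "ap.example-cloud.com"]

def probe(host: str, port: int = 443) -> float:
    """Time a TCP handshake; return infinity if the host is unreachable."""
    try:
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=2):
            return time.perf_counter() - start
    except OSError:
        return float("inf")

best = min(ENDPOINTS, key=probe)
print(f"Routing traffic to: {best}")
```

Real traffic managers refine this with continuous health checks and geographic hints, but the principle, measure first and route second, is the same.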

Stronger SLAs that bring these latency factors into scope will help further.

Especially with those myopic performance and availability guarantees.

Other darts hitting the board, and closer to the target, could be the concept of cloud storage gateways: if cloud storage is wielded as local storage, wiping out the difference between a storage area network (SAN) and cloud storage, it attacks the latency and bandwidth issues propped up by the WAN.

Startups like Cirtas are talking about taking front-end, block-access storage array controller functionality and plugging it together with WAN optimization, deduplication and compression technologies. This may actually add a slug of cache to help beat access-latency problems.
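The caching idea at the heart of such gateways is straightforward: serve repeat reads locally so that only cold reads pay the round trip across the WAN. Here is a minimal, hypothetical sketch, with fetch_from_cloud standing in for a real object-store call.

```python
# A toy read-through LRU cache in the spirit of a storage gateway.
# `fetch_from_cloud` is a placeholder, not a real API.
from collections import OrderedDict

def fetch_from_cloud(key: str) -> bytes:
    """Stand-in for a slow, high-latency read from cloud storage."""
    return b"...object bytes..."

class GatewayCache:
    def __init__(self, capacity: int = 1024):
        self.capacity = capacity
        self._cache: OrderedDict[str, bytes] = OrderedDict()

    def read(self, key: str) -> bytes:
        if key in self._cache:                # hit: served at local speed
            self._cache.move_to_end(key)
            return self._cache[key]
        data = fetch_from_cloud(key)          # miss: pays the WAN latency
        self._cache[key] = data
        if len(self._cache) > self.capacity:  # evict least recently used
            self._cache.popitem(last=False)
        return data
```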

Shahani suggests efficiency through utilization and automation.

“Resource pooling and a self-managed, dynamically optimized environment dramatically increase IT performance, leveraging existing resources to avoid unnecessary infrastructure latency and investments, and the technology lock-in that follows.”

Other experts opine that the ratification of the 802.3ba standard for 40 and 100GbE in June 2010 might be just the tourniquet for now. Even more bandwidth at the back end can take care of bleeding networks to some extent. The noise being made by 40GbE top-of-rack and backbone switches gets one’s attention here.

More so, as 40GbE could pass the baton to 100GbE and eventually to Terabit networking technologies. The spoke in the wheel, however, is time: all this might take a few years to happen the way we wish it to.

To be or not to be

There are evangelists who would argue in favour of Cloud again, who feel that ‘real’ cloud computing and cloud services are actually the solution to the issues of latency and poor performance. The argument is that computing happens very close to the user, so network latency is reduced to a minimum.

But Cloud is not about just any application any more. It has started covering mission-critical applications, fast and deep, a confident Manoj Chugh, President of EMC India & SAARC, tells us.

Not only that. As Brian Prentice, Research VP at Gartner, reveals, Cloud would soon be more than your backstage crew. It could be the revenue-spinning window through which an enterprise delivers to its end-customers.

This is exactly why end-user latency acquires serious proportions.

The end-user matters. Dennis Drogseth from EMA cautions well: “In the end, IT should look for service providers willing to at least shake hands with that equation.”

Message delivered?