When ICICI Lombard found platinum lines in a cloud

May 16, 2014

MUMBAI, INDIA: One should never blame the weather. After all, thanks to its mood swings, every stubborn slab of ice between two strangers gets broken as they scrape their feet wondering how to strike up a conversation. Whether they wait under a bus stand or brush sleeves at a friend's party, the vagaries of this umbrella-or-sunscreen puzzle always come to rescue them from awkward silences.

Many IT forces and quotidian problems provide the same cloak-room camaraderie to people in this industry. After all, what fun is it to bump into someone at a conference if you cannot whine about an idle server or gossip about latency numbers?


But it looks like Goutam Datta, VP, Technology at ICICI Lombard GIC Ltd., either never meets strangers in odd situations or has better ice-breakers up his sleeve. He recently set out to wipe a perennial gripe out of his enterprise (and his conversation kit), and succeeded.

It was the kind of issue that is usually dismissed as innocuous, or indulged only when embarking on a big new SLA list.

For Datta, though, the subject of non-production computing paraphernalia had become an odd eyesore and a priority. At his company, operations were growing happily alongside ever-growing business numbers. But that also spelled increased automation requirements, and with automation came the need to develop new functionality, test the freshly developed parts, and test multiple combinations of changes. In short, non-production computing eventually grew to 40 per cent of the total computing being billed.

As the issue kept growing into a major contributor to cost, Datta knew it was time for a strong effort to manage and sustain the computing environment, instead of leaving it as a 'cross-the-bridge-when-it-comes' problem, as has been the case in many places.

He also felt an odd urgency about it, because it could become a bottleneck for the business when it came to matching the pace at which they wanted to react to market dynamics.

“We needed an option which would be the answer to all constraints at once, and possibly forward-looking so that obsolescence is not a factor,” he reflects.

Wrestling with Mercury

About a year back, detailed design preparation started as the team got ready to move the non-production computing needs of core applications to a public cloud.

 

They got down to assessing all three forms of cloud implementation during this phase, and reckoned that only public cloud could offer the benefits at their highest order, albeit with its own handful of challenges.

Datta observes that a private cloud was in many ways their existing platform itself; with over 90 per cent virtualization and provisioning/de-provisioning already in place, it would not have been a big technical exercise. But when it came down to administrative overheads, it invited second thoughts.

“Establishing a private cloud on a third-party provider's premises would actually go against the theoretically pure ad-hoc spike-up/spike-down concept, as it often requires earmarking a physical boundary on computing. On the other hand, the hybrid variant required us to extend our internal network onto the partner's network and provide trusted access to ensure ad-hoc consumption of computing capacity there. This was technically a possible option, but with its own regulatory impact and possible security risks,” he reasons.

So the pure public option stood in a different spotlight, weighing in as a truly cloud solution within the gamut of services available with the selected partner, as he shares. Of course, it had to be approached with a few standard security and confidentiality measures, like masking of data before making it available.
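As a rough illustration of that masking step, the sketch below replaces direct identifiers with irreversible tokens before a production extract leaves the internal network. The field names and the hashing choice are assumptions for illustration, not ICICI Lombard's actual scheme:

```python
import hashlib

def mask_record(record: dict) -> dict:
    """Return a copy of the record with direct identifiers masked."""
    masked = dict(record)
    # Replace identifying fields with irreversible tokens, so test data
    # keeps a realistic shape but cannot identify a policyholder.
    for field in ("name", "phone", "email"):
        if field in masked:
            token = hashlib.sha256(masked[field].encode()).hexdigest()[:10]
            masked[field] = f"MASKED-{token}"
    # Non-identifying fields (policy type, premium) pass through for testing.
    return masked

record = {"name": "A. Sharma", "phone": "9800000000",
          "policy_type": "motor", "premium": 4200}
print(mask_record(record)["policy_type"])  # non-sensitive fields survive
```

Deterministic hashing (rather than random values) keeps masked records consistent across test runs, so joins between masked datasets still line up.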

With the decision made, the real hands-in-the-soil work began. What helped considerably was that their on-premise application pools were individually best-of-breed, and each was internally connected to complete a business transaction.

“They were also a completely heterogeneous technology stack, with a variety of versions (and rarely upgraded to the latest),” he reminisces. Challenges and on-the-move readjustments gave this transition a new flavor. While they ran one copy of the production version of each application, the non-production environment had multiple copies of each core application, to test multiple permutations of the changes on the test bed. Enabling this set of applications over public cloud required intelligent selection of the deployment mode (Infrastructure as a Service / Platform as a Service / Software as a Service).
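That deployment-mode selection can be pictured as a simple per-application decision rule. The sketch below is purely illustrative; the application attributes and the rules themselves are assumptions, not the team's actual criteria:

```python
# Illustrative decision rule for picking a cloud deployment mode
# per application during a migration (hypothetical attributes).

def pick_mode(app: dict) -> str:
    # Legacy apps that cannot be fully recompiled need control of the
    # whole OS and runtime, so they go to IaaS.
    if app.get("legacy") and not app.get("recompilable", True):
        return "IaaS"
    # Apps on standard supported runtimes can hand the platform layer
    # to the provider: PaaS.
    if app.get("standard_runtime"):
        return "PaaS"
    # Commodity functions (e.g. alert gateways) can be consumed as SaaS.
    return "SaaS"

apps = [
    {"name": "policy-admin", "legacy": True, "recompilable": False},
    {"name": "claims-portal", "standard_runtime": True},
    {"name": "alert-gateway"},
]
print({a["name"]: pick_mode(a) for a in apps})
```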

“Also, migration of legacy applications without complete compilation was one of the single biggest challenges. This is where a detailed understanding and mastery of the various models available on public cloud was critically required. We also had to take specific technical assistance from some other partners of ours to re-engineer some of the code base,” he shares.

It never rains but it pours

When it came down to implementing internal integration among the migrated applications, the shift opened a can of new challenges that crawled in slowly. Internally, the core policy administration system interacted with 23 foreign systems to complete a business transaction; to make it work on cloud, most of these foreign applications had to be migrated and the interconnectivity configured.

Next came the implementation of a mass email and SMS gateway, so that during testing the applications could generate regular email/SMS alerts for testers and other users. Such alerts were critical from the end-customer point of view. Data was still not residing in Indian geography, or at least there was no assurance of it; on the other hand, the chosen partner's physical and logical security could not be audited as per an individual enterprise's security policy. To muddy the waters further, the regulator was still not very comfortable with the maturity of the service, and there was no clarity on exactly how data is cleaned before a freed resource is put to use by another customer.
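One common pattern for such a test-environment gateway is to redirect alerts that would reach customers in production to the testers instead. The sketch below is a hypothetical illustration (the addresses, environment names, and prefix convention are assumptions, not the actual gateway design):

```python
# Hypothetical test-environment alert router: outside production,
# customer-bound alerts are redirected to a QA inbox.

TESTER_INBOX = "qa-team@example.com"  # assumed address

def route_alert(env: str, recipient: str, message: str) -> tuple:
    """Return (actual_recipient, message) for the given environment."""
    if env != "production":
        # Prefix the body so testers can see the intended recipient.
        return TESTER_INBOX, f"[orig: {recipient}] {message}"
    return recipient, message

to, body = route_alert("uat", "customer@example.com", "Your policy is renewed.")
print(to)  # in non-production, alerts land in the QA inbox
```

Keeping the original recipient visible in the redirected message lets testers verify the alert logic end to end without ever emailing a real customer.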

But Datta and his team did not throw in the towel; they scrubbed away these apprehensions and risk flakes with the power of control, better redundancy planning, uptime assurance, attention to network factors, rigorous integration with other IT components, synchronization, and strong partner management.

As for the hard-to-ignore but often head-in-the-sand issues of latency and end-user concerns, Datta made it a priority to sketch clear boundaries and fresh interfaces.

Rainbows and Pots

All the muscle and brain-power exerted in this cliff-walk did condense into some thunderous returns, both quick and long-term.

First of all, the move saved money. With the cost of running non-production environments down tremendously, the practice enabled through cloud-based infrastructure also helped the organization reduce critical application downtime.

Cost savings also came from intelligent control: ensuring systems are available only when practically needed, and returning the computing capacity to the providing partner the rest of the time, hence saving dollars, he stresses. Now that the non-production systems were no longer hosted in the innermost segment of the internal network, and were actually available over the internet, external partners could be effortlessly engaged to add value to the testing process. This proved critical for many assignments. For Datta, lightning struck in the form of a huge learning bonus from the challenges it created, chiefly maintaining integration among the applications on cloud, which required innovative methods to be put in place.
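The "available only when needed" control can be as simple as a schedule that decides, for each point in time, whether a non-production environment should be running or handed back to the provider. The sketch below is a minimal illustration under assumed working hours, not the team's actual scheduler:

```python
from datetime import time

# Assumed testing window: weekdays, 08:00-20:00. Outside it, the
# environment is deallocated so capacity (and cost) returns to the provider.
TEST_WINDOW = (time(8, 0), time(20, 0))

def desired_state(now: time, weekday: int) -> str:
    """weekday: 0=Monday .. 6=Sunday."""
    start, end = TEST_WINDOW
    if weekday < 5 and start <= now <= end:
        return "running"
    return "deallocated"

print(desired_state(time(10, 30), weekday=1))  # -> running
print(desired_state(time(23, 0), weekday=1))   # -> deallocated
```

A scheduler evaluating this rule every few minutes, and calling the provider's start/deallocate APIs on transitions, is enough to stop paying for idle test capacity overnight and on weekends.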

“Today our non-production computing need is supported through a complete framework enabled over public cloud. The journey taught us a lot about the various cloud models, and about the intricacies involved in migrating legacy applications and building brick by brick over public cloud. We also understood the challenges around data security and their mitigations.”

The project has accomplished its intent. Today it effortlessly enables the business to respond to market dynamics by deploying changes on core applications, maintains long-term cost savings, and helps in detailed testing, performance testing and regression testing.

Think of the intangibles beyond the usual tangibles: this implementation has enabled business teams to go completely location-agnostic, and, if required, tasks can even be assigned to geographically dispersed third-party testing partners.

“It has also resulted in a higher rate of bug reporting in software/application testing and better applications going into production, and that too within a very quick turn-around time, which was otherwise becoming the bottleneck. Going forward, this practice will establish a trend of higher and smoother availability of business applications,” Datta assesses.

Expecting any direct-revenue equation would be a tad unrealistic for now; being in the general insurance business, he shares, one cannot actually generate revenue from this mode.

“However, for us this has been a critical play area to save on the bottom line. But I do feel that when it comes to innovative usage of cloud in other businesses, and particularly startups, cloud can generate a considerable amount of revenue to start with,” he surmises.

No doubt, he has many more interesting and intriguing topics to talk about now that he has managed to convert a so-called unpredictable factor into a favorable, steady climate he is firmly in control of.

So next time you bump into him in an elevator, try asking how Clouds insure him with sunny-side-up mornings so often.
