At the AI Impact Summit 2026, conversations around enterprise AI repeatedly returned to a familiar gap: pilots succeed, scale fails.
In an interaction with CiOL, Rahul Bhattacharya, Partner and AI Leader, EY Global Delivery Services, argued that the reasons have little to do with technology.
“There can be several factors, but the ones we have found to be most important actually have nothing to do with technology,” he said.
Bhattacharya emphasised that organisations often focus on models and proofs of concept, but scaling requires foundational work: governance, operating models, value measurement and change management.
Having a working pilot, he noted, only confirms feasibility. Enterprise value emerges when organisations build structures that support sustained deployment. He described this as a playbook approach to building what he calls an “agentic enterprise”, where technical architecture is only one component among several organisational shifts.
Interview Excerpts
Why do most enterprises’ AI pilots fail to scale into measurable business transformations?
There can be several factors, but what we have found is that the key ones are actually not related to technology. It is about organisations focusing on the right things that create value and the way they go about doing it. Having the right operating model and the right supporting structures matters. Using AI models or LLMs is one part of it, and I would say that is the simpler part. Usually, you run the pilot or the POC, and you know whether it works or it does not.
But when you move to scaling, certain foundational components must be built. There are governance aspects that have to be brought into the organisation. There has to be value measurement, and sometimes measurement of the existing state. Often, we try to improve something without even knowing the baseline: what the number is today and how the improvement will be measured. All of these things are very important, and we typically use a playbook. When implementing an agentic enterprise, we define what that playbook looks like: the six components you have to take care of and how you execute each of them.
Technical architecture is one part of that, but many things around business process, value engineering, change management, organisational design and operating model design play a very important role.
Is generative AI distracting enterprises from building deeper foundational AI capabilities?
I would not necessarily say that. AI as a field has been around for decades; it took shape in the 1950s, in the years after World War II. I personally started working in AI in the 1990s, when I built my first neural network model for a steel plant in India.
AI, machine learning – whatever you call it – has existed for a long time and has strong applications. Right now, the focus is heavily on generative AI and agentic AI. These are not substitutes for each other: traditional predictive AI has its own applications, and generative AI has its own.
There is a lot of excitement around generative AI because the applications it enables are very different and highly visible to consumers. In that sense, AI has become democratised, similar to how search became universal.
Because of that, there is more focus on it, but that does not necessarily take attention away from foundational AI. The principle remains the same: you do not start with the technology; you start with the business problem and then apply the right technologies. That focus is important.
Do you believe AI will impact jobs or take jobs?
Whenever new technology emerges, there is always a fear it will take away jobs. In reality, what usually happens is that jobs change. People need to learn to work differently.
AI is good at certain things, but it performs well only when given the right instructions, context and narrative – things you already hold because nobody knows your job better than you.
Software developers often ask what they should do. My advice is to understand the business domain. Even though AI agents can write code, they will not write the right code unless you explain the domain clearly.
Understanding industry context – how an industry works – becomes critical. Technology professionals need to become more business-savvy to use these tools effectively, write software faster and build solutions more efficiently.
Can India move from an AI services hub to a creator of core AI IP?
It is an interesting question. I was moderating a panel on AI infrastructure and scaling, where we discussed sovereign AI extensively. From a talent perspective, can we do it? Absolutely. India has a critical mass of talent like nowhere else. We are building global systems and global AI from India. The scale, depth and breadth of talent and experience are exceptional.
The question comes down to two things: infrastructure and time. Could we have been further ahead? Yes, but that would have required starting 20 years earlier. These shifts take time; they cannot happen overnight.
Infrastructure is essential; power infrastructure, computing infrastructure and the hardware ecosystem all matter. Can we do it? Yes. Can we still do it? Yes. But it will take time. Talent is unquestionably here.
Is responsible AI a strategic differentiator or still a compliance exercise?
I think it is a necessity. When organisations adopt technology, they can take an aggressive view, a balanced approach or a conservative one. The key question is finding the right balance. Where regulation exists, it becomes law — you must comply. Where regulation does not exist, organisations still need to decide what is right. With any technology that has the potential for significant good, there is also potential for harm.
Controls are necessary, but too much control can slow innovation, while too little can create risk. Finding the right balance is critical. Responsibility involves transparency, security and governance considerations. Responsible AI is ultimately about human responsibility. 'Responsible' is an adjective; 'AI' is a noun. We are assigning a human trait to a non-human system. AI itself is not responsible; humans are responsible for how it is used.
Most harm is not intentional but arises from misunderstanding the technology or overlooking consequences. That is what organisations must guard against.
I often say, do not focus on the technology. There is a line I use frequently: all models are wrong; some models are useful. A model is a representation of reality, so by definition it is incomplete.
Debating which model is better is often not the right question. The real question is whether it is useful. Usefulness starts with the business problem.
Does it make someone’s job easier? Does it enable better outcomes? Does it generate a return on investment? Those are the questions that matter. We must start with the job to be done and then choose the right technology.