For many enterprises, AI adoption still looks fragmented: chatbots in one corner, pilots in another, and a growing sense that value remains just out of reach. While global narratives often focus on models and GPUs, the real challenge inside organisations is far more operational: how to embed AI into everyday work without breaking culture, governance, or trust.
Founded in 2022, Superteams.ai operates at this intersection. Rather than selling AI as a standalone tool, the startup positions AI as a workforce layer, augmenting human teams with AI agents, workflows, and sovereign deployments that fit enterprise realities. Its approach reflects a broader shift underway: AI moving from experimentation to infrastructure.
In a detailed interaction with CiOL, Soum Paul, Founder and CTO of Superteams.ai, explains why AI adoption is less about intelligence and more about data readiness, trust, and organisational design, and how enterprises can move from pilots to production without losing control.
Interview Excerpts
The shift from human-only teams to AI-augmented workforces sounds inevitable, but what does the transition really look like inside traditional enterprises? Which cultural or structural barriers resist this shift the most?
Our data shows that nearly half of India’s medium to large enterprises use AI in some form, with many more in pilot phases. IBM puts active AI deployment at around 59% of enterprise-scale organisations, and NASSCOM’s AI Adoption Index shows over four-fifths are beyond the experimentation stage and actively using AI in their operations.
However, in most enterprises, the shift to adopting AI isn’t an overnight transformation. It starts in one segment of the business: that could be a contact centre, a KYC back-office, a loan underwriting desk, or a real estate sales team drowning in leads and paperwork. You begin by putting an AI co-pilot next to an existing team. The early wins might seem ordinary on paper, but they are powerful: fewer repetitive queries for agents, faster document checks, more accurate data entry. Over six to twelve months, the organisation slowly moves from “AI as a pilot” to “AI as plumbing”, something built into its workflows.
Take, for example, real estate. A traditional developer still runs on site visits, broker networks and Excel sheets. What AI can do is help sales teams gather leads, generate personalised follow-ups, and keep track of conversations across channels. On the operations side, AI tools can read land records and contracts, helping legal and finance teams. As AI works alongside people, it changes the scope of their daily work.
Banking is slightly further ahead, but the pattern is similar. AI sits beside human teams in KYC, collections, and customer support. It flags suspicious patterns in documents, summarises call recordings, and so on. Over time, banks start trusting these systems with higher-stakes tasks, like pre-screening loan applications or prioritising recovery efforts for credit teams.
The resistance that we observe is more cultural than technical. Many managers in traditional sectors still equate value with headcount and visible effort. If a process becomes too “easy”, they might worry that the job will be seen as dispensable. Front-line staff, on the other hand, fear that automation will lead to downsizing, so they resist new tools.
Luckily, some enterprises state quite explicitly that AI is here to remove drudgery, not headcount. They then redesign roles so people move up the value chain rather than out of the organisation. And they treat AI as a long-term capability instead of an “innovation project” run on the side.
Enterprises often overestimate AI’s intelligence and underestimate the importance of data quality. In your experience, what is the biggest misconception businesses hold about building “AI-powered teams,” and how does it derail adoption efforts?
Most data in most organisations is unstructured - tools have attempted to structure this data for the past two decades, but with low accuracy. Modern AI models have an incredible ability to understand unstructured data - text, video, images, audio - and convert it into structured formats. And if you think about it, a lot of repetitive manual workflows stem from unstructured data.
AI is helping us solve this mess of unstructured data by structuring it into formats that are more useful. For instance, it can help convert the scanned image of an invoice into SQL data. Or it can find inconsistencies in contract documents after comparing them against guidelines or rules.
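To make this concrete, here is a minimal sketch of the invoice-to-SQL step, assuming the scan has already been turned into plain text. In a real pipeline the extraction would be an OCR-plus-LLM call returning structured fields; a regular expression stands in for that model call here, and the field names and table schema are illustrative assumptions, not Superteams.ai's actual pipeline.

```python
# Minimal sketch: turning the text of a scanned invoice into structured SQL rows.
import re
import sqlite3

SAMPLE_OCR_TEXT = """
Invoice No: INV-2041
Date: 2025-11-03
Vendor: Acme Facilities Pvt Ltd
Total Amount: 84,500.00 INR
"""

def extract_invoice_fields(raw_text: str) -> dict:
    """Stand-in for the model-driven extraction step."""
    patterns = {
        "invoice_no": r"Invoice No:\s*(\S+)",
        "invoice_date": r"Date:\s*([\d-]+)",
        "vendor": r"Vendor:\s*(.+)",
        "total_amount": r"Total Amount:\s*([\d,]+\.\d{2})",
    }
    fields = {}
    for name, pattern in patterns.items():
        match = re.search(pattern, raw_text)
        fields[name] = match.group(1).strip() if match else None
    return fields

def load_into_sql(fields: dict, conn: sqlite3.Connection) -> None:
    """Write the structured record into a relational table."""
    conn.execute(
        """CREATE TABLE IF NOT EXISTS invoices
           (invoice_no TEXT, invoice_date TEXT, vendor TEXT, total_amount TEXT)"""
    )
    conn.execute(
        "INSERT INTO invoices VALUES (:invoice_no, :invoice_date, :vendor, :total_amount)",
        fields,
    )
    conn.commit()

if __name__ == "__main__":
    record = extract_invoice_fields(SAMPLE_OCR_TEXT)
    connection = sqlite3.connect(":memory:")
    load_into_sql(record, connection)
    print(record)
```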
Of course, you cannot drop an AI model into a messy data backbone and expect it to triumph. AI is a good student - it will reproduce whatever you feed into it. If your data is scattered across databases and teams, stored in disparate formats, or riddled with gaps, your AI responses will reflect those incompatibilities.
And this is something we discuss with every business before taking on projects. We emphasise that some groundwork needs to be done: understanding the current data architecture, figuring out how data needs to be made AI-ready, or deciding which role gets access to what data. For example, in real estate, approvals, land records, customer conversations, and payment schedules may all live in different systems, so one will have to figure out ways to bring them together, and build agentic workflows after that. So, there’s quite a bit of legwork involved.
Another big misconception is around people. Many organisations think an “AI-powered team” means fewer people. That is not necessarily true. It means different work for the same people. If people believe the technology is a threat, they will resist it: not sharing edge cases, not correcting outputs, not trusting recommendations. And without that early human feedback, the system will never improve.
Workflows involving LLMs, private data sources, and proprietary knowledge graphs raise critical trust issues. How far are we from establishing standards for accountability, versioning, and explainability of AI-driven decisions in daily operations?
Right now, what we really have are good practices rather than industry-wide standards. In most serious deployments we do, we end up building our own “mini-standards”: every workflow gets a clearly tagged model version, and prompts are treated almost like code. We also ensure that every output is auditable and traceable, so that we can explain, evaluate, and verify the output.
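As an illustration of what “prompts treated almost like code” can look like, here is a minimal sketch of a prompt registry with explicit versions and content hashes, so any output can be tied back to the exact prompt text that produced it. The workflow names and structure are assumptions made for the example, not a description of Superteams.ai's internal tooling.

```python
# Minimal sketch of a versioned prompt registry (illustrative names only).
import hashlib
from dataclasses import dataclass

@dataclass(frozen=True)
class PromptVersion:
    workflow: str        # e.g. "kyc-document-check"
    version: str         # bumped like a code release, e.g. "1.4.0"
    model: str           # model identifier the prompt was validated against
    template: str        # the prompt text itself

    @property
    def content_hash(self) -> str:
        """Fingerprint of the prompt text, stored alongside every output."""
        return hashlib.sha256(self.template.encode("utf-8")).hexdigest()[:12]

class PromptRegistry:
    def __init__(self) -> None:
        self._prompts: dict[tuple[str, str], PromptVersion] = {}

    def register(self, prompt: PromptVersion) -> None:
        key = (prompt.workflow, prompt.version)
        if key in self._prompts:
            raise ValueError(f"{key} already registered; bump the version instead")
        self._prompts[key] = prompt

    def get(self, workflow: str, version: str) -> PromptVersion:
        return self._prompts[(workflow, version)]

registry = PromptRegistry()
registry.register(PromptVersion(
    workflow="kyc-document-check",
    version="1.4.0",
    model="model-x-2025-06",
    template="Check the attached KYC document against policy {policy_id}...",
))
prompt = registry.get("kyc-document-check", "1.4.0")
print(prompt.version, prompt.content_hash)
```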
In regulated sectors like banking and insurance, for instance, if an AI system is used to score credit, pricing, or risk, you need a human-understandable rationale. So we often combine an LLM with more traditional, interpretable models and strict business rules. The LLM might summarise or converse, but the underlying decision logic is still something a regulator can read on paper.
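A minimal sketch of that division of labour, under assumed thresholds and field names: the decision comes from explicit rules a regulator can read on paper, while the language-model step (stubbed here) only phrases the rationale and never decides.

```python
# Illustrative only: rule-based decision, LLM used purely for wording.
from dataclasses import dataclass

@dataclass
class LoanApplication:
    monthly_income: float
    existing_emi: float
    credit_score: int

def rule_based_decision(app: LoanApplication) -> tuple[str, list[str]]:
    """Interpretable decision logic a regulator can read on paper."""
    reasons = []
    if app.credit_score < 650:
        reasons.append(f"credit score {app.credit_score} below 650 threshold")
    if app.existing_emi > 0.5 * app.monthly_income:
        reasons.append("existing EMIs exceed 50% of monthly income")
    decision = "refer-to-officer" if reasons else "eligible-for-fast-track"
    return decision, reasons

def summarise_with_llm(decision: str, reasons: list[str]) -> str:
    """Stub for the LLM step: it only rewords the rationale, never decides."""
    if not reasons:
        return f"Application marked {decision}: all rule checks passed."
    return f"Application marked {decision}: " + "; ".join(reasons) + "."

app = LoanApplication(monthly_income=60000, existing_emi=35000, credit_score=700)
decision, reasons = rule_based_decision(app)
print(summarise_with_llm(decision, reasons))
```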
In less regulated areas – like internal knowledge assistants or real-estate sales co-pilots – people tolerate a lot more “black box” behaviour as long as the outcomes look sensible and there’s a human in the loop. In such scenarios, the role of AI is to reduce manual effort and act as an assistant - so as long as exact sources of information are accurately referenced, it can be a great time-saving device.
The messy bit is data. Every AI problem is a data engineering problem in disguise, and most organisations lack AI-ready data. AI models also vary widely in their capabilities: some have stronger reasoning, while others reason less well but offer larger context windows. Typical AI workflows combine a range of models at different steps, depending on what each is good at. This makes it extremely challenging to create a universal standard. That’s where we’re going to see the next wave of tooling: proper model versions, prompt registries, lineage of which data was used, and dashboards that show, “This decision came from model X, version Y, fine-tuned on dataset Z, using policy A.”
If you ask, “How far are we?” I’d say we’re in the equivalent of the early internet era, where everyone has their own approach. Over the next three to five years, I expect three things to become non-negotiable in daily operations (see the sketch after this list):
- You must be able to tell which model version and prompt made a decision,
- You must be able to show which systems and data sources it touched, and
- You must have a clear escalation path for humans to override it.
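One way to make those three points concrete is a per-decision record like the sketch below; the field names and values are illustrative assumptions rather than an existing standard.

```python
# Illustrative decision record: model/prompt version, data sources, escalation.
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    workflow: str
    model_version: str
    prompt_version: str
    data_sources: list           # systems and datasets the agent read from
    decision: str
    escalation_owner: str        # the human role that can override the outcome
    overridden: bool = False
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = DecisionRecord(
    workflow="collections-prioritisation",
    model_version="model-x-2025-06",
    prompt_version="2.1.0",
    data_sources=["crm.accounts", "core-banking.payments", "call-summaries"],
    decision="prioritise-account-4417",
    escalation_owner="credit-operations-lead",
)

# Persisting records like this gives auditors a queryable, line-by-line trail.
print(json.dumps(asdict(record), indent=2))
```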
The enterprises that start behaving as if those standards already exist will have a much easier time when regulators eventually catch up.
There is a growing conversation around “agent autonomy.” At what point does too much autonomy become a risk for enterprise governance, and where should the line of human-in-the-loop be drawn?
Autonomy is brilliant for moving information around, but it becomes dangerous the moment it can move money, change records, or commit the company to something without a human noticing.
In practice, I think about three layers.
At the bottom layer, you have “read and recommend” autonomy. The agent can fetch data from multiple systems, join it, run checks, and propose an action: a draft email, a risk score, a suggested discount, a list of overdue invoices to chase. Here, you can give it quite a lot of freedom, because nothing happens until a person clicks “approve” or edits the output.
The second layer is “low-stakes, reversible actions”. Things like creating internal tickets, sending meeting summaries, tagging CRM records, or reordering non-critical inventory. If an agent makes a mistake here, you can correct it tomorrow. You still keep logs and alerts, but you don’t need a human to approve every move. Many back-office workflows can live in this zone quite comfortably.
The top layer is where autonomy becomes a governance risk:
- anything that moves money (pay-outs, refunds, disbursals),
- anything that alters legal, KYC or contract data,
- anything that commits to an offer, price, or binding promise to a customer,
- and anything that can create a regulatory exposure.
Here, the agent should never have full autonomy. It can prepare the decision, document the rationale, and even simulate outcomes, but someone from the organisation owns the final click. In a bank, that might mean an agent pre-screens 1,000 loan applications and surfaces 150 for fast-track approval, but a credit officer still signs off.
So where do you draw the line? My view is simple (see the sketch after this list):
- Let agents be fully autonomous only in workflows that are low-risk and easy to roll back.
- Keep a human firmly in the loop whenever money, compliance, or reputation are on the line.
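A minimal sketch of how those layers might be enforced in code, with illustrative action names and tier assignments; the point is simply that high-stakes actions never execute without a named human approver.

```python
# Illustrative autonomy tiers for agent actions.
from enum import Enum
from typing import Optional

class Tier(Enum):
    READ_AND_RECOMMEND = 1   # agent proposes, a human approves or edits
    REVERSIBLE = 2           # agent acts on its own, logged, correctable later
    HIGH_STAKES = 3          # money, legal/KYC data, binding commitments

ACTION_TIERS = {
    "draft_followup_email": Tier.READ_AND_RECOMMEND,
    "create_internal_ticket": Tier.REVERSIBLE,
    "tag_crm_record": Tier.REVERSIBLE,
    "issue_refund": Tier.HIGH_STAKES,
    "update_kyc_record": Tier.HIGH_STAKES,
}

def execute(action: str, approved_by: Optional[str] = None) -> str:
    # Unknown actions default to the most restrictive tier.
    tier = ACTION_TIERS.get(action, Tier.HIGH_STAKES)
    if tier is Tier.READ_AND_RECOMMEND:
        return f"DRAFTED: '{action}' awaiting human review before anything is sent"
    if tier is Tier.HIGH_STAKES and approved_by is None:
        return f"BLOCKED: '{action}' queued for human sign-off"
    return f"EXECUTED: '{action}' (tier={tier.name}, approved_by={approved_by})"

print(execute("create_internal_ticket"))
print(execute("issue_refund"))
print(execute("issue_refund", approved_by="credit-officer"))
```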
AI talent scarcity has forced many Indian enterprises into patchwork adoption: tools here, pilots there. Do fractional AI teams genuinely solve this gap, or are they a temporary fix until organisations upskill internally?
A fractional team is really a borrowed R&D unit. In India, most traditional enterprises simply don’t have the in-house mix of people you need for serious AI work: someone who understands the models, someone who can wrangle messy data from legacy systems, someone who knows the business rules, and someone who can productionise things safely. You’re not going to hire that full stack on day one.
The important thing is what you’re asking the fractional team to create. If you ask them “build us a tool”, you’ll get a tool. If you ask them “help us design three or four core AI workflows, prove the ROI, and teach our own people to run them”, you get something much more durable. In our experience, the most successful businesses do three things:
They ring-fence two or three high-value use cases – say compliance checks, support-call analysis, or sales co-pilots – and treat them as a proper R&D track, not a side project.
They pair the fractional team with an internal “shadow” team from day one: business, data, and IT sitting together, learning by doing.
They insist on assets, not just outputs: documented data pipelines, model and prompt registries, governance playbooks, internal training sessions.
So, fractional teams act as a very effective bridge. They help businesses avoid the two extremes – doing nothing, or hiring a full AI department before they know what works.
As sovereign AI models rise in India and globally, how do you see the enterprise AI stack evolving? Will companies shift to their own controlled AI infrastructure, or will convenience continue to push them toward global hyperscalers?
Sovereign AI is a critical piece - and the ideal scenario for an enterprise is to control its own AI infrastructure. But the truth is that it won’t be an either–or. For the next decade, most serious Indian enterprises are going to live in a mixed world.
On one side, you’ll have sovereign or locally controlled models – AI workflows that understand local regulatory context, hosted on infrastructure that satisfies data-residency and sectoral rules. BFSI, healthcare, and parts of government will lean in this direction. If you’re a bank, for example, you’ll use open-weight models in sandboxes, and your production stack will be built on Indian data centres, controlled networking, and models that can be inspected, benchmarked, and, if needed, switched out. Enterprises will leverage sovereign AI in any workflow where sensitive data or regulatory compliance is involved.
On the other side, convenience will keep pulling people towards platform models. The reality is that if you want to experiment quickly, or if you’re running something that isn’t deeply sensitive, like an internal knowledge assistant or a marketing co-pilot, it is very hard to beat the pace of the big AI companies. They give you the plumbing, monitoring, and scaling almost out of the box.
So what changes in the enterprise stack?
First, I expect the “model layer” to become much more interchangeable. Rather than hard-wiring one large model everywhere, you’ll see routing: one part of a query goes to a sovereign model running on your own infrastructure, another goes to a platform model, another to a small specialist model. The application doesn’t care as long as latency, cost, and policy requirements are met.
Second, control shifts from “who hosts the GPU” to “who owns the stack”. Enterprises will demand very clear levers: where logs are stored, how long they are retained, what data is allowed to leave the VPC, and which jurisdictions are involved. That pushes even the hyperscalers to offer more sovereign-style options.
Third, Indian companies that are serious about AI will start treating infrastructure as a strategic choice, not an afterthought. An insurer might run its most sensitive models on Indian GPU clouds or on-prem clusters, connect them to an internal vector store and knowledge graph, and then selectively use hyperscaler APIs for less critical tasks.
What we might see is a layered approach (see the routing sketch after this list):
- sovereign or tightly controlled models for core data and regulated decisions,
- AI platforms for speed, experimentation, and less sensitive workloads,
- and an orchestration layer in the middle that lets you swap models and infra without rewriting everything.
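A minimal sketch of what that orchestration layer could look like, with hypothetical endpoint names and sensitivity labels; routing follows policy first, capability second, convenience last.

```python
# Illustrative router: sovereign vs platform vs specialist model endpoints.
from dataclasses import dataclass

@dataclass
class Request:
    task: str
    sensitivity: str           # "regulated", "internal", or "public"
    needs_long_context: bool = False

MODEL_POOL = {
    "sovereign": "on-prem-llm.internal.bank",        # Indian DC / on-prem cluster
    "platform": "hyperscaler-llm.example-cloud",     # convenience and scale
    "specialist": "long-context-llm.example-cloud",  # large context window
}

def route(req: Request) -> str:
    """Policy first, capability second, convenience last."""
    if req.sensitivity == "regulated":
        return MODEL_POOL["sovereign"]
    if req.needs_long_context:
        return MODEL_POOL["specialist"]
    return MODEL_POOL["platform"]

print(route(Request(task="pre-screen loan application", sensitivity="regulated")))
print(route(Request(task="summarise 300-page tender", sensitivity="internal",
                    needs_long_context=True)))
print(route(Request(task="draft marketing copy", sensitivity="public")))
```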
As AI matures from experimentation to infrastructure, enterprises are being forced to confront uncomfortable truths about data readiness, governance, and organisational design. Superteams.ai’s approach reflects a pragmatic shift: treating AI not as a product, but as a workforce capability that must earn trust over time.
For enterprises navigating this transition, the lesson is clear: AI success depends less on intelligence and more on integration.