AI Impact Summit 2026: Identity Emerges As AI’s Next Security Layer

At the AI Impact Summit, miniOrange CEO Anirban Mukherji warns that AI scale depends on machine identity, zero-trust, and governance as autonomous agents gain enterprise access.

Manisha Sharma
Anirban Mukherji, founder and CEO of miniOrange

At the India AI Impact Summit 2026, one theme moved beyond infrastructure and models: identity.


As enterprises push AI from experimentation into operations, identity and access management is emerging as a foundational control layer.

Anirban Mukherji, founder and CEO of miniOrange, argues that the industry is entering a phase where the core security question is no longer who is using AI but what AI is allowed to do.

His central message: AI adoption without identity frameworks creates invisible risk.

Interview Excerpts:

As AI systems begin making autonomous decisions, how should identity frameworks evolve from authenticating users to authenticating machine agents and model-to-model interactions?

For decades, identity systems were designed around humans. We built passwords, OTPs, biometric scans, access cards and Aadhaar-linked eKYC to confirm that a person is who they claim to be. Authentication was the gateway through which humans interacted with software.

That assumption no longer holds. AI agents now read emails, schedule meetings, initiate payments, generate code and even communicate with other AI systems. They operate independently and at machine speed. The pressing question is no longer “Who is the user?” but “Who is the agent acting on the user’s behalf?”


Identity frameworks must therefore expand beyond human authentication to include machine identity. Every AI agent should possess its own unique, verifiable digital identity — not a shared API key or a reused admin login. An agent should be able to declare, in policy terms, “I am EmailBot, operating from a specific data centre, and I am authorised only to read emails, not send them or access other systems.”

This shift requires:

  • Dedicated digital identities for AI agents

  • Strictly scoped permissions based on role

  • Authentication protocols for model-to-model interactions

  • Continuous monitoring and detailed audit trails

India demonstrated population-scale digital identity through Aadhaar, administered by the Unique Identification Authority of India, and transaction-scale verification through UPI, operated by the National Payments Corporation of India. A similar concept, a “machine Aadhaar”, is now required for autonomous systems.

Identity governance must evolve from managing people to managing both people and machines.
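The scoped-identity model Mukherji describes can be sketched in a few lines. This is a minimal, hypothetical illustration, not any vendor's API: the agent name, origin field, and `email:read`/`email:send` scope strings are all assumptions chosen to mirror the EmailBot example above.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentIdentity:
    """A unique, verifiable identity for one AI agent (hypothetical model)."""
    agent_id: str       # e.g. "EmailBot" — never a shared API key or admin login
    origin: str         # the data centre the agent is declared to operate from
    scopes: frozenset   # strictly scoped permissions tied to the agent's role

def is_authorised(agent: AgentIdentity, action: str) -> bool:
    """Allow an action only if it falls within the agent's declared scopes."""
    return action in agent.scopes

# EmailBot may read emails but not send them or access other systems.
email_bot = AgentIdentity(
    agent_id="EmailBot",
    origin="dc-mumbai-1",
    scopes=frozenset({"email:read"}),
)

print(is_authorised(email_bot, "email:read"))   # True
print(is_authorised(email_bot, "email:send"))   # False
```

In practice the scope check would sit behind an authentication protocol and feed an audit trail, as the list above requires; the point of the sketch is that permission is evaluated per agent and per action, not inherited from a human user's session.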

In AI-driven enterprises, does zero-trust architecture become more critical or more complex when algorithms themselves initiate actions across systems?

It becomes both more critical and more complex.

Zero Trust is built on a simple principle: never trust, always verify. In a traditional enterprise, this applied primarily to users and devices. In an AI-driven enterprise, algorithms themselves initiate actions across CRM systems, payment gateways, HR databases and production servers.


The risk is obvious. A compromised AI agent can traverse multiple systems within seconds. Yet many organisations deploy AI with broad access rights “just in case” it needs them. That is equivalent to giving a delivery driver keys to your entire house because they might need to enter the kitchen.

Zero Trust in the AI era demands:

  • Just-in-time access rather than persistent permissions

  • Behavioural monitoring, not merely login verification

  • Real-time anomaly detection for agent activity

  • Immediate revocation when behaviour deviates from policy

Trusting an AI’s function is not enough. Every action it initiates must be verified against context, necessity and risk.
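The "just-in-time access rather than persistent permissions" requirement can be sketched as a short-lived grant that expires on its own and can be revoked the moment behaviour deviates from policy. This is an illustrative toy, assuming a hypothetical `JustInTimeGrant` abstraction; real deployments would use signed, server-issued tokens.

```python
import time

class JustInTimeGrant:
    """A short-lived permission grant: access expires instead of persisting."""

    def __init__(self, agent_id: str, action: str, ttl_seconds: float):
        self.agent_id = agent_id
        self.action = action
        self.expires_at = time.monotonic() + ttl_seconds
        self.revoked = False

    def revoke(self) -> None:
        """Immediate revocation when monitoring flags anomalous behaviour."""
        self.revoked = True

    def is_valid(self) -> bool:
        """Verify on every use: the grant must be unrevoked and unexpired."""
        return not self.revoked and time.monotonic() < self.expires_at

# Grant an agent 60 seconds of CRM read access for one task.
grant = JustInTimeGrant("EmailBot", "crm:read", ttl_seconds=60)
print(grant.is_valid())   # True

# Behavioural monitoring detects a deviation -> revoke immediately.
grant.revoke()
print(grant.is_valid())   # False
```

The design choice mirrors the principle in the answer: validity is re-checked at every action, so a compromised agent loses access within one policy cycle rather than retaining standing permissions.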


India’s data protection regime is still maturing; are current compliance models sufficient for AI systems that continuously learn, adapt, and reprocess personal data?

Not yet. India’s Digital Personal Data Protection Act is a strong foundational step. However, it was not specifically designed for AI systems that continuously learn, infer and generate new forms of data.

Traditional compliance models focus on consent at the point of data collection. AI complicates this because it does more than store data; it generates derived insights. Even if direct identifiers are removed, an AI model may infer sensitive attributes such as income, medical conditions or personal relationships.


For example, if a person uploads WhatsApp conversations into an AI assistant to organise their schedule, the model may later infer behavioural patterns or financial indicators that were never explicitly provided. These inferences can themselves become identifiable data.

Current regulatory frameworks do not fully address:

  • Ownership of inferred data

  • Long-term model memory and retraining governance

  • Transparency around automated inference

  • Revocation of consent after model training

As AI systems continuously adapt, compliance must move beyond static consent models towards dynamic oversight and explainability mechanisms.

In sectors like BFSI, healthcare and critical infrastructure, what new cybersecurity vulnerabilities emerge when AI models are embedded directly into operational workflows?

Every new system introduces new attack surfaces. When AI becomes embedded in operational workflows, it moves from being a support tool to becoming a decision-maker.

In BFSI and healthcare, this shift carries significant risk. An AI model can be manipulated through malicious inputs, corrupted training data or unauthorised tool access. The consequences are not theoretical: a poisoned model could approve fraudulent loans, a hallucinating assistant could schedule the wrong procedure, or a hijacked agent could transfer funds to a fraudulent account.

Beyond direct compromise, AI has also become a tool for attackers. Deepfakes, synthetic voice cloning and AI-generated phishing content have made scams far more sophisticated. The rapid rise of UPI fraud and “digital arrest” scams illustrates how easily AI-generated deception can be weaponised.

Mitigation requires layered safeguards:

  • Human-in-the-loop validation for high-risk decisions

  • Strong output verification mechanisms

  • Segmented system access

  • AI-specific red-team testing

  • Advanced fraud detection tuned for synthetic content

In critical sectors, proactive defence must replace reactive damage control.
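The first safeguard on the list, human-in-the-loop validation for high-risk decisions, amounts to a simple gating pattern: actions above a risk threshold are held for explicit human approval instead of executing automatically. A minimal sketch follows; the risk scores, threshold, and action names are hypothetical, and a production system would combine this with the output verification and segmented access also listed above.

```python
from typing import Callable

def execute_with_oversight(action: str,
                           risk_score: float,
                           threshold: float,
                           approve: Callable[[str], bool]) -> str:
    """Run an AI-initiated action, but gate high-risk ones behind a human."""
    if risk_score >= threshold:
        # High-risk path: require explicit human sign-off before executing.
        if not approve(action):
            return "blocked"
    return "executed"

# A routine lookup runs automatically; a large fund transfer is held
# for review, and blocked here because the reviewer declines it.
print(execute_with_oversight("lookup_balance", 0.10, 0.80,
                             approve=lambda a: False))   # executed
print(execute_with_oversight("transfer_funds", 0.95, 0.80,
                             approve=lambda a: False))   # blocked
```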

How should organisations think about accountability when an AI system is compromised: is it a cybersecurity breach, a governance failure, or an identity management gap?

In most cases, it is all three simultaneously. If an AI system is compromised, there may have been a cybersecurity lapse. However, there is often also a governance failure: no one questioned whether the AI should have been granted that level of authority in the first place. Finally, there is frequently an identity management gap: the system was over-permissioned or insufficiently monitored.

Many Indian companies conduct ransomware simulations. Few conduct drills for scenarios such as:

  • An AI assistant leaking PAN details

  • A chatbot autonomously modifying customer records

  • An automated agent deleting production data

Boards often treat AI as just another software upgrade. Yet AI systems make independent decisions and act across organisational boundaries. That demands structured oversight, defined incident playbooks and clear accountability lines.

Training is equally important. Employees must understand how to use AI tools responsibly, and engineers must understand how to deploy them securely. Without preparation, accountability during a crisis becomes fragmented and defensive.

As enterprises rush to deploy generative and agentic AI, are CISOs being brought into strategic AI decision-making early enough, or are they still reacting post-deployment?

In many organisations, CISOs are still brought in after deployment rather than during strategic planning.

Competitive pressure has driven rapid adoption of cloud-hosted AI models, often operated outside India. This raises concerns not only about cybersecurity but also about sovereignty, continuity and geopolitical exposure.

A notable example was when Microsoft suspended services such as Microsoft Outlook and Microsoft Teams for Nayara Energy in compliance with sanctions. The incident demonstrated that access to digital infrastructure can be influenced by geopolitical developments.

If an organisation’s core operations depend on AI models hosted abroad, key questions arise:

  • Where does the data reside?

  • Who controls the model?

  • Can the organisation function if access is suddenly restricted?

The CISO must evolve from a reactive security gatekeeper into a strategic risk advisor and business continuity architect. AI strategy is no longer solely a technology decision; it is a matter of national alignment, operational resilience and long-term independence.

AI systems now authenticate, decide, transact and infer at scale. Yet our identity, governance and compliance frameworks were built for a human-centric world.

The future requires a dual framework, one that governs both human identities and machine agents with equal rigour, visibility and accountability.