Infosys Knowledge Institute (IKI), the research arm of Infosys (NSE, BSE, NYSE: INFY), today unveiled critical insights into the state of responsible AI (RAI) implementation across enterprises, particularly with the advent of agentic AI. The report, Responsible Enterprise AI in the Agentic Era, surveyed over 1,500 business executives and interviewed 40 senior decision-makers globally. The findings show that while 78% of companies see RAI as a business growth driver, only 2% have adequate RAI controls in place to safeguard against reputational risk and financial loss.
The report analyzed the consequences of poorly implemented AI, such as privacy and ethical violations, bias, and regulatory non-compliance. It found that 77% of organizations reported financial loss and 53% suffered reputational damage from such AI-related incidents.
Key Findings from the Report
The report highlights a significant gap between the rapid adoption of AI and the readiness to manage its risks.
AI risks are widespread and can be severe: 95% of C-suite and director-level executives report AI-related incidents in the past two years, with 39% characterizing the damage as “severe” or “extremely severe.” Furthermore, 86% of executives aware of agentic AI believe it will introduce new risks and compliance issues.
Responsible AI (RAI) capability is patchy: Only 2% of companies, termed “RAI leaders,” met the full standards of the Infosys RAI capability benchmark. Thanks to their robust controls, this small group experienced 39% lower financial losses and 18% lower severity from AI incidents than their peers.
Executives view RAI as a growth driver: 78% of senior leaders see RAI as aiding their revenue growth. Even so, companies believe they are underinvesting in RAI by an average of 30%.
Infosys Recommendations for a Proactive RAI Approach
With the scale of enterprise AI adoption far outpacing readiness, companies must urgently shift from treating RAI as a reactive compliance obligation to embracing it proactively as a strategic advantage. To help organizations build scalable, trusted AI systems, Infosys recommends the following actions:
Learn from the leaders: Study the practices of high-maturity RAI organizations that have already developed robust governance.
Embed RAI guardrails into secure AI platforms: Use platform-based environments that enable AI agents to operate within pre-approved data and systems.
Establish a proactive RAI office: Create a centralized function to monitor risk, set policy, and scale governance.
Leadership Perspectives on Responsible AI
Balakrishna D.R., EVP – Global Services Head, AI and Industry Verticals, Infosys, said, “Drawing from our extensive experience, we have seen firsthand how delivering more value from enterprise AI use cases would require enterprises to first establish a responsible foundation built on trust, risk mitigation, and data governance. This also means emphasizing ethical, unbiased, safe, and transparent model development. To realize the promise of this technology in the agentic AI future, leaders should strategically focus on platform and product-centric enablement, and proactive vigilance of their data estate.”
Jeff Kavanaugh, Head of Infosys Knowledge Institute, said, “Today, enterprises are navigating a complex landscape where AI's promise of growth is accompanied by significant operational and ethical risks. Our research clearly shows that while many are recognizing the importance of Responsible AI, there's a substantial gap in practical implementation. Companies that prioritize robust, embedded RAI safeguards will not only mitigate risks and potentially reduce financial losses but also unlock new revenue streams and thrive as we transition into the transformative agentic AI era.”