The Need to Simplify the Science of AI

The need to simplify AI and make it more understandable arises from several factors.

CIOL Bureau

As AI is democratized across enterprises, it is imperative to make the inner mechanisms of the AI ecosystem comprehensible to those who are not directly on its front lines. The need arises because AI is often treated as a “black box”: attention centers on inputs and outputs, and data scientists invest little effort in making the systems explainable. This is exacerbated by the fact that deep learning models are opaque even to skilled data scientists. Some popular large language models have over 100 billion parameters, and it is a herculean task to shed light on how such models fundamentally operate.


Why simplify AI

The need for simplifying AI to make it more understandable arises due to various factors.

Ensuring collaboration across the enterprise: With AI projects becoming increasingly cross-cutting and interdisciplinary, enterprises need to involve a diverse spectrum of end-users, business stakeholders, domain consultants, and others. As new applications emerge in law, finance, marketing, and other industries, this need grows stronger. The more understandable, accessible, and less intimidating we make AI, the more it paves the way for innovation.


Building a steady stream of highly skilled AI talent: Simplifying AI so that it can reach the masses is crucial to its evangelization. Lowering the entry barrier makes the field accessible to people of varying technical expertise. This can also promote diversity and inclusiveness in the field and help ensure that AI is developed and used equitably, free from biases caused by under-representation.

Trust, Transparency, and Accountability: End-users can trust AI predictions only if they can comprehend them. For example, a medical practitioner who must recommend a treatment based on an AI prediction needs to be confident in it. When the science of AI is simplified, the right stakeholders can also be held accountable for any deviation from acceptable performance.

Increasing adoption: Simplifying the mechanisms of AI for various stakeholders has a direct bearing on adoption. For example, if AI is used to make an investment decision, stakeholders will want to know the rationale behind it. Securing stakeholder buy-in is often the biggest obstacle to scaling AI from simple point solutions to the enterprise level, and comprehensible AI makes that buy-in far easier to obtain.


Exposing Risks and building Responsible AI: Demystifying the concepts that power AI also sheds light on the various forms of risk in AI projects and ensures that decision-makers understand its ethical and moral dimensions. Simplified AI helps users make informed decisions about its applications and helps enterprises make better investment and policy decisions. It is also a prerequisite for seamless human-AI collaboration in human-in-the-loop systems, where human intervention points are embedded in complex automated workflows.

Uncomplicating AI through three dimensions – cultural, technological, and business

There are primarily three dimensions that need to be examined to simplify AI. These are:

  1. Cultural and social: There is an increasing need for simplification of the language and communication used to describe the workings of AI. We need skilled educators to develop interesting ways of explaining abstract concepts using relatable real-world examples. Visual aids like illustrations and animations help explain complex concepts. The right analogies can help people see the connections between different ideas and understand how they work together in the bigger picture.

Strong industry academia partnerships are needed for simplifying the pedagogy of AI so that the complex concepts can easily disseminate to larger audiences. Development of different types of training programs tailored to different audiences can help in demystifying AI concepts.

  2. Technological: There is growing interest in Explainable AI (XAI) systems, which incorporate changes at the algorithm level so that AI is understandable and interpretable from the design phase. These may include interpretability algorithms that analyze the results of a complex model and decompose its behavior into easily understandable fragments. Other avenues include designing user-friendly interfaces that guide users, and developing visualization tools that show graphically why certain decisions were taken.

Enterprises can also explore integrated AI platforms that offer pre-configured assets, ready-to-deploy components, and environments that serve as a playground for testing AI. Such a platform enables free experimentation and gives stakeholders the hands-on experience needed to demystify complex concepts.

  3. Business: From an enterprise perspective, an AI strategy and framework that prioritizes implementation of simple use-cases with easily demonstrable benefits can help evangelize the basic concepts of AI. Once those simple use-cases and applications are widely demonstrated, the uninitiated become familiar with the general principles of AI, paving the way for more complex innovations.
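The interpretability algorithms mentioned under the technological dimension can be made concrete with a small sketch. Below is a minimal, self-contained illustration of one widely used technique, permutation feature importance: each input feature is shuffled in turn, and the resulting drop in accuracy reveals how much the "black box" relies on that feature. The `black_box` model and data here are hypothetical stand-ins for a real trained model, chosen only to keep the example runnable.

```python
import random

# A toy "black box" model standing in for a trained neural network or
# other complex model whose internals are hard to inspect. It depends
# strongly on feature 0, weakly on feature 1, and ignores feature 2.
def black_box(x):
    return 5.0 * x[0] + 0.5 * x[1] + 0.0 * x[2]

def mse(model, X, y):
    """Mean squared error of the model on dataset (X, y)."""
    return sum((model(x) - t) ** 2 for x, t in zip(X, y)) / len(X)

def permutation_importance(model, X, y, n_repeats=10, seed=0):
    """Score each feature by how much shuffling it degrades accuracy."""
    rng = random.Random(seed)
    baseline = mse(model, X, y)
    importances = []
    for j in range(len(X[0])):
        drops = []
        for _ in range(n_repeats):
            column = [x[j] for x in X]
            rng.shuffle(column)  # break the feature's link to the target
            X_perm = [x[:j] + [v] + x[j + 1:] for x, v in zip(X, column)]
            drops.append(mse(model, X_perm, y) - baseline)
        importances.append(sum(drops) / n_repeats)
    return importances

# Synthetic data labeled by the model itself (so baseline error is zero).
data_rng = random.Random(42)
X = [[data_rng.uniform(-1, 1) for _ in range(3)] for _ in range(200)]
y = [black_box(x) for x in X]

scores = permutation_importance(black_box, X, y)
```

Running this yields the largest score for feature 0 and a score of zero for the ignored feature 2, decomposing the model's behavior into per-feature fragments that a non-specialist can read at a glance. Production systems would typically use library implementations of this and related methods rather than hand-rolled code.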

Authored By: Balakrishna D. R. (Bali), Executive Vice President – Global Head, AI and Automation and ECS, Infosys


Bali leads the AI and Automation practice at Infosys and helps clients transform their businesses by leveraging AI. He also drives the adoption of AI and automation within Infosys in all service offerings and internal functions to improve efficiency and drive differentiation. As a practice leader he is responsible for competency development, building service offerings, solutions, platforms, and partnerships in this space.

Bali works with governments, think tanks, academia, industries, and other regulatory bodies on AI adoption and responsible and ethical use of AI.

Bali is also responsible for delivery in energy, utilities, services, communication, and media segments (ECS) for Infosys.