Brain-on-a-chip: How does it work?

Over the last decade, artificial intelligence (AI) and its applications like machine learning and natural language processing have revolutionized many industries and everyday life. As computing power continues to improve and the world gathers more data, the capability of the field is growing exponentially. However, with this capability comes the demand for even faster progress, and professionals are already looking at what the next wave of development could be.

New technologies such as edge computing and 5G networks rely on instantaneous responses and cannot afford the latency and resource hold-ups that can be inherent in typical data centers. A new approach to computing, known as neuromorphic computing, is set to hit the market and accelerate innovation.

First, some context: what is machine learning?

Before delving into the world of neuromorphic computing, it is perhaps important to remind ourselves what we mean by machine learning. Machine learning, as an application of AI, refers to the use of data to build predictive models and make decisions. For example, Netflix predicts what shows we might like, and factories can work out the conditions under which their machines produce the best output. Machine learning relies on being fed data and generally works to sets of pre-defined rules.
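
To make this concrete, the sketch below is a minimal, purely illustrative example of fitting a predictive model to historical data in Python with scikit-learn; the sensor readings, feature names and model choice are hypothetical and not taken from any real factory.

```python
# Illustrative only: a toy "optimal machine conditions" model on synthetic data.
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical historical readings: [temperature_C, vibration_mm_s] -> units produced per hour
X = np.array([[60, 1.2], [65, 1.0], [70, 0.8], [75, 1.5], [80, 2.0]])
y = np.array([95, 102, 110, 98, 90])

model = LinearRegression()
model.fit(X, y)  # learn the relationship from past data

# Predict the output for a new operating condition
print(model.predict([[68, 0.9]]))
```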

The common problem with machine learning so far is that it does not understand the world in the way a human brain does. For example, a computer can use data to connect a new event to something that has happened before but, unlike a human brain, it struggles to put a new connection or event into context.

Neuromorphic computing systems aim to replicate the human brain. They are designed to perceive images and sounds in a way that AI never has before. Traditional AI and machine learning platforms have been brilliant at making strong predictions on problems that can be expressed in numbers but have shown flaws when the challenge goes beyond that.

Neuromorphic computing is the concept of a “brain on a chip.” Whilst machine learning and more complex approaches like deep learning are about reaching decisions, neuromorphic technology attempts to literally take the form of the human brain. This means that as well as making decisions, it can memorize information and deduce things for itself. A neuromorphic machine can identify patterns in visual or auditory data and adjust its predictions based on what it learns.

Intel is one of the market leaders in the race to develop the first neuromorphic computer, which will have an architecture that enables machines to do things conventional silicon chips can’t. For example, a paper by the Intel scientist Charles Augustine has suggested that neuromorphic machines will be able to handle demanding tasks like cognitive computing, adaptive artificial intelligence and sensing data, and to do so using 15-300 times less energy than current chips.

IBM is another large company at the forefront of neuromorphic technology. Early in 2019, as mentioned in my previous article, IBM released its open-source quantum computing platform, which offers more power to those looking to develop AI and Big Data enterprises. As well as that, IBM maintains a Neuromorphic Devices and Architectures Project focused on new experiments in analog computation.

How does neuromorphic computing transform AI?

Any successful AI application relies on cloud-based computing, mass volumes of data, speed, and low latency. Internet of Things (IoT) devices are perhaps the best example of this. Consumers using devices like the Amazon Alexa or Google Home expect them to respond in an instant. For that to happen, the connection between the voice command and the data needs to be extremely powerful and efficient. Whilst smartphones have delivered some aspects of AI, they don’t have the computing power required for more complex “brain-like” algorithms, and right now running them would most probably drain the battery.

A neuromorphic chip uses less energy and delivers better performance than a conventional computer chip because of the way it has been designed. Neuromorphic computing uses a new type of model that better mimics the human brain. Instead of passing information from one transistor to the next, a neuromorphic network consists of millions of artificial neurons (just like the biological structure of the brain) that can pass information in any direction.

The technical bit

The reason a neuromorphic model is more efficient is the way it utilizes neurons, much as a human brain would, in what is called a “spiking neural network” (SNN). First, a smaller percentage of neurons is required to transmit the information than on a conventional chip, which keeps energy use low and leaves the neurons that are not currently involved free to learn on the go.
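
For the technically curious, the sketch below is a minimal, self-contained Python model of a leaky integrate-and-fire neuron, the basic building block commonly used in spiking neural networks; the threshold, leak and input values are illustrative assumptions, not parameters of any particular chip.

```python
# Minimal leaky integrate-and-fire (LIF) neuron, the basic unit of a spiking
# neural network. All parameters are illustrative, not from any specific chip.

def simulate_lif(input_current, threshold=1.0, leak=0.9, reset=0.0):
    """Integrate input over time and emit a spike whenever the membrane
    potential crosses the threshold; otherwise the potential slowly leaks away."""
    potential = 0.0
    spikes = []
    for current in input_current:
        potential = leak * potential + current   # integrate with leak
        if potential >= threshold:               # event: the neuron fires
            spikes.append(1)
            potential = reset                    # reset after the spike
        else:
            spikes.append(0)                     # silent: costs (almost) no energy
    return spikes

# A mostly quiet input stream: the neuron only spikes on the rare strong events,
# which is why activity (and energy use) in an SNN is sparse.
stream = [0.1, 0.0, 0.9, 0.2, 0.0, 0.0, 1.1, 0.0]
print(simulate_lif(stream))
```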

As well as this, conventional deep-learning chips rely on back-propagation, in which results are fed back through the network for fine-tuning. In a spiking network, neurons are only triggered when an event occurs and, as in the brain, learning is largely a feed-forward process, so results do not need to be fed back into the system, which makes it more efficient in this context.
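
As an illustration of learning without global back-propagation, the sketch below applies a spike-timing-dependent style of local weight update, a rule often discussed in neuromorphic research; the constants are illustrative assumptions rather than values from any real system.

```python
# Minimal sketch of a local, spike-timing-based weight update (STDP-style),
# shown as an example of learning without global back-propagation.
# Time constants and learning rates are illustrative only.
import math

def stdp_update(weight, dt, a_plus=0.05, a_minus=0.06, tau=20.0):
    """Strengthen the synapse if the presynaptic spike arrives just before the
    postsynaptic spike (dt > 0); weaken it if it arrives just after (dt < 0)."""
    if dt > 0:
        weight += a_plus * math.exp(-dt / tau)
    else:
        weight -= a_minus * math.exp(dt / tau)
    return max(0.0, min(1.0, weight))  # keep the weight in [0, 1]

w = 0.5
w = stdp_update(w, dt=5.0)    # pre fires 5 ms before post -> strengthen
w = stdp_update(w, dt=-12.0)  # pre fires 12 ms after post -> weaken
print(round(w, 3))
```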

By shrinking the power of a neural network down onto a single semiconductor chip, the learning and pattern-recognition capabilities of the technology can be embedded into a wider range of systems, from robots to tablets and, in the future, smartphones. This will lead to a new world of applications that don’t need mains power or a network connection to deliver their computational capabilities.

What does the future look like?

Experts believe that neuromorphic chips could be embedded into smartphones as early as 2025 and drive the growth of IoT. As these devices will be able to perceive the environment around them and deduce decisions from it, over the next decade we will start to see:

Faster development in transportation and manufacturing. Autonomous vehicles have been teetering on a breakthrough for many years now, and much of the delay has come from their inability to perceive the environment. Neuromorphic chips will likely take this perception to the next level and start driving the commercialization of the technology.

Smartphones that can continually monitor what you do (data security permitting) and potentially even help before you ask based on what they have perceived. Your device might start to understand your intentions in the same way that your brain does.

The field of robotics will grow at speed as robots are able to better respond to commands based on their surroundings. If a robot can navigate independently and think for itself without consuming vast amounts of power and energy, the world will quickly begin to change.

We are set for an exciting decade as the neuromorphic race heats up!

Ram Narasimhan, Global Executive Director – AI and Big Data, Xebia