NeuroAI Breakthrough: New Model Inspired by Human Brain Enhances AI Efficiency and Learning


In this post, we discuss NeuroAI, a research field that draws on neuroscience to make AI more efficient. Although modern AI can read, talk, and analyse data, it still has significant limitations. Researchers in NeuroAI have developed a new AI model inspired by the efficiency of the human brain.

This model allows AI neurons to receive feedback and adjust in real time, improving learning and memory processes. This innovation could lead to more efficient and accessible AI, bringing AI and neuroscience closer together.

Brain-Inspired Innovation

Researchers in NeuroAI have developed a groundbreaking AI model inspired by the efficiency of the human brain. This new model allows AI neurons to receive feedback and adjust in real time, significantly improving learning and memory processes.


Real-Time Feedback for Enhanced Learning

The innovative model enables AI neurons to receive and adjust to feedback instantly, enhancing overall efficiency. This advancement could revolutionise AI by making it more adaptive and responsive, akin to human learning processes.

Potential Impact on AI and Neuroscience

This breakthrough has the potential to integrate AI and neuroscience more closely, leading to more efficient and accessible AI systems that learn in a manner similar to humans.

AI’s Current Capabilities and Limitations

Today’s AI systems can read, talk, analyse data, and recommend business decisions, making them appear more human-like than ever. Despite these capabilities, AI still faces notable limitations. “Technologies like ChatGPT are impressive, but they are limited in interacting with the physical world,” explains Kyle Daruwalla, a NeuroAI Scholar at Cold Spring Harbor Laboratory (CSHL). Teaching today’s models tasks such as solving maths problems and writing essays requires billions of training examples.

Inspiration from the Human Brain

Kyle Daruwalla has been exploring innovative ways to design AI that can overcome these computational hurdles by turning to one of the most powerful and energy-efficient systems for inspiration—the human brain. He developed a new method for AI algorithms to move and process data more efficiently, mirroring how the brain processes information. This design allows individual AI neurons to receive feedback and adjust on the fly, reducing the distance data needs to travel and enabling real-time processing.
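
To make the contrast with conventional training concrete, here is a minimal sketch in Python of a layer that adjusts its weights immediately from signals it already has on hand: its own input, its own output, and a single scalar feedback value. The class and variable names are illustrative and are not taken from Daruwalla’s implementation.

```python
import numpy as np

class LocalLayer:
    """A layer that adjusts its weights on the fly from local signals only."""

    def __init__(self, n_in, n_out, lr=0.01, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.normal(scale=0.1, size=(n_out, n_in))
        self.lr = lr

    def forward(self, x):
        self.x = x                           # keep the local input around
        self.y = np.tanh(self.w @ x)         # local nonlinearity
        return self.y

    def local_update(self, feedback):
        # Hebbian-style update gated by a scalar feedback signal:
        # co-active input/output pairs are strengthened when feedback is positive.
        self.w += self.lr * feedback * np.outer(self.y, self.x)


# Toy usage: two stacked layers, each adjusting immediately after a forward pass.
rng = np.random.default_rng(1)
layers = [LocalLayer(8, 16), LocalLayer(16, 4)]
x = rng.normal(size=8)
for layer in layers:
    x = layer.forward(x)
feedback = 1.0 - float(np.mean(x ** 2))     # stand-in for a task-derived feedback signal
for layer in layers:
    layer.local_update(feedback)
```

Because each layer updates itself from quantities it already holds, no error has to travel back through downstream layers before learning can happen, which is the sense in which the data needs to move a shorter distance.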

Linking Working Memory and Learning

Daruwalla’s machine-learning model also supports a theory linking working memory with learning and academic performance. “There have been theories in neuroscience about how working memory circuits could facilitate learning, but nothing concrete has linked these ideas together as our rule does,” says Daruwalla. In the rule that emerged from this theory, each synapse needs a working memory alongside it in order to be adjusted individually.

Future of AI Learning Like Humans

Daruwalla’s design could lead to a new generation of AI that learns like humans. This would make AI more efficient and accessible, creating a full-circle moment for NeuroAI, where neuroscience has long informed AI development. Soon, AI might reciprocate by advancing our understanding of the brain.

Abstract

This work proposes an information bottleneck-based Hebbian learning rule that connects working memory with synaptic updates. Deep neural networks are effective for a wide range of problems but come with high energy costs. Spiking neural networks (SNNs), modelled after biological neurons, offer a potential solution when deployed on neuromorphic hardware. However, training SNNs directly on this hardware is challenging because back-propagation, the method crucial for training artificial deep networks, is biologically implausible.
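
For context, the difficulty comes from the non-local structure of back-propagation itself. In the standard textbook recursion (not specific to this paper), the error signal for layer l depends on the weights and errors of the layer above it:

```latex
\delta_L = \nabla_{a_L}\mathcal{L} \odot f'(z_L), \qquad
\delta_l = \bigl(W_{l+1}^{\top}\,\delta_{l+1}\bigr) \odot f'(z_l), \qquad
\Delta W_l = -\eta\,\delta_l\,a_{l-1}^{\top}.
```

Updating W_l therefore requires exact knowledge of the downstream weights W_{l+1} and errors δ_{l+1}, information a biological synapse is not thought to have access to.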

Neuroscientists are uncertain how the brain could propagate precise error signals through a network of neurons. While recent progress addresses some aspects of this question, a complete solution remains elusive. New learning rules based on the information bottleneck (IB) train each network layer independently, avoiding error propagation across layers. These rules employ a three-factor Hebbian update in which a global error signal modulates local synaptic updates within each layer. However, computing this global signal requires processing multiple samples at once, unlike the brain, which processes one sample at a time.
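
Written generically (our notation, not the paper's), a three-factor Hebbian update for the synapse from presynaptic neuron i to postsynaptic neuron j in layer l takes the form:

```latex
\Delta w_{ij}^{(l)} = \eta \, g^{(l)} \, x_i^{(l)} \, y_j^{(l)},
```

where x_i and y_j are the local pre- and postsynaptic activities and g^(l) is a global modulatory signal shared across the layer. In the IB-based rules described above, g^(l) is estimated from statistics over a batch of samples, which is exactly the point of tension with the brain's one-sample-at-a-time operation.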

We propose a new three-factor update rule where the global signal captures information across samples via an auxiliary memory network. This network can be trained independently of the primary network’s dataset. Our experiments show comparable performance to baseline methods on image classification tasks. Unlike back-propagation methods, our rule explicitly links learning with memory.
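
As a rough sketch of how such a rule could operate one sample at a time, the Python below pairs a trainable layer with a fixed recurrent "reservoir" serving as the auxiliary memory; the memory's readout supplies the global modulatory factor. This is an illustration under our own assumptions, not the paper's exact memory network or modulator.

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_out, n_mem = 8, 4, 32
lr = 0.01

W = rng.normal(scale=0.1, size=(n_out, n_in))              # primary layer (trained)
W_in = rng.normal(scale=0.3, size=(n_mem, n_in + n_out))   # memory input map (fixed)
W_rec = rng.normal(scale=0.5, size=(n_mem, n_mem))         # memory recurrence (fixed)
w_read = rng.normal(scale=0.1, size=n_mem)                 # memory readout (fixed here)
m = np.zeros(n_mem)                                        # memory state carried across samples

def process_sample(x, m):
    """Handle one sample at a time, as the brain is assumed to do."""
    global W
    y = np.tanh(W @ x)                                     # local forward pass
    # The auxiliary memory integrates current and past activity, so the
    # global signal below carries information across samples without
    # ever needing a batch.
    m = np.tanh(W_in @ np.concatenate([x, y]) + W_rec @ m)
    g = float(w_read @ m)                                  # global modulatory factor
    W += lr * g * np.outer(y, x)                           # three-factor Hebbian update
    return y, m

# Stream samples one by one.
for _ in range(100):
    x = rng.normal(size=n_in)
    y, m = process_sample(x, m)
```

The key point is that the modulatory factor g is read out of the memory state, which accumulates information across the stream of samples, so the update never has to wait for a batch.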

Our research suggests a new learning perspective where each layer balances memory-informed compression with task performance, encompassing key aspects of neural computation such as memory, efficiency, and locality.
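
The compression-versus-performance balance referred to here is the usual information bottleneck objective applied per layer (standard formulation, not quoted from the paper): each layer's representation T_l should discard as much about the input X as possible while keeping what is relevant to the target Y,

```latex
\min_{T_l} \; I(X;\,T_l) \;-\; \beta\, I(T_l;\,Y),
```

with β > 0 setting how strongly task performance is weighted against compression.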
