Infomorphic Neurons Bring AI One Step Closer to Brain-Like Learning

Summary: Researchers have developed a new kind of artificial neuron, called infomorphic neurons, that can learn independently and self-organize with neighboring neurons, mimicking the decentralized learning of biological brains. Inspired by pyramidal cells in the cerebral cortex, these neurons process local signals to adapt and specialize in tasks without external control.

Each infomorphic neuron decides whether to seek redundancy, specialize, or cooperate with its neighbors based on a novel information-theoretic measure. This approach not only enhances the learning efficiency and transparency of the network but also offers valuable insight into how biological neurons learn.

Key Facts:

  • Local Learning: Infomorphic neurons learn independently through interactions with their neighbors, eliminating the need for central coordination.
  • Brain-Inspired Design: Modeled after pyramidal cells in the cerebral cortex, these neurons mimic biological learning mechanisms.
  • Flexible and Transparent: A new information-theoretic framework lets neurons specialize or cooperate, improving both efficiency and interpretability.

Source: Max Planck Institute

Both the human brain and modern artificial neural networks are extremely powerful. At the lowest level, their neurons work together as rather simple computing units.

An artificial neural network typically consists of several layers made up of individual neurons. An input signal passes through these layers and is processed by artificial neurons in order to extract relevant information. However, conventional artificial neurons differ substantially from their biological counterparts in the way they learn.
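As a purely illustrative sketch (not code from the study), the following Python snippet shows what such a layered forward pass looks like; the layer sizes, random weights, and tanh activation are arbitrary choices for the example.

```python
# Illustrative sketch of a conventional feed-forward network:
# an input signal passes through successive layers of simple units.
# All sizes, weights, and the tanh activation are arbitrary example choices.
import numpy as np

rng = np.random.default_rng(0)

def dense_layer(x, n_out):
    """One layer: each unit computes a weighted sum of its inputs, then a nonlinearity."""
    w = rng.normal(scale=0.1, size=(x.shape[-1], n_out))
    return np.tanh(x @ w)

x = rng.normal(size=8)      # input signal
h = dense_layer(x, 16)      # hidden layer extracts intermediate features
y = dense_layer(h, 3)       # output layer produces the network's response
print(y.round(3))
```

In a conventional network like this, the weights are then adjusted by a global training procedure such as backpropagation, which is exactly the kind of outside coordination that the locally learning neurons described below do without.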

The new artificial neurons pursue quite simple, easy-to-interpret learning goals. Credit: Neuroscience News

While most artificial neural networks rely on overarching coordination from outside the network in order to learn, biological neurons receive and process signals only from other neurons in their immediate neighborhood of the network.

Biological neural networks are still far superior to artificial ones in terms of both flexibility and energy efficiency.

The new artificial neurons, known as infomorphic neurons, are capable of learning independently and self-organizing among their neighboring cells. This means that the smallest unit of the network no longer has to be controlled from outside; instead, it decides for itself which input is relevant and which is not.

In developing the infomorphic neurons, the group was inspired by the way the brain works, in particular by the pyramidal cells of the cerebral cortex. These likewise process stimuli from various sources in their immediate environment and use them to adapt and learn.

The new artificial neurons pursue quite simple, easy-to-interpret learning goals: “We now directly understand what is happening inside the network and how the individual artificial neurons learn independently,” emphasizes Marcel Graetz from CIDBN.

By defining the learning objectives, the researchers enabled the neurons to find their specific learning rules themselves.

The team focused on the learning process of each individual neuron. They applied a novel information-theoretic measure to precisely adjust whether a neuron should seek more redundancy with its neighbors, collaborate synergistically, or try to specialize in its own part of the network’s information.

“By specializing in certain aspects of the input and coordinating with their neighbors, our infomorphic neurons learn how to contribute to the overall task of the network,” explains Valentin Neuhaus from MPI-DS.
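To make this concrete, here is a minimal, purely illustrative Python sketch of such a parametric local goal for a single neuron. It is not the measure used in the study: the paper derives its goal functions from a proper Partial Information Decomposition, whereas this toy version approximates redundancy and synergy with the co-information of discrete signals, and the weights w_red and w_syn are hypothetical parameters invented for the example.

```python
# Toy local goal for one neuron with a feedforward input and a contextual input.
# Redundancy/synergy are crudely approximated via co-information; the actual
# framework in the paper uses Partial Information Decomposition (PID) instead.
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def mutual_info(x, y):
    """I(X;Y) in bits, estimated from paired samples of small discrete labels."""
    xs, ys = np.unique(x), np.unique(y)
    joint = np.zeros((len(xs), len(ys)))
    for xi, yi in zip(x, y):
        joint[np.searchsorted(xs, xi), np.searchsorted(ys, yi)] += 1
    joint /= joint.sum()
    return entropy(joint.sum(1)) + entropy(joint.sum(0)) - entropy(joint.ravel())

def local_goal(y, feedforward, context, w_red=1.0, w_syn=0.5):
    """Weighted mix of how redundantly vs. synergistically the neuron's output y
    combines its two input sources (weights are hypothetical example values)."""
    i_f = mutual_info(y, feedforward)
    i_c = mutual_info(y, context)
    i_joint = mutual_info(y, 2 * feedforward + context)  # both sources together
    co_info = i_f + i_c - i_joint   # > 0: mostly redundant, < 0: mostly synergistic
    redundancy = max(co_info, 0.0)
    synergy = max(-co_info, 0.0)
    return w_red * redundancy + w_syn * synergy

# Example: an output that is the XOR of its two binary inputs carries no
# information about either input alone, so only the synergy term rewards it.
rng = np.random.default_rng(1)
f = rng.integers(0, 2, 1000)
c = rng.integers(0, 2, 1000)
print(local_goal(f ^ c, f, c))   # about 0.5 with the default weights
```

In this toy picture, tuning w_red and w_syn is the analogue of steering a neuron toward redundancy, cooperation, or specialization as described above; in the study itself, such weights parameterize goal functions built from PID terms.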

With the infomorphic neurons, the team is not only developing a novel method for machine learning, but is also contributing to a better understanding of learning in the brain.

About this AI and learning research news

Author: Manuel Maidorn
Source: Max Planck Institute
Contact: Manuel Maidorn – Max Planck Institute
Image: The image is credited to Neuroscience News

Original Research: Open access.
“A general framework for interpretable neural learning based on local information-theoretic goal functions” by Marcel Graetz et al. PNAS


Abstract

A general framework for interpretable neural learning based on local information-theoretic goal functions

Despite the impressive performance of biological and artificial networks, an intuitive understanding of how their local learning dynamics contribute to network-level task solutions remains a challenge to this date.

Efforts to bring learning to a more local scale indeed lead to valuable insights, however, a general constructive approach to describe local learning goals that is both interpretable and adaptable across diverse tasks is still missing.

We have previously formulated a local information processing goal that is highly adaptable and interpretable for a model neuron with compartmental structure.

Building on recent advances in Partial Information Decomposition ( PID), we here derive a corresponding parametric local learning rule, which allows us to introduce “infomorphic” neural networks.

We demonstrate the versatility of these networks to perform tasks from supervised, unsupervised, and memory learning.

By leveraging the interpretable nature of the PID framework, infomorphic networks represent a valuable tool to advance our understanding of the intricate structure of local learning.
