Solving an Equation from 1907 and Liquid Neural Networks

Last year, MIT researchers announced that they had built “liquid” neural networks, inspired by the brains of small species. This is a class of flexible, robust machine-learning models that learn on the job and can adapt to changing conditions. That is important for safety-critical tasks, like driving and flying.

The flexibility of these “liquid” neural nets is a real strength, but they become computationally expensive as the number of neurons and synapses grows, and they require inefficient numerical solvers to work through their underlying, complicated math.
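To make the bottleneck concrete, here is a minimal sketch (my own simplification, not the researchers' code) of how a liquid-style neuron is typically advanced in time: its state obeys an ordinary differential equation, so every prediction step has to be broken into many small solver substeps.

```python
import numpy as np

def f(x, I, W, b):
    """Shared nonlinearity driving the neuron (illustrative stand-in)."""
    return np.tanh(W @ np.concatenate([x, I]) + b)

def liquid_step_euler(x, I, A, tau, W, b, dt=0.01, substeps=20):
    """Advance a liquid-time-constant-style neuron state by one output step
    using explicit Euler. The inner loop of solver substeps (more of them as
    the dynamics get stiffer) is the cost the article is talking about."""
    for _ in range(substeps):
        drive = f(x, I, W, b)
        # Simplified LTC-style ODE: dx/dt = -(1/tau + f) * x + f * A
        dxdt = -(1.0 / tau + drive) * x + drive * A
        x = x + dt * dxdt
    return x

# Toy usage: 4 neurons, 3 inputs, random (untrained) weights.
rng = np.random.default_rng(0)
n, m = 4, 3
x = liquid_step_euler(np.zeros(n), rng.normal(size=m), A=np.ones(n), tau=1.0,
                      W=rng.normal(size=(n, n + m)) * 0.1, b=np.zeros(n))
```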

Now, the same team of scientists has found a way to alleviate this bottleneck by solving the differential equation behind the interaction of two neurons through synapses. The result is a new class of fast, efficient artificial intelligence algorithms that are orders of magnitude faster and far more scalable.

What I find interesting is that the equation they had to solve had gone without a known solution since 1907, the year the differential equation of the neuron model was introduced. I recall, both as a student and later while teaching at a university (in the humanities), hearing the complaints of students battling away in a course on differential equations.

These models are ideal for use in time-sensitive tasks like pacemaker monitoring, weather forecasting, investment forecasting, or autonomous vehicle navigation. On a medical prediction task, for example, the new models were 220 times faster on a sampling of 8,000 patients. 

Specifically, the team solved “the differential equation behind the interaction of two neurons through synapses… to unlock a new type of fast and efficient artificial intelligence algorithms.” “The new machine learning models we call ‘CfC’s’ [closed-form Continuous-time] replace the differential equation defining the computation of the neuron with a closed form approximation, preserving the beautiful properties of liquid networks without the need for numerical integration,” MIT professor and CSAIL Director Daniela Rus said.

By solving this equation at the neuron level, the team hopes to build models of the human brain measuring in the millions of neural connections, something not possible today. The team also notes that a CfC model might be able to take the visual training it learned in one environment and apply it to a wholly new situation without additional work, what’s known as out-of-distribution generalization. That’s not something current-generation models can really do, and it would be a significant step toward the generalized AI systems of tomorrow.
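As a rough sketch of what “replacing the differential equation with a closed-form approximation” can look like in code, here is a toy CfC-style state computation, loosely following the gated form described in the paper. The heads, shapes, and parameter names are my own illustrative assumptions, not the authors' implementation; the point is that the state at time t is read off directly, with no solver loop.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def head(x, I, W, b):
    """A small learned mapping; in a real CfC these would be trained networks."""
    return np.tanh(W @ np.concatenate([x, I]) + b)

def cfc_state(x, I, t, Wf, bf, Wg, bg, Wh, bh):
    """Toy closed-form continuous-time (CfC)-style state at time t:
    a time-dependent sigmoid gate blends two heads, so no numerical
    integration is needed. Names and shapes are illustrative only."""
    gate = sigmoid(-head(x, I, Wf, bf) * t)   # time-dependent gate in (0, 1)
    return gate * head(x, I, Wg, bg) + (1.0 - gate) * head(x, I, Wh, bh)

# Toy usage with random, untrained parameters.
rng = np.random.default_rng(1)
n, m = 4, 3
params = []
for _ in range(3):  # Wf, bf, Wg, bg, Wh, bh
    params += [rng.normal(size=(n, n + m)) * 0.1, np.zeros(n)]
print(cfc_state(np.zeros(n), rng.normal(size=m), 0.5, *params))
```

Because the state here is a direct function of t, the per-prediction cost no longer depends on how many solver substeps the dynamics would otherwise require, which is where the reported speed-ups come from.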

Source: https://news.mit.edu/2022/solving-brain-dynamics-gives-rise-flexible-machine-learning-models-1115
