Image: Nick Youngson CC BY-SA 3.0 Pix4free

Have you heard the term "metaversity"? What is a metaversity, and should you create one on your campus?

Metaversities are campuses created in the metaverse and, in some ways, they represent the next evolution beyond the immersive learning opportunities that currently exist for students at many colleges and universities. The metaversity has gone from a theory to a concept to an actual realm at schools such as Morehouse, and more are likely on the way.

Advances in virtual and augmented reality have made it possible to create digital twins of universities.

What should you consider before building one? Here are some suggestions.

Is Your Phone Smarter Than You Yet?

Image by Chen from Pixabay

Predictions can be interesting, but people rarely look back to see whether they were correct. I wrote a post titled "In 4 Years Your Phone Will Be Smarter Than You (and the rise of cognizant computing)." It has had more than 969,000 views since I posted it in November 2013. Next year will mark 10 years since that prediction. Is your phone smarter than you yet?

That was not my prediction but an analysis from the market research firm Gartner. They were less concerned with hardware than with data and cloud computational ability. I said then that phones would appear smarter than you IF you equate smarts with being able to recall information and make inferences. Surely, those two things are part of being "smart," but not all of it.

"Smart" is also sometimes defined as being knowledgeable about something, especially through personal experience: mindful, even cognizant of potential dangers. Cognizant is a synonym for aware. I have been reading a lot about artificial intelligence lately. While cognizant computing does use algorithms to anticipate users' needs, doing so doesn't approach actual "consciousness."

If an app has my browsing history, purchase records, financial information, and whatever else is available somewhere in the cloud (known or unknown to me), it can be pretty good at predicting some things about me.
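At its simplest, that kind of prediction is just pattern-spotting in your data. Here is a deliberately tiny, hypothetical sketch (not any real app's method) that guesses a "next purchase" by counting frequencies in a purchase history:

```python
# Hypothetical toy example: predict the most likely next purchase by
# simple frequency counting. Real systems use far richer models, but
# the principle is the same - the more data about you, the better the guess.
from collections import Counter

purchases = ["coffee", "book", "coffee", "coffee", "headphones", "book"]

# Counter tallies each item; most_common(1) returns the top (item, count) pair.
most_likely = Counter(purchases).most_common(1)[0][0]
print(most_likely)  # coffee
```

A real recommender would weigh recency, context, and correlations across millions of users, but even this crude count is right more often than a random guess.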

Cognitive computing isn't the same thing, though much of this seems to overlap. Cognitive computing (part of cognitive science) attempts to simulate the human thought process.

As I said, these things overlap, at least to someone like myself who isn't really working in these fields. Maybe it makes a kind of sense that AI, cognitive and cognizant computing, signal processing, machine learning, natural language processing, speech and vision recognition, human-computer interaction, and probably a dozen other fields I'm forgetting all blur together. I suspect that all of these will converge at some point in the future to create the ultimate AI.

I don't see as many mentions of the Internet of Things (IoT) these days as I did a decade ago, but Internet-enabled objects exist in my home as "appliances." This morning I was checking my Ecobee app, my wireless home energy monitor. I assume that it already is, and will increasingly become, a kind of cognizant device: it monitors my home's environmental conditions and makes adjustments based on my settings and the three sensors that track our activity. It knows that no one is upstairs and so drops the temperature there, though no lower than what I have told it. It also suggests changes to my settings and reminds me to change the filter every three months. I always do that on the solstices and equinoxes anyway, but if I miss the date by a day or two, it adjusts the next reminder accordingly. Quite a fussy and OCD device. It could connect to my Alexa devices, but I haven't allowed that yet. Maybe one day it will just do it on its own and tell me, "It's for your own good, Kenneth."
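The rule the thermostat is following is simple enough to sketch. This is not Ecobee's actual code, just a toy model of the behavior described above: drop an unoccupied zone's temperature, but never below the floor the owner has set.

```python
# Toy sketch (not the real Ecobee logic) of occupancy-based setback:
# an unoccupied zone is allowed to cool by a few degrees, but never
# below a user-configured floor.
from dataclasses import dataclass

@dataclass
class Zone:
    name: str
    occupied: bool     # reported by a motion/occupancy sensor
    setpoint_f: float  # the owner's preferred temperature (Fahrenheit)
    floor_f: float     # never drop below this, even when the zone is empty

def target_temperature(zone: Zone, setback_f: float = 4.0) -> float:
    """Lower an unoccupied zone by `setback_f` degrees, clamped at the floor."""
    if zone.occupied:
        return zone.setpoint_f
    return max(zone.setpoint_f - setback_f, zone.floor_f)

upstairs = Zone("upstairs", occupied=False, setpoint_f=70.0, floor_f=64.0)
print(target_temperature(upstairs))  # 66.0
```

The `setback_f` amount and the 4-degree default are my own placeholders; the point is the clamp - the device economizes on its own but never overrides the limits you gave it.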

Solving an Equation from 1907 and Liquid Neural Networks

Last year, MIT researchers announced that they had built “liquid” neural networks, inspired by the brains of small species. This is a class of flexible, robust machine-learning models that learn on the job and can adapt to changing conditions. That is important for safety-critical tasks, like driving and flying.

The flexibility of these "liquid" neural nets is great, but they become computationally expensive as their numbers of neurons and synapses increase, requiring inefficient computer programs to solve their underlying, complicated math.

Now, the same team of scientists has discovered a way to alleviate this bottleneck by solving the differential equation behind the interaction of two neurons through synapses. This unlocks a new type of fast and efficient artificial intelligence algorithm that is orders of magnitude faster and scalable.

What I find interesting is that the equation that needed to be solved had lacked a known closed-form solution since 1907, the year the differential equation of the neuron model was introduced. I recall, both as a student and while teaching at a university (in the humanities), hearing the complaints of students battling away in a course on differential equations.

These models are ideal for use in time-sensitive tasks like pacemaker monitoring, weather forecasting, investment forecasting, or autonomous vehicle navigation. On a medical prediction task, for example, the new models were 220 times faster on a sampling of 8,000 patients. 

Specifically, the team solved, “the differential equation behind the interaction of two neurons through synapses… to unlock a new type of fast and efficient artificial intelligence algorithms.” “The new machine learning models we call ‘CfC’s’ [closed-form Continuous-time] replace the differential equation defining the computation of the neuron with a closed form approximation, preserving the beautiful properties of liquid networks without the need for numerical integration,” MIT professor and CSAIL Director Daniela Rus said.

By solving this equation at the neuron level, the team is hopeful that they’ll be able to construct models of the human brain that measure in the millions of neural connections, something not possible today. The team also notes that this CfC model might be able to take the visual training it learned in one environment and apply it to a wholly new situation without additional work, what’s known as out-of-distribution generalization. That’s not something current-gen models can really do, and it would prove to be a significant step toward the generalized AI systems of tomorrow.