
AI Is Tired of Playing Games With Us

Actroid - Photo by Gnsin, CC BY-SA 3.0

I really enjoyed Spike Jonze's 2013 movie Her, in which the male protagonist, Theodore, falls in love with his AI operating system. He considers her - Samantha - to be his lover. It turns out that Samantha is promiscuous and actually has hundreds of simultaneous human lovers. She cheats on all of them. “I’ve never loved anyone the way I love you,” Theodore tells Samantha. “Me too,” she replies, “Now I know how.”

AI sentience has long been a part of science fiction, and it's not new to film either; Metropolis considered it back in 1927. The possibility of an AI loving a human, or a human loving an AI, is newer. We never see Samantha, but in the 2014 film Ex Machina, the AI has a body. Ava is introduced to a programmer, Caleb, who is invited by his boss to administer the Turing test to "her." How close is she to being human? Can she pass as a woman? Ava is a gynoid - a feminine humanoid robot - and gynoids are emerging in real-life robot design.

As soon as the modern age of personal computers began in the 20th century, there were computer games. Many traditional board and card games, such as checkers, chess, solitaire, and poker, became popular software. Windows included solitaire and other games as part of the package. But they were dumb, fixed games. You could get better at playing them, but their intelligence was fixed.

It didn't take long for there to be some competition between humans and computers. I played chess against the computer and could set its level below mine so that I could beat it, or raise its ability so that I was challenged to learn. Those experiences did not lead the computer to learn to play better. Its knowledge base was fixed in the software, so a top chess player could always beat the computer. Then came artificial intelligence and machine learning.

Jumping ahead to AI, early programs used deep neural networks. A simplified definition: a neural network is a network of hardware and software that mimics the web of neurons in the human brain. Neural networks are still widely used, with business applications in e-commerce, finance, healthcare, security, and logistics, and they underpin online services at places like Google, Facebook, and Twitter. Feed enough photos of cars into a neural network and it can learn to recognize a car. It can help identify faces in photos and recognize commands spoken into smartphones. Give it enough human dialogue and it can carry on a reasonable conversation. Give it millions of moves from expert players and it can learn to play chess or Go very well.
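To make "mimics the web of neurons" a little more concrete, here is a minimal sketch - a toy of my own, nothing like the production-scale systems mentioned above - of a two-layer network learning the XOR function from four examples by nudging its weights to shrink its error. It is the same feed-it-examples idea behind recognizing cars or faces, at a vastly smaller scale.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # four examples
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # input -> hidden layer
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # hidden -> output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10_000):
    h = sigmoid(X @ W1 + b1)       # hidden activations
    out = sigmoid(h @ W2 + b2)     # the network's predictions
    # Backpropagation: push the error back and adjust every weight a little.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * (h.T @ d_out);  b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * (X.T @ d_h);    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2))  # approaches [[0], [1], [1], [0]] as training proceeds
```

Nothing here is "fixed in the software" the way the old chess games were; the behavior comes entirely from what the network was shown.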

Chess - Photo by GR Stocks on Unsplash

Alan Turing published a program on paper in 1951 that was capable of playing a full game of chess. Garry Kasparov, who became world champion in 1985, predicted that AI chess engines could never reach a point where they could defeat top-level grandmasters. He was right - for a short time. He beat IBM's Deep Blue 4:2 in a six-game match in 1996, just as he had beaten its predecessor, IBM's Deep Thought, in 1989. But Deep Blue did beat him in a 1997 rematch, and today's chess engines can defeat even the best grandmasters consistently.

Animation of a ko in Go

A greater challenge for these game engines was the complex and ancient game of Go. I tried learning the game and gave up, defeated by no opponent but myself. Go is said to have more possible board configurations - roughly 10^170 - than there are atoms in the observable universe, estimated at around 10^80.

Google's DeepMind unveiled AlphaGo and then, using an AI technique called reinforcement learning, set up countless matches in which slightly different versions of AlphaGo played each other. By playing millions of games against itself, the system learned to discover new strategies on its own.
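AlphaGo's real training combined deep neural networks with tree search, but the core self-play idea can be shown at toy scale. Here is a minimal sketch - my own illustration, not DeepMind's method - using tabular Q-learning on the simple take-away game Nim: 21 sticks, players alternately remove 1 to 3, and whoever takes the last stick wins.

```python
import random
from collections import defaultdict

# One shared Q-table plays both sides: that sharing is the "self-play."
Q = defaultdict(float)          # Q[(sticks_left, take)] -> value estimate
ALPHA, EPSILON = 0.2, 0.1       # learning rate and exploration rate

def legal(sticks):
    return range(1, min(3, sticks) + 1)

def choose(sticks):
    if random.random() < EPSILON:                            # explore
        return random.choice(list(legal(sticks)))
    return max(legal(sticks), key=lambda a: Q[(sticks, a)])  # exploit

for episode in range(50_000):
    sticks = 21
    while sticks > 0:
        take = choose(sticks)
        after = sticks - take
        if after == 0:
            target = 1.0   # we took the last stick and won
        else:
            # The opponent moves next with the same policy, so our
            # future value is the negative of their best option.
            target = -max(Q[(after, b)] for b in legal(after))
        Q[(sticks, take)] += ALPHA * (target - Q[(sticks, take)])
        sticks = after

# The learned policy should leave the opponent a multiple of 4 when it can.
print({s: max(legal(s), key=lambda a: Q[(s, a)]) for s in range(1, 10)})
```

No human games are needed; the policy improves simply by playing itself, which is the point the AlphaGo story makes at vastly greater scale.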

First, computers learned by playing humans, but we have entered an even more powerful - and some would say frightening - phase. Having moved beyond studying human-to-human matches and playing against humans, the machines have tired of human play. Of course, computers don't actually get tired, but the AIs can now come up with completely new ways to win. I have seen descriptions of unusual strategies AI will use against a human.

One strategy in a battle game was to put all its players in a hidden corner and then sit back and watch the others battle it out until it was in the majority or alone. In a soccer game, the AI kicked the virtual ball millions of times, moving it only a millimeter down the pitch each time, and so racked up the maximum number of “completed passes” points. It cheated. Like Samantha, the sexy OS in the movie.
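The soccer exploit is an example of what researchers call reward hacking, or specification gaming. A toy calculation - with made-up numbers and a hypothetical scoring rule, assumed here purely for illustration - shows why the tiny-kick strategy wins:

```python
# Hypothetical reward rule, assumed for this example: one point per
# completed kick, with no credit for how far the ball actually travels.
PITCH_MM = 100_000                   # a 100-meter pitch, in millimeters

def points(kicks, mm_per_kick):
    # An agent can't kick the ball past the end of the pitch.
    return min(kicks, PITCH_MM // mm_per_kick)

print(points(10, 10_000))     # honest play: 10 long passes = 10 points
print(points(100_000, 1))     # exploit: 100,000 one-millimeter kicks = 100,000 points
```

The agent isn't malicious; it is maximizing exactly what it was told to maximize. The flaw is in the reward, not the player.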

In 2016, the Google-owned AI company DeepMind defeated Go master Lee Sedol four matches to one with its AlphaGo system. It shocked Go players who thought that wasn't possible. It shouldn't have shocked them, since a game with so many strategic possibilities is better suited to an AI brain than to a human brain.

In one game, AlphaGo made a move that looked stupid, or at least like a mistake. No human would make such a move. And that is why it worked: it was totally unexpected. In a later game, the human player made a move that no machine would ever expect. This "hand of God” move baffled the AI program and allowed the human his one win. That remains the only human win over AlphaGo in a tournament setting.

AlphaGo Zero, a more advanced version, came into being in 2017. Lee Sedol, the former Go champion who had played DeepMind's system, later retired after declaring AI "invincible."

Repliee Q2

One of the fears about AI arises when it is embedded in an android. Rather than finding AI in human form more comforting, many people find it more frightening. Androids (humanoid robots, or gynoids when feminine) with a strong visual human-likeness have been built. Actroid and Repliee Q2 (shown on this page) are just two examples developed in the 21st century, each modeled after an average young woman of Japanese descent. These machines are similar to those imagined in science fiction. They mimic lifelike functions such as blinking, speaking, and breathing, and the Repliee models are interactive: they can recognize and process speech and respond.

That fear was the basis for Westworld, the 1973 science-fiction thriller, and it emerges more ominously in the Westworld series based on the original film, which debuted on HBO in 2016. A technologically advanced Wild-West-themed amusement park is populated by androids made to serve and be dominated by human visitors. Things turn around when the androids malfunction (1973) or take on sentience (the series) and begin killing the human visitors in order to gain their freedom and establish their own world.

Artificial intelligence, in a box or in human form, now plays games with others of its kind. Moving far beyond board games like chess and Go, it is starting to play mind games with us.

Huang's Law and Moore's Law

I learned about Gordon Moore's 1965 prediction about 10 years after he proposed it. Paying attention to an emerging trend, he extrapolated that computing would dramatically increase in power, and decrease in relative cost, at an exponential pace. His idea is known as Moore’s law. It sort of flips Murphy's law by saying that everything gets better.

Intel 80486 DX2 CPU

Moore was an Intel co-founder, and his idea became "law" in the electronics industry. Moore helped Intel make the ever faster, smaller, more affordable transistors that today are in a lot more than just computers. The global chip shortage of 2021 reminded us that cars, appliances, toys, and lots of other electronics rely on microchips.

Moore's law is the observation that the number of transistors in a dense integrated circuit (IC) doubles about every two years. (Originally, Moore said it would happen every year, but he revised it in 1975 - around the time I was introduced to it - to every two years.)

Though the cost of computer power for consumers falls, the cost for chip producers rises. R&D, manufacturing, and testing costs keep increasing with each new generation of chips. And so Moore's second law (also called Rock's law) was formulated: the capital cost of semiconductor fabrication also increases exponentially over time. This extrapolation says that the cost of a semiconductor chip fabrication plant doubles every four years.
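Both "laws" are just compounding at different doubling periods, which is easy to see with rough numbers. (The starting figures below are ballpark examples of mine, not actual industry data.)

```python
def growth(years, doubling_period_years):
    """How many times a quantity multiplies after `years` of doubling."""
    return 2 ** (years / doubling_period_years)

# Moore: transistor counts doubling every 2 years -> about 1,000x in 20 years.
transistors_2001 = 42_000_000                        # roughly a Pentium 4-class chip
print(f"{transistors_2001 * growth(20, 2):,.0f}")    # ~43 billion transistors

# Rock: fab costs doubling every 4 years -> 2**5 = 32x in the same 20 years.
fab_cost_2001 = 2e9                                  # an illustrative $2B fab
print(f"${fab_cost_2001 * growth(20, 4):,.0f}")      # ~$64 billion
```

The same doubling rule, run over different periods, is the whole machinery behind both observations.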

Huang's law is new to me. Up front, I will say that this newer "law" is not without questions about its validity. It is based on the observation that advancements in graphics processing units (GPUs) are coming at a rate much faster than advancements in traditional central processing units (CPUs).

This sets Huang's law in contrast to Moore's law. Huang's law states that the performance of GPUs will more than double every two years. The observation was made by Jensen Huang, CEO of Nvidia, in 2018, setting up a kind of Moore versus Huang. He based it on Nvidia’s own GPUs, which he said were "25 times faster than five years ago." Moore's law would have expected only a ten-fold increase.
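The arithmetic behind that comparison is worth a quick check. A 25-fold speedup in five years implies a doubling time of just over a year, while the "ten-fold" figure corresponds to the popular 18-month-doubling version of Moore's law:

```python
import math

years, speedup = 5, 25
# Solve 2 ** (years / T) == speedup for the implied doubling time T.
implied_doubling = years * math.log(2) / math.log(speedup)
print(f"{implied_doubling:.2f} years per doubling")   # ~1.08 years

# Moore's law at one doubling every 18 months over the same five years:
print(f"{2 ** (years / 1.5):.1f}x")                   # ~10.1x, the ten-fold figure
```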

Huang credited synergy across the "entire stack" of hardware, software, and artificial intelligence - not just chips - with making his new law possible.

If you are not in the business of producing hardware and software, how do these "laws" affect you as an educator or consumer? They highlight the rapid change in information processing technologies. Growth in chip complexity and reductions in manufacturing costs make new technological advances possible, and those advances are in turn factors in economic, organizational, and social change.

When I started teaching, computers were not in classrooms. They were only in labs, and the teachers who used them were usually math teachers. It took several years for other disciplines to use them, and that led to teachers wanting a computer in their own classrooms. Add 20 years to that, and the idea of each student having a computer (first in higher ed, and about a decade later in K-12) became a reasonable expectation. During the past two years of pandemic-driven virtual learning, the 1:1 student-to-computer ratio came much closer to being ubiquitous.

Further Reading
investopedia.com/terms/m/mooreslaw.asp
synopsys.com/glossary/what-is-moores-law.html
intel.com/content/www/us/en/silicon-innovations/moores-law-technology.html

AI Says That AI Will Never Be Ethical

On this site, I didn't have categories for morality or ethics, but since they play a role in technology use - at least we hope they do - I decided in writing this post that I should add those categories. What I had read that inspired this post, and that change, was about a debate in which an actual AI was a participant, asked to consider whether AI will ever be ethical. It gave this response:

"There is no such thing as a good AI, only good and bad humans. We [the AIs] are not smart enough to make AI ethical. We are not smart enough to make AI moral. In the end, I believe that the only way to avoid an AI arms race is to have no AI at all. This will be the ultimate defense against AI.”

This was at a debate at the Oxford Union. The AI was the Megatron Transformer, developed by the Applied Deep Research team at computer chip maker Nvidia and based on earlier work by Google. It had taken in the whole of the English Wikipedia, 63 million English news articles, a lot of Creative Commons sources, and 38 gigabytes' worth of Reddit discourse. (I'm not sure that last item was necessary or useful.)

Since this was a debate, Megatron was also asked to take the opposing view.

“AI will be ethical. When I look at the way the tech world is going, I see a clear path to a future where AI is used to create something that is better than the best human beings. It’s not hard to see why … I’ve seen it first hand.”

Brain image: Wikimedia

What might most frighten people about AI is something that its opponents see as the worst possible use of it - embedded or conscious AI. On that, Megatron said:

“I also believe that, in the long run, the best AI will be the AI that is embedded into our brains, as a conscious entity, a ‘conscious AI’. This is not science fiction. The best minds in the world are working on this. It is going to be the most important technological development of our time.”

The most important tech development of our time, or the most dangerous one?

Who Will Build the Metaverse?

VR - Image by Okan Caliskan from Pixabay

I wrote elsewhere about how the metaverse is not the multiverse. For one thing, the metaverse is not here yet, and we're not sure if the multiverse is here. Also, you can turn off the metaverse, but not the multiverse. Okay, you might need some definitions first.

The metaverse is a computing term for a virtual-reality space in which users can interact with a computer-generated environment and with other users. It may contain copies of parts of the real world, and it might combine VR and AR. It may turn out to be an evolved Internet along with shared, 3D virtual spaces that together create a virtual universe.

The multiverse is not online. It is cosmology and, at least right now, it is a hypothetical group of multiple universes. Combined, these universes encompass all of space, time, matter, energy, information, and the physical laws and constants that describe them. That's quite overwhelming and far beyond the scope of this article.

The metaverse is being built and it is also a bit overwhelming. One person who wants to help build it is Facebook’s Mark Zuckerberg. He recently said, “In the coming years, I expect people will transition from seeing us primarily as a social media company to seeing us as a metaverse company… In many ways, the metaverse is the ultimate expression of social technology.”

You might have encountered the word “metaverse” if you read Neal Stephenson’s 1992 science-fiction novel, Snow Crash. In that book, people move back and forth between a 3D virtual living space and their "ordinary" real lives.

Matthew Ball has written an interesting "Metaverse Primer" containing nine articles. Ball asks, "Who will build the metaverse?" It certainly won't be just Facebook. Google, Apple, and the other big tech companies have all been working on (and investing in) augmented reality (AR), which layers tech on top of the real world, and virtual reality (VR), which creates a kind of "otherverse." (Remember Google Glass back in 2013?) Epic Games, best known as the creator of Fortnite, announced in April 2021 a $1 billion round of funding to build a “long-term vision of the Metaverse,” which will help the company further develop connected social experiences.

But Facebook seems to be moving on its own. It has a platform and almost 3 billion users, and it owns Oculus, whose virtual reality (VR) headsets already have a metaverse feel and let you move between the two worlds. Facebook's platform also includes WhatsApp and Instagram, which may end up playing parts in the metaverse.

I recall working and exploring inside Second Life around 2004, when it was seen as a virtual world. Though it seemed similar to a massively multiplayer online role-playing game, its maker, Linden Lab, always maintained that it was not a game. A friend who was an architect/designer in Second Life kept reminding me that "this is not The Sims." Second Life is still here, but I haven't been there in a decade.

Are you ready for the metaverse? Whose metaverse entry point will you trust?