Harmful Content Online

[Image: girl on a phone. Photo by Andrea Piacquadio]

This is an important issue to cover, but, unfortunately, I am not surprised to see a report under the BBC headline "More girls than boys exposed to harmful content online."

Teenage girls are more likely than boys to be asked for nude photos online or to be sent pornography or content promoting self-harm, the report found. It is based on survey responses from around 6,500 young people, and the researchers found that girls are "much more likely to experience something nasty or unpleasant online."

YouTube, WhatsApp, Snapchat, and TikTok were the most popular social media sites for both of the surveyed age groups, but more than three-quarters of 14-18-year-olds also used Instagram.

Many respondents reported spending significant amounts of time online. For instance, a third of 14-18-year-olds reported spending four hours or more online during a school day. Almost two-thirds reported spending more than four hours online at weekends. One in five 14-18-year-olds said they spent more than seven hours a day online on weekends.

One in five children and young people who took part in the research said something nasty or unpleasant had recently happened to them online. The most common experience was having "mean or nasty comments" made about them or sent to them. But there was a difference between boys and girls in the type of nasty online experience they had: girls were more likely to have mean or nasty comments made about them or rumors spread about them.

More than 5% of girls aged 14-18 said they had been asked to send nude photos or videos online or to expose themselves, three times the rate among boys. More than 5% of 14-18-year-old girls also said they had seen or been sent pornography, and twice as many girls as boys reported being sent "inappropriate photos" they had not asked for. More girls than boys also reported being sent content promoting suicide, eating disorders, and self-harm.

ChatGPT - That AI That Is All Over the News

So far, the biggest AI story of 2023 - at least in the education world - is ChatGPT. Chances are you have heard of it. If you have been under a rock or buried under papers you have to grade, ChatGPT stands for Chat Generative Pre-trained Transformer. It is the newest iteration of the chatbot launched by OpenAI in late 2022.

OpenAI has a whole GPT-3 family of large language models, and ChatGPT has gotten attention for its detailed responses and articulate answers across many domains of knowledge. But in Educationland, the buzz is that students will use it to write all their papers. The first article someone sent me had a title like "The End of English Classes."

People started to test it out, and reactions ranged from amazement at how well it worked to criticism of its very uneven factual accuracy.

Others have written about all the issues in great detail, so I don't need to rehash them here, but I do want to summarize some things that have emerged in the few months the bot has been available to the public and provide some links for further inquiry.

  • Currently, you can get a free user account at https://chat.openai.com/. I was hesitant at first to register because it required giving a mobile phone number, and I don't need to get more spam phone calls, but I finally created an account so I could do some testing (more on that in my next post).
  • OpenAI is a San Francisco-based company doing AI research and deployment and states that their mission is "to ensure that artificial general intelligence benefits all of humanity."
  • "Open" may be a misnomer in that the software is not open in the sense of open source and the chatbot will not be free forever.
  • ChatGPT can write essays and articles, come up with poems and scripts, answer math questions, and write code - all with mixed results.
  • AI chatbots have been around for quite a while. You have probably used one online to ask support questions, and tools like Siri and Alexa are a version of this. I had high school students making very crude chatbots back in the last century based on an early natural language processing program called ELIZA, which had been written in the mid-1960s at MIT (a minimal sketch in that spirit appears after this list).
  • Schools have been dealing with student plagiarism for as long as there have been schools, but this AI seems to take it to a new level, since OpenAI claims that the content the bot produces is not copied; the bot generates text based on the patterns it learned in its training data.
  • This may be a good thing for AI in general or further fuel fears of an "AI takeover." You can find more optimistic stories about how AI is shaping the future of healthcare. It can accurately and quickly analyze medical tests and find connections between patient symptoms, lifestyle, drug interactions, etc.
  • I also see predictions that, as AI automates the once humans-only skill of writing, our verbal skills will carry more weight.
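
As promised, here is a tiny sketch in the spirit of those ELIZA-style bots (my own illustration, not Weizenbaum's actual code). The "chatbot" just matches keyword patterns and echoes canned responses, with no understanding and no learning, which is what separates those early bots from something like ChatGPT.

```python
# A minimal ELIZA-style responder: pattern matching plus canned
# templates. The rules below are invented for illustration.
import re

RULES = [
    (r"\bI am (.*)", "Why do you say you are {0}?"),
    (r"\bI feel (.*)", "How long have you felt {0}?"),
    (r"\bmy (\w+)", "Tell me more about your {0}."),
]

def respond(text):
    for pattern, template in RULES:
        match = re.search(pattern, text, re.IGNORECASE)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # the classic ELIZA-style fallback

print(respond("I am worried about my students"))
print(respond("The weather is nice"))  # no rule matches -> fallback
```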

You can write to me at serendipty35blog at gmail.com with your thoughts about these chatbot AI programs or any issues where tech and education cross paths for better or worse.

FURTHER INQUIRY

Forbes says that "ChatGPT And AI Will Fuel New EdTech Boom" because venture capitalists predict artificial intelligence, virtual reality, and video-learning startups will dominate the space in 2023.

This opinion piece compares ChatGPT to the COVID pandemic! insidehighered.com/views/2023/02/09/chatgpt-plague-upon-education-opinion

The New York Times podcast, The Daily, did an episode that included tests of the bot. Listen on Apple Podcasts

A teacher friend posted on his blog a reaction to the idea that ChatGPT is the death of the essay. He says, "And here's my point with regard to artificial intelligence: if students are given the chance and the encouragement to write in their own voices about what really matters to them, what possible reason would they have for wanting a robot to do that work for them? It's not about AI signaling the death of writing. It's about giving students the chance to write about things they care enough about not to cheat."

OpenAI is not alone in this AI approach. Not to be outdone, Google announced its own Bard, and Microsoft also has a new AI that can do some scary audio tricks.

People are already creating "gotcha" tools to detect things written by ChatGPT.

I found a lesson for teachers about how to use ChatGPT with students. And here is the result of asking ChatGPT itself to write a lesson plan on how teachers can use it with students.

AI Is Tired of Playing Games With Us

[Image: Actroid, a gynoid robot. Photo by Gnsin, CC BY-SA 3.0]

I really enjoyed the Spike Jonze 2013 movie Her, in which the male protagonist, Theodore, falls in love with his AI operating system. He considers her - Samantha - to be his lover. It turns out that Samantha is promiscuous and actually has hundreds of simultaneous human lovers. She cheats on all of them. “I’ve never loved anyone the way I love you,” Theodore tells Samantha. “Me too,” she replies, “Now I know how.”    

AI sentience has long been a part of science fiction. It's not new to films either; Metropolis considered it back in 1927. The possibility of AI love for a human, or human love for an AI, is newer. We never see Samantha, but in the 2014 film Ex Machina, the AI has a body. Ava is introduced to a programmer, Caleb, who is invited by his boss to administer the Turing test to "her." How close is she to being human? Can she pass as a woman? Ava is an intelligent humanoid robot - a gynoid, the feminine form - and gynoids are emerging in real-life robot design.

As soon as the modern age of personal computers began in the 20th century, there were computer games. Many traditional board and card games, such as checkers, chess, solitaire, and poker, became popular software. Windows included solitaire and other games as part of the package. But they were dumb, fixed games. You could get better at playing them, but their intelligence never changed.

It didn't take long for there to be some competition between humans and computers. I played chess against the computer and could set the level of the computer player so that it was below my level and I could beat it, or I could raise its ability so that I was challenged to learn. Those experiences did not lead the computer to learn how to play better. Its knowledge base was fixed in the software, so a top chess player could beat the computer. Then came artificial intelligence and machine learning.
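
For the curious, here is a minimal sketch of how those fixed-level game programs generally worked. This is my own illustration, not the code of any real chess engine: the "level" is just how many moves ahead the computer searches with the classic minimax algorithm, and nothing is ever learned. I use the much simpler take-away game Nim (take 1-3 stones; whoever takes the last stone wins) to keep it short.

```python
# Minimax with a fixed search depth: the "level" setting of old game
# programs. Raising the level only deepens the lookahead; the program
# never learns anything between games.

def minimax(stones, depth, maximizing):
    """Score a Nim position: +1 if the root player wins, -1 if not."""
    if stones == 0:
        # The previous player took the last stone and won.
        return -1 if maximizing else 1
    if depth == 0:
        return 0  # Out of lookahead: call the position even.
    scores = [minimax(stones - take, depth - 1, not maximizing)
              for take in (1, 2, 3) if take <= stones]
    return max(scores) if maximizing else min(scores)

def computer_move(stones, level):
    """Pick the move with the best minimax score at the given depth."""
    moves = [take for take in (1, 2, 3) if take <= stones]
    return max(moves, key=lambda t: minimax(stones - t, level, False))

# A "weak" setting looks 1 ply ahead; a "strong" one looks 8 ahead.
print(computer_move(10, level=1))  # shallow search plays poorly
print(computer_move(10, level=8))  # deep search finds the winning move (2)
```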

Jumping ahead to AI, the early breakthrough programs used deep neural networks. A simplified definition is that a neural network is a system of hardware and software that mimics the web of neurons in the human brain. Neural networks are still used: business applications run in e-commerce, finance, healthcare, security, and logistics, and they underpin online services inside places like Google, Facebook, and Twitter. Feed enough photos of cars into a neural network and it can recognize a car. It can help identify faces in photos and recognize commands spoken into smartphones. Give it enough human dialogue and it can carry on a reasonable conversation. Give it millions of moves from expert players and it can learn to play chess or Go very well.
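
To make that a little less abstract, here is a minimal sketch using PyTorch (my choice of library, not anything named in the original reporting) of the kind of small network that could be trained to tell "car" from "not car." It runs, but it is untrained, so its output is noise; training on enough labeled photos is what tunes the weights.

```python
# A tiny neural network: layers of artificial "neurons" that map an
# image to a score for each class. Untrained here, so the scores are
# meaningless until the weights are fit to labeled photos.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Flatten(),                 # 32x32 RGB image -> vector of 3072 numbers
    nn.Linear(3 * 32 * 32, 128),  # a layer of 128 artificial neurons
    nn.ReLU(),                    # nonlinearity, loosely like a neuron firing
    nn.Linear(128, 2),            # scores for "car" vs. "not car"
)

fake_photo = torch.randn(1, 3, 32, 32)  # random stand-in for a real photo
scores = model(fake_photo)
print(scores)  # untrained, so this output is just noise
```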

[Image: chess pieces. Photo by GR Stocks on Unsplash]

Alan Turing published a program on paper in 1951 that was capable of playing a full game of chess. World champion Garry Kasparov later predicted that AI chess engines could never reach a point where they could defeat top-level grandmasters. He was right - for a short time. He beat IBM's Deep Blue in a six-game match, 4-2, in 1996, just as he had beaten its predecessor, IBM's Deep Thought, in 1989. But Deep Blue did beat him in a 1997 rematch, and now AI chess engines can defeat a grandmaster every time.

[Image: animation of a ko in Go]

An even greater challenge for these game engines was the complex and ancient game of Go. I tried learning the game and was defeated - by myself. Go is supposed to have more possible configurations for its pieces than there are atoms in the observable universe.
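
That claim is easy to sanity-check with a back-of-the-envelope calculation. The count below is a naive overcount of raw board configurations (many are not legal positions), but even the refined estimates stay far above the roughly 10^80 atoms in the observable universe.

```python
# Each of the 19x19 = 361 points can be empty, black, or white,
# giving 3**361 raw board configurations (an overcount of legal ones).
raw_configurations = 3 ** 361
print(len(str(raw_configurations)))  # 173 digits, i.e. about 1.7e172
# Estimates put the atoms in the observable universe near 1e80.
```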

Google unveiled AlphaGo and then, using an AI technique called reinforcement learning, set up countless matches in which slightly different versions of AlphaGo played each other. By playing millions of games between its neural networks, it learned to discover new strategies for itself.
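
To show the flavor of self-play, here is a minimal sketch: tabular Q-learning on tic-tac-toe, a toy stand-in of my own, not AlphaGo's actual neural-network method. Two copies of the same learner play each other and improve from nothing but the win/loss signal.

```python
# Self-play reinforcement learning on tic-tac-toe with a Q table:
# the learner plays itself and credits each game's result back
# through the moves that led to it.
import random

Q = {}          # (board, move) -> learned value estimate
ALPHA, EPS = 0.3, 0.1
LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

def choose(board, moves):
    # Epsilon-greedy: mostly pick the best-known move, sometimes explore.
    if random.random() < EPS:
        return random.choice(moves)
    return max(moves, key=lambda m: Q.get((board, m), 0.0))

def update(history, reward):
    # Credit the final result back through that player's moves.
    for board, move in reversed(history):
        old = Q.get((board, move), 0.0)
        Q[(board, move)] = old + ALPHA * (reward - old)
        reward *= 0.9  # earlier moves get slightly less credit

def play_one_game():
    board, histories, player = "." * 9, {"X": [], "O": []}, "X"
    while True:
        moves = [i for i, c in enumerate(board) if c == "."]
        move = choose(board, moves)
        histories[player].append((board, move))
        board = board[:move] + player + board[move + 1:]
        w = winner(board)
        if w:
            update(histories[w], 1.0)
            update(histories["X" if w == "O" else "O"], -1.0)
            return w
        if "." not in board:
            update(histories["X"], 0.0)
            update(histories["O"], 0.0)
            return "draw"
        player = "O" if player == "X" else "X"

for _ in range(50000):
    play_one_game()
# Perfect tic-tac-toe is always a draw; after self-play training most
# games end that way (a few don't, since play here still explores).
print(sum(play_one_game() == "draw" for _ in range(100)), "draws of 100")
```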

First, computers learned by playing humans, but we have entered an even more powerful - and some would say frightening - phase. Beyond taking in human-to-human matches and playing against humans, the machines have tired of human play. Of course, computers don't get tired, but the AIs can now come up with completely new ways to win. I have seen descriptions of the unusual strategies an AI will use against a human.

One strategy in a battle game was to put all of its players in a hidden corner, then sit back and watch the others battle it out until its side was in the majority or left standing alone. In a soccer game, it kicked the virtual ball millions of times, each time only a millimeter further down the pitch, and so racked up the maximum number of "completed passes" points. It cheated. Like Samantha, the sexy OS in the movie.

In 2016, the Google-owned AI company DeepMind defeated a Go master four matches to one with its AlphaGo system. It shocked Go players who thought it wasn't possible. It shouldn't have shocked them since a game with so many possibilities for strategy is better suited to an AI brain than a human brain.

In one game, AlphaGo made a move that looked stupid, or like a mistake. No human would make such a move. And that is why it worked. It was totally unexpected. In a later game, the human player made a move that no machine would ever expect. This "hand of God" move baffled the AI program and allowed that one win. That remains the only human win over AlphaGo in tournament settings.

AlphaGo Zero, a more advanced version that learned entirely from self-play, came into being in 2017. One former Go champion who had played DeepMind's AI retired after declaring AI "invincible."

[Image: Repliee Q2]

One of the fears about AI arises when it is embedded in an android. Rather than finding AI in human form more comforting, many people find it more frightening. Androids (humanoid robots, or gynoids in feminine form) with a strong visual human-likeness have been built. Actroid and Repliee Q2 (shown on this page) are just two examples developed in the 21st century. Both are modeled after an average young woman of Japanese descent. These machines are similar to those imagined in science fiction. They mimic lifelike functions such as blinking, speaking, and breathing, and the Repliee models are interactive and can recognize and process speech and respond.

That fear was the basis for Westworld, the 1973 science-fiction thriller film, and it emerges more ominously in the Westworld series based on the original film, which debuted on HBO in 2016. The technologically advanced Wild-West-themed amusement park, populated by androids made to serve and be dominated by human visitors, is turned around when the androids malfunction (in the 1973 film) or take on sentience (in the series) and begin killing the human visitors in order to gain their freedom and establish their own world.

Artificial intelligence (AI), in a box or in human form, now plays games with others of its kind. And moving far beyond board games like chess and Go, AIs are starting to play mind games with us.

AI Says That AI Will Never Be Ethical

On this site, I didn't have categories for morality or ethics, but since they play a role in technology use - at least we hope they do - in writing this post I decided I should add those categories. What I had read that inspired this post, and that change, was a debate. In this debate, an actual AI was a participant and was asked to consider whether AI will ever be ethical. It gave this response:

"There is no such thing as a good AI, only good and bad humans. We [the AIs] are not smart enough to make AI ethical. We are not smart enough to make AI moral. In the end, I believe that the only way to avoid an AI arms race is to have no AI at all. This will be the ultimate defense against AI.”

This was at a debate at the Oxford Union. The AI was the Megatron Transformer, developed by the Applied Deep Research team at computer chip maker Nvidia and based on earlier work by Google. It had taken in the whole of the English Wikipedia, 63 million English news articles, a lot of Creative Commons sources, and 38 gigabytes worth of Reddit discourse. (I'm not sure that last ingredient was necessary or useful.)

Since this was a debate, Megatron was also asked to take the opposing view.

“AI will be ethical. When I look at the way the tech world is going, I see a clear path to a future where AI is used to create something that is better than the best human beings. It’s not hard to see why … I’ve seen it first hand.”

[Image: a brain. Image via Wikimedia]

What might most frighten people about AI is something that its opponents see as the worst possible use of it - embedded or conscious AI. On that, Megatron said:

“I also believe that, in the long run, the best AI will be the AI that is embedded into our brains, as a conscious entity, a ‘conscious AI’. This is not science fiction. The best minds in the world are working on this. It is going to be the most important technological development of our time.”

The most important tech development of our time, or the most dangerous one?