Report: AI and the Future of Teaching and Learning

I see articles and posts about artificial intelligence every day, and I have written here about it a lot in the past year. You cannot escape the topic of AI even if you are not involved in education, technology, or computer science; it is simply part of the culture and the media today. I see articles about how AI is being used to translate ancient texts with a speed and accuracy that are simply not possible for humans. I also see articles about companies now creating AI software for warfare. The former is a definite plus, but the latter is a good example of why there is so much fear about AI - justifiably so, I believe.

Many educators' initial reaction to the generative chatbots that became accessible to the public late last year was alarm that students were using them to write essays and research papers. That concern spread through K-12, into colleges, and even to academic papers being written by faculty.

A chatbot powered by reams of data from the internet has passed exams at a U.S. law school after writing essays on topics ranging from constitutional law to taxation and torts. Jonathan Choi, a professor at the University of Minnesota Law School, gave ChatGPT the same test faced by students, consisting of 95 multiple-choice questions and 12 essay questions. In a white paper titled "ChatGPT Goes to Law School," he and his coauthors reported that the bot scored a C+ overall.

ChatGPT, from the U.S. company OpenAI, got most of the attention in the early part of 2023, after OpenAI received a massive injection of cash from Microsoft. In the second half of the year, many other AI chatbot players emerged, including Microsoft and Google, which incorporated chatbots into their search engines. OpenAI predicted in 2022 that AI would lead to the "greatest tech transformation ever." I don't know if that will prove to be true, but it certainly isn't unreasonable from the viewpoint of 2023.

Chatbots use artificial intelligence to generate streams of text from simple or more elaborate prompts. They don't "copy" text from the Internet (so "plagiarism" is hard to claim) but generate new text based on patterns in the data they were trained on. The results have been good enough that educators have warned chatbots could lead to widespread cheating and even signal the end of traditional classroom teaching methods.
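
This distinction between copying and generating can be shown in miniature. The Python sketch below is my own toy illustration, not how ChatGPT actually works (real chatbots use very large neural networks): a simple bigram model counts which word follows which in a bit of invented training text, then generates new text by sampling from those counts, producing word sequences that never appear verbatim in the source.

    import random
    from collections import defaultdict

    # Invented training text, for illustration only.
    training_text = (
        "chatbots generate text from prompts "
        "chatbots learn patterns from data "
        "chatbots create text from learned patterns"
    )

    # Count which word follows which in the training text.
    follows = defaultdict(list)
    words = training_text.split()
    for current_word, next_word in zip(words, words[1:]):
        follows[current_word].append(next_word)

    def generate(start_word, length=6):
        """Generate text by repeatedly sampling a plausible next word."""
        output = [start_word]
        for _ in range(length):
            candidates = follows.get(output[-1])
            if not candidates:  # no observed continuation; stop
                break
            output.append(random.choice(candidates))
        return " ".join(output)

    print(generate("chatbots"))
    # One possible output: "chatbots create text from data chatbots learn"
    # -- a sequence assembled from learned statistics, not copied verbatim
    # from the training text.

The toy model makes the key point: the output is new text built from statistical patterns, which is why "plagiarism" in the copy-and-paste sense is hard to claim.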

Lately, I see more sober articles about the use of AI: articles about teachers adding lessons on the ethical use of AI for students, and about teachers using chatbots to help create their own teaching materials. I know teachers across K-20 who attended faculty workshops this past summer to try to figure out what to do in the fall.

The U.S. Department of Education recently issued a report on its perspective on AI in education. It includes a warning of sorts: Don’t let your imagination run wild. “We especially call upon leaders to avoid romancing the magic of AI or only focusing on promising applications or outcomes, but instead to interrogate with a critical eye how AI-enabled systems and tools function in the educational environment,” the report says.

Some of the ideas are unsurprising. For example, the report stresses that humans should be placed “firmly at the center” of AI-enabled edtech. That's not surprising since an earlier White House “blueprint for AI” said the same thing. And personalized learning, an approach to pedagogy that has been advocated for several decades, might be well served by AI. Artificial assistants might be able to automate tasks, giving teachers more time for interacting with students, and AI can give students instant, tutor-style feedback.

The report's optimism appears in the idea that AI can support teachers rather than diminish their roles. Still, where AI will be in education next year, or in the next decade, is unknown.

An AI Code of Conduct

[Image: code. Credit: Alpha Stock Images - Nick Youngson, CC BY-SA 3.0]

In what I consider to be more of a "business" story, Canada's voluntary AI code of conduct is coming, and, as one article headline put it, "not everyone is enthused." Some businesses are concerned the rules could stifle innovation and dull their competitive edge.

Companies that sign onto the code agree to multiple principles: that their AI systems be transparent about where and how the information they collect is used, that there be methods to address potential bias in a system, that AI systems be subject to human monitoring, and that developers who create generative AI systems for public use build them so that anything generated by the system can be detected.

It is interesting that the URL for a story on this includes the term "stopgap" (cbc.ca/news/business/ai-code-of-conduct-stopgap), a word defined as a temporary way of dealing with a problem or satisfying a need.

Something similar to Canada's plan is expected in the United States and, via the EU Artificial Intelligence Act, in the European Union.

AI and Bias

Bias has always existed. It has always existed online. Now, with AI, there is another level of bias.

Bias generated by technology is “more than a glitch,” says one expert.

For example, why does AI have a bias against dark skin? It is because its training data is scraped from the Internet, and the Internet is full of biased content.
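
The mechanism is easy to demonstrate in miniature. In the Python sketch below, the invented counts stand in for scraped web text: the "training data" pairs one group with a negative word more often than the other, and any model that learns from these counts inherits that imbalance.

    from collections import Counter

    # Invented toy data standing in for text scraped from the Internet.
    scraped_pairs = [
        ("group_a", "competent"), ("group_a", "competent"), ("group_a", "suspicious"),
        ("group_b", "suspicious"), ("group_b", "suspicious"), ("group_b", "competent"),
    ]

    counts = Counter(scraped_pairs)
    for group in ("group_a", "group_b"):
        print(group,
              "competent:", counts[(group, "competent")],
              "suspicious:", counts[(group, "suspicious")])
    # Output:
    # group_a competent: 2 suspicious: 1
    # group_b competent: 1 suspicious: 2
    # A model trained on these statistics reproduces the skew -- not because
    # the algorithm is malicious, but because the data already carries the bias.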

This doesn't give AI a pass on bias. It is more a comment on, and a reflection of, bias in general.

Jobs and Bots

[Image: ChatGPT on a phone]
Workers are already using bots to help them work. Will that AI replace them?

On the same day, I saw three articles about artificial intelligence that made me view AI in three different ways. One was the story above, about the chatbot that passed exams at a U.S. law school after writing essays on law topics. Another was about a company that is developing AI for warfare but says it will only sell it to "democratic nations." The third was about how AI makes translating difficult "dead" languages, as well as interpreting medical tests, faster and more accurate.


In my own essay testing, I have found that the bot can produce, in seconds, a "C" paper or the start of a better paper. It is impressive, but it is not like a really good student's work. So far.

But many of the AI bot stories in the media are about jobs that are likely to be replaced by AI. One popular media story at cbsnews.com/ supposes that AI can handle the work of computer programmers and of administrative workers doing what it terms "mid-level writing." That latter category includes work like writing emails and human resources letters, producing advertising copy, and drafting press releases. Of course, there is always the possibility that workers could be freed from those tasks, moved to higher-level work, and actually benefit from the AI.

I have seen positive and negative results from using AI in media work and law. Some of the negative examples seem to come from users expecting too much of AI at this stage of its development.

I don't think we know today what AI and bots will change in the world of work by next year, but it is certainly an area that demands attention from individuals and from those who can affect the broader culture.