
Report: AI and the Future of Teaching and Learning

I see articles and posts about artificial intelligence every day. I have written here about it a lot in the past year. You cannot escape the topic of AI even if you are not involved in education, technology, or computer science. It is simply part of the culture and the media today. I see articles about how AI is being used to translate ancient texts at a speed and with an accuracy that are simply not possible for humans. I also see articles about companies now creating AI software for warfare. The former is a definite plus, but the latter is a good example of why there is so much fear about AI - justifiably so, I believe.

Many educators seemed to have a fearful initial reaction to the generative chatbots that became accessible to the public late last year and were soon being used by students to write essays and research papers. This use spread through K-12 and into colleges, and even into academic papers being written by faculty.

A chatbot powered by reams of data from the internet has passed exams at a U.S. law school after writing essays on topics ranging from constitutional law to taxation and torts. Jonathan Choi, a professor at the University of Minnesota Law School, gave ChatGPT the same test faced by students, consisting of 95 multiple-choice questions and 12 essay questions. In a white paper titled "ChatGPT Goes to Law School," he and his coauthors reported that the bot scored a C+ overall.

ChatGPT, from the U.S. company OpenAI, got most of the attention in the early part of 2023, and OpenAI received a massive injection of cash from Microsoft. In the second half of this year, we have seen many other AI chatbot players, including Microsoft and Google, which incorporated chatbots into their search engines. OpenAI predicted in 2022 that AI would lead to the "greatest tech transformation ever." I don't know if that will prove to be true, but it certainly isn't unreasonable from the view of 2023.

Chatbots use artificial intelligence to generate streams of text from simple or more elaborate prompts. They don't "copy" text from the Internet (so "plagiarism" is hard to claim) but create new text based on the data they were trained on. The results have been so good that educators have warned that chatbots could lead to widespread cheating and even signal the end of traditional classroom teaching methods.

Lately, I see more sober articles about the use of AI: more articles about teachers including lessons on the ethical use of AI by students, and about how teachers are using chatbots to help create their teaching materials. I know teachers in K-20 who attended faculty workshops this past summer to try to figure out what to do in the fall.

The U.S. Department of Education recently issued a report on its perspective on AI in education. It includes a warning of sorts: Don't let your imagination run wild. “We especially call upon leaders to avoid romancing the magic of AI or only focusing on promising applications or outcomes, but instead to interrogate with a critical eye how AI-enabled systems and tools function in the educational environment,” the report says.

Some of the ideas are unsurprising. For example, the report stresses that humans should be placed “firmly at the center” of AI-enabled edtech. That's not surprising since an earlier White House “blueprint for AI” said the same thing. And an approach to pedagogy that has been suggested for several decades - personalized learning - might be well served by AI. Artificial assistants might be able to automate tasks, giving teachers more time for interacting with students. AI can also give instant feedback to students "tutor-style."

The report's optimism appears in the idea that AI can help teachers and provide support rather than diminish their roles. Still, where AI will be in education in the next year or the next decade is unknown.

Harmful Content Online

Photo by Andrea Piacquadio

It is an important issue to cover, but, unfortunately, I am not surprised to see a report covered by the BBC under the headline "More girls than boys exposed to harmful content online."

Teenage girls are more likely than boys to be asked for nude photos online or to be sent pornography or content promoting self-harm, a report has found. The report is based on survey responses from around 6,500 young people, and it found that girls are "much more likely to experience something nasty or unpleasant online."

YouTube, WhatsApp, Snapchat, and TikTok were the most popular social media sites for both of the age groups surveyed, but more than three-quarters of 14-18-year-olds also used Instagram.

Many respondents reported spending significant amounts of time online. For instance, a third of 14-18-year-olds reported spending four hours or more online during a school day. Almost two-thirds reported spending more than four hours online on weekends. One in five 14-18-year-olds said they spent more than seven hours a day online on weekends.

One example is that one in five children and young people who took part in the research said something nasty or unpleasant had recently happened to them online. The most common experience was that "mean or nasty comments" were made about them or sent to them. But there was a difference between boys and girls when it came to the type of nasty online experience they had. Girls were more likely to have mean or nasty comments made about them or rumors spread about them.

More than 5% of girls aged 14-18 said they had been asked to send nude photos or videos online or expose themselves, three times higher than the rate among boys. More than 5% of 14-18 year-old girls also said they had seen or been sent pornography, and twice as many girls as boys reported being sent "inappropriate photos" they had not asked for. More girls than boys also reported being sent content promoting suicide, eating disorders and self-harm.

Push and Pull Learning


Recently, a former colleague asked me what I thought about push versus pull learning. I knew the terms more from social media marketing but hadn't really used them in learning situations. In marketing, an example is a newsletter you choose to subscribe to (you pull that information by choice) versus one that comes to you automatically (it is pushed at you).

In general, I think people prefer to pull (choice) over having information pushed at them. Companies might prefer to push, but that probably comes with the option to stop the push (unsubscribe).

Moving these approaches - or just the terms - to education makes some sense.

In a push approach, teachers decide on the information, approach, delivery method, and speed of delivery. It is how education has been done for centuries. It tends to start with what Bloom and his taxonomy would categorize as knowledge-level remember and understand questions. These would build toward more critical and creative thinking. With pull, students enter into creating, evaluating and analyzing that requires them to seek knowledge and understanding.

This conventional classroom-styled learning is not the only approach in the 21st century. Pull learning allows learners to access information at the point of need, in the way they prefer (in some settings), and at the speed they find comfortable. I think that the initial surge of MOOCs back in 2012 is a good example of learning that learners pulled as needed.

Pull puts learners more in control. It flips the teacher-centered learning setting. However, we must acknowledge that learning in school at all levels is still very much push learning. Fortunately, the idea that students should be able to pull some learning as they feel they need it is gaining more acceptance and is being incorporated into instructional design planning.

Currently, pull learning experiences are probably best suited to workers who have learning needs based on job roles, personal knowledge, and advancing their career interests.

Ideally, learning is "push-pull" with appropriate information provided by a push and additional information required to complete tasks and goals pulled as needed. This is not really a new approach. When you were a student, you were certainly pushed information, but you might well have gone beyond what was provided and pulled additional information that you felt you needed.

MORE
https://www.responsiveinboundmarketing.com/blog/the-difference-between-push-and-pull-learning

https://www.teachthought.com/education/push-teaching-vs-pull-teaching-thinking/

https://barkleypd.com/blog/pushing-or-pulling/

China Regulating Generative AI Use

Chinese regulators have released draft rules designed to manage how companies develop generative artificial intelligence products like ChatGPT. The draft measures from the Cyberspace Administration of China (CAC) lay out ground rules that generative AI services have to follow, including the type of content these products are allowed to generate.

One rule is that content generated by AI needs to reflect the core values of socialism and should not subvert state power. The rules are the first of their kind in the country. China is not the only country concerned with the development of generative AI. Italy banned ChatGPT in March citing privacy concerns.

Chinese technology giants Baidu and Alibaba have launched their own ChatGPT-type applications. Alibaba unveiled Tongyi Qianwen and Baidu launched its Ernie Bot.

Though some people fear AI, others will fear restrictions and rules governing tech development. I am cautious on both of those issues, but some of the CAC rules seem reasonable. One example is the requirement that the data used to train these AI models not discriminate against people based on things like ethnicity, race, and gender.

These measures are scheduled to come into effect later this year. China already has regulations around data protection and algorithm development.