AI literacy might be ChatGPT’s biggest lesson for schools

Plus: The complex math of counterfactuals could help Spotify pick your next favorite song.

""
Stephanie Arnett | Envato, Getty (hand)

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

This year millions of people have tried—and been wowed by—artificial-intelligence systems. That’s in no small part thanks to OpenAI’s chatbot ChatGPT.

When it launched last November, the chatbot became an instant hit among students, many of whom embraced it as a tool to write essays and finish homework. Some media outlets went so far as to declare that the college essay is dead.

Alarmed by an influx of AI-generated essays, schools around the world moved swiftly to ban the use of the technology. 

But nearly half a year later, the outlook is a lot less bleak. For MIT Technology Review’s upcoming print issue on education, my colleague Will Douglas Heaven spoke to a number of educators who are now reevaluating what chatbots like ChatGPT mean for how we teach our kids. Many teachers now believe that, far from being just a dream machine for cheaters, ChatGPT could actually help make education better. Read his story here.

What’s clear from Will’s story is that ChatGPT will change the way schools teach. But the technology’s biggest educational outcome might not be a new way to write essays or do homework. It’s AI literacy.

AI is becoming an increasingly integral part of our lives, and tech companies are rolling out AI-powered products at a breathtakingly fast pace. AI language models could become powerful productivity tools that we use every single day. 

I’ve written a lot about the dangers associated with artificial intelligence, from biased avatar generators to the impossible task of detecting AI-generated text.

Every time I ask experts what ordinary people can do to protect themselves from these types of harm, the answer is the same. They say there is an urgent need for the public to be better informed about how AI works and what its limitations are, so that people aren’t fooled or harmed by a computer program.

Until now, the uptake of AI literacy schemes has been sluggish. But ChatGPT has forced many schools to quickly adapt and start teaching kids an ad hoc curriculum of AI 101. 

The teachers Will spoke to had already started applying a critical lens to technologies such as ChatGPT. Emily Donahoe, a writing tutor and educational developer at the University of Mississippi, thinks ChatGPT could help teachers shift away from an excessive focus on final results. Getting a class to engage with AI and think critically about what it generates could make teaching feel more human, she says, “rather than asking students to write and perform like robots.”

And because the AI model was trained on North American data and reflects North American biases, teachers are finding that it is a great way to start a conversation about bias.

David Smith, a professor of bioscience education at Sheffield Hallam University in the UK, allows his undergraduate students to use ChatGPT in their written assignments, but he will assess the prompt as well as—or even rather than—the essay itself. “Knowing the words to use in a prompt and then understanding the output that comes back is important,” he says. “We need to teach how to do that.” 

One of the biggest flaws of AI language models is that they make stuff up and confidently present falsehoods as facts. This makes them unsuitable for tasks where accuracy is extremely important, such as scientific research and health care. But Helen Crompton, an associate professor of instructional technology at Old Dominion University in Norfolk, Virginia, has found the model’s “hallucinations” to be a useful teaching tool too.

“The fact that it’s not perfect is great,” Crompton says. It’s an opportunity for productive discussions about misinformation and bias. 

These kinds of examples give me hope that education systems and policymakers will realize just how important it is to teach the next generation critical thinking skills around AI. 

For adults, one promising AI literacy initiative is Elements of AI, a free online course developed by the startup MinnaLearn and the University of Helsinki. Launched in 2018, it is now available in 28 languages. Elements of AI teaches people what AI is and, most important, what it can and can’t do. I’ve tried it myself, and it’s a great resource.

My bigger concern is whether we will be able to get adults up to speed quickly enough. Without AI literacy among the internet-surfing adult population, more and more people are bound to fall prey to unrealistic expectations and hype. Meanwhile, AI chatbots could be weaponized as powerful phishing, scamming, and misinformation tools.

The kids will be alright. It’s the adults we need to worry about.  

Deeper Learning

The complex math of counterfactuals could help Spotify pick your next favorite song

A new kind of machine-learning model built by a team of researchers at the music-streaming firm Spotify captures, for the first time, the complex math behind counterfactual analysis, a precise technique that can be used to identify the causes of past events and predict the effects of future ones. By tweaking the right variables, it’s possible to separate true causation from correlation and coincidence.
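The story doesn’t spell out Spotify’s model, but the core of a counterfactual query can be sketched in a few lines. The toy Python example below assumes a made-up structural causal model (the variable names, the coefficient, and the numbers are all hypothetical, not Spotify’s) and walks through the standard three-step recipe: infer the hidden noise from what was observed, swap in the alternative action, and recompute the outcome.

```python
# A toy counterfactual query -- NOT Spotify's actual model.
# Assumed (hypothetical) structural causal model:
#   listening_time = EFFECT_OF_MATCH * song_match + noise
# where "noise" stands in for everything specific to this user and moment.

EFFECT_OF_MATCH = 2.0  # hypothetical causal coefficient

def listening_time(song_match: float, noise: float) -> float:
    """Structural equation: outcome as a function of cause plus noise."""
    return EFFECT_OF_MATCH * song_match + noise

# Observed world: we recommended a song with match score 0.3,
# and the user listened for 1.4 minutes.
observed_match, observed_time = 0.3, 1.4

# Step 1 (abduction): infer this user's noise term from the observation.
noise = observed_time - EFFECT_OF_MATCH * observed_match  # 1.4 - 0.6 = 0.8

# Step 2 (action): intervene -- imagine a better-matched recommendation.
counterfactual_match = 0.9

# Step 3 (prediction): rerun the structural equation with the same noise.
counterfactual_time = listening_time(counterfactual_match, noise)

print(f"Observed listening time:       {observed_time:.1f} minutes")
print(f"Counterfactual listening time: {counterfactual_time:.1f} minutes")  # 2.6
```

Holding the noise term fixed is what makes this a counterfactual rather than a plain prediction: it asks what this particular user would have done under a different recommendation, which is how the technique separates causation from correlation.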

What’s the big deal: The model could improve the accuracy of automated decision-making, especially personalized recommendations, in a range of applications from finance to health care. In Spotify’s case, that might mean choosing what songs to show you or when artists should drop a new album. Read more from Will Douglas Heaven here.

Bits and Bytes

Sam Altman’s PR blitz continues
It’s fascinating to see the birth of tech folklore in real time. Two profiles of OpenAI cofounder and CEO Sam Altman, from the New York Times and the Wall Street Journal, paint a picture of Altman as a new tech luminary, akin to Steve Jobs or Bill Gates. The Times calls Altman “ChatGPT King,” while the Journal goes for “AI Crusader.” Yet more proof that the Great Man myth is still alive and well in tech.

ChatGPT invented a sexual harassment scandal and accused a real law professor 
AI models make things up, and sometimes they even offer legitimate-looking citations for their nonsense. This story about an innocent professor who was accused of sexual harassment illustrates the very real harm that can result. “Hallucinations” are already landing OpenAI in legal trouble. Last week, an Australian mayor threatened to sue OpenAI for defamation unless it corrects false claims that he served time in prison for bribery. This is something I warned about last year. (The Washington Post)

How Lex Fridman’s podcast became a safe space for the “anti-woke” tech elite
A fascinating read on the rise of Lex Fridman, the controversial and hugely popular AI researcher turned podcaster, and his complicated relationship with the AI community—and Elon Musk. (Business Insider)

Pollsters are starting to survey AIs instead of people 
Response rates to political polls have plummeted. A new research experiment is testing whether AI chatbots could help by mirroring how certain demographics would answer polling questions. Polling is already a dubious science, and this is likely to make it even more so. (The Atlantic)

Fashion brands are using AI-generated models in the name of diversity
Brands such as Levi’s and Calvin Klein are using AI-generated models to “supplement” their representation of people of various sizes, skin tones, and ages. But why not just hire diverse humans? *Screams into the void* (The Guardian)
