Jan. 28, 2026

AI Literacy Part 1 "Where Angels Fear to Tread" with Sam Wineburg

AI Literacy Part 1 "Where Angels Fear to Tread" with Sam Wineburg

Schools and teachers are being directed to teach and learn "AI literacy," but do we know enough to do that responsibly?

Over the last two years, teachers and schools have felt immense pressure to incorporate AI literacy into their curricula. In the fall of 2024, California became the first state to pass a law mandating AI literacy instruction in schools, and several others have since followed suit. In the summer of 2025, the Department of Education released the AI Action Plan for Education, which stated in part: "The Action Plan encourages schools to teach AI literacy and supports the responsible integration of AI in classrooms. AI is seen as a key education tool to enhance individual student preparation for the real world and to bolster the United States as a leader in AI."

Most major AI companies have pledged significant capital to train teachers or educate students in AI literacy. Google alone has committed over 40 million dollars toward these initiatives, while OpenAI, Microsoft, and NVIDIA have all launched similar donation programs.

But do we actually know what "AI literacy" means? Sam Wineburg doesn't think so. Sam is a professor emeritus of education and history at Stanford and the co-founder of the Digital Inquiry Group. He previously led a landmark study for the Stanford History Education Group (SHEG) that exposed how standard school methods for teaching web literacy were failing K-12 students.

In part one of this two-part miniseries, Wineburg shares his observations on how educators have gotten "literacy" wrong in the past. He suggests there are more responsible ways to adapt to transformative new technologies than to hastily stand up literacy guidelines that may repeat old mistakes.

Justin Reich: From the MIT studios of the Teaching Systems Lab, this is Teach Lab, a podcast about the art and craft of teaching. I'm Justin Reich. If you've been listening, and many of you have, thank you. We recently wrapped up our seven-part series The Homework Machine, about AI and K-12 education, and one topic that kept coming up as we worked on those episodes was AI literacy.

Teachers mentioned it, school leaders talked about the need to promote it. Right around the time The Homework Machine launched, President Trump's Department of Education announced an AI Action Plan for Education that stated, and I quote, "The Action Plan encourages schools to teach AI literacy and supports the responsible integration of AI in classrooms. AI is seen as a key education tool to enhance individual student preparation for the real world and to bolster the United States as a leader in AI."

Now, most of the major AI companies have pledged money to train teachers or educate students in AI literacy. Google has already donated at least $40 million to this end; OpenAI, Microsoft, and NVIDIA have all made donations or launched programs. Now, I am something of an AI literacy skeptic. I'm very sympathetic to teachers or principals who want to provide some information to students about AI, and I know that there's a sense out there that if we don't teach our kids enough about AI, they're gonna fall behind.

But as I've written in a few opinion pieces, even if the notion of AI literacy is a good one, we don't actually know what AI literacy should be or what AI literacy instruction should be. And I actually think there are good reasons to believe that the whole framing around AI literacy might be the wrong way of thinking about preparing young people for the future to begin with.

Generative AI is a new technology, and practitioners in the adult world are still figuring out how to deploy it, how to use it, how to take advantage of it. It keeps changing, and as we've discussed, its performance is uneven across tasks. We're still learning about that jagged frontier. So when I say we don't know how to teach AI literacy, I'm thinking about how little we know at the moment about how to use AI effectively.

But I'm also thinking about the past. Schools have historically had a hard time rapidly creating new instructional approaches around new types of media, and generative AI is a new medium. In particular, I am haunted by the story of how schools have taught web search literacy for decades. We have taught tens of millions of students demonstrably wrong ways to search the web and verify information. We have done harm.

And I know this because of Sam Wineburg's research. Sam is a professor emeritus of education and history at Stanford. He currently leads the Digital Inquiry Group. He co-led a study for the Stanford History Education Group that exposed how standard school methods for teaching web literacy were failing students in K-12 schools. And Sam has been a guest on Teach Lab in the past.

So in the fall of 2024, when California announced a law mandating AI literacy in schools, I wanted to know what Sam thought about it. We called him up and talked with him; that was over a year ago now. I want to share that conversation here as a lead-in to our two-part series on AI literacy. Next time, we're gonna hear from teachers and school leaders who are struggling with the question of what to teach students about AI in schools. But first, our conversation with Sam Wineburg. Keep in mind, we talked back in October of 2024, when California had just passed its AI literacy mandate. Many other states and the federal government have since followed suit, and we're gonna begin the conversation talking about California's law.

Justin Reich: Sam, welcome to Teach Lab. 

Sam Wineburg: Oh, thank you for having me. 

Justin Reich: So, Sam, California recently passed a law requiring their Instructional Quality Commission to develop and teach material related to AI literacy. Here's the language: "AB 2876 will require the Instructional Quality Commission to begin incorporating artificial intelligence literacy content into the mathematics, science, and history-social science curriculum frameworks when those frameworks are next revised after January 1st, 2025." Sam, I'm curious what your initial thoughts or reactions are to that.

Sam Wineburg: My initial reaction is that legislative hand-waving is very easy. The hard part is when there is a budget line that accompanies that particular mandate, a set of teacher professional development experiences linked to it, and instructional materials developed for students that are not freestanding in some non-existent AI course but actually correlate with the kinds of topics and subjects that teachers are mandated to teach already.

Without all of those things in place, it's a great window display, like the Christmas windows coming up. Window displays are very, very important, but beyond a window display, it needs to have teeth for it to really have any effect.

Justin Reich: If you were on the Instructional Quality Commission and it was your job to start trying to incorporate AI literacy content, probably into history and social studies, which you know best, but also math, science, whatever else, where would you start? What would be your strategy for figuring out how to do that?

Sam Wineburg: Well, I certainly wouldn't start with AI literacy. 

Justin Reich: Great. 

Sam Wineburg: There's already a California mandate in place for media and digital literacy, to teach students how to search. And one of the biggest problems we know about generative AI is that it makes stuff up: facts, things that did not exist, references that no one can find in the literature. And so if you don't know how to search, then you roll over and play dead in front of whatever it is that the model tells you. So do you start with AI literacy, or do you start with its prerequisite? Do you learn algebra and trigonometry before you take calculus, or do you go straight to calculus? That's not a question; that's a rhetorical question, because the answer's obvious.

Justin Reich: The prerequisite that Sam's talking about is web literacy, how to find and evaluate information on the internet. And when it comes to that basic web literacy, Sam says, we have a lot of work to do. 

Sam Wineburg: People don't know how to search. They believe what their eyes show them. They look at websites and, if they're intelligent, they say: I'm a good reader. I got very good SAT scores. I got very good GRE scores. I'm a smart person. I am good at reading comprehension, and I can read critically. Now, if you are reading about something where you have a dramatic amount of subject matter knowledge, that works.

So, for example, if you are a historian of the Civil War and you come to a Civil War site, then you are bringing an armamentarium of knowledge with which you can check what the site says against the background knowledge that you bring. If you're a virologist and you're reading about new scientific research on vaccines, then you are in a good position to evaluate that site.

But if I am going to a site about new vaccines, and the last time I seriously dealt with vectors and the parts of the cell was ninth-grade biology, then I'm really not in a position to know, despite the fact that I'm a very good reader.

And I actually know some statistics, so I can evaluate some statistics. But given the sophistication of what goes on on the web, when we as informed citizens are trying to learn about things for which we don't have a lot of background knowledge, we have to be very, very careful.

And our research shows that people generally aren't. They trust the aesthetics of the site. They trust a variety of proxies that are easily gameable. For instance, one of the most common is that people impute some meaning to the "dot org" top-level domain, largely because of a mythology they believe in: that a dot-org site had to go through some type of vetting process, some type of government process, in order to be registered as a dot-org. Well, right now, with 15 minutes and $15, you can get "I love my goldfish dot org," and you've got yourself a dot-org site.

Justin Reich: I do wanna confirm that ilovemygoldfish.org appears to be available. Let me pull out two important points here.

The first point is that it's not just that informed citizens misread the web with these myths; in many, many cases, schools taught them these myths. Would you say that's fair? That the instruction young people received about searching, about web literacy, over the last 20 years has pretty regularly been somewhere between unhelpful and wrong?

Sam Wineburg: Exactly. The most used tool for teaching web credibility is a checklist called the CRAAP List. 

Justin Reich: Breaking in here for a moment. The CRAAP List, or CRAAP test, is a framework for evaluating online sources. CRAAP is an acronym for Currency, Relevance, Authority, Accuracy, and Purpose. The idea is that when you're on a website, you run through a checklist to see how the site lives up to those five criteria.

Sam Wineburg: There are variations of it. Some are 10 questions; I've seen one with 30 questions. And the vast majority of the questions essentially keep the user's eyes glued to the screen. Is there a physical address? Are there pop-up ads? Is the information on the About page complete? Is the URL a dot-org?

All of these things essentially put the person who's viewing the site into the clutches of the site designer. Now, if we lived in a world where everyone who created a website was an honest altruist, that would be fine. But the web is big business. It's a scam machine. It is a lobby machine. It is the way that public policy is influenced, the way that keywords are manipulated, the way that metadata on websites is gamed in order to elevate a site, or to create a data void so that searching for certain keywords brings you to a preordained conclusion.

You think Google is a neutral mechanism, but you have no awareness of the understructure that actually leads you to a particular place. So these things are not taught, and what is taught, as you just said, Justin, are some very, very antiquated ideas: that you should look at the About page, and that if the language of the About page sounds objective and not conspiratorial, then the site is somehow elevated in its authenticity.

Well, if I wanna scam you, and I'm intelligent and I know how to use the English language, or I know how to ask an LLM to rewrite my prose so it sounds objective and scientific, then that's what I'm gonna slap on my About page.

And this is stuff that's still taught. 

Justin Reich: So let's put a pin in that for a minute: one of the things your research has shown is that some of the earliest ways we started teaching people about web literacy, although they sounded reasonable from a certain point of view (it sounds reasonable to read closely and think critically about a web page), turn out to be not that helpful.

Sam Wineburg: I can add to that, because there's an underlying set of assumptions that have been carried over from an analog world and applied to a digital reality where they don't match: these ways of reading, where you should mine a particular text or a particular site for everything that it can disclose.

They are carryovers from a time when sources were scarce and we really had to evaluate each one, because sources were limited. I grew up with three television stations. I grew up with a daily newspaper. If I didn't understand something in the daily newspaper, yes, I had the encyclopedia, but I was taught to read carefully and read for bias.

Well, we've taken this idea of careful reading into an environment where there's an overabundance of sources, where the whole model of the internet is, as Tim Wu has taught us, an attention economy, where the goal is to keep your eyes on a site for as long as possible. So if a student is looking at a site that is not what it says it is, the longer that student stays on that site, the greater the probability that they will get sucked down the rabbit hole.

The typical user will come to a site, and their initial question is: is this site credible? Implicit in that is that they have the wherewithal to make that judgment of credibility. What experts do, what the professional fact-checkers at some of the nation's leading news outlets do, the ones who crack these sites in less than 30 seconds, is begin with an act of humility. They look at a site that looks like it's a major medical association, and they're not familiar with the pediatric medical associations.

And so their first question is not, is this site credible? The first question is: do I know what I'm looking at? It looks like a medical site. It's got a bunch of doctors' names. But I'm not sure. So think about the direction of attention. Rather than going only from my face to the screen, it goes from my face to the screen and then back to my initial sizing-up of what I think the site is.

And that is the set of understandings that undergirds a behavior we call lateral reading. Lateral reading is the recognition that, if we wanna understand reputation, a site cannot attest to its own reputation. If a plumber comes to your door and says, "I'm in the neighborhood, and I've noticed you've got some rusting pipes outside, and I'm a really honest guy, and I'm gonna give you a really good price," he might be a great person. He might indeed be honest. But he cannot attest to his own reputation.

Similarly, when I come to a site called the International Life Sciences Institute, at a dot-org address, they will show you their scientific advisory board. They will show you that they have refereed publications. They will show you wonderful pronouncements about how they are non-partisan, and that may be the case. But they cannot attest to their own credibility. To use the web most effectively is to use it as a mechanism for reputational calibration, for lateral reading, where you go off the site and search for the name of the organization. You are finding the things about reputation that are harder to game.

If they have a lawsuit against them, if there are major exposés about how they are a lobby group for agribusiness: an organization can control its website, but it's hard for it to control what other people say about it, and that is essentially the definition of reputation.

Justin Reich: In addition to search literacy and AI literacy, I think you could add computer literacy, computer programming literacy. We now have maybe 20 years of these proposals to add different tech literacies, and so I have a similar reaction to yours when listening to the folks proclaiming that AI literacy is the right policy response to the emergence of AI.

To me, the tech literacy approach does not work. If you wanted to find a thing that you were pretty sure did not work in education, it would be this kind of new-tech-literacy approach. And I can understand why people reach for it. I often remember the quip that when the French have a problem, they have a general strike, and when Americans have a problem, they create a new course: we'll just teach everyone how to deal with it.

Even if the last three or four things, including, as you point out, an urgent prerequisite like search, haven't been taken care of yet.

Sam Wineburg: That's an incredible point. I think you should write a book about it.

Justin Reich: Good. No more books for me for a little while. Is there a previous example where we could say that schools that rushed into a technology really ended up better off than schools that took their time? Can you think of any evidence to suggest that we really need to do this now, because otherwise our kids won't be prepared and somebody else's kids will be, or something like that?

Sam Wineburg: I can't think of a single example. What I can think of, historically, is post-Sputnik: you have a spirit of innovation that is largely federally funded. It was a very different time in American history. Recognize that the post-Sputnik educational initiative was called the National Defense Education Act, and it was seen as: we are falling behind. The curriculum efforts (again, it began with curriculum), the new math curriculum, MACOS, Man: A Course of Study, in social studies, these were research projects.

They were not projects that said: let's put this into schools immediately, because if we don't, we're gonna fall behind. There was this vision of falling behind, but it began with a recognition that our existing means of schooling needed to be fundamentally rethought.

Now, it also led to some really stupid ideas, like the idea of a teacher-proof curriculum: that teachers are just not that smart, and we have to figure out a way to bypass them and get straight to the kid, with the teacher as the conduit of our expert thinking. Well, that one didn't go too well. Ultimately, curricula are not for kids.

Curricula are for the teachers. And if the teachers don't feel exuberant and ennobled by being the mediators and adapters of those curricula, they can be the best and most thought-out curricula in the world, but they're ultimately gonna gather dust on some shelf.

Justin Reich: Alright, here's how I'm gonna formulate one of your objections to this mandate and this idea: even if mandating curriculum about AI literacy were a good idea, if you are not also mandating all of the other supports that help teachers figure out how to make sense of that curriculum, how to know that domain, how to translate it in interesting ways to their students, then you have, as you said, waved your hands at a problem, but not necessarily done anything that will really solve it. I wonder if I can pose a few other objections that I've come up with to the idea of AI literacy and get your reaction to them.

Sam Wineburg: Shoot.

Justin Reich: My first objection is that we don't know what AI literacy is. Even two years into widely available technologies that produce text, we really don't know what are good ways to use them or not. We have some guesses; some people have tried some things that maybe worked. But especially within the disciplines, within history, within science, within math, we don't know what AI literacy is. We could guess, but we could very well be wrong. How do those objections strike you? Fair? Unfair?

Sam Wineburg: They strike me as fair. Now, I would introduce a couple distinctions regarding AI literacy. There's knowledge about what AI is. 

Justin Reich: Mm-hmm. 

Sam Wineburg: And then there's knowledge about what you do with AI. And I actually think that helping human beings, including teachers, adults, students, understand what it is that the model does, and on what basis, is pretty important, because people are using these things, right? And they think that the machine thinks.

And it's important for them to understand that these are probabilistic models; to use the term that Emily Bender has made famous, they're "stochastic parrots." When the parrot says "I want a cracker," it's not saying it wants a cracker. My dentist, Dr. Heidi Horowitz, had a parrot in her office, and when you walked in, the parrot said, "Brush and floss. Brush and floss." Now, that parrot really does not care about my dental hygiene, but it knows how to produce that particular set of sounds. And these probabilistic models are so exceedingly complex that we can't explain how these LLMs, these large language models, arrive at an answer. We've lost it. It's too complex. We cannot reverse-engineer an answer at this point from something like OpenAI's models, or Claude, or Gemini.

So I think it's really important to understand that these models have been programmed to mimic human speech and to respond in very human-like ways, with "I'm sorry" and "Good point" and "Yes, I understand what you're saying, and I would be happy to oblige." They've done all of these artificial ways of mimicking human speech, and in many ways the responses look incredibly sophisticated. It's impossible not to be blown away, and it's impossible not to think that the damn thing is thinking. And that's really dangerous.

When we forget that these are probabilistic models based on very complex calculations, calculations beyond even the understanding of the people who engineered them, there's a process, I think, that could very much lead to our own dehumanization.
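
[Editor's note: to make Sam's parrot concrete, here is a minimal toy sketch, in Python, of what a purely probabilistic text generator looks like. The probability table below is invented for illustration; a real LLM computes next-token probabilities with billions of learned parameters rather than a hand-written dictionary. But the generation loop is the same in spirit: weighted random choice, one token at a time, with no understanding anywhere.

import random

# Toy "language model": a lookup table from the current word to possible
# next words and their probabilities. Invented purely for illustration.
NEXT_WORD_PROBS = {
    "brush": [("and", 0.9), ("your", 0.1)],
    "and": [("floss", 0.8), ("brush", 0.2)],
    "your": [("teeth", 1.0)],
    "floss": [("and", 0.5), ("daily", 0.5)],
    "teeth": [("daily", 1.0)],
    "daily": [("and", 1.0)],
}

def sample_next(word):
    """Pick a next word at random, weighted by its probability."""
    candidates, weights = zip(*NEXT_WORD_PROBS[word])
    return random.choices(candidates, weights=weights, k=1)[0]

def parrot(start, length=6):
    """Generate a fluent-sounding sequence one probabilistic step at a time."""
    words = [start]
    for _ in range(length):
        words.append(sample_next(words[-1]))
    return " ".join(words)

print(parrot("brush"))  # e.g. "brush and floss and brush and floss"

Nothing in that loop knows anything about dentistry; it only reproduces statistically likely word sequences, which is exactly the sense in which fluency is not thought.]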

So I do think that knowing something about these models, even at a rudimentary level, is important. I think it's important for people to understand that the models have been trained on data sets that are not neutral. They were not data sets that came down from Mount Sinai purified.

They were scraped from the internet. And the internet is a very human place, with human foibles, with human prejudices, with human biases. And again, there's very good work here. There's a researcher at MIT [Joy Buolamwini] who did the work on facial recognition, showing that when models are trained on faces, they do a really poor job on Black and brown faces, because the training data was overwhelmingly white faces.

So again, when people understand that our biases are built into these models, I would hope they would be a little bit chastened and a little more cautious about imputing a kind of supernatural intelligence to what these things do.

Justin Reich: So you made a distinction: there's an AI literacy which is, how does this thing work? And then there's another AI literacy which is, what should we do with it? And it sounds like you have more enthusiasm for the first one than for the second.

Sam Wineburg: Well, first, as a preface to that, recognize that we're at the apex of a hype cycle. And who better than you, Justin, to tell us about hype cycles that reach an apogee, only for the dust to settle and for us to recognize: okay, there are some good uses, but a lot of this hype was just, you know, hype.

So let's recognize that we're really at a moment where there are a whole bunch of forces at work. I don't want to slip into some kind of night-school Marxism, a superficial analysis, but let's just recognize that there are financial interests fueling this whole craze. These are not nonprofit companies. These are companies with billions of dollars of venture capital behind them, and they are competing in an arms race over who's going to get the chunk of pie that education represents.

Listen, if we were in a society that was not motivated by the unfettered pursuit of profit, and we had these technologies at our doorstep, we would ask ourselves: what are the research studies we would need to undertake in order to come up with a sober and judicious plan for how they're useful? And, as Robert K. Merton taught us about the unanticipated consequences of purposive action, what are the kinds of things that we don't anticipate, the unknown unknowns, that could lead to, who knows, a kind of learning dysphoria, a kind of alienation?

You know, when you hook everybody up to a machine, to something that really isn't human, in a setting where there are 26 other human beings around, but you create essentially digital walls around each one of them, the social consequences, and ultimately the cognitive consequences, can be deleterious, and we ought to investigate that.

So what I would say at this point to the, let's call them the mandaters, those who are creating the mandates, is: folks, this is a powerful technology. It is going to invariably change our lives, and at some point, I believe, it will have some bona fide and very useful educational applications.

Let's undertake a program of research. Let's figure this out, without fools rushing in where angels fear to tread.

Justin Reich: And on that cheery note, Sam, thanks for joining us at Teach Lab. 

Sam Wineburg: My pleasure. It is my pleasure to be a part of this conversation.

Justin Reich: I am very grateful to Sam Wineburg from the Digital Inquiry Group for joining us. The Digital Inquiry Group site, which is full of research-backed, classroom-tested materials for learning about searching the web and teaching history, can be found at inquirygroup.org. A lot of folks have argued, certainly in good faith, over the last few years, that the only way to deal with AI in schools is to quickly make up a definition of AI literacy and pass policy mandates to have teachers teach that AI literacy in schools.

Sam offers some useful alternatives to that model from history. One example relates to the Sputnik moment, where we responded with urgency, but not urgency to immediately change schools and schooling; rather, urgency to launch programs of research to figure out effective ways to change schools and schooling. The launch of ChatGPT is certainly different from the launch of Sputnik, but that seems like a good question to keep asking: how much do we need to intervene in schools right now, versus how much do we need to urgently figure out the best things to do?

Sam's second reminder is from the web era, in which he argues: look, we didn't finish that one; how are we gonna start the next one? If you're a big champion of AI literacy, I think those are great questions to ask about the failures of the movement around web literacies. Why did we provide faulty guidance to students and teachers? Why did it take so long to develop better guidance? Why can't we get the faulty guidance out of schools now and replace it with the better guidance? And if we haven't finished the project of giving students good instruction about the web, which in many ways is the basis of generative AI, why should we start a new project of AI literacy before we've finished the last one?

Those I think are gonna be good questions for us to be wrestling with.

One conversational line that Sam and I started but didn't finish was about this distinction, Sam's idea that there are two ways of defining AI literacy: first, having some understanding of what generative AI is and what it isn't; and second, a set of knowledge about best practices for using AI.

Those two definitions are being used interchangeably by teachers, school leaders, and politicians. Next time, we're gonna dive into that distinction and hear from a teacher who thinks other teachers need some AI literacy, even if, especially if, they want to keep it completely out of their classrooms.

I am Justin Reich. This is The Homework Machine Special Edition, a production of the Teach Lab podcast. This episode was produced by Jesse Dukes and Steven Jackson. Special thanks to Paul Kim for recording Sam Wineburg's side of the conversation. The Homework Machine is a program of the Teaching Systems Lab, located at MIT.

Where it is currently snowing, which means that there will be salt on the roads, which means that when I get home, I'll have to wash my bike.