AI Literacy Part II "What We Talk About When We Talk About AI Literacy"

Teachers face a dilemma. They know we may not yet know the best way to prepare students for AI, but many feel we have to do something.
Teachers are aware that AI is present in schools and learning environments, whether we like it or not. And many feel pressure, internally and externally, to learn and teach some form of "AI literacy". Justin has cautioned that it's too early for us to really understand what AI literacy is, and that just guessing at what might constitute AI literacy might do harm.
Teachers and school leaders appreciate that warning, but many feel that we can't do nothing. It's essential for teachers to start gaining some knowledge about how AI works, to start experimenting with AI-powered practices, and to think about implementing them in our instruction.
In our research, the most eloquent proponent of the idea that teachers should gain, and perhaps teach, some kind of AI literacy is Maureen Russo Rodriguez. Maureen is a Spanish and English teacher at St. Mark’s School in Massachusetts. She is a cofounder (with Nate Green) of a network of educators called Co-Lab, which started in 2024 and now includes over nine hundred educators from over three hundred schools. We talk with her about her path to leading a process by which teachers design their own AI literacy professional development, and Justin and Maureen try to pin down where they agree and where they disagree.
This episode was produced by Jesse Dukes. You can learn more about Co-Lab at https://www.educolab.org/.
We got support for our interview with Maureen from RAISE at MIT: Responsible AI for Social Empowerment and Education. Thanks to Eric Klopfer and Cynthia Breazeal. RAISE also sponsors a series of professional development opportunities around AI for teachers, in a similar spirit to Co-Lab, called Day of AI. We had editorial help this week from Steven Jackson, Alexandra Salomon, Adam Brock, Sara Falls, and Steve Oulette.
Teach Lab is a production of the Teaching Systems Lab at MIT. Justin Reich, Director.
Jesse: Hello everybody, it's Jesse Dukes. Welcome to Teach Lab and our second episode on AI literacy. You could think of this as a homework machine bonus episode, a further exploration. And I have Justin here.
Justin: Happy Winter, Jesse.
Jesse: Happy Winter. Thanks for joining me. This episode is going to feature something of a debate… and it’s a debate sparked by a dilemma I have heard about from many teachers. The dilemma goes something like this:
AI is here, and as Justin and I have argued, it is present in schools and learning environments whether teachers or school leaders want it around or not. Students have access to ChatGPT, and all sorts of AI tools on their phones, on their computers.
So to do NOTHING about AI in this moment feels dangerous. It can mean that students will use AI in irresponsible ways that might harm their learning and make them vulnerable to the downsides of AI.
Justin: But, we would argue that it’s too early to say what good AI policy and practice definitively looks like.
Jesse: Nevertheless, there is a proposed solution on the table, which is that teachers should learn, and teach something called “AI literacy”. And many teachers and school leaders are feeling pressure, internal and external, to do so.
Justin: Last time we mentioned California’s 2024 bill mandating AI literacy instruction in the state, and Trump’s 2025 AI Action Plan encouraging AI literacy. And these hits keep coming. In January of 2026, the Brookings Institution published a policy paper titled A New Direction for Students in an AI World: Prosper, Prepare, Protect.
Interestingly, the executive summary for the document cites an article of mine, for which I’m grateful, about past education technology failures, where I caution that determining the best way to teach about AI could realistically take a decade or more. The executive summary actually cites this twice. But then the authors go on to offer one of their main recommendations: to “Promote holistic AI literacy for students, teachers, parents, and education leaders.”
Jesse: So they're citing your work, Justin, but not necessarily taking your advice. And, actually, I think that’s emblematic of the ambivalence many teachers feel in this moment. They want to be cautious and not do the wrong thing, but there’s this strong sense of “we should be doing something” when it comes to AI.
And one teacher who I spoke to pretty recently, and we’ve heard from a few times, is Maria, in Los Angeles. And she told me this in September of 2025:
Maria: At the end of last year, I had some former students who I hadn't seen in years come and visit me, and they're all at this point out of college in the workforce.
And they were talking about what it is that they were being asked to do in their work and the kinds of skills that they needed to do that work. And it was really interesting because it was very much: I have to be able to work with a team of people. I have to be able to communicate. I need to be able to lead a team of people if I'm in that position.
And we are expected to use ChatGPT, and AI to do our work more efficiently, period. And that landed really hard with me because we understand when these things come out, why there's this initial like, oh crap, we just let everything out of Pandora's box. Right? And we also know that once we let things out of a box, they're not going back in.
So, you know, I understand very much that reaction in terms of banning ChatGPT when it first comes out. But after talking to this group of students, and I really heard them, it just hit me that banning something like this is no different than trying to ban a calculator from class. It's unrealistic. And in the end, it's harmful to the future of our students. Whether we like it or not, they will be required to use AI in their future. Period. Hard stop.
Now, as educators, as it always is, when there is new technology, when there are changes in the world around us, whether it's political changes or social changes, technological changes, changes in what we learn, um, our job as educators is to pivot. It's always been to pivot. And so now it's on educators, as it always is, to begin to work with young people to establish what those guardrails might be, and to talk about, like: how do we use this technology effectively? How can it be misleading? What is an ineffective use of this kind of technology? Because if we don't do that, then who is?
Jesse: Justin, I’m curious what your thoughts are.
Justin: I mean, Maria is such a rich thinker as an educator that I have so many things to respond to. One, it could be our job, and we could not know how to do that job. You know, the thing that we talked about last time with Sam was that in the late '90s and early 2000s, it was our job to teach students how to deal with this new thing called the web, and how to evaluate websites.
And we didn't know how to do it. We made stuff up, and the stuff that we made up was wrong and ineffective. And we taught those ineffective strategies to millions of kids, and I was one of those teachers who did that. So we might have had the right intuitions. Indeed, schools probably ought to teach young people how to evaluate the web, but the fact that it was an important thing to do did not change the fact that we did not know how to do it correctly. And so that is a terrible dilemma that educators, myself included, will face for the foreseeable future.
A second thing is there is an intuition that the way that you get people to be good with AI is teaching them either something about AI or teaching them particular ways of using AI. That might be true. We might be able to prove those things.
I would encourage educators to remain open to another possibility, which is that it's actually not very hard to use AI. What is extremely difficult to learn is how to evaluate the output of AI, and there are not very good special tricks or things to know about AI that help you evaluate output. The main thing that helps you evaluate is domain knowledge.
Maria is probably really good at evaluating GPT output about math and about math education, because she knows a ton about that. Novices are not good at evaluating that kind of output. If you want people to be good math users of AI, they don't need to know that much about AI. They need to know a lot about math. And that would have really different sorts of pedagogical and curricular consequences than if it turned out there was a whole bunch of specialized stuff that was really important to teach about AI. One theory is that in order to use AI, there's a bunch of stuff you need to know about AI.
Justin: A second theory is that if you wanna be good at using AI, there's a whole bunch of stuff you have to know about the domain in which you're asking AI questions about.
Jesse: Yeah. Okay. Another thing that Maria suggested was that there may be elements of what her school is already focusing on pedagogically that could also be useful for students who are learning how to use AI effectively.
Maria: One of the things that I've been really reflective on is this idea of how do you ask the prompt, and how do you ask the question? And I think for a lot of our students, just even asking questions in general is actually quite a difficult task, when you start looking at questioning methods. Now I'm getting very, like, Socratic, right?
And in theory, even teachers aren't necessarily as skilled at questioning as they really should be as instructional leaders, and it's that much more difficult to then get students to be able to ask questions in the same way, or to ask them of each other in that way. And yet, I think to get the best product you can out of ChatGPT, or any of this AI, that becomes the essential skill there.
And it's actually quite a rigorous thinking process, being able to ask good questions, to the point where it's actually our focus of professional development this year at our school site, separate from this whole AI discussion, just in general. And so I think the reason I can use it more effectively is because I have always been someone who's very good at asking questions.
And I'm good at asking questions for my own knowledge. I'm good as a learner. I'm good at asking questions of my students. I very rarely state things outright to them or answer their questions. So I'm very good at responding to questions with questions, and I think that is really, if my students are going to use ChatGPT, that's what I want them to be focusing on.
Justin: I love it. When educators come together and pick things that they care about and work on getting better at those things, I think that's terrific. I would encourage us all, and I think Maria's doing this, but I would encourage us all to really recognize that Maria's claim is a hypothesis and not a best practice.
If Maria's saying, I know for sure that asking really good questions is the way to get the best outcomes from GPT or other AI. I would say, Ooh, we definitely don't know that yet. That's a good guess. That's a good hypothesis. I'm super glad that educators as smart and as talented as Maria and her colleagues are tackling that hypothesis and gathering evidence around it.
But we do not know that yet. And if five years from now, 10 years from now, people found out that actually how you ask questions is just really not that important for improving your skills with AI, that also wouldn't necessarily surprise me. Like, what we have now is a bunch of guesses. Another unfortunate thing about, uh, asking good questions is that it is also extremely dependent upon domain knowledge.
There's probably a little bit that you can learn, generally speaking, about asking good questions, like open-ended questions are better than closed-ended questions. People have studied a bunch of questions, and here are a few features that are common to good questions across disciplines.
But if you don't know anything about baseball, you're extremely unlikely to ask good questions about baseball. You start asking good questions about baseball when you know a ton about the rules of the sport and the history of the game and the players who are playing now and the culture around it and its history and all these kinds of things.
I mean, a common feature of our world is that things which are really useful across domains are usually only a little bit useful across domains. Design thinking is a great way of tackling problems. It's like a great way of solving the first 1% of lots of different kinds of problems. Solving most of the other 99% of those problems requires a ton of domain knowledge.
Question asking is like that, you know. Learning about new topics is like that. Um, it is an unfortunate thing about the way the brain works that getting really good at things usually requires knowing a lot about those things.
Jesse: Well Justin, one observation I wanted to share based on my interviews is that when we talk about AI literacy, I think people are using the term in different ways. I’ve heard from a lot of teachers, school leaders, tech entrepreneurs, throwing that term around. And I’ve identified at least three ways in which people use it.
One is the idea that teachers need AI literacy, and by that, that they need to know something about generative AI in order to effectively adapt to its arrival in schools, and education environments.
TWO… is the idea that teachers ought to communicate SOMETHING to their students about appropriate and responsible use of AI. Along the lines of: Here is how you can and maybe should use AI in support of your learning and classwork, and here is how you should not.
AND three: is the idea that we should be teaching students how to effectively use AI because that's a skill they're going to need. So that includes things like prompt engineering, and how to avoid hallucinations, and the strengths and weaknesses of different AI models.
And my observation, in the interviews that I've done and in the reading I've done, is that all three of these ideas get bundled up into this phrase, AI literacy. And when somebody uses the phrase AI literacy, they might mean all three. They might mean just one of those things. They might mean two, but not the other.
So one goal of this episode is to try to unpack those three ideas a little. And another goal is to tackle this idea of whether there are useful versions of AI literacy for teachers to be learning about.
And, I would say that your observation that we should be humble, careful, skeptical, and not assume we know too much about how to teach AI literacy lands very well with most of the teachers we talked to.
But I think some people worry that you’re advocating for a “business as usual” approach that doesn’t adapt effectively to the moment. And I think the teacher who makes the best and most eloquent case for teachers and students getting some “AI literacy” is Maureen Russo Rodriguez.
You and I spoke with her in December.
Maureen Russo Rodriguez is a Spanish and English teacher at St. Mark’s School in Massachusetts. She is a cofounder of a network of educators called Co-Lab, which started in 2024 and now includes over 900 educators from over 300 schools.
And the first part of this interview mostly featured me asking Maureen questions about how she came to be an advocate for AI literacy and the work that she and her colleagues put into Co-Lab.
And then over time, the conversation morphs into a debate where you and Maureen explore where you agree and disagree. I think it’s really instructive. So let’s get into our conversation with Maureen. She started out by telling us that ChatGPT launched on her birthday, and she didn’t notice it right away, but pretty soon…
Maureen: I didn't know about it on my birthday, but within a few weeks there was an article published in the Atlantic, uh, that caught my attention in a big way, called The College Essay Is Dead. And I read that article thoughtfully, and I kind of paused everything I was doing and I thought to myself: Expletive, this is gonna change everything, and why are we not talking about it? So I talked to a couple of colleagues just in passing at St. Mark's, where I teach, in Southborough, Massachusetts. And some people were like, I don't even know. I can't think about that right now. And one colleague was very much interested in discussing it, expletives and all.
I decided to send an email to all of the humanities departments at my school. So I felt kind of nervous pressing send on an email to many departments. But I said, essentially: this seems equal parts terrifying and important, and we should be talking about it. Uh, we should be talking about it together. This has implications for us, and the humanities especially, but also it's a call for us to collaborate with STEM. What does everybody think? Let's talk together. And to my great horror, the people who responded to that email responded to me individually. Even the ones who were like, yes, it's so important for us to talk together about this.
Jesse: You were hoping they would talk to each other? Like a group reply.
Maureen: Yes. Especially if they responded to say it was critical and so important, and it had been on their minds. And my one colleague who had really, uh, been adamant about the need to move things forward and talk about it, his name is Ron Spalletta. He is an English teacher here at St. Mark's. So, um, Ron and I decided that we would go to our head of school. We would say: we really care about this, we wanna be close to it, we wanna help other people learn about it. Is there a way to get support to explore it further?
We need time. And our head of school at the time, John Warren, was like: yes, uh, we have a generous donor who can support a course release for both of you. And, uh, go off, learn things. Use funding to explore tools for a year and, uh, come back and teach us some stuff. So we were really excited at that point in time.
Jesse: But I wanna go back to the expletive moment. What were the emotions undergirding that particular expletive?
Maureen: Um, that's a great question. It comes from my experience as a language teacher. I teach Spanish and English at St. Mark's, but as a language teacher who started teaching language in 2006, I had seen generative technology in the form of Google Translate go from zero to hero over the course of five or six years. So my expletive response was like, oh, the same thing is going to happen, and it's going to happen so much faster.
And I would like to skip past the part of this where I doubt it, you know, or I laugh it off. In 2007, Google Translate was so laughable. I told Andrew about a paper that I got from a Georgetown student who wrote about a woman she admired, and it was Arroz de Condoleezza, which is Condoleezza Rice, you know, as opposed to a different kind of rice. And you can't talk your way out of having used Google Translate when that happens. Right?
Justin: We've, we've all enjoyed a dish, an arroz de Condoleezza, in the nineties.
Maureen: Oh my gosh, I got other things. Also, we had this one, Mermelada de papel, which is a paper jam, you know, but really a marmalade made of paper.
Justin & Jesse: (laughing)
Maureen: Justin likes that one. So at the time, my colleagues and I, and I was in my early twenties, we loved this. We could have filled a coffee table book with all of these bloopers, and we felt so safe.
Jesse: And you weren't thinking, this is gonna change the work?
Maureen: No, we were thinking, this is laughable. We will always be able to tell when students are using this. It's not a thing we need to have on our radar or be concerned about. And then over the course of the next five to seven years, Google Translate got really good. It got so good that it was just a thing we had to acknowledge was out there and we had to adapt our curricula around it.
I think my moment in late 2022 when I read that article in the Atlantic, I was like, oh no. This is gonna happen so much faster, and everyone else around me is gonna wanna laugh it off and put it in a coffee table book at first. And I, we need to skip past that part, um, because it's gonna be dangerous for learning at a whole new level.
Jesse: Now, three years in, we're almost exactly three years after the arrival of ChatGPT.
Maureen: I know.
Jesse: And you've had the chance to experiment. You've had the chance to research. Can you give me an example of ways you either permit or encourage your current students to use AI in your practice, and also an example of something that is not allowed for your students?
Maureen: Sure. The areas of course participation and assignments where students are allowed to use AI tools in my courses are currently pretty small. Um, it changes based on what I'm teaching. I'll give you an example. For a lower-level Spanish class, for example, my Spanish three class that I'm teaching right now, my students were not allowed to use AI in any way until I asked them to, a few times in class.
Um, once recently I asked them to, for a project: they did a unit assessment where they gave a five-minute speech on the environment and, uh, environmental issues in a specific country in the Spanish-speaking world. This was the first year that I suggested to my course team: hey, we're asking them to go out and get authentic sources in Spanish, and a lot of the best sources are coming from, you know, the environmental agency page of La República de Panamá. So why don't we write them an awesome prompt that helps them text-level the source to their level, and add a vocabulary glossary to it, so that they can access it at a level they can read. And then let's ask them to add a link in their bibliography…
Jesse: Text level in Spanish or text level in English?
Maureen: Text level in Spanish.
Jesse: Okay, Right. So yeah.
Maureen: Basically, let's be honest: they could understand it as high school juniors if they just translated it to English, right?
Jesse: Yeah, yeah.
Maureen: I wrote a prompt, just a very detailed prompt they could have used with any LLM, that asked it to put the text at their specific level, get it to this amount of words, uh, give me a glossary at the end, and do some vocabulary IDs in the text that will help me.
But we asked them to submit a bibliography that also had a link to the chat history where they did that. That's one way I've asked my students to use AI. Uh, another way, in an advanced-level class: I taught a unit on the human imagination. We did an experiment where we were talking about aliens, and the students all had to draw an alien, and then we looked at the aliens. This is an advanced-level class in Spanish literature.
So, uh, this is obviously all in Spanish. We were discussing why all of our aliens looked kind of the same, and where, in the literature and texts we've been exposed to, that came from, and why it was so anthropomorphic as well. And then we did an experiment where they used, uh, an AI image generator, and they tried to prompt the AI to make an alien.
But the goal at this point was: try to get it to make an alien that's not a stereotypical alien. And, I don't know, was it them? Was it the AI? It was an epic failure. Um, I only had one student who managed to make an alien that, you know, didn't have really big eyes and a head and, you know, arms and maybe teeth or something that looked kind of scary.
Jesse: That first case you gave me, of having your students use AI to change the reading level of a Spanish text and then still have to read it in Spanish. You already told us that you were hesitant to let your students use AI very often. Why did that feel like a good exception? Why did you feel okay with that?
Maureen: Oh, well, obviously because if we didn't do that, then what was stopping a lot of them from just taking a text that was obviously not at their level at all and just translating it to English, or doing all of the research in English, when that's so easy to do? I mean, the motivation is: let's make sure they're doing more research in Spanish, but then also let's give them a tool that they can use, because I don't think any of them had ever seen a prompt this in-depth to do a thing like this before. Let's model for them how AI can be used in a way that is taking a certain shortcut, but not all of the shortcuts.
Jesse: What do you think, based on your experiences with CoLab and talking to your colleagues, that teachers need to know about AI in this moment?
Maureen: Watching teachers in my work with other schools and in my discussions with colleagues, some teachers out there still don't know that AI is an arrival technology. Like, I had a conversation with a teacher the other day who was like, we have to keep the fox out of the hen house. And I'm like, oh my friend, I mean, the hen house is now in a fox farm.
I think the most important thing teachers need to know is that they need to be experimenting in a hands-on way with the tools, and that they shouldn't accept tech-proposed solutions as givens. And this is not a thing that they need to know; it's more like a perspective or a mindset that they need to have. And it's one in which, if they want agency, and all the teachers I know want agency in how AI enters or does not enter their classroom or their teaching practices, they have to do hands-on work with it themselves.
Jesse: And what do you mean when you use the term AI literacy? What's your working definition of that, so to speak?
Maureen: Very broadly, I would say that AI literacy, the kind that I want to build for teachers, uh, and the kind that I hope students can also have, is an understanding of what these tools are and what they aren't.
For me, the important kind of AI literacy is not understanding the intricacies of what's under the hood. For me, it's understanding things like, you know, uh, what does it mean to say it's sycophantic? It's understanding things like: you need to iterate, and you need to develop iteration as a skill if you want to have a productive exchange with an AI chatbot.
Maybe the catchall way to say it is this: understanding all of the things we need to understand about AI in order to preserve our humanity in a future where it's hard to predict the power AI will have, uh, in society, in our lives, and in education.
Jesse: Alright, Jesse dipping into the interview here.
We asked Maureen to tell us about starting Co-Lab, and to save time, I’ll sum it up. You remember, in 2023, her school gave her and her colleague Ron Spalletta a class release: they each had one fewer class to teach.
They used that time to learn more about AI tools, attend conferences, and organize an interschool symposium. They met a lot of other educators, especially independent school educators, who were also trying to learn about AI. And a lot of those people felt that there just wasn't good professional development around AI. So when Maureen's grant ended in early 2024, she teamed up with another educator passionate about this topic, Nate Green, from Sidwell Friends School. Together they founded Co-Lab in an attempt to offer teachers what they were not getting at their own schools: opportunities for ongoing experimentation and dialogue with peers.
Every month, they pick a topic, and somebody comes up with an AI exploration: this is some kind of way to use AI, in the context of education, sometimes with students, sometimes without students, and they explore: Try it out. And then they meet on Zoom, and discuss it in breakout rooms.
Maureen: This month we're doing one called AI Out Loud, which, I'm very proud of the title that I gave it. Um, we didn't like Oral AI.
Justin: Yes, AI out loud. Much better. Much better,
Maureen: Yes. Thank you.
So it's all about trying to test out the capacities of voice mode. We did an intro call earlier this week where the designer said: here is how I want you to try to use these prompts. Uh, go forth, do these things. Let's come back together at the end of the month and discuss our findings. And then you, as the participant, would go take the exploration document and the information that you got in that short call and experiment. Talk to people about it if you want to. We always encourage people to have an accountability buddy at their own school if they can.
And then at the end of the month, you would come together and join another call on Zoom in the evening, and you would go through two breakout rooms where you got to share how your exploration went.
And then dig into some larger questions that are focused on teaching and learning. We have people that we call breakout room rangers who are awesome leaders who keep coming back for explorations, who ranger the breakout rooms and make sure that we don't get off topic so that we can talk about the questions specifically, how does this use case of AI help or hinder student learning?
And we'd like to encourage everyone to, um, be skeptical, um, and not just hype the use case.
Justin: And I wanna emphasize that the price of admission to that last session is you have to do the prep work, right? Oh yeah. You are not allowed to come and freeload on the interesting conversations; you have to put in the individual 45 minutes to an hour.
That's about what you're expecting of people, right? That they have to spend about an hour, um, doing the pre-work, interacting with the prompts that you've prepared.
Maureen: Yeah, and I mean, if people show up and they've only done 20 minutes of work, it's not like we're booting them out.
Justin: The Rangers are not gonna arrest you for that, but…
Maureen: No, but, but I have rangered a session where, you know, someone didn't do the work at all, and I just had to ask them, you know, please listen, but like, the way we roll is this. Yeah.
Jesse: And Justin, you know, I sat in on one of these sessions.
Maureen: Jesse did the work, he did his homework.
Jesse: I did my homework. Justin was busy. Um, and I can say it was about assessments, and we did a bunch of experiments with assessments. I used ChatGPT. I sometimes teach podcasting at the college level, so I did it in that context. It was an interesting exercise. In the breakout rooms, the discussions were great. I sensed there were a lot of teachers who were hungry to talk about this. And the other thing I will say is that lots of teachers in those breakout rooms said: based on this experiment, I will not be using ChatGPT to do an assessment on my students. There was as much skepticism around the tools as there was interest and enthusiasm.
Justin: And that sounds like that's a success for you, Maureen.
Maureen: Yeah.
Justin: Is what Jesse just described a win?
Maureen: I mean, it's a win. As long as we know that our vibe is one where people are honestly skeptical when they want to be skeptical.
Justin: And honestly enthusiastic. When they wanna be.
Maureen: Exactly. I mean, is it a failed exploration when we don't have as many skeptics in the room? Not necessarily. I mean, some use cases of AI that we've tried out have been like real success stories, and sometimes it crosses disciplines that way, where like all of the English teachers hate it.
Jesse: I was in a group of English teachers too, and that might have something to do with it, you know. I mean, the relationship between thinking and writing, I think, is very, very strong with English teachers. But Justin, you are on record saying we shouldn't be teaching AI literacy because we don't know what it is.
Do you then object to what Maureen has been doing with her colleagues with CoLab and also her colleagues learning about ai, encouraging experimentation, uh, kind of filling in where there's no PD available?
Justin: No. Goodness, no. I think what Maureen and her colleagues are doing is outstanding. I think it is exactly the kind of teacher-led, practitioner-led leadership we should want. Practitioner just meaning it also could include librarians and coaches and custodians, and everybody else.
Maureen: We have so many librarians.
Justin: Yeah, well, librarians are always the best. I mean, getting people together to explore the capacities of new technologies is absolutely what people should be doing.
I am very strongly on record saying that people who position themselves as experts, or policy makers who are not in the classroom, should not be publishing lists of what AI literacy is and saying that they are confident that list is correct.
We should not be publishing documents in the year 2025 or 2026 saying, we are certain that AI literacy is this, and this is what you should be teaching your teachers and students. Doing that is irresponsible, because in previous generations of technology adoption we have published similar kinds of lists for other technologies early in their life cycles, and they've been wrong.
They have been things that, if you teach them to students, make students worse at using that technology.
Jesse: Okay, Jesse breaking into the interview here to say that Maureen and Justin had a bit of a nerding-out session around an education researcher named Sarah Schneider Kavanagh, who's at Penn, who has written a framework for how experienced teachers can adapt to the challenges raised by AI.
As Maureen understands it, generative AI raises new pedagogical challenges for teachers, and research hasn't caught up. We don't know the best way to respond to those challenges. So teachers do their best to adapt, bringing their knowledge, their experience, their expertise around how students learn. Those adaptations then raise new pedagogical dilemmas, which the teachers again have to bring their knowledge and expertise to bear on.
So it's a cycle, and Maureen feels very strongly that the more she knows about generative AI, the more effective her responses will be to those challenges.
Maureen: But it helped me to hear that my job hasn't really changed in what it's fundamentally about. I'm still making decisions based on my expertise in moments of uncertainty, and those decisions will lead me to other decisions I need to make. Only now, all of the things that inform my expertise are so much greater. If I really wanna make good decisions, I need more understanding of LLMs. I cannot exist as a teacher and continue going around in this loop of pedagogical dilemma, informed decision, pedagogical dilemma, if I don't understand and continually educate myself about what these tools can do. Even if fundamentally my goal is to keep my classroom exactly the same, it's like I'm in a hen house that's on a fox farm. AI is out there as an arrival technology. If I wanna stay the same, I have to change. And this is where I have so many questions for you, Justin, because…
Justin: Yeah, fire away.
Maureen: Yeah, there's so much that we agree on, so fundamentally. A lot of the things that you have put forth in your research, things I so respect, are exactly the things that we're doing, and it has been hard to do them. It's like swimming upstream against the current. No one is advocating for us.
We are doing this in our free time, which really doesn't exist; you've worked at an independent boarding school, there is no free time. So my ask of you is, how can we solve that? What can we do, especially to convince leaders of schools that faculty need time, support, and solidarity in order to be close to this technology and do the work that people in CoLab are doing every month? What needs to happen?
Justin: Those are terrific questions. Wherever teachers get together to study the things they think are important to them, man, am I a hundred percent in favor of that, and I celebrate it.
There are a wide range of schools in this country, and at some of them, grappling with AI is just not the most important thing they could be doing with their time, because other crises are so immediate and so urgent. If teachers at those schools organized and said, we cannot focus on this because chronic absenteeism is just so much more important, I would also wanna support those teachers, the ones in places who would say, now is not the time.
Maureen: Let's talk about the schools who can do this. Administrators who have every opportunity and support, who can easily say, this is going to be a priority. Or even say, you know what?
We're gonna do five hours on this this year, which would be a heck of a lot better than what a lot of schools are doing right now. They are choosing not to, because no one is telling them that it is important. The business-as-usual approach, Justin, that the people I work with in CoLab are living with, is administrators saying, this AI thing is a thing.
It seems really important, so let's just do everything we've already been doing, and then on top of it, we'll do a little bit more PD. And that can't be the way forward. It just can't be, because this is a real paradigm shift in our industry, and as you have said, teachers need to be scientists. It's not acceptable to me, as a teacher who's in this, to hear someone say, we just gotta wait for higher ed to do the peer-reviewed studies, and then in 10 years we're gonna have the answers. I know you're right. I know you're right that so much of what we're gonna conclude now is just gonna be wrong later. But we can't wait.
We can't just wait.
Justin: Hmm. Yeah. Well, I do think you've captured several components of the dilemma. One is that when schools don't operate collectively, teachers feel like they have to take things on individually, and that feels terrible. So there has to be some kind of collective response.
And I empathize with the school leaders who are trying to organize that collective response. We could say, well, we have to do professional development, we have to come up with policies, but we actually don't know what those professional development and policies should be. And historically, when we've made early guesses about what they should be, we've let bad ideas infiltrate the educational system.
And we actually can't even get rid of them, and then we teach kids incorrect views. I think school communities will collectively come up with different stances, and different paces. And I think the technology companies, you know, OpenAI, Google, Microsoft, Anthropic, would like most schools to adopt paces which are really, really fast. They would like…
Maureen: I'm sorry, you're talking about paces of AI integration? I'm just talking about the pace of letting teachers learn what this is. Let teachers have some literacy. That's what I'm talking about. That's what I'm asking you about.
Justin: Yeah, I think they would advocate for both of those things. And I think communities should come together and have conversations that include teachers, students, administrators, and other stakeholders, and get a sense of, in our community, what do we think the right pace is for integration?
Maureen: Wait, wait. Sorry, sorry, I'm gonna interrupt you again. How are we coming to the table to have that conversation if a lot of people at the table have never done any hands-on exploratory work with AI?
Justin: Yeah, well, historically, people who are skeptical of new technologies, who refuse to participate in early-stage experimentation, can be enormously important voices in thinking about the future of technologies and how they're integrated, or not, in schools. That's something I feel like I've learned in my career. When I started my work integrating technology, I sort of thought of those folks as slow, recalcitrant naysayers.
And increasingly I've come to realize they are advocates for small-c conservative values that are enormously important. If there are teachers who say, I really wanna spend some time investigating this, then in a world of infinite resources I would want them to be able to do that. And in that same world of infinite resources, the teachers who say, I am making a principled choice to preserve my practice and to focus my limited time on protecting conservative values in schools, those folks also have a very important role to play.
Jesse: Okay, Jesse breaking in here one last time. We went back and forth with Maureen for a bit after this, but then we had to wrap up pretty quickly. So, Justin, I just wanted to give you a chance: upon reflection, how would you summarize where you and Maureen are in agreement, and where you are in disagreement at this point?
Justin: So I celebrate the idea of educators playing around and exploring Generative AI tools. That's terrific. I think schools really do have to do something.
I'm loath to say it, but I was remembering, Jesse, that in the fall of 2022, one of the main things I was working on was this project, which partially got published here on Teach Lab, called Subtraction in Action, where I was trying to give teachers a bunch of guidance about how to make schools simpler and to do fewer things.
But I think generative AI has so much potential to be disruptive, and maybe, because it also has potential to do good things, it's the thing schools have to do something about. A great way to approach it is to give teachers opportunities to explore and play. And there are gonna be lots of schools this year that decide that that is not their priority.
That something else is more compelling. I very much respect that decision; in some ways, that's sort of the heart of Subtraction in Action. There are too many good things for teachers to accomplish this year. If there are some elementary schools that are making great progress on implementing the science of reading to improve reading and literacy in their schools, don't stop doing that until it's done. Keep focusing on that. AI can wait; other things can wait if that's a priority, and if you're not seeing harms in your school because of the arrival of AI.
A second claim in there is that all teachers need to be exposed to these tools, even to decide to reject them. I think the spirit behind that is generally quite sensible. I have, however, in working with schools over the last couple of decades, had many moments in communities where some educators refuse to participate in a new technology trend. They just say no. I don't think this is good for my students or the school. In particular, I think what I do without this thing is better than anything that could be done with this thing, and I don't need to explore it that much to come to that conclusion. And I have learned over the years to really value those perspectives.
Sometimes those are just recalcitrant refuseniks, but almost always they're people who recognize the value of historical practice, the value of expertise, and want to preserve that. And I'm quite inclined to listen to those folks. But the disagreement that I'm talking about with Maureen is not really that much of a disagreement about principles.
It's sort of a disagreement about degree, about some edge cases, about whether it has to be universal or not. I can come up with a bunch of cases why it doesn't have to be universal. But that doesn't take away from a really important point, which is that there are communities of educators who would like to learn more through playing around with generative AI.
We should support them in doing that. And it would probably be a good idea, amongst many, many good ideas for things that could improve teaching and learning, to give teachers the opportunity to do that.
Jesse: Justin, I do have this other example. We've chatted about it a few times, and I think it argues in favor of Maureen's point. I wanted to play this excerpt from a teacher named Tony in Los Angeles, talking about an experience he had with a student related to Google's AI Mode, you know, an AI-powered search results summary, and an intervention he felt he had to do with that student. Tony's a middle school social studies teacher.
Um, so let's just hear from him.
Tony: So the kids create a menu for a restaurant that would exist in ancient Mesopotamia, and their research is what fruits and vegetables existed, and drinks, and how can we use that to create a menu. But they'll just be like, did corn exist? Did wheat exist? And I remember, a student was like, look, there was corn. And I was like, no, there's not. There was not corn in ancient Mesopotamia, right?
Jesse: That comes from Guatemala, man.
Tony: Yeah. He's like, no, look, it says. And I'm like, okay, click it, click the link. It was this 70-page PDF that talked about where food came from in different civilizations.
Then I was like, okay, let's do a control-find for corn. Then they're like, look, corn. And I'm like, but then read it. What does it say? And it was like, oh, corn did exist in ancient civilizations, but then it lists that it came from Guatemala, right? That it came from Central America. Yeah. And he's like, but it said on the other page.
And I was like, because it's pulling information from this PDF.
Jesse: Wow.
Tony: I'm like, it's trying to answer your question. And he was like, so confused. He's like, but it told me. And even now, when our student teachers were doing the unit on teen activism, my rule is: you can click the source it came from.
We can tell you whether it's a reliable source or not, but they're not allowed to just take that summary and write it down as fact. We're not getting into that habit.
Jesse: And I think that Maureen would say that this was a brief moment of AI literacy instruction between the teacher and the student. The teacher was basically explaining: you just used AI, and you used it in a way where that summary is not actually a source. What you need to understand is that Google is using AI. We don't quite know how it works, but it's not reliable; it's making stuff up. It could be true, but in this case, it's not true.
And I think, Justin, you would say that that's not necessarily AI literacy. So, two-part question for you. One: is the teacher wrong in explaining to the student why that doesn't count as a source? And two: what would you call that, if it's not AI literacy?
Justin: Let's see. Well, of course the teacher's not wrong to instruct students about how to make sense of the world, and it sounds like they're providing good instruction in that moment. I think it can be useful to think about those kinds of practices as disciplinary literacy. Citation is a particularly good example, because citation in a sense is crafted by disciplines. If you cite something, you're citing it with a system: the APA system, the Chicago system, the MLA system. That system is generated literally by disciplinary bodies, by groups of scholars that self-identify as part of the same discipline.
My hunch, my hypothesis, is that we'll find over the years that the most valuable knowledge about applying and using AI is centered in disciplines and in professional practices, rather than being domain-independent knowledge. So to the extent that AI literacy gets presented, which I think it often is, as a set of domain-independent things that would work in science and math, in pharmacy, in anatomy, in multivariable calculus, I think we will find that the things people need to know that work really well across domains are actually pretty small, smaller than we predict.
At least, with pretty much every generation of technology, we assume that you need to know a lot about the technology, and then when people use it, we realize, oh, actually we don't understand at all what's going on here, but we still manage to drive our cars and use our computers and things like that.
Jesse: And as a journalist and a historian, I share your intuition about this. I think you're basically right. But I also think I see Maureen's point. It certainly helps to know something about AI in order to explain why AI is not a good source, and a good term for that something about AI might be AI literacy.
Like, to me, it seems natural to call the knowledge the teacher brings to bear in that moment, explaining why Google's AI Mode is not the same as what we would normally consider a reliable source, something you might find in the Reader's Guide or JSTOR, a form of AI literacy.
Justin: Well, I'm enthusiastic about thinking of that as a disciplinary literacy. But I think you're right that part of Tony's intervention depends upon him understanding something about how generative AI systems work and generate text. If you go back to a bunch of my early talks, some of the very first talks that I did with the Massachusetts and the Connecticut school boards, one of the first things I do is say, let me give you an explanation of how these things work, right?
I mean, the second episode of The Homework Machine: how do these things work? I have been challenged recently to think we might be overestimating how much people need to know. Mike Caulfield is one of the people who pushed my thinking on that. He's the developer of the SIFT method, another one of the folks who really helped figure out web literacy.
Jesse: He's worked with Sam Wineburg too, right?
Justin: He worked a lot with, he co-wrote a book called Verified with Sam Wineburg. He's been on Teach Lab before. And in this workshop on using AI for information literacy, he said the only thing you need to know about generative AI systems is that they're non-deterministic.
The only thing you need to know is that if you ask 'em the same question twice, you can get different answers. You don't have to know why that is; you don't have to know the underlying pieces of it. That's the only thing you need to know. And as I was thinking about it, historically, we often overestimate how much you need to know about a technology to use it.
No one has any idea what's going on in their car; they drive all over the place. One of the challenges of education research in the years ahead is gonna be trying to estimate, for different kinds of tasks and for the general wellbeing of citizens, how much do you need to know about this thing? And assuredly the answer is not zero.
And I have a guess, a hypothesis, that we may be overestimating how much the typical person needs to know in order to go about doing useful tasks.
Jesse: Just for the record, when you publish an opinion piece that says we shouldn't be teaching AI literacy, does that mean school leaders should not be supporting teacher-led efforts to experiment with AI in education, like CoLab?
Justin: No. No, absolutely not. Should we go into college majors and tell people entering those majors, we are going to teach you, by the end of this major, how to effectively use AI in this discipline? We should not do that, because we don't know how to do that. Should we encourage faculty members to explore generative AI tools, other kinds of tools, new technology, especially the ones who are enthusiastic about it? Absolutely.
And one of the very best possible ways of doing that is by having communities of educators self-organize to explore and teach each other, and that is exactly what CoLab is doing so wonderfully.
Jesse: Justin Reich, thank you for joining me on your podcast Teach Lab today to talk about AI literacy.
Justin: It is a pleasure having you have me on my podcast.
Jesse: Alright everybody, thanks for listening. You know, we always post these episodes on Justin's LinkedIn, and as it turns out, that seems to be the place where some of the most organic comment threads develop around these topics, so we're not gonna fight it. You can follow Justin Reich and me, Jesse Dukes, on LinkedIn. That is a great place to respectfully share your thoughts about AI literacy, The Homework Machine, AI in schools, and the other topics we've raised.
If you want to learn more about CoLab, or join one of their exploration calls, you can go to www.educolab.org. I totally recommend it.
Maureen Russo Rodriguez was initially interviewed as part of our research back in 2025 by Andrew Parsons. We got support for our interview with her from RAISE at MIT: Responsible AI for Social Empowerment and Education. Thanks to Eric Klopfer and Cynthia Breazeal.
RAISE also sponsors a series of professional development opportunities around AI for teachers, in a similar spirit to CoLab, called Day of AI. You can look it up.
We had editorial help this week from Steven Jackson, Alexandra Salomon, Adam Brock, Sara Falls, and Steve Oulette.
Teach Lab is a production of the Teaching Systems Lab at MIT, Justin Reich, Director, located in Cambridge, Massachusetts, on the Charles River, where sits the MIT Sailing Pavilion, the birthplace of collegiate sailing and the most active college sailing venue in the nation, according to the Charles River Alliance of Boaters, aka CRAB.
What does that have to do with AI? Hopefully nothing.