AI Literacy, Part 1: "Where Angels Fear to Tread" with Sam Wineburg
Schools and teachers are being directed to teach and learn "AI literacy," but do we know enough to do that responsibly?
Over the last two years, teachers and schools have felt immense pressure to incorporate AI literacy into their curricula. In the fall of 2024, California became the first state to pass a law mandating AI literacy instruction in schools, and several others have since followed suit. In the summer of 2025, the Department of Education released the "AI Action Plan for Education," which stated in part: "The Action Plan encourages schools to teach AI literacy and supports the responsible integration of AI in classrooms. AI is seen as a key education tool to enhance individual student preparation for the real world and to bolster the United States as a leader in AI."
Most major AI companies have pledged significant funding to train teachers and educate students in AI literacy. Google alone has committed more than $40 million to these initiatives, while OpenAI, Microsoft, and NVIDIA have all launched similar programs.
But do we actually know what "AI literacy" means? Sam Wineburg doesn't think so. Sam is a professor emeritus of education and history at Stanford and the co-founder of the Digital Inquiry Group. He previously led a landmark study for the Stanford History Education Group (SHEG) that exposed how standard school methods for teaching web literacy were failing K-12 students.
In part one of this two-part miniseries, Wineburg shares his observations on how educators have gotten "literacy" wrong in the past. He suggests there are more responsible ways to adapt to transformative new technologies than hastily standing up literacy guidelines that risk repeating old mistakes.