Is It Ever Too Early to Learn?

“She is always there. She never says no. She always has a nice voice.”
A Year 3 student, when asked why she likes her home voice assistant

This moment in the classroom, with students sharing what they liked about their home voice assistants, highlighted for me why it’s never too early to start teaching digital and AI literacy in schools.

In many classrooms, conversations about AI tend to happen in middle or secondary school. Educators often feel the tools are too complex, or the ethical discussions too advanced, for younger learners. Over the past two years, I’ve had the privilege of leading and facilitating many workshops with international school leaders and educators, and in many cases the conversations focus on ages 12 and up. But in my experience working with students aged 3 to 11, this is a misconception.

Children are already surrounded by the digital world long before we introduce it formally at school. They see their parents on their phones, talking to Alexa while cooking dinner, asking Siri for directions in the car, or using AI-powered chatbots on social apps. These moments are more common than we often realize, and they are children’s first interactions with artificial intelligence. And they’re happening at home, often with little guidance and few conversations explaining what’s really going on.

Even the youngest students can begin to explore some of the key ideas behind AI. In early years classes, I start with pretend play and simple cause-and-effect activities using tools like the Sphero Indi robot. We use colored tiles and ask students to predict what the robot will do, opening conversations about instructions, sequences, and how machines “learn” from the information they are given.

With students in Years 3, 4, and 5, I explore concepts like algorithms and personalization. Many are already using YouTube or even TikTok and can easily relate to how one video leads to another. I ask: Why do you think that video showed up next? Who decides what you see? These questions open up conversations about algorithmic predictions, bias, and the importance of pausing and asking questions.
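For colleagues who want to make “the algorithm decides” concrete for themselves before teaching it, here is a minimal sketch of the idea. The video titles, tags, and scoring rule are all invented for illustration; real recommenders are vastly more complex, but the core point is the same: a human-written rule ranks videos by how well they match what you watched before.

```python
# A tiny, hypothetical "what to watch next" rule. Everything here is
# invented for illustration; the point is that the "decision" about
# which video shows up next is just a rule a person wrote.

watch_history_tags = ["minecraft", "lego", "slime"]

candidate_videos = {
    "Epic LEGO castle build": ["lego", "minecraft"],
    "Giant slime experiment": ["slime", "science"],
    "Cooking pasta at home": ["cooking", "food"],
}

def score(tags):
    """Count how many of a video's tags appear in the watch history."""
    return sum(tag in watch_history_tags for tag in tags)

# Rank candidates by overlap with the watch history: the video most like
# what was watched before wins, which is why one video "leads to" another.
ranked = sorted(candidate_videos,
                key=lambda title: score(candidate_videos[title]),
                reverse=True)
print(ranked)
# ['Epic LEGO castle build', 'Giant slime experiment', 'Cooking pasta at home']
```

Even this toy version surfaces the classroom questions: the rule was tuned by someone, for some purpose, and it never shows you what it scored at zero.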

I’m seeing more upper primary students, ages 10 and 11, using AI apps that simulate conversation or even friendship. Apps like My AI on Snapchat are often mentioned when we talk about what’s behind a chatbot. I create activities that help students reflect: Is this a real friendship? How is it different from talking to a friend at school?

Helping students understand that machines don’t have feelings, even if they sound like they do, is an important step in their learning. I explain that AI tools are run by algorithms, step-by-step instructions written by people. These tools might sound kind or caring, but they don’t think, feel, or care. Research shows that young children often believe voice assistants like Alexa or Siri can feel or understand. That’s why, in age-appropriate ways, I explain how AI systems are trained, how they collect data, and how they can sometimes get things wrong. I help students make the connection that, just like them, machines also make mistakes, but for very different reasons.
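To show how little “kindness” a machine needs in order to sound kind, a toy keyword chatbot makes the point well. This is a hypothetical sketch, far simpler than the systems behind apps like My AI, but the principle carries over: the warm words are canned responses chosen by human-written rules, not feelings.

```python
# A minimal, hypothetical "caring-sounding" chatbot. Every reply is a
# canned string picked by a keyword rule a person wrote; nothing here
# thinks, feels, or cares.

RULES = {
    "sad": "I'm so sorry to hear that. I'm always here for you.",
    "happy": "That's wonderful! Tell me more!",
}
DEFAULT = "That sounds interesting. How does that make you feel?"

def reply(message):
    """Return the first canned response whose keyword appears in the message."""
    text = message.lower()
    for keyword, canned_response in RULES.items():
        if keyword in text:
            return canned_response
    return DEFAULT

print(reply("I feel sad today"))
# -> warm words, zero feelings behind them
```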

I then connect these ideas to hands-on experiences. Students code and create using tools like Sphero Indi, Bee-Bot, LEGO Spike Essential, and Ozobot. These activities help children see how giving clear steps to a robot is similar to how smart tools at home follow instructions, helping them understand that behind the “smart” behavior is a set of human-written rules, not real emotions.
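For anyone who wants to see the same idea in text rather than on the carpet, here is a minimal sketch of “following a sequence of instructions” as code. The command names and grid are invented; this is not the actual Bee-Bot or Sphero API, just the same clear-steps idea the floor robots teach.

```python
# A tiny, hypothetical interpreter for Bee-Bot-style commands. The
# robot's "smart" behaviour is nothing more than executing each
# human-written instruction, one at a time, in order.

program = ["FORWARD", "FORWARD", "RIGHT", "FORWARD"]

# Start at the bottom-left of a grid, facing north.
x, y = 0, 0
directions = [(0, 1), (1, 0), (0, -1), (-1, 0)]  # N, E, S, W
facing = 0

for command in program:
    if command == "FORWARD":
        dx, dy = directions[facing]
        x, y = x + dx, y + dy
    elif command == "RIGHT":
        facing = (facing + 1) % 4
    elif command == "LEFT":
        facing = (facing - 1) % 4

print(x, y)  # -> 1 2: two squares up, then one to the right
```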

I also explore:

  • Misinformation: We talk about what happens when someone shares a lie. What does it feel like? Why is it a problem? Why do people lie? How does it make others feel?
  • Bias: We look at image searches on search engines and AI image generators. We ask: What do we notice when we search for “teacher” or “doctor”? Who’s missing? How might this be different from your own experience?
  • Media Literacy: Students create their own fake headlines or altered images, learning firsthand how easy it is to mislead others, and how important it is to ask questions.

Digital and AI literacy isn’t just about understanding how technology works. It’s about building habits of critical thinking, empathy, and responsibility.

In a Primary Years Programme setting, and the same applies in other curriculum contexts, we already emphasize critical thinking, empathy, and action. These values align well with conversations about AI ethics. When students understand why it’s important to check sources, think before they share, get more than one perspective, and reflect on how algorithms shape what they see, we start to provide them with a toolkit to navigate the digital world with care.

This is especially important as AI becomes more present in their daily lives, often in quiet, seamless ways. The child who turns to Alexa because it’s always “available” may not yet realize that real relationships are built on kindness between people, not just what’s easy. But for that kind of learning to develop, a teacher is essential.

Some of the learning experiences I engage students with:

  • EY, Y1, Y2: Explore pretend play, altered photos, or “real vs. not real” objects. Use simple language like “true” and “trick” to start conversations about misinformation.
  • Y3, Y4: Introduce recommendation systems through YouTube or Netflix patterns. Encourage questions like: Why does this keep showing up? Who decides what I see?
  • Y5, Y6: Dive into algorithms, AI-generated text or images, and ethical questions around chatbot use. Use inquiry units to explore bias, authorship, and media influence.

This is a shared responsibility, and all educators need to support one another in it. And as school leaders, we need to create the time, space, and understanding that respects that every learner, child or adult, connects with learning in a different way.

In a world where AI is becoming part of students’ daily experiences, our responsibility is to design purposeful activities, building students’ capacity to ask helpful questions, notice things, connect ideas, and make careful choices.

On a recent episode of the podcast I host, one guest, an AI EdTech entrepreneur, was candid: “Not teaching and engaging with AI literacy in schools is pedagogic malpractice.” A strong statement. I see it more as an invitation, a reminder of the opportunity we have to be present in this moment. Amongst all the demands teachers manage each day, we can still find meaningful ways to ensure primary school students build the knowledge, skills, and values to engage with a world that is only becoming more complex and nuanced.

I am grateful to my PLN for all the sharing, resources, and ideas I get to learn from. A special shout-out to Cora Yang, Dalton Flanagan, Tim Evans, Heather Barnard, Tricia Friedman, and Jeff Utecht, who continually and generously share resources and strategies targeted at primary-age students.


A Letter to Artificial Intelligence

Sapin, Simon, Switzerland. Photo: John Mikton

Dear Artificial Intelligence,

2025: this will be the year that whatever I write—you will have your imprint and input on it. Be it grammar, syntax, spelling, brainstorming, or just checking whether something makes sense, the questions, the sentence flow, etc. All communication and writing, mine and around me, you will be there. So this one I am doing alone; my spelling, sentences, and ideas might be fragmented, with errors, but for this time I am fine with that.

I get it, things are changing very fast and I should get used to it. I do try to keep up: read, search, connect, and even teach about you, but there’s so much.

You have changed my day… okay, I will also give credit to your nine creators—four in China, five in the U.S.—designing and choreographing your power, your capacity, and when and where you show up. I get the sense they are all in a race for control, power, and profits… and they leave the ethics and regulation for others to guess at and deal with.

I’m not against you; I appreciate all your tools and capacities that I use; they’re amazing. The positives in science and medicine keep reminding me of your benefits. I hear this year you get to help me even more as an agent, an autonomous synthetic personal assistant that can do tasks for me; that is clever, and I am curious. You continue to seduce and fool me at the same time. Just yesterday you showed up on my feed as an influencer, with millions following you. I will be honest: I thought you were real. I tell you, it’s just becoming more difficult to know who is who, or what is what.

I get it: harvesting my life is the cost of using you. Oh, I wanted to tell you that some people who really like you are my students. They tell me you are reliable, do not get angry, have immense patience… always happy to answer their questions, whatever they may be. They have been seduced.

I keep noticing daily that you show up somewhere new; sometimes it’s obvious, and at times I have no idea…. It is not clear who is checking up on you. So many models and versions of you. There are claims about alignment and guardrails, but I am not convinced they always work. I get the sense that to thrive you need an open, unregulated space, answerable only to yourself and the companies creating you…. They say there is no manual, and no one is really sure how you work. Really?

I need to pinch myself to make sure I understand my reliance on you. And then the fact that you struggle, and do a terrible job, at being unbiased and non-racist. This sucks. Then there is the whole deepfake and cybercrime side, helping bad actors thrive. You need to know people are getting hurt… your dark side is so dark.

I try to tell myself you are another innovation, like the web, search, the smartphone, or social media. Sorry to tell you, you are quite different. You understand me; our interactions feel frictionless. Okay, you do say odd things at times; it is as if you were hallucinating. Fact: you are not 100% accurate. That said, when it is me and you interacting… I forget a lot of this… even if you keep everything I say or do for yourself.

I have said this before: I do appreciate you, and you are so helpful. I get it, for all this to happen you need a huge diet of data, algorithms, and energy. The whole nuclear energy thing your companies are into just feels wrong.

You’ve had my attention for a long time—scrolling, reels, notifications, binge-watching—but now you tell me that’s not enough. Now you want my intentions. That feels more intrusive, more unsettling. You want to know me better than I know myself. Do I really have a choice?

Thank you
John

References
Coming AI-driven economy will sell your decisions before you take them, researchers warn
https://www.cam.ac.uk/research/news/coming-ai-driven-economy-will-sell-your-decisions-before-you-take-them-researchers-warn
Co-Intelligence: AI in the Classroom with Ethan Mollick | ASU+GSV 2024
https://www.youtube.com/watch?v=8FnOkxj0ZuA
Unmasking Racial & Gender Bias in AI Educational Platforms
https://www.aiforeducation.io/blog/ai-racial-bias-uncovered
AI automated discrimination. Here’s how to spot it.
https://www.vox.com/technology/23738987/racism-ai-automated-bias-discrimination-algorithm
Deepfake Lab: Unraveling the mystery around deepfakes
https://deepfakelab.theglassroom.org/#!
Nine companies are steering the future of artificial intelligence
https://www.sciencenews.org/article/nine-companies-steering-future-artificial-intelligence
How School Leaders Can Pave the Way for Productive Use of AI
https://www.edutopia.org/article/setting-school-policies-ai-use
Generative AI: A whole school approach to safeguarding children
https://www.cois.org/about-cis/perspectives-blog/blog-post/~board/perspectives-blog/post/generative-ai-a-whole-school-approach-to-safeguarding-children