Exploring Artificial Intelligence
Your kid uses AI every day, but do they know how it actually works? In How Machines Learn, students ages 11–17 find out. Over four Saturday sessions, they'll go from "AI is a mystery" to building and training their own machine learning model that can identify images of marine wildlife from Haystack Rock. Built on the research-backed curriculum from the Raspberry Pi Foundation and Google DeepMind. No coding experience needed, and each student takes home a Chromebook to keep exploring.
__
Details
Ages: 11–17
Ideal For: Kids curious about computer science and AI. No coding experience required.
Sessions: 4 Saturday sessions
Dates & Time: April 18th, April 25th, May 2nd, May 9th – 1:30pm to 3:00pm
Class Size: 10 students
What’s Included: All course materials, a trained AI model the student built themselves, and a Chromebook to keep exploring.
About the Instructor: Hadassah Davis lives in Cannon Beach and is a recent high school graduate with a passion for computer science. She has been coding since age 10, completed courses in Python and iOS development, and released an iOS app, Mental Math, on the Apple App Store. She plans to pursue an undergraduate degree in computer science and is committed to making technology education accessible in her community.
Curriculum: This course is built on the Experience AI curriculum, developed by the Raspberry Pi Foundation in collaboration with Google DeepMind. Experience AI is a research-backed program informed by the University of Cambridge’s Computing Education Research Centre.
__
What Students Will Walk Away With
They’ll understand the difference between regular software and AI. Most apps follow rules a programmer wrote. AI systems learn patterns from data. Students will be able to explain this distinction clearly and give examples of each.
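That distinction can be sketched in a few lines of Python. This is a toy illustration, not course material: the species, wingspan numbers, and thresholds below are invented for the example.

```python
# A rule-based program: the programmer writes the logic by hand.
def is_puffin_rule_based(wingspan_cm):
    # The cutoff of 63 cm was chosen by a human, not learned from data.
    return wingspan_cm < 63

# A (very) simple learning program: the cutoff comes from example data.
def train_threshold(puffin_wingspans, gull_wingspans):
    # Place the decision boundary midway between the two group averages.
    puffin_avg = sum(puffin_wingspans) / len(puffin_wingspans)
    gull_avg = sum(gull_wingspans) / len(gull_wingspans)
    return (puffin_avg + gull_avg) / 2

# Invented measurements, standing in for real training data.
threshold = train_threshold([53, 55, 60], [120, 135, 150])

def is_puffin_learned(wingspan_cm):
    return wingspan_cm < threshold
```

Both functions make the same kind of prediction, but only the second one changes when the data changes, and that is the whole idea of machine learning in miniature.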
They’ll know how machine learning actually works. Supervised learning, training data, classification: not as vocabulary words, but as things they’ve done with their own hands. They’ll know what a “model” is because they built one.
They’ll have trained and tested an image classifier. Using real ML tools, students will feed images into a system, train it to recognize different categories, and then test how accurate it is. They’ll iterate when it gets things wrong.
They’ll understand why AI gets things wrong. Biased data, datasets that are too small, and poor-quality images. Students will see these failure modes firsthand and understand that AI is only as good as what it’s trained on.
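One of those failure modes, biased training data, can be demonstrated with the same kind of toy wingspan classifier. All numbers here are invented for illustration: when only unusually large gulls make it into the training set, the learned boundary shifts and a small gull gets misclassified.

```python
def midpoint_threshold(puffin_wingspans, gull_wingspans):
    # Learn a decision boundary halfway between the group averages.
    puffin_avg = sum(puffin_wingspans) / len(puffin_wingspans)
    gull_avg = sum(gull_wingspans) / len(gull_wingspans)
    return (puffin_avg + gull_avg) / 2

# Fair sample: gull training data includes small, medium, and large gulls.
fair = midpoint_threshold([53, 55, 60], [100, 120, 150])
# Biased sample: only very large gulls happened to be photographed.
biased = midpoint_threshold([53, 55, 60], [150, 155])

small_gull = 100  # wingspan (cm) of a gull the model has never seen
print(small_gull < fair)    # False: correctly identified as a gull
print(small_gull < biased)  # True: wrongly called a puffin
```

The model trained on biased data is not "broken"; it learned exactly what it was shown, which is why students examine their datasets as carefully as their models.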
They’ll think critically about AI in the world around them. From recommendation algorithms to facial recognition to wildlife monitoring, they’ll have a framework for asking good questions about how AI is being used and whether it’s being used well.