How YOU learned what your grandma looks like
When you were very little, you didn't have a sticker on your grandma that said "Grandma." Instead, you SAW her many, many times. At home, at church, at funerals, on her visits. After seeing her face hundreds of times, your brain built a picture of her. Now even if she cut her hair, wore new clothes, or got a new pair of glasses — you would still know her.
Your brain is made of cells called neurons. Neurons send little sparks to each other. When you see grandma, a specific pattern of sparks fires in your brain. Your brain connects that pattern to the word "Grandma."
Computers have neurons too (well, pretend ones)
Scientists copied this idea. They built computer neurons — fake, mathematical ones — and connected them together into a big web. This web is called a neural network.
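What does one of those fake neurons actually do? It's just a little bit of maths: multiply each clue by a number called a "weight", add everything up, and fire if the total is big enough. Here is a toy sketch in Python, with made-up clues and made-up weights, just to show the idea:

```python
# A pretend (toy) neuron: multiply each input by a "weight",
# add it all up, and fire (output 1) if the total is big enough.

def fake_neuron(inputs, weights, threshold):
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0  # 1 = "spark!", 0 = quiet

# Two made-up clues, e.g. "grey hair?" and "wearing glasses?"
print(fake_neuron([1, 1], [0.6, 0.5], 1.0))  # both clues present -> 1
print(fake_neuron([0, 1], [0.6, 0.5], 1.0))  # only one clue -> 0
```

A real neural network is just thousands or millions of these little sums wired together, so the output of one neuron becomes the input of the next.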
When you show the neural network a picture of grandma, the sparks fly through the network in a certain pattern. The computer starts guessing. At first, its guesses are TERRIBLE. It might say "elephant!" when you show it grandma. That's fine. Because now you correct it: "No, this is grandma."
Each time you correct the computer, it adjusts its sparks a tiny bit. After doing this 1,000 times — with 1,000 grandma pictures AND 1,000 "not-grandma" pictures — the computer starts getting it right almost every time. It can now recognise grandma even in pictures it has never seen before.
This learning-from-correcting process is called training. When you USE a trained computer (like when you type into ChatGPT), that's called inference.
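You can see the whole guess-correct-nudge loop in a few lines of Python. This is a toy with pretend "pictures" (just short lists of made-up clue numbers), but the nudging trick is the real heart of training; real networks do the same thing with millions of weights:

```python
# Toy "training": the computer guesses, we correct it, and it
# nudges its weights a tiny bit after every mistake.

def guess(weights, picture):
    total = sum(w * x for w, x in zip(weights, picture))
    return 1 if total >= 0 else 0  # 1 = "grandma", 0 = "not grandma"

# Pretend pictures: lists of clues (1 = clue present, 0 = absent),
# paired with the right answer. All made up for illustration.
pictures = [([1, 1, 0], 1), ([1, 0, 1], 1),
            ([0, 1, 1], 0), ([0, 0, 1], 0)]

weights = [0.0, 0.0, 0.0]
for _ in range(20):                 # show the pictures many times
    for picture, answer in pictures:
        mistake = answer - guess(weights, picture)   # +1, 0, or -1
        for i in range(len(weights)):
            weights[i] += 0.1 * mistake * picture[i]  # tiny nudge

# "Inference": using the trained weights on a picture.
for picture, answer in pictures:
    print(guess(weights, picture), "should be", answer)
```

After training, the guesses match the answers. Training is the slow nudging loop; inference is just calling `guess` with the finished weights.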
The training example computers love: pictures of cats and dogs
If you go to teachablemachine.withgoogle.com (ask your parent first), you can train your own AI in 5 minutes.
- Click "Image Project"
- Name the first class "Smile". Point the webcam at yourself smiling and hold the record button until you have about 30 pictures.
- Name the second class "Frown". Now point it at yourself frowning (sad face) and record about 30 more pictures.
- Click "Train Model." Wait 10 seconds.
- Now smile at the camera. It will say "Smile!" Now frown. It will say "Frown!"
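Teachable Machine uses a real neural network under the hood, but the simplest look-alike of the same idea fits in a few lines: store labelled examples, then label a new thing by finding which stored example it is closest to. Here is a toy sketch with made-up number lists standing in for photos:

```python
# Toy classifier: remember examples, then answer with the label
# of the closest remembered example. (Teachable Machine is far
# fancier, but the "learn from examples" spirit is the same.)

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def classify(examples, new_item):
    best_label, best_dist = None, float("inf")
    for features, label in examples:
        d = distance(features, new_item)
        if d < best_dist:
            best_label, best_dist = label, d
    return best_label

# Pretend "pictures": [mouth corners up?, teeth showing?] from 0 to 1.
examples = [([0.9, 0.8], "Smile!"), ([0.8, 0.6], "Smile!"),
            ([0.1, 0.0], "Frown!"), ([0.2, 0.1], "Frown!")]

print(classify(examples, [0.85, 0.7]))   # -> Smile!
print(classify(examples, [0.15, 0.05]))  # -> Frown!
```

A new face it has never seen still gets the right label, as long as it is more like the smile examples than the frown examples — which is exactly why your trained model recognises a brand-new smile.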
That's it. That's machine learning. Same trick as self-driving cars, M-Pesa fraud detection, and the YouTube recommendation system. Just bigger, with millions of examples instead of 60.
Why AI sometimes gets things TOTALLY wrong
AI makes mistakes. Big ones. Here's why:
- If you only trained it on cats with their eyes open, it might not recognise a sleeping cat.
- If you only trained it on light-skinned faces, it might struggle with darker-skinned faces. (This is a real problem with some face-recognition apps. It's unfair, and it's because of bad training data.)
- If someone asks ChatGPT a question it doesn't have a good answer for, it might make up a fake answer that sounds real. This is called "hallucinating." AI doesn't know it's lying.
Remember this rule forever: Always check AI's answers. Never trust without checking. AI is smart, but it can be confidently wrong.
Open Teachable Machine together at teachablemachine.withgoogle.com. Train it to spot the difference between two things in your home — maybe a shoe and a book. Take 30 pictures of each. Train the model. Then test: does it say "shoe" when you show a shoe? Try it with a shoe it has never seen (maybe mum's shoe). Does it still work? Tell your parent what you learned.