
Weekly roundup
Welcome back, friends
This week brought one of the most consequential regulatory milestones for kids and AI we've seen yet: the FTC's updated COPPA rule officially became enforceable on April 22. The same week, Meta rolled out new parental visibility into teen AI chats, and a major investigation revealed how badly underfunded child-protection investigators are falling behind AI-fueled abuse. There's a lot, but I've kept it quick.
Two podcast episodes pair perfectly with this week's stories. Listen alongside your reading:
AI Takes 3 Seconds to Steal Your Voice. One Step Stops It. (Elementary school): A must-listen alongside this week's investigation into AI-generated abuse imagery. Voice and photo data are the new frontier, and your kids can learn to protect both.
How to THINK About AI Before It Thinks for You (Older kids, parents & teachers): The perfect companion to the COPPA news and the classroom media literacy story.
Pour the coffee. Let's get into it.
National Focused Article
COPPA's AI-Era Update Took Effect April 22
After more than a decade without a meaningful refresh, the Federal Trade Commission's amended Children's Online Privacy Protection Rule officially became enforceable on April 22, 2026. The updated rule expands the definition of "personal information" to include biometric identifiers like voiceprints, fingerprints, and facial templates. It bans indefinite retention of children's data, requires written data retention policies, and, most consequentially for the AI industry, requires separate verifiable parental consent before a child's personal data can be used to train or develop AI systems. Civil penalties reach over $51,000 per violation, per day.
What parents and teachers need to know: The practical takeaway for you at home or in the classroom is that any app, game, or smart toy your child uses should be asking for clearer, separate permission before sharing their data, especially for AI training. Take ten minutes this week to review what permissions the apps on your child's device actually have. If a privacy policy says data is kept "indefinitely" or buries AI training inside a single blanket consent, you have more leverage now than you did two weeks ago.
National Focused Article
The Push to Ban AI Chatbots in Children's Toys
U.S. Congressman Blake Moore introduced the AI Children's Toy Safety Act. The legislation aims to ban the manufacturing, importation, and sale of children's toys or childcare articles that incorporate artificial intelligence chatbots. The push for a ban was prompted by recent consumer testing showing that AI-integrated toys frequently veer into inappropriate themes, vulgar language, and explicit content. Lawmakers are emphasizing the severe data privacy challenges, unpredictable engagement patterns, and the risk of fostering addictive behaviors in young children.
What parents and teachers need to know: Listen, just because a toy is sitting on a store shelf doesn't mean it's safe for our little ones to take home. These interactive dolls and teddy bears might seem like a fun, high-tech way to keep them entertained, but check the labels, do a little digging on what powers that new gadget, and remember that good old-fashioned wooden blocks and board games never collected anyone's personal data or said anything out of pocket.
State Focused Article
False AI Accusations Impacting Top Students
Reports surfaced detailing a high school student receiving a failing grade after three separate AI-detection tools incorrectly flagged her English assignment as AI-generated. The incident highlights a rapidly growing national issue where capable, hard-working students are wrongfully accused of academic dishonesty. Cybersecurity and education experts point out that AI detection software remains highly unreliable and prone to false positives, yet many school districts are increasingly relying on it to evaluate and penalize student submissions.
What parents and teachers need to know: Teachers, I know the grading pile is heavy and your time is stretched thin, but we cannot hand over our trust to a software program that gets it wrong this often. We need to know our students' actual voices and writing styles. Parents, if your child comes home upset because a machine called them a cheater, you must advocate for them. A great practice right now is having them write their drafts in a document that tracks their typing history so they can definitively prove their hard work. We want to encourage their brilliance, not let a faulty AI program crush their spirit (and trust).
Local Focused Article
Middle Schoolers Teach Educators About AI Ethics
In Oak Park, Illinois, a group of middle school students spent the academic year investigating AI tools and presented their findings to teachers and parents during a recent April panel discussion. The students emphasized that AI will be a permanent fixture in their academic and professional futures, noting that AI will likely be the entity grading their future college resumes. They raised critical points about the acceptable use of AI for brainstorming versus completing assignments, revealing that students are highly aware of the ethical dilemmas but are rarely given a voice in school policy-making.
What parents and teachers need to know: Out of the mouths of babes, right? We spend so much time talking about the kids, we forget to actually talk to them. They know this technology is their reality. Instead of just handing down strict rules from on high, let's pull up a chair and ask them what they think. Have an open conversation at the dinner table or in the classroom. Ask them how they feel when a computer helps them brainstorm, versus when it tries to think for them. We might be surprised by how much grounded wisdom they already have.
"You [need to] take time before you believe... AI can sound very confident. They're trained to do so; they're very good at language, but [AI] can do that while being completely wrong, or half wrong, or partly wrong."
AI Tools Article
Meta Launches Parental Insights for Teen AI Chats
Meta introduced a new "Insights" tab inside its supervision hub on Instagram, Facebook, and Messenger. Parents won't see word-for-word transcripts, but they will get weekly topic summaries of what their teens are discussing with Meta AI, broken into broad categories like School, Health & Wellbeing, Entertainment, and Lifestyle. Meta is also developing direct alerts that notify parents if a teen's AI conversations veer into self-harm or suicide. The rollout is currently live in the U.S., U.K., Australia, Brazil, and Canada, with global expansion expected in the coming weeks. The move follows a Wall Street Journal investigation, a legal defeat in New Mexico, and an FTC inquiry, all meaningful context for understanding why the tool exists.
What parents and teachers need to know: If your teen uses Meta AI, turn the Insights tab on, and then use what you see as a starting point for a conversation. "I noticed Health came up a lot this week. Anything you've been thinking about?" lands differently than "I saw what you asked the bot."
Research Focused Article
Investigation: AI Has Supercharged Child Predators While Investigators Fall Behind
A Bloomberg investigation detailed how Internet Crimes Against Children (ICAC) task forces are struggling to keep pace with a flood of AI-generated child sexual abuse material. Reports of child sexual abuse material to the National Center for Missing and Exploited Children (NCMEC) rose from thousands in 2023 to over a million in 2025. NCMEC's own review identified more than ten times as many AI-generated CSAM files as companies themselves reported between 2023 and 2026. Federal funding for these task forces, meanwhile, has been arriving later each year, with three task forces still waiting on expected funding as of early April. Investigators must now sift through synthetic imagery to find real children in active danger.
What parents and teachers need to know: One of the single most protective things you can do is talk to your child about photos, what gets posted, by whom, and where. AI tools can now generate convincing fake explicit images from ordinary social media photos, which means privacy settings on family accounts matter more than ever. A practical step: audit who can see your child's photos on every platform you use, and turn off public visibility where you can. Then, age-appropriately, let your child know that if anyone ever sends them a strange image of themselves, real or fake, they can come straight to you. We learned in this week’s AI for Kids podcast episode that you can report those images here.
Research Focused Article
How AI Is Shaping Teachers' Wellbeing
Education Week published research from David T. Marshall and Tim Pressley examining how AI is affecting teacher wellbeing. The findings landed in the same week as broader survey data showing 61% of elementary school educators say their students struggle "a lot" to distinguish AI-generated content from human-created content, compared to 44% in middle school and 38% in high school. The research arrives as the American Federation of Teachers continues its National Academy for AI Instruction, training 400,000 teachers, and as the Trump administration pushes AI adoption while simultaneously facing GOP scrutiny of broader ed-tech use.
What parents and teachers need to know: Teachers are carrying a lot right now: adopting unfamiliar tools, redesigning assignments, and trying to spot AI-generated work in real time, often without much district-level guidance. If you're a parent, a kind email to your child's teacher asking what they're seeing with AI in the classroom can let them know you understand and you care. If you're an educator, the elementary-grade media literacy gap is the one to watch. The earlier kids learn to ask "who or what made this?" the more durable that habit becomes. AI literacy should start as early as possible.
Screen-Free Game: Think Tank
A family card game that turns Dhani Ramadhani's THINK framework into something kids actually want to play. Works with 2–6 players.
What You Need: index cards (about 25), a pen, a small bowl or hat, and 20 minutes.
Set up the deck
On separate index cards, write one "AI Scenario" per card. Aim for 15–20. Mix easy and tricky. (Don’t put these in the bowl.)
Examples:
An AI says George Washington invented the lightbulb. It sounds really sure of itself.
You ask AI to plan your birthday party and it asks for your full name, address, and school.
Your friend uses AI to write their entire book report in 5 minutes.
An AI tells you a "fun fact" but doesn't say where the fact came from.
You want a recipe for cookies, so you tell the AI your mom's name, your address, and that you're 9.
AI gives you an answer for your science homework. You copy it word-for-word into your assignment.
You ask AI which sport is the best, and it tells you football. No explanation.
On 5 separate cards, write one letter each: T, H, I, N, K. Put these in the bowl.
How to Play
Shuffle the scenario cards into a face-down pile in the middle.
The youngest player goes first. They draw a scenario card and read it out loud.
They then pull one letter from the bowl. That letter is the lens they have to use.
T = Take time before you believe. AI can sound very confident, but be completely wrong, half wrong, or partly wrong. What might not be true here?
H = How does it work? Get curious about how the AI got to this answer. Did it give sources? What might be missing?
I = Intention. Why are you using AI right now? Are you trying to learn, create, or skip the hard part?
N = Never share secrets or important information. What personal info shouldn't have been shared? How could you ask the question without it, maybe by giving the AI a persona to think through instead?
K = Keep your brain in charge. How would you say this in your own words? You don't have to take whatever the AI gives you.
The player has 30 seconds to give their answer using that lens.
The rest of the table votes: thumbs up = solid answer, thumbs sideways = decent, thumbs down = try again.
Thumbs up = 2 points
Thumbs sideways = 1 point
Thumbs down = 0 points, and someone else can steal by giving a better answer
Put the letter back in the bowl. Pass play to the next person.
First to 10 points wins. Or play until you run out of scenarios.
Why it works: Kids don't just hear about THINK, they practice using each letter on real situations they'll actually encounter. The randomness of pulling a letter forces flexible thinking, because the same scenario looks completely different through the T lens versus the N lens. And the voting piece turns the whole family into co-thinkers, which is exactly what Dhani's framework is built for.
Until next week
The week of April 22 feels like one of those weeks we may look back on later and say, “Oh, that was a shift.” New rules kicked in. New tools showed up. And the cost of pretending this is all “future stuff” got a lot higher for tech companies.
None of this is getting simpler. Annoying, I know. But the good news is people are paying attention, asking better questions, and starting to move. That includes you. Every time you read, share, question, or talk to a kid about what is happening, you are helping make this a little less confusing.
If something in this issue made you think of another grown-up in a child’s life, send it their way. And if there’s a question you want me to answer in a future issue, hit reply. I read everything.
Stay curious. Keep asking questions. See you next week.
Amber Ivey (AI)
Learn AI in 5 minutes a day
You don't have to scroll every AI thread, track every new tool, or watch every demo.
The Rundown AI breaks it all down for you — the latest AI news, tools, and tutorials in one free 5-minute email every morning.
Trusted by 2M+ professionals at Apple, Google, and NASA.

