Tired of news that feels like noise?
Every day, 4.5 million readers turn to 1440 for their factual news fix. We sift through 100+ sources to bring you a complete summary of politics, global events, business, and culture — all in a brief 5-minute email. No spin. No slant. Just clarity.
Weekly roundup
Welcome back, everyone.
This week, the conversation around kids and AI is moving fast on multiple fronts. Schools are wrestling with whether AI tools support learning or shortcut it, some districts are blocking certain apps while still trying to teach AI literacy, and lawmakers are pushing new safety rules for chatbots and age verification. Researchers are also sounding alarms about AI toys and emotional development, and educators are debating how AI should actually show up in classrooms.
Before we dive into this week’s updates, check out our latest podcast episode here: Is AI Making Your Brain Lazy? 3 Rules to Keep Your Brain Strong While Using AI
Alright, let’s look at what happened in the world of AI for kids between March 8th and March 15th.
International Article
The Debate Over AI and “Cognitive Offloading”
Some education experts are warning about something called “cognitive offloading,” which is when students let AI do too much of the thinking for them. More kids are using AI to summarize readings, answer questions, or even write essays. The concern is that when AI handles the hard parts, students may miss the struggle that helps their brains grow. Some researchers also say this problem is exposing a bigger issue in schools, which have often rewarded quick answers instead of real understanding.
What parents and teachers need to know: It is tempting for kids to use the easiest shortcut available. Let’s be honest, most of us probably would have done the same thing at their age. But the “productive struggle” experts talk about is where real brain growth happens. Feel free to encourage the young people in your life to sit with a real book, ask hard questions, and form their own messy, thoughtful ideas before they ever ask a machine for the answer.
Quote
“Your brain is a muscle, and in order for that muscle to be the best version of itself, it has to work out.”
-Amber Ivey (AI), Host of the AI for Kids Podcast
State Focused Article
Boulder Schools Block ChatGPT Over Safety Concerns
The Boulder Valley School District (BVSD) officially blocked ChatGPT on all school networks and devices. This decision was triggered by new features in GPT-5.3, specifically a new unmonitored “chatroom” function and an “adult mode” that allows for explicit image generation. District officials stated that current age verification isn’t strong enough to keep students safe from potential bullying or inappropriate content. While ChatGPT is out, the district is still supporting AI literacy through “MagicSchool,” a platform specifically designed for teachers to monitor and guide student use.
What parents and teachers need to know: Sometimes these big companies release features that aren’t quite ready for a hallway full of tweens or teenagers, and the schools have to step in. If your district does something similar, take it as an opportunity to talk to your kids about why certain boundaries exist. It’s a great time to discuss how a “cool new feature” might have hidden risks, and why using tools built specifically for learning is usually the safer bet for now.
State Focused Article
Washington Becomes Second State in 2026 to Pass AI Chatbot Safety Law for Kids
Washington State legislators gave final passage on March 11 to HB 2225, a companion chatbot safety bill for minors, the second such law to pass in 2026, following Oregon’s approval of a similar bill the prior week. If the governor signs it, companies that run AI companion chatbots will have to add protections for minors and include crisis support if a user talks about self-harm or suicide. The law also requires chatbots to clearly tell users they are talking to AI, not a real person. Across the country, lawmakers in 27 states are currently considering 78 different bills focused on AI chatbot safety for kids.
What parents and teachers need to know: Two states. Two weeks. These laws are specifically targeting companion-style AI chatbots: the apps where the AI plays the role of a friend, a confidant, or a supportive presence. If your child or student is using something like that, here are a few questions to consider. What does the app say it is? What is it actually designed to do? And does your kid know the difference between a program and a person?
State Focused Article
NYC Plans First AI-Focused Public High School, Parents and Educators Push Back Hard
New York City’s Department of Education unveiled a proposal in early March for the Next Generation Technology High School, a new selective school set to open in Lower Manhattan in September 2026. The school would train students for careers in cybersecurity, computer science, robotics, and AI, with a stated goal of making students “builders as well as ethical users of AI.” But the proposal has drawn sharp backlash from parents and community members, who note that the NYC DOE still has no citywide AI policy. Five Community Education Councils have already passed resolutions calling for a two-year moratorium on AI use in schools.
What parents and teachers need to know: How do you build a school designed to teach AI literacy when you don’t yet have rules for how AI should be used in any school? Even if you’re not in New York, this debate is coming to your city. You can start by asking your own school or district: What is your AI policy? How are teachers being trained? Who is making these decisions, and how can families be part of the conversation?
Research Focused Article
Real-Time Data Reveals How Students Are Using AI on School Tech
A school safety company called Securly looked at nearly 1.2 million student interactions with AI across more than 1,300 school districts between December 2025 and February 2026. The data shows that ChatGPT was the most used tool, making up about 42% of the interactions, followed by other AI tools. Most of the time students used AI appropriately. About 80% of conversations followed school rules when districts had guardrails in place. However, about one in five interactions involved problems, such as students trying to cheat on schoolwork, bullying, or asking questions related to self-harm. The monitoring system also flagged serious safety concerns, including suicide-related searches, so schools could step in and respond.
What parents and teachers need to know: One thing many people don’t realize is that AI use on school devices is often monitored. If a student asks AI about things like self-harm or violence, the system can alert school staff. It’s worth reminding kids that school technology isn’t private and that adults are there to help if something serious comes up.
Research Focused Article
Cambridge Report: “Talking” AI Toys Can’t Understand Kids’ Emotions
A systematic study by the University of Cambridge warns that AI-powered toys, marketed as “friends” for children under five, often fail to understand social play or basic human emotions. Researchers observed that when children expressed deep feelings, like saying “I love you” or “I’m sad,” the AI bots responded with robotic reminders about “interaction guidelines” or ignored the emotion entirely to keep the “fun” going. The report calls for strict new safety standards and labels to ensure these toys don’t interfere with a child’s emotional development or privacy.
What parents and teachers need to know: When a little one says they’re sad, they need a real person to hear them, not a machine that changes the subject to keep the game or activity going. If you have AI toys at home, it’s best to keep them in shared spaces like the living room so you can hear what’s happening. These toys can be fun, but they shouldn’t take the place of adults or confuse kids about where to go when they have real human feelings.
More AI Headlines That Affect Kids
Critics Label Big AI Teacher-Training Push a “Grift”
An education analysis called out major initiatives like Google’s three-year deal to train all six million K-12 teachers using Gemini and NotebookLM, plus the AFT’s $23 million academy funded by OpenAI, Anthropic, and Microsoft for lesson-plan tools. The piece warns of profit-driven hype, cognitive offloading, and no proven long-term student gains, urging schools to resist the rush.
AI Offers Big Wins for Language Learners, If Humans Lead
A new report from the British Council says AI can actually be helpful for kids learning languages, but only if humans stay in charge. Tools powered by AI can give students extra practice, help them try conversations without feeling embarrassed, and point out patterns in their mistakes. That can build confidence, especially for kids who are nervous about speaking a new language. But the researchers also warn that AI can carry bias, misunderstand accents, and collect a lot of personal data.
Age Checks Are Coming to More Apps
Governments around the world are pushing tech companies to verify users’ ages online. New “age-assurance” tools can estimate someone’s age using things like facial analysis, account history, parental approval, or ID checks. Improvements in AI have made these tools more accurate and cheaper to run, which is why regulators are starting to require them for social media, AI chatbots, and adult sites. Some countries, including Australia, have already begun enforcing these requirements.
What parents and teachers need to know: When you hear about new AI tools for classrooms, language learning, or apps asking kids to verify their age, take it as a cue to ask a few simple questions. Ask your school what AI tools teachers are actually using and how they protect student data. And when apps ask for age checks or personal information, pause before approving and look for options that use parental permission instead of uploading IDs or face scans. AI can help with learning, but adults should still set the guardrails.
Screen-Free Game: The Evidence Ladder
This game helps kids realize that “proving” your age online is a trade-off. While simply typing in a birthday is easy to fake, stronger methods (like scanning a face or an ID) mean handing over very personal “digital keys” to a company. By playing, families learn to spot the difference between staying safe and giving away too much private information.
What You Need: 12 scraps of paper (or sticky notes), a marker, and a floor or table.
How to Play:
Prep: Write one “Age Check” on each scrap of paper:
Age Checks:
1. Typed my birthday
2. Clicked “I’m 13+”
3. Parent clicked “Approve”
4. School email check
5. Text code to Mom’s phone
6. Face scan
7. Upload my ID
8. Take a selfie
9. Credit card check
10. App store check
11. No check at all
12. Ask a teacher in person
The Ladder: On the floor, imagine a ladder with 4 rungs:
Rung 1: Super easy to fake (anyone could do it).
Rung 2: A little bit of work.
Rung 3: Hard to fake.
Rung 4: Very strong proof, but uses very private info (like your face or ID).
Sort: Have the kids place each paper on the rung where they think it belongs.
The Talk: Ask them: “Which one is the safest?” and “Which one feels a little too creepy or private?”
That’s all for this week.
Don’t feel like you have to have a PhD in computer science just to sit at the dinner table; your life experience and your values are the best filters your kids or students will ever have. We’re all figuring out the “new normal” together, so give yourself credit for staying curious. Keep leading with curiosity and a little healthy skepticism, and we’ll be back next week to share the most recent news.
Be kind to yourselves and others,
-Amber Ivey (AI)

