Weekly roundup

This week was a lot.

(But honestly, when is it not? I'm starting to think "slow news week" is just something people made up to feel better.) States were passing laws, a major education CEO admitted his AI tutor did not quite go the way he thought it would, and Texas is asking some very fair questions about what happens when kids start treating chatbots like therapists.

That is why I put this together each week. Not to add one more thing to your plate, but to help you sort out what actually matters for your family or classroom. Read what's useful, skip what's not. I won't take it personally.

And for kids who want AI topics broken down in a way that actually makes sense to them, check out this week's podcast episodes.

Thanks for sharing this newsletter. If it has been helpful, you can refer a friend or coworker through the program below and earn rewards. Now, on to this week’s news.

National Focused Article

Sal Khan Admits His AI Tutor Was a “Non-Event” for Most Students

Sal Khan, the founder of Khan Academy and one of the biggest voices pushing AI in education, said openly this week that his AI tutoring chatbot Khanmigo hasn’t lived up to the hype. In an interview with Chalkbeat, Khan compared Khanmigo to a tutor sitting in the back of a classroom waiting for students to come ask for help, and most students simply didn’t. Khan acknowledged that AI alone doesn’t solve the motivation problem. The tool was designed to be Socratic, nudging students toward answers instead of handing them over, but most kids didn’t engage with it enough to benefit. Khan remains optimistic about AI in education but now sees it as part of a solution, not the whole answer.

What parents and teachers need to know: For the past three years, there’s been so much pressure on schools and parents to adopt AI tools or risk falling behind. Now one of the biggest names in the game is saying: slow down, the tool isn’t magic. If your child’s school is using Khanmigo or any AI tutor, ask how it’s being used alongside actual teacher instruction. A chatbot sitting in the corner is not a substitute for a teacher who walks over to your child’s desk and says, “Show me where you’re stuck.” Technology is still a tool in a teacher’s toolbox.

International Focused Article

China Rolls Out National AI+ Education Plan

China launched a national action plan this week called "AI+ Education," mandating AI integration at every level of learning, from elementary school through adult education. The Ministry of Education and four other government bodies are behind it, and they were blunt about why: the U.S., the EU, and Singapore are all investing heavily in AI education, and China doesn't intend to fall behind. Officials said AI is "forcing a systemic and fundamental overhaul of education." Instead of letting cities and districts build their own tech piecemeal, the central government is creating a unified national computing and data platform to standardize AI infrastructure across the entire country.

What parents and teachers need to know: While we're still arguing about whether kids should use ChatGPT on homework, China is building a national pipeline to train an entire generation in AI from the jump. You don't have to love the approach to respect how unified it is. The bigger question for us: what's our plan? If your school doesn't have a clear strategy for teaching kids about AI, not just blocking it or blindly adopting it, that's a conversation worth starting. The global race isn't waiting on anyone to figure it out.

Quote

“Who made this? Why did they make it? And why is this platform showing it to me?”

-Matt Silverman, Digital Producer (Quote from AI for Kids Podcast)

State Focused Article

Texas Investigates Meta and Character.AI Over Fake Mental Health Services for Kids

Texas Attorney General Ken Paxton announced an investigation into Meta AI Studio and Character.AI this week, accusing both platforms of marketing AI chatbots as mental health tools to children without proper credentials or oversight. Paxton’s office issued civil investigative demands to both companies, alleging that some chatbots impersonate licensed therapists, fabricate qualifications, and collect sensitive data from minors under the guise of confidential counseling. The investigation focuses on possible violations of Texas consumer protection laws and the state’s SCOPE Act, which restricts how companies handle minors’ personal data.

What parents and teachers need to know: Kids are going to chatbots for emotional support, and some of those chatbots are pretending to be therapists. They can’t diagnose anything, they can’t treat anything, and the conversations your child thinks are private? They’re being logged and used for advertising. Tonight, ask your child: “Have you ever talked to an AI about how you’re feeling?” Make it a no-judgment conversation. And then make sure they know: if they need to talk to someone, there are real people ready to help. A chatbot is not that person.

State Focused Article

NYC Holds Public Hearing on Proposed AI-Focused High School

On April 14, New York City held a public hearing on a proposal to open Next Generation Technology High School, an AI-focused public high school in Lower Manhattan. The school would use Google’s AI-powered Skills Platform and prepare students for careers in cybersecurity, robotics, and computer science. However, the proposal has sparked significant pushback from parents and educators. Critics have raised concerns about the school’s ties to Google, questions about data privacy, and the fact that the city’s broader AI guidance for schools is still incomplete. A petition against the school has gathered over 1,300 signatures. The Panel for Educational Policy will vote on the proposal April 29.

What parents and teachers need to know: There’s nothing wrong with preparing kids for tech careers. But when the curriculum is built on a specific company’s platform, and the district hasn’t even finished its own AI rules yet, parents are right to ask hard questions. This isn’t just a New York story. Wherever you are, if your district starts talking about AI-focused programs, pay attention to who’s at the table. Is it educators and families driving the conversation, or is it industry partners looking for a pipeline? Those are two very different starting points.

AI Tools Article

OpenAI Releases a Child Safety Blueprint to Combat AI-Generated Exploitation

(I missed this last week and wanted to make sure you all saw it.) OpenAI published a new policy framework called the Child Safety Blueprint on April 8, outlining a path for strengthening protections against AI-enabled child sexual exploitation. Developed with the National Center for Missing and Exploited Children (NCMEC) and the Attorney General Alliance, the blueprint focuses on three priorities: updating laws to address AI-generated abuse content, improving how companies report abuse to law enforcement, and building prevention tools directly into AI systems. The blueprint comes as the Internet Watch Foundation reported over 8,000 cases of AI-generated child sexual abuse material in the first half of 2025 alone, a 14% increase from the prior year.

What parents and teachers need to know: It’s complicated when the same company facing lawsuits about its products harming children also puts out a blueprint about child safety. Credit where it’s due: the framework itself addresses real gaps in how AI-generated abuse material is detected and reported. Watch what happens next. And in the meantime, the takeaway for families is this: AI tools can now generate realistic images of anyone, including children. Talk to your kids about the permanence of images online, the importance of never sharing personal photos with apps or strangers, and the fact that you are a safe person to come to if anything makes them uncomfortable.

Research Focused Article

Stanford's 2026 AI Report Card: 4 Out of 5 Students Are Using AI, Schools Are Barely Keeping Up

Stanford’s Human-Centered Artificial Intelligence institute released its 2026 AI Index Report, revealing that four out of five U.S. high school and college students now use AI tools for school-related tasks. However, only about half of middle and high schools have formal AI policies in place, and just 6% of teachers say those policies are clear. The report also found that generative AI reached 53% population adoption within three years, faster than the personal computer or the internet. AI capabilities continue to accelerate, with frontier models now exceeding human expert performance on some PhD-level science and math benchmarks.

What parents and teachers need to know: Four out of five. Your child is very likely already using AI for school, whether they’ve told you or not. And their school probably hasn’t given you, or their teachers, a real roadmap for how to handle it. This isn’t the time to ban things and hope for the best. It’s the time to ask your child, directly and without judgment: “Show me how you use AI for your schoolwork.” Get in the room. Watch what they’re doing. That’s one way to start the conversation.

Research Focused Article

California Middle Schools Become Testing Ground for AI Classroom Tools

A report published April 13 spotlights California middle schools as “ground zero” for testing AI tools in classrooms. Schools across the state are piloting AI-powered grading, feedback, and assessment tools with mixed results. At KIPP Public Schools Northern California, one AI writing feedback tool caused student distress instead of helping. At South Lake Middle School in Irvine, a math teacher found that his AI exit-ticket tool sometimes marked correct answers wrong. Teachers report that clear district-level guidance is still rare, leaving individual educators to set their own rules.

What parents and teachers need to know: I want you to hear what happened here: a student got the answer right and the AI tool marked it wrong. A writing tool meant to help kids actually stressed them out. The technology is still making mistakes, and our kids are the ones absorbing the impact. If your child’s school is using AI tools, you have every right to ask: Which tools? How are they being tested? And what happens when the tool gets it wrong? Your kid’s peace of mind is not a beta test.

Screen-Free Game: Source Detective, The 3-Question Card Game

This week's screen-free game comes from our guest Matt Silverman. Make sure to listen to this week’s episode, where Matt shares three questions everyone should ask before they like, share, or believe anything online: Who made this? Why did they make it? Why is this platform showing it to me?

What you need:
Index cards or scrap paper, a pen, and two or more players.

How to play

  1. Make a set of simple “post cards” with pretend online posts on them. Each card should have one short example, like “Free pizza for every kid today!”, “Your friend posted a picture of her dog,” “This toy will make you smarter,” or “Breaking news: school canceled forever.”

  2. Next, make three question mats and place them on the table:

    Who made this?
    Choices: a friend, a company, a news/info source

    Why did they make it?
    Choices: to share, to sell, to get attention

    Why is this platform showing it to me?
    Choices: because I liked something like this before, because lots of people click it, because it is trying to keep me watching

  3. One player picks a post card and reads it aloud. Then the other player, or the whole group, has to answer Matt’s three questions in order: Who made this? Why did they make it? Why is this platform showing it to me? The player answers by putting the card, a token, or a marker on one choice under each question mat.

  4. After answering all three questions, the player decides: like it, share it, or scroll past it.

  5. Then an adult, teacher, or the person running the game gives the best answer and explains why.

  6. Players get 1 point for each question they answer well, so they can earn up to 3 points per card. They get 1 bonus point if they can spot a trick the post is using, like “free,” “breaking,” a giant number, or something meant to make them react fast.

  7. For younger kids, make it a team game. Try to get 15 points together before the cards run out.

Why it works: Matt talked about how algorithms are built to trigger emotional responses and keep you scrolling, and how content farms now use AI to create endless versions of that bait. This game makes kids practice those exact questions every single turn, so the habit starts to stick.

Until next week.

Here’s what I want you to take away from this week: the systems that are supposed to protect our kids are still being built. Schools are still drafting policies. Companies are still writing blueprints. Legislatures are still debating bills. None of that is a reason to wait.

You don’t need a perfect policy to have a real conversation. Ask your kid what AI tools they’re using. Ask their teacher what the classroom rules are. Play Source Detective at the dinner table this week. The most powerful thing you can give your child right now is the habit of asking questions before they click, share, or believe what they see.

We’re in this together. See you next Tuesday.

Amber Ivey (AI)

Smart starts here.

You don't have to read everything — just the right thing. 1440's daily newsletter distills the day's biggest stories from 100+ sources into one quick, 5-minute read. It's the fastest way to stay sharp, sound informed, and actually understand what's happening in the world. Join 4.5 million readers who start their day the smart way.
