Tired of news that feels like noise?

Every day, 4.5 million readers turn to 1440 for their factual news fix. We sift through 100+ sources to bring you a complete summary of politics, global events, business, and culture — all in a brief 5-minute email. No spin. No slant. Just clarity.

Weekly roundup

Hey, y’all. Let’s talk.

This was one of those weeks where you could feel things shifting a little.

The White House dropped its big AI framework and kids were right in the middle of it. Teachers are getting trained on tools most of us have never even heard of. A new report basically confirmed what a lot of us already know. Kids are using AI way more than we think, and they don’t even agree with us on whether that’s a good thing.

And then… a company got caught putting real people’s names on AI-generated work without asking. Not great.

Oh, and states are now seriously debating whether laptops even belong in elementary school classrooms. I saw this shift starting in certain school districts a few years ago. So yeah. A lot happening. But none of this is about being for or against technology. It’s about making sure the adults in the room, that’s us, stay informed enough to make good decisions for the kids in our lives.

As always, if you want a safe place to explore AI with your kids or students, check out the AI for Kids podcast.

Now let’s get into it.

National Focused Article

White House Releases National AI Framework, Kids Are the Lead Story

The White House released its long-awaited national AI framework, and child safety is right at the top. The plan says Congress should make sure the child privacy rules we already have also apply to AI. That includes limits on how kids’ data is collected, how it’s used to train AI, and how companies target ads to minors. It also calls for age checks on AI platforms and makes it clear that states should still be able to enforce their own child protection laws, even if there’s a broader federal AI law.

Days earlier, Senator Marsha Blackburn introduced her own AI discussion draft, which embeds the Kids Online Safety Act and explicitly allows states to go beyond federal protections for minors. The framework also builds on the Take It Down Act, which addresses deepfake pornography targeting children and adults.

What parents and teachers need to know: This is a framework, which means Congress still has to actually pass something, and the details will matter enormously. The good news is that both the White House and members of Congress are putting kids at the top of the agenda. The question to keep watching is whether “protecting kids” means real, enforceable privacy rules, or something that sounds nice on paper but doesn’t change much in practice. For now, it’s worth knowing that the conversation is happening. And it’s worth asking your kids’ schools what they’re already doing to protect student data when AI tools are in use.

National Focused Article

Teachers Get Their First Hands-On Training at National AI Academy

About 50 teachers gathered in New York City for the first official training session of the National Academy for AI Instruction. The academy is a five-year, $23 million partnership between the American Federation of Teachers, Microsoft, OpenAI, and Anthropic, with a goal of training 400,000 teachers to actually use AI well in the classroom. This first session went beyond the basics. Teachers were building “agentic” AI tools, meaning systems that can handle multi-step tasks and require some real judgment, not just quick prompts. Right now, about six in ten teachers say they’re already using AI, mostly for lesson planning and admin work. The goal here is to move beyond that and start using it in ways that actually improve how kids are learning.

What parents and teachers need to know: Teachers being trained by other teachers, not just tech companies, on how to use AI thoughtfully is exactly what we need more of. If your child’s teacher starts mentioning AI in how they teach, that’s not necessarily a red flag. It could mean they’re getting better tools and training. Your first question may be “Is my child’s teacher using AI?” but it should quickly shift to “How are they using it, and what guardrails are in place?” Good teachers with good training will still be the best thing in your child’s classroom.

State Focused Article

Schools Are Tired of Waiting, Drafting Their Own AI Bans

With district-wide and state guidance lagging behind the technology, local schools are taking matters into their own hands. In New York City, for instance, schools like East Side Community spent months drafting their own comprehensive AI policies to prohibit the unsupervised use of generative AI for assignments. Rather than waiting for top-down mandates, educators on the ground are establishing their own boundaries to prevent cheating while attempting to figure out how to safely prepare students for an AI-driven workforce.

What parents and teachers need to know: I commend these educators for stepping up while the higher-ups appear to be dragging their feet. For parents at home, this means staying in active communication with our children’s schools. Don’t wait for the next parent-teacher conference. Ask your child’s teacher what their specific classroom policy is on AI. You don’t want to accidentally allow tools at home that go against what’s expected in the classroom. Consistency between school and home is important so we don’t confuse the littles in our lives.

State Focused Article

16 States Now Debating Whether to Limit Screens and Ed Tech in Classrooms

A wave of legislation is building across the country that goes beyond cellphone bans. Lawmakers in at least 16 states are now debating restrictions on classroom technology use, including school-issued laptops and educational software. In Kansas, a bill would ban digital devices entirely for grades K–5 and cap screen time at one hour per day for middle schoolers. Tennessee is considering a similar prohibition for elementary grades. In Utah, lawmakers are pushing a “back to basics” package that would set statewide screen time limits and require AI use policies in schools. Virginia’s Senate has already passed a bill requiring the state to develop model policies capping instructional screen time by grade level.

What parents and teachers need to know: This conversation isn’t really about being pro-tech or anti-tech. IMO, it’s about whether we’ve been thoughtful enough about what technology is actually doing for kids in the classroom. The research is mixed. Some tech helps. Some distracts. The fact that state legislators are even asking these questions should encourage you to ask them at the school level too. What are your kids actually doing on their school devices all day? Is it purposeful, or is it just filling time? These are fair questions, and you don’t have to wait for a law to ask them.

(Sidenote: This is also why we built AiDigiCards and the ABCs of AI Activity Deck as a screen-free way to introduce kids to AI. They need to understand the tech, but they don’t necessarily need another screen to do it.)

AI Tools Article

New COPPA-Compliant AI Chatbot for Kids Launches

The nonprofit Savvy Cyber Kids announced a partnership with Chatperone, a new AI chatbot platform designed specifically for children and teens. The platform is COPPA-compliant and text-only, with built-in parental controls including real-time conversation monitoring, daily message caps, allowed chat hours, and alerts for concerning topics. It also features a “homework mode” designed to teach rather than simply provide answers.

What parents and teachers need to know: If your child is going to use an AI chatbot, and let’s be real, many already are, the platform should be purpose-built with parental oversight. I don’t endorse specific AI tools, but anything you use should be designed with kids in mind, not just an adult LLM with a kid-friendly wrapper. That said, no tool replaces your involvement. “COPPA-compliant” just means it meets basic children’s privacy rules. That’s a baseline, not a gold standard. Before handing any AI tool to your kid, do your homework. Read the privacy policy, try it yourself, and set clear expectations together about how and when it gets used.

Quote

“And these long-term relationships…are what really sustain us to having a happy life. The online relationships and views are fleeting, they're temporary, they do not create long-term happiness.”

-Tim Allen, The Wharton School (Quote from AI for Kids Podcast)

AI Tools Article

Grammarly Faces $5M Lawsuit Over AI “Expert Review” That Used Real Names Without Consent

Journalist Julia Angwin filed a class action lawsuit against Superhuman Platform Inc., the parent company of Grammarly, over a paid AI feature called “Expert Review.” The feature offered writing feedback attributed to real journalists, authors, and editors, including Stephen King, Neil deGrasse Tyson, and Angwin herself, without their knowledge or consent. The tool charged users $12/month for what appeared to be personalized editing advice from these experts, when in reality the feedback was AI-generated. Grammarly has since disabled the feature, and the lawsuit seeks at least $5 million in damages.

What parents and teachers need to know: This is slightly outside the week, but worth sharing because it’s a strong teaching moment for older kids. Grammarly is a tool many middle and high schoolers already use for writing. You can ask them: did you know the app was presenting feedback as if it came from real people when it didn’t? This opens up a conversation about consent and identity, and what it means when companies use someone’s name or reputation without asking. It’s also a good way to help kids think about their own digital presence and what protections exist for their name and work.

Research Focused Article

Students Confess: AI Might Be Ruining Their Critical Thinking

A newly released RAND Corporation survey reveals a striking internal conflict among students: nearly 70% of middle and high schoolers now report being worried that relying on AI for schoolwork is actively eroding their critical thinking skills. Despite these growing concerns, which are up significantly from early 2025, student reliance on AI for homework continues to surge, jumping from 30% to 46% among middle schoolers in just over six months.

What parents and teachers need to know: Our young folks are smart enough to know when they are taking a shortcut that will cost them in the long run. If they are telling us they feel their brains getting a little lazy, we need to listen and intervene. It is time to get back to the basics at the kitchen table. Ask them to explain their homework to you out loud without looking at a screen. Let them wrestle with a hard problem for a few minutes and experience productive struggle before they reach for a digital crutch.

Screen-Free Game: Who Said It: Human or AI?

The Grammarly lawsuit and the question of AI pretending to be real people inspired this week’s screen-free game.

What You Need: Index cards or slips of paper, pens, a bowl or hat

Setup: Before the game, one player (or a parent) prepares 10–12 cards. On each card, write a short piece of advice, a fun fact, or a quote. Half should be real things said by real, named people (pull from books, interviews, or things the family knows). The other half should be made-up statements written to sound like they came from a specific person but didn’t. On the back of each card, write either “REAL – [person’s name]” or “FAKE – made up.”

How to Play:

  1. One player draws a card and reads the statement aloud, along with the name of the person it’s attributed to.

  2. Everyone else votes: do they think this person really said it, or is it made up?

  3. After everyone votes, flip the card to reveal the answer.

  4. Each correct guess earns one point.

  5. Play through all the cards. The player with the most points wins.

  6. After the game, talk about it together: Was it hard to tell what was real and what wasn’t? What made some of the fake ones feel believable? How would you feel if someone used your name or words without asking? And what do you think it means to give permission when it comes to your name and identity online?

Why it works: This helps kids practice figuring out what’s real and what’s not, which they’ll need every time they come across AI content. It also helps them understand consent in a way they can actually grapple with, not just hear about.

Until next week.

The world feels like it is trying to catch up with the speed of AI. Not perfectly, and not fast enough in some cases, but the conversations are happening at the White House, in state legislatures, in classrooms, and, most importantly, in homes like yours.

You don’t have to have all the answers about AI to be a good guide for your kids or students. You just have to stay curious, stay present, and keep asking the right questions. That’s what this newsletter is for, to keep you in the loop so you can keep yourself and your kids in the conversation.

If something in this week’s edition sparked a thought, a question, or a dinner table debate, I’d love to hear about it. Reply to this email or share it with someone who needs to be in this conversation too.

Take care of yourselves and each other. I’ll see you next week.

-Amber Ivey (AI)
