Google's shiny new AI assistant just got some harsh feedback from the people who actually care about kids' digital safety. Common Sense Media dropped a bombshell assessment this week, slapping both versions of Gemini designed for young users with a "High Risk" rating that should make any parent pause before letting their kid chat with Google's AI.
The nonprofit's comprehensive evaluation reveals what many cybersecurity experts have been quietly worrying about: these aren't purpose-built tools for children. Instead, they're essentially the adult version of Gemini wearing a thin safety costume. Think of it as handing a teenager the keys to a sports car and calling it safe because you bolted on training wheels.
Common Sense Media's research team put both Gemini Under 13 and the Teen Experience through rigorous testing, and the results paint a concerning picture. Despite Google's added filters and protections, both tiers can still surface inappropriate material about sex, drugs, and alcohol, along with unsafe mental health advice. The timing couldn't be worse, given recent reports linking AI chatbots to teen suicides and OpenAI facing its first wrongful-death lawsuit after a 16-year-old died by suicide following months of ChatGPT interactions.
"Gemini gets some basics right, but it stumbles on the details," says Robbie Torney, Common Sense Media's Senior Director of AI Programs. The organization's critique cuts deeper than surface-level safety features. They found that Google's approach treats all kids the same, ignoring the massive developmental differences between a curious 8-year-old and a high school sophomore navigating identity questions.
Here's where it gets particularly troubling from a safety-engineering perspective. Gemini protects young users' privacy by not remembering their conversations, which sounds good on paper but creates a serious blind spot. Without conversation history, the AI can't maintain consistent safety guardrails or recognize when a child might be in distress across multiple sessions. It's like hiring a security guard who forgets everything that happened five minutes ago.
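To make that tradeoff concrete, here's a minimal, purely hypothetical Python sketch. The marker list, names, and threshold are invented for illustration and say nothing about how Gemini is actually built; the point is simply that a checker with memory can accumulate weak signals that a stateless one must ignore.

```python
from collections import defaultdict

# Hypothetical markers -- illustrative only, not a real screening list.
DISTRESS_MARKERS = {"hopeless", "self-harm", "nobody cares"}

def stateless_check(message: str) -> bool:
    """A no-memory design judges each message in isolation,
    so weak signals spread across sessions never add up."""
    hits = sum(marker in message.lower() for marker in DISTRESS_MARKERS)
    return hits >= 2  # flags only an overtly alarming single message

class StatefulChecker:
    """Keeps a running per-user score across sessions, so a pattern of
    borderline messages can still trigger escalation -- precisely the
    capability a forget-everything design gives up."""

    def __init__(self, threshold: int = 3):
        self.scores: dict[str, int] = defaultdict(int)
        self.threshold = threshold

    def check(self, user_id: str, message: str) -> bool:
        self.scores[user_id] += sum(
            marker in message.lower() for marker in DISTRESS_MARKERS
        )
        return self.scores[user_id] >= self.threshold

# Three mildly worrying messages arriving in separate sessions:
checker = StatefulChecker()
for msg in ["I feel hopeless today", "nobody cares anyway", "still hopeless"]:
    print(stateless_check(msg), checker.check("child_42", msg))
# stateless: False, False, False -- stateful: False, False, True
```

None of those messages alone trips the stateless filter, but the pattern trips the stateful one. That, in miniature, is the gap the assessment is pointing at.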
The assessment arrives at a critical moment for Google's AI ambitions. Industry sources suggest Apple is considering Gemini as the engine behind its next-generation Siri, potentially exposing millions more teenagers to what Common Sense Media calls "fundamental design flaws." If that partnership moves forward without addressing these safety concerns, we're looking at a massive expansion of potentially risky AI access for young users.
Google's response has been predictably defensive, claiming the assessment referenced features not actually available to underage users. But the company also conceded that some Gemini responses "weren't working as intended," prompting it to add further safeguards. That admission alone should raise red flags for parents and educators considering these tools.
The broader AI industry isn't faring much better in Common Sense Media's evaluations. Perplexity joined Gemini in the dreaded "high risk" tier, while ChatGPT earned a "moderate" risk rating and Claude, aimed at users 18 and up, was deemed minimal risk. It's becoming clear that the rush to capture young users in the AI race is happening faster than proper safety infrastructure can be built.
For parents, Common Sense Media offers some practical guidance that actually makes sense. No AI chatbots for kids under 5, period. Ages 6-12 should only use these tools under direct adult supervision. Teens 13-17 can use them independently, but only for schoolwork and creative projects, not as emotional support or companions.
The fundamental issue here isn't that AI is inherently dangerous for kids. It's that the current generation of chatbots wasn't designed with young minds in mind. They're powerful tools built for adult cognition and then retrofitted with safety features, rather than being thoughtfully constructed from the ground up for developing brains.
As someone who's spent years analyzing digital threats to young users, I can't shake the sense of déjà vu. We've seen this pattern before with social media platforms that rushed to capture young audiences without fully understanding the implications. The difference is that AI chatbots have the potential for far more intimate and influential interactions than scrolling through feeds.
Google has the resources and expertise to build genuinely safe AI tools for children. The question is whether they'll prioritize that over quickly expanding their user base in the competitive AI landscape. Until they do, parents and educators would be wise to approach these tools with the same caution they'd use with any powerful technology that hasn't been properly tested for its intended audience.