In back-to-back announcements, Meta has made significant changes aimed at empowering parents and protecting teens on Instagram and its AI services.
Last week, Meta announced that Instagram will automatically place all users under 18 into Teen Accounts that default to content roughly equivalent to a PG-13 movie rating. This week, the company introduced new parental controls for its AI features.
Teen accounts
The goal of Instagram's teen accounts is to reduce exposure to mature material, such as graphic violence, explicit sexual content, strong language and dangerous stunts, while giving parents more control over what their teens see.
Meta, Instagram's parent company, acknowledged that "teens may try to avoid these restrictions," so it's using age-prediction technology to apply protections even when users misreport their age. The AI system looks for behavioral and contextual clues that someone claiming to be 18 might actually be younger. It's not perfect, but it's far more reliable than relying on self-reported birthdays.
What teens will see
Under the new system, anyone under 18 is automatically placed into "13+" mode. Teens can't disable it themselves; parental consent is required to loosen settings. Instagram's filters screen out content outside PG-13 norms, including strong profanity and depictions of drug use or dangerous stunts. Accounts that repeatedly post mature content will be hidden or made harder to find, and search results will block sensitive or graphic terms, even when misspelled.
Stricter option
For families seeking tighter limits, Instagram is adding a Restricted Content Mode that filters even more posts, comments, and AI interactions. Parents can already set daily time limits, as low as 15 minutes, and see whether their teen is chatting with AI characters.
Teens can't follow or be followed by accounts that repeatedly share inappropriate material, and any existing connections will be severed, blocking comments, messages and visibility in feeds.
AI protections
Alongside the new Teen Account protections, Meta is adding parental supervision tools to help families guide how teens use AI digital "characters," which often have a distinct persona. Parents will soon be able to turn off one-on-one chats between their teens and Meta's AI characters altogether. The company's general AI assistant will still be available for questions and homework help, but with age-appropriate safeguards.
For families who don't want to block the feature entirely, parents can restrict specific AI characters, giving them control over which personalities their teens can interact with. AI features such as chatbots and image generators are also being tuned to stay within PG-13 parameters.
Parents will also get insight into the kinds of topics their teens discuss with AI (general themes rather than transcripts) to encourage conversations about how their teens use these technologies.
Tragedies and safeguards
I'm not aware of any tragic outcomes from Meta's AI, but lawsuits have been filed alleging that chatbots from other companies played a role in teen suicides. In Florida, the family of a 14-year-old boy who died by suicide claimed that a Character.AI chatbot encouraged self-harm. In California, the parents of 16-year-old Adam Raine allege that OpenAI's ChatGPT provided him with detailed instructions on suicide and emotional reinforcement, leading to his death in April 2025.
OpenAI is now developing ways to detect whether a ChatGPT user is an adult or under 18, so younger users automatically get an age-appropriate experience. If age isn't clear, the system defaults to teen mode. The company is also rolling out parental controls that allow parents of teens (age 13 and up) to link accounts, decide which features are available, such as disabling memory or chat history, receive alerts when their teen may be in distress, and set "blackout" hours when ChatGPT can't be used.
Character.AI now offers a more restricted version of its platform for teens, powered by a dedicated language model designed to filter out sensitive or suggestive content and block rule-violating prompts before they reach the chatbot. Teens have access to a smaller pool of characters, with those tied to mature themes hidden or removed. The company recently added a "Parental Insights" feature that provides weekly summaries of a teen's activity, such as time spent on the app and which bots they interact with most, but to protect teen privacy and agency, it doesn't include chat transcripts or give parents full control.
Emotional risks
Although AI chatbots can offer comfort or a safe space to practice conversation, researchers are finding that frequent use may carry emotional risks. Studies from the University of Cambridge, Australia's eSafety Commissioner, and peer-reviewed research teams suggest that some young people form strong attachments to AI "friends," which can lead to more loneliness and less real-world interaction.
A recent joint study by OpenAI and the MIT Media Lab on ChatGPT's emotional impact, along with a separate survey of teens, highlighted the risks of affective chatbot use. The longitudinal study, with nearly a thousand participants, found that although emotional engagement with ChatGPT is rare overall, a small subset of heavy users showed concerning trends: higher daily usage correlated with increased loneliness, emotional dependence and problematic use. A separate survey confirmed this vulnerability, showing that teens with fewer social connections were the most likely to turn to bots for companionship.
My thoughts
Disclosure: Larry Magid is CEO of ConnectSafely, a nonprofit internet safety organization that advises and has received financial support from Meta, Character.AI and OpenAI.