
Meta expands parental controls with teen AI visibility, joining rivals

Meta is rolling out a new Family Center feature that allows parents to see the types of conversations their teenagers are having with its AI tools, marking one of its clearest moves yet toward transparency in teen AI use.

The update does not expose full chat histories. Instead, parents will be able to view topic summaries of their teen's interactions with Meta AI over the past seven days, grouped into categories such as schoolwork, entertainment, travel and wellbeing. The aim is to give parents a clearer sense of how AI is being used in everyday life without directly reading private conversations.

Meta says the feature is designed to make AI use more visible in the home, helping parents understand patterns and spot potential concerns early rather than reacting after the fact. The company frames the tool as a way to support conversation between parents and teens, rather than to enable real-time monitoring.

Alongside the visibility feature, Meta is also introducing guided prompts to help parents start discussions about AI use, including how teens are relying on chatbots for advice, information and creative tasks.

Legal action

The rollout comes as Meta faces a wave of legal action in the United States over how its platforms affect children and teenagers. Multiple state attorneys general have sued the company, alleging that Facebook and Instagram were intentionally designed to be addictive to young users and that Meta failed to adequately protect minors from harm. These cases argue that engagement-driven design features contributed to mental health impacts among teens, and they raise questions about whether the company misled the public about platform safety.

Broader shift

Other tech companies have also been giving parents more visibility into how teens use AI tools, although the approaches differ across platforms.
OpenAI (ChatGPT) has introduced parental controls that allow parents to link accounts, set usage limits such as quiet hours, and restrict certain features like voice or image generation. In higher-risk cases the system can also flag safety concerns, but it does not generally give parents access to full chat histories, focusing instead on controls and alerts.

Character.AI has experimented with weekly activity summaries for parents, showing usage patterns such as time spent on the app and frequently used characters, without exposing the actual content of conversations. This mirrors Meta's approach of offering behavioural insight rather than full transparency.

More broadly, companies such as Google are building age-based AI experiences and family-linked account systems across their products, signalling a wider industry shift toward tailoring AI behaviour for younger users rather than enabling full parental monitoring.

About Karabo Ledwaba

Karabo Ledwaba is a Marketing and Media Editor at Bizcommunity and an award-winning journalist. Before joining the publication she worked at Sowetan as a content producer and reporter. She was also responsible for the leadership page at SMag, Sowetan's lifestyle magazine. Contact her at marketingnews@bizcommunity.com.