The effects of artificial intelligence on adolescents are nuanced and complex, according to a report from the American Psychological Association that calls on developers to prioritize features that protect young people from exploitation, manipulation and the erosion of real-world relationships.
“AI offers new efficiencies and opportunities, yet its deeper integration into daily life requires careful consideration to ensure that AI tools are safe, especially for adolescents,” according to the report, titled “Artificial Intelligence and Adolescent Well-being: An APA Health Advisory.” “We urge all stakeholders to ensure youth safety is considered relatively early in the evolution of AI. It is critical that we do not repeat the same harmful mistakes made with social media.”
The report was written by an expert advisory panel and follows two earlier APA reports, one on social media use in adolescence and one offering healthy video content recommendations.
The AI report notes that adolescence – which it defines as ages 10-25 – is a long developmental period and that age is “not a foolproof marker for maturity or psychological competence.” It is also a time of critical brain development, which argues for special safeguards aimed at younger users.
“Like social media, AI is neither inherently good nor bad,” said APA Chief of Psychology Mitch Prinstein, PhD, who spearheaded the report’s development. “But we have already seen instances where adolescents developed unhealthy and even dangerous ‘relationships’ with chatbots, for example. Some adolescents may not even know they are interacting with AI, which is why it is crucial that developers put guardrails in place now.”
The report makes a number of recommendations to make certain that adolescents can use AI safely. These include:
Ensuring there are healthy boundaries with simulated human relationships. Adolescents are less likely than adults to question the accuracy and intent of information offered by a bot rather than a human.
Creating age-appropriate defaults in privacy settings, interaction limits and content. This will require transparency, human oversight and support, and rigorous testing, according to the report.
Encouraging uses of AI that can promote healthy development. AI can assist in brainstorming, creating, summarizing and synthesizing information – all of which can make it easier for students to understand and retain key concepts, the report notes. But it is critical for students to be aware of AI’s limitations.
Limiting access to and engagement with harmful and inaccurate content. AI developers should build in protections to prevent adolescents’ exposure to harmful content.
Protecting adolescents’ data privacy and likenesses. This includes limiting the use of adolescents’ data for targeted advertising and the sale of their data to third parties.
The report also calls for comprehensive AI literacy education, integrating it into core curricula and developing national and state guidelines to support it.
“Many of these changes can be made immediately, by parents, educators and adolescents themselves,” Prinstein said. “Others will require more substantial changes by developers, policymakers and other technology professionals.”
In addition to the report, further resources are available at APA.org, including guidance for parents on AI and keeping teens safe, as well as AI literacy resources for teens.