Will the integration of AI into social media be a “disaster” for teens?

May 22, 2023

“The further integration of AI into social media is likely to be a disaster for adolescents,” warn social psychologist Jonathan Haidt and former Google CEO Eric Schmidt in a recent piece they co-wrote for The Atlantic.

I know several researchers who take issue with Haidt’s (negative) portrayal of social media’s impact on youth. They resist his focus on the negatives of social media to the exclusion of the positives, worrying that it fuels unhelpful societal moral panics about kids and technology.

I sympathize with this view. I don’t want us to lose sight of the very real benefits that many teens experience online. I also don’t want us to ignore the challenges that teens may be experiencing in other parts of their lives—mental health disorders rarely have a single cause.

And, as I argue in Technology’s Child, it’s important that we don’t paint social media—or teens—with too broad a brush. Talking about “social media” and “teens” as two uniform categories makes it difficult to do the important work of detecting the specific features and content on social media platforms that are making life more difficult for specific teens.  

Still, I take Haidt’s warnings seriously, including the evidence that he and psychologist Jean Twenge have been compiling in an ongoing, open-source literature review documenting published research related to social media’s contribution to adolescent mood disorders. This review, in addition to my own research, has me convinced that, for many teens, certain social media experiences are contributing to their mental health challenges (the U.S. surgeon general’s recent public advisory supports this view).

How might AI make things worse for teens on social media?

In their Atlantic piece, Haidt and Schmidt share their concerns about generative AI’s likely impact on social media—not just for teens, but for all of us. In a nutshell, they worry that AI is poised to make social media “more addictive, divisive, and manipulative.”

Of the four threats that Haidt and Schmidt identify, one in particular caught my eye because of its specific focus on adolescents. Haidt and Schmidt observe that TikTok’s AI-powered algorithm has been incredibly effective at capturing teens’ attention and keeping them on the platform. So far, though, most of the content that the algorithm serves up has been user-created. What happens when the content itself is generated by AI? AI-generated content has the potential to be even more personalized, and even more engaging, than human-generated content.

And then there’s the question of AI chatbots such as Snapchat’s ChatGPT-powered “My AI” that appeared one day pinned to the top of users’ chat feeds. According to Snapchat, My AI “can answer a burning trivia question, offer advice on the perfect gift for your BFF’s birthday, help plan a hiking trip for a long weekend, or suggest what to make for dinner.” Haidt and Schmidt note (from a Washington Post article) that My AI can also give “guidance on how to mask the smell of pot and alcohol, how to move Snapchat to a device parents wouldn’t know about, and how to plan a ‘romantic’ first sexual encounter with a 31-year-old man.”

Haidt and Schmidt observe that the company has released safeguards to address these (presumably) unintended interactions. But they worry more generally about the commercial incentives that are driving tech companies to incorporate AI into their products at breakneck speed. These incentives place a premium on keeping people engaged on platforms such as Snapchat and TikTok. When it comes to engaging teens specifically, Haidt and Schmidt worry that these incentives could “favor artificial friends that please and indulge users in the moment, never hold them accountable, and indeed never ask anything of them at all. But that is not what friendship is—and it is not what adolescents, who should be learning to navigate the complexities of social relationships with other people, most need.”

So far, it seems that many (most?) people aren’t impressed by My AI; some are even repelled by it. But the next AI chatbot could be a different story.

What’s the solution?

Haidt and Schmidt offer five reform ideas that mostly center on increasing the transparency and accountability of tech companies—things like user authentication and clearly marking AI-generated audio and video. Their fifth reform, however, calls for the age of “internet adulthood” to be raised to 16, meaning a teen must be 16 or older to engage with social media. (The U.S. surgeon general’s recent public advisory on social media and youth mental health also supports age minimums.)

I support this recommendation, but I’m guessing some researchers don’t. Five years ago, I probably would have been one of them. But I’ve spoken to enough teens (and their parents) and read enough of the published research to be convinced that delaying teens’ introduction to social media would be a net positive for most of them.

Some might ask, why not make this a family-level decision? After all, parents know their children better than anyone, and since all teens are different, shouldn’t the decision be made on a teen-by-teen basis within the family context?

My main objection to this argument is that it places too much onus on parents. If your child is the only one in their friend group or class without a TikTok or Snapchat account, it can be very challenging to keep them off those platforms, not to mention a source of considerable parent-child conflict. Why not take some of the burden off parents and place it instead on lawmakers and tech companies? A further argument for a blanket restriction: if no teens under 16 were able to access social media, there would be less chance that any one of them would experience FOMO (fear of missing out) for not being on a particular platform.

There’s one reform idea that I would add to Haidt and Schmidt’s list of five. When teens do come of age for using social media, give them more control over their experiences, including how their feeds are curated. Many teens are extremely savvy about curating their social media feeds, but even these teens can be frustrated or confused by the content that’s served up to them. Social media companies should be compelled to make their algorithms more transparent and easier to modify.1

My takeaways

Though I’m not ready to use the word “disaster” to characterize AI’s impact on teens’ social media experiences, I am concerned. I think Haidt and Schmidt’s reform ideas are a good start, but I suspect more will be needed. I hope we figure it out soon.

Notes:

1. Haidt and Schmidt do discuss increased transparency and user control in their reform ideas, but not explicitly in the context of teens’ social media experiences. They also note that the EU recently passed the Digital Services Act, which includes several transparency-related mandates.