Parenting and AI: Where Do We Go from Here?

Mar 12, 2024

Parenting with ChatGPT

I’ve been Googling parenting advice and bookmarking mom hacks on Pinterest since I became a parent seven years ago. I briefly ventured into parenting forums but found the message boards too full of in-the-know acronyms (“dh” for dear husband, “AIBU” for am I being unreasonable) and dubious advice. Over the last year, I’ve added ChatGPT to my list of tech-based sources of parenting support. Here are some entries from my recent prompt history:

  • What should I pack for a two-day camping trip with a six-year-old?
  • Suggest some budget-friendly, eco-friendly party favors for a seven-year-old’s birthday party.
  • Give me some ideas for indoor rainy-day activities suitable for a seven-year-old boy that don’t include a tablet, TV, computer or phone.
  • I need some ideas for feeding an extremely picky child who won’t eat vegetables or fruit.

Pre-GPT, I would have Googled these queries. The advantage now is I don’t have to scroll past sponsored content or sift through pages that may or may not have the content that I’m looking for. ChatGPT is doing the sifting and synthesizing work for me. 

When I first signed up for an OpenAI account, I was underwhelmed by the generic responses it gave me. Suggesting board games, puzzles and crafts for rainy-day activities didn’t much impress or inspire me, and I had mixed feelings about the parting advice that ChatGPT threw in at the end of its response, which I read as a subtle admonishment of my parenting skills: “Remember, the key to a successful indoor day is to rotate activities to keep things fresh and interesting. And, of course, participation is important — jump in and play along whenever possible!” 

As “prompt engineering” becomes part of our cultural lexicon (and an official job position at many companies), my own prompting practices are improving. I’m learning to follow up on my initial query with targeted questions that offer more context (we’ll be camping near a beach in Washington State over Labor Day weekend) and personal details (my son loves Pokémon and Lego; our house isn’t big enough to set up an indoor obstacle course) to help ChatGPT customize its responses for my child and situation. 

When it comes to parenting in an AI age, I see ChatGPT as a place to start, one of many sources — whether tech-based or human — that I consult for parenting advice. It’s a tool to get me started, expand my thinking, iterate and fill out sketchy ideas. Importantly, it’s a tool, not a surrogate. The best stance to bring to ChatGPT is an active one. Instead of asking: “What can you do for me?” try shifting to: “What can we think of together?”

How will AI change our children?

So much for parenting with ChatGPT. What about children’s experiences with AI? Your child might already be interacting with generative AI, for instance, if they attend a school that’s been testing out Khanmigo, the ChatGPT-powered tutor developed by Khan Academy. Or they could be communicating with AI chatbots on social media and messaging platforms. Perhaps they’re holding group chats with their friends and Snapchat’s My AI chatbot, or chatting one-on-one with celebrities like Paris Hilton, Mr. Beast, and Snoop Dogg using Meta’s AI-powered celebrity bots. Depending on your current stance towards AI, these examples might sound exciting, alarming, benign or a confusing mix of possibilities. 

Like any new technology, there are both opportunities and risks associated with children’s use of AI-powered tools. I recently attended a webinar hosted by Khan Academy that showcased how Khanmigo could be used to provide timely, customized and high-quality feedback on students’ writing, helping them refine their thesis statements, expand their arguments and improve their grammar. One English teacher described how her high school students developed their ideas about an assigned reading by engaging in an open-ended conversation with Khanmigo. 

In another example, the MIT researchers behind App Inventor have incorporated generative AI into their platform to make it easier for children to create their own apps without first having to learn to code. With Aptly, another of their inventions, children can create and edit apps through spoken language only.

At a panel discussion about ChatGPT hosted by my home institution, the University of Washington, one faculty member described how he and his 11-year-old daughter had experimented with using ChatGPT to write an ad for her breakfast cereal in the style of Shakespeare, then followed up by asking for the same ad written in different voices. He reflected that this activity, in addition to being fun, helped his daughter to explore what ChatGPT was capable of and how it was generating its responses. 

These examples illustrate the potential for using generative AI to support children’s critical thinking and creative expression. 

But there are also risks. The ability to generate all manner of images, text and even voices introduces new ways to victimize others, for instance, by impersonating someone to spread rumors or explicit images about them. The tendency of large language models to hallucinate (by providing false information in a convincingly authoritative way) introduces new possibilities for exposure to misinformation. How AI platforms use the data that children feed into them raises concerns about privacy and security. Children’s personal data could be used to deliver ads that are even more targeted and personalized than they are now. And, of course, there’s the ongoing concern in schools about students using ChatGPT to do their homework and write their essays for them (though early evidence suggests students are not cheating more than in pre-GPT times). 

Whether the opportunities overshadow the risks or vice versa will depend on many factors, including government regulation, business incentives, educational interventions and parenting practices. It’s also likely that the opportunity/risk balance will differ across individual children depending on the kinds of resources and supports they have access to.

Supporting your child’s (and your) AI interactions

In my book, Technology’s Child: Digital Media’s Role in the Ages and Stages of Growing Up, I present a two-step framework for determining when digital media experiences support children’s development and when they might do harm. I offer this framework as a North Star to help parents make sense of an often-confusing technological landscape and make concrete decisions that are right for their children. 

The framework consists of two questions that can be applied to generative AI: 

Is this experience self-directed? These technology experiences place children in the driver’s seat of their digital interactions. Children, and not technology, are in control. Ask yourself: “Who’s in the driver’s seat of this AI interaction, my child or the AI?”

Is it community-supported? These are technology experiences that are supported by others, either during or surrounding a digital experience. Ask yourself: “What kind of support is my child receiving, or could they receive, to increase their sense of agency in the context of their AI interactions?”

Keeping these two questions front-of-mind can help parents make decisions about if and how their children should be interacting with ChatGPT and other AI-powered tools. The same questions apply equally well to parenting with AI: Are you using AI to outsource or enhance your parenting decisions? What kinds of support would help you make the most of your AI interactions? 

AI-related resources for parents and educators