After an early dinner last Wednesday, I set Oliver up with his tablet and headphones (in the spirit of good-enough digital parenting) and logged on to the live stream of a panel discussion taking place at the University of Washington (UW), titled “Demystifying ChatGPT for Academics.”
The panelists included faculty from a variety of departments across the UW campus, including my colleague Jevin West, associate professor in the UW Information School (which is also my home base) and co-founder of the Center for an Informed Public.
Noah Smith, computer science professor and senior director at the Allen Institute for Artificial Intelligence, started things off with some remarks about what ChatGPT is and isn’t, and why the “isn’t” part is sometimes difficult to discern. He described ChatGPT as a “fluency machine” – it creates fluent language, and does so at lightning speed, which is what we’ve all been marveling at since last November.
But Smith also reminded us that ChatGPT doesn’t understand any of what it produces. It also has no sense of attribution to sources – where did it get these convincing-sounding claims? If it does give you a source, that source could be completely fabricated.
The floor was then opened to the panelists to talk about how they’re using ChatGPT in their university teaching and what opportunities and challenges they see when it comes to teaching and learning with AI. Here are three opportunities and three concerns that stood out to me:
The promising
ChatGPT can help students get unstuck. When it comes to writing, one of the hardest things for students is simply getting started. Used in the brainstorming process to explore and develop ideas, ChatGPT can help students get going. This discussion reminded me of Sal Khan’s introductory video to Khanmigo, the new AI-powered tutor from Khan Academy. Khan shows one student interacting with Khanmigo to generate ideas for a story and another engaging Khanmigo as a mock debate partner to develop their arguments and practice responding to the other side’s arguments.
ChatGPT can be used as a tool to develop critical thinking skills. Brock Craft, associate teaching professor in Human-Centered Design and Engineering, shared a personal story of his 11-year-old daughter using ChatGPT to write an ad for breakfast cereal in the style of Shakespeare. She went on to ask it to create ads in different voices, and this exercise opened up opportunities for father and daughter to discuss how ChatGPT works, including what it can and can’t do.
In his teaching, Tivon Rice, assistant professor in Digital Arts and Experimental Media, uses a series of three guiding questions to support students’ creative engagement with ChatGPT.
- What do you know about this model? This question encourages students to consider the model’s dataset and how its data affects what it produces.
- What did you ask the model? This question treats the crafting of novel prompts as a form of creative thinking.
- How do you think about what the model created? Rice is interested in helping his students develop a position of co-authorship with ChatGPT.
The concerning
The flip side of using ChatGPT to get unstuck is the removal of intellectual struggle. Penelope Adams Moon, director of UW’s Center for Teaching and Learning, worried that AI can remove the need for students to engage in the intellectual struggle that is a key part of the learning process. Of course, an extreme form of bowing out of intellectual struggle is cheating. But the panelists seemed more interested in fuzzier cases: what is an appropriate amount of support for students to develop their ideas without having those ideas generated for them?
One panelist observed that the confidence with which ChatGPT makes mistakes is alarming, and that it makes those mistakes difficult to detect if you’re not looking out for them. I think Jevin West had the best line of the night when he added: “They [large language models] are bullshitters at scale.” Another panelist observed that we tend to anthropomorphize technology, and that authoritative-sounding responses play into that tendency.
Several panelists raised ethical concerns about the data used to train large language models. Hannaneh Hajishirzi, computer science associate professor and senior director at the Allen Institute for Artificial Intelligence, warned that if you share private data with ChatGPT, that data could become part of the model going forward, raising privacy concerns. Luke Zettlemoyer, professor in the Paul G. Allen School of Computer Science and Engineering, observed that ChatGPT has been trained on data pulled from the internet, so all the biases that people express on sites like Reddit are baked into the model. Zettlemoyer also noted that some of the safety controls that developers have put into place are ethically fraught. For instance, one model won’t talk at all about LGBTQ+ issues, which disenfranchises an entire group of people.
AI and the two-step framework
When I consider both the promising and the concerning together, I see clear connections to the two-step framework I present in Technology’s Child. Summarized in a single sentence, this framework describes how technology experiences that are self-directed and community supported are best for children’s healthy development (see my three-part series of posts here, here, and here for more details and to see how I apply the framework to different stages of development).
When it comes to self-directed experiences with AI, the panelists were excited about AI’s potential to promote students’ critical and creative thinking (good for self-direction), but they worried about the consequences if students outsource too much of their thinking to these large language models (bad for self-direction). The power and importance of community support were evident in Brock Craft’s anecdote about using ChatGPT with his daughter, as well as in the other panelists’ descriptions of how they’re helping their students engage with ChatGPT in a critical and careful way.
Where do I come down in the AI in education debate? I recognize the opportunities discussed by my colleagues, but I think they’re far from guaranteed (and I suspect they would agree). I worry that it will be too easy for students to engage with ChatGPT and other large language models in a less than self-directed way. I also worry that not enough students will have access to the kind of community support that my colleagues are providing their students at UW. I don’t have a clear idea yet of how best to address these concerns, but I’m glad we’re having conversations like this at UW.