Artificial Intelligence Isn't All Scary
LTI Researcher Uses AI To Study Storytelling, Help Humans
By Susie Cribbs
Long before computers or electricity, humans used stories to communicate. But with the rise of artificial intelligence and the proliferation of tools like ChatGPT that harness AI to generate text, it's easy to worry about the future of storytelling.
Carnegie Mellon University's Maarten Sap isn't worried. Instead, he's flipping the concern on its head: using computers and AI to learn more about how humans tell stories, and applying what he finds to help people communicate better with those around them.
Sap, an assistant professor in the School of Computer Science's Language Technologies Institute, has always been interested in the social science applications of natural language processing and what AI can tell us about how humans use language. But historically, much of this research has focused on how traits like personality, age or gender influence how people write, or ways that deception can be detected in text. Sap wanted to go beyond that and instead try to understand creativity — not deception — and the role it plays in how humans write imagined stories versus autobiographical ones.
Because little data existed about imagined and autobiographical stories, Sap began his research by developing a library of them. He tasked crowdworkers with writing short stories about something that happened to them, then he gave those stories to a different set of crowdworkers and asked them to make up stories based on the originals. Finally, he had the first set of crowdworkers retell their autobiographical stories months later.
Once Sap had collected a corpus, he and his colleagues needed to create a metric to evaluate and compare story types. Generally, stories weave together a series of events, based on either personal experience or a shared socio-cultural knowledge of how things happen. For example, if you tell a story about a wedding you attended, it's filled with details of the wedding as you experienced it. But telling an imagined story about a wedding relies on a shared cultural knowledge of what a wedding is and what it entails. The researchers speculated that analyzing the relationship between events in sentences could shed light on the differences between story types.
With this in mind, the research team developed a metric they called sequentiality, a computational measure of narrative event flow that uses a large language model (in this case, GPT-3) to compare the influence of both previous sentences and story topic on individual sentences.
"Sequentiality relies on the connection between an event and what just happened before it versus an event and the main topic of the story," Sap said. "GPT-3 has been shown to understand what a typical unfolding of events looks like, so we used that to our advantage to tease apart questions like, 'Is an average sentence from an autobiographical story going to be more related to its context or to its general topic compared to a random sentence from an imagined story?'"
The team applied the metric to thousands of stories, with interesting findings. The first was that sequentiality for autobiographical stories was lower than for their imagined counterparts. That is, an autobiographical story has a less linear relationship between sentence events. It meanders.
"This corroborates some intuitions from psychology or cognitive science related to the fact that if you have a memory that's salient in your head, you're going to remember details at random that you haven't put in a nice story yet," Sap said. "If you are imagining a story, you're less likely to have random details pop in because you didn't experience them."
They also found that a retold autobiographical story becomes more like an imagined one as time goes by.
"There's this effect of the more you tell it, the more it gets honed into a story as opposed to a brain dump of a memory," Sap said.
Sap's work holds great promise as a tool for exploring the memory, reasoning and imagination processes needed to generate narratives, as well as for harnessing large-scale neural models to explore narrative theories. But Sap is most excited to look at how this work can help humans understand each other.
"There's a fundamental curiosity I have about how we as humans work. This research was a way to understand how different mechanisms in our brain show up in language and how natural language processing systems can pick up on those traces." Sap said. "But there's a lot of prosocial applications to this line of research. Just understanding the structure of a narrative and how people use events to weave them together has applications to studying how people write stories about their own trauma. Are they overcoming their trauma through these narratives, and can we help them come up with analytic tools to understand how you write about those types of memories?"
Sap noted there are also interesting applications in how we include shared knowledge in stories and what that means for people who have cognitive disabilities or conditions like autism that may hinder their ability to connect to that knowledge and infer intent.
"Understanding if there's a missing gap in a story can help us understand whether we should fill in that gap if we're targeting people who might have a harder time doing this inference. Maybe if we write that down, it can help them understand the story," Sap said. "I really want to develop technology that can help people and do good in the world."
Sap performed his research with the University of Washington's Anna Jafarpour, Yejin Choi and Noah Smith; Microsoft's Eric Horvitz; and James Pennebaker of the University of Texas at Austin. Learn more about the project in "Quantifying the Narrative Flow of Imagined Versus Autobiographical Stories" on the Proceedings of the National Academy of Sciences website.