Research Comms Podcast: Interview with AI expert, Dr Kanta Dihal

“Artificial intelligence has been present in our narratives since we started telling stories.” Dr Kanta Dihal on our enduring fascination with stories of intelligent machines.


This week’s guest on the Research Comms podcast is Dr. Kanta Dihal, Senior Research Fellow at Cambridge University’s Leverhulme Centre for Future Intelligence, where she runs Global AI Narratives, a project exploring the many ways in which artificial intelligence is perceived by cultures around the world.



The following interview has been edited for brevity and clarity.

Your Global AI Narratives project explores the stories that humans tell about artificial intelligence. What did you discover about these stories?

Artificial intelligence has been present in our narratives, essentially, since we started telling stories. We have always dreamt of creating intelligent beings that are human-like, but not quite human, that are somehow improved or easier to create, that don't require a partner, that don't require lengthy education, that are in some way an improvement on the way in which humans reproduce.

And often technology is a key element of those stories. Often technologies, especially the newest technologies, are what enable the creation of this being. But in some older stories, it might also be alchemy or magic, things that at various points were considered almost indistinguishable from scientific discovery. So the oldest story that we've identified is from the Iliad, in which the god Hephaestus created what are usually translated as ‘golden handmaidens’ to help him in his forge. He was the god of blacksmithing, and so he created them out of metal. And the original Greek uses very specific words to indicate that these creatures had intelligence, that they had wisdom and knowledge, and that enabled them to help him out.


How did the project come about?

The Global AI Narratives project was inspired by our investigation into the question of what shapes people's perceptions of AI in the English-speaking West, and by discovering that it's a very narrow set of narratives that keep being reused and that narrowly steer people's expectations.

So we sent out a survey to the British public, and we discovered that the films that influenced most people were The Terminator, Spielberg's A.I., and I, Robot, which is such a narrow set of ways to imagine life with intelligent machines. What we also saw was that these Hollywood narratives are being exported to other parts of the world alongside the technology itself.

And there these narratives clash with expectations that people might already have of intelligent machines because the existing narratives in these places might already be very widespread, very old, much older than the technology itself. So, if new technology and new narratives clash with what is already there, then you get misconceptions, misunderstanding, misinformation, and we want to find out what kind of narratives are already in place in different parts of the world, how that interacts with new technologies coming in and how that can be done better.

Did you find that the stories told about AI differ between cultures?

Yes, there are marked differences in how people have been imagining artificial intelligence. I’d say the greatest difference can be found in Japan where there's a history of imagining AI throughout the 20th century, which is much more positive.

It's much more aimed at collaborating and befriending artificial intelligences. Their great mascot of artificial intelligence, so to speak, is Doraemon - a cuddly blue cat from the future in cartoon form - who has inspired a very large group of people. And in the generation before that there was Astro Boy, who was again packaged as a cartoon, as friendly, helpful, a protagonist to empathize with and very much the opposite of the Terminator.

And do those narratives have an impact on the way different societies respond to new AI technology?

Yes, absolutely. You can see that attitudes towards robotics, for instance, are very different and much more positive in some parts of the world than others. While many of us have fears of AI taking jobs, those concerns are much more limited in many other parts of the world, for instance in Japan, and to a lesser extent in South Korea. There is more emphasis on care robots being developed in order to support an increasingly ageing population, which is of course a very different situation from, say, India, where technological unemployment is a much more pressing issue. And when it comes to care robots, this history of narratives of artificial intelligence as a buddy or a helper really helps.

Tell me about the other part of your project, ‘The Whiteness of AI’, which looks at how AI beings are almost always depicted as white in popular culture, stock imagery and so on.

Yes, indeed. What we’re doing is asking the questions, ‘How come there is such a proliferation of these kinds of images? How come it's this one that gets replicated?’ And ‘what were the first narratives that created the assumption that this is what a stock image of artificial intelligence looks like?’ And one reason is that people are creating in their own image.

So artificial intelligence is, for the most part, being developed in Silicon Valley, where there is a very homogenous group of people. It has diversified, especially in terms of East and South Asian men coming into the industry there, but it is still extremely homogenous in terms of its gender balance, and there’s a lack of inclusivity of Black and Latinx researchers. That creates a challenge: when someone is trying to create an intelligent being, what do they consider an intelligent being to be? For many of these people, it's often someone exactly like themselves, which is a problem within artificial intelligence research.


Research Comms is presented by Peter Barker, director of Orinoco Communications, a digital communications and content creation agency that specialises in helping to communicate research. Find out how we’ve helped research organisations like yours by taking a look at past projects…


 
