AI hope versus hype
Arvind Narayanan, professor of computer science and one of TIME magazine’s 100 most influential people in AI, sheds light on the capabilities and pitfalls of artificial intelligence.
By Alaina O’Regan
If you’re starting to deploy artificial intelligence in your everyday life, how can you be sure that the tools you’re using are trustworthy? As the reach of AI extends deeper into our daily routines, Princeton’s in-house AI expert Arvind Narayanan aims to help the public disentangle fact from fiction.
“There is a lot of public confusion about what artificial intelligence is, and what it is and isn’t capable of,” said Narayanan, professor of computer science and director of Princeton’s Center for Information Technology Policy.
In their upcoming book, “AI Snake Oil,” Narayanan and graduate student Sayash Kapoor, both named among TIME magazine’s 100 most influential people in AI in 2023, pull back the technical curtain on AI. While certain AI technologies are progressing at a rapid rate (specifically, “generative AI” tools that conjure up text, images and videos), many other AI tools fall far short of the hype, Narayanan and Kapoor argue.
For example, AI cannot deliver on the claim that it can predict the future. A set of tools called “predictive AI” is already being used to make consequential decisions about who might commit crimes and which job candidate is best suited for a position. “The fact is that it is hard to predict the future using AI, and its ability to do that is not improving,” Narayanan said. “There are really fundamental limitations to using these kinds of predictive decision-making tools that can really foreclose people’s life chances. This is the kind of AI we largely refer to as ‘AI snake oil.’” In this interview, Narayanan answers some questions about the possibilities and pitfalls of AI today.
How is predictive AI different from other kinds of AI?
A key difference is the task you’re asking the tool to do. Generating text or images is something that a person can do, but that generative AI can do faster.
But predicting the future is different: the laws of nature and the social aspects of people’s behavior make it hard to predict whether someone is going to be a good employee, or whether someone is going to be arrested for a crime. These are not things that become easier to predict with AI, even when you have lots of data. In some cases, predictive AI is no better than a random number generator.
One of the big dangers with predictive AI is that the tools just reflect what’s in the data, which is often historical data that reflects people’s prejudices, stereotypes and biases.
What should people be aware of when it comes to AI in the media?
When it comes to generative AI, although the technology is progressing very quickly, I think the hype around it is progressing even faster.
A lot of people who hype these technologies are talking about how ChatGPT is going to allow you to be 10 times more productive, but is that true? Should you be dropping everything to try to use this tool?
Why is ChatGPT such a big deal?
When you ask ChatGPT a question, it’s doing something really crude. Someone on the internet described it as freestyle rapping, because at any moment it’s just deciding which word or character should come next. Just by doing that over and over again, it’s able to produce some really interesting output.
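To make the “freestyle rapping” analogy concrete, here is a minimal, purely illustrative Python sketch. It is not how ChatGPT is actually implemented: real systems run a large neural network over subword tokens and condition on the full conversation, whereas this toy model only counts which character tends to follow which in a short training string. The generate-one-piece-at-a-time loop, though, is the same basic idea.

```python
import random
from collections import defaultdict

# Toy training data. A real system learns from a vast corpus;
# this hypothetical example uses one short string.
text = "the cat sat on the mat and the dog sat on the log"

# Count, for each character, which characters tend to follow it.
transitions = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(text, text[1:]):
    transitions[prev][nxt] += 1

def sample_next(prev_char):
    """Sample a next character in proportion to how often it followed prev_char."""
    counts = transitions[prev_char]
    chars = list(counts)
    weights = list(counts.values())
    return random.choices(chars, weights=weights)[0]

def generate(seed="t", length=40):
    """Generate text one character at a time, feeding each output back in."""
    out = seed
    for _ in range(length):
        out += sample_next(out[-1])
    return out

print(generate())
```

Each run stitches together different nonsense from patterns in the training string; scale the same loop up to a neural network trained on a large slice of the internet, and the output starts to look coherent.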
These technologies have now finally gotten to a point where they’re useful to everyday people. So it looks like it’s everywhere, but this has actually been quite gradual.
Should I be worried that AI will take my job?
Generative AI in particular is having a lot of impact on the labor market, but I think that has to be viewed through the lens of capitalism. It’s a question of economic factors and power asymmetries between workers and people who control the means of production.
There are a lot of people who want to know what kinds of jobs can be automated by AI. But the relevant question is, who has power in the labor market? And how might the people who have a lot of power be able to abuse it, while using AI as an excuse?
Some of these big tech companies have a lot of unaccountable power. There is a lot of policymaking activity around AI, and to prevent companies from exercising outsized power, I think part of what is needed is for the public to be well educated.
What impact do you hope to see from your book?
I hope it can serve as a resource for policymakers and journalists as they encounter claims about these technologies.
I also hope it will have some small effect: that companies will be less inclined to overhype their products, and that people will feel comfortable calling out hype when they see it.
Narayanan and Kapoor write about AI on Substack at aisnakeoil.com.