Arvind Narayanan, a computer science professor at Princeton University, is best known for calling out the hype surrounding artificial intelligence in his Substack newsletter AI Snake Oil, written with doctoral candidate Sayash Kapoor. The two authors recently published a book based on their popular newsletter about AI's shortcomings.
But don’t get it twisted. They are not against the use of new technology. “Our message is often misunderstood as saying that all AI is harmful or questionable,” Narayanan says. In a conversation with WIRED, he made clear that his rebuke is not directed at the software itself, but rather at the culprits who continue to spread misleading claims about artificial intelligence.
In AI Snake Oil, those guilty of perpetuating the current hype cycle are divided into three main groups: companies selling AI, researchers studying AI, and journalists covering AI.
Superspreaders of hype
Companies claiming to use algorithms to predict the future are framed as potentially the most fraudulent. “When predictive AI systems are deployed, the first groups to be harmed are often minorities and those already living in poverty,” Narayanan and Kapoor write in the book. For example, an algorithm previously used by local authorities in the Netherlands to predict who might commit welfare fraud wrongly targeted women and immigrants who did not speak Dutch.
The authors also cast a skeptical eye on companies mainly focused on existential risks, such as artificial general intelligence, the concept of super-powerful algorithms that outperform humans at work. They do not scoff at the idea of AGI, though. “When I decided to become a computer scientist, being able to contribute to AGI was a big part of my identity and motivation,” Narayanan says. The discrepancy, he says, comes from companies prioritizing long-term risk factors over the impact their AI tools have on people right now, a refrain he often hears from researchers.
The authors argue that much of the hype and misunderstanding can also be blamed on shoddy, non-reproducible research. “We found that in many fields, the issue of data leakage leads to overoptimistic claims about how well AI works,” says Kapoor. Data leakage is essentially when AI is tested using part of the model’s training data, similar to handing out the answers to students before administering an exam.
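To make the idea concrete, here is a minimal sketch (not from the book) of what data leakage can look like in practice: a model scored on examples it already saw during training reports inflated accuracy compared with a proper held-out evaluation. The dataset and model choices below are illustrative assumptions, not anything the authors describe.

```python
# Illustrative sketch of data leakage: evaluating a model on examples it
# already saw during training inflates its measured accuracy.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Leaky evaluation: scoring on the training data itself, like handing
# students the answers before the exam.
leaky_accuracy = accuracy_score(y_train, model.predict(X_train))

# Honest evaluation: scoring only on data the model has never seen.
honest_accuracy = accuracy_score(y_test, model.predict(X_test))

print(f"Accuracy on training data (leaky): {leaky_accuracy:.2f}")    # typically near 1.00
print(f"Accuracy on held-out data (honest): {honest_accuracy:.2f}")  # noticeably lower
```

In real research the leak is usually subtler, such as preprocessing or feature selection performed on the full dataset before splitting, but the effect is the same: the reported numbers overstate how well the system will work on genuinely new data.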
While academics are portrayed in AI Snake Oil as making “textbook errors,” journalists are, according to the Princeton researchers, more maliciously motivated and knowingly in the wrong. “Many of the articles are simply paraphrased press releases that have been laundered as news.” Reporters who sidestep honest reporting in favor of maintaining their relationships with big tech companies and protecting their access to company executives are singled out as especially pernicious.
I think the criticism of access journalism is justified. In retrospect, I could have asked tougher, smarter questions during some of my interviews with stakeholders at the most important companies in AI. But the authors may be oversimplifying the issue here. The fact that a major AI company has given me access doesn’t prevent me from writing skeptical articles about its technology or working on investigative pieces I know will anger it. (Yes, even if it does business with WIRED’s parent company, as OpenAI did.)
Sensational news stories can also mislead the public about AI’s true capabilities. Narayanan and Kapoor point to New York Times columnist Kevin Roose’s 2023 transcript of his interaction with Microsoft’s chatbot, published under the headline “Bing’s A.I. Chat: ‘I Want to Be Alive. 😈,’” as an example of journalists sowing confusion about sentient algorithms. “Roose was one of the people who wrote these articles,” says Kapoor. “But I think when you see headline after headline about chatbots wanting to come to life, the effect on the public psyche can be quite strong.” As a prime example of the persistent urge to project human qualities onto machines, he points to the ELIZA chatbot from the 1960s, whose users quickly anthropomorphized a crude AI tool.