New study shows that 40% of news headlines about AI are negative

The study, which examined 67,000 headlines, looked at how AI-related news coverage can fuel public fear and misunderstanding.

Artificial Intelligence (AI) has the potential to revolutionize our world, offering solutions to complex problems and improving our daily lives.

Yet as AI technology advances, so do the growing concerns and fears surrounding its development and implementation.

Researchers at Rutgers University have written a new paper (which is still in pre-print, meaning it has not yet been peer-reviewed) that sheds light on how news headlines contribute to the spread of AI fear and mistrust among the public.

The power of news headlines

News media play a crucial role in shaping public perceptions, and the way journalists frame AI-related stories can significantly impact how people view the technology.

The study found that sensationalism and fear-mongering are prevalent in AI news headlines, with many stories using language that evokes anxiety and mistrust.

The researchers analyzed 67,000 AI-related news headlines using advanced Natural Language Processing (NLP) and Machine Learning (ML) techniques to identify sentiment and themes.

They collected the headlines between November 1, 2020, and February 16, 2024, using the Google News RSS feed as their primary data source.

The sources included a mix of news outlets, technology and science publications, business and finance media, and more.

Some of the sources they analyzed include Reuters, Forbes, Yahoo Finance, and MarketWatch.
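
The paper does not describe its analysis code in the article, but to give a rough sense of what automated headline sentiment scoring can look like, here is a minimal sketch using the off-the-shelf VADER analyzer from NLTK. The model, threshold, and sample headlines are illustrative assumptions, not the researchers' actual pipeline.

```python
# Minimal sketch: scoring headline sentiment with NLTK's VADER analyzer.
# This is NOT the Rutgers team's method; the model, the -0.05 cutoff, and
# the sample headlines below are illustrative assumptions only.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download
analyzer = SentimentIntensityAnalyzer()

headlines = [
    "AI doomsday warnings a distraction from the danger it already poses",
    "New AI tool helps doctors detect disease earlier",
]

negative_count = 0
for headline in headlines:
    # The compound score ranges from -1 (most negative) to +1 (most positive).
    score = analyzer.polarity_scores(headline)["compound"]
    label = "negative" if score <= -0.05 else "non-negative"
    negative_count += label == "negative"
    print(f"{score:+.2f}  {label:12s}  {headline}")

print(f"{negative_count / len(headlines):.0%} of sampled headlines scored negative")
```

A study like this one would run a similar (though far more sophisticated) classification over the full corpus and then report the share of headlines falling into each sentiment category.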

Results: 40% of AI headlines are negative

The researchers’ analysis revealed that 39.7% of the headlines they collected “convey negative sentiments,” usually related to apprehension and concerns about AI’s impact on society.

About 11% of the headlines were specifically fear-inducing.

Examples include “AI doomsday warnings a distraction from the danger it already poses,” “The Godfather of A.I. now warns of its dangers,” “The Dystopia is Here: AI is Taking over Data Science Jobs in 2021,” “Americans fear artificial intelligence will steal their jobs,” and “Generative AI-nxiety.”

Terms and phrases commonly used in these headlines were “risk,” “bias,” “concern,” “AI ethics,” “AI safety,” “AI regulation,” “AI arms race,” “artificial intelligence will replace,” and “dangers of artificial intelligence.”

The impact of fearful AI reporting

The prevalence of fear-mongering in AI news headlines has far-reaching consequences.

It contributes to public confusion and mistrust of AI technology, potentially hindering research and development in the field.

The study emphasizes the importance of distinguishing between AI as a science and its applications, as well as responsible reporting on AI advancements and potential risks.

By providing accurate and balanced information, news media can help the public develop a more nuanced understanding of AI technology and its implications.

As the authors write, “Genuine reporting on the factual risks of AI must be encouraged, and sensationalism, along with AI-fear mongering, needs to be avoided.”

Study Details: