Artificial intelligence is increasingly integrated into everyday life, from medical imaging and predictive analytics to education platforms and automated decision-making systems. Yet public opinion about artificial intelligence remains deeply ambivalent. Some people view AI as a transformative tool that can enhance healthcare, productivity, and quality of life, while others perceive it as a threat to jobs, privacy, safety, and even human autonomy. This sharp divide raises an important question: why do people respond so differently to the same technology?
A growing body of research suggests that the answer may lie less in artificial intelligence itself and more in how it is communicated to the public. A recent peer-reviewed study by Risa Palm and her colleagues, published in the journal Science Communication, provides empirical evidence that the framing of AI in news-style narratives powerfully shapes public beliefs, attitudes, and policy preferences. The research was conducted by scholars from Georgia State University and the University of Central Florida, drawing on a large, nationally representative survey in the United States.
Rather than asking whether artificial intelligence is inherently good or bad, the study poses a more nuanced yet crucial question: how does the way AI is described influence what people think about its risks, benefits, and governance?
The study behind the headlines
The article, titled "Framing affects support for the development of artificial intelligence in the United States," reports on a large-scale survey experiment involving 3,165 respondents across the United States. Participants were randomly assigned to one of three conditions. One group read only a brief, neutral definition of artificial intelligence. A second group read a mock news article emphasizing the benefits of AI. A third group read a parallel article highlighting the risks of further AI development.
This design allowed the authors to isolate causal effects: any differences in attitudes between groups could be attributed to the way the information was framed, rather than to prior knowledge or demographic differences. The framed articles were deliberately written to resemble real news coverage rather than simplified slogans or one-sentence prompts. This choice distinguishes the study from earlier research that found only modest framing effects.
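The logic of this design can be illustrated with a small simulation. The sketch below is not the authors' analysis and uses entirely hypothetical numbers (a made-up 1 to 7 support scale and invented effect sizes); it simply shows why, under random assignment, a straightforward difference in group means estimates the effect of the frame itself.

```python
import numpy as np

# Illustrative simulation of a between-subjects framing experiment.
# The support scale, group means, and effect sizes are hypothetical;
# only the sample size echoes the study.

rng = np.random.default_rng(42)

n = 3165
conditions = rng.choice(["control", "positive", "negative"], size=n)

# Hypothetical mean support for AI development (1-7 scale) in each condition.
true_means = {"control": 4.0, "positive": 4.6, "negative": 3.3}
support = np.array([rng.normal(true_means[c], 1.2) for c in conditions])
support = support.clip(1, 7)

def group_stats(name):
    scores = support[conditions == name]
    return scores.mean(), scores.std(ddof=1) / np.sqrt(len(scores))

for name in ("control", "positive", "negative"):
    mean, se = group_stats(name)
    print(f"{name:>8}: mean support = {mean:.2f} (SE {se:.2f})")

# Because assignment is random, the framed groups and the control group
# differ only in the article they read, so the gap in mean support is an
# unbiased estimate of the framing effect.
ctrl_mean, _ = group_stats("control")
for name in ("positive", "negative"):
    mean, _ = group_stats(name)
    print(f"Estimated effect of {name} frame: {mean - ctrl_mean:+.2f} points")
```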
What does framing really mean?
In media studies and political communication, framing refers to the selection and emphasis of certain aspects of reality over others. When a news article highlights job creation, medical breakthroughs, and efficiency gains, it encourages readers to evaluate artificial intelligence through a benefits-oriented lens. When an article stresses job displacement, surveillance, algorithmic bias, or loss of human control, it activates a risk-oriented frame.
Framing does not necessarily involve misinformation. Both positive and negative frames can be factually accurate. However, by making certain considerations more salient than others, framing influences how people weigh evidence when forming opinions. The study applies this well-established theory to artificial intelligence, an emerging technology where many people lack direct experience or well-formed attitudes.
The authors argue that artificial intelligence is particularly susceptible to framing effects because it is often discussed in abstract terms and is associated with both utopian and dystopian narratives in popular culture.
How positive frames shape optimism about AI
Participants who read the positively framed article were significantly more likely to believe that future advancements in artificial intelligence would have beneficial impacts on jobs, health care, public safety, education, and overall quality of life. The effect was not marginal. Exposure to a single news-style article was enough to shift beliefs across multiple domains.
The positive frame emphasised applications such as improved diagnostic accuracy in medicine, personalised education, increased workplace efficiency, and enhanced public safety through predictive systems. Importantly, the article also suggested that AI development occurs under conditions of testing and risk minimisation, reinforcing a sense of responsible innovation.
As a result, readers exposed to this frame were more likely to conclude that the benefits of artificial intelligence outweigh its risks and to express support for its continued development. These findings suggest that optimistic narratives do more than inspire enthusiasm. They actively shape perceptions of social impact and technological legitimacy.
How negative frames amplify fear and caution
The effects of the negative frame were even more striking. Participants who read about the risks of artificial intelligence were substantially more likely to believe that AI would have harmful consequences for employment, education, health care, and safety. They were also more likely to oppose further development of AI technologies.
The negative article highlighted concerns such as automation-driven job loss, errors in medical decision-making, erosion of human skills, surveillance, and the potential for autonomous systems to operate without sufficient human oversight. These claims reflect real and widely discussed issues in AI ethics and governance.
Trust in science is at stake
One of the most unexpected results of the study concerns trust in science itself. Exposure to the negative AI frame significantly reduced respondents’ general trust in science and technology as forces for solving societal problems. At the same time, it increased endorsement of the precautionary principle, the idea that innovation should be restrained when there is uncertainty about potential harm.
In contrast, exposure to positive framing did not significantly increase trust in science or reduce precautionary attitudes. This asymmetry suggests that negative messages may have a stronger psychological impact than positive ones, particularly when technologies are associated with existential or moral risks.
Why artificial intelligence is uniquely sensitive to framing
Artificial intelligence differs from many previous technologies because it is often described as transformational rather than incremental. It is portrayed as capable of reshaping work, learning, governance, and human interaction itself. At the same time, many AI systems operate invisibly, embedded in algorithms and data infrastructures that are difficult for non-experts to observe or evaluate.
This combination of high stakes and low transparency makes public opinion especially vulnerable to narrative cues. When people lack direct experience, they tend to rely more heavily on media representations to form their judgments. The study demonstrates that these representations do not simply inform the public. They actively construct the meaning of artificial intelligence in social and political discourse.
Implications for journalism and science communication
The findings carry significant implications for journalists, editors, and science communicators. News coverage of artificial intelligence is not a neutral conduit of information. Choices about language, emphasis, and narrative structure can significantly influence public attitudes.
This does not mean that journalists should avoid discussing risks or exaggerate benefits. Rather, it highlights the ethical responsibility involved in framing emerging technologies. Overly alarmist narratives may undermine trust and encourage reactionary policy responses. Uncritically optimistic stories may obscure legitimate concerns about equity, accountability, and governance.
Balanced reporting that contextualises both benefits and risks may help foster informed public deliberation. However, the study also suggests that even balanced messages can be interpreted through dominant frames if audiences are repeatedly exposed to one-sided narratives.
Understanding these dynamics is essential for designing governance models that are both responsive and proportionate. Artificial intelligence governance relies on public trust, which is shaped through effective communication. Policymakers should be aware that public reactions may reflect communication dynamics as much as substantive evaluations of evidence.
Reference
Palm, R., Kingsland, J. T., & Bolsen, T. (2025). Framing affects support for the development of artificial intelligence in the United States. Science Communication, 47(5), 730–752. https://doi.org/10.1177/10755470251317172
Coauthors
Dr. Justin Kingsland is the Assistant Director of Research, Evaluation, Assessment, and Data at the Center for Higher Education Innovation at the University of Central Florida. His work focuses on developing and implementing innovative strategies to accelerate student success. His research interests include political behavior, public opinion, and the intersection of political science and environmental communication.
Dr. Toby Bolsen is the Zoukis Professor of Politics & Justice in the Political Science department at Georgia State University. His research focuses on the study of political communication, preference formation, political behavior, and U.S. energy and climate policy.
