A new study by Federico Mangiò at the University of Bergamo (Italy), published in Technological Forecasting and Social Change, explores how people across the world are making sense of artificial intelligence (AI) in their everyday lives.
Titled “Discursively negotiating AI: A social representation theory approach to LLM-based chatbots,” the paper examines how non-experts interpret, discuss, and construct meaning around AI systems such as ChatGPT. Mangiò and his colleagues argue that understanding AI’s cultural and discursive framing is just as important as understanding its technical capabilities.
Their research suggests that while AI dominates public imagination, few truly understand how it works. Yet this lack of comprehension does not prevent people from talking about it, using it, or even forming strong opinions about it. Instead, social meaning, collective imagination, and cultural metaphors have become the key drivers of AI’s widespread adoption and fascination.
Why everyone is talking about AI
AI has become one of the most talked-about technologies of the century. From boardrooms to coffee shops, from academic conferences to social media threads, AI hype permeates every layer of conversation. Many people see it as a transformative force capable of revolutionising work, creativity, and education. Others perceive it as a threat to jobs, truth, or even human identity.
However, Mangiò and his team note that to evaluate any technological breakthrough, society must first grasp its underlying dynamics. AI is complex, and its mainstream adoption is relatively recent. Only a small fraction of experts can confidently predict how it might reshape industries or social relations.
Interestingly, research indicates that people with low AI literacy often show the highest enthusiasm. For many, AI represents something “magical” and inexplicable, a phenomenon that fascinates precisely because it is not fully understood. As Mangiò puts it, “If we cannot grasp it, we enchant it.”
This observation raises an important question: Why is AI constantly discussed even by those who lack technical expertise?
Beyond technology adoption: Making sense, not just using
Traditional models of technology adoption, such as the Technology Acceptance Model, emphasise rational factors like perceived usefulness, ease of use, and social norms. While these remain relevant, they cannot fully explain the intensity of public engagement with AI.
Mangiò’s research introduces a different perspective. He and his co-authors propose that AI’s pervasiveness in everyday conversation is not merely about functionality or access. Instead, it stems from what they describe as equivocality, a state in which a technology’s meaning and value remain open to interpretation.
An equivocal technology is one whose social significance is not fixed. Its potential is shaped performatively through speculation, storytelling, and cultural dialogue. Early in their life cycle, such technologies generate both hype and confusion. Electricity, nuclear power, nanotechnology, and the internet all experienced similar moments of uncertainty, where society struggled to define what they truly were.
Making the unfamiliar familiar
When faced with uncertainty, humans rely on a process known as sensemaking. Through conversation, we transform abstract and unfamiliar ideas into something relatable. The study draws upon the social representation theory of social psychologist Serge Moscovici, who suggested that societies build shared systems of values, ideas, and images to domesticate novelty.
According to this theory, two mechanisms help turn the unfamiliar into the familiar:
- Anchoring, which links new phenomena to existing cultural frameworks (for instance, describing data as “the new oil”).
- Objectification, which converts abstract concepts into tangible symbols or images (such as using the cloned “Dolly the sheep” as a visual emblem of genetic engineering).
How people talk AI into existence
To explore how this happens, Mangiò and his co-authors conducted a computational text analysis of thousands of online discussions posted by early adopters of ChatGPT. These conversations reveal not just what people think about AI but how they actively construct its meaning through everyday discourse.
Their findings reveal four dominant ways of representing ChatGPT, each rooted in cultural narratives and metaphors that shape how society perceives AI.
ChatGPT as a creative partner
The first and perhaps most common representation frames ChatGPT as a creative collaborator. Users describe it as a digital partner that inspires ideas, drafts content, or even substitutes for human creativity.
This interpretation draws from science fiction and post-humanist imagery. References to fictional AI characters such as HAL 9000 from 2001: A Space Odyssey illustrate how people anthropomorphise ChatGPT, seeing it as a friend, coach, or muse.
ChatGPT as a multistable artefact
A second interpretation portrays ChatGPT as a multistable artefact: a technology that is versatile yet fundamentally limited. In this framing, users recognise its usefulness for tasks like writing, coding, and teaching, while simultaneously pointing out its flaws and biases.
Anchoring occurs through analogies with earlier computing technologies. Some users liken it to calculators or early internet search engines, helpful but far from perfect. Objectification occurs through practices such as “techsplaining”, where users reframe ChatGPT as a statistical model rather than a sentient entity.
This pragmatic perspective tempers hype with realism. It acknowledges AI’s strengths while stressing that outcomes depend heavily on human skill and ethical use.
ChatGPT as a connective hackathon
The third representation frames ChatGPT as a playground for experimentation. Online communities treat the chatbot as a system to be probed, challenged, and subverted through adversarial prompts and jailbreaks such as “DAN”.
In this narrative, AI is less a partner and more an opponent. Users test its boundaries to reveal vulnerabilities, share exploits, and celebrate technical creativity. Mangiò’s study calls this framing a “connective hackathon,” where the collective act of hacking becomes a form of social performance and community building.
ChatGPT as a technology of power
Finally, some users perceive ChatGPT as a technology of power. In this interpretation, AI is not merely a neutral tool but an agent that enforces norms, controls discourse, or reinforces existing hierarchies.
Anchored in narratives of conflict and conspiracy, this perspective sees AI as an instrument of surveillance or manipulation wielded by governments and corporations. Through technomorphising (treating the system as a quasi-human entity) and conspiracy theorising, users construct AI as both a symbol of control and a call for resistance.
This darker framing resonates with broader cultural anxieties about data privacy, inequality, and algorithmic bias. It reflects not just fears about technology, but concerns about the social systems that create and deploy it.
Why these meanings matter
Together, these four representations demonstrate how people transform an equivocal technology into something thinkable, debatable, and socially meaningful. Through constant discussion, metaphor, and reinterpretation, society talks AI into existence.
Importantly, Mangiò’s study finds that these conversations are often more nuanced than media portrayals. While news coverage tends to oscillate between utopian and apocalyptic extremes, everyday users blend optimism, scepticism, and humour. They use AI creatively but also question its implications.
Rather than merely spreading hype, these discussions provide cultural scaffolding that allows non-experts to grapple with technological uncertainty. In doing so, they reveal how AI perception is as much a social and psychological process as a technical one.
A mirror for modern society
AI, the authors suggest, acts as both a catalyst and a mirror for contemporary anxieties. It channels hopes for progress while reflecting concerns about inequality, misinformation, and loss of agency.
What people fear or celebrate about AI often says less about algorithms and more about society itself. For instance, when users depict ChatGPT as an all-knowing partner or as a manipulative overlord, they project broader tensions between autonomy and dependence, empowerment and control.
By analysing these narratives, Mangiò’s research contributes to a growing body of AI discourse studies, offering insights that extend beyond technical evaluation to the cultural construction of meaning.
The meanings users construct today, not only speculative futures, should inform the agendas of technology developers and policymakers.
– Federico Mangiò
A cultural conversation is still unfolding
As generative models evolve, so too will the ways people make sense of them. Whether as creative partners, tools, adversaries, or instruments of power, these representations shape how AI becomes embedded in everyday life.
In this sense, AI is not simply a product of engineering. It is a social and cultural artefact co-produced through public discourse. By examining these representations, Mangiò’s study reminds us that technological progress always depends on human imagination.
Reference
Mangiò, F., Pedeliento, G., Wassler, P., & Williams, N. (2025). Discursively negotiating AI: A social representation theory approach to LLM-based chatbots. Technological Forecasting and Social Change, 221, 124352. https://doi.org/10.1016/j.techfore.2025.124352
