The Uncanny Valley and the Rising Power of Anti-AI Sentiment
Recent survey data show a wide gap between public and expert views of AI. In Pew's 2025 survey, 76% of AI experts said AI would benefit them personally, while only 24% of the U.S. public said the same. The public was much more likely to say AI would harm them than benefit them.
Negative public sentiment also appears to be growing. In March 2026, Quinnipiac found that 55% of Americans thought AI would do more harm than good in their day-to-day lives, up from 44% in April 2025. It also found that 64% thought AI would do more harm than good in education.
Public hostility toward AI now looks stronger than ordinary skepticism toward a new technology. People have reasons for that response, including fraud, misinformation, privacy invasion, concentration of power, and job displacement. Job displacement carries its own emotional weight because it threatens status, livelihood, and social usefulness, which gives the fear an existential edge.
This essay explores why anti-AI sentiment may be gaining force. The suggestion is that AI may now be producing an uncanny reaction that is more ambient than the classic robot-focused version, diffused across daily life. That would help explain why public reaction often sounds disgusted, unsettled, and bodily rather than merely doubtful.
The Uncanny Field
Masahiro Mori introduced the uncanny valley in 1970. In Mori's original formulation, corpses and zombies sat deep in the valley, so death and lifeless human resemblance were built into the concept from the beginning. His graph plotted affinity against human likeness and showed affinity collapsing into revulsion once a near-human likeness seemed animate yet lifeless.
The literature still offers several explanations for why that drop in affinity happens. Reviews continue to describe competing accounts involving mismatch, category ambiguity, expectation violation, disgust, and threat-related mechanisms rather than one settled model.
AI now appears in forms that trigger human expectations throughout daily life. People encounter text that sounds conversational, voices that sound natural, images and video that almost pass, and agents that mimic competence, memory, initiative, or empathy. Reactions that once centered on a robot or replica may now be attaching to AI as a category.
Repeated contact gives that shift its force. A chatbot that sounds empathic but rings hollow, a synthetic voice that feels almost right, or a generated video clip that falls apart on inspection may each be minor on its own. Repeated often enough, they can make AI as a category feel socially off.
Mismatch is still the strongest basic explanation. AI keeps presenting cues that invite human social expectations, then fails to satisfy them reliably. Natural language invites expectations of understanding. Warm tone invites expectations of care. Realistic video invites expectations of authenticity. Agentic behavior invites expectations of judgment and competence. Repeated breaks in those expectations make aversion easier to understand. Research on face-voice mismatch and realism consistency supports that picture.
Repeated exposure may also change the reaction over time. Some work on repeated interaction with robots suggests uncanniness can decrease with familiarity in certain contexts. Yet habituation may be partial: familiarity can reduce the startle while leaving behind a more stable sense that the category is untrustworthy. That possibility fits AI especially well because people are encountering many versions of the same near-human pattern across modalities.
Disgust and Danger
Disgust and disease-avoidance are longstanding candidates in uncanny valley theory. The basic idea is that near-human abnormalities can activate evolved avoidance responses because deviations in appearance or behavior may function as cues of illness, contamination, or threat. A 2025 study on virtual agents explicitly frames its findings in terms of the pathogen-avoidance hypothesis.
Danger-avoidance is a wider evolutionary version of that argument. Moosa and Ud-Dean argue that pathogen avoidance alone is too narrow because even a fresh corpse can provoke strong aversion before visible decay appears. Their proposal is that the uncanny valley reflects a danger-avoidance system more generally. That is relevant for AI because near-human abnormality may be enough to trigger caution or revulsion even when the stimulus does not resemble a diseased body.
AI often presents the kind of near-human abnormality that could fit that account. It speaks confidently without understanding. It performs social fluency without satisfying the conditions that make human social signals trustworthy. That kind of mismatch could plausibly recruit disgust or danger-detection processes even when the stimulus is text, voice, or video rather than a literal body.
Mortality salience and terror management theory point to another mechanism. MacDorman explicitly connected the uncanny valley to terror management theory, proposing that highly humanlike robots may feel eerie partly because they act as reminders of death and human vulnerability. Related work by Ho and MacDorman also links uncanny reactions to fears associated with dying and to psychological defenses against mortality. Put bluntly, what people consider AI slop could remind them of their own mortality because it looks or sounds human while feeling hollow and intrusive.
AI now circulates alongside explicit existential discourse. People encounter AI together with warnings about extinction, superintelligence, replacement, and loss of control. Mortality cues may therefore be present in two forms at once. One is implicit and tied to uncanny responses to near-human but hollow systems. The other is explicit and tied to the surrounding narratives of existential risk and human displacement. Job displacement belongs here too because it is closely tied to redundancy, status loss, and diminished social value.
These mechanisms can accumulate rather than compete. AI can repeatedly produce social mismatch while also activating older aversion systems and explicit existential fears. Under those conditions, anti-AI sentiment would be expected to feel more forceful than a standard policy disagreement.
Why This Could Make Anti-AI Sentiment Stronger Now
The Affective Layer
This affective layer helps explain why negative public sentiment toward AI can seem more intense than the arguments alone would predict. People are evaluating AI through explicit beliefs about misuse, power, labor, and privacy. They may also be evaluating it through repeated low-level experiences of social wrongness, aversion, and existential unease.
The public-expert gap also looks different through this lens. Experts interact with AI through a frame centered on capability, utility, and technical progress. Much of the public encounters AI as disruption, intrusion, imitation, or threat. Low-level uncanny reactions inside those encounters would help explain why reassurance about usefulness often fails to touch the emotional center of the response.
Limits
The uncanny valley literature remains mixed. Extending it from robots and embodied replicas to AI as a field is a conceptual move rather than an established finding. Repeated exposure can reduce uncanniness in some settings. Cross-national differences in anti-AI sentiment may also have more to do with regulation, media framing, labor conditions, or institutional trust than with disgust or mortality salience.
Even so, AI is now a category people keep running into across text, voice, video, and agents. If those repeated encounters keep activating mismatch, disgust, danger-avoidance, and mortality-related responses, then some share of anti-AI sentiment may be building from the body upward as well as from explicit argument.
Design and Public Reaction
AI products already create expectations through their degree of humanness. Chatbots, voice agents, avatars, AI tutors, customer service agents, companions, generated video, and humanoid robots all use human cues in different ways. Wording, voice, timing, memory, emotional tone, visual realism, and behavior can either fit together or clash.
The earlier research points to two clear ways out of the uncanny zone, and it suggests a third design path. The first is consistency: when a system's cues fit together across wording, voice, timing, visual design, and behavior, it violates fewer expectations. The second is full convincingness: in the classic uncanny valley framework, affinity rises again once a system gets far enough past the valley to feel genuinely convincing. The third, the design lesson I would add, is purposeful distance from humanness: a system that stays clearly stylized, machine-like, or socially distinct may trigger fewer human expectations in the first place.
Humanoid robotics may intensify the problem. Multimodal AI already creates a less embodied uncanny effect through text, voice, video, and agents. Robotics adds gait, facial motion, timing, touch, and physical presence, which is closer to classic uncanny valley terrain. As anthropomorphized robots get closer to passing as human without fully succeeding, they may produce a stronger embodied wave of disgust and fear.
Anti-AI sentiment may be growing in force because repeated exposure is making AI feel increasingly hollow and intrusive. Political, economic, and ethical explanations remain central. This commentary adds an account of why the reaction can feel so visceral.
Subscribe for future posts
If you want new writing at the intersection of AI, psychology, ethics, and the implementation of AI in clinical practice, subscribe on Substack.
The views expressed here are my own and do not necessarily reflect the views of any current or future employer, training site, academic institution, or affiliated organization.