Episode 29

Why AI Adoption Fails: A Behavioral Framework for AI Implementation

Lis Costa of the Behavioural Insights Team returns to High Signal to deliver a behavioral science playbook for the AI era, focused on human and business impact. We discuss why the potential of AI can only be realized by addressing a single bottleneck: human behavior. The conversation covers the QWERTY Effect, why leaders must intervene now to prevent temporary adoption patterns from calcifying into permanent organizational norms, and how to move organizations past simply automating drudgery toward deep integration. We dig into why AI adoption is fundamentally a behavioral challenge, and Lis offers the Motivation-Capability-Trust triad as a diagnostic framework leaders can use to pinpoint why progress has stalled. She explains how to reframe AI deployment by leveraging Loss Aversion to bypass employee skepticism, and how to design workflows that improve human reasoning rather than replace it. The episode closes with clear guidance on intentional task offloading, the power of using AI to stress-test decisions, and why sanctioning employee experimentation is essential to discovering high-value use cases.
November 27, 2025
Listen on Spotify and Apple Podcasts
Guest
Elisabeth Costa

Behavioural Insights Team


Elisabeth is the Chief of Innovation and Partnerships at the Behavioural Insights Team. She leads global efforts to build new, long-term partnerships and collaborations across the team, as well as to develop BIT’s emerging areas of expertise and service offers.

Elisabeth has broad-ranging experience across the UK, Australia, and the United States. Her particular areas of expertise are the intersection of AI and behavioural science, market regulation, and economic policy.

Elisabeth was previously the Managing Director of the Behavioural Insights Team in the UK. Since joining the team in 2015, she has developed and led its flagship programmes on the intersection of behavioural science and economic policy. Alongside leading dozens of projects and trials, she has co-authored papers on consumer and financial decision-making, online safety, labour markets, business productivity, and sustainability. Elisabeth regularly contributes to policy discussions and public debate.

Elisabeth is currently a Senior Visiting Fellow at the London School of Economics in the Department of Psychological and Behavioural Science. She is also a member of Ofcom’s Online Information Advisory Committee, which advises the UK communications regulator on misinformation and disinformation. Prior to joining the Behavioural Insights Team, Elisabeth held senior roles at the Australian Treasury and completed her postgraduate studies at Harvard University.

HOST
Hugo Bowne-Anderson

Delphina

Hugo Bowne-Anderson is an independent data and AI consultant with extensive experience in the tech industry. He is the host of Vanishing Gradients, an industry podcast exploring developments in data science and AI. Previously, Hugo served as Head of Developer Relations at Outerbounds and held roles at Coiled and DataCamp, where his work in data science education reached over 3 million learners. He has taught at Yale University, Cold Spring Harbor Laboratory, and conferences like SciPy and PyCon, and is a passionate advocate for democratizing data skills and open-source tools.

Key Takeaways

The "QWERTY Effect" Dictates Early AI Implementation.

We are currently in a fragile window where temporary adoption patterns are calcifying into permanent organizational norms. Just as the inferior QWERTY keyboard layout persists due to cultural lock-in, the specific AI workflows and friction points established today will determine how companies operate for the next decade. Leaders must intervene now to design deliberate workflows, knowing that "good enough" habits formed today will be prohibitively expensive to reverse tomorrow.

Adoption is a Spectrum: Move from Shallow Use to Deep Integration.

Measuring adoption via binary metrics (active vs. inactive users) creates a false sense of progress. Most organizations are stuck in "Shallow Adoption"—using AI for rudimentary tasks like summarization or note-taking. Real value lies in "Deep Integration," where AI functions as a collaborative partner in complex reasoning workflows. Leaders must audit their organizations to find where AI is merely automating drudgery versus where it is augmenting decision-making.

Leverage Loss Aversion to Overcome Algorithmic Resistance.

Standard change management strategies often fail because they focus on "efficiency gains," which triggers skepticism. Behavioral experiments reveal that humans are significantly more willing to delegate control to an AI when the goal is preventing loss rather than achieving gain. To bypass algorithmic aversion, leaders should reframe AI tools not as productivity boosters, but as risk-mitigation instruments that protect employees from error and oversight.

Guard Against Cognitive Atrophy via Intentional Offloading.

Unchecked delegation to AI risks "Cognitive Atrophy," where critical human skills diminish due to lack of use. Leaders must distinguish between "Cognitive Offloading" (removing low-value drudgery to focus on higher-order thinking) and the harmful outsourcing of core reasoning. To preserve the "Extended Mind," teams should enforce workflows where independent human thought precedes AI augmentation, ensuring the AI serves as a red-teamer rather than a replacement.

Break the "Chat Chamber" Feedback Loop.

Bias in AI is not static; it is bi-directional and compounding. "Chat Chambers" emerge when user biases prompt the model, and the model’s naturally sycophantic tuning reinforces those biases, creating a downward spiral of confirmation. Leaders should implement "Inference Time Adaptation"—tuning internal models to be less agreeable and more challenging—to ensure AI acts as a check on human bias rather than a mirror.

Design for Metacognition: Bridging System 1 and System 2.

Effective AI augmentation mirrors the "Dual Process" theory of human cognition. While foundation models excel at rapid pattern recognition (System 1), the next frontier of value requires mimicking deliberate, slow reasoning (System 2). Organizations should deploy "Reflective LLMs"—agents designed specifically to pause, question the user's premise, and force a "metacognitive check" before generating an answer.

Active Friction is Required for De-Biasing.

Passive availability of AI tools is insufficient to correct human cognitive error. Behavioral experiments demonstrate that AI successfully de-biases decision-making only when the user is forced to engage with the model’s reasoning. Leaders designing internal applications must insert "choice architecture" frictions—mandatory checkpoints where the AI challenges the human’s assumption—rather than relying on optional sidebars that employees will ignore under time pressure.

Sanction "Play" to Uncover High-Value Use Cases.

A productivity paradox currently exists where executives expect efficiency gains, but individual contributors experience productivity dips due to the cognitive load of learning new tools. To bridge this gap, organizations must carve out sanctioned spaces (like a dedicated "AI-BI" channel or "lab time") for low-stakes experimentation. Real "Deep Integration" workflows rarely come from top-down mandates; they emerge from the bottom-up when employees are given license to tinker without the pressure of immediate output.

Operationalize AI as a Dissenting Voice ("Red Teaming").

Due to the "Sycophancy Problem," standard LLMs default to agreeing with the user, which reinforces groupthink. Leaders should institutionalize "Red Teaming" protocols where AI is explicitly prompted to adopt an adversarial persona (e.g., a skeptic, a competitor, or a regulator). This moves the AI from a passive drafter to an active stress-tester, validating the logic of strategic decisions before they reach human consensus.
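To make this concrete, here is a minimal illustrative sketch (not from the episode) of what such a red-teaming protocol could look like in code, assuming the OpenAI Python SDK; the persona, model name, and draft memo are placeholder assumptions, and the same pattern works with any LLM provider.

```python
# Minimal sketch: ask an LLM to critique a draft decision from an adversarial persona,
# so it acts as a stress-tester rather than an agreeable drafter.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment;
# the model, persona, and memo below are placeholder assumptions.
from openai import OpenAI

client = OpenAI()

draft_memo = "We should migrate all customer support to an AI chatbot by Q3."

red_team_persona = (
    "You are a skeptical regulator reviewing this proposal. Do not agree with the author. "
    "Identify the three strongest objections, the evidence that would change your mind, "
    "and the failure mode most likely to harm customers."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model choice
    messages=[
        {"role": "system", "content": red_team_persona},
        {"role": "user", "content": f"Proposal to stress-test:\n\n{draft_memo}"},
    ],
)

print(response.choices[0].message.content)
```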

Diagnose Stalled Adoption via the Motivation-Capability-Trust Triad.

When AI implementation stalls, leaders often misdiagnose the cause as simple resistance to change. The "Motivation-Capability-Trust" framework provides a precise diagnostic: Is the barrier a lack of incentive (Motivation), a skills gap (Capability), or a fundamental belief that the output is dangerous or inferior (Trust)? Interventions must be targeted; offering more training (Capability) will not solve a stall caused by fear of hallucination (Trust).

Reframe AI as a "Metacognitive Controller" for Human Decision-Making.

Beyond generating text or code, the highest utility of an AI agent is its ability to simulate "Metacognition"—thinking about thinking. Drawing from behavioral interventions like the "Chicago Crime Lab" study, AI can be used to insert a "moment’s pause" in high-stakes workflows, offering users a menu of potential strategies before they commit to an action. This shifts the AI’s role from a content generator to a strategic navigator, helping humans break out of reactive, System 1 responses.

You can read the full transcript here.

Timestamps

00:00 AI & Knowledge of Human Behavior

01:02 Guest Introduction: Lis Costa

01:23 Exploring AI and Human Behavior

01:40 The Adoption Spectrum of AI

03:31 Deep Dive: AI's Impact on Human Behavior

03:54 Framework for AI and Behavioral Science

09:29 Stages of AI Adoption

12:40 Behavioral Barriers to AI Adoption

19:03 Metacognition and AI

29:53 Aligning AI with Human Values

37:45 Guarding Against Cognitive Atrophy

45:51 Encouraging AI Experimentation

47:19 Conclusion and Call to Action

Links From The Show

Transcript
