The GLP-1 revolution has outpaced the behavioural science

Paul Sacher • March 12, 2026


Why digital health platforms may now be running the largest behavioural experiment in obesity care


GLP-1 medications have transformed the treatment landscape for obesity.


In a remarkably short period of time, pharmacological therapies such as semaglutide and tirzepatide have demonstrated levels of weight loss that were previously difficult to achieve outside of bariatric surgery. Demand has surged globally as both clinicians and patients recognise the potential of these treatments.


But an important detail is often overlooked in the public conversation around these medications.


In the clinical trials that led to their approval, GLP-1 therapies were almost never studied in isolation. They were typically delivered alongside structured lifestyle and behavioural interventions. 


That was the treatment model.


Medication plus behavioural support.


Yet despite the rapid growth of pharmacological treatments for obesity, there is a surprising gap in the evidence base. We still have relatively little high-quality evidence about which behavioural interventions are most effective in combination with these medications.


In other words, drug innovation has moved faster than the behavioural science.


A new real world laboratory

This creates a fascinating and important situation.


Digital health platforms delivering GLP-1 treatments are now operating at a scale that was rarely possible in traditional clinical research.


Hundreds of thousands of patients are interacting with these services.

Millions of digital touchpoints are being generated across patient journeys.


Within those interactions sits a wide range of behavioural signals:


• how patients respond to early side effects

• what happens during plateau periods

• when motivation drops

• what helps patients stay engaged long term

• how patients transition into long term maintenance


These are not purely clinical questions.


They are behavioural questions.


And the answers are embedded within the real world data generated by digital care platforms.
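To make that concrete, here is a minimal sketch, in Python, of how two of those signals might be pulled out of routine touchpoint logs. The field names, thresholds, and function names are illustrative assumptions, not a description of any specific platform's data model.

```python
# Illustrative sketch only: field names and thresholds are assumptions,
# not any real platform's schema.
from datetime import datetime, timedelta

def flag_engagement_drop(touchpoints: list[dict], now: datetime, quiet_days: int = 14) -> bool:
    """True if the patient has had no recorded interaction for `quiet_days` days."""
    if not touchpoints:
        return True
    last_seen = max(datetime.fromisoformat(t["timestamp"]) for t in touchpoints)
    return now - last_seen > timedelta(days=quiet_days)

def flag_plateau(weights_kg: list[float], window: int = 4, threshold_kg: float = 0.5) -> bool:
    """True if weight change across the last `window` readings falls below the threshold."""
    if len(weights_kg) < window:
        return False
    recent = weights_kg[-window:]
    return abs(recent[0] - recent[-1]) < threshold_kg
```

Signals like these say nothing on their own. Their value comes from what the service does next, and that is precisely the behavioural question.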


Obesity is a chronic disease

Another key point, emphasised in recent clinical guidance, is that obesity should be understood as a chronic disease requiring sustained management rather than a short term intervention.


This has important implications for how treatment systems are designed.


Pharmacotherapy may initiate weight loss, but long term outcomes depend heavily on behavioural factors such as adherence, nutrition, physical activity, and psychological support.


Digital health platforms therefore face a challenge that is partly clinical but deeply behavioural.


How do you design systems that support patients not just for weeks or months, but potentially for years?


Increasingly, the answer involves combining clinical care with behavioural science and AI enabled support systems that can respond to patients at scale.
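As one hedged illustration of what responding at scale can mean in practice, the sketch below routes incoming patient messages: possible side-effect reports go to the clinical team, everything else to automated behavioural support. The keyword list and function name are invented for the example; a production system would need clinically validated triage logic.

```python
# Hypothetical triage rule, for illustration only. The keyword list is an
# assumption; real systems require clinically validated escalation logic.
SIDE_EFFECT_TERMS = {"nausea", "vomiting", "dizziness", "severe pain", "allergic"}

def route_message(message: str) -> str:
    """Return 'clinical_review' for possible side-effect reports, else 'ai_support'."""
    text = message.lower()
    if any(term in text for term in SIDE_EFFECT_TERMS):
        return "clinical_review"
    return "ai_support"

print(route_message("I have had constant nausea since my dose went up"))  # clinical_review
print(route_message("Any tips for staying motivated this week?"))         # ai_support
```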


From product data to behavioural insight

At the same time, many organisations now hold large volumes of patient interaction data but have limited ways of translating that information into meaningful behavioural insight.


The opportunity is significant.


If digital health platforms are effectively running one of the largest real world behavioural experiments in obesity care, the next question becomes whether we are learning from it.


At Sacher AI, much of our work sits at the intersection of industry and research.


Alongside building and evaluating AI systems for healthcare, our team conducts applied behavioural and clinical research designed to turn real world platform data into actionable insight. This includes identifying behavioural patterns in patient journeys, testing intervention strategies, and generating credible evidence that can inform both product development and clinical practice.
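As a simple, hypothetical illustration of the "testing intervention strategies" part, the sketch below compares twelve-week retention between two support strategies using a chi-square test. The counts are invented for the example and are not real platform results.

```python
# Hypothetical example: the counts below are invented for illustration,
# not real platform data or results.
from scipy.stats import chi2_contingency

# Rows: support strategy A, support strategy B
# Columns: retained at 12 weeks, not retained
contingency = [
    [420, 180],  # strategy A (hypothetical counts)
    [380, 220],  # strategy B (hypothetical counts)
]

chi2, p_value, dof, expected = chi2_contingency(contingency)
rate_a = contingency[0][0] / sum(contingency[0])
rate_b = contingency[1][0] / sum(contingency[1])
print(f"Retention A: {rate_a:.1%}  Retention B: {rate_b:.1%}  p = {p_value:.3f}")
```

In practice this sits inside a much larger evaluation design, but the principle is the same: platform data can support credible comparisons, not just dashboards.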


Because the next phase of innovation in obesity care may not come solely from new medications.


It may come from understanding behaviour better.
