Why GLP-1 weight loss platforms struggle to scale without behavioural support

Paul Sacher • March 10, 2026

Lessons from real world digital health platforms on why behaviour change support becomes critical as GLP-1 services scale.

Most people building GLP-1 services pour their energy into the clinical and operational infrastructure. The prescribing. The compliance. The customer support. Getting the medical side to scale.


And that is genuinely hard. It is also where most of the pressure builds, and where most of the attention goes.


Which means the lifestyle and behaviour change layer often gets less focus than it deserves. Not because people do not think it matters. But because there is always something more urgent competing for attention.


Last year my team and I were working with the CEO and senior leadership team of a very large weight management company. The business was growing fast. Demand accelerating month after month.


And yet the system was starting to feel like it was coming apart at the seams.


On paper, everything looked like success.


But across leadership discussions, one pressure point kept coming up.


When a service like this begins to scale, the operational load spreads everywhere at once. Prescribers reviewing eligibility. Patients asked for additional medical information. Questions about side effects. Dose increases. Delivery issues. Lifestyle support. And people simply wanting reassurance that what they are experiencing is normal.


All of it hitting the same system. In some of the weight management providers we have worked with across the UK and USA, support volumes have run into the hundreds of thousands of contacts per month. At that scale, things stop being an inconvenience and start becoming a clinical risk.


At one point the conversation turned to a question I have now heard in several companies.

How much does the surrounding support actually matter?


The medication clearly works. That is why demand exists in the first place.

But everything around it adds complexity. More teams. More systems. More cost.


The question I have heard more than once, particularly from commercial and operational leaders, is not whether clinical and customer support matters. It does. Side effect management, adherence, prescriber oversight: these are non-negotiable.


The question is about lifestyle support. Coaching, behaviour change programmes, digital wellbeing tools. The argument goes: the medication works, patients are losing weight, so why invest heavily in lifestyle support now? Focus on medication adherence and side effect management first. Deal with the lifestyle piece later, perhaps when patients start tapering or transitioning off the drug.

It is a reasonable question. And in the short term, the numbers can seem to support it.


But the evidence tells a different story. Lifestyle support is not just about behaviour change. It is about preventing muscle and bone loss, avoiding nutritional deficiencies, supporting mental health, and building the habits that maintain weight loss when patients eventually reduce or stop the medication. Without it, much of what the drug achieves can unwind quickly.


The question is not whether to offer lifestyle support. It is when and how to deliver it without overwhelming patients or services.


Here is what shifted my thinking after working alongside weight management providers in the UK and USA building these services.


The real issue is rarely information.

It is uncertainty.


People disengage from treatment for many reasons. Cost. Life circumstances. Plateaus. Sometimes they simply reach their goal weight. And some take temporary breaks from treatment entirely, what some teams call drug holidays, losing contact with the service at exactly the moment they most need support.


But there is a particular moment that comes up again and again. Uncertainty builds. Patients feel unsupported, lacking the guidance they need to keep going.


And there is no easy way to resolve it quickly enough at scale.


In GLP-1 driven services, that matters more than most platforms realise. Leaders we have worked with have told us that even a one or two percentage point improvement in retention can be worth more than almost any other intervention. The economics compound quickly.
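To make that compounding concrete, here is a back-of-the-envelope sketch using a simple geometric retention model. The monthly retention rate and subscription fee below are illustrative assumptions, not figures from any provider we have worked with.

```python
# Back-of-the-envelope sketch of why small retention gains compound.
# The retention rates and monthly fee are hypothetical assumptions,
# not figures from any provider.

def expected_months_on_service(monthly_retention: float) -> float:
    """Expected billed months under a simple geometric retention model."""
    return 1.0 / (1.0 - monthly_retention)

def lifetime_revenue(monthly_retention: float, monthly_fee: float) -> float:
    """Expected revenue per patient over their time on the service."""
    return expected_months_on_service(monthly_retention) * monthly_fee

FEE = 200.0  # hypothetical monthly subscription fee

baseline = lifetime_revenue(monthly_retention=0.85, monthly_fee=FEE)
improved = lifetime_revenue(monthly_retention=0.87, monthly_fee=FEE)  # +2 percentage points

uplift = (improved - baseline) / baseline * 100
print(f"Baseline lifetime revenue per patient: {baseline:,.0f}")
print(f"With +2pp monthly retention:           {improved:,.0f}")
print(f"Relative uplift:                       {uplift:.1f}%")
```

Under those assumptions, two percentage points of monthly retention translates into roughly fifteen percent more lifetime revenue per patient, which is why retention improvements tend to dominate other levers once a service reaches scale.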


When uncertainty goes unaddressed at scale, two things move together.

Patient engagement drops.

Operational pressure rises.


What we found, working across several of these services, is that many of those moments are actually predictable. They sit at very specific points in the treatment journey. Anxiety around the first medication dose. Plateau periods. Side effect spikes. They are not random.


And the data backs this up. Across the services we have analysed, a significant proportion of inbound support contacts, in some cases over a third, relate to situations that proactive, well-timed communication could have addressed before the patient ever needed to reach out.


If the system responds at those moments, sometimes with human support, sometimes automated, something interesting happens.

Patients stay engaged longer. And the support burden actually falls.
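As a purely illustrative sketch, a proactive layer like this can start as a small set of journey-based triggers. The events, thresholds, messages, and routing below are hypothetical and simplified; they show the shape of the idea, not how any particular service implements it.

```python
# Minimal sketch of journey-triggered proactive support.
# Event names, thresholds, messages, and routing are illustrative
# assumptions, not a description of any specific provider's system.

from dataclasses import dataclass

@dataclass
class Patient:
    id: str
    days_since_first_dose: int
    weeks_without_weight_change: int
    reported_side_effect: bool

# Map predictable journey moments to a proactive outreach action.
TRIGGERS = [
    # (condition, message, route)
    (lambda p: p.days_since_first_dose == 0,
     "What to expect in your first week on the medication.",
     "automated"),
    (lambda p: p.weeks_without_weight_change >= 3,
     "Plateaus are normal. Here is what usually helps.",
     "automated"),
    (lambda p: p.reported_side_effect,
     "A clinician will check in about the side effect you reported.",
     "human"),
]

def proactive_outreach(patient: Patient) -> list[tuple[str, str]]:
    """Return the outreach that should fire for this patient today."""
    return [(msg, route) for cond, msg, route in TRIGGERS if cond(patient)]

# Example: a patient three weeks into a plateau who has reported nausea.
p = Patient(id="p-001", days_since_first_dose=42,
            weeks_without_weight_change=3, reported_side_effect=True)
for message, route in proactive_outreach(p):
    print(f"[{route}] {message}")
```

The design decision that matters is not the code. It is deciding which moments warrant automated reassurance and which need to route to a human or a clinician.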


The companies getting this right are starting to ask different questions. Not just what support to offer, but when it needs to reach patients, and what it needs to do at that specific point in the treatment journey.


This is the work we do at Sacher AI. We sit at the intersection of behavioural science, applied research, and AI safety, helping health and weight management platforms build AI that is not just technically sound, but behaviourally safe and clinically trustworthy.


The technology is rarely the hardest part. Designing systems that adapt and respond in the right way at the right moment, from starting treatment through to long term maintenance: that is where most of the real work sits.


If any of this reflects what your team is working through, happy to have a conversation.
