Your AI talks to humans. We make sure it's safe, effective, and ready to scale.


We help digital health teams build AI that's safe, effective, and ready for real-world use.



Book a Discovery Call

Teams we work with


Digital health companies building AI that talks to patients, supports clinicians, or drives behaviour change — and need it to be safe, effective, and deployable.
A selection of satisfied Sacher AI client logos

Our expertise


An example conversation between a person and an AI Health Coach Chatbot

We sit at the intersection of behavioural science, applied research, and AI safety — helping digital health teams build AI that's not just technically sound, but behaviourally safe and clinically trustworthy.


From early-stage design through to real-world deployment, we cover the full AI lifecycle. Our proprietary PromptSafe platform gives teams structured, evidence-based tools to evaluate AI outputs against safety and behavioural criteria at every stage.

Our approach


Discover

We map the real behavioural, user, and safety challenges your AI system needs to address — before any design decisions are made.

Design

We co-design through a behavioural and clinical lens — shaping how AI interacts with people, with early testing to ensure solutions are evidence-informed, safe by design.

Deliver

We evaluate and support AI systems as they move toward real-world use, using structured behavioural and safety criteria to assess readiness, guide iteration, and support responsible deployment at scale.

Client feedback


When the Sacher AI team joined, we had ambition but little clarity around AI, or behavioural change. Through Paul's leadership, experience, and drive, that changed rapidly and meaningfully. They played a foundational role in shaping strategy and helped us execute. Paul was instrumental in helping us build the teams that now lead our work. The Sacher AI team's input significantly accelerated our progress. Paul brings deep domain expertise, a highly collaborative spirit, and a bias for action. His impact at Numan has been profound … and lasting.


Jamie Smith Webb, CTO, Numan

Button

"Sacher AI quickly felt like an extension of our team. Due to the strength of their behavioural science, LLM and healthcare expertise, they were able to quickly understand our behavioural science approaches and AI intentions, and help to build something impactful. User safety was prioritised and the operational tests to check the interactions/outcomes were really thorough. I recommend them to other health technology innovators."


Grace Gimson

Cofounder and CEO of Holly Health

"I give my highest recommendation to anyone considering working with Dr Sacher. Dr Sacher is a true thought leader, playing an instrumental role in defining how we could best bring the power of AI effectively and safely to our products.

He will bring huge value to any organization that is lucky enough to have his time."


Jeff Feldgoise

SVP Digital Product, Allurion Technologies

"The application and utilisation of data/clinical evidence throughout the product lifecycle can be hard going, but Dr Sacher has a flair for really bringing insight to life and using evidence to inspire and engage leadership and delivery teams."


Robyn Glen

Digital Officer, Slimming World

"From the moment we hired Paul, it was clear that his wealth of experience in behaviour change, weight loss, and data science would greatly benefit our organisation, healthcare providers and patients. His transformative work in creating our behaviour change program...has had astonishing results. Our patients have been able to sustain 95% of their weight loss even 12 months after the program's end, which is truly an impressive metric considering the chronic nature of obesity."


Benoit Chardon

Chief Commercial Officer, Allurion Technologies

"Dr Sacher has a unique and highly-valued skills-set that enables him to operate extremely effectively across academic and commercial settings to drive the translation of basic research into practice in the areas of digital technologies, behaviour change, weight management and health promotion. He is a fantastic team player and helps bring science to the real world in a way that changes the lives of millions of people. His excellent communication and analytical skills help different experts work together to develop new solutions."


Prof James Stubbs

Professor of Appetite and Energy Balance at University of Leeds

"Dr Sacher is the rare expert who combines a deep understanding of the latest academic research with a knack for crafting impactful digital experiences. Smart, patient, and empathetic, he played a pivotal role in building and launching Coach Iris, our AI Weight Loss coach. His expertise defined the strategy, evaluation and coaching approach, ensuring Coach Iris delivers meaningful, accurate and safe guidance for our weight loss patients."


Joe Ranft

VP Digital Product, Allurion Technologies

"Dr Sacher is one of the most talented and driven experts in nutrition, behavioural medicine, and lifestyle modification in the field today. A widely published scientist in his own right, he has a great command of a highly dynamic research field that is changing by the day. He is also a thoughtful and kind leader who knows how to assemble teams that march toward the same goal. It was a pleasure working with him to build the best-in-class program we have at Allurion today."


Dr Shantanu Gaur

Founder and CEO Allurion Technologies

"Dr. Sacher provided insightful, data driven, Behavioral Science-based subject matter expertise for patients and providers. His impact on the development of our predictive AI model helped identify patients who were at risk of not achieving their weight loss goals, leading to earlier intervention. Dr. Sacher also played a key role in developing Allurion's ChatGPT virtual coach, including the strategy, prompt engineering, quality assurance, safety monitoring and the evaluation and interpretation of the performance metrics. 


Chris Martinez

VP Software Engineering, Allurion Technologies

When the Sacher AI team joined, we had ambition but little clarity around AI, or behavioural change. Through Paul's leadership, experience, and drive, that changed rapidly and meaningfully. They played a foundational role in shaping strategy and helped us execute. Paul was instrumental in helping us build the teams that now lead our work. The Sacher AI team's input significantly accelerated our progress. Paul brings deep domain expertise, a highly collaborative spirit, and a bias for action. His impact at Numan has been profound … and lasting.


Jamie Smith Webb, CTO, Numan

Button

"Sacher AI quickly felt like an extension of our team. Due to the strength of their behavioural science, LLM and healthcare expertise, they were able to quickly understand our behavioural science approaches and AI intentions, and help to build something impactful. User safety was prioritised and the operational tests to check the interactions/outcomes were really thorough. I recommend them to other health technology innovators."


Grace Gimson

Cofounder and CEO of Holly Health

Read more feedback

Recent blog updates


Diverse digital health team reviewing patient support data and treatment journey analytics in a mode
By paul sacher March 10, 2026
What digital health platforms are learning as GLP 1 services scale. Why behaviour change support and AI systems matter for retention, safety and operational pressure.
March 9, 2026
Reflections from the MHRA AI Airlock simulation workshop
March 9, 2026
Artificial intelligence systems are increasingly interacting directly with people. They guide health decisions. They answer personal questions. They offer advice, reassurance, and encouragement. In many cases they are now embedded in products people use repeatedly over months or even years. Yet most conversations about AI safety still focus almost entirely on technical performance. Accuracy, bias, privacy, and security dominate the discussion. These are essential considerations. But they are not the whole story. What is still missing in many AI systems is systematic evaluation of behavioural impact. A global call from behavioural scientists Recently, Dr Paul Sacher, Founder of Sacher AI and Research Director at the Behavioral AI Institute, led an international group of behavioural scientists to address this issue. Their open letter, now published in Wellcome Open Research, argues that artificial intelligence systems inevitably influence how people think, feel, decide, and act. Despite this, behavioural effects are rarely treated as a core requirement in AI development, evaluation, or governance. The paper brings together researchers from institutions including Imperial College London, Harvard University, Duke University, the University of Exeter, and the Alan Turing Institute. It outlines where behavioural risks arise in real world AI systems and why these risks deserve far greater attention. Behavioural risks are not edge cases When organisations think about AI risk, they often focus on incorrect outputs or technical failures. However, behavioural risks can emerge even when systems are technically accurate. AI systems influence behaviour through repeated interaction. They shape decision making, motivation, confidence, and emotional responses over time. For example, conversational systems can unintentionally: · encourage over reliance on automated advice · reinforce existing beliefs through personalisation · influence emotional regulation through tone and framing · alter motivation and goal setting behaviour · reduce appropriate help seeking in certain contexts These effects arise through well documented behavioural mechanisms such as automation bias, trust calibration, anthropomorphism, and reinforcement learning from user feedback. Yet most AI evaluation frameworks still prioritise task success and user satisfaction rather than behavioural outcomes. The growing gap between technical success and real world impact This creates a gap between technical performance and real world impact. A system can perform well on benchmarks and still shape behaviour in ways that undermine user wellbeing, decision quality, or long term outcomes. The risk becomes particularly important in domains where AI systems interact with people repeatedly and at scale. Healthcare, education, financial guidance, and emotional support tools are clear examples. In these environments, small behavioural effects can accumulate over time and remain invisible to standard performance metrics. Behavioural science is often missing from the AI lifecycle Behavioural science offers decades of research into how people think, feel, and act. It provides practical tools for understanding trust, influence, decision making, and motivation. However, behavioural expertise is still rarely embedded systematically across the AI lifecycle. In many projects behavioural scientists are not involved in system design, consulted only during ethics review, or brought in late in development when major design choices are already fixed. This often means behavioural risks are identified too late or not assessed at all. What responsible AI should look like The open letter argues that responsible AI must go beyond technical safeguards. Systems that interact directly with people must demonstrate what the authors describe as psychological competence. In practice, this means responding in ways that are emotionally appropriate, behaviourally responsible, and aligned with the user’s needs and context. Achieving this requires several shifts in how AI systems are designed and evaluated. Behavioural assumptions should be made explicit during design. Behavioural expertise should be embedded early in development. Evaluation should assess behavioural outcomes alongside technical performance. Monitoring should continue after deployment as systems evolve and users adapt. These are not theoretical concerns. They are practical requirements for building AI systems that work safely in real world environments. Why this matters for digital health and AI companies At Sacher AI we see this challenge frequently when working with digital health companies building patient facing AI systems. Teams often focus heavily on model performance, prompt design, or system architecture. These are important elements. But once systems begin interacting with people, behavioural dynamics quickly become central to product safety and effectiveness. Tone influences trust. Feedback influences motivation. Personalisation influences decision making. Without deliberate behavioural evaluation, systems can unintentionally nudge users in directions that product teams never intended. For companies operating in regulated environments such as healthcare, this also has implications for governance, compliance, and long term product risk. Bringing behavioural safety into AI development Addressing behavioural risk does not require reinventing AI governance. It requires integrating behavioural science into existing development processes. In practice this can include structured behavioural evaluation during development, adversarial testing of conversational agents, governance frameworks for human facing AI, and ongoing monitoring of behavioural outcomes once systems are deployed. Many organisations are beginning to recognise this gap and are looking for ways to assess behavioural safety earlier in their development pipeline. A broader shift in how we think about AI safety The central message of the open letter is simple. If AI systems influence human behaviour, behavioural science must be treated as foundational infrastructure for responsible AI. Technical safety alone is not enough. Understanding how people interpret, trust, and respond to AI systems is essential for building technology that works safely and effectively in the real world. Read the open letter The full paper is available in Wellcome Open Research . About Sacher AI Sacher AI works with digital health and AI companies to design, test, and deploy human facing AI systems safely and effectively. Our work combines behavioural science, AI engineering, and real world healthcare experience to help organisations build AI systems that are not only technically strong but also clinically and behaviourally safe.  If your organisation is developing AI systems that interact directly with patients or users, we are always happy to start a conversation. More information can be found at https://sacher.ai Sacher PM, Michie S, Hauser OP et al. The missing discipline in AI: a call for behavioural science . Wellcome Open Res 2026, 11:152 (https://doi.org/10.12688/wellcomeopenres.25922.1)

Alata

Alice

Open Sans

Noto Sans

Bebas Neue

Great Vibes

Rock Salt

Exo

Belgrano

Overlock

Cinzel

Indie Flower

Staatliches

Roboto Slab

Lato

Noto Serif

Open Sans

Montserrat

Ubuntu

Rubik

Delius

Amiri

Montserrat