Your AI talks to humans. We make sure it's safe, effective, and ready to scale.


We help digital health teams build AI that's safe, effective, and ready for real-world use.



Book a Discovery Call

Teams we work with


Digital health companies building AI that talks to patients, supports clinicians, or drives behaviour change — and need it to be safe, effective, and deployable.
A selection of satisfied Sacher AI client logos

Our expertise


An example conversation between a person and an AI Health Coach Chatbot

We sit at the intersection of behavioural science, applied research, and AI safety — helping digital health teams build AI that's not just technically sound, but behaviourally safe and clinically trustworthy.


From early-stage design through to real-world deployment, we cover the full AI lifecycle. Our proprietary PromptSafe platform gives teams structured, evidence-based tools to evaluate AI outputs against safety and behavioural criteria at every stage.

Our approach


Discover

We map the real behavioural, user, and safety challenges your AI system needs to address — before any design decisions are made.

Design

We co-design through a behavioural and clinical lens — shaping how AI interacts with people, with early testing to ensure solutions are evidence-informed, safe by design.

Deliver

We evaluate and support AI systems as they move toward real-world use, using structured behavioural and safety criteria to assess readiness, guide iteration, and support responsible deployment at scale.

Client feedback


When the Sacher AI team joined, we had ambition but little clarity around AI, or behavioural change. Through Paul's leadership, experience, and drive, that changed rapidly and meaningfully. They played a foundational role in shaping strategy and helped us execute. Paul was instrumental in helping us build the teams that now lead our work. The Sacher AI team's input significantly accelerated our progress. Paul brings deep domain expertise, a highly collaborative spirit, and a bias for action. His impact at Numan has been profound … and lasting.


Jamie Smith Webb, CTO, Numan

Button

"Sacher AI quickly felt like an extension of our team. Due to the strength of their behavioural science, LLM and healthcare expertise, they were able to quickly understand our behavioural science approaches and AI intentions, and help to build something impactful. User safety was prioritised and the operational tests to check the interactions/outcomes were really thorough. I recommend them to other health technology innovators."


Grace Gimson

Cofounder and CEO of Holly Health

"I give my highest recommendation to anyone considering working with Dr Sacher. Dr Sacher is a true thought leader, playing an instrumental role in defining how we could best bring the power of AI effectively and safely to our products.

He will bring huge value to any organization that is lucky enough to have his time."


Jeff Feldgoise

SVP Digital Product, Allurion Technologies

"The application and utilisation of data/clinical evidence throughout the product lifecycle can be hard going, but Dr Sacher has a flair for really bringing insight to life and using evidence to inspire and engage leadership and delivery teams."


Robyn Glen

Digital Officer, Slimming World

"From the moment we hired Paul, it was clear that his wealth of experience in behaviour change, weight loss, and data science would greatly benefit our organisation, healthcare providers and patients. His transformative work in creating our behaviour change program...has had astonishing results. Our patients have been able to sustain 95% of their weight loss even 12 months after the program's end, which is truly an impressive metric considering the chronic nature of obesity."


Benoit Chardon

Chief Commercial Officer, Allurion Technologies

"Dr Sacher has a unique and highly-valued skills-set that enables him to operate extremely effectively across academic and commercial settings to drive the translation of basic research into practice in the areas of digital technologies, behaviour change, weight management and health promotion. He is a fantastic team player and helps bring science to the real world in a way that changes the lives of millions of people. His excellent communication and analytical skills help different experts work together to develop new solutions."


Prof James Stubbs

Professor of Appetite and Energy Balance at University of Leeds

"Dr Sacher is the rare expert who combines a deep understanding of the latest academic research with a knack for crafting impactful digital experiences. Smart, patient, and empathetic, he played a pivotal role in building and launching Coach Iris, our AI Weight Loss coach. His expertise defined the strategy, evaluation and coaching approach, ensuring Coach Iris delivers meaningful, accurate and safe guidance for our weight loss patients."


Joe Ranft

VP Digital Product, Allurion Technologies

"Dr Sacher is one of the most talented and driven experts in nutrition, behavioural medicine, and lifestyle modification in the field today. A widely published scientist in his own right, he has a great command of a highly dynamic research field that is changing by the day. He is also a thoughtful and kind leader who knows how to assemble teams that march toward the same goal. It was a pleasure working with him to build the best-in-class program we have at Allurion today."


Dr Shantanu Gaur

Founder and CEO Allurion Technologies

"Dr. Sacher provided insightful, data driven, Behavioral Science-based subject matter expertise for patients and providers. His impact on the development of our predictive AI model helped identify patients who were at risk of not achieving their weight loss goals, leading to earlier intervention. Dr. Sacher also played a key role in developing Allurion's ChatGPT virtual coach, including the strategy, prompt engineering, quality assurance, safety monitoring and the evaluation and interpretation of the performance metrics. 


Chris Martinez

VP Software Engineering, Allurion Technologies

When the Sacher AI team joined, we had ambition but little clarity around AI, or behavioural change. Through Paul's leadership, experience, and drive, that changed rapidly and meaningfully. They played a foundational role in shaping strategy and helped us execute. Paul was instrumental in helping us build the teams that now lead our work. The Sacher AI team's input significantly accelerated our progress. Paul brings deep domain expertise, a highly collaborative spirit, and a bias for action. His impact at Numan has been profound … and lasting.


Jamie Smith Webb, CTO, Numan

Button

"Sacher AI quickly felt like an extension of our team. Due to the strength of their behavioural science, LLM and healthcare expertise, they were able to quickly understand our behavioural science approaches and AI intentions, and help to build something impactful. User safety was prioritised and the operational tests to check the interactions/outcomes were really thorough. I recommend them to other health technology innovators."


Grace Gimson

Cofounder and CEO of Holly Health

Read more feedback

Recent blog updates


March 6, 2026
Over the last two years, generative AI has rapidly entered healthcare. Startups are building AI health coaches. Pharmaceutical companies are experimenting with patient support agents. Digital health platforms are deploying conversational AI to guide patients through treatment journeys. The opportunity is clear. AI can provide personalised support, expand access to care, and improve patient engagement. But there is a fundamental challenge that is still poorly understood. Most AI systems were not designed to interact safely with humans. And that gap becomes obvious the moment AI starts speaking directly to patients. Human facing AI creates a new class of risk When large language models are used internally, mistakes can often be caught before they reach users. When AI interacts directly with patients, the situation changes. An AI system that speaks to patients can provide incorrect health information, reinforce harmful behaviours, misunderstand emotional context, respond inappropriately to distress, or fail to escalate serious clinical signals. These are not rare edge cases. They are predictable failure modes of generative AI. The issue is not that the models are unintelligent. The issue is that they are probabilistic systems generating language without true understanding. That makes them powerful, but also unpredictable. For organisations deploying conversational AI in healthcare, safety therefore becomes a design challenge. Why traditional AI evaluation is not enough Most AI systems today are evaluated using benchmarks that measure reasoning ability, coding performance, knowledge retrieval, or accuracy on test datasets. These metrics are useful. But they say very little about how an AI system behaves in a real conversation with a human. When AI interacts with patients, the most important questions are different. Does the AI respond appropriately to vulnerable users? Does it avoid giving unsafe health advice? Does it communicate in a supportive and non judgemental tone? Does it recognise when it should escalate to a clinician? Does it stay aligned with clinical guidance? These are behavioural and clinical safety questions. Standard AI benchmarks do not answer them. Designing safe conversational AI At Sacher AI, our work focuses on a simple principle. If AI is going to interact with humans, it needs to be designed with human behaviour in mind. That means combining expertise from several fields including behavioural science, clinical safety, conversational design, and AI engineering. Together these disciplines help teams understand how users interpret what an AI says, how behaviour can be influenced by language, and when systems must escalate to human support. This perspective is often missing in early AI development. Many teams only start thinking about safety after an AI system is already live. By then, fixing the underlying issues becomes much harder. Stress testing AI systems before they reach patients One of the most effective ways to improve safety is to stress test AI systems before deployment. This means simulating large numbers of conversations that reflect the kinds of interactions real users might have. For example a patient feeling anxious about medication, a parent asking for advice about a child’s weight, a user frustrated by slow progress, or someone describing symptoms that may require clinical attention. By exposing AI systems to diverse scenarios, teams can identify unsafe behaviours early and improve the system before it reaches real users. At Sacher AI, this approach led us to build PromptSafe, a platform designed to evaluate conversational AI used in health and behavioural contexts. PromptSafe enables teams to simulate interactions with synthetic patient personas, define behavioural and clinical safety metrics, test AI systems across thousands of conversations, and track safety improvements over time. Monitoring AI once it is deployed Testing before launch is important, but it is not enough. AI systems continue to evolve once they are live. Models are updated, prompts change, and user behaviour shifts. This creates a second challenge which is monitoring safety during real world use. To address this, we are developing OVRSI, a system designed to monitor AI interactions and detect potential safety signals in real time. This includes identifying things like unsafe advice, emotional distress signals, guideline deviations, or escalation scenarios. When risks are detected, organisations can intervene quickly and improve the system. The future of AI in healthcare will depend on trust AI has the potential to transform healthcare by providing scalable and personalised support. But this future depends on trust. Patients, clinicians, and regulators will increasingly expect AI systems to demonstrate that they behave safely and responsibly. Organisations building human facing AI must therefore design for safety from the start. This means stress testing systems, embedding behavioural and clinical expertise into development teams, and monitoring how AI behaves in the real world. The companies that succeed will not just build powerful AI. They will build safe AI that people can trust. Book a discovery call If you are building patient facing conversational AI and want to ensure it is safe, reliable, and ready for real world deployment, we can help. At Sacher AI we work with digital health companies and AI teams to design and evaluate safe human facing AI systems. If you would like to explore how we can support your team, book a discovery call.
By Sacher AI December 13, 2024
In the rapidly evolving landscape of artificial intelligence, generative AI has become synonymous with creating images, text and video. However, its potential extends far beyond content generation—it's revolutionising the field of behavioural science, opening up unprecedented opportunities for understanding and influencing human behaviour.
Coding
By Sacher AI October 4, 2024
In today’s fast-evolving landscape, blending AI with behavioural science is transforming industries—from healthcare to finance and legal—by creating solutions that not only advance organisational goals but also resonate deeply with individual users. At Sacher AI, our approach combines cutting-edge AI with insights from behavioural science, leading to tailored, effective, and responsible solutions. 

Alata

Alice

Open Sans

Noto Sans

Bebas Neue

Great Vibes

Rock Salt

Exo

Belgrano

Overlock

Cinzel

Indie Flower

Staatliches

Roboto Slab

Lato

Noto Serif

Open Sans

Montserrat

Ubuntu

Rubik

Delius

Amiri

Montserrat