Dr. Sievert Weiss is a physician, co-founder, and medical director of AMBOSS. Since its founding, he and an interdisciplinary team have pursued the goal of making medical knowledge available where it is needed in everyday clinical practice. AMBOSS combines learning, reference, and decision support on a platform that is now used by healthcare professionals in over 180 countries. The focus is on the reliable preparation of evidence-based content – understandable, verifiable, and immediately applicable.
The AI Mode for medical and nursing-specific research is part of this long-term development. It reflects the company's commitment to combining medical expertise and AI technology responsibly – with a focus on evidence, traceability, and practical relevance. In the independent NOHARM benchmark by Harvard and Stanford, the technical basis of the AI Mode (LiSA 1.0) was recently rated as the best system.
Developed in Berlin, AI Mode does not replace professional expertise, but takes over the most time-consuming tasks: researching, structuring, and contextualizing medical information. In this interview, Dr. Sievert Weiss talks about the importance of independent evaluations based on realistic patient cases and Berlin's role in responsible AI (artificial intelligence) in medicine.
Mr. Weiss, many people initially associate AMBOSS with learning and reference. Looking back at LiSA and its development over the past few years, how has your understanding of what AI should and should not do in everyday clinical practice changed?
AMBOSS has already established itself in clinical practice in recent years: more than a quarter of all physicians in Germany make clinical decisions based on our guideline-compliant, precise recommendations for action. Our AI Mode now offers the opportunity to interact with personalized, curated AMBOSS content, tailored specifically to my medical or nursing question. It does not replace professional expertise, but takes over the most time-consuming tasks: researching, structuring, and contextualizing medical information – tailored to my profession or specialist group, by the way.
We were already successfully using AI before the spread of common chat functions, for example in our search function. In the future, all areas of AMBOSS – learning, practicing, and teaching – will be supported by powerful, customized AI – for students, physicians, and nursing professionals.
The NOHARM study by Harvard and Stanford rated LiSA as the best clinical AI system based on real clinical cases. How important is such an independent evaluation for your daily product work?
In addition to our internal evaluations, the independent NOHARM study provides valuable insight for our product development. While previous studies show what AI can achieve in a controlled “laboratory environment,” benchmarks such as NOHARM go one step further. They examine the actual impact of AI on patient safety and treatment success under realistic conditions.
The benchmark, validated by medical specialists, is based on 100 real clinical cases from 10 specialties that reflect reality – including all uncertainties and incomplete information. Practical relevance and applicability have always been a high priority for us. We therefore actively incorporate the results into our product development and remain interested in participating in independent studies to create transparency and real added value for our users.
Medical decisions ultimately always remain human. When developing LiSA, how do you ensure that AI provides support without overriding medical responsibility or shortening decision-making processes?
That's right – ultimately, responsibility and clinical decisions lie with medical professionals, but AI Mode simplifies the medical research process along the way.
All relevant information is compiled and contextualized from the curated AMBOSS chapters. AI Mode also points out relevant aspects that go beyond the original question but should not be overlooked. It also highlights when guideline recommendations differ or even contradict each other, giving physicians an overview of the evidence base.
If AI Mode cannot answer a question due to insufficient information, this is clearly communicated – without false confidence. Of course, the information must still be well structured and presented accurately in order to provide meaningful support to users.
AMBOSS consistently focuses on curated, guideline-based content rather than open training data. Why do you think this approach is key to security and trust?
That is the great strength of AMBOSS AI Mode. With our solid knowledge base, we not only minimize the risk of misinformation or hallucinations – we also ensure that evidence and guideline recommendations are pre-filtered and summarized by experienced clinicians and nursing professionals.
Incidentally, physicians not only work on the knowledge content, but also actively shape the product development of our AI functions so that the real needs of our users are our top priority.
We see another advantage in the combination of dialogue-based AI mode and familiar AMBOSS chapters: as a medical intelligence platform, AMBOSS is optimized for healthcare professionals and combines the dynamics of modern AI with proven knowledge structures. You don't have to explain to the AI that you are a pediatrician, nurse, or radiologist – this is taken into account based on your profile data.
Despite measurable successes, you are bound to encounter skepticism toward AI in hospitals. Where do you currently see the greatest reservations, and in your experience, what is needed to increase acceptance?
First and foremost, I find it understandable and right that decision-makers are approaching the hype surrounding AI with healthy skepticism. Studies such as NOHARM highlight the discrepancy between the speed of technological developments and scientific-clinical evaluation.
We are also aware of this responsibility and have not acted hastily, but have invested a great deal of time and resources in the development of AI Mode. For me, transparency is also the key to success here, so that the potential of modern technology is not wasted. There needs to be an open dialogue about how AI Mode works, what it is intended for, and where its limits lie. AI should not remain a magical “black box,” but should enrich healthcare.
AMBOSS develops Clinical AI from Berlin for international use. What role does the Berlin ecosystem play in this work?
For us, Berlin's ecosystem is a rich and diverse source of talent: here we find experienced software and marketing veterans, international expats from all walks of life, and a broad pool of outstanding medical professionals who want to build a career outside of clinical care.
What does it take to make responsible AI visible and classify it?
Greater visibility requires additional benchmarks such as NOHARM, honest public debate, and a willingness on the part of decision-makers to compare AI solutions in terms of their safety and actual added value.
Thank you very much for talking to us.
Note: This interview was originally conducted in German and subsequently translated into English.