As part of our work with MPA Berufs- und Handelsschule AG, a medical education institute in eastern Switzerland that has been training medical practice assistants and offering professional development for healthcare staff for over 30 years, we have been running a series of AI-focused sessions with medical doctors, practice managers, and people who work in doctors' offices across the region. We have also been working closely with the school's founders as they reshape their institution around AI.
This article is the first in a two-part series on AI for doctors' offices in Switzerland and Liechtenstein. This one covers what people are already doing with AI in medical practices today. The second will cover what is coming next, and what it takes to get started.
Swiss doctors spend 2.5 hours per day on administration. That number has risen by 25 minutes over the past decade. In our conversations with practices in the region, two answers surface every time we ask why they have not adopted AI. The first is data protection. The second, once people feel safe enough to say it, is simpler: we do not have the time.
In one of our recent sessions, we polled the room: how many hours per week do you spend on admin tasks you wish AI could handle? Answers ranged from 10 to over 20. The room included CFOs, practice managers, specialists in internal medicine, and a board member of a digital diagnostics company. None of them doubted AI could help. All of them were stuck on how to start.
Dr. Alberto Tozzi joined that session. He heads the Predictive and Preventive Medicine Research Unit at Bambino Gesù Children's Hospital, the Holy See's pediatric hospital in Rome, Europe's largest pediatric hospital and ranked sixth-best in the world. The hospital now sees patients from war zones in Ukraine and Gaza, and increasingly from the United States. Dr. Tozzi is a pediatrician by training, technically fluent enough to field questions on federated learning and edge computing, and blunt about what works and what does not.
His first advice to the room was disarmingly simple: start from the problems, not the solutions. If you begin with the problems, he said, you will always find some ways forward. The mistake most people in innovation make is starting from the solutions.
So we started from the problems. Here is what we found already working.
What this article covers
**Use case 1: Practice analytics with Copilot in Excel.** One spoken prompt, eight minutes, a full operational dashboard. No new software.
**Use case 2: AI-driven admin workflows with pseudonymization.** A Swiss practice already running AI across billing, scheduling, and operations while keeping patient data fully protected.
**Use case 3: Digital twins for patient journey optimization.** How Bambino Gesù Children's Hospital simulates clinical pathway changes before implementing them.
**Use case 4: Voice-to-report for clinical documentation.** What works today, and where the integration gap still sits.
**Use case 5: Patient data and consent.** Why the real bottleneck is institutional, not patient-driven.
Use case 1: Practice analytics with Copilot in Excel
Every medical practice in Switzerland runs on spreadsheets: billing data, appointment schedules, insurance claims, inventory lists, staffing rosters. This data lives in Excel or gets exported there from the practice's ERP system. It is operational, not clinical. There are no patient names, no diagnoses, no AHV numbers. To an AI model, it is a P&L with tabs.
Malcolm Werchota opened Excel, pressed the Copilot button, switched to Agent mode, and spoke for roughly 60 seconds. The prompt was practical, the kind of thing a medical practice assistant might ask in her first week:
Create a dashboard with monthly revenue. Show me the show rate and open invoices. Five insights on billing. Receivables problems. Which personnel are we missing next week? Which doctors do we still need to staff? Suggest concrete action items.
Then he walked away. For three to four minutes, nothing visible happened. Then dashboards started appearing. One after another. Billing gaps sorted by insurer. Staffing shortages for the coming week. Inventory items approaching expiration. Reimbursement problems ranked by severity. At one point the AI detected that the German-language formulas were broken, fixed them autonomously, and continued building.
Malcolm's observation to the room was pointed: telling an MPA or a medical doctor to use Copilot or ChatGPT triggers immediate resistance. Telling them to press a button in Excel does not. The tool is the same AI. But because it operates inside Excel, on the Microsoft infrastructure the practice already uses, on a private tenant where the data already resides, the resistance disappears. No new software, no new account, no data leaving the building.
Malcolm pushed it further. Instead of specifying which metrics to calculate, he asked the AI to identify the worst-performing metrics on its own, across hundreds of rows of data. It ran for another six minutes and surfaced patterns the practice had never identified.
Use case 2: AI-driven admin workflows with pseudonymization
The Excel demo showed what is possible with existing tools. A doctor in our immediate network showed what is possible when you go further.
This physician has implemented AI-driven workflows across his office with one ICT specialist. They use pseudonymization: identifying information is encoded before any data reaches the AI. The output quality remains high. Patient data never leaves the practice's control.
The approach, as described to us by the director of MPA, is straightforward: names, birth dates, and social security numbers are replaced with encoded identifiers before being processed. The AI returns results at the same quality level. Full functionality, without exposing any information beyond what is strictly necessary.
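The exact implementation in that practice was not shared with us, but the described pattern can be sketched in a few lines. The secret key, field names, and sample record below are invented for illustration; the point is that stable codes replace identifiers before anything reaches an AI, while the lookup table that maps codes back to real values never leaves the practice.

```python
# Minimal pseudonymization sketch (not the practice's actual implementation).
# Identifying fields are replaced with stable HMAC-derived codes before any
# data is sent to an AI; the secret key and the lookup table stay local.
import hmac
import hashlib

SECRET_KEY = b"keep-this-inside-the-practice"  # hypothetical local secret

def pseudonym(value: str, field: str) -> str:
    """Derive a stable, non-reversible code for one identifying value."""
    digest = hmac.new(SECRET_KEY, f"{field}:{value}".encode(), hashlib.sha256)
    return f"{field.upper()}-{digest.hexdigest()[:8]}"

def pseudonymize(record: dict, fields=("name", "birth_date", "ssn")):
    """Return a cleaned copy for the AI plus a local code->value lookup."""
    cleaned, lookup = dict(record), {}
    for field in fields:
        if field in cleaned:
            code = pseudonym(cleaned[field], field)
            lookup[code] = cleaned[field]
            cleaned[field] = code
    return cleaned, lookup

record = {"name": "Anna Muster", "birth_date": "1984-03-12",
          "ssn": "756.1234.5678.97", "note": "Follow-up billing question"}
cleaned, lookup = pseudonymize(record)
# `cleaned` can be processed by the AI; `lookup` stays on the practice's
# systems and is used to re-insert real names into the results afterwards.
```

Because the codes are deterministic, the same patient maps to the same pseudonym across documents, so the AI can still connect related records without ever seeing who the patient is.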
Use case 3: Digital twins for patient journey optimization
Dr. Tozzi brought a perspective from the other end of the spectrum: not a small practice optimizing billing, but a large pediatric hospital treating thousands of children with complex conditions.
His team builds virtual copies of patient journeys. They map the entire path a child takes through the hospital: every consultation, procedure, timestamp, and specialist interaction. From this data, they create a digital twin and run simulations. What happens if the endocrinologist sees the patient before the surgeon? What if the patient stays in the ward longer before moving to surgery? These questions can be explored without touching the actual care pathway.
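The hospital's actual models were not shown, but the underlying idea can be illustrated with a toy simulation. Every name and number below is invented: three steps with made-up durations, plus an invented rework penalty when the surgeon sees the patient before the endocrinologist's findings exist. The value is that orderings can be compared thousands of times in software before any real pathway changes.

```python
# Toy digital-twin sketch: model a patient journey as ordered steps with
# noisy durations, then compare two orderings by simulation. All step names,
# durations, and the rework rule are invented for illustration.
import random

STEP_HOURS = {"endocrinologist": 2.0, "surgeon": 1.5, "anesthesia": 1.0}
REWORK = 0.75  # invented penalty: surgeon-first triggers a clarification loop

def simulate_journey(order, runs=10_000, seed=0):
    """Average end-to-end hours for one ordering of the steps."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(runs):
        t = 0.0
        for step in order:
            # each step's duration varies around its mean
            t += max(0.1, rng.gauss(STEP_HOURS[step], 0.3))
        if order.index("surgeon") < order.index("endocrinologist"):
            t += REWORK  # surgeon lacked findings that were not yet available
        total += t
    return total / runs

baseline = simulate_journey(["surgeon", "endocrinologist", "anesthesia"])
variant  = simulate_journey(["endocrinologist", "surgeon", "anesthesia"])
# comparing `baseline` and `variant` answers the "what if the endocrinologist
# sees the patient first?" question without touching the real pathway
```

A real digital twin is built from actual timestamps and far richer pathway logic, but the workflow is the same: encode the journey, perturb it, measure the difference.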
They are also using historical data from thousands of oncology cases to build predictive algorithms for surgical scheduling: estimating time-to-surgery for specific conditions and using those predictions to optimize operating lists. At this volume, even small improvements in scheduling translate to meaningful gains in patient throughput.
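In its simplest form, that kind of prediction is just structured use of historical data. The sketch below uses invented conditions and wait times and the crudest possible model, a per-condition average, to show the shape of the approach: estimate time-to-surgery per condition, then order the operating list by the estimate. The hospital's actual algorithms are certainly more sophisticated.

```python
# Hedged sketch of predictive surgical scheduling. Conditions, observed
# waits, and the averaging "model" are all invented for illustration.
from collections import defaultdict
from statistics import mean

historical = [  # (condition, observed days from referral to surgery)
    ("neuroblastoma", 9), ("neuroblastoma", 11),
    ("wilms_tumor", 14), ("wilms_tumor", 18),
    ("osteosarcoma", 21), ("osteosarcoma", 25),
]

# group past cases by condition
by_condition = defaultdict(list)
for condition, days in historical:
    by_condition[condition].append(days)

# predict time-to-surgery as the historical mean per condition
predicted = {c: mean(days) for c, days in by_condition.items()}

# order the waiting list by predicted time-to-surgery, shortest first
waiting_list = ["osteosarcoma", "neuroblastoma", "wilms_tumor"]
schedule = sorted(waiting_list, key=predicted.get)
```

Even this trivial estimator shows why volume matters: with thousands of cases per condition, the averages stabilize, and small corrections to the operating list compound into real throughput gains.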
Dr. Tozzi was candid about where this stands. The experiments are ongoing, and transferring results from research into daily clinical routine remains difficult. But the path is clear.
He drew a comparison to the stethoscope, which took decades to become standard clinical practice after Laennec invented it in 1816. AI adoption in healthcare will move faster. But the integration challenge is genuine.
On the question of which metrics matter, his view was that algorithmic accuracy (ROC curves, precision, recall) is necessary but not sufficient. The metric that counts is measurable change in quality of care for patients. Everything else is a proxy.
Use case 4: Voice-to-report for clinical documentation
Doctors in Switzerland and Liechtenstein are already using GDPR- and HIPAA-compliant transcription tools. The workflow is simple: after a consultation, the doctor speaks into a device, and a structured report appears. Doctors are natural users of this technology because they already dictate constantly. Voice is their native interface.
The gap is specific. A specialist in internal medicine who runs two private practices named it directly during the session: there is no integration between AI transcription and web-based documentation software. The AI generates the report. But getting that content into the specific input masks of a practice management system still requires manual transfer.
A board member of a digital diagnostics company, who is working to integrate an approved AI product into GP and hospital workflows, confirmed this from the product side. People do not know how to introduce AI into their existing workflow. Part of it is technical. But the deeper issue is trust: can a doctor rely on the output to replace hours of manual work and reach the same conclusion?
Dr. Tozzi's perspective on this was that the critical challenge is placing AI tools at the right point in the clinical pathway. He also noted from his experience in medical education that clinician-to-clinician communication is far more effective than engineer-to-doctor communication when teaching AI adoption. With doctors, you start from a clinical case. If you start from the engineering concepts, it does not land.
New tools are closing the integration gap. Microsoft's Cowork, expected within the year, will allow AI to interact directly with applications on screen: filling forms, navigating web-based software, executing multi-step workflows. But today, for most practices, the last step between AI output and documentation system is still manual.
The honest summary: billing, scheduling, inventory, and correspondence analysis are ready now. Clinical documentation integration is close but not yet seamless.
Use case 5: Patient data and consent
Data privacy was the most cited hesitation when we polled the room. In a region where the economy, from banking to tourism to healthcare, is built on discretion, this concern is structural, not performative.
Dr. Tozzi offered a perspective that reframed the conversation. His team at Bambino Gesù regularly asks parents whether AI can work with their child's medical data to help other children. These are families dealing with rare diseases, oncology, complex cases. In his experience, no parent has ever refused. When the purpose is clear and the benefit is tangible, patients consent. They understand what their data can do.
He was not making an argument against data protection. He was making a narrower point: the resistance to AI in healthcare is more often institutional than patient-driven. Patients are ahead of the system.
This does not eliminate the compliance question. Practices in this region operate under strict standards, and they should. But it reframes where the bottleneck sits. It is not that patients refuse. It is that practices do not yet have a trusted path to begin.
Dr. Tozzi's parting thought to the room was that those who learn to use these tools well will make a genuine impact on patient outcomes. In an earlier session with our community, where he had a full hour, he put it more directly: if doctors use AI, they can save more patients' lives. People in the room had tears in their eyes.
The tools are here. The patients are ready. The question is whether the practices will find the guidance to start.
This session was part of the werchota.ai Chief AI Community. Dr. Tozzi joined from Rome. Participants included medical doctors, practice managers, CFOs, and technology professionals from across the DACH region and the United States. You can access the recording here: https://chief.werchota.ai/c/free-events/ki-in-der-arztpraxis-was-geht-heute-wirklich
The course https://www.mpa-engineering.ch/sichere-ki-anwendung-in-der-arztpraxis starts April 22, 2026. Five weeks. Hybrid. German. Built by MPA and werchota.ai for the practices that are ready to begin.
