That shift was especially visible in the DACH region, where organizations operate in highly regulated, high-trust environments. In these contexts, adoption can’t be chaotic. It needs to be people-first, governance-aware, and practical enough to survive beyond the first workshop.
At werchota.ai, our year was built around one central idea:
AI adoption is a capability muscle that takes time to build.
And capabilities are built through clarity, practice, and trust.
We delivered workshops, leadership sessions, keynotes, and longer collaborations across financial services, manufacturing, industrials, life sciences, and the public sector — and we also began scaling our learnings beyond client rooms through media and structured programs.
A snapshot of who we worked with in 2025
Rather than list every engagement, here are selected organizations and institutions we supported across the year:
- Financial services: LGT, Raiffeisen Bank, Volksbank
- Manufacturing & industrial: Syntegon, Uhlmann, ENGEL, STARLIM, Haberkorn, Arthrex, Action Composites, KION, Kostmann
- Life sciences & medical: Swiss MedTech, NovoArc, MPA (medical practitioners course)
- Aerospace / communications: Frequentis
- Technology enablement: Microsoft
- Public sector: Feldkirch, Vorarlberg, Lustenau, Dornbirn
- Ecosystem & education: HSLU, ESADE, VDMA, Leoben, FEAT Ventures
The details differed by sector, but the patterns were remarkably consistent. Here’s what defined our work.
1. Making GenAI usable in everyday work
Most teams don’t need “AI inspiration.” They need a bridge between what they do today and what GenAI can genuinely improve tomorrow — without breaking quality, accountability, or existing workflows.
That’s why a large part of our work in 2025 focused on hands-on enablement: workshops designed to help people move from curious to confident through practical use-cases and repeatable working patterns.
You can see this in corporate and operational settings such as Unilever and Ospelt, where teams explored how GenAI supports management work and decision-making. You can see it in the Copilot-oriented sessions we ran with Microsoft, where adoption becomes real only when people learn how to use tools consistently and responsibly in day-to-day workflows.
And you see it very clearly in MPA — a format designed specifically as a course for medical practitioners. In medical contexts, usefulness has to come with responsibility: clear boundaries, high-quality output, and an emphasis on human judgement. It’s not about doing “more with AI.” It’s about doing the right things better.
The result we aim for in these sessions is simple: people leave with confidence, shared language, and practical next steps.
2. Bringing leadership alignment to the forefront of AI adoption
As the year progressed, one lesson became even sharper: adoption doesn’t fail because people can’t learn tools. It fails because organizations don’t align.
In leadership and cross-functional sessions, the real work often looked like this:
- defining what “responsible use” means internally,
- prioritizing use-cases with real payoff,
- clarifying guardrails that empower teams (instead of scaring them),
- and building ownership so AI doesn’t remain “everyone’s job and nobody’s job.”
This theme showed up repeatedly in leadership-oriented work with organizations such as ENGEL, Haberkorn, BWBLegal, NovoArc, and Syntegon — where the goal isn’t simply awareness, but the conditions for adoption that scales: clarity, accountability, and consistency.
When leadership aligns, the organizational temperature changes.
The conversation becomes calmer. The experimentation becomes more focused. And “pilot fatigue” starts to disappear.
3. Sector-specific depth: manufacturing, industrial, and banking
AI adoption is not one-size-fits-all — and the most durable outcomes come when the approach matches the operating reality of the sector.
Manufacturing and industrial: longer, more frequent collaborations
In manufacturing and industrial environments, “useful” has a high bar. If AI doesn’t fit the operating model — decision cycles, process discipline, documentation, quality expectations — it won’t last.
That’s why some of the most meaningful work in 2025 went beyond one-off sessions into deeper collaboration rhythms. We saw this through repeated engagements and accelerator-style sequences with organizations like Uhlmann, as well as leadership workshops and deep dives with Syntegon, ENGEL, STARLIM, KION, Action Composites, and Arthrex.
The core question in these environments is: how do we integrate AI responsibly into how we operate?
Banking: speed, trust, and safe scaling
In banking, organizations want to move quickly — and they must protect trust.
Engagements with LGT, Raiffeisen Bank, and Volksbank reflected a consistent banking reality: adoption works best when different functions build a shared understanding of where AI helps, where it doesn’t, and what standards are required to use it safely.
The win: being able to evaluate, adopt, and govern AI as a repeatable organizational capability.
Moving from single sessions to longer collaborations
One of the most encouraging patterns in 2025 was how often work evolved naturally into longer arcs.
Many engagements began with a workshop or keynote that created shared understanding. From there, teams wanted follow-ups to convert interest into prioritized use-cases — and increasingly, they asked for longer collaboration structures because the biggest challenge in adoption isn’t motivation, it’s continuity.
That’s where longer sequences and program-shaped collaborations (like the multi-step work with Uhlmann or accelerator-style formats with industrial clients such as KION) make a difference: learning compounds, standards become clearer, and internal capability grows without burning out champions.
Government and public-sector work in the DACH region
Public-sector adoption has its own gravity: accountability is high, trust is fragile, and decisions must stand up to scrutiny.
In 2025, we supported municipalities and regional public entities through workshops and leadership enablement, including work with Feldkirch, Vorarlberg, Lustenau, and Dornbirn.
In government settings, “AI adoption” cannot mean uncontrolled experimentation. The focus is responsible enablement:
- building AI literacy at leadership level,
- identifying use-cases that are appropriate for public administration,
- establishing practical guardrails for staff,
- and strengthening shared judgement so teams can move forward confidently.
When public-sector teams leave a session feeling more capable and more careful, that’s a strong outcome — and it’s how AI becomes a tool for better service, not a source of uncertainty.
4. Scaling knowledge beyond client rooms: The AI Cookbook podcast
One of the most important shifts in 2025 was that we didn’t only teach inside organizations — we started scaling practical learning beyond them.
With The AI Cookbook, we built a platform to share real-world AI adoption insight at a much bigger scale: what’s changing, what matters, what’s hype, and what can actually be implemented in real teams.
The podcast reflects the same people-first philosophy behind our workshops: AI is not about replacing human capability — it’s about amplifying it. And in a world where the landscape changes weekly, consistent, grounded education becomes a form of leadership support.
We are now at 100+ episodes; you can listen to the English and German episodes on Spotify, YouTube, or Apple Podcasts.
5. From workshops to repeatable learning: AI Fit Academy
As demand grew, a clear question emerged:
If workshops work — how do we make capability-building repeatable and scalable without losing the hands-on quality?
That’s where the AI Fit Academy comes in.
The Academy takes what works in the strongest client sessions — practical workflows, clear standards, applied learning, and momentum over time — and turns it into a structured program designed to help participants build real working capability, not just “AI awareness.”
6. A bigger team, and a bigger “we”
Finally, 2025 was also a year of growth behind the scenes.
We started with a core team of four, and as our work expanded across industries and formats, we grew into a broader team of practitioners and collaborators spanning strategy, enablement, development, and market expansion.
That growth matters because it strengthens delivery: more depth, more continuity, and more capacity — without losing what makes the work effective in the first place: strong facilitation, practical craft, and a people-first approach.
Looking ahead, and introducing CHIEF AI Community
The organizations making the most progress are the ones building the most capability. That's what we're here to do across the DACH region and beyond: help teams adopt AI in a way that is practical, responsible, and built to last.
As part of those efforts, we have launched a pilot of a community designed for AI leaders navigating a very uncertain transition. We will share more about it in a separate article, but if you are here, this is the link to join: https://chief.werchota.ai/
