© Bundesdruckerei GmbH

20 February 2026

“Sensitive areas of application mean sensitive decisions.”

Artificial intelligence is increasingly finding its way into highly sensitive areas of government and society. It is precisely in those areas where trust, traceability, and legal certainty play a central role that it will be decided whether AI can truly fulfill its potential. Bundesdruckerei GmbH has been working for years at precisely this interface: between technological innovation, public service, and the responsible use of AI.

With the AI Competence Center of the federal administration (KI-KC), Bundesdruckerei is putting user-centered AI development into practice and, together with the public administration, is testing specific applications, from language models and image recognition to anomaly detection. Camilla Dalerci is Senior Innovation Developer Data Science and Deputy Project Manager at the AI Competence Center. In this interview, she provides insights into the work on trustworthy AI, the special requirements of public administration, and the question of how technological excellence and social responsibility can come together.

As a launch partner of the #ai_berlin hub, Bundesdruckerei is also part of the Berlin AI ecosystem and contributes its experience to the exchange between public administration, business, and research.

 

Ms. Dalerci, when you look at the Federal Administration's AI Competence Center, what stands out is its strongly user-centered approach. What does it mean in practice to think about AI not primarily in technological terms, but in terms of the specific needs of the administration?

At the AI Competence Center, we first look at the specific working realities in the administration, such as high documentation loads, complex legal frameworks, time pressure, or limited IT resources, and how administrative staff deal with them. From this, we derive which tasks AI can actually ease, where it can improve quality, and what degree of automation is appropriate. The focus is always on how we can use AI intelligently to make administrative tasks easier, more efficient, and more user-friendly for people.

This approach also changes the development process itself: we work iteratively, together with future users, test early in real-life scenarios, and take administrative requirements such as transparency, explainability, and security into account from the outset. In close cooperation with the administration, this results in AI solutions that are compatible with existing processes, trustworthy, and suitable for everyday use.

Our goal is to shape the future of AI together and ensure that public authorities can not only try out new technologies, but also use them sustainably.

 

You are working on AI applications for particularly sensitive areas of use. What additional requirements arise there for data, models, and development processes that are often less visible in other contexts?

Sensitive areas of application mean sensitive decisions, so the requirements are correspondingly strict. The demands on the reliability of AI output are very high, for example due to legal obligations. In addition to technical performance, the accuracy of the results and a high degree of human oversight are important here. In our environment, AI systems must be designed so that they consistently meet the legal, ethical, and organizational requirements of public administration.

At the data and model level, this means a high degree of transparency regarding origin, quality, and limits of use. Models must be traceable, robust, and controllable. That is why we have also launched the MÖVE initiative, “Evaluating language models for public administration.”

The goal is to identify risks such as bias or hallucinations at an early stage and systematically reduce them. This also changes the development processes: security, AI governance, and compliance are not downstream checkpoints, but an integral part of development.

This requires a deep understanding of administration as an application context as well as continuous evaluation. This also means that humans must always have the opportunity to review or override decisions.

 

Trustworthy AI is a term that is often used but can be interpreted in very different ways. How do you personally determine whether an AI system really lives up to this claim?

In my view, trustworthiness in AI is not demonstrated by individual characteristics, but by how well a system can be understood and accounted for in practice. An AI system is trustworthy when it is clear what it can be used for, where its limits lie, and who is responsible for its use.

In concrete terms, this means that the way it works must be comprehensible, decisions or recommendations must be explainable, and risks such as malfunction or bias must be actively addressed.

It is also crucial that trust is institutionally secured, for example through clear governance structures, regular evaluations, and the possibility of human control.

Trustworthy AI is not created by a label, but through continuous testing, openness, and a willingness to critically develop systems further. That is why I was involved in the “Implementation Guide for the AI Regulation” as part of the Bitkom working group on artificial intelligence.

 

The AI Competence Center deliberately creates prototypes rather than finished products. Why is this exploratory approach so important for public administration in particular, and where do you see its greatest added value?

We are working on solutions that did not exist before. The deliberate focus on prototypes enables public administration to test AI technologies with minimal risk, without committing to systems or providers at an early stage. In a highly regulated environment in particular, it is important to first understand whether and under what conditions AI can be used effectively before it is transferred to regular operation.

That is why the AI Regulation stipulates that AI real-world laboratories, i.e., controlled test environments in which innovative technologies are tried out under real-world conditions, must be set up in every EU country by August 2026.

The exploratory approach also creates space for learning and building AI skills. Data scientists, subject matter experts, and end users can work together to test assumptions, refine requirements, and identify conditions for success. In my view, the greatest added value lies in building capacity for action and AI expertise for the future.

 

In the open-access reference book “Artificial Intelligence and Us,” published in the fall, experts from Bundesdruckerei examine the topic of AI from a technological, ethical, and social perspective. Why do you think this complexity is necessary when we talk about and make decisions on AI?

The reference book makes it clear that technological innovation is only sustainable if it is embedded in an ethical and social framework from the outset. With generative AI, we are faced with technologies that represent a particularly natural interface to human communication and at the same time expand it. This is precisely why a human-centered approach is crucial: AI must be oriented toward human needs, values, and responsibilities.

Different perspectives help to identify risks early on, reveal conflicting goals, and set responsible guidelines. This contextualization is also central to our daily work. As proven experts in their fields, Dr. Kim Nguyen and Carmen Dencker contribute precisely this overarching perspective, bringing together technological, ethical, and social issues and thereby providing important guidance for the development of responsible AI solutions at Bundesdruckerei.

 

Berlin increasingly sees itself as a location for applied and responsible AI. What role does exchange within the regional ecosystem, for example through initiatives such as the #ai_berlin hub, play in your work, and what can public administration learn from these networks?

Professional exchange within the regional ecosystem plays a central role in our work. In Berlin in particular, public administration, research, civil society, and start-ups operate in close proximity. Networking helps us to exchange experiences and identify early on which ideas and players could become relevant in the future.

Initiatives such as the #ai_berlin hub create low-threshold spaces for precisely this exchange and synergy effects.

As the KI-KC, we want to promote interdisciplinary thinking and cooperation. Precisely this attitude is crucial for thinking about AI in a way that is not only efficient, but also sustainable and oriented toward the common good.

 

Bundesdruckerei gives female experts space and visibility with formats such as “Women in AI.” What has been your experience when different perspectives and backgrounds come together in AI teams, and how does this affect the quality of solutions?

AI development is so complex that it requires different perspectives and backgrounds. Diversity in AI teams changes the quality of the questions that are asked. In my experience, assumptions are questioned more quickly and problems are thought through more broadly.

Formats such as “Women in AI” help to make one facet of this diversity visible and to strengthen it in a targeted manner. This has a direct impact on the work: solutions are thought through more robustly and better adapted to real-world contexts.

We also consciously take part in formats such as those run in collaboration with Fraunhofer IAO and Women in AI & Robotics in order to contribute practical perspectives and actively shape the exchange on diversity and responsible AI. However, there is still room for improvement: according to recent studies, only 22 percent of AI professionals are currently women. In our team, the figure is already 50 percent.

 

Thank you very much for talking to us.

Note: This interview was originally conducted in German and subsequently translated into English.