Verena Till, Coordinator at ZVKI © Verena Till

07 February 2022

"Experts have a duty to factor in fairness, transparency and comprehensibility when developing AI systems."

How can algorithmic systems be designed for the benefit of society in order to strengthen trust in the technologies used? As a national and neutral interface between science, business, politics and civil society, the newly founded Centre for Trustworthy Artificial Intelligence (ZVKI) aims to be a forum for debate in Germany that helps laypeople understand developments in artificial intelligence and algorithmic systems, as well as the societal concerns they raise. #ki_berlin caught up with ZVKI coordinator Verena Till from the Berlin think tank iRights.Lab for a chat about the tasks and goals of the centre, the certification of AI solutions and knowledge sharing.

Hello Ms Till, our daily life is increasingly being influenced by technological advances, such as in the field of artificial intelligence, which is also fuelling fears in society due to its complexity. What concerns does the newly founded Centre for Trustworthy Artificial Intelligence (ZVKI) address?

Algorithmic systems and artificial intelligence are not just terms from technical discussions; they are part of our everyday lives. AI systems are used to help make decisions that have a significant impact on our lives. For some, they bring to mind film or playlist recommendations from streaming services, adapted to the user's viewing and listening habits. But think also of credit checks and selection procedures for job applications. The ZVKI investigates the overarching question of how AI technology can be designed for the benefit of society and how a people-centred approach can succeed. Putting people at the centre is the basis for trust in AI and the driver of positive social development in digitisation.

Since AI has to be conceived and designed in an interdisciplinary manner, different questions and challenges arise for different stakeholders. For example, users must be educated and equipped with the necessary digital skills to use the technology safely. Companies, in turn, need experts and guidelines for their work with AI, while legislators are responsible for creating the appropriate legal framework. The ZVKI would like to serve as an interface between science, business, politics and civil society, inform about consumer-relevant aspects of AI, facilitate public discussion and develop instruments for the evaluation and certification of trustworthy AI.

How did the idea of founding the ZVKI come about and how was the project implemented?

The development of legal frameworks for the use of AI systems can hardly keep up with the pace of technical development. The concerns and interests of consumers in this respect have also not been sufficiently taken into account thus far.

The idea of creating a place where all these strands are brought together and where consumer protection is the focus was born in 2021: the Centre for Trustworthy Artificial Intelligence. The project is the brainchild of our director Philipp Otto.

Funded by the Federal Ministry of Justice and Consumer Protection, the iRights.Lab has been building the centre in cooperation with the Fraunhofer Institutes AISEC and IAIS as well as the Freie Universität Berlin since autumn 2021.

What are your goals at the start of the project and what aspects do you want to focus on first?

As a forum for debate in Germany, the ZVKI aims to help laypeople understand developments in artificial intelligence and algorithmic systems, as well as the societal concerns they raise. It strives to impart knowledge about AI. We will provide target group-specific services and formats: in addition to digital and analogue information material, there will be podcasts, films and events. Our goal is to reach all citizens, whether at home, at their desks or - insofar as the pandemic situation allows - occasionally in the pedestrian zone on Saturdays.

In addition to civil society, we also address an interdisciplinary specialist audience. The scientific monitoring of developments relating to AI is an essential part of our work at the ZVKI. We are investigating, among other things, which legal and political steps need to be taken to protect people from the negative effects of AI and want to enter into dialogue with various stakeholders.

The evaluation and certification of AI will also play a major role. In order to fulfil the interdisciplinary approach of the project, working groups and exchange formats will be set up here in the near future. In addition, we are planning a large event on 31 March where network partners will present their work and conduct a "deep dive" into various focus areas.

The focus of the project also includes the founding of a non-profit association that will continue to pursue the project goals beyond the funding period.

Many processes are already supported by AI solutions without us really noticing or questioning them. Is an increased transfer of knowledge about what AI can be and what it should and shouldn't be essential, or should we rather rely on AI seals of approval to create trust?

Both! AI systems are used in more and more areas of life, such as shortlisting applicants for job vacancies, creating editorial content, filtering news reports and detecting fake news, or in medicine for the diagnosis or treatment of diseases. Systems that have such a strong impact on our everyday lives must be as transparent as possible and their functions comprehensible. They must also, and above all, have a high level of trustworthiness. In our view, basic knowledge of how algorithms work is essential in order to make independent and self-determined decisions and to critically question the use of AI. The ZVKI would like to make a significant contribution here.
However, since it is clear that not every one of us is an AI expert, business, science and politics must also play a role in creating trust. Experts are already obliged to take into account certain parameters such as non-discrimination, fairness, transparency and comprehensibility when developing AI systems. If AI solutions meet the criteria of trustworthiness and are even certified, this naturally creates additional trust among users. We have seals of approval in other areas of our everyday life and we have learned that they are helpful. Certification is therefore very conducive to trust, but cannot replace the necessary basic knowledge.

If the understanding of artificial intelligence is to be increased, how can this knowledge be conveyed in concrete terms and which target groups do you want to address?

The transfer of knowledge about digitisation, especially artificial intelligence, can and must be done in different ways. It is important to find the right format for the respective target groups and not to overwhelm people. This means that we will also have very low-threshold services to attract "AI newcomers". Information events in pedestrian zones and youth clubs or adult education centres are conceivable. We would like to explicitly address all age and educational groups and cooperate with different stakeholders.

For experts, we are also planning formats such as panel discussions or small town hall meetings, where participants can delve deeper into the topic.

The goals of the ZVKI also include the evaluation and certification of AI solutions. How should this work and what concepts already exist?

Instruments for evaluating and certifying AI are an important basis for people's trust in AI systems. This type of quality assurance for AI applications is already being pursued by many different players. The iRights.Lab has also dealt with the topic in the past and, in an open participatory process, developed nine rules for the design of algorithmic systems: the Algo.Rules.

On this basis, various practical aids for the common-good-oriented design of algorithmic systems were created for many relevant target groups. Our network partners Fraunhofer AISEC, Fraunhofer IAIS and the FU Berlin have also been working intensively on the subject of certification for a long time and bring practical knowledge from various projects with them. In addition to our own expertise, we would like to integrate existing concepts and initiatives from other players, develop them further and shape them together with those players.

On what scale should your work be carried out: Is this a topic that needs to be tackled globally, within the framework of a community of shared values, or should it take place at national level? What role do regional ecosystems play in this?

First of all, the ZVKI sees itself as a national interface between science, business, politics and civil society and would of course also like to establish itself as an independent player in the AI ecosystem. We see ourselves as networkers and we bring the most important stakeholders together to jointly discuss solutions to the relevant questions and challenges of AI as part of our society.

This will initially take place at a national level, but in the future also at European and international level. The European Commission's proposed Artificial Intelligence Act, for example, regulates the use of AI, affects all European countries and thus also requires international dialogue.
Regional ecosystems are always helpful for pooling strengths and achieving impact. The central question is: where can we act regionally and work together in order to have a national effect?

With your think tank iRights.Lab, the Fraunhofer Institutes AISEC and IAIS as well as the FU Berlin, the project has players on board who are very close to the hubs of AI innovation. How do you see the cooperation between companies, start-ups and research in the Berlin AI ecosystem, as well as at a national level?

We are well networked, nationally and internationally, and have already carried out various cooperation projects with iRights.Lab in the past. At both iRights.Lab and the ZVKI, it is very important to us to work together and cooperate on the key issues of digitisation and to constantly encounter new perspectives.

It was and will always be important to us to work with a broad range of partners at all possible levels: public authorities, research institutions, companies, associations and civil society organisations, nationally and internationally. The same applies here: wherever there is digitisation, we need to be there too. We are doing that well, and we would like to achieve the same with the ZVKI.

Thank you for the interview and good luck with your work.