Berlin has developed into a significant hub for innovation in the field of Artificial Intelligence (AI). Given the rapid progress in AI technology and the ethical challenges that accompany it, a responsible approach is of paramount importance. In this context, the BERLIN ETHICS LAB at Technische Universität Berlin (TU Berlin) plays a central role. Its Berlin Ethics Certificate is an inter- and transdisciplinary certification program that gives students the opportunity to focus on specific aspects of ethics and the reflection on technology and science during their academic studies. The lab's goal is to analyze and negotiate shared values and norms that should guide how digital society handles new technologies. In a conversation with Prof. Dr. Sabine Ammon, we explore how AI and ethics can go hand in hand.
As head of the project Knowledge Dynamics and Sustainability in the Technological Sciences at TU Berlin, you work at the intersection of philosophical reflection and engineering practice. Could you tell us more about the focal points of your research and teaching in this area?
Our research group has a dual mandate. On the one hand, we introduce ethical questions and reflections on the philosophy of science into the technical disciplines; on the other, we open up the perspective of the engineering sciences to the humanities and social sciences. This integrated approach is precisely what is needed to develop sound ways of dealing with AI technologies, as these technologies not only represent promising innovations but also have the potential to disrupt our society. In our research areas of robotics, AI, and medical technology, we are directly involved in the development teams from the outset, aiming to bring in a broader societal perspective and to collaboratively formulate a research vision that encompasses ethical values. This principle also guides our teaching activities at TU Berlin.
We are convinced that addressing the challenges in the realm of new technologies requires a collaborative effort among engineering, humanities, social sciences, and societal stakeholders. Therefore, we foster interdisciplinary project work to uncover ethical issues in technologies and identify unintended consequences, enabling us to initiate alternative development pathways at an early stage.
In your opinion, what role do ethical standards and guidelines play in regulating the deployment of AI technologies? Are there areas where stronger regulation is necessary?
I consider ethical standards and guidelines central to developing responsible AI technologies. Used appropriately, they can become co-designers of new technologies aligned with the well-being of our society. However, AI technologies and their application domains are so diverse that a differentiated approach is required. The risk-based approach of the European Union's planned AI Act is certainly a sensible starting point. If the intended application poses unacceptable risks, bans should be imposed, as with social scoring systems or the real-time processing of biometric data. For high or limited risks, a series of conditions applies. But that is not enough. The history of technological development offers many examples of technologies being used differently than originally intended. This is also evident in the debate about ChatGPT as a so-called General Purpose AI, whose purposes are not yet known during the development process. Beyond usage, ethical problems can also arise in production, for instance concerning the origin and preparation of data. A bigger issue is that legal regulations take a long time to come into effect and inevitably lag behind rapid technological advancement. Instead of waiting for legislation, it is crucial to integrate ethical considerations into the early stages of research and development. Among other things, this demands changed educational structures in the computer and engineering sciences. To address this, the team at the BERLIN ETHICS LAB launched the Berlin Ethics Certificate two years ago.
Could you tell us more about your Ethics Certificate?
The Berlin Ethics Certificate is an enrichment program open to all students of the Berlin University Alliance, which they can pursue alongside their regular studies. We offer a basic and an advanced program with various areas of specialization, such as Ethics of AI or Ethics of Technology and Technology Impact Assessment. In the courses, we prepare students for interdisciplinary project work and provide foundations in applied ethics. We sensitize them to the mutual influence of technology and society and to the power of visions and values in technological development, and we show them how, together with civil society actors, they can identify unintended technological consequences early on. In this way, students acquire a toolbox of methods for carrying ethical questions and reflective approaches into their future professional practice.
Recently, the first graduates completed the Berlin Ethics Certificate. What benefits will this have for them, and where could it further evolve?
This summer semester, the first students received the Ethics Certificate. Among them was, for example, a student of Industrial Engineering who not only pursued his personal interests but can now also demonstrate a clear additional qualification. A graduate in the History of Technology, in turn, can use the certificate to show his practical engagement with contemporary issues through a historical lens. Together, they were able to learn a great deal from each other and to understand how different perspectives can be integrated into technology impact assessment. With these learning experiences, graduates of the Ethics Certificate not only stand out from the usual pool of applicants; as a network of like-minded individuals, they can also make a significant contribution to aligning the development of new technologies more closely with ethical design principles.
What opportunities and risks arise from the use of AI, and how can an appropriate balance be achieved?
To assess the opportunities and risks of AI, we must not simply speak of 'the' AI, but rather examine in much greater detail which AI technology and which application are in question. Especially in promising fields like medical and healthcare applications, where significant risks arise from sensitive data, unintended consequences and their prevention must be considered from the outset. I consider it essential not to view AI technologies and ethics as opposing forces. Once ethical standards become an integral part of technology research and development, we can course-correct within the process itself and seek an appropriate balance. For this purpose, we have developed the approach of integrated ethics at the BERLIN ETHICS LAB, which empowers developers to take ethical aspects into account. Integration must occur iteratively: a technical development process inherently deals with many unknowns, and the further it progresses, the better unintended consequences can be recognized. When ethical values are part of the development framework, the product can be oriented around them. This requires a commitment from companies and the willingness, in the worst case, to pull the plug.
What specific steps do you recommend to minimize potential risks and negative impacts of AI systems on individuals and society?
We must move away as quickly as possible from the traditional division of labor in which developers refine the technical product and society subsequently determines its acceptance and ethical compatibility. Ethical values and the assessment of potential societal impacts must be an integral part of development and innovation processes from the very beginning. Here I also see a responsibility for research and innovation funding to pay much greater attention to these aspects in their measures. At the same time, there is an urgent need to change the educational structures in the computer and engineering sciences so that all graduates leave university with foundational knowledge of technology ethics and technology impact assessment.
How crucial are transparency and explainability? For instance, in cases of unclear data foundations or when erroneous or outdated data becomes the basis for decisions made by AI systems?
I consider transparency and explainability crucial factors in enabling users to interact confidently with AI applications. To avoid manipulation, there must be a transparent indication that an AI system is at work, along with a reference to the system's limitations. To prevent critical decisions in sensitive domains from being based on unclear, erroneous, or outdated data, clear quality standards are essential to ensure the reliability of algorithmic knowledge production. Similar considerations apply to explainability. What is important to recognize here is that an explanation does not come from the technical system's response alone; it always emerges in a dynamic interplay between the system and the user. When an AI system is used by a doctor in the diagnostic process and she is aware of its strengths and weaknesses, she can evaluate the result and use it as an additional puzzle piece in the search for causes.
Who is primarily responsible for addressing ethical questions related to AI technologies? Public research, the private sector, civil society, or regulators?
Here, we must distinguish ethical expertise from the broader process of ethical reflection. Incorporating ethical questions into AI technologies can only be achieved collectively: by public research, which integrates ethical aspects into the early stages of technology development through funding requirements; by regulators, who establish the legal framework; by the private sector, which aligns innovation with ethical values; and by civil society, which contributes its value foundations to technology development. Bringing all these stakeholders to the table and engaging them in a common discourse is already a fundamental element of addressing ethical questions. The success of approaches like Responsible Research and Innovation has shown that such endeavors can be practiced effectively. In addition, the inclusion of ethical expertise must be emphasized, which can only be meaningfully integrated in close collaboration with technical expertise. Here again, interdisciplinary collaboration is crucial, in both public research and the private sector, if we genuinely aim to align AI technologies with ethical values.
In light of the dynamic evolution of the AI ecosystem in Berlin, what specific changes have you observed, and what strategies is the BERLIN ETHICS LAB pursuing to address these challenges?
In Berlin, we see how important it is to many stakeholders to align new AI technologies with ethical values, even if they do not yet know exactly how to achieve that. The key insight here is that ethics does not have to be a deterrent to innovation. At the BERLIN ETHICS LAB, we are currently developing a toolkit to assist and advise development teams and startups in integrating ethical questions into their products. We view shaping responsible AI futures as a collaborative task with civil society actors, and the AI ecosystem in Berlin provides the foundation for taking a leading role in this respect. Used correctly, ethics can become a crucial driver of innovation and a competitive advantage in AI development – a challenge we are addressing with great emphasis!
Thank you for your time, Professor Ammon!