© Dirk Reps

31 March 2025

“This is the message I want to send: We can all do AI and have a say in its development!”

Feminist artificial intelligence - a term that is much more than just a technical concept. It is about the question of how AI systems reflect and shape the world we live in, and whether we manage to integrate values such as equality, justice and freedom into their development. FemAI, an innovative Berlin-based company led by CEO Alexandra Wudel, is taking on precisely this challenge. She and her team are working to make AI not only smarter, but also more ethical and inclusive.

In our interview with Ms Wudel, we talk about FemAI's vision and goals, the opportunities and challenges of feminist AI and the concrete steps that need to be taken to prevent discrimination in AI systems. She also provides exciting insights into the future of her work, which focuses on AI certifications as well as educating and sensitising society.

 

Ms Wudel, with ‘FemAI’ you are working on feminist AI. How do you define this approach and how does it differ from conventional AI development approaches? What opportunities does feminist AI offer us?

Feminist AI is an evidence-based futures-research process in which values are embedded in AI systems. Feminist Artificial Intelligence (FAI) explores how feminist values such as freedom, equality and justice can influence AI development, and it challenges the male-dominated, Western-centric biases that currently shape AI systems. In our latest research paper, which we presented in Berlin on 8 March 2025, we explore the core principles of FAI, its role in dismantling power structures and how it compares with other frameworks such as environmental, social and governance (ESG) criteria. There, we also present an AI tool that we believe already deserves a medal.

 

As CEO of FemAI, you are committed to the human-centred development of AI. What are the biggest challenges in implementing this approach in practice?

We have decided against a rigid certification model because, during the consultations on the AI legislation, we recognised the danger of regulation that lags behind the technology, and we therefore want to reposition AI certification. The biggest challenge is selecting the right use cases and demonstrating scalability while keeping our focus. As we start our next fundraising round in April 2025, securing the budget to put our work into practice is a critical milestone.

 

You recognised early on that artificial intelligence has a gender-specific dimension. What measures do you think should be taken to prevent discrimination through AI?

Firstly, we should make discrimination in AI more visible. Hence our certification approach, i.e. rewarding AI tools that have seriously engaged with non-discriminatory approaches. The concept of ‘bias’ is still too rarely part of planning and development processes in the AI age, even though it is omnipresent. That is why we regularly give talks at organisations such as Google, the EU delegation in Washington, KPMG and many others to raise awareness. The AI legislation contains some good provisions, for example on so-called diverse data sets (Annex XI Section 1(2)(c) of the EU AI Act). Unfortunately, the EU AI Act is far from sufficient, and we are particularly concerned about enforcement. AI systems involve more than algorithms and development teams. That is why we are conducting research in the FemAI think tank on AI literacy and diverse data sets.

 

You often talk about the ethical challenges of AI, and you have drafted a position paper with action points on the European AI Act with FemAI. Do you believe that the current discussions and regulations are sufficient to protect society from the risks of AI in the long term? 

No, of course not. All laws need to be translated into the AI age. The German General Equal Treatment Act (AGG) is not adequately designed for multiple discrimination (i.e. when a person is discriminated against on the basis of both their origin and their sexuality, for example), and serious cases of discrimination occur particularly where discrimination is intersectional. The GDPR and the EU AI Act conflict with each other, including in the context of diverse data sets. Beyond European and national laws, we should not lose sight of the fact that we need global rules in a global competition. For me, the Global Digital Compact (GDC) is a pioneering project, but it lacks binding force.

 

In the past, you have worked on guidelines for the Federal Foreign Office and the United Nations. What differences do you see in international co-operation in the development of ethical AI standards?

Most of the AI standards I have consulted on recognise the need for different perspectives. Unfortunately, however, I still observe far too often that homogeneous groups overlook essential pillars of AI governance. This can lead to dangerous blind spots. For example, I was very surprised that the GDC, as it stood in December 2023, did not include a provision on the use of autonomous weapon systems. This was simply overlooked. It is therefore important to bring together expertise and diverse perspectives in the development of AI governance projects. I particularly enjoyed working in the German Bundestag on the AI strategy, as it involved a months-long process with extensive hearings. I did not get to see the result because I left to devote my full attention to FemAI. That step also reflects the fact that I am tired of consultations like these, which get stuck at the theoretical level. I am interested in how effectively they are implemented, and that is often where they fail.

 

The methods used by Cambridge Analytica in the 2016 Trump campaign first prompted you to look into AI. Now we are once again in the aftermath of a US presidential election, and federal elections have recently taken place in Germany. How do you assess the role of AI in political campaigns and opinion-forming today? Do you see differences between the USA and Europe in how AI influences political elections?

I would love to give a 45-minute keynote on this question. As part of the campaign team of the youngest German MP in 2021, I experienced the importance of social media in election campaigns not only in theory but also in practice: I managed her campaign. Since then, I have been committed to digital policy. AI must become suitable for mainstream use. At the moment, AI fuels fears instead of utopias for the future, and that does not do this tool justice. In the right hands, we can do a lot of good with it. That is why we decided to label ‘good AI’ with our FemAI brand instead of continuing to feed its misuse. I hope that politicians take AI and the coming revolution, or transformation (a lot will change, that is for sure), seriously.

 

After three years, you left the German Bundestag in September. Why is now the right time to concentrate fully on FemAI and what are the next steps for your think tank? 

Thank you for this question! FemAI has gone through a transformation. We kept asking ourselves: ‘How can we maximise our impact?’ The decision was finalised at the beginning of March: we will split FemAI and operate in two markets, AI certification and AI education. Both business areas will be fed with research results from our think tank. My time in the German Bundestag was one of the most instructive experiences of my working life so far, and I am grateful for the trust placed in me, especially by the Bundestag administration, which gave me a lot of responsibility. I was able to build trust there and take our research work to the point where, after extensive analysis over the winter months and close observation of world political events, I can now start with a new structure.

Many thanks to the team for the many hours of workshops and our unique external network that stood by our side day and night. 

In the process, we also placed the term feminism in a global context and hinted at a ‘new era of feminism’.

 

You were honoured with the ‘AI Person of the Year 2024’ award. What does this honour mean to you and what message do you want to send with your work in the AI world?

Believe me, when I heard about it, I had to read the text three times before I could believe it. To put it into perspective: I grew up in a village of 1,200 people, washed dishes when I was 14 and never thought I would ever receive an award like this. After the first few days, I felt very grateful; I was proud. At the same time, I also see this award as a responsibility to generate the greatest possible impact. Since then, I have received eight more awards and have even appeared on newspaper covers. I am very grateful for all of this, but I also see the need to translate this attention into our business model so that we can actually achieve implementation at scale. Through our cooperation with Tokenize.it, we have found a way for the general public to invest in FemAI and thereby break down privilege. This is the message I want to send: We can all do AI and have a say in its development!

 

Why do you see Berlin as a suitable place to work on feminist AI? What potential does the city harbour for your work and the further development of ethical AI solutions?

What I particularly like about Berlin is its diversity and creativity. Berlin has history. Berlin is an ecosystem that we are increasingly becoming part of. For many, Berlin is a symbol of freedom.

For me personally, Berlin is also part of my family. My father lived here before he died, far too early. My grandma Doris still lives here. Our AI certification platform is named after her: DORIS 1.0.

 

Thank you for your time.

Note: This interview was conducted in German and translated into English.