The AI Act puts trustworthy artificial intelligence on a broad regulatory footing in the European Union for the first time. The TÜV AI.Lab develops test scenarios and methods for testing and certifying AI, with the goal of making the TÜV companies the leading testing organizations for AI. Founded in 2021, initially under the umbrella of the TÜV Association, TÜV AI.Lab was spun off as a joint venture in 2023.
After many years as Senior Policy Advisor at the Federal Chancellery, Franziska Weindauer, an experienced policy and digital expert, took over the management of the AI project lab in November 2023. High time for a conversation about the certification of AI solutions in practice, the influence of the AI Act on the ecosystem, and the various projects of the TÜV AI.Lab.
Hello Ms. Weindauer, thank you very much for taking the time for this interview. You took over as CEO of TÜV AI.Lab in November. What is the first thing on your agenda? What do you want to focus on?
The enormous momentum in the regulatory landscape, especially with regard to the recently passed AI Act, doesn't leave any of us much time, which is why we are glad to have been properly up and running as a limited company since November. Technological development is accelerating as well. Just look at the speed at which new innovations are coming onto the market, most recently the releases of Sora and Claude 3, but also very specific applications in individual sectors. This rapid pace also sets the pace for us at TÜV AI.Lab.
For the immediate future, we are initially concentrating on translating the requirements of the AI Act for high-risk applications in those areas where the regulation now applies in addition to existing regulations, for example medical devices or intelligent machines.
In these areas, products that contain AI components are already being certified; in the future, they will also have to comply with the requirements of the AI Act. Furthermore, we are developing systematic test criteria and procedures for the high-risk applications that will be covered by the AI Act for the first time, such as HR applications and the use of AI in education or administration.
The TÜV AI.Lab aims to develop test scenarios and methods for testing and certifying AI. What does this look like in practice? Which aspects are the focus when it comes to AI standards?
We see our practical task as being to work through the legal requirements and the standards that are currently being developed and to break them down into specific applications. The main thing here is to understand which requirements apply in detail and how we can translate them into practice.
The particular challenge with AI is that it is not possible to calculate so-called failure probabilities, as is done in the traditional testing areas of the TÜV companies. In addition, we have no statistics and little experience. With a classic technical application, all of this is a given: I can ask clear questions and answer them clearly. How likely is it that a cable will break or a steam boiler will explode? Such calculations cannot simply be transferred to AI systems. This is where we need to develop completely new approaches and procedures. In practice, the question is: how good does the respective AI system have to be, and how can I measure this in concrete terms?
In your opinion, how will the EU's AI Act change the AI sector in Europe, also in an international comparison?
For the first time, the AI Act lays down uniform requirements that AI applications must meet. Among other things, the regulation stipulates that AI applications in high-risk areas must be accurate, robust and cyber-secure and that risks to safety, health and fundamental rights must be mitigated. This introduces a basic standard for all high-risk classified AI systems. However, this standard will also have a significant impact on all AI systems used in lower risk areas.
I believe that this uniform gold standard will have a global signaling effect. Ideally, we will be pioneers at the international level and set an example for other regulations that are currently being developed. Moreover, the requirements are now clearly defined within Europe: everyone involved knows what they need to prepare for and what to be guided by. The common European regulation is the foundation for the development of harmonized standards. I expect a large number of tools, offerings, and guidance documents that will help companies use AI sensibly, and correctly from the outset.
You say that "trustworthy AI should be a European USP". How do we achieve this goal and what about our competitiveness with countries that tend to think little of AI regulations and data protection?
I am convinced that quality will prevail. There is a market and a need for trustworthy AI, not least because it is essential for users. Personally, I also believe it is necessary to uphold citizens' interests in protection, for example with regard to their safety and fundamental rights.
And of course, the pace of innovation in Europe could be faster in some areas. But we are doing it right from the start. Even if we are not always the first, we develop sustainable, sensible, and high-quality products that do not pose any significant risks. In my view, guaranteeing quality and safety in the long term is more important than always being at the forefront. Of course, we have to be careful not to miss the boat, which is why we in Germany must not overdo it when implementing and enforcing the AI Act. After all, it's not just the regulation that matters, but also how it is implemented in practice.
Neighboring European countries are often much more pragmatic in this regard, enabling more while still complying with the law. This is another reason why we at TÜV AI.Lab aim to develop pragmatic solutions that meet the requirements of all parties involved, including companies that have to survive in a competitive environment.
According to a study by the Chamber of Industry and Commerce (IHK), the use of AI in Berlin companies has doubled compared to last year. In your opinion, how far along is the German economy with the use of AI?
The figures, including the annual figures collected by the EU Commission on the use of AI in Germany and Europe, show time and again that Germany is not among the frontrunners and clearly has some catching up to do, including in the use of AI. These figures also match my personal impression and tie in with what I just said. In Germany, we often make things too difficult for ourselves in the private and, above all, business use of new technologies; we see too many hurdles and risks and are too concerned with avoiding mistakes. In other countries, there has so far been a greater mental openness to simply trying things out.
But my impression is also that ChatGPT has served as a wake-up call in many places, and that companies are now becoming more active and picking up speed. I therefore have good reason to hope that this experience will lead to other AI systems and applications being tried out and adopted quickly. I am convinced that the resource and effectiveness gains of AI speak for themselves and that the widespread use of assistive AI will become the norm in Germany in the foreseeable future.
Where do you see the hurdles and challenges that need to be overcome?
The biggest hurdle that we need to overcome in the near future is the remaining uncertainty in the practical application of the AI Act. It is now the task of all stakeholders to clarify as quickly as possible how the regulation can be implemented in practice and how it should be structured. This involves harmonized definitions, norms and standards, which are a prerequisite for the design of specific testing processes.
A key challenge for companies, but also for all other stakeholders, is the rapid pace of technological development. On the one hand, this raises the question of how German companies can remain or become competitive and how state-of-the-art AI applications can be developed and used in Europe, both in science and in industry. The high pace of innovation is also a challenge for us as a testing organization: how can we ensure that our testing procedures also cover future developments? How do we test systems that are constantly evolving? We are already working on specific approaches to address this.
In a previous interview, we already talked about the TEF Health project, in which TÜV AI.Lab is also involved. How has it developed? Are there any other projects already underway or being planned?
We are very satisfied with the TEF Health project. It is particularly pleasing to see how well the collaboration with the consortium partners works across national borders. Everyone is pulling in the same direction to advance the common European market. As a first interim result, we published a white paper on the certification of AI in medical devices back in January.
In addition to the TEF Health project, we are currently participating in the AI mission of the Federal Ministry for Digital and Transport (BMDV), which is being implemented under the leadership of acatech. Mission KI is a comprehensive project in the field of AI quality and trustworthiness, in which effective test criteria and test procedures for trustworthy AI are being developed and tested on specific, cross-sector use cases. We are playing a key role in a pillar that is developing an AI quality standard that works for products in the low-risk area but can also be adapted for high-risk use cases in the relevant sectors.
We are also currently coordinating other projects in which we can provide useful support with our expertise in responsible and ethical AI.
How do you see the Berlin AI ecosystem and what potential do you still see in the field of artificial intelligence?
The Berlin AI ecosystem is wonderfully lively and very innovative. Like the city of Berlin itself, it brings together an absolutely diverse range of players, who in turn are connected by an incredibly strong spirit. I am currently noticing a strong acceleration here too; the ecosystem is growing, connects well with other ecosystems, and is definitely open to new players.
Compared to other cities, the Berlin AI ecosystem is relatively decentralized, i.e. not yet clearly grouped around a central player or campus. This is very often a strength, but can sometimes also be a weakness. We ourselves as TÜV AI.Lab definitely benefit greatly from the proximity we have to the other companies and institutions on the Merantix AI Campus. Sometimes it feels like Berlin lacks the ten to fifteen DAX companies or patrons that can shape and promote such ecosystems elsewhere. On the other hand, as a strong startup location with a high quality of life and the corresponding talent pool, we have opportunities that are the envy of other locations.
In any case, I firmly believe that Berlin can become a global hotspot for AI quality if we join forces, translate our potential into results, and show how trustworthiness, quality, and innovative strength can be combined.
Thank you very much for the interview.