Europe faces a major challenge in AI infrastructure: while the US has enormous data center capacity, Europe shows a clear gap – both in available hardware and in the efficient use of existing resources. Against this backdrop, Lyceum is working on a platform designed to make GPU capacity easier to use and to simplify development processes. Magnus Grünewald, CEO of the company, deliberately bases this work in Berlin, where research, startups, and industry are closely interlinked.
In this conversation, he explains how he assesses Europe's infrastructure deficit, what role Berlin plays in advancing AI technologies, and what momentum the new #ai_berlin hub could generate for networking and digital sovereignty. He also discusses the requirements of modern AI teams, the importance of efficient GPU orchestration, and how technical performance, cost awareness, and sustainability can be reconciled in infrastructure development.
Hi Magnus, a recent Bitkom study shows that the US has 48 gigawatts of data center capacity, while Germany has about 3 gigawatts. Lyceum raised €10.3 million in June to build European AI infrastructure. How do you assess this gap – and what role could Lyceum play in closing it?
The gap is even larger for real AI data centers – meaning water-cooled facilities with very high power density. Pure data center capacity is only part of the problem. Europe lags significantly behind both in the number of available GPUs, meaning graphics processors for AI calculations, and in the software stack.
We're developing an infrastructure platform that makes European GPU capacity not just available, but above all more usable. Our software eliminates out-of-memory errors – memory overloads that crash calculations – typically doubles GPU utilization through intelligent orchestration, and reduces setup and DevOps effort, meaning operational effort for developers, by up to 90 percent.
While many focus primarily on new hardware, we solve systemic inefficiency. Today, average GPU utilization at large cloud providers is 40 to 60 percent. We use AI-powered resource allocation that reduces effective compute costs by a factor of 2.5 – it feels like more than doubling capacity without building a single new data center. Our ambition is not just to catch up, but to set a new standard.
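To make the arithmetic behind those figures concrete, here is a minimal back-of-the-envelope sketch in Python. The hourly price is an illustrative assumption, not an actual quote from Lyceum or any provider; the point is only how utilization translates into effective cost per unit of useful work:

```python
# Back-of-the-envelope: how utilization drives effective compute cost.
# The hourly price is an illustrative assumption, not an actual quote.
HOURLY_PRICE = 4.00  # $ billed per GPU-hour, whether the GPU is busy or not

def effective_cost(utilization: float) -> float:
    """Cost per hour of *useful* GPU work when the full hour is billed."""
    return HOURLY_PRICE / utilization

baseline = effective_cost(0.40)   # typical average utilization today
optimized = effective_cost(1.00)  # fully packed via orchestration

print(f"40% utilized:  ${baseline:.2f} per useful GPU-hour")   # $10.00
print(f"100% utilized: ${optimized:.2f} per useful GPU-hour")  # $4.00
print(f"factor: {baseline / optimized:.1f}x")                  # 2.5x
print(f"savings: {1 - optimized / baseline:.0%}")              # 60%
```

Read this way, the factor of 2.5 is simply the inverse of 40 percent utilization, and the roughly 60 percent savings mentioned later in the interview is the same effect expressed as a discount.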
You have headquarters in Berlin and Zurich, but are building capacity in Denmark and France. Why does Berlin remain your headquarters?
Berlin is our headquarters because talent from all over Europe comes together here. The city combines a vibrant tech and startup scene with a strong deep tech community and a mentality focused on building and rapid execution – exactly the environment in which a company like Lyceum should be built. Many of Europe's most exciting AI-first teams are based here. This gives us direct access to early users with whom we develop our infrastructure close to actual needs.
Our team in Zurich and our GPU capacity in Denmark and France complement this ideally: from Berlin, we think in consistently European terms, while the hardware sits wherever it can be operated sustainably and efficiently. What matters is that AI teams in Europe have access to powerful, sovereign infrastructure – regardless of where the racks are located.
The #ai_berlin hub, which opened in October, aims to connect startups, corporates, and research. What do you hope for from the hub initiative?
The #ai_berlin hub can play an important role – particularly in three areas. First: networking with real output, not just events. Startups benefit from targeted matchmaking formats – with research institutions, other startups, or corporates. Here the hub could act as a "deal enabler."
Second: more visibility for European AI infrastructure. The narrative that serious AI training automatically means AWS or Google Cloud still dominates. The hub could make European alternatives more visible and comparable. And third: a strong bridge toward policy. For topics like AI regulation or data center funding, translating between practice and political decision-making is crucial. The hub can bundle the ecosystem's perspectives. What startups concretely need: less bureaucracy, flexible pay-per-use models instead of long rack contracts, simple integration into existing workflows, and transparent cost forecasts.
You promise "One-Click GPU Deployment" and automatic hardware selection. Can you show how this works with a concrete example?
A startup develops, for example, a PyTorch model – a framework for machine learning – for medical image analysis. Until now, that has typically gone like this: you select a GPU instance on AWS, set up SSH access, meaning encrypted remote connections to the server, configure the environment, copy your code over, and hope the memory is sufficient. If an out-of-memory error occurs after two hours, the whole thing starts over on a larger, more expensive instance.
With Lyceum, the developer stays in their familiar environment – such as VSCode or Jupyter. There they write the code as usual and click "Run on GPU." Our platform analyzes the code in real time, estimates memory requirements and runtime, recommends the appropriate hardware – such as an A100 instead of an oversized H100 – and shows a clear cost forecast upfront. In the background, the code is automatically containerized, executed on the optimal GPU, and the results land directly back in the IDE, the development environment. No SSH, no YAML configuration files – the files that typically describe deployment and orchestration setups – and no trial and error.
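Lyceum hasn't published the internals of its estimator, so purely as an illustration, here is a rough sketch of the kind of sizing calculation such a recommendation could rest on. It assumes plain fp32 PyTorch training with the Adam optimizer; the function names, headroom factor, and GPU list are hypothetical choices for this sketch, not Lyceum's actual API:

```python
# Illustrative only: a rough sizing estimate of the kind a scheduler could
# use to recommend a GPU. This is NOT Lyceum's actual algorithm or API.

GPUS = {  # usable device memory in GiB (public spec-sheet values)
    "A100-40GB": 40,
    "A100-80GB": 80,
    "H100-80GB": 80,
}

def estimate_training_gib(n_params: float, headroom: float = 1.5) -> float:
    """fp32 training with Adam: weights + gradients + two optimizer states
    = 16 bytes per parameter, scaled by a crude headroom factor for
    activations and memory fragmentation."""
    return n_params * 16 * headroom / 2**30

def recommend_gpu(n_params: float) -> str:
    need = estimate_training_gib(n_params)
    # Pick the smallest GPU that fits, keeping a 10% safety margin.
    for name, mem in sorted(GPUS.items(), key=lambda kv: kv[1]):
        if need <= mem * 0.9:
            return f"{name} ({need:.1f} GiB estimated)"
    return f"no single GPU fits ({need:.1f} GiB); shard the model or shrink the batch"

# A mid-sized medical-imaging model of ~300M parameters:
print(recommend_gpu(300e6))  # -> "A100-40GB (6.7 GiB estimated)", no H100 needed
```

A production estimator would also account for batch size, activation memory, and mixed precision, but the principle is the same: size the job before launching it instead of discovering the limit two hours in.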
How do you convince organizations that have used AWS or Google Cloud for years?
Our strongest argument is cost efficiency: with traditional clouds, you pay for an instance whether it's utilized at 40 or 100 percent. We optimize utilization and hardware selection automatically and bill based on usage – in many cases resulting in savings of around 60 percent. At the same time, we eliminate trial and error by analyzing the code upfront, selecting the appropriate hardware, and providing clear runtime and cost forecasts, which avoids expensive failed attempts.
Additionally, we enable EU data residency without performance compromises, which is particularly critical for healthcare, the financial industry, and public institutions. Our developer experience is another unique selling point: developers stay in their familiar IDE and don't have to operate cloud consoles or complex orchestration stacks. And finally, we address digital sovereignty – European infrastructure makes companies less dependent on the pricing policies and geostrategic decisions of hyperscalers.
Of course, we also encounter resistance: lock-in to the AWS ecosystem, procurement departments that prefer the familiar name, or the general inertia of existing processes. Our response to this is pragmatic pilot projects, hybrid setups, strong reference customers, and concrete ROI calculations.
EU data residency and sustainability are part of your offering. At the same time, the electricity demand of German data centers has almost doubled by 2025. How do you deal with this tension?
The rising energy demand from AI is real, and we take this very seriously. For us, the most important lever is how efficiently we use the deployed energy. At our locations in Denmark and France, we therefore deliberately rely on power systems with a high share of renewable energy – location selection is a central component of our sustainability strategy.
At the same time, we increase the actual utilization of GPUs through our software and avoid idle time. When a data center's GPUs run at only 40 percent utilization on average, energy is being wasted. As utilization rises, energy consumption per unit of computing power falls accordingly. Our goal remains clear: to build a European AI infrastructure that is not only sovereign and affordable, but also works so efficiently that it consumes significantly less energy per result.
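A rough worked example illustrates the effect. The power figures below are ballpark assumptions for a data-center GPU server, not measured values from any specific hardware:

```python
# Rough illustration: why idle GPUs waste energy. The power figures are
# ballpark assumptions for a GPU server, not measured values.
IDLE_W, BUSY_W = 100.0, 700.0  # assumed draw per GPU slot, idle vs. under load

def megajoules_per_useful_hour(utilization: float) -> float:
    """Average energy burned per hour of useful GPU work."""
    avg_watts = utilization * BUSY_W + (1 - utilization) * IDLE_W
    joules_per_hour = avg_watts * 3600          # W * s -> J over one hour
    return joules_per_hour / utilization / 1e6  # spread over the useful fraction

low = megajoules_per_useful_hour(0.40)   # today's typical average
high = megajoules_per_useful_hour(0.80)  # after better packing

print(f"40% utilization: {low:.2f} MJ per useful GPU-hour")   # ~3.06 MJ
print(f"80% utilization: {high:.2f} MJ per useful GPU-hour")  # ~2.61 MJ
print(f"energy saved per result: {1 - high/low:.0%}")         # ~15%
```

The lever here is the idle share: every watt a server draws while its GPU waits produces no result, and the lower the utilization, the more of that overhead each result has to carry.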
Thank you for the conversation.