AI and the New Iron Curtain: Europe’s Technology Challenge Beyond Cookies

Hagen Hübel
Dec 8, 2023

As increasingly reported in the news, the European Parliament is taking a decidedly critical stance on AI, imposing strict regulations and making market access difficult for potential participants. The underlying problem is the paradigm shift from the world our traditional laws were written for (copyright, liability) to the technological foundations of artificial intelligence: algorithmic certainty is giving way to probability-based machine learning.

But what does this mean for our European future, and even more so for the future of our children, whose social lives are deeply rooted in the mobile and online world? Have we thought about the consequences if this digital sector in the EU were to suddenly disappear? I would like to show here why this danger is real.

(Drawn by DALL-E, an AI tool that creates images based on written descriptions. The content of this article was used as input)

When talking with young German and European founders, currently studying at TUM (Munich) or TU Berlin, it's clear they are incredibly skilled and keen to start companies in the tech/AI field. So what is currently on their minds?

They are pondering whether to head to the USA or Asia.

Europe hardly figures in their considerations anymore, and a glance at our regulation frenzy makes it clear why.

Aleph Alpha exemplifies the political effort needed to establish something here based on modern technological patterns. And, to be honest, their Large Language Models haven’t really impressed me, not in comparison to other providers. Actually, not at all.

But our government can now boast,

“Hey, we do have AI in Germany, even funded with 500 million. That should be enough for the next 100 years.”

Considering that Europe already fell behind in the Web 1.0 and Web 2.0 eras, and that Web3 faces regulations so stringent that its management is left largely in the hands of major financial institutions, it is somewhat ironic that the one regulation we do have produces large, intrusive cookie-consent boxes. That law, while aimed at protecting user privacy, has mostly become a prominent and cumbersome part of the online experience.

We don’t have a Google, YouTube, Snap, Facebook, Tinder, Amazon, Twitter, TikTok, Instagram, GitHub, NVIDIA, or Shopify, and we’ll never have an OpenAI.

This means that the applications on your children’s smartphones are predominantly developed by companies outside the EU. These companies are increasingly integrating AI components into their services. Consequently, there’s a likelihood that these services will either become unavailable in the EU or users in the region will have to contend with outdated versions. Persisting with the 2023 versions of these applications would be comparable to using Windows XP in a modern computing world.

Furthermore, in the rapidly evolving world of technology, the absence of AI components in software and products, including cars, online shops, delivery services like Lieferando, or multimedia apps like Spotify, social media, and games, would render them as unappealing as presenting a 14-year-old with an MS-DOS terminal in 2023.

This highlights a fundamental conflict between emerging technological foundations and existing laws such as copyright and liability. In the automotive industry, for example, traditional manufacturers like Volkswagen are struggling with the principles of autonomous driving as developed by Tesla, where static algorithms are replaced by machine learning and probabilistic outcomes. They have a poor understanding of what is happening and why. And who is to blame when something goes wrong, when it was no longer a static algorithm that applied the brakes but the “learned behavior” of a neural network?
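To make that contrast concrete, here is a minimal, purely illustrative sketch in Python. The function names, model, and thresholds are hypothetical and not taken from any real vehicle software: a traditional rule can be read and audited line by line, while a learned model only yields a probability whose internal reasoning is hard to explain.

```python
# Hypothetical illustration of the two paradigms; all names and values are made up.

def should_brake_rule_based(distance_m: float, speed_kmh: float) -> bool:
    # Classic, deterministic logic: anyone can read the rule and predict the outcome.
    return distance_m < speed_kmh * 0.5  # hypothetical safety margin

def should_brake_learned(model, sensor_frame) -> bool:
    # Learned behavior: a trained model returns a probability instead of following a rule.
    # Why it crosses the threshold in a given situation is much harder to explain,
    # which is exactly where the liability question becomes murky.
    p_obstacle = model.predict_proba(sensor_frame)  # hypothetical trained model
    return p_obstacle > 0.9
```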

This shift suggests a need for a significant rethinking of our approach to technology and lawmaking, moving away from outdated methods and mindsets, often symbolized by the image of old men drafting laws with pens, towards more dynamic, future-oriented strategies that embrace the complexities and potential of AI-driven technologies.

Until our laws evolve to align with these changing technological principles, tech providers and creators of new social applications will likely continue to steer clear of the European market. This cautious approach stems from the potential legal complexities and uncertainties in a region where current regulations have not yet fully adapted to the nuances of modern technology and AI advancements.

The only (sarcastic) upside I can think of is that parents want to limit their kids’ mobile usage. That will happen naturally in 2024/25, at least in the European Union.

A significant challenge for European companies will be securing access to advanced AI technical services. This is essential to ensure that digital voice assistants in customer support lines evolve beyond the limited quality associated with the current era of Deutsche Telekom, a scenario all too familiar to EU citizens.

If you’re considering self-hosting Large Language Models (LLMs) and other advanced models, be prepared for significant upfront costs: the required hardware and infrastructure can run into millions of dollars per month, from day one, even before a single paying customer has been acquired. Such an endeavor is rarely feasible without substantial venture capital, a resource far less readily available in the EU for the reasons mentioned above. Consequently, tech providers in Europe are forced to depend on foreign suppliers, predominantly from the USA, for their technological needs.

So we’re facing the cookie and Google Analytics problem, but on a much larger scale. That’s why these US providers don’t even let European customers in. And merely using a VPN is not an adequate workaround: these providers frequently require a mobile number for verification, and European mobile numbers are typically not accepted. This extra verification layer underscores how deep the access restrictions for European users go.

A screenshot of claude.ai, incidentally the most serious competitor to OpenAI, serves as an example. It is still not accessible to Europeans and, according to Anthropic, that is not expected to change for the time being.

Over a year ago, I already said that if a centralized AI economy starts excluding Third-World countries, it will lead to enormous conflicts.

I was somewhat wrong. It’s actually the EU that’s being excluded, not the Third World.

(A German version of this article is available here: https://0xhagen.medium.com/ki-und-der-neue-eiserne-vorhang-europas-technologische-herausforderung-jenseits-von-cookie-bannern-cd95c6152ec4)
