
The Tech Coup
There’s a coup against democracy happening, says Dutch politician Marietje Schaake, who warns against the growing influence of tech giants on society. But if Europe wants to use the technology on its own terms it will come at a cost.
This article was originally published by MandagMorgen.
It has been translated and edited for use in FARSIGHT.
The struggle for control over digital technology is intensifying. While politicians attempt to regulate artificial intelligence and social media, the influence of tech giants on virtually every sector of society has continued to grow. For Europe, the challenge is compounded by its relatively weak homegrown tech sector and a growing dependency on the services of especially American providers.
One of the EU’s most prominent voices in the debate is the Dutch politician Marietje Schaake. A former member of the European Parliament, she speaks of an outright “tech coup.” That’s also the title of her recent book, The Tech Coup: How to Save Democracy from Silicon Valley, published this past fall, in which she describes how more and more of society’s fundamental functions are being outsourced to tech giants – and thus fall outside democratic control.
For a few years, during Joe Biden’s administration and Danish politician Margrethe Vestager’s tenure as European Commissioner for Competition, it seemed like a balance could be achieved.
The EU launched a set of relatively strict regulations that imposed new demands and responsibilities on the tech giants who want to do business in Europe. The Commission also started taking a more offensive approach, including issuing large fines for breaching European anti-trust legislation. The US Government brought its own anti-trust cases against Google, Amazon, Apple, and Meta, and worked to establish global norms and security measures in relation to AI.
Like so much else, this all changed with the election of Donald Trump in 2024.
The tech giants have moved into the White House, and AI continues to develop at breakneck speed, driven by unprecedented investments aimed at securing a dominant position in a technology that permeates every aspect of life and promises to transform everything.
Marietje Schaake’s book provides in-depth descriptions of how the tech giants gain decisive influence in areas like education, health, and national security. Reading the book while observing how real-world events have not only caught up to but far exceeded her warnings is an almost surreal experience.
Since the book was published, talk of regulation has all but fallen silent.
“It’s a power grab,” Marietje Schaake tells me over a video call.
“It’s basically the next step in their tightening grip on decision making. The companies are asserting more and more power, taking more and more decisions without countervailing powers, without oversight, without a mandate, without accountability.
But it’s a peculiar dynamic, because they are using state power to also dismantle the state. It’s hard to discern any sort of doctrine or political philosophy, it just seems like naked capitalism.”
For 10 years, until 2019, Schaake was a member of the European Parliament. She then moved to Silicon Valley, where she worked as Policy Director at Stanford University’s Cyber Policy Center. She is a columnist for the Financial Times, is involved with several organisations, and regularly appears in various international media.
In her book, Schaake recounts how Meta’s AI chatbot BlenderBot once identified her as a terrorist. At first glance it was nothing more than an odd mistake, but on further reflection it is a clear demonstration of the dangers inherent in placing too much trust in AI.
“But imagine how the similar mislabelling of an LGBTQIA+ person or a religious minority in an unfree society might play out,” she writes.
Schaake is part of an effort to formulate an EU codex for responsible use of AI in accordance with the AI Act, which recently came into effect.
The EU has a history of passing legislation that eventually shapes global standards, as companies have found it simpler to comply with Europe’s stricter rules in order to maintain access to the European market.
Given that nearly all innovation in AI is happening in the US and China, it may seem almost naïve to assume the EU can set the terms in that field as well. According to Schaake, the close ties to Trump mean that the tech giants can largely decide for themselves how far they are willing to push the boundaries of AI experimentation.
“Few people disagree that there’s a considerable risk coming from ever more sophisticated AI models. The risks have been articulated by the CEOs and developers at major tech companies themselves, who warn that this technology can lead to the end of humanity,” she says.
“We need rules – whether it’s for short-term risks like discrimination or disinformation or long-term risks like breakaway AI systems or other national security concerns.”
In November 2023, the UK brought together politicians and CEOs from 28 countries – including the US and China – for a summit on AI safety. The parties agreed to work together on a set of joint guidelines for AI. The purpose was to avert the most destructive scenarios that spring from the use of artificial intelligence, including the development of bioweapons or machines seizing power from humans.
Around the same time, Joe Biden signed an executive order that set a list of AI safety guidelines. Concurrently, the EU, US, and G7 nations have worked on formulating the Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems.
Trump has since withdrawn Biden’s AI guidelines, and US tech CEOs have become more vocal in their opposition to the European demands. Sam Altman has threatened to pull OpenAI out of Europe, and Meta has indicated that it won’t accommodate the EU’s AI Act. It’s not uncommon for new services and AI models to launch later in Europe than in the US – if they launch in Europe at all.
“The AI companies feel emboldened to do things their own way and won’t help create or respect legal boundaries – and Trump will probably help them,” says Marietje Schaake.
“But I think we will see that societies that don’t put these guardrails in place will also feel the consequences of those risks materialising.”
Since Trump’s first term, there has been talk in Brussels of “digital sovereignty” and “strategic autonomy”. So far, though, this hasn’t led to the formulation of a serious strategy for avoiding dependency on foreign technology and services.
The EU has enacted several major sets of legislation aimed at regulating the digital domain. The General Data Protection Regulation (GDPR) protects personal user data, and the Digital Markets Act and Digital Services Act place more responsibility on platforms for their content while setting demands for openness and data sharing to avoid monopolisation.
The latest piece of regulation, the AI Act, came into effect in January 2025, setting requirements for safety and transparency and banning certain particularly high-risk applications – such as political profiling and live facial recognition.
It’s the first time the tech giants have been seriously affected by this type of regulation, despite having spent fortunes lobbying against it – both in Brussels and in Washington.
But after Trump’s inauguration and the subsequent firing of Lina Khan as chair of the Federal Trade Commission – an independent agency of the US government responsible for the enforcement of civil antitrust laws and the promotion of consumer protection – the EU is in a more isolated position.
Does Europe have the strength to insist on its principles?
“It will mean a test of historic proportions of Europe’s resolve: what price are we willing to pay for our sovereignty?” asks Marietje Schaake.
“The question will be whether the European authorities, lawmakers, and courts will allow a weakening of these principles, or whether they will stand up for them.”
“It’s about political unity, also internally within EU member countries, since in some countries, including my own, the far right is among the biggest parties,” says Schaake.
It’s not a given that there will be sufficient support for additional regulation. For companies competing in global markets, not having access to the newest and most powerful technologies could present challenges. There’s a worry that the EU’s regulation will lead to a weakening of European competitiveness, or that innovative European startups will prefer the US regulatory environment and leave.
Does Europe’s regulation come with a major economic cost?
“It will. We will pay a price. But I think we should be more self-confident in Europe in our belief that we have the innovative capabilities to do it in our own way. The Silicon Valley model is not the ideal way to develop and implement technology,” Schaake says:
“Until now, the major American AI companies could claim that they were the only ones capable of creating these technologies, but with China’s DeepSeek it has become clear that it’s possible to build models with comparable performance with fewer resources – and that opens up a lot of opportunities for European entrepreneurs.”
Marietje Schaake isn’t convinced that the American giants will actually pull out of the European market in response to regulation.
“I think it can be posturing on the part of companies to deliberately delay the launch of their services in Europe – it’s used as some sort of threat. It’s a bit strange when they claim that they are delaying launch because they are unsure of what the regulation demands. If anything, the law creates certainty by making it very clear what the obligations are,” she says.
In her book, Marietje Schaake explains how the growing dependence of states on digital systems means that important decisions are left in the hands of private companies – often in ways that are not transparent. Artificial intelligence can decide who is allowed parole, who gets to the front of the line for a medical operation, which job applicants get an interview, or which targets a drone should attack.
In critical situations like war or social unrest, social media and satellite communication become crucial infrastructure, but it’s left to the tech CEOs to decide who is allowed access.
“The more public authorities outsource central state functions to digital systems, the greater the loss of control and sovereignty, as well as the competencies necessary to understand how these systems function,” Schaake explains.
She believes that the public sector should take advantage of the fact that it is among the tech companies’ biggest clients to make more demands. This could include requiring transparency in AI decision-making, or guarantees that the systems do not discriminate and are not vulnerable to data leaks.
Schaake proposes the development of a public European ‘stack’ – a suite of software and infrastructure that covers the basic digital functionality of society, from payment systems to healthcare and governance. It would enable public authorities to steer the development and remain in control of the systems they depend on.
But aren’t private companies better equipped to develop and maintain our digital systems?
“Definitely. And we should leave it to the private companies to develop them – but not without any regulation. I mean, companies are producing life-saving medicines, but not without any guardrails. Companies are providing our means of transportation, but not at their own discretion.”
“I’m a liberal myself. I believe that the private sector has enormous capabilities, but I think the market has overreached. It’s out of balance right now in relation to the rule of law, our democratic principles and the public interest. It can be rebalanced.”
What would be the most effective way for the EU to react to American pressure?
“The most urgent thing is to create better European alternatives, better ecosystems, more investment, and fewer barriers to capital access,” says Schaake, elaborating:
“There’s enormous talent in Europe, but we often lose it to the US because that’s where the capital is. We’ve been talking about creating a capital union in the EU for 20 years. But apparently, it’s only when we are up to our lips in water that we start moving. We are not doing enough.”
The EU commission has signalled a readiness to enact a range of countermeasures if Trump enacts new tariffs. Informally known as ‘the bazooka’, these measures include shutting off American companies from public procurement, disallowing US companies from enforcing their patents, or barring US companies from investing, or even selling their products, in Europe.
“It’s hard to discuss retaliation when we don’t yet know the measures we will be retaliating against. But demonstrating that we have countervailing measures is a strong signal to send,” Marietje Schaake says.
“Essentially, no one wants this. The EU is an open trade bloc, we benefit from rules-based trade, so to have a significant trading partner discard the rules and obstruct open trade is a disaster.”
“But you can’t just take it lying down – then the harms done become more asymmetrical. So, we must hope that our countermeasures impacting certain technologies in the US will also impact voters there, who will then say that this is unhelpful,” says Marietje Schaake.