According to Techmeme, the European Commission has launched a formal investigation into Google’s use of AI Overviews. This probe, announced on May 29, 2024, centers on whether Google has adequately compensated content creators and news publishers for the data used to train its AI models. Google integrated the feature into its core search service as a direct response to the competitive threat from OpenAI’s ChatGPT, and the investigation follows close behind. Critics, like Dirk Auer, immediately slammed the probe as an overreach that pushes competition enforcement to the detriment of consumers. They frame it as a fundamental shift away from economics-based antitrust law toward a tool for political resource redistribution.
The Real Shift in Antitrust
Here’s the thing: this isn’t really about whether AI Overviews are anti-competitive in a traditional sense. The core complaint, as Dirk Auer and others point out, seems to be that Google isn’t paying enough to certain industries for the privilege of improving its own product. That’s a huge blurring of lines. Antitrust law was built around concepts like monopolization, predatory pricing, and consumer harm. Now, it’s being wielded to ask, “Did you compensate these specific players sufficiently?” That sounds less like competition policy and more like a form of price regulation or a mandated revenue-sharing scheme. It turns the law into a tool for what Auer calls “rewarding effective rent-seekers.”
The Political Economy of Complaints
So what’s really driving this? Look at the players. The publishing and media industries in Europe have been vocal critics of Google for over a decade, lobbying hard for various forms of “link taxes” and compensation schemes. This AI probe feels like the latest chapter in that long-running battle. As Lazar Radic and Kaye Jebelli have discussed, enforcement is increasingly shaped by “the political grievances of well-connected industries.” It’s a world where success in the market is seen as a sin if you haven’t also secured political permission. The Commission gets to expand its regulatory empire, a connected industry gets a potential payout, and the consumer welfare standard? It gets quietly sidelined.
A Chilling Effect on Innovation
Think about the signal this sends. A company integrates a transformative AI feature to keep its core product relevant and better serve users. The regulatory response is an investigation into whether it paid off other businesses enough. What does that do for incentives? It basically tells every tech firm in Europe that innovation comes with a political tax. You can’t just build a better product; you have to navigate a minefield of pre-emptive negotiations and potential grievances from any industry that feels its content was “used.” It prioritizes the distribution of existing wealth over the creation of new value. And in a global AI race, that’s a recipe for stagnation.
Where Does This End?
The scary part is the lack of a limiting principle. If Google needs to pay publishers for AI training data, does every AI startup? What constitutes “enough” compensation? It’s a subjective, political question, not an economic one. This approach, as noted by commentators like Peter Suhel and James Czajka, makes the regulatory process itself the product. The goalposts can always move based on who complains the loudest. In the end, we’re left with a system that punishes market success and rewards political maneuvering. And that’s bad for everyone except the regulators and the rent-seekers.
