U.S.
The disruptive potential of artificial intelligence (AI) technologies has only become more evident as they have matured, and fierce competition has developed among both existing technology leaders and new entrants to push forward the AI stack and its applications. The Biden administration had signaled an intent to enforce proactively and prospectively, suggesting that, in its view, the development of digital technology sectors offered a cautionary tale for AI. However, the Trump administration has made establishing American leadership in AI a major priority and has pledged to remove regulatory barriers to AI innovation and adoption. Despite the current administration’s vision for a national, permissive AI policy, state legislatures have considered and passed a flurry of bills that would limit how AI is developed and deployed, and state antitrust enforcers have not always followed their federal counterparts.
There are few judicial decisions that squarely address whether and how the rapid disruption of AI impacts antitrust analysis.[1] However, courts are more broadly showing increasing sensitivity to the pace of technological change and the potential for even the strongest incumbents to be suddenly disrupted, which in turn may lead to caution in evaluating future harm theories and in crafting prospective remedies, especially in frontier technology sectors.
For instance, AI did not feature at all in the U.S. Department of Justice’s (DOJ’s) 2020 complaint concerning Google’s search distribution agreements. Four years later, AI warranted its own section in the court’s liability opinion despite the conclusion that AI did not appear primed, at that time, to displace general search. And when the U.S. District Court for the District of Columbia issued its remedies order in September 2025, the court recognized that the government’s first witness had “placed GenAI front and center as a nascent competitive threat.” Based in part on these findings over the course of the case, the court rejected DOJ’s more aggressive structural and prohibitory remedies, instead focusing on a data-sharing requirement and including generative AI firms in that remedy to further bolster that emerging competition.[2]
Agency enforcement in AI-related markets in the coming years will take place in the shadow of the Trump administration’s broader AI policies. The administration issued an AI Action Plan (Action Plan) in July 2025, and throughout the year has issued numerous executive orders directed at creating a pro-development and pro-adoption environment intended to help American firms “achieve global dominance” in AI. These orders and policy initiatives can affect competition enforcement both directly and indirectly.
For instance, the Action Plan suggests that the Federal Trade Commission (FTC) should review all investigations, consent orders, and injunctions, seeking modifications as necessary to avoid “unduly burden[ing] AI innovation.” The Action Plan also includes several broader policy recommendations, such as encouragement of open-source and open-weight AI models. The agencies’ leadership have mirrored the Action Plan’s goals and policies in remarks on enforcement in AI industries, suggesting that enforcers may align with the AI Action Plan as relevant to assessing competitive conditions or effects.
Large legacy technology companies, positioned both to offer critical AI infrastructure and to develop their own AI models and applications, have made immense investments. Traditionally structured acquisitions to improve existing infrastructure capabilities or to acquire promising technology or talent from AI start-ups remain widespread. For instance, in February 2025, IBM closed its $6.4 billion acquisition of HashiCorp (represented by Wilson Sonsini), which produces automation and security software for infrastructure underlying important next-generation hybrid cloud and generative AI technologies, following an extended review by the FTC.
Consistent with the Trump administration’s Action Plan and other announced policies, in September 2025, DOJ Assistant Attorney General Abigail Slater identified “exclusionary behavior that forecloses access to key inputs and distribution channels” as a particular concern in AI-related mergers. Slater also raised concerns about access to data and the potential consolidation of proprietary data sources. We expect reviews of transactions related to AI technologies to be searching, with an emphasis on avoiding lock-in or lock-out of key inputs set against a plain desire to enable firms to rapidly innovate and scale in AI.
Businesses have also used alternative deal structures to secure access to important inputs. Investment partnerships between infrastructure providers and leading AI model developers have become common. For example, in September 2025, OpenAI and NVIDIA announced a partnership whereby NVIDIA would invest up to $100 billion in connection with deploying compute for OpenAI. The antitrust agencies have shown strong interest in these investments but have not challenged the investments themselves or joint activity carried out pursuant to the investment agreements.
A January 2025 FTC report from a 6(b) study on partnerships involving large cloud service providers and AI firms Anthropic and OpenAI included a section identifying “potential implications” of those partnerships. But, anticipating the Trump administration’s policies, Commissioner (now Chair) Andrew Ferguson issued a statement dissenting from that section, cautioning against “headlong regulation” of AI and against “broad conclusions about the AI industry.” Instead, Ferguson stressed the need to remain a “vigilant competition watchman” focused on case-by-case analysis.
Businesses are also making use of “reverse acquihires,” in which key employees are poached along with technology licenses and/or minority investments. For instance, in June 2025, Meta paid $14.3 billion to acquire a 49 percent stake in Scale AI (represented by Wilson Sonsini) and poached the company’s CEO to lead Meta’s superintelligence unit. In July 2025, Google paid $2.4 billion to hire key executives from Windsurf after OpenAI’s bid to acquire the company outright fell apart over concerns as to Microsoft’s access to the Windsurf technology via its own investment agreements with OpenAI. While the agencies have authority to challenge such deals as anticompetitive—and FTC Commissioner Mark Meador has recently indicated concerns about them—firms have generally mitigated traditional antitrust concerns by avoiding the ability to control the target and taking non-exclusive licenses.
The Trump administration’s pro-adoption AI policies may also be reflected in the resolution of the Biden-era suit against RealPage concerning its AI-powered price recommendation algorithms. The Trump administration settled the case as to RealPage in November 2025, following settlements with certain landlord co-defendants. While the settlement prohibits the use of competitively sensitive data in runtime operation, RealPage remains able to use historical nonpublic data from landlords to train its AI models. Notably, state co-plaintiffs did not sign on to the DOJ settlement, and independent state lawsuits concerning RealPage’s algorithms remain pending.
Many state governments have taken a more interventionist approach, considering and enacting myriad AI-related bills to protect against harmful uses of the technology. For instance, both California and New York have passed laws limiting the use of software that relies on pooled competitor data. But the Trump administration has signaled it may seek to disrupt state-level AI lawmaking.
In December 2025, the Trump administration issued an Executive Order (EO), “Ensuring a National Policy Framework for Artificial Intelligence” (more aggressively titled “Eliminating State Law Obstruction of National AI Policy” in a leaked draft). The EO creates multi-pronged pressure on state-level AI legislation inconsistent with the Trump administration’s priorities. Among other things, federal authorities are directed to challenge certain state laws as either preempted by the FTC Act’s prohibition on unfair or deceptive trade practices or an unconstitutional imposition on interstate commerce. In early January 2026, the DOJ directed the creation of a task force to pursue such litigation. Private parties have challenged state legislation as well; for instance, RealPage has sued to block both municipal and state-level restrictions on its software. State governments have signaled that they will vigorously defend against both federal and private challenges to their laws.
The combination of a fast-changing business landscape and diverging policies among jurisdictions raises risks, but these risks can be limited and navigated with appropriate management and counsel. Going forward, we expect close scrutiny of AI-related markets at the federal level but for enforcement to be narrowly tailored, with a preference for negotiated remedies and a focus on preventing the largest legacy technology companies from unduly leveraging their market positions. Enforcers have signaled that alignment with the Trump administration’s overarching AI policies may play into their evaluation of competitive impact. At the state level, overlapping or inconsistent regulations that go beyond federal law—if not preempted or successfully challenged—will create a challenging environment for firms deploying AI technologies.
Europe and the UK
In Europe, antitrust agencies are equally focused on AI. In a 2024 policy brief, the European Commission (EC) noted it would “use all tools at its disposal to address potential concerns in the generative AI . . . sectors, including antitrust, merger control, and the DMA.”
The antitrust agencies heavily scrutinized AI transactions in 2025, including partnerships, investments, and reverse acquihires. As in the U.S., however, there have been no challenges to date. Despite public statements by agency officials pushing for reviews, most agencies lacked the jurisdiction to conduct them, either because the transactions fell outside the jurisdictional requirements (such as a change of control or a minimum investment stake) or because they involved early-stage targets that fell below filing thresholds.
Addressing this, the departing head of the EU’s competition arm, Olivier Guersent, noted in August 2025 that the EC was actively pushing national agencies with such powers to act and that the EC was “working on it” within the forum for cooperation between the EC and national regulators (the ECN). Guersent also noted that acquihires can be considered a reviewable merger because staff are part of a company’s assets. The statements show that, despite the limited number of call-ins to date and the potential chilling effect of the European Court’s ruling in Illumina/Grail and the pending appeal of the EC’s review of NVIDIA/Run:ai (following a call-in and subsequent EC referral by the Italian agency), 2026 could see more creative attempts by European agencies to review AI sector transactions.
The UK Competition and Markets Authority (CMA) has arguably one of the most expansive jurisdictional tests and was quickest off the mark in assessing AI partnerships and investments in 2024. However, its initial flurry of activity in the AI space abated in 2025 with the firing and replacement of the CMA’s head in January 2025 and a stark pivot to a new government “pro-Growth Agenda.” We can expect the CMA to take a “wait-and-see” approach to global AI transactions in 2026, absent UK-specific concerns.
Concerns over conduct involving AI have attracted significant attention from European antitrust authorities. In July 2025, Italy’s competition authority, Autorità Garante della Concorrenza e del Mercato (AGCM), initiated an investigation into Meta for pre-installing its Meta AI chatbot on WhatsApp. The investigation was broadened in November 2025 to include changes to WhatsApp’s business terms to allegedly exclude competing AI chatbots, and the AGCM adopted interim measures prohibiting those terms in December 2025. In the same month, the EC announced that it had opened its own formal investigation into Meta for the same conduct.
In December 2025, the EC also stated that it had opened a formal antitrust investigation into Google over two types of conduct. First, the EC is investigating whether Google may have used the content of web publishers to provide generative AI-powered services (“AI Overviews” and “AI Mode”) on its search results pages without appropriate compensation to publishers and without offering them the possibility to refuse such use of their content. Second, the EC is reviewing whether Google trained its generative AI models on video and other content uploaded to YouTube without offering appropriate compensation to creators or allowing them to refuse such use of their content.
The EC is also exploring the use of the EU’s digital regime, the Digital Markets Act (DMA), to tackle anticompetitive conduct in the AI sector. As part of the ongoing review of the DMA, on December 12, 2025, the High-Level Group on the Digital Markets Act (HLG), which is formed by representatives of relevant EU regulatory bodies, endorsed a joint paper on AI. The joint paper stated that the DMA may contribute to promoting market contestability in AI infrastructure and distribution, as well as access to data, especially user interaction data.
EC Competition Commissioner Teresa Ribera noted in a speech on December 9, 2025, when commenting on the opening of the Meta and Google investigations, “[o]ver a year ago, we began examining competition in generative AI and last September we published a Policy Brief outlining our initial competition concerns. Since then, we have monitored the market closely, and many of the risks we warned about are now beginning to materialize.” We therefore expect the active scrutiny of conduct involving AI to continue at pace.
For more information about managing the antitrust risks for AI-related businesses, please contact any member of the firm’s Antitrust and Competition practice.
[1] The most advanced line of cases deals with algorithmic pricing as a Section 1 conspiracy and is discussed in greater detail in our Algorithmic Pricing Preview.
[2] Similarly, the decision in FTC v. Meta, discussed in greater detail in our Big Tech Preview, is an important landmark for analyzing technology markets and a caution against assuming that the market position of even the strongest incumbents will be durable against technological and user preference changes.