Amazon faces more EU scrutiny over recommender algorithms and ads transparency

Amazon Fulfilment Center In Sosnowiec

Image Credits: Beata Zawrzel/NurPhoto / Getty Images

In its latest step targeting a major marketplace, the European Commission sent Amazon another request for information (RFI) Friday in relation to its compliance under the bloc’s rulebook for digital services.

The development highlights areas where EU enforcers are dialing up their scrutiny of the e-commerce giant, with the bloc asking for more info about Amazon’s recommender systems, ads transparency provisions and risk assessment measures.

An earlier Commission RFI to Amazon, sent last November, focused on risk assessments and mitigations around the dissemination of illegal products, and on the protection of fundamental rights, including in relation to its recommender systems. A Commission spokesperson confirmed the e-commerce giant has received three RFIs in all, following a January request for more information on how it’s providing data access for researchers.

The EU’s Digital Services Act (DSA) requires platforms and services to abide by a series of governance standards, including in areas like content moderation. In the case of online marketplaces, the law also requires that they implement measures to tackle risks around the sale of illegal goods. Larger marketplaces, such as Amazon, face an additional layer of algorithmic transparency and accountability obligations under the regime, and this is where the Commission’s RFIs are focused.

The additional rules have applied to Amazon since the end of August last year, following its designation by the EU as a very large online platform (VLOP) in April 2023. It’s the Commission’s job to enforce these extra obligations on VLOPs.

While it remains to be seen whether the latest Commission RFI will lead to a formal investigation of Amazon’s DSA compliance, the stakes remain high for the e-commerce giant. Any confirmed violations could prove very costly, as penalties for breaching the pan-EU law can reach up to 6% of global annual turnover. (The company’s full-year revenue for 2023 was $574.8 billion, meaning that, on paper at least, its regulatory risk runs into double-figure billions.)
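That 6% cap can be sanity-checked with quick arithmetic, using the revenue figure above (this is an exposure ceiling, not an actual or proposed fine):

```python
# Back-of-the-envelope DSA penalty ceiling, based on the 2023
# revenue figure cited above (not an actual or proposed fine).
revenue_2023 = 574.8e9      # Amazon full-year 2023 revenue, USD
dsa_penalty_cap = 0.06      # DSA maximum: 6% of global annual turnover
max_exposure = revenue_2023 * dsa_penalty_cap
print(f"${max_exposure / 1e9:.1f}B")  # ≈ $34.5B, i.e. double-figure billions
```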

Detailing its action in a press release, the Commission said it has sent Amazon an RFI about the measures it has taken to comply with DSA rules on the transparency of recommender systems and their parameters. It also said it’s asking for more information about Amazon’s provisions for maintaining an ad repository, another legally mandated transparency step for larger platforms.

The Commission also said it wants more detail about Amazon’s risk assessment report. The DSA requires VLOPs to both proactively assess systemic risks that might arise on their platforms and take steps to mitigate issues. Platforms also need to document their compliance process.

“In particular, Amazon is asked to provide detailed information on its compliance with the provisions concerning transparency of the recommender systems, the input factors, features, signals, information and metadata applied for such systems and options offered to users to opt out of being profiled for the recommender systems,” the EU wrote. “The company also has to provide more information on the design, development, deployment, testing and maintenance of the online interface of Amazon Store’s Ad Library and supporting documents regarding its risk assessment report.”

The EU has given Amazon until July 26 to provide the requested information. After that, any next steps will depend on the Commission’s assessment of Amazon’s response. But failure to respond satisfactorily to an RFI could itself trigger a sanction.

Last year the EU named online marketplaces as one of a handful of priority issues for its enforcement of the DSA’s rules for VLOPs. And it has looked attentive to the area.

Late last month it sent separate RFIs to rival marketplace VLOPs Shein and Temu, soon after designating the pair. In their case, though, the Commission’s RFIs also raised concerns about illegal-goods risks and manipulative design (including as a potential child-safety risk), as well as asking for more information about the operation of their own recommender systems.

Why so much interest here? Algorithmic sorting has the power to influence platform users’ whole experience by determining the content and/or products they see.

In a nutshell, the EU wants the DSA to crack open such black-box AI systems to ensure that platforms’ commercial agendas, whether grabbing users’ attention or driving more sales, aren’t the only thing programming these automated decisions. It therefore wants the DSA to act as a shield against the risks of AI-driven societal harms, such as platforms pushing content that’s harmful to people’s mental health or recommending shoppers buy dangerous products. But achieving that goal will require enforcement.

Amazon, meanwhile, is unhappy about the EU regime. Last year it challenged its DSA designation as a VLOP. And last fall it won an interim stay on one element of VLOPs’ DSA compliance — namely the requirement to publish an ads library. However, in March, the EU General Court reversed the earlier decision, overturning the partial suspension.

“Following its designation as a Very Large Online Platform and the Court’s decision to reject Amazon’s request to suspend the obligation to make its advertisement repository publicly available, Amazon is required to comply with the full set of DSA obligations,” the Commission wrote today. “This includes diligently identifying and assessing all systemic risks relevant to its service, providing an option in their recommender systems that is not based on user profiling, and have an advertisement repository publicly available.”

Given that Amazon has spent money on lawyers arguing why it shouldn’t have to comply with the DSA’s ads-library requirement, and given the subsequent overturning of the stay, it’s not too surprising that this is one of the areas where the Commission is seeking more information now.

We contacted the EU with questions. A Commission spokesperson confirmed the first RFI to Amazon, from November 2023, had “a strong focus on the dissemination of illegal products and the protection of fundamental rights online”, as well as asking questions about its recommender systems.

A second RFI, in January 2024, focused on measures Amazon has taken to comply with data access for eligible researchers, per the spokesperson. They said the latest RFI is strongly focused on measures taken to meet DSA obligations related to the transparency of recommender systems and their parameters, as well as to the provisions on maintaining an ad repository.

“These are actually different areas we are looking into,” the spokesperson added. “You are however right to say that today’s RFI also follows the Court’s decision to reject Amazon’s request to suspend the obligation to make its advertisement repository publicly available.”

We also reached out to Amazon for a response to the Commission’s RFI.

A company spokesperson emailed TechCrunch this statement: “We are reviewing this request and working closely with the European Commission. Amazon shares the goal of the European Commission to create a safe, predictable and trusted shopping environment. We think this is important for all participants in the retail industry, and we invest significantly in protecting our store from bad actors, illegal content, and in creating a trustworthy shopping experience. We have built on this strong foundation for DSA compliance.”

This report was updated with responses from the Commission.

EU dials up scrutiny of major platforms over GenAI risks ahead of elections

a smartphone dropping a ballot into a box

Image Credits: erhui1979 / Getty Images

The European Commission has sent a series of formal requests for information (RFI) to Google, Meta, Microsoft, Snap, TikTok and X about how they’re handling risks related to the use of generative AI.

The asks, which relate to Bing, Facebook, Google Search, Instagram, Snapchat, TikTok, YouTube and X, are being made under the Digital Services Act (DSA), the bloc’s rebooted e-commerce and online governance rules. The eight platforms are designated as very large online platforms (VLOPs) under the regulation, meaning they’re required to assess and mitigate systemic risks, in addition to complying with the rest of the rulebook.

In a press release Thursday, the Commission said it’s asking them to provide more information on their respective mitigation measures for risks linked to generative AI on their services — including in relation to so-called “hallucinations” where AI technologies generate false information; the viral dissemination of deepfakes; and the automated manipulation of services that can mislead voters.

“The Commission is also requesting information and internal documents on the risk assessments and mitigation measures linked to the impact of generative AI on electoral processes, dissemination of illegal content, protection of fundamental rights, gender-based violence, protection of minors and mental well-being,” the Commission added, emphasizing that the questions relate to “both the dissemination and the creation of Generative AI content”.

In a briefing with journalists the EU also said it’s planning a series of stress tests, slated to take place after Easter. These will test platforms’ readiness to deal with generative AI risks such as the possibility of a flood of political deepfakes ahead of the June European Parliament elections.

“We want to push the platforms to tell us whatever they’re doing to be as best prepared as possible… for all incidents that we might be able to detect and that we will have to react to in the run up to the elections,” said a senior Commission official, speaking on condition of anonymity.

The EU, which oversees VLOPs’ compliance with these Big Tech-specific DSA rules, has named election security as one of the priority areas for enforcement. It’s recently been consulting on election security rules for VLOPs, as it works on producing formal guidance.

Today’s asks are partly aimed at supporting that guidance, per the Commission. The platforms have been given until April 3 to provide information related to the protection of elections, which is labelled an “urgent” request, though the EU said it hopes to finalize the election security guidelines sooner, by March 27.

The Commission noted that the cost of producing synthetic content is dropping dramatically, amping up the risk of misleading deepfakes being churned out during elections. That’s why it’s dialing up attention on major platforms with the scale to disseminate political deepfakes widely.

A tech industry accord to combat deceptive use of AI during elections, which came out of the Munich Security Conference last month with backing from a number of the same platforms the Commission is now sending RFIs to, does not go far enough in the EU’s view.

A Commission official said its forthcoming election security guidance will go “much further”, pointing to a triple whammy of safeguards it plans to leverage: the DSA’s “clear due diligence rules”, which give it powers to target specific “risk situations”; more than five years’ experience working with platforms via the (non-legally binding) Code of Practice Against Disinformation, which the EU intends to become a Code of Conduct under the DSA; and, on the horizon, transparency labelling and AI model-marking rules under the incoming AI Act.

The EU’s goal is to build “an ecosystem of enforcement structures” that can be tapped into in the run up to elections, the official added.

The Commission’s RFIs today also aim to address a broader spectrum of generative AI risks than voter manipulation — such as harms related to deepfake porn or other types of malicious synthetic content generation, whether the content produced is imagery/video or audio. These asks reflect other priority areas for the EU’s DSA enforcement on VLOPs, which include risks related to illegal content (such as hate speech) and child protection.

The platforms have been given until April 24 to provide responses to these other generative AI RFIs.

Smaller platforms where misleading, malicious or otherwise harmful deepfakes may be distributed, and smaller AI tool makers that can enable generation of synthetic media at lower cost, are also on the EU’s risk mitigation radar.

Such platforms and tools won’t fall under the Commission’s explicit DSA oversight of VLOPs, as they are not designated. But its strategy to broaden the regulation’s impact is to apply pressure indirectly: through larger platforms (which may act as amplifiers and/or distribution channels in this context); via self-regulatory mechanisms, such as the aforementioned Disinformation Code; and via the AI Pact, which is due to get up and running shortly, once the (hard law) AI Act is adopted (expected within months).

Microsoft dodges UK antitrust scrutiny over its Mistral AI stake

Mistral logo on laptop screen

Image Credits: SOPA Images / Contributor / Getty Images

Microsoft won’t be facing antitrust scrutiny in the U.K. over its recent investment in French AI startup, Mistral AI, with the country’s Competition and Markets Authority (CMA) on Friday concluding that the partnership “does not qualify for investigation under the merger provisions of the Enterprise Act 2002.”

The decision comes three weeks after the CMA revealed a trio of early-stage probes into Amazon and Microsoft’s various AI investments and partnerships, including the Redmond-based company’s $16 million investment in Mistral AI, an OpenAI rival working on large language models. Shortly after, Microsoft hired the team behind Inflection AI, another OpenAI rival, essentially gutting the startup.

Elsewhere, the CMA said it was also poking at Amazon’s $4 billion investment in Anthropic, a U.S.-based AI company working on large language models.

Big Tech and the quasi-merger

There has been growing scrutiny of Big Tech’s latest tactic to dodge regulatory oversight by pursuing “quasi-mergers,” through which they seek to secure control over new technologies without buying startups outright. This might be through making investments, procuring seats on boards, hiring founding teams and so on.

Early in 2024, the Federal Trade Commission (FTC) launched investigations into Alphabet, Amazon and Microsoft’s investments in emerging AI firms to establish whether the “partnerships pursued by dominant companies risk distorting innovation and undermining fair competition.”

The CMA’s efforts are part of that same regulatory push. Two of its recently announced “invitations to comment” are still ongoing, and may lead to formal in-depth probes. Still, it’s telling that the CMA is throwing out the Mistral AI case on the grounds that it doesn’t “qualify” for investigation under existing rules.

Alex Haffner, competition partner at U.K. law firm Fladgate, says this finding suggests that the structure of Microsoft’s partnership with Mistral AI doesn’t grant the bigger company sufficient rights or influence, at least as it relates to M&A regulation. Ultimately, it was a minority investment into a double-unicorn that had closed a $415 million round just a few months earlier.

“In so doing, the decision vindicates Microsoft’s stated position on the tie-up,” Haffner said.

This “stated position” was that making a small investment isn’t enough to procure meaningful clout in the future direction of an up-and-coming AI startup. Microsoft would effectively own less than 1% of Mistral AI when its investment converts to equity at the French startup’s next funding round.
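For illustration only, the sub-1% figure is consistent with simple dilution math; the conversion valuation below is an assumption for the sketch, not a disclosed number:

```python
# Hypothetical stake calculation. Only the ~$16M investment size is
# from the article; the conversion valuation is an assumed figure.
investment_usd = 16e6
assumed_valuation_usd = 2.0e9   # "double-unicorn"-scale valuation (assumption)
implied_stake = investment_usd / assumed_valuation_usd
print(f"{implied_stake:.2%}")   # 0.80% under these assumptions, i.e. < 1%
```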

A Microsoft spokesperson said at the time of the CMA’s initial probe announcement:

“We remain confident that common business practices such as the hiring of talent or making a fractional investment in an AI startup promote competition and are not the same as a merger.”

While the CMA maintains that Big Tech could be adopting new methods to protect itself from antitrust scrutiny, it has now confirmed that Microsoft hadn’t acquired any “material influence on Mistral AI’s commercial policy.”

“The CMA has considered information submitted by Microsoft and Mistral AI, together with feedback received in response to its invitation to comment,” a CMA spokesperson said. “Based on the evidence, the CMA does not believe that Microsoft has acquired material influence over Mistral AI as a result of the partnership and therefore does not qualify for investigation.”

Pollination works

Just last month, the CMA sounded an alarm over Big Tech’s waxing influence on the advanced AI market, expressing concerns over the growing connection and concentration between developers in the snowballing generative AI space. But the CMA has now said that at least one of the deals on its radar doesn’t qualify for investigation, suggesting that Big Tech’s tactics to pollinate the AI ecosystem far and wide might be working to a degree.

But that still leaves two more outstanding cases: Amazon’s gargantuan investment in Anthropic, and Microsoft’s hiring of key Inflection personnel. Could we expect a similar outcome there?

“The CMA has concluded that the arrangements between Microsoft and Mistral are not sufficient to give Microsoft ‘material influence’ over Mistral, which is the relevant jurisdictional test,” Haffner said. “Time will tell, but the assumption is therefore that the application of the test is more clear-cut here than with the other AI partnerships under investigation by the CMA.”

It’s certainly not as cut-and-dried. Anthropic got Amazon’s biggest venture investment to date, constituting more than half of the $7.6 billion the AI company has raised since its inception three years ago. And while Inflection technically still exists, Microsoft scooped up its founders and various key colleagues; in many ways, that was as good as an acquisition.

And let’s not forget about the CMA’s other separate, but related, ongoing case looking at Microsoft’s close ties with OpenAI. The regulator launched a formal “invitation to comment” aimed at relevant stakeholders in the AI and business spheres last year, and the European Commission (EC) followed suit in January.

So we probably shouldn’t draw too many conclusions about the other pending cases based on today’s news.

“That the CMA has only confirmed the conclusions of the Mistral investigation is interesting, as it leaves open the position on the other two deals, as well as the CMA’s ongoing investigation into Microsoft’s role in the OpenAI project,” Haffner said. “Overall, therefore, it is clear that the competition authorities are continuing to engage very closely with developments in the AI sector, and we can expect several more announcements by the CMA in the near future as to the outcome of their ongoing workstreams in this space.”