Alexa co-creator gives first glimpse of Unlikely AI's tech strategy

Image Credits: William Tunstall-Pedoe, founder, Unlikely AI (© 2017 Yolande De Vries)

After announcing a whopping $20 million seed last year, Unlikely AI founder William Tunstall-Pedoe has kept the budding U.K. foundation model maker’s approach under lock and key. Until now: TechCrunch can exclusively reveal Unlikely is taking a “neuro-symbolic” approach to its AI. In an additional development, it’s announcing two senior hires — including the former CTO of Stability AI, Tom Mason. 

Neuro-symbolic AI is a type of artificial intelligence that, as the name suggests, combines modern neural network approaches — as used by large language models (LLMs), like OpenAI’s GPT — with earlier symbolic AI architectures, aiming to address the weaknesses of each.
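
To make the idea concrete, here is a minimal sketch of the pattern in Python, assuming a purely hypothetical setup (none of these function names come from Unlikely AI): a statistical model proposes an answer, and a symbolic rule layer accepts it only if it can be verified deterministically.

```python
# Illustrative neuro-symbolic pattern, not Unlikely AI's actual system:
# a statistical "neuro" component proposes an answer, a symbolic layer verifies it.
from typing import Optional


def neural_propose(question: str) -> str:
    """Stand-in for an LLM call; a real system would query a model here."""
    # Toy behavior: this "model" confidently gets the arithmetic wrong.
    return "5" if "2 + 2" in question else "unknown"


def symbolic_check(question: str, answer: str) -> Optional[bool]:
    """Deterministic rule: re-derive arithmetic exactly, as a spreadsheet would."""
    if "2 + 2" in question:
        return answer == str(2 + 2)
    return None  # No applicable rule, so the symbolic layer can't judge.


def answer_question(question: str) -> str:
    proposed = neural_propose(question)
    verdict = symbolic_check(question, proposed)
    if verdict is False:
        return "No verified answer available."  # Reject rather than pass on a wrong answer.
    return proposed


print(answer_question("What is 2 + 2?"))  # -> "No verified answer available."
```

The toy example only illustrates the division of labor: statistical components generate candidates, while deterministic checks decide what gets through.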

Tunstall-Pedoe rose to prominence in the U.K. tech scene back in 2012, when Amazon acquired his voice assistant startup, Evi. Two years later Amazon launched the Echo and Alexa, incorporating much of Evi’s technology. With Unlikely AI, Tunstall-Pedoe is aiming to put himself back in the limelight as he takes the wraps off the technology he and his team have been working on since the startup was founded in 2019.

At Stability AI, meanwhile, Mason managed the development of major foundation models across various fields and helped the AI company raise more than $170 million. Now he’s CTO of Unlikely AI, where he will oversee its “symbolic/algorithmic” approach.

In addition, Fred Becker is joining as chief administrative officer. He previously held senior roles at companies including Skype and Symphony. At Unlikely, his role will be to shepherd its now 60 full-time staff, who are largely split between Cambridge (U.K.) and London.

The AI startup claims its approach to foundation models will avoid the risks we’ve quickly become all too familiar with — namely bias and “hallucination” (aka fabrication), along with the related problems of accuracy and trust. It also claims its approach will use less energy, in a bid to reduce the environmental impact of Big AI.

“We’ve been working privately for a number of years and we’re very excited about our two new senior hires,” Tunstall-Pedoe told TechCrunch over a call. 

Fleshing out the team’s approach, he went on: “We’re building a ‘trustworthy’ AI platform that’s designed to address pretty much all of the key issues with AI at the moment, as it pertains to… hallucinations and accuracy. We’re combining the capabilities of generative AI, statistical AI, with symbolic algorithmic methods, [and] conventional software methods to get explainability and reliability.”

He described the platform as “horizontal” in that it would “compound many different types of applications.” 

Of the exact applications, he was more coy — but continued to emphasize the phrase “trustworthy AI.”

For his part, Mason said his time at Stability AI saw the company build “some amazing models” and “an unbelievable ecosystem around the models and the technology,” as he put it. It also featured the abrupt exit of founder Emad Mostaque, followed by a number of other high-profile team departures. While Mason wishes his former colleagues “all the best,” he said he’s “super excited” to join Unlikely AI.

Digging into the startup’s technology, Tunstall-Pedoe said the platform is composed of two things: “The word ‘neuro’ and the word ‘symbolic.’ ‘Neuro’ implies deep learning, so solving problems that machines have not been able to solve for decades… ‘Symbolic’ refers to the kind of software that powers your spreadsheets or other applications.

“One of the weaknesses of ‘neuro’ is that it’s sometimes wrong. When you train a model, you give it data, it gets better and better. But it never gets to 100%. It’s right, for example, 80% of the time, which means it’s wrong 20% of the time.”

He said this is “incredibly damaging to trust” because “the neuro calculation is opaque.” Indeed, there’s an entire field of research trying to understand what happens inside these huge LLMs.

Instead, he said Unlikely plans to combine the certainties of traditional software, such as spreadsheets, where the calculations are 100% accurate, with the “neuro” approach in generative AI. 
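
As a rough illustration of that division of labor (a hypothetical sketch, not Unlikely AI's design), the statistical layer could be confined to interpreting language while a deterministic routine does the actual arithmetic, the way a spreadsheet formula would:

```python
# Hypothetical delegation pattern: language understanding is statistical,
# but the numeric work is done by ordinary, fully auditable code.
import re
from typing import Optional, Tuple


def interpret(request: str) -> Optional[Tuple[float, str, float]]:
    """Stand-in for an LLM turning free text into a structured operation."""
    match = re.search(r"(-?\d+(?:\.\d+)?)\s*(plus|times)\s*(-?\d+(?:\.\d+)?)", request)
    if not match:
        return None
    return float(match.group(1)), match.group(2), float(match.group(3))


def compute(operation: Tuple[float, str, float]) -> float:
    """Deterministic calculation step; right 100% of the time, like a spreadsheet."""
    a, kind, b = operation
    return a + b if kind == "plus" else a * b


parsed = interpret("What is 12.5 times 8?")
print(compute(parsed) if parsed else "Could not parse the request")  # -> 100.0
```

In a sketch like this, the exact half of the work never touches a large model, which is where the claimed trust, cost and energy advantages would come from.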

“What we’re doing is combining the best of both worlds,” suggested Tunstall-Pedoe. “We’re taking the capabilities of LLMs, of all the advances in deep learning, and we’re combining it with the trustworthiness and explainability and other advantages — including things like cost and environmental impact — of non-statistical machine learning… The vision we have of AI is all of those capabilities, but in a way that’s completely trustworthy.”

He argues a combined approach will bring cost and environmental benefits, too, compared to today’s LLMs: “These models are incredibly expensive [to run] and environmentally unfriendly, but they are also costly in terms of trust by producing answers that are wrong.”

Why haven’t other foundation model makers taken a similar route?

“I think that that’s happening,” Mason responded. “Sometimes we talk about it as ‘compound architecture.’ We’ve seen the rise of things like RAG. That’s a kind of compound architecture. This is very much in the same vein, but it’s building on all of that with the advantages of symbolic reasoning, making it possible to have completely accurate reasoning.”

In this respect, Mason said he believes Unlikely AI is “ahead of the wave.”
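
For context, a retrieval-augmented (RAG-style) pipeline is one common form of the “compound architecture” Mason describes: a retrieval step supplies source facts, a generation step drafts an answer, and a final check keeps only what can be traced back to those sources. The sketch below is a deliberately simplified, hypothetical illustration rather than a description of Unlikely AI’s platform.

```python
# Toy "compound architecture" in the RAG vein, purely illustrative.
FACTS = {
    "evi": "Evi was acquired by Amazon in 2012.",
    "unlikely ai": "Unlikely AI was founded in 2019.",
}


def retrieve(query: str) -> list[str]:
    """Fetch source facts relevant to the query."""
    return [fact for key, fact in FACTS.items() if key in query.lower()]


def generate(context: list[str]) -> str:
    """Stand-in for an LLM; here it simply restates the retrieved context."""
    return " ".join(context) if context else "I don't know."


def grounded_answer(query: str) -> str:
    context = retrieve(query)
    draft = generate(context)
    # Grounding check: reject any sentence that can't be traced to a source.
    sentences = [s.strip() for s in draft.split(".") if s.strip()]
    if sentences and all(any(s in fact for fact in context) for s in sentences):
        return draft
    return "I don't know."


print(grounded_answer("When was Evi acquired?"))  # -> "Evi was acquired by Amazon in 2012."
```

Unlikely AI’s pitch, per Mason, is to go further than this kind of retrieval check by layering in symbolic reasoning.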

Another question is whether Unlikely AI will produce a fully proprietary foundation model, as OpenAI does, or take a mixed approach akin to Mistral’s, offering both proprietary and open source models.

Tunstall-Pedoe said the company has yet to decide the direction of travel: “We haven’t made any decisions like that yet. That’s part of internal discussions. But we’re building a platform and the rest is TBD… It’s a decision that we’re going to make in the near future.”

One thing is confirmed, though: the company is going to be built out of London and Cambridge. “Obviously we’ve got a much smaller population than in the U.S. and China,” Tunstall-Pedoe said. “But London is a fantastic place to be building an innovative AI startup. There’s lots of talent here. Lots of innovation.”

While the model release timeline isn’t clear, Unlikely AI is certain about the strength of its ambition. Given AI is a top strategic priority for seemingly every trillion-dollar market cap company out there, Tunstall-Pedoe said he’s shooting for major adoption. “We want to be massively successful, we want to have a huge impact. We’re certainly open to different ways of achieving that,” he added.

UK's digital markets regulator gives flavor of rebooted rules coming for Big Tech

Three types of hammers sit on a wood bench

Image Credits: Robert Lowdon / Getty Images

The U.K.’s competition authority has fleshed out new details of how it plans to wield the long-anticipated powers, set to arrive under a reform bill that’s still in front of parliament, to proactively regulate digital giants with so-called strategic market status (SMS) — saying today that, in the first year of the regime coming into force, it expects to undertake three to four investigations of tech giants to determine if they meet the bar.

Of course, the regulator isn’t naming any names as yet, but it’s a fair guess that Apple and Google (aka Alphabet) will be towards the top of this investigation list.

The CMA previously found the pair’s gatekeeping of their respective mobile app stores creates substantial competition concerns. And, publishing a mobile market study on the duopoly back in December 2021, it wrote that its work “so far” suggests both would meet the incoming criteria for SMS designation for several of their ecosystem activities.

Tech giants that end up being subject to the U.K.’s special abuse regime can expect to face interventions that prevent them from preferencing their own products, the CMA also confirmed today.

Additionally, it said they may be required to provide competitors with greater access to “data and functionality” than their commercial interests might prefer. Interoperability could also be imposed on designated tech giants, the CMA suggested, as well as mandates that they trade on fairer terms. Algorithmic transparency could be another demand made of them by the new digital markets regulator.

The need to arm the Competition and Markets Authority (CMA) with its own ex ante playbook to tackle the market muscle of Big Tech has been on the policymaking agenda in the U.K. for years. In November 2020, ministers confirmed their plan to set up a “pro-competition” regime targeting tech platforms with major market power, with the goal of tackling some of the market tipping seen in digital spaces, such as online advertising.

A key component of the plan for the new Digital Markets Unit (DMU), set up within the CMA, was that it would be empowered to tackle specific problems with bespoke interventions tailored to each platform. The reform also contained teeth, allowing for penalties of up to 10% of annual turnover for confirmed violations.

Three+ years ago, when the government first committed to the plan to tackle platform power, it looked pioneering. However, the turmoil in U.K. politics of the past several years contributed to delaying progress on enacting the reform. As a consequence, the U.K. has slipped behind peers like the European Union — which adopted its own flagship digital competition reform last year. The deadline for in-scope tech giants’ compliance with that regime is looming in early March.

Returning to the U.K., the domestic mood music changed again last April when the government, under prime minister Rishi Sunak, picked the ball back up and introduced the Digital Markets, Competition and Consumers Bill to parliament. Then, earlier this month, ministers wrote to the CMA asking it to set out a roadmap for implementing the future regime. Given that the detail of the legislation remains under discussion by lawmakers, though, the ask was only for a “high level” plan.

The CMA’s response today takes the form of an overview that gives some steer on what may be coming down the pipe for a handful of tech giants operating in the U.K. once the regime is up and running.

In the overview document, the regulator writes that the harms it will choose to focus on will be driven by a set of “prioritisation principles”. The text goes on to set out a list of 11 “operating principles” (see graphic below) that it says will feed its decision-making on which of the myriad possible Big Tech abuse battles to pick — including a focus on always applying a pro-competition lens, selecting for maximum impact, and seeking to move quickly (and repair harms, to coin a phrase) as issues develop.

“We will think broadly about consumer benefits,” the CMA also writes, fleshing out its thinking on principle 2 (aka impact). “As well as the price of goods and services (which in some digital markets is zero), consumers may also value choice, security, privacy, innovation, and their overall experience (for example, how much advertising they are exposed to).”

The CMA’s 11 operating principles for the DMU
Image Credits: CMA

A similar ex ante digital competition reform that came into force in the EU last year — aka, the Digital Markets Act — takes a more prescriptive approach to prohibitions and obligations, literally setting out a list of ‘dos and don’ts’ for regulated giants. Six tech giants have been designated as so-called “gatekeepers” under the bloc’s regime so far (Alphabet, Amazon, Apple, ByteDance, Meta and Microsoft), for a total of 22 “core platform services” they provide, which range from adtech and operating systems to search engines and messaging platforms.

Some of the gatekeepers, including Apple, have filed legal challenges to the DMA designations. But the EU regime applies regardless in the meantime.

A German ex ante digital competition reform has also been operating since early 2021. The update has seen the country’s regulator designate a number of tech giants, including Amazon, Apple, Google and Meta, as subject to a special abuse control regime for firms deemed to have “paramount significance for competition across markets”. Other tech giants there remain under investigation over their market power.

The German regime has clocked up the most mileage of the regional ex ante reboots so far. And the Federal Cartel Office (FCO) can point to some notable shifts it has extracted from in-scope giants — including Google agreeing to reform its data terms, and offering not to inject publisher content it directly licenses into search results, a practice the regulator was concerned would amount to self-preferencing that could harm rival publishers who weren’t licensing their content to Google.

Under the FCO’s watch, Meta also agreed to provide users with a way to refuse its cross-site tracking last summer, in a win for privacy driven via the perhaps unlikely avenue of competition reform. (Although the FCO has long been a pioneer at reading privacy exploitation as a competition abuse.)

What impact the U.K.’s equivalent reform might have in the coming years remains to be seen. The government still needs to get on and get it through parliament. So it’s not clear when exactly it might be operational.

For one thing, there will need to be enough parliamentary time left before the U.K. general election that’s expected later this year to pass the bill. Once the legislation is in place there may be an implementation period — plus the CMA/DMU will have to undertake investigations to make SMS designations. So the regime may still be years, plural, away from actually being able to exert pressure on Big Tech decisions.

In the meantime, the CMA’s overview offers some interesting hints of where the DMU’s hammer could fall in the coming years. And one overarching trend at least is clear: Big Tech is facing increasing curbs on its operational freedom.

That said, rising oversight of market-dominating web giants may be contributing to a strategic quasi-outsourcing tactic in business development, whereby Big Tech firms invest in and partner with less tightly regulated startups that can engage in activities which, if the giants undertook them directly, could ruffle regulators’ feathers.

The links between cloud-infrastructure-owning Big Tech and generative AI startups look instructive here. Vast amounts of money and compute are being deployed in ways that threaten to let current-gen tech giants further extend their market dominance, via strategic tie-ups with startups operating at a claimed arm’s length from their own business empires, in spite of amped-up competition oversight of the giants’ core platform services. Microsoft-OpenAI, anyone?

