Artists' lawsuit against generative AI makers can go forward, judge says

AI text on illuminated background

Image Credits: Eugene Mymrin / Getty Images

A class action lawsuit filed by artists who allege that Stability, Runway and DeviantArt illegally trained their AIs on copyrighted works can move forward, but only in part, the presiding judge decided on Monday. In a mixed ruling, several of the plaintiffs’ claims were dismissed while others survived, meaning the suit could end up at trial. That’s bad news for the AI makers: Even if they win, it’s a costly, drawn-out process where a lot of dirty laundry will be put on display. And they aren’t the only companies fighting off copyright claims — not by a long shot.

X files antitrust suit against advertising groups over ‘systematic illegal boycott’

Linda Yaccarino, CEO of X, testifies before the Senate Judiciary Committee at the Dirksen Senate Office Building on January 31, 2024 in Washington, DC.

Image Credits: Alex Wong / Getty Images

X CEO Linda Yaccarino on Tuesday announced that the social media platform has filed an antitrust lawsuit against the Global Alliance for Responsible Media (GARM) and the World Federation of Advertisers (WFA).

In a video posted to X, Yaccarino accuses the organizations — along with GARM members CVS Health, Mars, Ørsted and Unilever — of what she calls a “systematic illegal boycott” of the platform.

The executive cites a July report from the U.S. House of Representatives Judiciary Committee titled, “GARM’s (Global Alliance for Responsible Media) Harm.” According to the report:

Through GARM, large corporations, advertising agencies, and industry associations participated in boycotts and other coordinated action to demonetize platforms, podcasts, news outlets, and other content deemed disfavored by GARM and its members. This collusion can have the effect of eliminating a variety of content and viewpoints available to consumers.

GARM was founded by the World Federation of Advertisers in 2019 in a bid to “help the industry address the challenge of illegal or harmful content on digital media platforms and its monetization via advertising,” according to the organization’s site.

The Judiciary report specifically addresses boycotts of X, The Joe Rogan Experience/Spotify and “Candidates, platforms, and news outlets with opposing political views.”

In particular it addresses organization member concerns over Elon Musk’s acquisition of the platform then known as Twitter. One member, according to the report, suggested that fellow members stop paid advertisements on the service, contributing to a precipitous drop in revenue.

“GARM’s internal documents show that GARM was asked by a member to ‘arrange a meeting and hear more about [GARM’s] perspectives about the Twitter situation and a possible boycott from many companies,’” the report’s authors note. “GARM also held ‘extensive debriefing and discussion around Elon Musks’ [sic] takeover of Twitter,’ providing ample opportunity for the boycott to be organized.”

In her own statement, Yaccarino claims that the “illegal behavior of these organizations and their executives cost X billions of dollars.”

Musk was less measured in his response, posting, “We tried being nice for 2 years and got nothing but empty words. Now, it is war.” The executive had similarly incendiary words for advertisers last year, stating, “If somebody’s going to try to blackmail me with advertising, blackmail me with money? Go f*** yourself. Go. F***. Yourself. Is that clear?”

He also promised at the time to document companies participating in the boycott “in great detail.”

The suit follows a recent governmental crackdown on tech antitrust. Yesterday, Google lost a landmark case in which a federal judge ruled that the search giant had illegally maintained its monopoly on search.

X joined GARM in early July, with the company’s Safety account noting, “X is committed to the safety of our global town square and proud to be part of the GARM community.”


Ethereum co-founder's warning against 'pro-crypto' candidates: 'Are they in it for the right reasons?'

Vitalik Buterin (Ethereum Foundation) at TechCrunch Disrupt SF 2017

Image Credits: David Paul Morris / Getty Images

Vitalik Buterin, the co-founder of Ethereum, issued a warning on Wednesday against choosing a candidate purely based on whether they claim to be “pro-crypto.” In a blog post, Buterin said it’s more important to scrutinize a candidate’s broader policies to ensure they support cryptocurrency’s underlying goals, including internationalism and protection for private communications.

“If a politician is pro-crypto, the key question to ask is: Are they in it for the right reasons?” wrote Buterin. “Do they have a vision of how technology and politics and the economy should go in the 21st century that aligns with yours?”

Though Buterin does not mention any politicians or crypto investors by name, his comments come just one day after Marc Andreessen and Ben Horowitz threw their support behind former President Donald Trump in the 2024 presidential election. The founders of Andreessen Horowitz noted on their podcast yesterday that Trump’s crypto regulation plan is “a flat-out blanket endorsement of the entire space.” The influential venture capitalists join the ranks of other notable Silicon Valley players, including Elon Musk, who endorsed Trump in the past week.

Further, Ethereum’s co-founder made the case that signaling you broadly support any “pro-crypto” candidates could incentivize politicians to promote the cause in bad faith. Buterin notes that authoritarian leaders, particularly in Russia, have claimed to support crypto in an effort to consolidate power.

“It doesn’t matter if they also support banning encrypted messaging, if they are a power-seeking narcissist, or if they push for bills that make it even harder for your Chinese or Indian friend to attend the next crypto conference — all that politicians have to do is make sure it’s easy for you to trade coins,” said Buterin.

Ethereum’s co-founder suggested looking into a “crypto-friendly” politician’s views on crypto five years ago, which he says can serve as a guide to whether they might reverse their position five years from now.

Notably, former President Trump starkly opposed decentralized tokens five years ago. In a tweet from July 2019, Trump said he’s “not a fan of Bitcoin and other Cryptocurrencies, which are not money.” In a follow-up tweet, he said “we have only one real currency in the USA,” referring to the United States dollar.

But in May, Trump completed a total flip-flop on cryptocurrencies, becoming the first major presidential candidate to accept bitcoin donations. The Wall Street Journal reports that Trump’s crypto fundraising efforts collected $3 million worth of donations in the second quarter.

'Model collapse': Scientists warn against letting AI eat its own tail

Ouroboros

Image Credits: mariaflaya / Getty Images

When you see the mythical Ouroboros, it’s perfectly logical to think, “Well, that won’t last.” It’s a potent symbol — swallowing your own tail — but difficult in practice. The same may be true of AI, which, according to a new study, may be at risk of “model collapse” after a few rounds of being trained on data it generated itself.

In a paper published in Nature, British and Canadian researchers led by Ilia Shumailov at Oxford show that today’s machine learning models are fundamentally vulnerable to a syndrome they call “model collapse.” As they write in the paper’s introduction:

We discover that indiscriminately learning from data produced by other models causes “model collapse” — a degenerative process whereby, over time, models forget the true underlying data distribution …

How does this happen, and why? The process is actually quite easy to understand.

AI models are pattern-matching systems at heart: They learn patterns in their training data, then match prompts to those patterns, filling in the most likely next dots on the line. Whether you ask, “What’s a good snickerdoodle recipe?” or “List the U.S. presidents in order of age at inauguration,” the model is basically just returning the most likely continuation of that series of words. (It’s different for image generators, but similar in many ways.)

But the thing is, models gravitate toward the most common output. A model won’t give you a controversial snickerdoodle recipe but the most popular, ordinary one. And if you ask an image generator to make a picture of a dog, it won’t give you a rare breed it only saw two pictures of in its training data; you’ll probably get a golden retriever or a Lab.
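
To see why mode-seeking squeezes out rare outputs, here is a minimal sketch in Python (the breed names and probabilities are invented for illustration): a generator that always returns its single most likely option never surfaces rare cases at all, while faithful sampling at least preserves them at their true rate.

```python
import random

# Invented breed frequencies, standing in for what a model learned in training.
breed_probs = {
    "golden retriever": 0.30,
    "labrador": 0.25,
    "poodle": 0.20,
    "beagle": 0.15,
    "otterhound": 0.10,  # the rare breed
}

def greedy_sample(probs: dict) -> str:
    # Mode-seeking: always return the single most probable option.
    return max(probs, key=probs.get)

def faithful_sample(probs: dict) -> str:
    # Sampling in proportion to the distribution keeps rare breeds alive.
    breeds, weights = zip(*probs.items())
    return random.choices(breeds, weights=weights)[0]

print([greedy_sample(breed_probs) for _ in range(5)])    # five golden retrievers
print([faithful_sample(breed_probs) for _ in range(5)])  # a mix; sometimes rarer breeds
```

Real generators sit somewhere between these two extremes, but the pull toward the mode is what matters here.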

Now, combine these two things with the fact that the web is being overrun by AI-generated content and that new AI models are likely to be ingesting and training on that content. That means they’re going to see a lot of goldens!

And once they’ve trained on this proliferation of goldens (or middle-of-the-road blogspam, or fake faces, or generated songs), that is their new ground truth. They will think that 90% of dogs really are goldens, and therefore when asked to generate a dog, they will raise the proportion of goldens even higher — until they basically have lost track of what dogs are at all.

This wonderful illustration from Nature’s accompanying commentary article shows the process visually:

Image Credits: Nature

A similar thing happens with language models and others that, essentially, favor the most common data in their training set for answers — which, to be clear, is usually the right thing to do. It’s not really a problem until it meets up with the ocean of chum that is the public web right now.

Basically, if the models continue eating each other’s data, perhaps without even knowing it, they’ll progressively get weirder and dumber until they collapse. The researchers provide numerous examples and mitigation methods, but they go so far as to call model collapse “inevitable,” at least in theory.
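
The dynamic is easy to reproduce in miniature. The toy sketch below is my own illustration, not the researchers’ code: it repeatedly fits a normal distribution to a small sample drawn from the previous fit, the statistical equivalent of training each model on its predecessor’s output. Over the generations, the estimated spread tends to shrink and the tails of the original distribution disappear.

```python
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: the "real" data is a standard normal distribution.
mu, sigma = 0.0, 1.0

for gen in range(31):
    if gen % 5 == 0:
        print(f"gen {gen:2d}: mu = {mu:+.3f}, sigma = {sigma:.3f}")
    # Train the next "model" on a small sample of its predecessor's output;
    # here, "training" is just re-fitting the distribution's parameters.
    sample = rng.normal(mu, sigma, size=20)
    mu, sigma = sample.mean(), sample.std()

# sigma tends to drift toward zero: a small sample slightly underestimates the
# true spread on average, and the error compounds across generations.
```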

Though it may not play out exactly as their experiments suggest, the possibility should scare anyone in the AI space. Diversity and depth of training data are increasingly considered the single most important factor in the quality of a model. If you run out of data, but generating more risks model collapse, does that fundamentally limit today’s AI? If it does begin to happen, how will we know? And is there anything we can do to forestall or mitigate the problem?

The answer to the last question at least is probably yes, although that should not alleviate our concerns.

Qualitative and quantitative benchmarks of data sourcing and variety would help, but we’re far from standardizing those. Watermarks of AI-generated data would help other AIs avoid it, but so far no one has found a suitable way to mark imagery that way (well … I did).

In fact, companies may be disincentivized from sharing this kind of information, and instead hoard all the hyper-valuable original and human-generated data they can, retaining what Shumailov et al. call their “first mover advantage.”

[Model collapse] must be taken seriously if we are to sustain the benefits of training from large-scale data scraped from the web. Indeed, the value of data collected about genuine human interactions with systems will be increasingly valuable in the presence of LLM-generated content in data crawled from the Internet.

… [I]t may become increasingly difficult to train newer versions of LLMs without access to data that were crawled from the Internet before the mass adoption of the technology or direct access to data generated by humans at scale.

Add it to the pile of potentially catastrophic challenges for AI models — and arguments against today’s methods producing tomorrow’s superintelligence.


Discord took no action against server that coordinated costly Mastodon spam attacks

Discord logo in flames

Image Credits: Bryce Durbin/TechCrunch

Over the weekend, hackers targeted federated social networks like Mastodon in ongoing spam attacks that were organized on Discord and conducted using Discord applications. But Discord has yet to remove the server where the attacks are being coordinated, and Mastodon community leaders have been unable to reach anyone at the company.

“The attacks were coordinated through Discord, and the software was distributed through Discord,” said Emelia Smith, a software engineer who regularly works on trust and safety issues in the fediverse, a network of decentralized social platforms built on the ActivityPub protocol. “They were using bots that integrated directly with Discord, such that a user didn’t even need to set up any servers or anything like that, because they could just run this bot directly from Discord in order to carry out the attack.”

Smith attempted to contact Discord through official channels on February 17, but has received only form responses. She told TechCrunch that while Discord has mechanisms for reporting individual users or messages, it lacks a clear way to report entire servers.

“We’ve seen this costing server admins of Mastodon, Misskey, and others hundreds or thousands of dollars in infrastructure costs, and overall denial of service,” Smith wrote to Discord Trust & Safety in an email viewed by TechCrunch. “The only common link seems to be this discord server.”

In a statement to TechCrunch, a Discord spokesperson said, “Discord’s Terms of Service specifically prohibit platform abuse, which refers to activities that disrupt or alter the experience of Discord users, including spam, or sending unsolicited bulk messages or interactions.” Though Discord says it is monitoring the situation, the server responsible for the spam attacks remains online.

Mastodon founder and CEO Eugen Rochko said in a post that these attacks are more difficult to moderate than past ones, because they deliberately target smaller servers, which often have fewer moderation tools in place. Some of these servers offer open registration, making it possible to quickly start new accounts and post spam. And as Smith notes, these mass spam attacks can drive up server costs, leaving admins with unexpected bills.

According to reports on Mastodon, this fully automated attack was sparked by a conflict between teenagers on two different Japanese-language Discord servers.

“It’s this sort of weird social behavior, where these kids are essentially acting like schoolyard bullies,” Smith told TechCrunch. She thinks they carried out the attack simply to show that they could, not because they bear any ill will toward these social networks.

“They’ve got technological capabilities that are well above where they are emotionally or psychologically,” she said.

Kevin Beaumont, a cybersecurity expert, posted on Mastodon that this incident recalls a similar, yet much larger attack from 2016, in which three college kids created a botnet to make money on Minecraft. But what they built was so powerful that it was able to take down huge swaths of the internet, including sites like Reddit and Spotify.

“I had to do a radio show on NPR about that one and the presenter kept asking me if it was Putin — and I was like, no, it’s teenagers. Advanced Persistent Teenagers,” Beaumont posted.

As a decentralized social media network, Mastodon’s team is unable to intervene in moderation issues on servers that they don’t own, which is a vulnerability for the fediverse. On servers that are actively maintained and moderated, Mastodon offers tools to prevent automated account registration, like CAPTCHAs.

While Mastodon’s nonprofit, open source model gives users more ownership over their social media experiences, it also limits the company’s ability to hire more developers. Most of the social network is run by volunteers, like Smith herself.

“I would estimate that the entire fediverse is developed off of the backs of maybe, at best, 100 engineers,” she said. “All of whom are either low paid, underpaid, or unpaid, who are trying to build software, and at the same time, are supporting the userbase of monthly active users in the range of 1.1 million to 7.4 million.”



48-hour dash: Race against the clock to save $1,000 on Disrupt 2024

TechCrunch Disrupt 2024

Tick-tock! Only 48 hours remain to snag your discounted tickets to Disrupt 2024! Shift into high gear and save up to a whopping $1,000 by seizing this opportunity before the clock strikes 11:59 p.m. PT on Friday, March 15.

Picture it: Come October, you’ll be rubbing elbows with over 10,000 of the startup world’s elite at the ultimate tech extravaganza in San Francisco. Don’t miss your chance to dive into the cutting edge of technology, all under one massive roof.

Choose your Disrupt pass

Ready to rev up your savings engine? Secure your Attendee, Founder, or Investor pass now and watch those dollars stack up in your pocket. Students and nonprofits, we’ve got you covered with heavily discounted passes too! Choose the ticket type that aligns with your function and unlock exclusive networking and session content! Learn more about the different ticket types and access here.

Want to speak at Disrupt 2024?

The call for content applications to showcase your groundbreaking ideas at TechCrunch Disrupt 2024 is now open through April 26. We’re on the hunt for the next wave of tech pioneers to join us on our breakout or roundtable stages to share their insights, experiences, and innovations with the world. Whether you’re a seasoned entrepreneur, a rising star in the startup scene, or an industry expert pushing the boundaries of what’s possible, we want to hear from you. Submit your application now and let your voice be heard at TechCrunch Disrupt 2024!

Is your company interested in sponsoring or exhibiting at TechCrunch Disrupt 2024? Contact our sponsorship sales team by filling out this form.


US DOJ's blockbuster lawsuit against Apple is a headline grabber but poses limited near-term impact

Apple retail store, exterior

Image Credits: Apple

The U.S. Department of Justice filed a lawsuit against Apple Thursday, accusing the company led by CEO Tim Cook of engaging in anticompetitive business practices. The allegations include claims that Apple prevents competitors from accessing certain iPhone features and that the company’s actions impact the “flow of speech” through its streaming service, Apple TV+.

However, even if the DOJ proves any of the allegations, it is highly unlikely that Apple will face material changes for years, as history shows that such lawsuits often take a significant amount of time to reach trial, let alone a resolution. The DOJ’s ongoing case against Google, filed in 2020, only went to trial in 2023, with no remedies or financial implications expected for up to two more years.

This is not the first time Apple has faced legal action from the DOJ. In 2012, the agency sued Apple for conspiring with publishers to increase e-book prices, a lawsuit that was not settled until 2016.

“Precedents suggest that resolution of the complaint will take three to five years, including appeals,” Bernstein analysts wrote in a note.

Morgan Stanley analysts said Friday that the current lawsuit could also favor Apple, as many similar allegations were already ruled on by the judge in the Apple v. Epic case, who found that Apple does not violate antitrust laws. The DOJ filing also makes only passing mention of Apple’s $10 billion-plus search deal with Google and doesn’t cite the App Store as one of its five principal examples of monopolistic behavior.

Previous major antitrust cases. Image Credits: Bernstein

Bernstein analysts added, “While the DoJ’s charges are focused on iPhone, we do not see likely remediation as materially impacting Apple financially or undermining the iPhone franchise: worst case, Apple pays a fine, and loosens restrictions for competition across the iOS platform, which we believe will have limited impact on iPhone user retention or on Services revenues.”

All of which leads Morgan Stanley analysts to conclude that the DOJ’s lawsuit creates “more of a headline risk than a near-term event risk” for Apple.

They added:

Said differently, yes, this lawsuit creates a stock overhang, but the market has a short term memory and in our view, fundamentals are more likely to drive Apple’s stock price over the next 12 months (and several years), rather than this lawsuit. We can cite a number of historical instances where companies in the thick of litigation threatening their core product/differentiating value proposition have outperformed despite the legal overhang: 1) Apple/Epic, where the stock outperformed by 15 points in the 18 months following Epic’s first legal filing threatening App Store take rates in August 2020, and 2) USA vs. Google, where the stock has nearly doubled since the DOJ first announced its investigation into Alphabet’s search practices. Our point being, regulation/litigation is a greater longterm tail risk for Apple than it has been historically, but the underlying drivers of the stock for the foreseeable future will almost certainly be fundamentals-based, especially given this lawsuit might not be resolved until at least 2028 (or even 2030) based on past cases.


The DOJ's case against Apple adds to a growing pile of antitrust problems for Cupertino

apple-ghost-logo

Image Credits: Bryce Durbin / TechCrunch

On home turf, Apple has enjoyed many years of relatively light regulatory scrutiny compared to its Big Tech peers. The U.S. Department of Justice (DOJ) opened a monopoly case against Google back in October 2020, for instance, and followed with a second antitrust case targeting Google’s adtech at the start of last year. The FTC, meanwhile, has been pursuing an antitrust case against Meta over a similar timeframe. And who could forget Microsoft’s Windows-era tango with U.S. antitrust enforcers?

Thursday’s DOJ antitrust suit, which accuses Apple of being a monopolist in the U.S. and high-end smartphone markets and charges the iPhone maker with anticompetitive exclusion over a slew of restrictions it applies to iOS developers and users, shows the company’s honeymoon period with local law enforcers is well and truly over.

But it’s important to note that Apple has already faced competition scrutiny and interventions in a number of other markets. More international trouble also looks to be brewing for the smartphone giant in the weeks and months ahead, especially as the European Union revs the engines of its recently rebooted competition rules.

Read on for our analysis of what’s shaping up to be a tough year for Apple, with a range of antitrust activity bearing down on its mobile business.

Antitrust trouble in paradise

Earlier this month, European Union enforcers hit Apple with a fine of close to $2 billion in a case linked to long-running complaints made by music streaming platform Spotify, dating back to at least 2019.

The decision followed several years of investigation — and some revisions to the EU’s theory of harm. Most notably, last year the bloc dropped an earlier concern related to Apple mandating use of its in-app payment tech, to concentrate on so-called anti-steering rules.

Under its revised complaint, the Commission found Apple had breached the bloc’s competition laws for music streaming services on its mobile platform, iOS, by applying anti-steering provisions to these apps, meaning they were unable to inform their users of cheaper offers elsewhere.

The EU framed Apple’s actions in this case as harmful to consumers, who it contends lost out on potentially cheaper and/or more innovative music services as a result of restrictions the iPhone maker imposed on the App Store. So the case ended up being not about classically exclusionary business conduct but about “unfair trading conditions,” as the bloc applied a broader theory of consumer harm and essentially sanctioned Apple for exploiting iOS users.

Announcing the decision earlier this month, EVP and competition chief Margrethe Vestager summed up its conclusions: “Apple’s rules ended up in harming consumers. Critical information was withheld so that consumers could not effectively use or make informed choices. Some consumers may have paid more because they were unaware that they could pay less if they subscribed outside of the app. And other consumers may not have managed at all to subscribe to their preferred music streaming provider because they simply couldn’t find it.

“The Commission found that Apple’s rules result in withholding key information on prices and features of services from consumers. As such, they are neither necessary nor proportionate for the provision of the App Store on Apple’s mobile devices. We therefore consider them to be unfair trading conditions as they were unilaterally imposed by a dominant company capable of harming consumers’ interest.”

The penalty the EU imposed on Apple is notable, as the lion’s share of the fine was not based on direct sales — music streaming on iOS is a pretty tiny market, relatively speaking. Rather, enforcers added what Vestager referred to as a “lump sum” (a full €1.8 billion) explicitly to have a deterrent effect. The basic fine (i.e., the portion calculated on revenues) was just €40 million. But she argued a penalty of a few million euros would have amounted to a “parking ticket” for a company as wealthy as Apple. So the EU found a way to impose a more substantial sanction.

The bloc’s rules for calculating antitrust fines allow for adjustments to the basic amount, based on factors like the gravity and length of the infringement, or aggravating circumstances. EU enforcers also have leeway to impose symbolic fines in some cases.

Exactly which of these rules the Commission relied upon to ratchet up the penalty on Apple isn’t clear. But what is clear is that the EU is sending an unequivocal message to the iPhone maker — a deliberate shot across the bow — that the era of relatively light-touch antitrust enforcement is over.

This same message is essentially what the DOJ came to tell the world this week.

During a March 4 press conference on the EU Apple decision, Vestager conceded such a deterrent penalty is rare in this type of competition abuse case — noting it’s more often used in cartel cases. But, asked during a Q&A with journalists whether the sanction for user exploitation marks a policy shift for the bloc’s competition enforcers, she responded by saying: “I think we have an obligation to keep developing how we see our legal basis.”

By way of example, she pointed to discussion about the need for merger reviews to factor in harm to innovation and choice — that is, not just look narrowly at impact on prices. “If you look at our antitrust cases, I think it’s also very important that we see the world as it is,” she added, going on to acknowledge competition enforcers must ensure their actions are lawful, of course, but stressing their duty is also to be “relevant for customers in Europe.”

Vestager’s remarks make it clear the EU’s competition machinery is shifting its modus operandi, moving to a place where it’s not afraid to make broader and more creative assessments of complaints in order to adapt to changed times. The EU Digital Markets Act (DMA) is, in one sense, a big driver here: The ex ante competition reform, proposed by the Commission at the end of 2020, was drafted in response to complaints that classic competition enforcement couldn’t move quickly enough to prevent Big Tech abusing its market power. So the underlying impetus is exactly the problem of tipped digital markets and what to do about them. Which brings us right back to Apple.

It’s no accident whole sections of the DMA read as if they’re explicitly targeted at the iPhone maker; large portions of the regulation absolutely are. Spotify and other app developers’ gripes about rent-gouging app stores have clearly bent ears in Brussels and found their way into what has been, for just a few weeks now, a legally enforceable text across the EU. Hence the requirements on designated mobile gatekeepers to allow things like app sideloading; to not block alternative app stores or browsers; to deal fairly with business users; and to let consumers delete default apps, among other highly specific behavioral requirements.

The anti-steering restrictions Apple applied to music streaming apps were prohibited in the EU on March 4, when Vestager issued her enforcement decision on that case. But literally a few days later — by March 8 — Apple was banned from applying anti-steering restrictions to any iOS apps in the EU as the DMA compliance deadline expired.

This is the new world order being imposed on Cupertino in Europe. And it’s far more significant than any one fine (even a penalty of nearly $2 billion).

The bloc has taken other actions against Apple, too. It was already investigating Apple Pay back in 2020 — one obvious area of overlap with the DOJ case, as colleagues noted yesterday.

In January, Apple offered concessions aimed at resolving EU enforcers’ concerns about how it operates NFC payments and mobile wallet tech on iOS. These included proposing letting third party mobile wallet and payment service providers gain the necessary access to iOS tech to be able to offer rival payment services on Apple’s mobiles free of charge (and without being forced to use its own payment and wallet tech). Apple also pledged to provide access to additional features which help make payments on iOS more seamless (such as access to its Face ID authentication method). The company also pledged to play fair in the criteria applied for granting NFC access to third parties.

U.S. competition enforcers have a lot of similar concerns about Apple’s behavior in this area. And it’s notable that their filing makes mention of how Apple is opening up Apple Pay in Europe. (“There is no technical limitation on providing NFC access to developers seeking to offer third-party wallets,” runs para 115 of the DOJ complaint. “For example, Apple allows merchants to use the iPhone’s NFC antenna to accept tap-to-pay payments from consumers. Apple also acknowledges it is technically feasible to enable an iPhone user to set another app (e.g. a bank’s app) as the default payment app, and Apple intends to allow this functionality in Europe.”)

The obvious subtext here is: Why should iOS developers and users in Europe be getting something iOS developers and users in the U.S. are not?

Remember that, as we dive into other regulatory action targeting Apple overseas. Because as the EU enforces its shiny new behavioral rulebook on Apple, forcing the company to unlock and (regionally) open up different aspects of its ecosystem — from allowing non-WebKit-based browsers to letting iOS users sideload apps — U.S. government lawyers may well find other reasons to nitpick the iPhone maker’s more locked down playbook on home turf.

What the bloc likes to refer to as the “Brussels effect,” where an EU priority on law-making gives it a chance to set the global weather on regulation in strategic areas — such as digital technologies like AI or, indeed, platform power — could exert a growing influence on antitrust enforcement across the pond. Especially if there’s increasing divergence of opportunity being made available on major tech platforms as the DMA drives greater interoperability on Big Tech, and uses data portability mandates as a flywheel for encouraging service switching and multi-homing. (The EU missed a trick on driving messaging interoperability on Apple’s iMessage, though, after last month deciding against designating it a DMA core platform service.)

It’s hardly a stretch to say the U.S. is unlikely to be happy to watch its citizens and developers getting less freedom on iPhones than people in Europe. The land of the free won’t like that second-class feeling one bit.

EU enforcers have yet to confirm whether Apple’s offer on Apple Pay settles their concerns. But they are now engaged in a wider review of its entire DMA compliance plan. Last fall, Apple was designated under the DMA as a so-called “gatekeeper” for iOS, the App Store and its Safari browser. So multiple aspects of how it operates these platforms are under review. Formal investigations may soon follow — with some predicting DMA probes are likely, especially where criticisms persist. (And Apple appears to be the leading contender among the six designated gatekeepers for attracting claims of “malicious compliance” so far, followed by Meta and Google.)

Key here will be what the EU makes of Apple’s decision to respond to the new law by unbundling the fee structure it applies on iOS — introducing a new “core technology” fee, as it calls the charge it levies on apps that opt into its DMA-amended T&Cs (€0.50 for each first annual install per year over a 1 million threshold, for apps distributed outside its App Store).
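
Taken at face value, the fee terms described above reduce to simple arithmetic. Here is a minimal sketch (the install counts are invented for illustration):

```python
def core_technology_fee(first_annual_installs: int,
                        fee_per_install: float = 0.50,
                        free_threshold: int = 1_000_000) -> float:
    # Per the terms described above: EUR 0.50 for each first annual install
    # per year beyond the first 1 million.
    billable = max(0, first_annual_installs - free_threshold)
    return billable * fee_per_install

for installs in (900_000, 3_000_000, 10_000_000):
    fee = core_technology_fee(installs)
    print(f"{installs:>10,} first annual installs -> EUR {fee:>12,.2f} per year")
```

An app that stays under 1 million installs owes nothing; one with 3 million first annual installs owes €1 million a year; one with 10 million owes €4.5 million a year. That step change is central to developers’ complaints, described below, that the fee acts as a deterrent.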

If you look at the text of the DMA, it does not explicitly regulate gatekeeper pricing. Nor are in-scope app store operators literally banned from charging fees. But they do need to comply with the regulation’s requirement to apply FRAND (fair, reasonable and non-discriminatory) terms to business users.

What that means for compliance remains to be seen: Apple is essentially trying to compensate for forced reductions in its usual platform take, i.e., being required to open up in ways that will enable developers to avoid its App Store fees, by devising a new fee it claims reflects the value developers get from access to its technologies.

A coalition of Apple critics, including Spotify and Epic Games, is continuing to lobby loudly against Apple’s gambit.

In an open letter at the start of this month, they suggested the new fee was designed to act as a deterrent, arguing it will prevent developers from even signing up to Apple’s revised T&Cs (which they must accept to tap into the DMA entitlements, per Apple’s rule revisions). “Apple’s new terms not only disregard both the spirit and letter of the law, but if left unchanged, make a mockery of the DMA and the considerable efforts by the European Commission and EU institutions to make digital markets competitive,” they fumed.

The EU is sounding sympathetic to this concern. In remarks to Reuters earlier this week, Vestager fired another shot across Apple’s bow, saying she was taking “a keen interest” in the new fee structure and in the risk that it “will de facto not make it in any way attractive to use the benefits of the DMA,” as she put it. She added that this is “the kind of thing” the Commission will be investigating.

Behind the scenes, Commission enforcers may well already be applying pressure on Apple to drop the fee — although it’s notable that, so far, Apple hasn’t budged.

It has, though, made a bunch of concessions in other areas related to DMA compliance, sometimes under public EU pressure. These include reversing a decision to block progressive web apps (PWAs) in Europe (albeit, that always looked like a retaliatory temper tantrum over DMA requirements to open up to non-WebKit browser engines); making a few criteria concessions following developer complaints; reversing a decision to terminate Epic Games’ developer account; and announcing it will allow sideloading of apps in the coming weeks and months, after its initial proposal took a narrower interpretation of the law’s requirements there.

A cynic might suggest this is all part of Apple’s game-plan for avoiding damage to its core iOS business model by tossing the enforcers a few bones in the hopes they’ll be satisfied it’s done enough.

Certainly, it seems unlikely Apple will voluntarily abandon the new core fee. It’s also unlikely the usual suspect developers will stop screaming about unfair Apple fees. So it will probably fall to the Commission to wade in, investigate and formally lay down the law in this area. That is, after all, the task the bloc has set itself.

While the DOJ’s complaint against Apple mainly focuses on a few distinct areas — such as restrictions imposed on super apps, mobile cloud streaming, cross-platform messaging, payment tech and third party smartwatches — it isn’t silent on fees. In the filing it links Apple’s “shapeshifting rules and restrictions” to an ability to “extract higher fees”, in addition to a range of other competition-chilling effects. The DOJ also lists one of the aims of its case as “reducing fees for developers”.

If the EU ends up ordering Apple to ditch its unbundled core tech fee it could pass the baton back to U.S. antitrust enforcers to dial up their own focus on Apple’s fees.

The Commission could move quickly here, too. EU officials have talked in terms of DMA enforcement timescales being a matter of “days, weeks and months”. So corrective action should not take years (but absolutely expect the inevitable legal appeals to grind through the courts at the slower cadence).

On the opening of a non-compliance probe, the DMA allows up to 12 months for the market investigation, with up to six months for reporting preliminary conclusions. With that timeframe in play — and given the whole raison d’être of the regulation is to empower EU enforcers to deliver faster and more effective interventions — it’s possible that a draft verdict on the legality of Apple’s core technology fee could be pronounced later this year, if the EU moves at pace to open an investigation.

The DMA also furnishes the Commission with interim measures powers, giving enforcers the ability to act ahead of formal non-compliance findings — if they believe there’s “urgency due to the risk of serious and irreparable damage for business users or end users of gatekeepers”.

So, again, 2024 could deliver a lot more antitrust pain for Apple. (Reminder: Penalties for infringements of the DMA can scale up to 10% of global annual turnover or 20% for repeat offences.)

Elsewhere in Europe, German competition authorities designated the iPhone maker as subject to their own domestic ex ante competition reform back in April 2023 — a status that applies to its business in that market until at least 2028. And since mid-2022, the German authority has been examining Apple’s requirement that third-party apps obtain permission for tracking. So the Federal Cartel Office could force changes to Apple’s practices there in the near term if it concludes they’re harming competition.

In recent years, the iPhone maker has also had to respond to antitrust restrictions in South Korea on its in-app payment commissions, after the country passed a 2021 law targeting app store restrictions. Antitrust authorities in India have also been investigating Apple’s practices in this area since late 2021.

Looking a little further ahead, antitrust trouble looks to be brewing for Apple in the U.K., too, where the competition watchdog has spent years scrutinizing how it operates its mobile app store — concluding in a final report in mid-2022 that there are substantive concerns. The U.K. Competition and Markets Authority (CMA) has since moved on to probes of Apple’s restrictions on mobile web browsers and cloud gaming, which remain ongoing.

Almost a year ago, the U.K. government announced it would press ahead with its own long-planned ex ante competition reform, too. This future law will mean the CMA’s Digital Markets Unit will be able to proactively apply bespoke rules to tech giants with so-called “strategic market status,” rather than enforcers having to first undertake a long investigation to prove abuse.

Apple is all but certain to fall within scope of the planned U.K. regime — so regional restrictions on its business look sure to keep dialing up.

The planned U.K. law may mirror elements of the EU’s DMA, as the CMA has suggested it could be used to ban self-preferencing, enforce interoperability and data access/functionality requirements, and set fairness mandates for business terms. But the U.K. regime is not a carbon copy of the EU approach and looks set to give domestic enforcers more leeway to tailor interventions per platform. That means there’s a prospect of an even tighter operational straitjacket being applied to Apple’s U.K. business in the years ahead — and zero prospect of a letup in the workload for Apple’s in-house lawyers.



Uber Eats courier's fight against AI bias shows justice under UK law is hard won

Uber Eats bike courier

Image Credits: Jakub Porzycki/NurPhoto / Getty Images

On Tuesday, the BBC reported that Uber Eats courier Pa Edrissa Manjang, who is Black, had received a payout from Uber after “racially discriminatory” facial recognition checks prevented him from accessing the app, which he had been using since November 2019 to pick up jobs delivering food on Uber’s platform.

The news raises questions about how fit U.K. law is to deal with the rising use of AI systems. In particular, there is a lack of transparency around automated systems rushed to market with a promise of boosting user safety and/or service efficiency, systems that risk blitz-scaling individual harms even as achieving redress for those affected by AI-driven bias can take years.

The lawsuit followed a number of complaints about failed facial recognition checks since Uber implemented the Real Time ID Check system in the U.K. in April 2020. Uber’s facial recognition system — based on Microsoft’s facial recognition technology — requires the account holder to submit a live selfie, which is checked against a photo of them held on file to verify their identity.
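
Uber and Microsoft haven’t published the system’s internals, but selfie checks of this kind typically reduce each face image to an embedding vector and compare the two against a similarity threshold. Below is a heavily simplified sketch of that general pattern (not Uber’s or Microsoft’s actual code), with random vectors standing in for real face embeddings:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(selfie_embedding: np.ndarray, reference_embedding: np.ndarray,
           threshold: float = 0.7) -> bool:
    # Pass the check only if the two face embeddings are similar enough.
    return cosine_similarity(selfie_embedding, reference_embedding) >= threshold

# Toy demo: random vectors stand in for embeddings a face model would produce.
rng = np.random.default_rng(1)
reference = rng.normal(size=128)                           # photo on file
same_person = reference + rng.normal(scale=0.3, size=128)  # similar embedding
stranger = rng.normal(size=128)                            # unrelated embedding
print(verify(same_person, reference))  # True: embeddings are close
print(verify(stranger, reference))     # False: embeddings are dissimilar
```

The threshold is the crux: set it too strictly and legitimate users get locked out, and face recognition error rates have repeatedly been shown to vary across demographic groups, which is how a string of “mismatches” can become a discrimination claim.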

Failed ID checks

Per Manjang’s complaint, Uber suspended and then terminated his account following a failed ID check and subsequent automated process, claiming to find “continued mismatches” in the photos of his face he had taken for the purpose of accessing the platform. Manjang filed legal claims against Uber in October 2021, supported by the Equality and Human Rights Commission (EHRC) and the App Drivers & Couriers Union (ADCU).

Years of litigation followed, with Uber failing in its attempts to have Manjang’s claim struck out or to have a deposit ordered as a condition of continuing with the case. The tactic appears to have contributed to stringing out the litigation, with the EHRC describing the case as still in “preliminary stages” in fall 2023, and noting that it shows “the complexity of a claim dealing with AI technology.” A final hearing had been scheduled for 17 days in November 2024.

That hearing won’t take place after Uber offered — and Manjang accepted — a payment to settle, meaning fuller details of what exactly went wrong and why won’t be made public. Terms of the financial settlement have not been disclosed, either. Uber did not provide details when we asked, nor did it offer comment on exactly what went wrong.

We also contacted Microsoft for a response to the case outcome, but the company declined comment.

Despite settling with Manjang, Uber is not publicly accepting that its systems or processes were at fault. Its statement about the settlement denies courier accounts can be terminated as a result of AI assessments alone, as it claims facial recognition checks are back-stopped with “robust human review.”

“Our Real Time ID check is designed to help keep everyone who uses our app safe, and includes robust human review to make sure that we’re not making decisions about someone’s livelihood in a vacuum, without oversight,” the company said in a statement. “Automated facial verification was not the reason for Mr Manjang’s temporary loss of access to his courier account.”

Clearly, though, something went very wrong with Uber’s ID checks in Manjang’s case.

Pa Edrissa Manjang (Photo: Courtesy of ADCU)

Worker Info Exchange (WIE), a platform workers’ digital rights advocacy organization which also supported Manjang’s complaint, managed to obtain all his selfies from Uber, via a Subject Access Request under U.K. data protection law, and was able to show that all the photos he had submitted to its facial recognition check were indeed photos of himself.

“Following his dismissal, Pa sent numerous messages to Uber to rectify the problem, specifically asking for a human to review his submissions. Each time Pa was told ‘we were not able to confirm that the provided photos were actually of you and because of continued mismatches, we have made the final decision on ending our partnership with you’,” WIE recounts in discussion of his case in a wider report looking at “data-driven exploitation in the gig economy”.

Based on details of Manjang’s complaint that have been made public, it looks clear that both Uber’s facial recognition checks and the system of human review it had set up as a claimed safety net for automated decisions failed in this case.

Equality law plus data protection

The case calls into question how fit for purpose U.K. law is when it comes to governing the use of AI.

Manjang was finally able to get a settlement from Uber via a legal process based on equality law — specifically, a discrimination claim under the U.K.’s Equality Act 2010, which lists race as a protected characteristic.

Baroness Kishwer Falkner, chairwoman of the EHRC, said in a statement that she was critical of the fact that the Uber Eats courier had to bring a legal claim “in order to understand the opaque processes that affected his work.”

“AI is complex, and presents unique challenges for employers, lawyers and regulators. It is important to understand that as AI usage increases, the technology can lead to discrimination and human rights abuses,” she wrote. “We are particularly concerned that Mr Manjang was not made aware that his account was in the process of deactivation, nor provided any clear and effective route to challenge the technology. More needs to be done to ensure employers are transparent and open with their workforces about when and how they use AI.”

U.K. data protection law is the other relevant piece of legislation here. On paper, it should be providing powerful protections against opaque AI processes.

The selfie data relevant to Manjang’s claim was obtained using data access rights contained in the U.K. GDPR. If he had not been able to obtain such clear evidence that Uber’s ID checks had failed, the company might not have opted to settle at all. Proving a proprietary system is flawed without letting individuals access relevant personal data would further stack the odds in favor of much better-resourced platforms.

Enforcement gaps

Beyond data access rights, powers in the U.K. GDPR are supposed to provide individuals with additional safeguards, including against automated decisions with a legal or similarly significant effect. The law also demands a lawful basis for processing personal data, and encourages system deployers to be proactive in assessing potential harms by conducting a data protection impact assessment. That should force further checks against harmful AI systems.

However, enforcement is needed for these protections to have effect — including a deterrent effect against the rollout of biased AIs.

In the U.K.’s case, the relevant enforcer — the Information Commissioner’s Office (ICO) — has not stepped in to investigate Uber, despite complaints about its misfiring ID checks dating back to 2021.

Jon Baines, a senior data protection specialist at the law firm Mishcon de Reya, suggests “a lack of proper enforcement” by the ICO has undermined legal protections for individuals.

“We shouldn’t assume that existing legal and regulatory frameworks are incapable of dealing with some of the potential harms from AI systems,” he tells TechCrunch. “In this example, it strikes me…that the Information Commissioner would certainly have jurisdiction to consider both in the individual case, but also more broadly, whether the processing being undertaken was lawful under the U.K. GDPR.

“Things like — is the processing fair? Is there a lawful basis? Is there an Article 9 condition (given that special categories of personal data are being processed)? But also, and crucially, was there a solid Data Protection Impact Assessment prior to the implementation of the verification app?”

“So, yes, the ICO should absolutely be more proactive,” he adds, querying the lack of intervention by the regulator.

We contacted the ICO about Manjang’s case, asking it to confirm whether or not it’s looking into Uber’s use of AI for ID checks in light of complaints. A spokesperson for the watchdog did not directly respond to our questions but sent a general statement emphasizing the need for organizations to “know how to use biometric technology in a way that doesn’t interfere with people’s rights”.

“Our latest biometric guidance is clear that organisations must mitigate risks that come with using biometric data, such as errors identifying people accurately and bias within the system,” its statement also said, adding: “If anyone has concerns about how their data has been handled, they can report these concerns to the ICO.”

Meanwhile, the government is in the process of diluting data protection law via a post-Brexit data reform bill.

In addition, the government also confirmed earlier this year it will not introduce dedicated AI safety legislation at this time, despite Prime Minister Rishi Sunak making eye-catching claims about AI safety being a priority area for his administration.

Instead, it affirmed a proposal — set out in its March 2023 whitepaper on AI — in which it intends to rely on existing laws and regulatory bodies extending oversight activity to cover AI risks that might arise on their patch. One tweak to the approach it announced in February was a tiny amount of extra funding (£10 million) for regulators, which the government suggested could be used to research AI risks and develop tools to help them examine AI systems.

No timeline was provided for disbursing this small pot of extra funds. Multiple regulators are in the frame here: Last month, the U.K. secretary of state wrote to 13 regulators and departments asking them to publish an update on their “strategic approach to AI.” If the cash were split equally between bodies such as the ICO, the EHRC and the Medicines and Healthcare products Regulatory Agency, to name just three, each could receive less than £1 million to top up budgets for tackling fast-scaling AI risks.

Frankly, it looks like an incredibly low level of additional resource for already overstretched regulators if AI safety is actually a government priority. It also means there’s still zero cash or active oversight for AI harms that fall between the cracks of the U.K.’s existing regulatory patchwork, as critics of the government’s approach have pointed out before.

A new AI safety law might send a stronger signal of priority — akin to the EU’s risk-based AI harms framework that’s speeding toward being adopted as hard law by the bloc. But there would also need to be a will to actually enforce it. And that signal must come from the top.
