Elon Musk

Musk dodged Brazil's X ban by 'coincidence,' says Cloudflare CEO

Elon Musk

Image Credits: Axelle/Bauer-Griffin/FilmMagic / Getty Images

X went back online in Brazil earlier this week, three weeks after Elon Musk’s platform was blocked under orders from Brazil’s Supreme Court. That prompted Brazil’s top court to fine X Corp. nearly $1 million for every day the platform remained accessible in the country.

However, Cloudflare’s CEO Matthew Prince tells TechCrunch that X going back online in Brazil this week was all a “coincidence.”

“I don’t think anything about this change was intentional to overcome a block in Brazil,” said Prince in an interview with TechCrunch. “This was literally just [X] switching from one IT vendor to another IT vendor.”

Some months ago, Prince said, Cloudflare won a deal to provide X with cloud computing services in several regions across the globe, including Brazil. X had previously used Fastly, a competitor to Cloudflare, and the social media platform is currently in the process of rolling out that switch. Changing providers also changed IP addresses associated with X, which disrupted how Brazilian internet service providers were blocking the X platform.

“We have never talked with [X] about helping them get around the Brazilian ban,” said Prince. “They happened to transition a bunch of their traffic from Fastly over to us, especially in the Latin American region, over the last week.”

Prince describes this as a wild coincidence: his sales team won a deal and, as a result, ended up inadvertently “wading into some geopolitical Elon Musk vortex of craziness” months later. Some may find that a bit hard to believe, given that Elon Musk has already tried multiple avenues to skirt Brazil’s ban on X. Musk tried delivering X directly to Brazilians through his Starlink satellites earlier this month, but later backed down.

In a statement posted on its Global Government Affairs account, a spokesperson for X said the platform changed network providers when Brazil shut down X weeks ago, which disrupted its infrastructure throughout the rest of Latin America. So is the timing of all this truly a coincidence? You be the judge.

However, Brazilian regulators say Cloudflare has been extremely cooperative in helping to get X reblocked, according to The New York Times.

Brazil implemented its block by requiring ISPs to block traffic to certain IP addresses, so when X switched from Fastly to Cloudflare, the block was no longer in effect. However, Prince claims his company did not know this was going to happen, and he even says he doesn’t think X was actively trying to circumvent Brazil’s ban. He also knocked Brazil for using an insufficient strategy to block X.

“They chose to implement it in a way which is kind of kludgy, and very fragile,” said Prince. “That assumes that X, Twitter, or whatever we call it, will always be on that IP address… It changed because they switched to Cloudflare, but if X were trying to play games here, they could have switched their IP address very easily without switching to Cloudflare.”
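
To make Prince’s point concrete, here is a minimal, purely illustrative Python sketch of why an IP-based blocklist is fragile (this is not Brazil’s actual enforcement tooling, and the addresses and hostname are hypothetical placeholders): the moment a domain starts resolving to a new provider’s addresses, a stale list simply stops matching.

```python
import socket

# Hypothetical addresses standing in for the Fastly-era IPs an ISP might have listed.
BLOCKED_IPS = {"203.0.113.10", "203.0.113.11"}

def is_blocked(hostname: str) -> bool:
    """Return True if any address the hostname currently resolves to is on the blocklist."""
    _, _, current_ips = socket.gethostbyname_ex(hostname)
    return any(ip in BLOCKED_IPS for ip in current_ips)

# Once the domain resolves to a different provider's addresses (say, after a
# Fastly-to-Cloudflare migration), nothing matches the stale list and the block
# silently stops working unless regulators update it with the new IPs.
print(is_blocked("example.com"))
```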

More bad news for Elon Musk after X user's legal challenge to shadowban prevails

X logo impaling twitter bird logo

Image Credits: Bryce Durbin / TechCrunch

It’s shaping up to be a terrible, no good, really bad news month for the company formerly known as Twitter. Elon Musk’s X has just been hit with a first clutch of grievances by the European Union for suspected breaches of the bloc’s Digital Services Act — an online governance and content moderation rulebook that features penalties of up to 6% of global annual turnover for confirmed violations.

But that’s not the only high-level decision that hasn’t gone Musk’s way lately. TechCrunch has learned that earlier this month X was found to have violated a number of provisions of the DSA and the bloc’s General Data Protection Regulation (GDPR), a pan-EU privacy framework where fines can reach 4% of annual turnover, following legal challenges brought by an individual after X shadowbanned his account.

X has long been accused of arbitrary shadowbanning — a particularly egregious charge for a platform that claims to champion free speech.

PhD student Danny Mekić took action after he discovered X had applied visibility restrictions to his account in October last year. The company applied restrictions after he had shared a news article about an area of law he was researching, related to the bloc’s proposal to scan citizens’ private messages for child sexual abuse material (CSAM). X did not notify him that it had shadowbanned his account — which is one of the issues the litigation focused on.

Mekić only noticed his account had been hit with restrictions when third parties contacted him to say they could no longer see his replies or find his account in search suggestions.

After his attempts to contact X directly to rectify the issue proved fruitless, Mekić filed a series of legal claims against X in the Netherlands under the EU Small Claims process, alleging the company had infringed key elements of the DSA, including failing to provide him with a point of contact (Article 12) to deal with his complaints; and failing to provide a statement of reasons (Article 17) for the restrictions applied to his account.

Mekić is a premium subscriber to X so he also sued the company for breach of contract.

On top of all that, after realizing he had been shadowbanned Mekić sought information from X about how it had processed his personal data — relying on the GDPR to make these data access requests. The regulation gives people in the EU a right to request a copy of information held on them, so when X failed to provide the personal information requested he had grounds for his second case: filing claims for breach of the bloc’s data protection rules.

In the DSA case, in a ruling on July 5 the court found X’s Irish subsidiary (which is actually still called Twitter) to be in breach of contract and ordered it to pay compensation for the period Mekić was deprived of the service he had paid for (just $1.87 — but the principle is priceless).

The court also ordered X to provide Mekić, within two weeks, with a point of contact so he could communicate his complaints to the company, or face a fine of €100 per day.

On the DSA Article 17 complaint, Mekić also prevailed as the court agreed X should have sent him a statement of reasons when it shadowbanned his account. Instead he had to take the company to court to learn that an automated system had restricted his account after he shared a news article.

“I’m happy about that,” Mekić told TechCrunch. “There was a huge debate in the courtroom. Twitter said the DSA is not proportional and that shadowbans of complete accounts do not fall under DSA obligations.”

As a further kicker, the court deemed X’s general terms and conditions to be in breach of the EU’s Unfair Terms in Consumer Contracts Directive.

In the GDPR case, in which the court ruled on July 4, Mekić chalked up another series of wins. This case concerned the aforementioned data access rights but also Article 22 (automated decision-making) — which states that data subjects should not be subject to decisions based solely on automated processing where those decisions have legal or similarly significant effects.

The court agreed that the impact of X’s shadowban on Mekić was significant, finding it affected his professional visibility and potentially his employment prospects. The court therefore ordered X to provide him with meaningful information about the automated decision-making as required by the law within one month, along with the other personal information X has so far withheld, which Mekić had requested under GDPR data access rights.

If X continues to violate these data protection rules, the company is on the hook for fines of up to €4,000 per day.

X was also ordered to pay Mekić’s costs for both cases.

While the pair of rulings only concern individual complaints, they could have wider implications for enforcement of the DSA and the GDPR against X. The former is — as we’ve seen today — only just gearing up, as X gets stung with a first set of preliminary breach findings. But privacy campaigners have spent years warning the GDPR is being under-enforced against major platforms. And the strategic role core data protections should play in driving platform accountability remains far weaker than it could and should be.

“Bringing the claims was a final attempt to clarify my unjustified shadowban and get it removed,” Mekić told TechCrunch. “And, of course, I hope Twitter’s compliance with legal transparency obligations and low-threshold contact will improve to make it even better.”

“The European Commission seems to be very busy with investigations under the DSA. So far, regarding Twitter, the Commission seems to focus mainly on stricter content moderation. My appeal to the Commission is also to be mindful of the flip side: platforms should not overreach in their non-transparent content moderation practices,” he also told us.

“If you ask me, there is a simpler solution, namely, to curb algorithms on social media such as on Twitter, which are designed to maximise engagement and revenue and to bring back the chronological timelines of the heyday of Twitter and other social media platforms as standard.”

While the EU itself has a key role in enforcing the DSA’s rules on X, as it is designated a very large online platform (VLOP), the platform’s compliance with the wider general rules falls to a European member state-level oversight body: Ireland’s media regulator, Coimisiún na Meán.

Enforcement of the EU’s flagship data protection regime on Twitter/X typically falls to another Irish body, the Data Protection Commission (DPC), which is routinely accused of dragging its feet on investigating complaints about Big Tech.

Asked for information about its enforcement of various long-standing GDPR complaints against X, a spokesperson for the DPC said it could not provide a response by the time of publication.

Individuals bringing small claims against major platforms to try to get them to abide by pan-EU law is clearly suboptimal; there’s supposed to be a whole system of regulatory supervision to ensure compliance.

“On a side note, I did experience how much time and effort it takes to litigate in court,” said Mekić. “Despite the fact that in principle it can be done without a lawyer. Even so, you spend almost a year on it while the other party can outsource it to a battery of lawyers with near-infinite budgets and just ignore it in the meantime: indeed, I have never had direct contact with anyone from Twitter, they only communicate with me through lawyers.”

Asked whether he’s hopeful the outcome of his two cases will bring an end to X’s arbitrary shadowbanning for all EU users, Mekić said he doesn’t think his own success will be enough — regulatory enforcement is going to be needed for that.

“I hope so, but I’m afraid not,” he said. “There is little focus on the commercial motives behind shadowbans. If a user breaks a rule, you could temporarily block their account. That is transparent. But that also removes that user’s ad revenue for the platform. Shadowbans are a solution for that: the user is unaware of anything and continues to engage with and generate advertising revenue for the platform.”

“It would be a brave decision by social media platforms to stop applying shadow bans and only impose transparent, contestable restrictions on users. But that will presumably lead to loss of revenue. I hope Twitter will set other platforms a good example and inform users transparently about account restrictions, as required by the DSA. To do so, platforms do need to put their commercial intentions second,” said Mekić.

“It does surprise me that the Commission has not identified anything about the large-scale shadowbanning practices that users do not receive notifications about,” he added. “It happens daily on a large scale and is easier to prove than what they are focusing on now.”

X has been contacted for a response to the rulings.

Tesla Dojo: Elon Musk's big plan to build an AI supercomputer, explained

Image Credits: Bryce Durbin | TechCrunch

For years, Elon Musk has talked about Dojo — the AI supercomputer that will be the cornerstone of Tesla’s AI ambitions. It’s important enough to Musk that he recently said the company’s AI team is going to “double down” on Dojo as Tesla gears up to reveal its robotaxi in October. 

But what exactly is Dojo? And why is it so critical to Tesla’s long-term strategy?

In short: Dojo is Tesla’s custom-built supercomputer that’s designed to train its “Full Self-Driving” neural networks. Beefing up Dojo goes hand-in-hand with Tesla’s goal to reach full self-driving and bring a robotaxi to market. FSD, which is on almost 2 million Tesla vehicles today, can perform some automated driving tasks but still requires a human to be attentive behind the wheel. 

Tesla delayed the reveal of its robotaxi, which was slated for August, to October, but both Musk’s public rhetoric and information from sources inside Tesla tell us that the goal of autonomy isn’t going away.

And Tesla appears poised to spend big on AI and Dojo to achieve that feat. 

Tesla’s Dojo backstory

Elon Musk speaks at the Tesla Giga Texas manufacturing “Cyber Rodeo” grand opening party on April 7, 2022 in Austin, Texas.
Image Credits: Suzanne Cordeiro/AFP via Getty Images

Musk doesn’t want Tesla to be just an automaker, or even a purveyor of solar panels and energy storage systems. Instead, he wants Tesla to be an AI company, one that has cracked the code to self-driving cars by mimicking human perception. 

Most other companies building autonomous vehicle technology rely on a combination of sensors to perceive the world — like lidar, radar and cameras — as well as high-definition maps to localize the vehicle. Tesla believes it can achieve fully autonomous driving by relying on cameras alone to capture visual data and then use advanced neural networks to process that data and make quick decisions about how the car should behave. 

As Tesla’s former head of AI, Andrej Karpathy, said at the automaker’s first AI Day in 2021, the company is basically trying to build “a synthetic animal from the ground up.” (Musk had been teasing Dojo since 2019, but Tesla officially announced it at AI Day.)

Companies like Alphabet’s Waymo have commercialized Level 4 autonomous vehicles — which the SAE defines as a system that can drive itself without the need for human intervention under certain conditions — through a more traditional sensor and machine learning approach. Tesla has yet to produce an autonomous system that doesn’t require a human behind the wheel. 

About 1.8 million people have paid the hefty subscription price for Tesla’s FSD, which currently costs $8,000 and has been priced as high as $15,000. The pitch is that Dojo-trained AI software will eventually be pushed out to Tesla customers via over-the-air updates. The scale of FSD also means Tesla has been able to rake in millions of miles worth of video footage that it uses to train FSD. The idea there is that the more data Tesla can collect, the closer the automaker can get to actually achieving full self-driving. 

However, some industry experts say there might be a limit to the brute force approach of throwing more data at a model and expecting it to get smarter. 

“First of all, there’s an economic constraint, and soon it will just get too expensive to do that,” Anand Raghunathan, Purdue University’s Silicon Valley professor of electrical and computer engineering, told TechCrunch. Further, he said, “Some people claim that we might actually run out of meaningful data to train the models on. More data doesn’t necessarily mean more information, so it depends on whether that data has information that is useful to create a better model, and if the training process is able to actually distill that information into a better model.” 

Raghunathan said that, despite these doubts, the trend of more data appears to be here for the short term at least. And more data means more compute power needed to store and process it all to train Tesla’s AI models. That is where Dojo, the supercomputer, comes in. 

What is a supercomputer?

Dojo is Tesla’s supercomputer system that’s designed to function as a training ground for AI, specifically FSD. The name is a nod to the space where martial arts are practiced. 

A supercomputer is made up of thousands of smaller computers called nodes. Each of those nodes has its own CPU (central processing unit) and GPU (graphics processing unit). The former handles overall management of the node, and the latter does the complex stuff, like splitting tasks into multiple parts and working on them simultaneously. GPUs are essential for machine learning operations like those that power FSD training in simulation. They also power large language models, which is why the rise of generative AI has made Nvidia the most valuable company on the planet. 
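
As a rough illustration of that division of labor (a toy Python sketch, not Dojo’s or Nvidia’s actual software), a coordinating process can split one big job into chunks and hand them to parallel workers, which is the role GPUs play for the matrix-heavy math in training:

```python
from concurrent.futures import ProcessPoolExecutor

def partial_sum_of_squares(chunk):
    # Stand-in for the heavy numerical work a GPU would handle.
    return sum(x * x for x in chunk)

def main():
    data = list(range(1_000_000))
    # The coordinating process (the CPU's role) splits the task into parts...
    chunks = [data[i::8] for i in range(8)]
    # ...and the workers (the GPUs' role) process those parts simultaneously.
    with ProcessPoolExecutor(max_workers=8) as pool:
        results = pool.map(partial_sum_of_squares, chunks)
    print(sum(results))

if __name__ == "__main__":
    main()
```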

Even Tesla buys Nvidia GPUs to train its AI (more on that later). 

Why does Tesla need a supercomputer?

Tesla’s vision-only approach is the main reason it needs a supercomputer. The neural networks behind FSD are trained on vast amounts of driving data to recognize and classify objects around the vehicle and then make driving decisions. That means that when FSD is engaged, the neural nets have to collect and process visual data continuously at speeds that match the depth and velocity recognition capabilities of a human. 

In other words, Tesla means to create a digital duplicate of the human visual cortex and brain function. 

To get there, Tesla needs to store and process all the video data collected from its cars around the world and run millions of simulations to train its model on the data. 

Tesla appears to rely on Nvidia to power its current Dojo training computer, but it doesn’t want to have all its eggs in one basket — not least because Nvidia chips are expensive. Tesla also hopes to make something better that increases bandwidth and decreases latencies. That’s why the automaker’s AI division decided to come up with its own custom hardware program that aims to train AI models more efficiently than traditional systems. 

At that program’s core are Tesla’s proprietary D1 chips, which the company says are optimized for AI workloads. 

Tell me more about these chips

Ganesh Venkataramanan, former senior director of Autopilot hardware, presenting the D1 training tile at Tesla’s 2021 AI Day.
Image Credits: Tesla/screenshot of streamed event

Tesla is of a similar opinion to Apple in that it believes hardware and software should be designed to work together. That’s why Tesla is working to move away from the standard GPU hardware and design its own chips to power Dojo. 

Tesla unveiled its D1 chip, a silicon square the size of a palm, at AI Day in 2021. The D1 has been in production since at least May this year. The Taiwan Semiconductor Manufacturing Company (TSMC) is manufacturing the chips using 7 nanometer semiconductor nodes. The D1 has 50 billion transistors and a large die size of 645 square millimeters, according to Tesla. This is all to say that the D1 promises to be extremely powerful and efficient and to handle complex tasks quickly. 

“We can do compute and data transfers simultaneously, and our custom ISA, which is the instruction set architecture, is fully optimized for machine learning workloads,” said Ganesh Venkataramanan, former senior director of Autopilot hardware, at Tesla’s 2021 AI Day. “This is a pure machine learning machine.”

The D1 is still not as powerful as Nvidia’s A100 chip, though, which is also manufactured by TSMC using a 7 nanometer process. The A100 contains 54 billion transistors and has a die size of 826 square millimeters, so it performs slightly better than Tesla’s D1. 

To get a higher bandwidth and higher compute power, Tesla’s AI team fused 25 D1 chips together into one tile to function as a unified computer system. Each tile has a compute power of 9 petaflops and 36 terabytes per second of bandwidth, and contains all the hardware necessary for power, cooling and data transfer. You can think of the tile as a self-sufficient computer made up of 25 smaller computers. Six of those tiles make up one rack, and two racks make up a cabinet. Ten cabinets make up an ExaPOD. At AI Day 2022, Tesla said Dojo would scale by deploying multiple ExaPODs. All of this together makes up the supercomputer. 
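
Multiplying out the figures Tesla has given (a back-of-the-envelope sketch based only on the numbers above, not an official spec sheet) shows what a single ExaPOD works out to:

```python
# Building blocks as described above.
CHIPS_PER_TILE = 25
PETAFLOPS_PER_TILE = 9
TILES_PER_RACK = 6
RACKS_PER_CABINET = 2
CABINETS_PER_EXAPOD = 10

tiles_per_exapod = TILES_PER_RACK * RACKS_PER_CABINET * CABINETS_PER_EXAPOD
print(tiles_per_exapod)                        # 120 tiles per ExaPOD
print(tiles_per_exapod * CHIPS_PER_TILE)       # 3,000 D1 chips per ExaPOD
print(tiles_per_exapod * PETAFLOPS_PER_TILE)   # 1,080 petaflops, i.e. roughly 1.1 exaflops
```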

Tesla is also working on a next-gen D2 chip that aims to solve information flow bottlenecks. Instead of connecting the individual chips, the D2 would put the entire Dojo tile onto a single wafer of silicon. 

Tesla hasn’t confirmed how many D1 chips it has ordered or expects to receive. The company also hasn’t provided a timeline for how long it will take to get Dojo supercomputers running on D1 chips. 

In response to a June post on X that said: “Elon is building a giant GPU cooler in Texas,” Musk replied that Tesla was aiming for “half Tesla AI hardware, half Nvidia/other” over the next 18 months or so. The “other” could be AMD chips, per Musk’s comment in January. 

What does Dojo mean for Tesla?

Tesla’s humanoid robot Optimus Prime II at WAIC in Shanghai, China, on July 7, 2024.
Image Credits: Costfoto/NurPhoto / Getty Images

Taking control of its own chip production means that Tesla might one day be able to quickly add large amounts of compute power to AI training programs at a low cost, particularly as Tesla and TSMC scale up chip production. 

It also means that Tesla may not have to rely on Nvidia’s chips in the future, which are increasingly expensive and hard to secure. 

During Tesla’s second-quarter earnings call, Musk said that demand for Nvidia hardware is “so high that it’s often difficult to get the GPUs.” He said he was “quite concerned about actually being able to get steady GPUs when we want them, and I think this therefore requires that we put a lot more effort on Dojo in order to ensure that we’ve got the training capability that we need.” 

That said, Tesla is still buying Nvidia chips today to train its AI. In June, Musk posted on X: 

Of the roughly $10B in AI-related expenditures I said Tesla would make this year, about half is internal, primarily the Tesla-designed AI inference computer and sensors present in all of our cars, plus Dojo. For building the AI training superclusters, Nvidia hardware is about 2/3 of the cost. My current best guess for Nvidia purchases by Tesla are $3B to $4B this year.

“Inference compute” refers to the AI computations performed by Tesla cars in real time and is separate from the training compute that Dojo is responsible for.
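
The distinction matters because the two workloads look very different in code. Here is a minimal PyTorch-style sketch (illustrative only; Tesla’s actual networks and tooling are not public): training runs the forward pass, backpropagation and a weight update in the data center, while inference is a forward pass only, the kind of work the in-car computer does in real time.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))  # toy stand-in network
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Training step (the data-center/Dojo side): forward pass, loss, backward pass, weight update.
x, y = torch.randn(32, 8), torch.randint(0, 2, (32,))
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()
optimizer.step()

# Inference (the in-car side): a forward pass only, with gradients disabled for speed.
model.eval()
with torch.no_grad():
    prediction = model(torch.randn(1, 8)).argmax(dim=1)
```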

Dojo is a risky bet, one that Musk has hedged several times by saying that Tesla might not succeed. 

In the long run, Tesla could theoretically create a new business model based on its AI division. Musk has said that the first version of Dojo will be tailored for Tesla computer vision labeling and training, which is great for FSD and for training Optimus, Tesla’s humanoid robot. But it wouldn’t be useful for much else. 

Musk has said that future versions of Dojo will be more tailored to general-purpose AI training. One potential problem with that is almost all AI software out there has been written to work with GPUs. Using Dojo to train general-purpose AI models would require rewriting the software. 

That is, unless Tesla rents out its compute, similar to how AWS and Azure rent out cloud computing capabilities. Musk also noted during Q2 earnings that he sees “a path to being competitive with Nvidia with Dojo.”

A September 2023 report from Morgan Stanley predicted that Dojo could add $500 billion to Tesla’s market value by unlocking new revenue streams in the form of robotaxis and software services. 

In short, Dojo’s chips are an insurance policy for the automaker, but one that could pay dividends. 

How far along is Dojo?

Nvidia CEO Jensen Huang and Tesla CEO Elon Musk at the GPU Technology Conference in San Jose, California.
Image Credits: Kim Kulish/Corbis via Getty Images

Reuters reported last year that Tesla began production on Dojo in July 2023, but a June 2023 post from Musk suggested that Dojo had been “online and running useful tasks for a few months.”

Around the same time, Tesla said it expected Dojo to be one of the top five most powerful supercomputers by February 2024 — a feat that has yet to be publicly disclosed, leaving us doubtful that it has occurred.

The company also said it expects Dojo’s total compute to reach 100 exaflops in October 2024. (One exaflop is equal to 1 quintillion computer operations per second. To reach 100 exaflops, and assuming that one D1 can achieve 362 teraflops, Tesla would need more than 276,000 D1s, or around 320,500 Nvidia A100 GPUs.)
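
The chip counts follow directly from that arithmetic; a quick Python check (the per-A100 throughput is our assumption, chosen because roughly 312 teraflops reproduces the 320,500 figure, and is not stated in the text):

```python
import math

TARGET_FLOPS = 100e18   # 100 exaflops
D1_FLOPS = 362e12       # 362 teraflops per D1, per the figure above
A100_FLOPS = 312e12     # assumed per-A100 throughput (matches the ~320,500 estimate)

print(math.ceil(TARGET_FLOPS / D1_FLOPS))    # 276,244 -> "more than 276,000 D1s"
print(math.ceil(TARGET_FLOPS / A100_FLOPS))  # 320,513 -> "around 320,500 A100s"
```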

Tesla also pledged in January 2024 to spend $500 million to build a Dojo supercomputer at its gigafactory in Buffalo, New York.

In May 2024, Musk noted that the rear portion of Tesla’s Austin gigafactory will be reserved for a “super dense, water-cooled supercomputer cluster.”

Just after Tesla’s second-quarter earnings call, Musk posted on X that the automaker’s AI team is using the Tesla HW4 AI computer (renamed AI4), which is the hardware that lives on Tesla vehicles, in the training loop with Nvidia GPUs. He noted that the breakdown is roughly 90,000 Nvidia H100s plus 40,000 AI4 computers. 

“And Dojo 1 will have roughly 8k H100-equivalent of training online by end of year,” he continued. “Not massive, but not trivial either.”
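
Taken at face value (a rough share calculation that ignores the 40,000 AI4 computers, since Musk gave no H100-equivalent figure for them), that puts Dojo 1 at well under a tenth of the H100-class training compute Tesla expects to have online by year end:

```python
nvidia_h100s = 90_000
dojo1_h100_equivalent = 8_000

share = dojo1_h100_equivalent / (nvidia_h100s + dojo1_h100_equivalent)
print(f"{share:.1%}")  # ~8.2% of the combined H100-class fleet
```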

EU warns X over illegal content risks — Musk replies with Tropic Thunder insult meme

How is the European Union’s bid to get Elon Musk to follow its rules going? Judging by the memes, not well.

You may recall the X owner previously told his own advertisers to “go f— yourself,” so it’s perhaps little wonder he’s flirting with flipping the bird at Thierry Breton, the commissioner in charge of overseeing compliance with the EU’s Digital Services Act (DSA). Musk on Monday referenced a line from the film Tropic Thunder in which Tom Cruise, barely recognizable as studio exec Les Grossman, says (and then shouts): “Take a big step back AND LITERALLY F— YOUR OWN FACE!”

The same film contains another (heavily memed) line that Musk may be intending to remind his viewers of by injecting the comedy flick’s aura into this interaction with EU regulators: “Never go full re**rd.” (Note Musk’s emphatic use of the negative when he claims in the same post on X he would “NEVER do something so rude & irresponsible.” Ahem.)

There’s only negative revenue at stake here for X if it alienates the EU since the Commission has the power to issue penalties of up to 6% of global annual turnover for noncompliance with the DSA. The bloc already suspects X of breaking its online governance rulebook: In July, it reported preliminary findings for a subset of issues it’s been investigating X for, saying it found the platform’s blue check system to be an illegal dark pattern and that X also has major transparency problems.

A second DSA investigation on X has been ongoing since December concerning how it responds to illegal content and risks related to the spread of disinformation, including related to the Israel-Hamas war.

More recently, following civic unrest in the U.K., the Commission has warned that the disinformation being spread on X related to the violent disturbances in parts of the U.K. may be factored into its DSA enforcement. So this ongoing wide-ranging investigation clearly amps up the regulatory risk for X in the EU.

Still, maybe Musk figures he’s done such strong work to crater X’s revenue (by alienating advertisers, for example) that the prospect of losing a chunk of what’s left to EU fines isn’t very scary anymore. That’s billionaire logic, baby! (Er, never go full billionaire?)

Breton’s open letter to Musk, posted to X late Monday local time ahead of a livestreamed interview on X between Musk and former U.S. president Donald Trump, is probably not going to help the Commission’s propaganda war against the erratic billionaire, though.

First up, the letter reads like a first draft in sore need of a heavy edit. There are so many words it’s not immediately clear what the EU’s point is. That, ironically, risks the letter being misinterpreted as an attempt to censor speech on X.

Second, there seems to be a rather bizarre conflation of events by the Commission: Breton starts the letter saying he’s writing to Musk in relation both to “recent events in the U.K.” and to the upcoming Trump interview. If there’s an attempt to imply a link between the two events, it’s not clear what the EU might think that is.

Inciting violence and hate speech is likely to be illegal content in all the EU markets where the DSA applies, whereas an interview with Trump might qualify as a very tedious listen, but the fact of it happening isn’t illegal in and of itself.

In essence, the EU missive is a reminder to Musk of his legal obligations under the DSA to mitigate risks on his platform related to the spread of illegal content, such as posts intending to incite hatred, violence and civic unrest; and in relation to the risks posed by disinformation that might cause societal harms, such as by fueling civic unrest or undermining national security.

Given the letter’s timing, perhaps the EU was worried Trump was going to talk about the U.K. riots and dogwhistle for “civil war,” as Musk did last week.

But no such thing happened, per Politico‘s account of the interview. Musk tried to get Trump to attack the EU over censorship but the effort fell flat, as Trump preferred to stick to his knitting and bash the EU over trade tariffs.

Notably, the letter warns Musk that his own account on X is under DSA regulation, making explicit reference to his personal reach on X “as a user with over 190 million followers.”

This is a clearer shot at Musk, letting him know the EU has seen how he’s been using his account to amplify divisive narratives around the U.K.’s civic unrest, and warning him to stop the regional rabble-rousing or face DSA consequences.

“[W]e are monitoring the potential risks in the EU associated with the dissemination of content that may incite violence, hate and racism in conjunction with major political — or societal — events around the world, including debates and interviews in the context of elections,” Breton wrote.

The EU commissioner further stipulated that “any negative effect of illegal content on X in the EU, which could be attributed to the ineffectiveness of the way in which X applies the relevant provisions of the DSA, may be relevant in the context of the ongoing proceedings and of the overall assessment of X’s compliance with EU law.”

Aside from firing back at Breton with insulting memes, Musk’s immediate response has been to accuse the EU of overreach by suggesting, via his interview with Trump, that it’s trying to censor the views of people outside the EU.

However, content on X is obviously visible to EU users, and therefore subject to the DSA — regardless of any political point scoring Musk may be engaged in here.

The EU’s letter to Musk contains a further sting in the form of a pointed reminder it could opt to use “interim measures” to crack down on noncompliance. Fines aren’t the only game in town — the DSA empowers the Commission to order changes on platforms aimed at countering urgent threats, such as demanding infringing content is taken down or even temporarily blocking access to an entire service.

So, basically, an EU-wide shutdown of X is what Musk is being reminded may yet come to pass if he doesn’t get with the bloc’s program and comply with the DSA.

For a self-declared free speech absolutist like Musk — whose stated ambition with X is to own the global town square — the threat of being shut out of a market of more than 450 million people might give him more pause than the prospect of being fined a few tens of millions of dollars. That, too, is billionaire logic.

The banks that loaned Musk $13B to buy Twitter might be having regrets

Elon Musk

Image Credits: Kevin Winter / Getty Images

X, formerly known as Twitter, looks like a pretty bad investment right about now.

As readers might recall, Elon Musk borrowed $13 billion from Morgan Stanley, Bank of America and five other major banks to help finance his $44 billion acquisition of the company. According to the WSJ, the deal has since become the worst merger-finance deal for banks since the 2008-2009 financial crisis.

Why? When banks lend money for takeovers, they usually sell that debt on to others, earning fees on the transaction. That hasn’t been possible with X because of its weak financials, so the loans have weighed the banks down, becoming, in industry parlance, “hung deals.”

The WSJ notes that the banks agreed to underwrite these loans “largely because the allure of banking the world’s richest person was too attractive to pass up.” Now, it looks like a costly mistake unless they can extract interest payments from X, plus a repayment of principal once the loans mature.
