WP Engine sends cease-and-desist letter to Automattic over Mullenweg's comments

WordPress hosting service WP Engine on Monday sent a cease-and-desist letter to Automattic after the latter’s CEO Matt Mullenweg called WP Engine a “cancer to WordPress” last week.

The notice asks Automattic and Mullenweg to retract their comments and stop making statements against the company.

WP Engine, which (like Automattic itself) commercializes the open-source WordPress project, also accused Mullenweg of threatening WP Engine before the WordCamp summit held last week.

“Automattic’s CEO Matthew Mullenweg threatened that if WP Engine did not agree to pay Automattic – his for-profit entity – a very large sum of money before his September 20th keynote address at the WordCamp US Convention, he was going to embark on a self-described ‘scorched earth nuclear approach’ toward WP Engine within the WordPress community and beyond,” the letter read.

“When his outrageous financial demands were not met, Mr. Mullenweg carried out his threats by making repeated false claims disparaging WP Engine to its employees, its customers, and the world,” the letter added.

The letter goes on to allege that Automattic last week started asking WP Engine to pay it “a significant percentage of its gross revenues – tens of millions of dollars in fact – on an ongoing basis” for a license to use trademarks like “WordPress.”

WP Engine defended its use of the “WordPress” trademark under fair use laws and said it was consistent with the platform’s guidelines. The letter also has screenshots of Mullenweg’s text messages to WP Engine’s CEO and board members that appear to state that Mullenweg would make the case to ban WP Engine from WordPress community events in his talk at WordCamp if the company did not accede to Automattic’s demands.

Automattic did not immediately respond to a request for comment.

Mullenweg, who co-created WordPress, last week criticized WP Engine for raking in profits without giving much back to the open source project, while also disabling key features that make WordPress such a powerful platform in the first place.

Last week, in a blog post, Mullenweg said WP Engine was contributing 47 hours per week to the “Five for the Future” investment pledge to contribute resources toward the sustained growth of WordPress. Comparatively, he said Automattic was contributing roughly 3,900 hours per week. He acknowledged that while these figures are just a “proxy,” there is a large gap in contribution despite both companies being of a similar size and each generating around half a billion dollars in revenue. (WP Engine pushes back against that characterization in its C&D letter.)

In a separate blog post, he also said WP Engine gives customers a “cheap knock-off” of WordPress.

Notably, Automattic invested in WP Engine in 2011, when the company raised $1.2 million in funding. Since then, WP Engine has raised over $300 million in equity, the bulk of which came from a $250 million investment from private equity firm Silver Lake in 2018.

DeepMind workers sign letter in protest of Google's defense contracts

At least 200 workers at DeepMind, Google’s AI R&D division, are displeased with Google’s reported defense contracts — and according to Time, they circulated a letter internally back in May to say as much.

The letter, dated May 16, says the undersigned are concerned by “Google’s contracts with military organizations,” citing articles about the tech giant’s contracts to supply AI and cloud computing services to the Israeli military.

“Any involvement with military and weapon manufacturing impacts our position as leaders in ethical and responsible AI, and goes against our mission statement and stated AI Principles,” the letter adds.

While the signatories represent a relatively small portion of the org’s overall staff, the memo hints at a culture clash between Google and DeepMind, which Google acquired in 2014 and whose tech Google pledged in 2018 would never be used for military or surveillance purposes.

Hundreds of AI luminaries sign letter calling for anti-deepfake legislation

Hundreds in the artificial intelligence community have signed an open letter calling for strict regulation of AI-generated impersonations, or deepfakes. While this is unlikely to spur real legislation (despite the House’s new task force), it does act as a bellwether for how experts lean on this controversial issue.

The letter, signed by more than 500 people in and adjacent to the AI field at time of publishing, declares that “deepfakes are a growing threat to society, and governments must impose obligations throughout the supply chain to stop the proliferation of deepfakes.”

The letter calls for the full criminalization of deepfake child sexual abuse materials (CSAM, aka child pornography), regardless of whether the figures depicted are real or fictional; criminal penalties in any case where someone creates or spreads harmful deepfakes; and obligations on developers to prevent harmful deepfakes from being made with their products in the first place, with penalties if their preventative measures prove inadequate.

Among the more prominent signatories of the letter are:

Jaron Lanier, Frances Haugen, Stuart Russell, Andrew Yang, Marietje Schaake, Steven Pinker, Gary Marcus, Oren Etzioni, Genevieve Smith, Yoshua Bengio, Dan Hendrycks and Tim Wu.

Also present are hundreds of academics from across the globe and many disciplines. In case you’re curious, one person from OpenAI signed, a couple from Google DeepMind, and, at press time, none from Anthropic, Amazon, Apple or Microsoft (except Lanier, whose position there is non-standard). Interestingly, the signatories are sorted in the letter by “Notability.”

This is far from the first call for such measures; in fact, they had been debated in the EU for years before being formally proposed earlier this month. Perhaps it is the EU’s willingness to deliberate and follow through that spurred these researchers, creators, and executives to speak out.

Or perhaps it is the slow march of KOSA (Kids Online Safety Act) toward acceptance — and its lack of protections for this type of abuse.

Or perhaps it is the threat of (as we have already seen) AI-generated scam calls that could sway the election or bilk naïve folks out of their money.

Or perhaps it is yesterday’s task force being announced with no particular agenda other than maybe writing a report about what some AI-based threats might be and how they might be legislatively restricted.

As you can see, there is no shortage of reasons for those in the AI community to be out here waving their arms around and saying, “Maybe we should, you know, do something?!”

Whether anyone will take notice of this letter is anyone’s guess — no one really paid attention to the infamous one calling for everyone to “pause” AI development, but of course this letter is a bit more practical. If legislators decide to take on the issue, an unlikely event given it’s an election year with a sharply divided Congress, they will have this list to draw from in taking the temperature of AI’s worldwide academic and development community.

Spotify, Epic Games and others pen letter to EC, claiming Apple has made a 'mockery' of the DMA

Epic Games, Spotify, Proton, 37signals and other developers had already signaled their displeasure with how Apple has chosen to adapt its rules to meet the requirements of the new EU regulation, the Digital Markets Act (DMA), calling it “extortion” and “bad-faith” compliance, among other things. Now those companies have formalized their complaints in a letter addressed to the European Commission, where they collectively argue that Apple has made a mockery of the new law and urge the EC to take “swift, timely, and decisive action against Apple” in order to protect developers.

Apple’s new DMA rules have been widely criticized by developers and tech companies, including Meta, Mozilla and Microsoft. Instead of introducing a new, more level playing field where developers could easily compete with Apple’s App Store, Apple found a way to comply with the letter of the regulation, but not its intent. Most notably, it introduced a Core Technology Fee for developers adopting its DMA terms, which requires apps distributed outside the App Store to still pay Apple €0.50 for each first annual install above a 1 million threshold. This was bad news for would-be rivals that had wanted to set up their own app stores or distribute their apps outside of Apple’s walls to avoid paying commissions.

In the new letter, 34 companies and associations across a variety of sectors are asking the EC to take action.

“Apple’s new terms not only disregard both the spirit and letter of the law, but if left unchanged, make a mockery of the DMA and the considerable efforts by the European Commission and EU institutions to make digital markets competitive,” it reads.

The letter goes on to point out where the companies believe Apple is non-compliant with the DMA, noting that Apple’s requirement that developers choose whether to opt into the DMA terms adds unnecessary complexity and confusion, since, it says, both sets of terms are non-compliant. Plus, because of the new fee structure, including the Core Technology Fee, the companies say it’s clear that few will agree to the DMA terms. While there has been much vocal criticism of the terms, at least one developer, MacPaw, recently announced it had accepted them in order to distribute its software subscription Setapp in the EU.

The companies also complain that Apple’s “scare screens,” designed to warn customers of the risks associated with transacting outside Apple’s App Store, will “mislead and degrade the user experience, depriving them of real choice and the benefits of the DMA.”

Finally, the letter argues that for the DMA to be effective, it needs to allow for alternative app stores and sideloading; the companies say Apple makes the former difficult, while its DMA rules don’t allow for the latter at all.

Apple, meanwhile, also published a whitepaper today that outlines its solutions to address the changes the DMA requires to commissions and payments. Here, it stresses the security and trust customers have with Apple and its emphasis on consumer privacy. In short, its position is that “Users should not be exposed to physical harm through iOS,” and that all its efforts with regard to DMA compliance are means of reducing any potential harms that users could be exposed to.

There are hints that Apple may be feeling the pressure, however: it also today reversed an earlier decision to block progressive web apps from operating normally on devices in the EU. The FT recently reported that the EC’s ruling on competition in the streaming music market will go against Apple, imposing a €500 million fine on the iPhone maker. Apple responded by sharing details of Spotify’s success on iOS, noting that its app has been installed more than 119 billion times across Apple devices, among other things.

In response to the companies’ letter, an EC spokesperson told TechCrunch that the six-month deadline for Big Tech gatekeepers, like Apple, was there for a reason.

“Once the compliance solutions are fully known next week, these need to be properly analyzed both by the Commission and stakeholders, in its completeness and not just based on a few announcements,” they noted, adding that the Commission is looking “very carefully” at how companies are complying.

Once it has full enforcement powers, the EC will “not hesitate to act,” they also said.

Hundreds of creators sign letter slamming Meta's limit on political content

If you haven’t been seeing much political content on Instagram lately, there’s a reason for that. In March, Instagram and Threads introduced a new default setting that limits the political content you see from people you’re not following.

Hundreds of creators, convened by GLAAD and Accountable Tech, have signed an open letter demanding that Instagram make the political content limit an opt-in feature, rather than on by default.

“With many of us providing authoritative and factual content on Instagram that helps people understand current events, civic engagement, and electoral participation, Instagram is thereby limiting our ability to reach people online to help foster more inclusive and participatory democracy and society during a critical inflection point for our country,” the letter reads.

The letter’s signatories include comedian Alok Vaid-Menon (1.3 million followers), Glee actor Kevin McHale (1.1 million), news account So Informed (3.1 million), activist Carlos Eduardo Espina (664,000), Under the Desk News (397,000) and other meme accounts, political organizers and entertainers.

Instagram’s definition of political content leaves a lot of room for interpretation, which stokes further concern among these creators. It describes political content as anything “potentially related to things like laws, elections, or social topics.”

The letter points out that this “endangers the reach of marginalized folks speaking to their own lived experience on Meta’s platforms” and limits the conversation around topics like climate change, gun control and reproductive rights.

For political creators, these limits can also impact their livelihood, since it will be harder to reach new audiences. While Instagram isn’t particularly lucrative (there’s no regular revenue share with creators), building a following on the platform can lead to other financial opportunities, like brand sponsorships.

As election season looms in the U.S., Instagram’s decision to distance itself from politics could seem like a way to do damage control — Meta has a less-than-stellar track record when it comes to its role in elections. But Meta could be creating even more problems by siloing its users into political echo chambers, where they’re never exposed to any information from people outside their existing circles.

“Removing political recommendations as a default setting, and consequently stopping people from seeing suggested political content poses a serious threat to political engagement, education, and activism,” the letter says.
