Stop playing games with online security, Signal president warns EU lawmakers

Signal messaging application President Meredith Whittaker.

Image Credits: PATRICIA DE MELO MOREIRA/AFP / Getty Images

A controversial European Union legislative proposal to scan the private messages of citizens in a bid to detect child sexual abuse material (CSAM) is a risk to the future of web security, Meredith Whittaker warned in a public blog post Monday. She’s the president of the not-for-profit foundation behind the end-to-end encrypted (E2EE) messaging app Signal.

“There is no way to implement such proposals in the context of end-to-end encrypted communications without fundamentally undermining encryption and creating a dangerous vulnerability in core infrastructure that would have global implications well beyond Europe,” she wrote.

The European Commission presented the original proposal for mass scanning of private messaging apps to counter the spread of CSAM online back in May 2022. Since then, Members of the European Parliament have united in rejecting the approach. They also suggested an alternative route last fall that would have excluded E2EE apps from scanning. However, the Council of the European Union, the legislative body made up of representatives of Member States’ governments, continues to push for strongly encrypted platforms to remain in scope of the scanning law.

The most recent Council proposal, which was put forward in May under the Belgian presidency, includes a requirement that “providers of interpersonal communications services” (aka messaging apps) install and operate what the draft text describes as “technologies for upload moderation”, per a text published by Netzpolitik.

Article 10a, which contains the upload moderation plan, states that these technologies would be expected “to detect, prior to transmission, the dissemination of known child sexual abuse material or of new child sexual abuse material.”

Last month, Euractiv reported that the revised proposal would require users of E2EE messaging apps to consent to scanning to detect CSAM. Users who did not consent would be prevented from using features that involve sending visual content or URLs, it also reported — essentially downgrading their messaging experience to basic text and audio.

Whittaker’s statement skewers the Council’s plan as an attempt to use “rhetorical games” to rebrand client-side scanning, the controversial technology that security and privacy experts argue is incompatible with the strong encryption underpinning confidential communications.

“[M]andating mass scanning of private communications fundamentally undermines encryption. Full stop,” she emphasized. “Whether this happens via tampering with, for instance, an encryption algorithm’s random number generation, or by implementing a key escrow system, or by forcing communications to pass through a surveillance system before they’re encrypted.”

“We can call it a backdoor, a front door, or ‘upload moderation’. But whatever we call it, each one of these approaches creates a vulnerability that can be exploited by hackers and hostile nation states, removing the protection of unbreakable math and putting in its place a high-value vulnerability.”
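To make the mechanics concrete: client-side scanning generally means fingerprinting a message’s content on the sender’s device and checking it against a database of known material before the content is ever encrypted. Below is a minimal, hypothetical Python sketch of that “scan before encrypt” flow. Everything in it is an illustrative assumption: the hash set, the function names and the use of plain SHA-256 (deployed systems use perceptual hashes such as PhotoDNA so that re-encoded or cropped copies still match). None of it is drawn from the EU draft text.

```python
import hashlib

# Hypothetical fingerprint database of known prohibited images.
KNOWN_HASHES = {
    hashlib.sha256(b"known bad image").hexdigest(),  # illustrative entry
}

def client_side_scan(attachment: bytes) -> bool:
    """Return True if the attachment matches the fingerprint database."""
    return hashlib.sha256(attachment).hexdigest() in KNOWN_HASHES

def send_message(attachment: bytes) -> str:
    # The scan runs on the *plaintext*, before any encryption happens.
    # This is the crux of the criticism: whoever controls the fingerprint
    # database decides what gets flagged, regardless of the E2EE layer.
    if client_side_scan(attachment):
        return "blocked: matched fingerprint database"
    ciphertext = bytes(b ^ 0x42 for b in attachment)  # toy stand-in for E2EE
    return f"sent {len(ciphertext)} encrypted bytes"

if __name__ == "__main__":
    print(send_message(b"known bad image"))  # blocked before encryption
    print(send_message(b"holiday photo"))    # encrypted and sent
```

The sketch illustrates Whittaker’s point: because the check necessarily runs on plaintext, the encryption applied afterwards no longer guarantees that only the sender and recipient can see a message’s content.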

Also hitting out at the revised Council proposal in a statement last month, Pirate Party MEP Patrick Breyer — who has opposed the Commission’s controversial message-scanning plan from the start — warned: “The Belgian proposal means that the essence of the EU Commission’s extreme and unprecedented initial chat control proposal would be implemented unchanged. Using messenger services purely for texting is not an option in the 21st century.”

The EU’s own data protection supervisor has also voiced concern. Last year, it warned that the plan poses a direct threat to democratic values in a free and open society.

Pressure on governments to force E2EE apps to scan private messages, meanwhile, is likely coming from law enforcement.

Back in April, European police chiefs put out a joint statement calling for platforms to design security systems in such a way that they can still identify illegal activity and send reports on message content to law enforcement. Their call for “technical solutions” to ensure “lawful access” to encrypted data did not specify how platforms should achieve this sleight of hand. But, as we reported at the time, the lobbying was for some form of client-side scanning. It looks like no accident, therefore, that just a few weeks later the Council produced its proposal for “upload moderation”.

The draft text does contain a few statements that seek to pop a proverbial fig leaf atop the gigantic security and privacy black hole that “upload moderation” implies — including a line that states “without prejudice to Article 10a, this Regulation shall not prohibit or make impossible end-to-end encryption”; as well as a claim that service providers will not be required to decrypt or provide access to E2EE data; a clause saying they should not introduce cybersecurity risks “for which it is not possible to take any effective measures to mitigate such risk”; and another line stating service providers should not be able to “deduce the substance of the content of the communications”.

“These are all nice sentiments, and they make of the proposal a self negating paradox,” Whittaker told TechCrunch when we sought her response to these provisos. “Because what is proposed — bolting mandatory scanning onto end-to-end encrypted communications — would undermine encryption and create a significant vulnerability.”

The Commission and the Belgian presidency of the Council were contacted about her concerns, but neither had responded at press time.

EU lawmaking is typically a three-way affair, so it remains to be seen where the bloc will finally end up on CSAM scanning. Once the Council agrees on its position, so-called trilogue talks kick off with the parliament and Commission to seek a final compromise. It’s also worth noting that, following the recent EU elections, the make-up of the parliament has changed since MEPs agreed their negotiating mandate last year.

EU plan to force messaging apps to scan for CSAM risks millions of false positives, experts warn

Europe’s CSAM-scanning plan is a tipping point for democratic rights, experts warn

Reddit back online after a software update took it down

distorted reddit logo

Image Credits: TechCrunch

Reddit’s mobile and web applications went down on Wednesday afternoon, with more than 150,000 users reporting outages on Downdetector as of 1:30 p.m. in San Francisco. When trying to access Reddit’s homepage, users were met with the message, “Server error. Try again later.”

The company itself reported a “degraded status” for Reddit.com as of 1:16 p.m. PT on its status page, but said roughly an hour later that its systems were operational. After investigating the issue, the company said a software update pushed out to the platform was to blame.

“Earlier today we shipped an update that unintentionally impacted platform stability,” said Reddit spokesperson Tim Rathschmidt in an email to TechCrunch. “We deployed a fix and are back up and running.”

As of 2:16 p.m., Reddit’s status page was updated to show that the issue had been resolved after a fix was implemented.

In the past 90 days, Reddit has had very few major outages like this, according to its status page. Some users may still be having difficulty accessing the platform, but services should trickle back online shortly, according to the company.

Owner.com grabs $33M Series B to improve online guest experiences for mom-and-pop restaurants

Owner.com, online restaurant management

Image Credits: Owner.com

Independent restaurant owners used to heavily rely on foot traffic as their way of marketing. In 2020, the global pandemic changed all that.

Overnight, restaurants needed to set up online ordering, have a plan for pick-up and delivery and find new ways to get in front of customers no longer going out to eat.

Inspired by his mother’s struggles to attract customers to her dog grooming business, Adam Guild teamed up with Dean Bloembergen to create Owner.com to help independent restaurant owners better manage their online presence.

“People are hurting worse than ever in the restaurant industry because the inflation has created rising labor costs and rising food costs,” Guild told TechCrunch. “It’s compressing the already low profit margins of these restaurant owners, so it is more important to them than ever to figure out ways to increase sales and save money.”

To that end, Owner.com offers an all-in-one platform for restaurants that includes online ordering, a website builder, customer relationship management tools, marketing automation and a branded mobile app generator.

Owner.com whips up new tools for independent restaurants following funding

Restaurant sales were poised to reach $997 billion in 2023, according to the National Restaurant Association. That provides huge opportunities for companies like Owner.com, Guild said.

There’s also room for other startups to leverage technology with tools focused on mom-and-pop restaurants. Last year, Superorder raised $10 million to help restaurants with their online presence, and there are dozens of companies with operations tools to manage both the front and back of the house, including Zitti, MarginEdge, OneOrder and TouchBistro.

To differentiate itself further, Owner.com recently tapped into artificial intelligence for new features, including an email marketer. As Guild described it, restaurant owners can start by typing a sentence describing what they want the email to say or do, for example, “Tell my customers about my new yellow curry,” and the AI will produce an email designed in their style, written in their tone and ready to go out to customers.
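For a rough sense of how a feature like this is typically wired up, here is a hypothetical Python sketch of the prompt-assembly step. Owner.com has not published its implementation, so the function, template wording, restaurant name and tone parameter below are all invented for illustration; the returned string would be sent to a text-generation model, and the model’s reply used as the draft email.

```python
def build_email_prompt(instruction: str, restaurant: str, tone: str) -> str:
    """Assemble a text-generation prompt from an owner's one-line request.

    Hypothetical sketch only: Owner.com has not published how its email
    marketer works, so this template wording is invented.
    """
    return (
        f"You are the marketing voice of {restaurant}. "
        f"Write a short promotional email to regular customers in a {tone} "
        "tone, ending with a call to action to order online.\n"
        f"Owner's request: {instruction}"
    )

# The one-sentence input Guild describes in the article, with a made-up
# restaurant name and tone.
print(build_email_prompt(
    instruction="Tell my customers about my new yellow curry",
    restaurant="Lotus Thai Kitchen",
    tone="warm, casual",
))
```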

Today, the company has thousands of customers after tripling the number in the past year. Owner processes hundreds of millions of dollars annually for its customers and yields tens of millions in revenue, Guild said.

Dean Bloembergen, Adam Guild, Owner.com, online restaurant management
Dean Bloembergen and Adam Guild, co-founders of Owner.com. Image Credits: Owner.com

TechCrunch has followed Owner.com as it attracted $10.7 million in seed funding, led by SaaStr Fund, then $15 million in Series A capital led by Altman Capital.

Now the company has raised $33 million in Series B funding on a $200 million post-money valuation, Guild said. Existing investors Redpoint Ventures and Altman Capital co-led the round, with participation from Horsley Bridge, Activant Capital and Transpose Platform Management. Total capital raised is now $58.7 million.

Guild expects to continue investing in engineering and design teams to cater to what his customers say they need. In addition to the AI-powered email marketer, there are other tools in the pipeline.

“Even just five years ago, it used to be enough to provide great food and great service because those things are hard to do on their own,” Guild said. “It’s no longer enough if you want to be a successful restaurant. More of the guest experience has shifted online, even where people discover restaurants. Large corporations can afford to spend billions of dollars on teams of engineers and marketers and ad spend. Independent restaurant owners are being screwed over. What we are building is to help these mom-and-pop owners not only survive as a result of this technological change, but continue to thrive as more of their business goes online.”

As startups whip up a restaurant tech frenzy, is anyone close to Toast?

Lawmakers revise Kids Online Safety Act to address LGBTQ advocates' concerns

United States capitol in Instagram colors

Image Credits: Bryce Durbin / TechCrunch

The Kids Online Safety Act (KOSA) is getting closer to becoming a law, which would make social platforms significantly more responsible for protecting children who use their products. With 62 senators backing the bill, KOSA seems poised to clear the Senate and progress to the House.

KOSA creates a duty of care for social media platforms to limit addictive or harmful features that have demonstrably affected the mental health of children. The bill also requires platforms to develop more robust parental controls.

But under a previous version of KOSA, LGBTQ advocates pushed back on a part of the bill that would give individual state attorneys general the ability to decide what content is inappropriate for children. This rings alarm bells at a time when LGBTQ rights are being attacked on the state level, and books with LGBTQ characters and themes are being censored in public schools. Senator Marsha Blackburn (R-TN), who introduced the bill with Senator Richard Blumenthal (D-CT), said that a top priority for conservatives should be to “protect minor children from the transgender [sic] in this culture,” including on social media.

Jamie Susskind, Senator Blackburn’s legislative director, said in a statement, “KOSA will not — nor was it designed to — target or censor any individual or community.”

After multiple amendments, the new draft of KOSA has assuaged some of the concerns of LGBTQ rights groups like GLAAD, the Human Rights Campaign and The Trevor Project; for one, the FTC will instead be responsible for nationwide enforcement of KOSA, rather than state-specific enforcement by attorneys general.

A letter to Senator Blumenthal from seven LGBTQ rights organizations said: “The considerable changes that you have proposed to KOSA in the draft released on February 15, 2024, significantly mitigate the risk of it being misused to suppress LGBTQ+ resources or stifle young people’s access to online communities. As such, if this draft of the bill moves forward, our organizations will not oppose its passage.”

Other privacy-minded activist groups like the Electronic Frontier Foundation (EFF) and Fight for the Future are still skeptical of the bill, even after the changes.

In a statement shared with TechCrunch, Fight for the Future said that these changes are promising, but don’t go far enough.

“As we have said for months, the fundamental problem with KOSA is that its duty of care covers content specific aspects of content recommendation systems, and the new changes fail to address that. In fact, personalized recommendation systems are explicitly listed under the definition of a design feature covered by the duty of care,” Fight for the Future said. “This means that a future Federal Trade Commission (FTC) could still use KOSA to pressure platforms into automated filtering of important but controversial topics like LGBTQ issues and abortion, by claiming that algorithmically recommending that content ’causes’ mental health outcomes that are covered by the duty of care like anxiety and depression.”

The Blumenthal and Blackburn offices said that the duty of care changes were made to regulate the business model and practices of social media companies, rather than the content that is posted on them.

KOSA was also amended last year to address earlier concerns about age-verification requirements for users of all ages that could endanger privacy and security. Jason Kelley, the EFF’s activism director, is concerned that these amendments aren’t enough to ward off dangerous interpretations of the bill.

“Despite these latest amendments, KOSA remains a dangerous and unconstitutional censorship bill which we continue to oppose,” Kelley said in a statement to TechCrunch. “It would still let federal and state officials decide what information can be shared online and how everyone can access lawful speech. It would still require an enormous number of websites, apps, and online platforms to filter and block legal, and important, speech. It would almost certainly still result in age verification requirements.”

The issue of children’s online safety has stayed at the forefront of lawmakers’ minds, especially after five big tech CEOs testified before the Senate a few weeks ago. With increasing support for KOSA, Blumenthal’s office told TechCrunch that it is intent on fast-tracking the bill forward.

Update, 2/16/24, 12:30 PM ET with statement from Jamie Susskind.

Microsoft, X throw their weight behind KOSA, the controversial kids online safety bill

Fan fiction writers rally fandoms against KOSA, the bill purporting to protect kids online

Discord comes back online after widespread outage

The logo of the social network application Discord on the screen of a phone.

Image Credits: MARTIN BUREAU/AFP via Getty Images

Discord is back online after an outage this morning, the company confirmed to TechCrunch. The outage came as Meta’s Instagram, Facebook and Threads all went down this morning. YouTube also confirmed that its service is having issues this morning and that it’s working on a fix.

“This incident has been resolved,” Discord’s status page reads. “We are reviewing the updated rate limiting that triggered the initial session start issues, as well as the scaling targets for the internal service which limited guild loading during initial recovery.”

Discord says it is monitoring the recovery of multiple systems.

According to third-party monitoring website Downdetector, the issues began at around 10:50 a.m. ET. Some users reported that they were unable to load messages, while others said they were unable to access the service at all.

The outages across multiple services come on Super Tuesday, a day when people across a number of U.S. states are voting in primaries. The outages, mainly on Facebook and Instagram, may make it harder for candidates to continue their outreach and remind people to head to the polls on an important day.

Update 03/05/2024 11:45 a.m. ET: The article has been updated to reflect that Discord has resolved the issue and is back online.

Facebook, Instagram and Threads were all down in massive Meta outage on Super Tuesday

AliExpress is first online marketplace to face DSA investigation by EU

A stack of three cardboard boxes with the image of a shopping cart printed on them, resting on top of a laptop with the screen open to an e-commerce marketplace

Image Credits: Getty Images

The European Union has opened its third formal investigation of a very large platform under the Digital Services Act (DSA), with China’s AliExpress earning itself the dubious honor of being the first online marketplace to face a formal probe by the Commission.

The DSA is the bloc’s rebooted e-commerce rulebook, which demands risk assessments and mitigations from larger platforms and backs those requirements with tough penalties (of up to 6% of global annual turnover) for violations.
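To put that penalty ceiling in perspective, here is a quick back-of-the-envelope calculation in Python; the turnover figure is deliberately hypothetical, not a citation of any company’s actual financials.

```python
# Hypothetical example of the DSA's maximum fine at the 6% ceiling.
annual_turnover_eur = 100_000_000_000  # assumed global annual turnover, EUR
max_fine_eur = 0.06 * annual_turnover_eur
print(f"Maximum fine: EUR {max_fine_eur:,.0f}")  # Maximum fine: EUR 6,000,000,000
```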

Social media platforms X and TikTok are the two other very large online platforms (VLOPs) already under formal DSA investigation (since December and February, respectively). Those probes remain ongoing.

In a press release announcing the formal proceeding on AliExpress, the Commission says it suspects the marketplace of breaching DSA rules in areas linked to the management and mitigation of risks; content moderation and its internal complaint-handling mechanism; the transparency of advertising and recommender systems; the traceability of traders; and data access for researchers.

AliExpress was designated a VLOP back in April last year, alongside other marketplaces, including Amazon and Zalando.

The safety of e-commerce marketplaces is one of a handful of enforcement priorities for the Commission, along with illegal hate speech, child protection and election security.

In a background briefing with journalists Thursday, a Commission official said concerns about AliExpress cover areas such as non-compliant medicines and foods, as well as child safety risks related to the distribution of pornography and the sale of toys.

The official said the Commission will also look into transparency and safety concerns related to influencers’ use of AliExpress. The platform offers an affiliate program aimed at social media influencers, who can earn a commission through links to goods being sold on the platform. The Commission said it suspects some of this activity is leading to the sale of non-compliant — and potentially dangerous or otherwise risky — products.

It said it will also investigate how the influencer affiliate program is implemented to verify whether it complies with DSA transparency rules.

The full list of suspected breaches by AliExpress is long, running to ten articles (Articles 16, 20, 26, 27, 30, 34, 35, 38, 39 and 40).

However, today’s proceeding does not confirm any violations of the DSA as yet. Rather, it means the Commission will now carry out an in-depth investigation “as a matter of priority”. The formal step unlocks additional powers for the EU — including the ability to impose interim measures.

There’s no fixed timeline for the EU to conclude a DSA investigation.

Alibaba, AliExpress’ parent company, was contacted for comment. Update: Here’s its statement: “We respect all applicable rules and regulations in the markets where we operate. As a VLOP, we have been working with, and will continue to work with, the relevant authorities on making sure we comply with applicable standards and will continue to ensure that we will be able to meet the requirements of the DSA. AliExpress is committed to creating a safe and compliant marketplace for all consumers.”

Elon Musk’s X faces first DSA probe in EU over illegal content risks, moderation, transparency and deceptive design

EU opens formal probe of TikTok under Digital Services Act, citing child safety, risk management and other concerns

India will fact-check online posts about government matters

Daily commuters of Delhi Metro are seen coming out of Metro station, on a raised walkway

Image Credits: Mayank Makhija / NurPhoto / Getty Images

Updated at 1:30 p.m. IST, March 21: India’s Supreme Court has put the gazette notification on hold until petitions challenging it have been resolved.

In India, a government-run agency will now monitor and fact-check government-related matters on social media, even though tech giants expressed grave concerns about the plan last year.

The Ministry of Electronics and IT wrote in a gazette notification on Wednesday that it is amending the IT Rules 2021 to cement into law the proposal to make the fact-checking unit of the Press Information Bureau the dedicated arbiter of truth for matters concerning New Delhi.

Tech companies as well as other firms that serve more than 5 million users in India will be required to “make reasonable efforts” to not display, store, transmit or otherwise share information that deceives or misleads users about matters pertaining to the government, the IT ministry said.

India’s move comes just weeks ahead of the general elections in the country.

“In exercise of the powers conferred by sub-clause (v) of clause (b) of sub-rule (1) of rule 3 of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, the Central Government hereby notifies the Fact Check Unit under the Press Information Bureau of the Ministry of Information and Broadcasting as the fact check unit of the Central Government for the purposes of the said sub-clause, in respect of any business of the Central Government,” the gazette notification said.

The Ministry of Information and Broadcasting established the Press Information Bureau’s fact-checking unit in 2019 with the aim of dispelling misinformation about government matters. The unit, however, has been criticized for falsely labeling information critical of the government as misleading.

Relying on a government agency such as the Press Information Bureau as the sole source to fact-check government business without giving it a clear definition or providing clear checks and balances “may lead to misuse during implementation of the law, which will profoundly infringe on press freedom,” Asia Internet Coalition, an industry group that represents Meta, Amazon, Google and Apple, cautioned last year.

The Editors Guild of India and comedian Kunal Kamra recently mounted legal challenges to stop New Delhi from moving ahead with the proposal. In a petition, Kamra cautioned that New Delhi’s move could create an environment that forces social media firms to welcome “a regime of self-interested censorship.”

Rajeev Chandrasekhar, India’s minister of state for IT, offered assurances last year that the then-proposal wasn’t designed to censor journalism.