Gov. Newsom vetoes California’s controversial AI bill, SB 1047

Image Credits: Getty Images

California Governor Gavin Newsom has vetoed SB 1047, a high-profile bill that would have regulated the development of AI.

The bill was authored by State Senator Scott Wiener and would have made companies that develop AI models liable for implementing safety protocols to prevent “critical harms.” The rules would have applied only to models that cost at least $100 million to train and use 10^26 FLOPS (floating-point operations, a measure of computation) during training.
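
For a rough sense of what that compute threshold means, such rules are often reasoned about with the common approximation that transformer training takes about 6 FLOPs per parameter per token. A minimal sketch of a threshold check under that assumption (the approximation and the example model sizes are ours, not the bill's):

```python
# Rough check of SB 1047's compute threshold, using the common
# C ~= 6 * params * tokens approximation for transformer training FLOPs.
# The approximation and example model sizes are assumptions for this
# sketch; the bill itself only names the 10^26 FLOPS and $100M figures.

THRESHOLD_FLOPS = 1e26

def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute: ~6 FLOPs per parameter per token."""
    return 6 * n_params * n_tokens

for name, params, tokens in [
    ("8B params, 15T tokens", 8e9, 15e12),     # hypothetical mid-size run
    ("400B params, 50T tokens", 4e11, 5e13),   # hypothetical frontier run
]:
    flops = training_flops(params, tokens)
    print(f"{name}: {flops:.1e} FLOPs -> over threshold? {flops >= THRESHOLD_FLOPS}")
```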

SB 1047 was opposed by many in Silicon Valley, including companies like OpenAI, high-profile technologists like Meta’s chief AI scientist Yann LeCun, and even Democratic politicians such as U.S. Congressman Ro Khanna. That said, the bill had also been amended based on suggestions by AI company Anthropic and other opponents.

While California’s state legislature passed SB 1047, opponents were holding out hope that Newsom might veto it — and indeed, he’d already indicated that he had reservations about the bill.

In a statement about today’s veto, Newsom said, “While well-intentioned, SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data. Instead, the bill applies stringent standards to even the most basic functions — so long as a large system deploys it. I do not believe this is the best approach to protecting the public from real threats posed by the technology.”

Congresswoman and former House Speaker Nancy Pelosi had also criticized the bill as “well-intentioned but ill-informed.” After the veto was announced, she praised Newsom “for recognizing the opportunity and responsibility we all share to enable small entrepreneurs and academia – not big tech – to dominate.”

In the same announcement, Newsom’s office noted that he’s signed 17 bills around the regulation and deployment of AI technology in the last 30 days, and it said he’s asked experts such as Fei-Fei Li, Tino Cuéllar, and Jennifer Tour Chayes to “help California develop workable guardrails for deploying GenAI.” (Known as the “godmother of AI,” Li had previously said SB 1047 would “harm our budding AI ecosystem.”)

Wiener, meanwhile, published a statement describing the veto as “a setback for everyone who believes in oversight of massive corporations that are making critical decisions that affect the safety and welfare of the public and the future of the planet.” He also claimed that the debate around the bill “has dramatically advanced the issue of AI safety on the international stage.”

EU member states remain divided on controversial CSAM-scanning plan — but for how long?

a collection of patterned illustrated eyes in blue and pink on a darker blue background

Image Credits: Jake O'Limb / PhotoMosh / Getty Images

A key body of European Union lawmakers remains stalled over a controversial legislative proposal that could see millions of users of messaging apps forced to agree to their photo and video uploads being scanned by AI to detect child sexual abuse material (CSAM).

Critics of the plan include tech industry messaging giants like WhatsApp; privacy-focused players like Signal and Proton; legal, security and data protection experts; civil society and digital rights groups; and a majority of lawmakers from across the political spectrum in the European Parliament. They warn that the proposal will break encryption, arguing it poses an existential threat to the bloc’s democratic freedoms and fundamental rights like privacy.

Opponents also contend the EU plan will fail at its claimed aim of protecting children, suggesting law enforcement will instead be swamped by millions of false positives as everyday app users’ messages are fed through flawed AI-based CSAM detection systems.
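
The false-positive concern is a base-rate problem: even a highly accurate classifier, applied to enormous volumes of innocent traffic, produces a huge absolute number of wrong flags. A quick illustration with assumed figures (not numbers from the proposal or any named detection system):

```python
# Illustrative base-rate arithmetic behind the "millions of false
# positives" concern. All numbers here are assumptions for the sketch.

daily_images_scanned = 1e9   # assumed EU-wide daily photo/video uploads
false_positive_rate = 1e-4   # assumed classifier that is wrong 0.01% of the time

daily_false_flags = daily_images_scanned * false_positive_rate
print(f"False flags per day:  {daily_false_flags:,.0f}")        # 100,000
print(f"False flags per year: {daily_false_flags * 365:,.0f}")  # ~36.5 million
```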

On Thursday, a meeting of ambassadors representing the bloc’s 27 member states’ governments had been expected to reach a position on the file, opening negotiations with the European Parliament, after the Belgian presidency put the item on the meeting’s agenda. However, a spokesperson for Belgium’s permanent representative to the EU confirmed to TechCrunch that the item was dropped once it became clear governments were still too divided to achieve a qualified majority on a negotiating mandate.

“We had the intention to reach a mandate at the meeting of the ambassadors today, but it was not clear yet whether we would have the required majority,” said Belgium’s spokesperson. “In the last hours before the meeting it … was clear that the required qualified majority could just not be met today so we decided to remove the item from the agenda and to continue the consultation between the Member States — to continue working on the text.”

This is important, as EU law tends to be a three-way affair, with the Commission proposing legislation and the Parliament and Council debating (and often amending) draft laws until a final compromise can be reached. But these so-called trilogue talks on the CSAM-scanning file can’t start until the Council adopts its position. So if member states remain divided, as they have been for some two years since the Commission introduced the CSAM-scanning proposal, the file will remain parked.

Earlier this week, Signal president Meredith Whittaker dialed up her attacks on the controversial EU proposal. “[M]andating mass scanning of private communications fundamentally undermines encryption. Full stop,” she warned, accusing regional lawmakers of attempting a cynical rebrand of client-side scanning to try to cloak a plan that amounts to mass surveillance of private communications.

Despite loud and growing alarm over the bloc’s apparent hard pivot to digital surveillance, the European Commission and Council have continued to push for a framework that would require messaging platforms to bake scanning of citizens’ private messages into their products — including end-to-end encrypted (E2EE) platforms like Signal — rather than supporting the more targeted searches and the carve-out for E2EE platforms proposed by MEPs in the European Parliament last year.

Last month, details of a revised CSAM proposal circulated by the Belgians for Member States’ governments’ consideration emerged via leaks, causing fresh consternation.

Pirate Party MEP Patrick Breyer, who has opposed the Commission’s CSAM-scanning plan from the start, argues that the Council’s revised proposal would require messaging app users in the EU to agree to the scanning of all images and videos they send, via a technical scheme the text couches as “upload moderation,” or else lose the ability to send imagery at all. “The leaked Belgian proposal means that the essence of the EU Commission’s extreme and unprecedented initial chat control proposal would be implemented unchanged,” he warned at the time.
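
The proposal’s full technical design isn’t public, but the “upload moderation” scheme critics describe amounts to client-side scanning: the app inspects content before it is encrypted. Here is a minimal sketch of the concept, using a cryptographic hash as a stand-in for whatever matching a real system would use (actual deployments are generally described as using perceptual hashes or AI classifiers):

```python
# Minimal sketch of client-side scanning as critics describe it: the
# client checks an image against a database *before* encrypting and
# sending it. This illustrates the concept only; it is not the EU
# proposal's actual design.

import hashlib

BLOCKED_HASHES: set[str] = set()  # stand-in for an authority-supplied database

def send_image(image_bytes: bytes, encrypt_and_send) -> bool:
    digest = hashlib.sha256(image_bytes).hexdigest()
    if digest in BLOCKED_HASHES:
        return False               # upload refused before any encryption
    encrypt_and_send(image_bytes)  # E2EE happens only after the plaintext scan
    return True
```

The point Whittaker makes follows directly from the structure: because the match runs on the plaintext before encryption, the end-to-end guarantee no longer covers what the user actually typed or attached.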

Private messaging app makers, including Signal, have also warned they would leave the EU rather than be forced to comply with a mass surveillance law.

In a press email Thursday, Breyer welcomed the ambassadors’ failure to agree on a way forward, but he cautioned that this is likely just a stay of execution, writing: “For now the surveillance extremists among the EU governments and Big Sister [home affairs commissioner] Ylva Johansson have failed to build a qualified majority. But they will not give up and could try again in the next few days. When will they finally learn from the EU Parliament that effective, court-proof and majority-capable child protection needs a new approach?”

Also responding to the Council’s setback in a statement, Proton founder Andy Yen made a similar point about the need to keep up the fight. “We must not rest on our laurels,” he wrote. “Anti-encryption proposals have been defeated before only to be repackaged and brought back into the political arena again and again and again. It’s vital that the defenders of privacy remain vigilant and don’t fall for the spin and window-dressing when the next attack on encryption is launched.”

It certainly looks like any celebration of the Council’s ongoing divisions on the file should be tempered with caution: member states’ governments appear to be a hair’s breadth away from the qualified majority needed to kick off talks with MEPs, in which they would immediately press parliamentarians to agree to legislate for mass scanning of citizens’ devices, despite the Parliament’s own opposition. “We are extremely, extremely close to a qualified majority,” Belgium’s spokesperson told TechCrunch. “If just one country changes opinion we have a qualified majority and we have a mandate for the Council.”
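
For reference, a Council qualified majority under the EU treaties requires at least 55% of member states (15 of 27) together representing at least 65% of the EU’s population, which is why a single country switching sides can tip the outcome. A small sketch of the rule (the vote counts and population share in the example are hypothetical):

```python
# Sketch of the Council's qualified-majority rule: at least 55% of
# member states (15 of 27), representing at least 65% of the EU's
# population. The example inputs below are hypothetical.

def qualified_majority(states_for: int, population_share_for: float,
                       n_states: int = 27) -> bool:
    """True only if both QMV thresholds are cleared."""
    return states_for >= 0.55 * n_states and population_share_for >= 0.65

print(qualified_majority(14, 0.70))  # False: one state short of 15
print(qualified_majority(15, 0.70))  # True: the "one country changes" tipping point
```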

The spokesperson also told us that a final Coreper meeting next week, the last before Belgium’s six-month presidency ends, already has a full agenda, suggesting that talks to agree on the Council’s mandate will fall to Hungary, which takes up the rotating Council presidency for six months starting July 1.

“As far as we are concerned, as presidency, in the coming days — at the expert level — we continue to work and to see whether the Member States that were not happy or satisfied with the proposal we will continue to discuss how we can fine-tune it to make it viable for everyone,” the spokesperson added. “And then it will be up for the next presidency to discuss.

“As far as we understand, they are keen to continue working on the topic. The Commission is also willing to. And the parliament is waiting for us so we need to.”

Stop playing games with online security, Signal president warns EU lawmakers

Controversial internet bill KOSA passed by Senate

The Kids Online Safety Act (KOSA) has passed in the Senate after Majority Leader Chuck Schumer (D-NY) pushed the internet bill to a vote.

Proposed in 2022, KOSA would require online platforms to take reasonable steps to protect users from harm, and could become the most significant children’s online safety legislation to take effect since COPPA. This “duty of care” would apply to large internet companies, like social media platforms, gaming networks and streaming services.

Under KOSA, platforms could be held legally accountable if they don’t prove they’re doing enough to protect minors from a long list of harms, including sexual exploitation, eating disorders, suicide, substance abuse and advertisements for age-restricted products like tobacco or gambling. These companies would have to disclose when and how they’re using personalized content recommendation algorithms, and give minors the option to opt out of data collection. On minors’ accounts, they would also have to limit addictive features like autoplay, or ones that gamify engagement.

Despite the bill’s stated intention to protect children, critics worry that it could be misused to enable surveillance and censorship. For platforms to determine which users are minors, they would have to use some sort of age verification system.

Among privacy advocates, age verification is frowned upon because it limits the ability to use the internet anonymously, which could endanger whistleblowers, human rights activists and people trying to flee dangerous situations, like victims of domestic abuse. The identity verification platforms themselves can also be vulnerable to hackers: Au10tix – a service used by X, TikTok and Uber – left administrative credentials exposed online for over a year, which could have allowed cybercriminals to access people’s driver’s licenses and Social Security numbers.

“Collecting ID online is fundamentally different – and more dangerous – than in-person ID checks in the physical world. Online ID checks are not just a momentary display – they require adults to upload data-rich, government-issued identifying documents to either the website or a third-party verifier, and create a potentially lasting record of their visit to the establishment,” said India McKinney, Director of Federal Affairs at the Electronic Frontier Foundation, in a statement.
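
A minimal sketch makes McKinney’s point concrete: an online check ingests and can retain the entire document, unlike a bouncer glancing at a card. All names and fields below are hypothetical, not any real verifier’s API:

```python
# Hypothetical sketch of an online age-verification flow, illustrating
# why critics call it a "lasting record": the full, data-rich document
# is uploaded and can be stored. Not any real verifier's API.

from datetime import date, datetime
from uuid import uuid4

VERIFICATION_LOG = []  # in practice, a database: the lasting record

def verify_age(document_image: bytes, date_of_birth: date) -> bool:
    VERIFICATION_LOG.append({
        "id": str(uuid4()),
        "document": document_image,  # full government ID, retained
        "seen_at": datetime.now(),   # ties the person to this visit
    })
    age_years = (date.today() - date_of_birth).days / 365.25
    return age_years >= 18
```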

Since KOSA was introduced in 2022, some human rights groups have been worried about the potential for the bill to be weaponized against LGBTQ+ youth.

In a previous version of KOSA, activists pushed back against a provision that would have given individual state attorneys general the ability to decide what online content is appropriate for minors to access, a power that could be weaponized against marginalized kids at a time when LGBTQ+ rights are already under attack at the state level. Following a February revision, the bill now gives the Federal Trade Commission (FTC) the authority to enforce the legislation. While some LGBTQ+ advocacy groups like the Trevor Project and GLAAD dropped their opposition after these changes, some advocates remain concerned.

“Under a potential Trump administration, the FTC could easily use KOSA to target content related to gender affirming care, abortion, racial justice, climate change, or anything else that Project 2025 infused agency is willing to claim makes kids ‘depressed’ or ‘anxious,’” said Evan Greer, Director of Fight for the Future.

Senator Marsha Blackburn (R-TN), who introduced the bill alongside Senator Richard Blumenthal (D-CT), has dismissed these concerns.

Jamie Susskind, Senator Blackburn’s legislative director, said in a statement, “KOSA will not – nor was it designed to – target or censor any individual or community.”

Not all legislators are convinced, though. Senator Ron Wyden (D-OR) explained in a statement why he does not support KOSA.

“Unfortunately, KOSA’s improvements, while constructive, remain insufficient,” he said. “I fear this bill could be used to sue services that offer privacy-enhancing technologies like encryption or anonymity features that are essential to young people’s ability to communicate securely and privately without being spied on by predators online. I also take seriously concerns voiced by the American Civil Liberties Union, Fight for the Future, and LGBTQ+ teens and advocates that a future MAGA administration could still use this bill to pressure companies to censor gay, trans and reproductive health information.”

Among tech companies, KOSA has picked up steam. Microsoft, X and Snap all came out in support of the bill, even though the requirements may be challenging for the companies to meet.

KOSA would have to pass in both the Senate and the House of Representatives before heading to the desk of President Joe Biden, who has indicated that he supports the bill and would sign it into law. But the House has its own version of KOSA, moving in parallel, and may not be friendly to the bill as written by the Senate.

“With vocal opposition from the chair of the House Energy and Commerce Committee, House leadership, and even the youngest member of the House, Maxwell Frost, KOSA currently has no path to becoming law,” Greer said.

But even if the House and the president act swiftly to turn the bill into law, the Electronic Frontier Foundation maintains that the Kids Online Safety Act is unconstitutional.

“It’s an unconstitutional censorship bill that would give the Federal Trade Commission, and potentially state Attorneys General, the power to restrict protected online speech they find objectionable,” said McKinney. That means KOSA would likely face legal challenges from day one.

Snapchat turns off controversial 'Solar System' feature by default after bad press

Snap CEO Evan Spiegel

Image Credits: TechCrunch

Less than a week after The Wall Street Journal reported that a Snapchat feature dubbed “Solar System” was adding to teens’ anxiety, the company has responded by adjusting how the feature works. The ranking system, available to paid subscribers, shows how close you are to your Snapchat friends by displaying your position in their solar system. For example, a friend in the “Mercury” position is someone you communicate with a lot, while one at “Uranus” is not as close.
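
Snap hasn’t published how the ranking is computed, but conceptually it is a sort over interaction volume mapped onto eight planet slots. A hypothetical sketch of that idea (our assumption, not Snap’s actual algorithm):

```python
# Hypothetical sketch of a Solar System-style friend ranking. Snap has
# not published its algorithm; here we simply assume friends are sorted
# by interaction count and mapped onto the eight planet positions.

PLANETS = ["Mercury", "Venus", "Earth", "Mars",
           "Jupiter", "Saturn", "Uranus", "Neptune"]

def solar_system(interaction_counts: dict[str, int]) -> dict[str, str]:
    """Closest friend lands on Mercury; the eighth-closest on Neptune."""
    ranked = sorted(interaction_counts, key=interaction_counts.get, reverse=True)
    return {friend: planet for friend, planet in zip(ranked, PLANETS)}

print(solar_system({"Alex": 120, "Sam": 87, "Priya": 9}))
# {'Alex': 'Mercury', 'Sam': 'Venus', 'Priya': 'Earth'}
```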

Of course, online chatting doesn’t necessarily correlate to real-world relationships, and such a feature can lead to hurt feelings when someone realizes that they’re not as close to a friend as they thought.

Snap says it has received feedback that while it can feel good to know you’re close to someone, it can also feel bad to learn you aren’t as close as you’d like to be.

“We’ve heard and understand that the Solar System can make that feeling worse, and we want to avoid that,” the company announced in a post on Friday.

However, instead of removing the feature, as it did with the dangerous and controversial speed filter, which it was sued over for “negligent design,” Snap is simply turning the Solar System feature off by default. Snapchat+ subscribers will still be able to turn the option on if they choose.

“We hope this strikes the right balance between providing a feature that is desired by many who use it while avoiding upsetting those who don’t want to use it,” the company explains.

Turning it off by default may provide some friction, but if the feature is already in demand among teens, then they’ll simply dig around to find the setting to turn it back on.

Snap argues that Solar System is not that popular, noting that less than 0.25% of the community uses the option. But since it’s only available to paid subscribers, the small percentage is not surprising. A more relevant stat would be how many Snapchat+ users have used Solar System or viewed the feature.

Although users can’t see whether someone else is closer to a friend than they are, finding out they’re not number one has led to some tough conversations, and even breakups, The WSJ reported.

Snap defends the feature by saying that people wanted to know more about their friendships, and features like Solar System provide “additional awareness and context.” But in reality, it’s a way to keep young people — a demographic where social hierarchy is key — addicted to using Snapchat.

Solar System is only one of Snapchat’s friend-ranking systems. The app also offers a private feature called “Best Friends” that puts the people with whom you communicate most at the top of your contact list, along with a heart or smiley emoji, The WSJ pointed out.

Another much-debated feature called “Streaks” is a tool that Snapchat uses to encourage repeated use of its app by offering a visual representation of how many consecutive days users have stayed in touch with one another on the app. After much backlash from parents and families, lawmakers, and regulators alike over the feature’s addictive nature and psychological harms, Snap last year introduced a way to pause your streaks. It also added a way for users to restore a lost Streak.
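
Mechanically, a streak is simple bookkeeping: count consecutive days of mutual contact and reset on a gap. A minimal illustration of that logic (our own sketch, not Snap’s implementation):

```python
# Minimal illustration of streak bookkeeping: increment on consecutive
# days of contact, reset on a missed day. Our own sketch, not Snap's
# implementation.

from datetime import date, timedelta

def update_streak(streak: int, last_contact: date, today: date) -> int:
    if today == last_contact:                      # already counted today
        return streak
    if today - last_contact == timedelta(days=1):  # kept in touch on consecutive days
        return streak + 1
    return 1                                       # a missed day resets the count

print(update_streak(41, date(2024, 2, 5), date(2024, 2, 6)))  # 42
```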

While Snap promises in its blog post that it’s “committed to mitigating the potential downsides of online communication wherever possible,” it has intentionally built features and tools that have at least left it open to lawsuits and Congressional inquiry, if not worse.

CEOs from Meta, TikTok, Snap, X and Discord head to Congress for kids’ online safety hearing

YouTube and Snapchat were asked to defend their apps’ age ratings in Senate hearing