Snapchat introduces new safety features to limit bad actors from contacting users

Snapchat logo

Image Credits: KIRILL KUDRYAVTSEV/AFP / Getty Images

Snapchat on Tuesday announced a new suite of safety features, including updates to its account-blocking functionality and enhanced friending safeguards, designed to make it harder for strangers to contact users on its platform. The move comes amid concerns over predators exploiting teens on social media apps, which can lead to severe harms, including sextortion.

One of the updates improves Snapchat’s existing blocking tool: when a user blocks someone, new friend requests from other accounts created on the same device as the blocked account will also be blocked. This will help further limit outreach from existing or new accounts tied to the blocked person, Snap said in a statement.
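Snap hasn’t shared how this works under the hood, but the behavior described above can be illustrated with a minimal, purely hypothetical sketch in Python: it assumes each account is mapped to the device identifiers it has been seen on (a stand-in for whatever signals Snap actually uses) and extends a user’s block list to any account sharing a device with someone they have blocked.

```python
# Purely hypothetical sketch of device-linked block propagation (not Snap's actual code).
# Assumes each account is mapped to the device IDs it has been seen on.

from collections import defaultdict

account_devices = {
    "blocked_user": {"device-123"},
    "new_alt_account": {"device-123"},   # created on the same device as the blocked account
    "unrelated_user": {"device-456"},
}

blocked_by = defaultdict(set)  # maps a user to the accounts they have blocked


def block(user: str, offender: str) -> None:
    blocked_by[user].add(offender)


def friend_request_allowed(recipient: str, sender: str) -> bool:
    """Reject requests from blocked accounts and from any account that
    shares a device with an account the recipient has already blocked."""
    if sender in blocked_by[recipient]:
        return False
    sender_devices = account_devices.get(sender, set())
    return not any(
        sender_devices & account_devices.get(offender, set())
        for offender in blocked_by[recipient]
    )


block("teen_user", "blocked_user")
print(friend_request_allowed("teen_user", "new_alt_account"))  # False: same device as a blocked account
print(friend_request_allowed("teen_user", "unrelated_user"))   # True
```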

Snapchat will also show users more frequent reminders about which friends they are sharing their location with on the Snap Map. Additionally, the app is gaining a simplified location-sharing feature that makes it easier to customize which friends can see a user’s location. Snap recommends that users share their live location only with family members or close friends.

Snapchat simplified location sharing
Image Credits: Snap

Alongside the location-sharing updates, Snapchat is expanding the in-app pop-up warnings, first launched in 2023, that appear when users add a friend who shares no mutual friends with them or is not in their contacts. The update adds another pop-up that warns users if they receive a chat from someone who has been blocked or reported by other users, or who is from a region where the teen’s friend network is not typically located. The feature will initially be available in the U.S., U.K., Canada, Australia, New Zealand, the Nordics and parts of Europe.

Another new friending safeguard blocks the delivery of a friend request when a teen sends or receives one from someone they share no mutual friends with and that person has a history of accessing Snapchat from locations often linked with scams. It builds on an earlier feature that prevented accounts from appearing in teens’ Quick Add or Search suggestions unless they shared multiple mutual connections. The new safeguard is currently available in a select few countries and will soon launch in India in a more localized form, the company said.
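Snap hasn’t detailed the implementation, but the friending safeguard as described reduces to a simple gate, sketched below with hypothetical inputs (the friend sets, location histories and “scam-linked regions” list are illustrative placeholders, not anything Snap has published).

```python
# Purely hypothetical sketch of the friending safeguard described above (not Snap's code).
# A request is withheld when the accounts share no mutual friends AND the sender has a
# history of activity in regions associated with scams. All inputs are placeholders.

SCAM_LINKED_REGIONS = {"region-A", "region-B"}


def should_deliver_request(
    teen_friends: set[str],
    sender_friends: set[str],
    sender_location_history: set[str],
) -> bool:
    has_mutual_friends = bool(teen_friends & sender_friends)
    scam_linked_history = bool(sender_location_history & SCAM_LINKED_REGIONS)
    return has_mutual_friends or not scam_linked_history


# No mutual friends and a scam-linked location history: the request is withheld.
print(should_deliver_request({"ana", "ben"}, {"carl"}, {"region-A"}))  # False
# A mutual friend is enough for the request to go through.
print(should_deliver_request({"ana", "ben"}, {"ana"}, {"region-A"}))   # True
```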

Snapchat in-app warning for suspicious contact
Image Credits: Snap

“Our newest safety features are all about supporting genuine friendships, empowering teens to make smart choices, and ensuring that every Snapchatter feels secure and confident while using our app,” said Uthara Ganesh, Head of Public Policy, South Asia, Snap, in a statement.

Snapchat has been quite popular among teens, with over 20 million using the app in the U.S. alone, per Snap CEO Evan Spiegel during the Senate Judiciary Committee hearing in January. However, the app — alongside other social media platforms — is often criticized for not taking significant steps to safeguard minor users.

In 2022, Snap introduced a Family Center to let parents monitor their teens’ activity on the platform. It was launched in response to the regulatory pressure social networks faced to protect minors. However, Spiegel stated in his comments to Senator Alex Padilla in January that only around 200,000 parents use its parental supervision controls.

OpenAI pledges to give U.S. AI Safety Institute early access to its next model

Sam Altman OpenAI

Image Credits: TechCrunch

OpenAI CEO Sam Altman says that OpenAI is working with the U.S. AI Safety Institute, a federal government body that aims to assess and address risks in AI platforms, on an agreement to provide early access to its next major generative AI model for safety testing.

The announcement, which Altman made in a post on X late Thursday evening, was light on details. But it — along with a similar deal with the U.K.’s AI safety body struck in June — appears to be intended to counter the narrative that OpenAI has deprioritized work on AI safety in the pursuit of more capable, powerful generative AI technologies.

In May, OpenAI effectively disbanded a unit working on the problem of developing controls to prevent “superintelligent” AI systems from going rogue. Reporting — including ours — suggested that OpenAI cast aside the team’s safety research in favor of launching new products, ultimately leading to the resignation of the team’s two co-leads, Jan Leike (who now leads safety research at AI startup Anthropic) and OpenAI co-founder Ilya Sutskever (who started his own safety-focused AI company, Safe Superintelligence Inc.).

In response to a growing chorus of critics, OpenAI said it would eliminate its restrictive non-disparagement clauses that implicitly discouraged whistleblowing and create a safety commission, as well as dedicate 20% of its compute to safety research. (The disbanded safety team had been promised 20% of OpenAI’s compute for its work, but ultimately never received this.) Altman re-committed to the 20% pledge and re-affirmed that OpenAI voided the non-disparagement terms for new and existing staff in May.

The moves did little to placate some observers, however — particularly after OpenAI staffed the safety commission entirely with company insiders including Altman and, more recently, reassigned a top AI safety executive to another org.

Five senators, including Brian Schatz, a Democrat from Hawaii, raised questions about OpenAI’s policies in a recent letter addressed to Altman. OpenAI chief strategy officer Jason Kwon responded to the letter today, writing that OpenAI “[is] dedicated to implementing rigorous safety protocols at every stage of our process.”

The timing of OpenAI’s agreement with the U.S. AI Safety Institute seems a tad suspect in light of the company’s endorsement earlier this week of the Future of Innovation Act, a proposed Senate bill that would authorize the Safety Institute as an executive body that sets standards and guidelines for AI models. The moves together could be perceived as an attempt at regulatory capture — or at the very least an exertion of influence from OpenAI over AI policymaking at the federal level.

Not for nothing, Altman sits on the U.S. Department of Homeland Security’s Artificial Intelligence Safety and Security Board, which provides recommendations for the “safe and secure development and deployment of AI” throughout the U.S.’ critical infrastructures. And OpenAI has dramatically increased its expenditures on federal lobbying this year, spending $800,000 in the first six months of 2024 versus $260,000 in all of 2023.

The U.S. AI Safety Institute, housed within the Commerce Department’s National Institute of Standards and Technology, consults with a consortium of companies that includes Anthropic, as well as big tech firms like Google, Microsoft, Meta, Apple, Amazon and Nvidia. The industry group is tasked with working on actions outlined in President Joe Biden’s October AI executive order, including developing guidelines for AI red-teaming, capability evaluations, risk management, safety and security, and watermarking synthetic content.

OpenAI's new safety committee is made up of all insiders

OpenAI logo with spiraling pastel colors

Image Credits: Bryce Durbin / TechCrunch

OpenAI has formed a new committee to oversee “critical” safety and security decisions related to the company’s projects and operations. But, in a move that’s sure to raise the ire of ethicists, OpenAI’s chosen to staff the committee with company insiders — including Sam Altman, OpenAI’s CEO — rather than outside observers.

Altman and the rest of the Safety and Security Committee — OpenAI board members Bret Taylor, Adam D’Angelo and Nicole Seligman as well as chief scientist Jakub Pachocki, Aleksander Madry (who leads OpenAI’s “preparedness” team), Lilian Weng (head of safety systems), Matt Knight (head of security) and John Schulman (head of “alignment science”) — will be responsible for evaluating OpenAI’s safety processes and safeguards over the next 90 days, according to a post on the company’s corporate blog. The committee will then share its findings and recommendations with the full OpenAI board of directors for review, OpenAI says, at which point it’ll publish an update on any adopted suggestions “in a manner that is consistent with safety and security.”

“OpenAI has recently begun training its next frontier model and we anticipate the resulting systems to bring us to the next level of capabilities on our path to [artificial general intelligence],” OpenAI writes. “While we are proud to build and release models that are industry-leading on both capabilities and safety, we welcome a robust debate at this important moment.”

OpenAI has over the past few months seen several high-profile departures from the safety side of its technical team — and some of these ex-staffers have voiced concerns about what they perceive as an intentional de-prioritization of AI safety.

Daniel Kokotajlo, who worked on OpenAI’s governance team, quit in April after losing confidence that OpenAI would “behave responsibly” around the release of increasingly capable AI, as he wrote on a post in his personal blog. And Ilya Sutskever, an OpenAI co-founder and formerly the company’s chief scientist, left in May after a protracted battle with Altman and Altman’s allies — reportedly in part over Altman’s rush to launch AI-powered products at the expense of safety work.

More recently, Jan Leike, a former DeepMind researcher who while at OpenAI was involved with the development of ChatGPT and ChatGPT’s predecessor, InstructGPT, resigned from his safety research role, saying in a series of posts on X that he believed OpenAI “wasn’t on the trajectory” to get issues pertaining to AI security and safety “right.” AI policy researcher Gretchen Krueger, who left OpenAI last week, echoed Leike’s statements, calling on the company to improve its accountability and transparency and “the care with which [it uses its] own technology.”

Quartz notes that, besides Sutskever, Kokotajlo, Leike and Krueger, at least five of OpenAI’s most safety-conscious employees have either quit or been pushed out since late last year, including former OpenAI board members Helen Toner and Tasha McCauley. In an op-ed for The Economist published Sunday, Toner and McCauley wrote that — with Altman at the helm — they don’t believe that OpenAI can be trusted to hold itself accountable.

“[B]ased on our experience, we believe that self-governance cannot reliably withstand the pressure of profit incentives,” Toner and McCauley said.

To Toner and McCauley’s point, TechCrunch reported earlier this month that OpenAI’s Superalignment team, responsible for developing ways to govern and steer “superintelligent” AI systems, was promised 20% of the company’s compute resources — but rarely received a fraction of that. The Superalignment team has since been dissolved, and much of its work placed under the purview of Schulman and a safety advisory group OpenAI formed in December.

OpenAI has advocated for AI regulation. At the same time, it’s made efforts to shape that regulation, hiring an in-house lobbyist and lobbyists at an expanding number of law firms and spending hundreds of thousands of dollars on U.S. lobbying in Q4 2023 alone. Recently, the U.S. Department of Homeland Security announced that Altman would be among the members of its newly formed Artificial Intelligence Safety and Security Board, which will provide recommendations for “safe and secure development and deployment of AI” throughout the U.S.’ critical infrastructures.

In an effort to avoid the appearance of ethical fig-leafing with the exec-dominated Safety and Security Committee, OpenAI has pledged to retain third-party “safety, security and technical” experts to support the committee’s work, including cybersecurity veteran Rob Joyce and former U.S. Department of Justice official John Carlin. However, beyond Joyce and Carlin, the company hasn’t detailed the size or makeup of this outside expert group — nor has it shed light on the limits of the group’s power and influence over the committee.

In a post on X, Bloomberg columnist Parmy Olson notes that corporate oversight boards like the Safety and Security Committee, similar to Google’s AI oversight boards like its Advanced Technology External Advisory Council, “[do] virtually nothing in the way of actual oversight.” Tellingly, OpenAI says it’s looking to address “valid criticisms” of its work via the committee — “valid criticisms” being in the eye of the beholder, of course.

Altman once promised that outsiders would play an important role in OpenAI’s governance. In a 2016 piece in the New Yorker, he said that OpenAI would “[plan] a way to allow wide swaths of the world to elect representatives to a … governance board.” That never came to pass — and it seems unlikely it will at this point.

Anthropic hires former OpenAI safety lead to head up new team

Anthropic Claude logo

Image Credits: Anthropic

Jan Leike, a leading AI researcher who earlier this month resigned from OpenAI before publicly criticizing the company’s approach to AI safety, has joined OpenAI rival Anthropic to lead a new “superalignment” team.

In a post on X, Leike said that his team at Anthropic will focus on various aspects of AI safety and security, specifically “scalable oversight,” “weak-to-strong generalization” and automated alignment research.

A source familiar with the matter tells TechCrunch that Leike will report directly to Jared Kaplan, Anthropic’s chief science officer, and that Anthropic researchers currently working on scalable oversight — techniques to control large-scale AI’s behavior in predictable and desirable ways — will move to report to Leike as Leike’s team spins up.

In many ways, Leike’s team sounds similar in mission to OpenAI’s recently dissolved Superalignment team. The Superalignment team, which Leike co-led, had the ambitious goal of solving the core technical challenges of controlling superintelligent AI in the next four years, but often found itself hamstrung by OpenAI’s leadership.

Anthropic has often attempted to position itself as more safety-focused than OpenAI.

Anthropic’s CEO, Dario Amodei, was once the VP of research at OpenAI, and reportedly split with OpenAI after a disagreement over the company’s direction — namely OpenAI’s growing commercial focus. Amodei brought with him a number of ex-OpenAI employees to launch Anthropic, including OpenAI’s former policy lead Jack Clark.

Cruise names first chief safety officer following crash and controversy

Cruise robotaxi in Houston

Image Credits: Cruise

Cruise has named its first “chief safety officer” as part of the company’s effort to rehabilitate itself following an incident — and ensuing controversy — last year that left a pedestrian stuck under and then dragged by one of its robotaxis.

Steve Kenner, an autonomous vehicle industry veteran who has held top safety roles at Kodiak, Locomation, Aurora and Uber’s now-defunct self-driving division, is filling the newly created role. Kenner will report directly to Cruise president and chief administrative officer Craig Glidden. He will “oversee Cruise’s safety management systems and operations” and work “in direct partnership with the Cruise Board of Directors,” the company said in a statement Monday.

Louise Zhang, a VP of safety and systems at Cruise and one of the highest-ranking safety-related employees prior to Kenner’s arrival, will remain in her position.

Kenner’s appointment comes just three weeks after the release of a 195-page report by law firm Quinn Emanuel examining the October crash, where a Cruise robotaxi struck and dragged a pedestrian who was previously hit by a human-driven car, as well as the company’s response. That report ultimately determined that Cruise’s leadership had a “myopic” focus on the media’s response to the crash, and that it left out important facts when discussing the event with the public and with regulators.

The crash, and Cruise’s handling of it, are now the subject of many government investigations. The Department of Justice, Securities and Exchange Commission, California Department of Motor Vehicles, California Public Utilities Commission and the National Highway Traffic Safety Administration are all probing the company’s actions.

Kenner will start his new role at the company at a time when its entire robotaxi fleet is grounded. Cruise recently slashed its workforce by 24% and pushed out a number of high-level employees. Cruise co-founder and CEO Kyle Vogt and co-founder Dan Kan resigned last year.

General Motors, which owns Cruise, has said it will scale back investment in the autonomous vehicle company by $1 billion this year. The automaker installed Glidden as chief administrative officer in November as it started sorting through why the company handled the October crash so poorly.

Safety by design

Concept of risk and hazards associated with uncovered electrical outlets with a sharp metal object that could be inserted and cause a shock.

Image Credits: Steven White / Getty Images

Welcome to the TechCrunch Exchange, a weekly startups-and-markets newsletter. It’s inspired by the daily TechCrunch+ column where it gets its name. Want it in your inbox every Saturday? Sign up here.

Tech’s ability to reinvent the wheel has its downsides: It can mean ignoring blatant truths that others have already learned. But the good news is that new founders are sometimes figuring it out for themselves faster than their predecessors. — Anna

AI, trust and safety

This year is an Olympic year, a leap year . . . and also an election year. But before you accuse me of U.S. defaultism, I’m not only thinking of the Biden vs. Trump sequel: More than 60 countries are holding national elections, not to mention the EU Parliament elections.

Which way each of these votes swings could have an impact on tech companies; different parties tend to have different takes on AI regulation, for instance. But before elections even take place, tech will also have a role to play in guaranteeing their integrity.

Election integrity likely wasn’t on Mark Zuckerberg’s mind when he created Facebook, and perhaps not even when he bought WhatsApp. But 20 and 10 years later, respectively, trust and safety is now a responsibility that Meta and other tech giants can’t escape, whether they like it or not. This means working toward preventing disinformation, fraud, hate speech, CSAM (child sexual abuse material), self-harm and more.

However, AI will likely make the task more difficult, and not just because of deepfakes or because it empowers larger numbers of bad actors. Says Lotan Levkowitz, a general partner at Grove Ventures:

All these trust and safety platforms have this hash-sharing database, so I can upload there what is a bad thing, share with all my communities, and everybody is going to stop it together; but today, I can train the model to try to avoid it. So even the more classic trust and safety work, because of Gen AI, is getting tougher and tougher because the algorithm can help bypass all these things.
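The hash-sharing databases Levkowitz mentions are, at their simplest, shared lookups of content fingerprints. The toy sketch below uses an exact SHA-256 hash purely for illustration; real systems rely on perceptual hashes such as PhotoDNA or PDQ so that near-duplicates still match, and his point is that generative tools make it easier to produce variants that evade even those.

```python
# Toy illustration of a hash-sharing check. Real systems use perceptual hashes
# (e.g., PhotoDNA, PDQ) rather than exact SHA-256 matching.

import hashlib

shared_hash_database: set[str] = set()  # fingerprints contributed by participating platforms


def fingerprint(content: bytes) -> str:
    return hashlib.sha256(content).hexdigest()


def report_bad_content(content: bytes) -> None:
    """One platform flags content and shares its hash with the rest of the consortium."""
    shared_hash_database.add(fingerprint(content))


def is_known_bad(content: bytes) -> bool:
    """Any platform can now block an exact re-upload of previously flagged content."""
    return fingerprint(content) in shared_hash_database


report_bad_content(b"example harmful payload")
print(is_known_bad(b"example harmful payload"))          # True: exact re-upload is caught
print(is_known_bad(b"example harmful payload, edited"))  # False: a small edit evades an exact hash
```

The last line is the weakness Levkowitz is pointing to: trivially edited or freshly generated variants slip past exact matching, and generative models can be steered to evade fuzzier perceptual matches as well.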

From afterthought to the forefront

Although online forums had already learned a thing or two about content moderation, there was no social network playbook for Facebook to follow when it was born, so it is somewhat understandable that it needed a while to rise to the task. But it is disheartening to learn from internal Meta documents that, as far back as 2017, there was still internal reluctance to adopt measures that could better protect children.

Zuckerberg was one of the five social media CEOs who recently appeared at a U.S. Senate hearing on children’s online safety. Testifying was far from a first for Meta, but Discord’s inclusion is worth noting: while the company has branched out beyond its gaming roots, its presence is a reminder that trust and safety threats can arise in many kinds of online spaces. A social gaming app, for instance, could also put its users at risk of phishing or grooming.

Will newer companies own up faster than the FAANGs? That’s not guaranteed: Founders often operate from first principles, which is good and bad; the content moderation learning curve is real. But OpenAI is much younger than Meta, so it is encouraging to hear that it is forming a new team to study child safety — even if it may be a result of the scrutiny it’s subjected to.

OpenAI forms a new team to study child safety

Some startups, however, are not waiting for signs of trouble to take action. ActiveFence, a provider of AI-enabled trust and safety solutions and part of Grove Ventures’ portfolio, is seeing more inbound requests, its CEO Noam Schwartz told me.

“I’ve seen a lot of folks reaching out to our team from companies that were just founded or even pre-launched. They’re thinking about the safety of their products during the design phase [and] adopting a concept called safety by design. They are baking in safety measures inside their products, the same way that today you’re thinking about security and privacy when you’re building your features.”

ActiveFence is not the only startup in this space, which Wired described as “trust and safety as a service.” But it is one of the largest, especially since it acquired Spectrum Labs in September, so it’s good to hear that its clients include not only big names afraid of PR crises and political scrutiny, but also smaller teams that are just getting started. Tech, too, has an opportunity to learn from past mistakes.

Happy employees, happy company?

I had an interesting chat with Atlassian DevOps evangelist Andrew Boyagi about developer experience not too long ago, so it was nice to see him turn some of these thoughts into a post on TechCrunch+.

Developer experience is more important than developer productivity

Boyagi’s main takeaway is that too many companies are obsessed with measuring developer productivity, when “developer productivity is a by-product of developer joy.” He also suggests steps to boost that joy, including a topic I’ve developed a soft spot for: platform engineering.

Some of his learnings could also be food for thought beyond dev roles. “Happy employees are productive employees may seem like an obvious statement, but this gets lost in the developer productivity discussion.” In the era of bossware and return-to-office mandates, it’s a good reminder that morale matters, too.

Lawmakers revise Kids Online Safety Act to address LGBTQ advocates' concerns

United States capitol in Instagram colors

Image Credits: Bryce Durbin / TechCrunch

The Kids Online Safety Act (KOSA) is getting closer to becoming a law, which would make social platforms significantly more responsible for protecting children who use their products. With 62 senators backing the bill, KOSA seems poised to clear the Senate and progress to the House.

KOSA creates a duty of care for social media platforms to limit addictive or harmful features that have demonstrably affected the mental health of children. The bill also requires platforms to develop more robust parental controls.

But under a previous version of KOSA, LGBTQ advocates pushed back on a part of the bill that would give individual state attorneys general the ability to decide what content is inappropriate for children. This rings alarm bells at a time when LGBTQ rights are being attacked at the state level, and books with LGBTQ characters and themes are being censored in public schools. Senator Marsha Blackburn (R-TN), who introduced the bill with Senator Richard Blumenthal (D-CT), said that a top priority for conservatives should be to “protect minor children from the transgender [sic] in this culture,” including on social media.

Jamie Susskind, Senator Blackburn’s legislative director, said in a statement, “KOSA will not — nor was it designed to — target or censor any individual or community.”

After multiple amendments, the new draft of KOSA has addressed some of the concerns raised by LGBTQ rights groups like GLAAD, the Human Rights Campaign and The Trevor Project; for one, the FTC, rather than individual state attorneys general, will now be responsible for enforcing KOSA nationwide.

A letter to Senator Blumenthal from seven LGBTQ rights organizations said: “The considerable changes that you have proposed to KOSA in the draft released on February 15, 2024, significantly mitigate the risk of it being misused to suppress LGBTQ+ resources or stifle young people’s access to online communities. As such, if this draft of the bill moves forward, our organizations will not oppose its passage.”

Other privacy-minded activist groups like the Electronic Frontier Foundation (EFF) and Fight for the Future are still skeptical of the bill, even after the changes.

In a statement shared with TechCrunch, Fight for the Future said that these changes are promising, but don’t go far enough.

“As we have said for months, the fundamental problem with KOSA is that its duty of care covers content specific aspects of content recommendation systems, and the new changes fail to address that. In fact, personalized recommendation systems are explicitly listed under the definition of a design feature covered by the duty of care,” Fight for the Future said. “This means that a future Federal Trade Commission (FTC) could still use KOSA to pressure platforms into automated filtering of important but controversial topics like LGBTQ issues and abortion, by claiming that algorithmically recommending that content ’causes’ mental health outcomes that are covered by the duty of care like anxiety and depression.”

The Blumenthal and Blackburn offices said that the duty of care changes were made to regulate the business model and practices of social media companies, rather than the content that is posted on them.

KOSA was also amended last year to address earlier concerns about age-verification requirements for users of all ages that could endanger privacy and security. Jason Kelley, the EFF’s activism director, is concerned that these amendments aren’t enough to ward off dangerous interpretations of the bill.

“Despite these latest amendments, KOSA remains a dangerous and unconstitutional censorship bill which we continue to oppose,” Kelley said in a statement to TechCrunch. “It would still let federal and state officials decide what information can be shared online and how everyone can access lawful speech. It would still require an enormous number of websites, apps, and online platforms to filter and block legal, and important, speech. It would almost certainly still result in age verification requirements.”

The issue of children’s online safety has stayed at the forefront of lawmakers’ minds, especially after five big tech CEOs testified before the Senate a few weeks ago. With increasing support for KOSA, Blumenthal’s office told TechCrunch that it is intent on fast-tracking the bill forward.

Update, 2/16/24, 12:30 PM ET with statement from Jamie Susskind.

Twitter's former head of Trust & Safety, Yoel Roth, joins Tinder owner Match Group

Yoel Roth

Image Credits: Jerod Harris/Getty Images for Vox Media

Twitter’s former head of trust and safety, Yoel Roth, announced today that he is joining Match Group, the parent company of several popular dating apps, including Tinder and Hinge. Roth, who shared the move on LinkedIn, is now the company’s vice president of trust and safety.

“As they say… some personal news! I swiped right on Match Group!” Roth said in his announcement post. “15 years ago, I started studying what we now call ‘trust and safety’ because the then-new world of dating apps felt like the Wild West; it’s truly a dream come true to get to roll up my sleeves and work to protect the millions of people making connections on our apps worldwide.”

Roth was at Twitter, now X, for seven and a half years, and quit the company after just two weeks under Elon Musk’s leadership. He faced dangerous and homophobic harassment after Musk attacked him with baseless accusations in an attempt to damage his reputation. Roth also faced harassment following the release of the “Twitter Files,” a series of internal documents that demonstrated how he and other Twitter executives handled content moderation. After an escalation of threats, Roth had to flee his home.

Roth is now taking his trust and safety expertise to Match’s family of dating apps, which includes Tinder, Match.com, Meetic, OkCupid, Hinge, Plenty of Fish, OurTime and more. Although dating apps have built-in features to keep users safe, there is still a lot of toxic behavior on these apps, and not everyone trusts them. A Pew Research study found that Americans are split on whether online dating is a safe way to meet new people; the share of adults who believe online dating is generally safe has decreased since 2019, from 53% to 48%.

Roth, who wrote his PhD dissertation on safety and privacy in dating apps, told Wired in an interview that his new role at Match Group is a “dream job” that he jumped on after the company reached out to him. Roth says he will be responsible for policy and standards development across the company’s apps.

Last year, the Federal Trade Commission (FTC) reported that romance scams cost victims $1.3 billion in 2022, with a median reported loss of $4,400. Roth plans to tackle this issue, noting that he wants to build out protection features for things like scams and financial fraud. Although Match claims to remove 44 spam accounts every minute across its apps, Roth says he wants to further protect users by understanding the issue and implementing measures that will allow for cross-platform action.

In addition, Roth says that although Match Group works to identify underage users, he believes app stores should also play a part in protecting users, mirroring a position that Meta CEO Mark Zuckerberg also holds.