Here is what's illegal under California's 9 (and counting) new AI laws

Image Credits: Andrew Harnik / Getty Images

California Governor Gavin Newsom is currently considering 38 AI-related bills, including the highly contentious SB 1047, which the state’s legislature sent to his desk for final approval. These bills try to address the most pressing issues in artificial intelligence: everything from the existential risk posed by futuristic AI systems and deepfake nudes churned out by AI image generators to Hollywood studios creating AI clones of dead performers.

“Home to the majority of the world’s leading AI companies, California is working to harness these transformative technologies to help address pressing challenges while studying the risks they present,” said Governor Newsom’s office in a press release.

So far, Governor Newsom has signed nine of them into law, some of which are America’s most far-reaching AI laws yet.

AI robocalls

On Friday, Governor Newsom signed a bill into law requiring robocalls to disclose whether they use AI-generated voices. AB 2905 aims to prevent a repeat of the deepfake robocall that imitated Joe Biden’s voice and confused many New Hampshire voters earlier this year.

Deepfake nudes

Newsom signed two laws that address the creation and spread of deepfake nudes on Thursday. SB 926 criminalizes the act, making it illegal to blackmail someone with AI-generated nude images that resemble them.

SB 981, which also became law on Thursday, requires social media platforms to establish channels for users to report deepfake nudes that resemble them. The content must then be temporarily blocked while the platform investigates it, and permanently removed if confirmed.

Watermarks

Also on Thursday, Newsom signed a bill into law to help the public identify AI-generated content. SB 942 requires widely used generative AI systems to disclose that their output is AI-generated in the content’s provenance data. For example, all images created by OpenAI’s DALL-E now need a little tag in their metadata saying they’re AI-generated.

Many AI companies already do this, and there are several free tools out there that can help people read this provenance data and detect AI-generated content.
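To make the idea concrete, here is a minimal, illustrative Python sketch of what reading an image’s provenance data can look like, using the Pillow library to dump embedded metadata and scan it for an AI-generation tag. The tag matching here is a hypothetical heuristic, not a standard; production provenance systems such as C2PA embed cryptographically signed manifests that need dedicated tooling to verify, so treat this only as a rough sketch of the concept.

```python
# Illustrative sketch only: real provenance standards (e.g., C2PA) store
# signed manifests that require dedicated tools to verify. This simply dumps
# whatever plain metadata an image carries and flags AI-related entries.
from PIL import Image  # pip install Pillow


def find_ai_tags(path: str) -> dict:
    img = Image.open(path)
    metadata = dict(img.info)  # PNG text chunks, JPEG app segments, etc.
    # Some generators also write disclosures into EXIF fields.
    for tag_id, value in img.getexif().items():
        metadata[f"exif:{tag_id}"] = value
    # Substring matching on "ai"/"generat" is a heuristic, not a standard.
    return {
        key: value
        for key, value in metadata.items()
        if "ai" in str(key).lower() or "generat" in str(value).lower()
    }


if __name__ == "__main__":
    # "example.png" is a hypothetical file name.
    print(find_ai_tags("example.png"))
```

None of this replaces a real provenance verifier, but it shows the basic shape of the disclosure SB 942 contemplates: a machine-readable marker that travels with the content itself.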

Election deepfakes

Earlier this week, California’s governor signed three laws cracking down on AI deepfakes that could influence elections.

One of California’s new laws, AB 2655, requires large online platforms, like Facebook and X, to remove or label AI deepfakes related to elections, as well as create channels to report such content. Candidates and elected officials can seek injunctive relief if a large online platform is not complying with the act.

Another law, AB 2839, takes aim at social media users who post, or repost, AI deepfakes that could deceive voters about upcoming elections. The law went into effect immediately on Tuesday, and Newsom suggested Elon Musk may be at risk of violating it.

AI-generated political advertisements now require outright disclosures under California’s new law, AB 2355. That means moving forward, Trump may not be able to get away with posting AI deepfakes of Taylor Swift endorsing him on Truth Social (she endorsed Kamala Harris). The FCC has proposed a similar disclosure requirement at a national level and has already made robocalls using AI-generated voices illegal.

Actors and AI

Two laws that Newsom signed on Tuesday — which SAG-AFTRA, the nation’s largest film and broadcast actors union, was pushing for — create new standards for California’s media industry. AB 2602 requires studios to obtain permission from an actor before creating an AI-generated replica of their voice or likeness.

Meanwhile, AB 1836 prohibits studios from creating digital replicas of deceased performers without consent from their estates (e.g., legally cleared replicas were used in the recent “Alien” and “Star Wars” movies, as well as in other films).

What’s left?

Governor Newsom still has 29 AI-related bills to decide on before the end of September. During a Tuesday chat with Salesforce CEO Marc Benioff at the 2024 Dreamforce conference, Newsom may have tipped his hand about SB 1047 and how he’s thinking about regulating the AI industry more broadly.

“There’s one bill that is sort of outsized in terms of public discourse and consciousness; it’s this SB 1047,” said Newsom onstage Tuesday. “What are the demonstrable risks in AI and what are the hypothetical risks? I can’t solve for everything. What can we solve for? And so that’s the approach we’re taking across the spectrum on this.”

Check back on this article for updates on what AI laws California’s governor signs, and what he doesn’t.

Fisker's electric Ocean SUV under investigation over braking loss complaints

Fisker Ocean SUV driving on dirt

Image Credits: Fisker

Federal safety regulators have opened an investigation into Fisker’s first electric vehicle over braking problems.

The National Highway Traffic Safety Administration’s (NHTSA) Office of Defects Investigation (ODI) issued a notice that it’s probing Fisker’s Ocean SUV for loss of braking performance. The agency is focusing on nine complaints about the issue thus far, including one incident involving a crash and an unspecified injury.

A Fisker spokesperson declined to comment.

The probe comes as the company grapples with lower-than-expected demand and a failure to meet internal sales goals, which TechCrunch exclusively reported earlier this month.

Fisker reported last month it delivered roughly 4,700 SUVs worldwide in 2023. The EV startup, which went public in 2020 via a merger with a special purpose acquisition company, began shipping the first Ocean SUVs in June, about six months after contract manufacturing partner Magna Steyr began building the vehicles. The SUV’s launch was delayed in part because its software was not ready at the time.

Since hitting the roads, owners have lodged 19 complaints with NHTSA on issues ranging from brake loss and problems with the gear shifter to a driver door failing to open from the interior and two instances of the vehicle’s hood suddenly flying up on the highway.

According to the braking complaints referenced by ODI, which were submitted between October and December 2023, the Ocean can experience “partial loss of braking over low traction surfaces, without alerting the driver,” which “results in a sudden increase in stopping distance.” The complaints also reference problems with the Ocean’s regenerative braking.

The complaint involving a crash was submitted in November. The owner reported they were driving from Washington, DC to Richmond, Virginia in slightly rainy conditions when another car swerved into their lane, according to the complaint. The owner said in the complaint the Ocean’s brake “vibrated and felt more plastic than elastic,” and that the car slid “as if the tires seized up.” The low-speed crash was mild enough that neither driver filed a police report, but the complaint states that the other driver has since filed an injury claim with the owner’s insurance agency.

There are four different types of investigation that the ODI can open: Defect Petition, Preliminary Evaluation, Recall Query and Engineering Analysis. NHTSA says it works to complete defect petitions in four months, preliminary evaluations and recall queries in eight months and engineering analysis probes in 18 months. The agency classified the Fisker probe as a preliminary evaluation.

Fisker collapsed under the weight of its founder's promises

Raindrops falling on an umbrella left on the ground.

Image Credits: Food Photographer / Getty Images

Welcome back to TechCrunch’s Week in Review — TechCrunch’s newsletter recapping the week’s biggest news. Want it in your inbox every Saturday? Sign up here.

Over the past eight years, famed vehicle designer Henrik Fisker made a lot of promises about what his EV startup would deliver, and none of them came true. As Fisker looks for an unlikely rescue, employees told TechCrunch that the blame rests largely on the shoulders of the husband-and-wife team who steer the company.

This week also brought a major disturbance in the fintech space. After years of missteps and struggles, the banking-as-a-service fintech Synapse officially went bankrupt. Based on Synapse’s filings, as many as 100 fintechs and 10 million end customers could have been impacted by the company’s collapse.

Elon Musk just got a lot of cash for xAI. The AI startup raised $6 billion at an $18 billion pre-money valuation as it aims to compete with OpenAI, Microsoft and Alphabet.

In other big money moves, Google is investing nearly $350 million into Flipkart. The new investment gives the Walmart-owned Indian e-commerce startup a valuation of $36 billion. Google, which reaches more than half a billion people in India, identifies the South Asian nation as a key overseas market. 

News

Sam Altman walking away from a dissolving OpenAI logo
Image Credits: Darrell Etherington with files from Getty under license

Was Sam Altman fired from Y Combinator?: Paul Graham is setting the record straight. In a series of posts on X, the Y Combinator co-founder brushed off claims that Sam Altman was pressured to resign in 2019 due to potential conflicts of interest. Read More

Spotify doles out Car Thing refunds: Spotify is facing backlash over its decision to discontinue support for its in-car streaming device, Car Thing. Spotify has now instituted a refund process, but some users are asking the company not to brick their devices. Read More

Are earbuds the future of AI hardware?: Unlike generative AI gadgets like Humane’s Ai Pin and Rabbit’s R1, Iyo is aiming to build its technology into an already successful category: the Bluetooth earbud. Read More

Firefly forges on: On the heels of personal tragedy, the Tel Aviv-based startup has raised $23 million for its “infrastructure as code” solution to the growing issue of cloud asset management. Read More

Is Apple going to “sherlock” Arc?: Apple is reportedly planning to release a new technology called “smart recaps” in iOS 18, which appears to closely mimic Arc Search’s innovative “Browse for me” functionality. Read More

Misinformation is so back: A pair of new studies offers evidence that misinformation on social media has the power to change people’s minds. Find out who was most responsible for the vast majority of “fake news” in the studied time periods. Read More

AI models have favorite numbers too: Engineers at Gramener performed an experiment where they asked several major LLM chatbots to pick a random number between 0 and 100 — and the results were fascinating. Read More

Mistral unveils coding model: The French AI startup has released its first generative AI model for coding, dubbed Codestral, which is designed to help developers write and interact with code. Read More

Say hello to meme tech: Is it time to disrupt the meme industry? With Meme Depot, founder Alex Taub aspires to build a comprehensive archive of any meme imaginable with a crypto-focused business model. Read More

AI comes for tutors: The arrival of AI bots is posing a threat to long-established tutoring franchises and professional tutors, and the leading apps are from China. But do they actually help students learn? Read More

Analysis

Onyx Motorbikes’ James Khatiblou
Image Credits: Bryce Durbin

What happens to a company when its founder dies?: Onyx Motorbikes was already in trouble — and then its 37-year-old owner passed away unexpectedly with no will or succession plan, leaving behind millions of dollars in debt. Rebecca Bellan reports on how the ensuing battle for control has put Onyx in legal limbo. Read More

The ‘edgelords’ at OpenAI: Meredith Whittaker has some candid thoughts about the current leadership at OpenAI. Mike Butcher sat down with the Signal president to discuss what she describes as a disrespectful “frat house” contingent of the tech industry in a wide-ranging conversation. Read More

Don’t expect IPOs from these startups: While 2024 is looking to be a better year for tech startups going public, there are still a number of high-profile companies that want to wait just a little bit longer. From Plaid to Figma, Rebecca Szkutak rounds up the companies that aren’t itching to go public just yet. Read More

EU opens formal probe of TikTok under Digital Services Act, citing child safety, risk management and other concerns

TikTok logo seen on an Android mobile device screen with the European Union (EU) flag in the background.

Image Credits: Chukrut Budrul/SOPA Images/LightRocket / Getty Images

The European Union is formally investigating TikTok’s compliance with the bloc’s Digital Services Act (DSA), the Commission has announced.

Areas the Commission is focusing on in this investigation of TikTok are linked to the protection of minors, advertising transparency, data access for researchers, and the risk management of addictive design and harmful content, it said in a press release.

The DSA is the bloc’s online governance and content moderation rulebook, which, since Saturday, has applied broadly to what is likely thousands of platforms and services. But since last summer, larger platforms, such as TikTok, have faced a set of extra requirements in areas like algorithmic transparency and systemic risk, and it’s those rules the video-sharing platform is now being investigated under.

Penalties for confirmed breaches of the DSA can reach up to 6% of global annual turnover.

Today’s move follows several months of information gathering by the Commission, which enforces the DSA rules for larger platforms — including requests for information from TikTok in areas such as child protection and disinformation risks.

The EU’s concerns over TikTok’s approach to content governance and safety predate the DSA coming into force on larger platforms, though. TikTok was forced to make some operational tweaks back in June 2022, after regional consumer protection authorities banded together to investigate child safety and privacy complaints.

The Commission will now step up its information requests to the video sharing platform as it investigates the string of suspected breaches. This could also include conducting interviews and inspections as well as asking it to send more data.

There’s no formal deadline for the EU to conclude these in-depth probes — its press release just notes the duration depends on several factors, such as “the complexity of the case, the extent to which the company concerned cooperates with the Commission and the exercise of the rights of defence.”

TikTok was contacted for comment on the formal investigation. A company spokesperson emailed us this statement:

TikTok has pioneered features and settings to protect teens and keep under 13s off the platform, issues the whole industry is grappling with. We’ll continue to work with experts and industry to keep young people on TikTok safe, and look forward to now having the opportunity to explain this work in detail to the Commission.

TikTok confirmed receipt of a document from the Commission setting out the EU’s decision to open an investigation. The company also said it has responded to all previous Commission requests for information but has yet to receive any feedback about its responses. Additionally, TikTok said an earlier offer it made for its internal child safety staff to meet with Commission officials has yet to be taken up.

In its press release, the Commission says the probe of TikTok’s compliance with DSA obligations in the area of systemic risks will look at “actual or foreseeable negative effects” stemming from the design of its system, including algorithms. The EU says it’s worried TikTok’s UX may “stimulate behavioural addictions and/or create so-called ‘rabbit hole effects.’”

“Such assessment is required to counter potential risks for the exercise of the fundamental right to the person’s physical and mental well-being, the respect of the rights of the child as well as its impact on radicalisation processes,” it further writes.

The Commission is also concerned that mitigation measures TikTok has put in place to protect kids from accessing inappropriate content — namely age verification tools — “may not be reasonable, proportionate and effective.”

The bloc will therefore look at whether TikTok is complying with “DSA obligations to put in place appropriate and proportionate measures to ensure a high level of privacy, safety and security for minors, particularly with regard to default privacy settings for minors as part of the design and functioning of their recommender systems.”

Elsewhere, the EU’s probe will assess whether TikTok is fulfilling the DSA requirement to provide “a searchable and reliable repository” for ads that run on its platform.

TikTok only launched an ads library last summer — ahead of the regulation’s compliance deadline for larger platforms.

Also on transparency, the Commission says its investigation concerns “suspected shortcomings” when it comes to TikTok providing researchers with access to publicly accessible data on its platform so they can study systemic risk in the EU — with such data access mandated by Article 40 of the DSA.

Again, TikTok announced an expansion to its research API last summer. But, evidently, the bloc is concerned that neither of these measures has gone far enough to fulfill the platform’s legal requirements to ensure transparency.

Commenting in a statement, Margrethe Vestager, EVP for digital, said:

The safety and well-being of online users in Europe is crucial. TikTok needs to take a close look at the services they offer and carefully consider the risks that they pose to their users — young as well as old. The Commission will now carry out an in-depth investigation without prejudice to the outcome.

In another supporting statement, internal market commissioner Thierry Breton emphasized that “the protection of minors is a top enforcement priority for the DSA.”

“As a platform that reaches millions of children and teenagers, TikTok must fully comply with the DSA and has a particular role to play in the protection of minors online,” he added. “We are launching this formal infringement proceeding today to ensure that proportionate action is taken to protect the physical and emotional well-being of young Europeans. We must spare no effort to protect our children.”

It’s the second such proceeding under the DSA, after the bloc opened a probe on Elon Musk–owned X (formerly Twitter) in December, also citing a string of concerns. That investigation remains ongoing.

Once an investigation has been opened, EU enforcers can also access a broader toolbox, such as being able to take interim measures prior to a formal proceeding being wrapped up.

The EU may also accept commitments offered by a platform under investigation if they are aimed at fixing the issues identified.

Around two dozen platforms are subject to the DSA’s algorithmic transparency and systemic risk rules. These are defined as platforms with more than 45 million regional monthly active users.

In TikTok’s case, the platform informed the bloc last year that it had 135.9 million monthly active users in the EU.

The Commission’s decision to open a child protection investigation into TikTok means Ireland’s media regulator, which oversees TikTok’s compliance with the rest of the DSA’s rules under the decentralized, “country of origin” enforcement structure the EU devised for the bulk of the regulation, won’t be able to step in and supervise the platform’s compliance in this area. It will be solely up to the Commission to assess whether or not TikTok has put in place “appropriate and proportionate measures to ensure a high level of privacy, safety, and security of minors.”

In recent years, Ireland’s data protection authority, which oversees TikTok’s compliance with another major piece of EU digital law — aka, the bloc’s General Data Protection Regulation — has faced criticism from some EU lawmakers for not acting swiftly enough on concerns about how the platform processes minors’ data.

Researchers warn high-risk ConnectWise flaw under attack is 'embarrassingly easy' to exploit

yellow warning symbols with exclamation points on a patterned background

Image Credits: DBenitostock / Getty Images

Security experts are warning that a high-risk vulnerability in a widely used remote access tool is “trivial and embarrassingly easy” to exploit, as the software’s developer confirms malicious hackers are actively exploiting the flaw.

The vulnerability, which carries a maximum severity rating, affects ConnectWise ScreenConnect (formerly ConnectWise Control), popular remote access software that allows managed IT providers and technicians to provide real-time remote technical support on customer systems.

The flaw is described as an authentication bypass vulnerability that could allow an attacker to remotely steal confidential data from vulnerable servers or deploy malicious code, such as malware. The vulnerability was first reported to ConnectWise on February 13, and the company publicly disclosed details of the bug in a security advisory published on February 19.

ConnectWise initially said there was no indication of public exploitation, but noted in an update on Tuesday that it has “received updates of compromised accounts that our incident response team have been able to investigate and confirm.”

The company also shared three IP addresses which it says “were recently used by threat actors.”

When asked by TechCrunch, ConnectWise spokesperson Amanda Lee declined to say how many customers are affected but noted that ConnectWise has seen “limited reports” of suspected intrusions. Lee added that 80% of customer environments are cloud-based and were patched automatically within 48 hours.

When asked if ConnectWise is aware of any data exfiltration or whether it has the means to detect if any data was accessed, Lee said “there has been no data exfiltration reported to us.”

Florida-based ConnectWise provides its remote access technology to more than a million small to medium-sized businesses, its website says.

Cybersecurity company Huntress on Wednesday published an analysis of the actively exploited ConnectWise vulnerability. Huntress security researcher John Hammond told TechCrunch that Huntress is aware of “current and active” exploitation, and is seeing early signs of threat actors moving on to “more focused post-exploitation and persistence mechanisms.”

“We are seeing adversaries already deploy Cobalt Strike beacons and even install a ScreenConnect client onto the affected server itself,” said Hammond, referring to the popular exploitation framework Cobalt Strike, used both by security researchers for testing and abused by malicious hackers to break into networks. “We can expect more of these compromises in the very near future.”

Huntress CEO Kyle Hanslovan added that Huntress’ own customer telemetry shows visibility into more than 1,600 vulnerable servers.

“I can’t sugarcoat it — this shit is bad. We’re talking upwards of ten thousand servers that control hundreds of thousands of endpoints,” Hanslovan told TechCrunch, noting that upwards of 8,800 ConnectWise servers remain vulnerable to exploitation.

Hanslovan added that the “sheer prevalence of this software and the access afforded by this vulnerability signals we are on the cusp of a ransomware free-for-all.”

ConnectWise has released a patch for the actively exploited vulnerability and is urging on-premise ScreenConnect users to apply the fix immediately. ConnectWise also released a fix for a separate vulnerability affecting its remote desktop software. Lee told TechCrunch that the company has seen no evidence that this flaw has been exploited.

Earlier this year, U.S. government agencies CISA and the National Security Agency warned that they had observed a “widespread cyber campaign involving the malicious use of legitimate remote monitoring and management (RMM) software” — including ConnectWise ScreenConnect — to target multiple federal civilian executive branch agencies.

The U.S. agencies also observed hackers abusing remote access software from AnyDesk, which was earlier this month forced to reset passwords and revoke certificates after finding evidence of compromised production systems.

In response to inquiries by TechCrunch, Eric Goldstein, CISA executive assistant director for cybersecurity, said: “CISA is aware of a reported vulnerability impacting ConnectWise ScreenConnect and we are working to understand potential exploitation in order to provide necessary guidance and assistance.”


Are you affected by the ConnectWise vulnerability? You can contact Carly Page securely on Signal at +441536 853968 or by email at [email protected]. You can also contact TechCrunch via SecureDrop.

Waymo's robotaxis under investigation after crashes and traffic mishaps

Waymo driverless Jaguar I-Pace

Image Credits: Kirsten Korosec

Waymo’s autonomous vehicle software is under investigation after federal regulators received 22 reports of the robotaxis crashing or potentially violating traffic safety laws by driving in the wrong lane or into construction zones.

The National Highway Traffic Safety Administration’s Office of Defects Investigation (ODI) says the probe is intended to evaluate the software and its ability to avoid collisions with stationary objects, and how well it detects and responds to “traffic safety control devices” like cones. The investigation is designated as a “preliminary evaluation,” which the ODI tries to resolve within eight months.

“NHTSA plays a very important role in road safety and we will continue to work with them as part of our mission to become the world’s most trusted driver,” Waymo said in a statement to TechCrunch.

It’s the second investigation into autonomous vehicles that ODI has publicly announced in the last two days. On Monday, ODI opened a probe into Amazon-backed Zoox’s AVs after receiving two reports of Toyota Highlanders equipped with the company’s autonomous technology braking unexpectedly and being rear-ended by motorcycles.

The investigation into Waymo’s software also comes just three months after Waymo issued its first-ever recall of its autonomous software, after two of its vehicles crashed into the same towed pickup truck in Phoenix, Arizona.

The company’s robotaxis have had enough trouble with construction sites that videos of these mishaps have regularly gone viral. Some of these are cited in ODI’s report, like when one of Waymo’s robotaxis drove off a paved roadway into a construction zone in Phoenix last October and sustained underbody damage.

There are more typical fender-benders cited, too. In San Francisco, California last year, one of Waymo’s AVs was waiting to merge into traffic when it decided to re-route. As a result, one of its exterior sensors clipped an SUV. In a May incident in San Francisco, a Waymo AV ran into the bumper of a parked car while trying to execute a “pullover maneuver.”

Many of the incidents cited in ODI’s report, though, involve more mundane encounters.

There are multiple examples where Waymo’s robotaxis had trouble navigating automatic gates at parking complexes. At times, Waymo’s AVs crashed into the gates. In a February incident in Arizona, a Waymo AV encountered a closed gate and, when turning to leave the area, backed into parking spikes and popped a tire. In another incident, from November, a Waymo AV crashed into a chain separating part of a parking lot.

While these aren’t life-and-death scenarios, they help illustrate the hard — and hard-to-predict — corner cases that stand in the way of truly autonomous vehicles.