These startups are trying to prevent another CrowdStrike-like outage, according to VCs

Image Credits: Diego Radames/Europa Press / Getty Images

Windows users around the globe woke up on Friday morning to “blue screens of death” (BSOD) thanks to a faulty software update from CrowdStrike. The bug caused outages around the world, bringing airlines, boats, hospitals, and banks to a grinding halt. But some see opportunity in the rubble.

The global outage is a stark reminder of how much of the world relies on technological infrastructure. In the midst of disaster, some venture capitalists see a chance for new technologies to prevent this from ever happening again. In 2024, one buggy software update should not be able to take down so many of the globe’s most important computer systems. Some would say this is exactly why startups, and venture capital, exist: to innovate in the face of a widespread issue.

The CrowdStrike outage is drawing attention to cybersecurity companies, but CRV general partner Reid Christian says this wasn’t a cybersecurity event; the real problem is that a massive vendor shipped software that wasn’t properly tested, debugged, or released through a staged rollout. CRV is investing in Fleet, a cybersecurity and IT management startup that monitors the vendor software running on a company’s endpoints.
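
For the curious, the staged rollout Christian says was missing is simple to describe in code. The sketch below is a minimal illustration of the pattern, not CrowdStrike’s or Fleet’s actual tooling; the fleet size, stage fractions and failure threshold are invented for the example:

```python
import random

# Illustrative staged (canary) rollout: push an update to progressively
# larger slices of a device fleet, halting if any stage's failure rate
# crosses a threshold. All numbers here are invented for the sketch.
STAGES = [0.01, 0.05, 0.25, 1.0]  # cumulative fraction of fleet per stage
MAX_FAILURE_RATE = 0.001          # abort if more than 0.1% of a batch fails

def push_update(device: str) -> bool:
    """Stand-in for deploying the update; True means a healthy check-in."""
    return random.random() > 0.0002  # simulated per-device failure chance

def staged_rollout(fleet: list[str]) -> bool:
    deployed = 0
    for fraction in STAGES:
        target = int(len(fleet) * fraction)
        batch = fleet[deployed:target]
        failures = sum(1 for device in batch if not push_update(device))
        deployed = target
        if batch and failures / len(batch) > MAX_FAILURE_RATE:
            print(f"Halting at {fraction:.0%}: {failures}/{len(batch)} failed")
            return False
    print(f"Update reached all {deployed} devices")
    return True

if __name__ == "__main__":
    staged_rollout([f"host-{i}" for i in range(100_000)])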

It’s not clear how well additional mobile device management (MDM)-type software, like Fleet’s, would have helped with this particular CrowdStrike issue. The problem appeared to be caused by a faulty Windows kernel-level driver, software installed at the deepest level of a computer’s operating system. (Companies that ran MDM software alongside CrowdStrike still experienced the BSOD.) But Christian points out that when a software vendor is granted that level of access and trust, more protections are necessary.

“We need to have people watching the watchers in the cyber world,” Christian said. “You can have your main vendors, but you must have ancillary vendors as well, people who are sitting alongside and are there to support.”

Fleet co-founder and CTO Zach Wasserman tells TechCrunch his security software operates outside the kernel so that it doesn’t compromise the stability of the system.
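
The stability argument for staying out of the kernel is easy to sketch: a user-space agent that crashes takes only itself down, and a watchdog can restart or quarantine it, whereas a faulting kernel driver can take the whole machine with it. Below is a minimal, hypothetical supervision loop in that spirit; the agent binary path and thresholds are made up for illustration:

```python
import subprocess
import time

# Hypothetical user-space watchdog: if the security agent crash-loops,
# the operating system keeps running and the agent can be disabled.
# A fault in a kernel-mode driver offers no such recovery point.
AGENT_CMD = ["/usr/local/bin/security-agent"]  # illustrative agent binary
MAX_CRASHES = 3       # quarantine after this many exits...
WINDOW_SECONDS = 300  # ...within this time window

def supervise() -> None:
    crash_times: list[float] = []
    while True:
        agent = subprocess.Popen(AGENT_CMD)
        agent.wait()  # blocks until the agent exits or crashes
        now = time.time()
        crash_times = [t for t in crash_times if now - t < WINDOW_SECONDS]
        crash_times.append(now)
        if len(crash_times) >= MAX_CRASHES:
            print("Agent is crash-looping; disabling it and alerting ops.")
            return  # the machine keeps running without the agent
        time.sleep(5)  # brief backoff before restarting

if __name__ == "__main__":
    supervise()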

Though this wasn’t a cybersecurity incident caused by a malicious hacker, Friday’s outage may have been so severe due to CrowdStrike’s unique access to kernels, the core of the operating system. Lightspeed Venture Partners’ Guru Chahal suspects cybersecurity applications, such as Wiz, that sit outside the kernel may become more popular after this disaster.

“Once you give access to the kernel (as in this case), it’s hard to stop these issues,” Chahal said in an email to TechCrunch. “But avoiding by using non-invasive approaches is definitely possible and companies such as Wiz (Cloud Security) and Oligo Security (run time security) take these alternative approaches for this reason.”

Oligo Security makes security observability software for open source software, relying on sandboxing rather than direct access to the kernel. Because Friday’s incident was a Windows kernel problem, Oligo couldn’t have prevented it. But the sandboxed approach is something the Windows security industry may want to pursue more seriously.

Meanwhile, Wiz is not doing a victory lap just yet. Despite all the buzz around the cybersecurity company now that Google is negotiating a $23 billion acquisition deal, Wiz board member Gili Raanan says Friday’s event upped the pressure on everyone. He expects that the entire security ecosystem will face greater scrutiny around products and deployment due to this event.

“It’s a bad day not just for CrowdStrike. It’s a bad day for everyone involved in cybersecurity,” Raanan said. “There are no winners and losers, there are only losers.”

Fin Capital founder Logan Allin, who invests in B2B financial services companies, sees a greater need for cloud observability companies in light of Friday’s outage. Outside of cybersecurity, he says companies are becoming increasingly dependent on external APIs as they integrate more AI solutions, and those integrations are just as prone to buggy software updates like this one.

“There’s companies in our portfolio, like Middleware, that ensure API integrations between your cybersecurity, your cloud orchestration, and all the moving packets of data within the architecture don’t break,” Allin said.
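
The kind of check Allin is describing can be approximated with a simple contract test against each external dependency. The endpoint URL and expected fields below are assumptions for the sketch, not Middleware’s product:

```python
import json
import urllib.request

# Illustrative API contract check: confirm an external dependency is up
# and still returns the fields downstream systems expect, so a vendor's
# change is caught before it silently breaks an integration.
ENDPOINT = "https://api.example-vendor.com/v1/health"  # hypothetical URL
EXPECTED_FIELDS = {"status", "version"}                # assumed contract

def check_integration(url: str, timeout: float = 5.0) -> bool:
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            payload = json.load(resp)
    except Exception as exc:
        print(f"ALERT: {url} unreachable or returned invalid JSON: {exc}")
        return False
    if not isinstance(payload, dict) or EXPECTED_FIELDS - payload.keys():
        print(f"ALERT: {url} response no longer matches expected contract")
        return False
    return True

if __name__ == "__main__":
    check_integration(ENDPOINT)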

Though Friday’s outage was jarring, VCs like Allin and Chahal predict it is only the first of many failures from an outdated, crumbling infrastructure layer. Especially in older sectors, such as finance and healthcare, these outages highlight the need for updated technology.

“Going forward, I suspect there’ll be a number of startups that avoid this issue of sitting in the kernel while still providing runtime security,” Chahal said.

Reporting contributed by Marina Temkin.

California weakens bill to prevent AI disasters before final vote, taking advice from Anthropic

Image Credits: Melinda Podor / Getty Images

California’s bill to prevent AI disasters, SB 1047, has faced significant opposition from many parties in Silicon Valley. Today, California lawmakers bent slightly to that pressure, adding several amendments suggested by AI firm Anthropic and other opponents.

On Thursday the bill passed through California’s Appropriations Committee, a major step toward becoming law, with several key changes, Senator Wiener’s office told TechCrunch.

“We accepted a number of very reasonable amendments proposed, and I believe we’ve addressed the core concerns expressed by Anthropic and many others in the industry,” said Senator Wiener in a statement to TechCrunch. “These amendments build on significant changes to SB 1047 I made previously to accommodate the unique needs of the open source community, which is an important source of innovation.”

SB 1047 still aims to prevent large AI systems from killing lots of people, or causing cybersecurity events that cost over $500 million, by holding developers liable. However, the bill now grants California’s government less power to hold AI labs to account.

What does SB 1047 do now?

Most notably, the bill no longer allows California’s attorney general to sue AI companies for negligent safety practices before a catastrophic event has occurred. This was a suggestion from Anthropic.

Instead, California’s attorney general can seek injunctive relief, asking a court to order a company to cease an operation it finds dangerous, and can still sue an AI developer if its model does cause a catastrophic event.

Further, SB 1047 no longer creates the Frontier Model Division (FMD), a new government agency formerly included in the bill. However, the bill still creates the Board of Frontier Models — the core of the FMD — and places it inside the existing Government Operations Agency. In fact, the board is bigger now, with nine members instead of five. The board will still set compute thresholds for covered models, issue safety guidance and issue regulations for auditors.

Senator Wiener also amended SB 1047 so that AI labs no longer need to submit certifications of safety test results “under penalty of perjury.” Now, these AI labs are simply required to submit public “statements” outlining their safety practices, but the bill no longer imposes any criminal liability.

SB 1047 also now includes more lenient language around how developers ensure AI models are safe. The bill now requires developers to exercise “reasonable care” to ensure AI models do not pose a significant risk of causing catastrophe, instead of the “reasonable assurance” standard the bill required before.

Further, lawmakers added a protection for open source fine-tuned models. If someone spends less than $10 million fine-tuning a covered model, they are explicitly not considered a developer by SB 1047. The responsibility will still be on the original, larger developer of the model.

Why all the changes now?

While the bill has faced significant opposition from U.S. congressmen, renowned AI researchers, Big Tech and venture capitalists, it has flown through California’s legislature with relative ease. These amendments are likely an effort to appease SB 1047’s opponents and present Governor Newsom with a less controversial bill he can sign into law without losing support from the AI industry.

While Newsom has not publicly commented on SB 1047, he’s previously indicated his commitment to California’s AI innovation.

Anthropic tells TechCrunch it’s reviewing SB 1047’s changes before it takes a position. Not all of Anthropic’s suggested amendments were adopted by Senator Wiener.

“The goal of SB 1047 is—and has always been—to advance AI safety, while still allowing for innovation across the ecosystem,” said Nathan Calvin, senior policy counsel for the Center for AI Safety Action Fund. “The new amendments will support that goal.”

That said, these changes are unlikely to appease staunch critics of SB 1047. While the bill is notably weaker than before these amendments, SB 1047 still holds developers liable for the dangers of their AI models. That core fact about SB 1047 is not universally supported, and these amendments do little to address it.

“The edits are window dressing,” said Andreessen Horowitz general partner Martin Casado in a tweet. “They don’t address the real issues or criticisms of the bill.”

In fact, moments after SB 1047 passed on Thursday, eight United States Congress members representing California wrote a letter asking Governor Newsom to veto SB 1047. They write the bill “would not be good for our state, for the start-up community, for scientific development, or even for protection against possible harm associated with AI development.”

What’s next?

SB 1047 is now headed to California’s Assembly floor for a final vote. If it passes there, it will need to be referred back to California’s Senate for a vote due to these latest amendments. If it passes both, it will head to Governor Newsom’s desk, where it could be vetoed or signed into law.

HoundDog.ai helps developers prevent personal information from leaking

Image Credits: olaser / Getty Images

HoundDog.ai, a startup that helps developers ensure their code doesn’t leak personally identifiable information (PII), came out of stealth Wednesday and announced a $3.1 million seed round led by E14, Mozilla Ventures and ex/ante, in addition to a number of angel investors. Unlike other scanning tools, HoundDog actually looks at the code a developer is writing, using both traditional pattern matching and large language models (LLMs) to find potential issues.
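
HoundDog hasn’t published its rule set, but the pattern-matching half of a tool like this is straightforward to illustrate. The regular expressions below are invented for the sketch and far cruder than what a production scanner would use:

```python
import re

# Toy PII patterns of the sort a code scanner might apply to source files.
# Real tools layer on many more rules plus data-flow analysis and, in
# HoundDog's case, LLM-based detection.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "sensitive_identifier": re.compile(
        r"\b(ssn|social_security|date_of_birth|passport_number)\b", re.I),
}

def scan_source(path: str, text: str):
    """Yield (path, line_number, kind) for each potential PII hit."""
    for lineno, line in enumerate(text.splitlines(), start=1):
        for kind, pattern in PII_PATTERNS.items():
            if pattern.search(line):
                yield (path, lineno, kind)

if __name__ == "__main__":
    sample = 'user = {"ssn": "123-45-6789", "name": "Ada"}'
    for hit in scan_source("models/user.py", sample):
        print("potential PII:", hit)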

HoundDog was founded by Amjad Afanah, who previously co-founded DCHQ, which was acquired by Gridstore (which, to complicate things, then changed its name to HyperGrid) in 2016. Afanah also co-founded apisec.ai, which is still up and running, and worked at self-driving startup Cruise. The inspiration for HoundDog came from his time at data security startup Cyral and his conversations with privacy teams there, he told me.

Image Credits: HoundDog.ai

“When I was at Cyral, we had a lot of data,” he said. “What Cyral does — like many others in the data security space — is they focus on production systems. They help you discover, classify your structured data and your databases, and then help you apply access controls. But the overwhelming feedback that I kept hearing from security and privacy teams alike was: ‘You know, it’s a little too reactive and it doesn’t keep up with the changes in the code base.’”

So HoundDog shifts this process even further left. While it still sits in the continuous integration flow and not yet in the development environment (though that may happen in the future), the idea here is to find potential data leaks before the code is merged. And most importantly, HoundDog does so by looking at the actual code, not the data flow it produces. “Our source of truth is the code base,” Afanah said.

Image Credits: HoundDog.ai

Thanks to this, if a development team starts collecting Social Security numbers, for example, HoundDog would raise a flag and warn the team before the code is ever merged; it would also alert the security team. Catching that early could head off a major, and costly, issue after all.
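
In a CI pipeline, that early warning amounts to a merge gate: scan only the lines a pull request adds and fail the build when a new PII pattern shows up. A minimal sketch, assuming a unified diff is piped in (for example, `git diff main... | python pii_gate.py`):

```python
import re
import sys

# Illustrative CI gate: read a unified diff from stdin and fail the build
# if any added line appears to introduce Social Security number handling.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b|\bssn\b", re.I)

def added_lines(diff_text: str):
    """Yield lines added by the diff (ignoring the '+++' file headers)."""
    for line in diff_text.splitlines():
        if line.startswith("+") and not line.startswith("+++"):
            yield line[1:]

def main() -> int:
    hits = [l for l in added_lines(sys.stdin.read()) if SSN_PATTERN.search(l)]
    for hit in hits:
        print(f"blocked: possible SSN handling added: {hit.strip()}")
    return 1 if hits else 0  # a nonzero exit code fails the CI job

if __name__ == "__main__":
    sys.exit(main())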

The service currently supports code written in Java, C#, JavaScript and TypeScript, as well as SQL, GraphQL and OpenAPI/Swagger queries. Support for Python is imminent, the company says.

Afanah noted that a tool like this is becoming especially important in the age of AI-generated code, a point Replit CEO (and HoundDog angel investor) Amjad Masad echoed.

“As an increasing number of companies turn to AI-generated code to accelerate development, embedding security best practices and ensuring the security of the generated code becomes essential,” Masad said. “HoundDog.ai is leading the way in securing PII data early in the development cycle, making it an indispensable component of any AI code generation workflow. This is the reason I chose to invest in this company.”

HoundDog itself uses AI, too. It currently relies on OpenAI’s models for its LLM-based scanning, but it’s important to stress that this is optional. Users who worry about their code leaving their private repositories can choose to rely only on the company’s more traditional code scanner.

A major part of HoundDog’s value proposition is that its automated reporting capabilities can cut compliance costs for startups. The service can automatically generate a record of processing activities (RoPA), using generative AI to draft these reports by sending data to OpenAI. The team stresses that only the tokens the service has discovered through its regular scanner are shared with OpenAI, and that the actual source code isn’t shared.
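
As a rough illustration of that boundary, only the scanner’s findings, never the raw source, go into the model prompt. The sketch below uses OpenAI’s Python client; the model name, prompt and findings format are assumptions for the example, not HoundDog’s actual pipeline:

```python
from openai import OpenAI  # pip install openai

# Illustrative RoPA drafting: the prompt contains only scanner findings
# (data types and where they flow), not the source code itself.
findings = [
    {"data_type": "email", "file": "api/signup.py", "sink": "postgres"},
    {"data_type": "ssn", "file": "models/user.py", "sink": "audit_log"},
]

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model choice for this sketch
    messages=[{
        "role": "user",
        "content": "Draft a GDPR Article 30 record of processing "
                   f"activities from these detected data flows: {findings}",
    }],
)
print(response.choices[0].message.content)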

The company offers a limited free plan, with paid plans starting at $200/month for scanning up to two repos.