Former NSA head joins OpenAI board and safety committee

Commander of the US Cyber Command Army General Paul Nakasone prepares to testify at a House Select Committee on the Chinese Communist Party

Image Credits: Julia Nikhinson/AFP / Getty Images

Former head of the National Security Agency, retired Gen. Paul Nakasone, will join OpenAI’s board of directors, the AI company announced Thursday afternoon. He will also sit on the board’s “security and safety” subcommittee.

The high-profile addition is likely intended to satisfy critics who think that OpenAI is moving faster than is wise for its customers and possibly humanity, putting out models and services without adequately evaluating their risks or locking them down.

Nakasone brings decades of experience from the Army, U.S. Cyber Command and the NSA. Whatever one may feel about the practices and decision-making at these organizations, he certainly can’t be accused of a lack of expertise.

As OpenAI increasingly establishes itself as an AI provider not just to the tech industry but to government, defense and major enterprises, this kind of institutional knowledge is valuable both in its own right and as a pacifier for worried shareholders. (No doubt the connections he brings within the state and military apparatus are also welcome.)

“OpenAI’s dedication to its mission aligns closely with my own values and experience in public service,” Nakasone said in a press release.

That certainly seems true: Nakasone and the NSA recently defended the practice of buying data of questionable provenance to feed its surveillance networks, arguing that there was no law against it. OpenAI, for its part, has simply taken, rather than buying, large swathes of data from the internet, arguing when it is caught that there is no law against it. They seem to be of one mind when it comes to asking forgiveness rather than permission, if indeed they ask either.

NSA is buying Americans’ internet browsing records without a warrant

The OpenAI release also states:

Nakasone’s insights will also contribute to OpenAI’s efforts to better understand how AI can be used to strengthen cybersecurity by quickly detecting and responding to cybersecurity threats. We believe AI has the potential to deliver significant benefits in this area for many institutions frequently targeted by cyber attacks like hospitals, schools, and financial institutions.

So this is a new market play, as well.

Nakasone will join the board’s safety and security committee, which is “responsible for making recommendations to the full Board on critical safety and security decisions for OpenAI projects and operations.” What this newly created entity actually does and how it will operate is still unknown, as several of the senior people working on safety (as it pertains to AI risk) have left the company, and the committee is itself in the middle of a 90-day evaluation of the company’s processes and safeguards.

Former Tesla humanoid head launches a robotics startup

Image Credits: Mytra

Robotics startup Mytra has been quietly operating behind the scenes since its May 2022 founding in a bid to rethink warehouse automation. The outfit brings a solid pedigree to the table, having been founded by automotive veterans from EV firms including Tesla and Rivian.

Warehouse/fulfillment has been a white-hot category for automation since the pandemic hamstrung the global supply chain in ways that are still being felt. It’s a highly competitive space as well, with big names like Amazon, Locus and Zebra/Fetch making headway and setting the stage for an explosion of interest in the bipedal humanoid form factor.

Even as the world has come back to life post-pandemic, labor shortages remain a major sticking point for the industry, as with so many others. There’s still plenty of room for players to make an impact, however. Estimates suggest that between 5% and 10% of global warehouses are automated in any meaningful sense.

Like so many others, Mytra co-founder and CEO Chris Walti discovered automation’s shortcomings the hard way. He was previously at Tesla, where the hard way tends to be par for the course. Walti spent seven years at the carmaker, first in engineering, then mobile robotics and ultimately as the senior manager/lead for what would become Optimus.

He describes his journey through Tesla as an ongoing cycle of looking for solutions, determining that nothing on the market was suitable for the company’s specific needs, and then building the solutions itself. That started with autonomous mobile robot (AMR) solutions.

“I was pulled into manufacturing and automation through the Model 3 ramp,” Walti told TechCrunch. “Tesla was struggling to get our automation systems up and running, so we ended up setting up a manual warehouse as a pressure release valve for the manufacturing system. About six months later, they were like, ‘Can you just take over the automation system that’s causing a lot of these challenges?’”

Image Credits: Mytra

Among the industry shortcomings that emerged for Tesla’s specific needs was an inability to find AMRs that could move around payloads as heavy as 3,000 pounds. Those are the sorts of demands one bumps up against when making cars. So the team went to work building their own solutions in-house.

“And then Elon [Musk] was like, ‘We should build a humanoid,’” Walti said. “My team was tapped to lead that. I led the internal hiring effort for that team. Everything you saw on AI Day was a product of those efforts.” He added that “at some point, [Optimus] became the number one effort in the company. It ended up not really being a fit for what I ended up wanting to do.”

Walti remains bullish about the long-term impact of humanoid robots across a variety of sectors, but he noted that he “think[s] it’s going to be a while before humanoids are truly moving the needle on a production floor.”

Mytra’s solution shares a lot of common DNA with vertical robotic storage systems produced by companies like AutoStore. Two of the primary differentiators between the startup and existing solutions, according to Walti, are its ability to manage heavy payloads and its dynamism.

“There are literally trillions of different ways that I can move one of these pallets or bookshelves from point A to point B within the system,” he explained. “Which is fundamentally unique. This is the most kinematically free system that has been conceived.”

In spite of maintaining stealth until now, Mytra has already drummed up interest with big names. The startup has a pilot with grocery giant Albertsons, along with “another half-dozen Fortune 50 customers that are in varying stages in the pipeline.”

Mytra also recently closed a $50 million Series B, bringing its total funding up to $78 million. Investors include Greenoaks and Eclipse.

CEOs from Meta, TikTok, Snap, X and Discord head to Congress for kids' online safety hearing

US Capitol building

Image Credits: Bryce Durbin/TechCrunch

CEOs from some of the biggest social platforms will appear before Congress on Wednesday to defend their companies against mounting criticism that they have done too little to protect kids and teens online.

The hearing, set to begin at 10 a.m. ET, is the latest in a long string of congressional tech hearings stretching back for years, with little in the way of new regulation or policy change to show for the efforts.

The Senate Judiciary Committee will host the latest hearing, which is notable mostly for dragging five chief executives across the country to face a barrage of questions from lawmakers. Tech companies often placate Congress by sending legal counsel or a policy executive, but the latest hearing will feature a slate of CEOs: Meta’s Mark Zuckerberg, X (formerly Twitter) CEO Linda Yaccarino, TikTok’s Shou Chew, Discord’s Jason Citron and Evan Spiegel of Snap. Zuckerberg and Chew are the only executives who agreed to appear at the hearing voluntarily without a subpoena.

While Zuckerberg is a veteran of these often lengthy, meandering attempts to hold tech companies to account, Wednesday’s televised hearing will be a first for Yaccarino, Spiegel and Citron. Snap and X have sent other executives (or their former chief executive) in the past, but Discord — a chat app originally designed for gamers — is making its first appearance in the hot seat. All three first-timers could produce some interesting off-script moments, particularly Yaccarino. In recent interviews as X’s top executive, Elon Musk’s pick to lead the company has appeared flustered and combative — a world apart from her media-overtrained peers like Zuckerberg and Chew.

Discord is a very popular app among young people, but it’s still an unusual name to come up in one of these hearings. The committee’s decision to include Discord is likely a result of a report last year from NBC News exploring sextortion and child sexual abuse material (CSAM) on the chat platform. The company’s inclusion is notable, particularly in light of the absence of more prominent algorithm-powered social networks like YouTube — often inexplicably missing from these events — and of Amazon-owned livestreaming giant Twitch.

Wednesday’s hearing, titled “Big Tech and the Online Child Sexual Exploitation Crisis,” will cover much more ground than its narrow title would suggest. Lawmakers will likely dig into an array of concerns — both recent and ongoing — about how social platforms fail to protect their young users from harmful content. That includes serious concerns around Instagram openly connecting sexual predators with sellers advertising CSAM, as the WSJ previously reported, and the NBC News investigation revealing that Discord has facilitated dozens of instances of grooming, kidnapping and other forms of sexual exploitation in recent years.

Beyond concerns that social platforms don’t do enough to protect kids from sexual predation, expect lawmakers to press the five tech CEOs on other online safety concerns, like fentanyl sellers on Snapchat, booming white supremacist extremism on X and the prevalence of self-harm and suicide content on TikTok. And given the timing of X’s embarrassing failure to prevent a recent explosion of explicit AI-generated Taylor Swift imagery and the company’s amateurish response, expect some Taylor Swift questions too.

The tech companies are likely to push back, pointing lawmakers to platform and policy changes in some cases designed to make these apps safer, and in others engineered mostly to placate Congress in time for this hearing. In Meta’s case, that looks like an update to Instagram and Facebook last week that prevents teens from receiving direct messages from users they don’t know. Like many of these changes from companies like Meta, it raises the question of why these safeguards continue to be added on the fly instead of being built into the product before it was offered to young users.

KOSA looms large

This time around, the hearing is part of a concerted push to pass the Kids Online Safety Act (KOSA), a controversial piece of legislation that ostensibly forces tech platforms to take additional measures to shield children from harmful content online. In spite of some revisions, the bill’s myriad critics caution that KOSA would aggressively sanitize the internet, promote censorship and imperil young LGBTQ people in the process. Some of the bill’s conservative supporters — including co-sponsor Sen. Marsha Blackburn — have stated outright that KOSA should be used to effectively erase transgender content for young people online.

The LGBTQ advocacy group GLAAD expressed its concerns about the hearing and related legislation in a statement provided to TechCrunch, urging lawmakers to ensure that “proposed solutions be carefully crafted” to avoid negatively impacting the queer community.

“The US Senate Judiciary Committee’s hearing is likely to feature anti-LGBTQ lawmakers baselessly attempting to equate age-appropriate LGBTQ resources and content with inappropriate material,” GLAAD said. “… Parents and youth do need action to address Big Tech platforms’ harmful business practices, but age-appropriate information about the existence of LGBTQ people should not be grouped in with such content.”

The ACLU and digital rights organization the EFF have also opposed the legislation, as have other groups concerned about the bill’s implications for encryption. Similar concerns have followed the Children and Teens’ Online Privacy Protection Act (now known as “COPPA 2.0“), the STOP CSAM Act and the EARN IT Act, adjacent bills purporting to protect children online.

The bill’s proponents aren’t all conservative. KOSA enjoys bipartisan support at the moment and the misgivings expressed by its critics haven’t broken through to the many Democratic lawmakers who are on board. The bill is also backed by organizations that promote children’s safety online, including the American Academy of Pediatrics, the National Center on Sexual Exploitation and Fairplay, a nonprofit focused on protecting kids online.

“KOSA is a needed corrective to social media platforms’ toxic business model, which relies on maximizing engagement by any means necessary, including sending kids down deadly rabbit holes and implementing features that make young people vulnerable to exploitation and abuse,” Josh Golin, executive director of Fairplay, said in a statement provided to TechCrunch. Fairplay has also organized a pro-KOSA coalition of parents who have lost children in connection with cyberbullying, drugs purchased on social platforms and other online harms.

As of last week, KOSA’s unlikeliest supporter is one of the companies that the bill seeks to regulate. Snap split from its peers last week to throw its support behind KOSA, a move likely intended to endear the company to regulators that could steer its fate — or perhaps more importantly, the fate of TikTok, Snap’s dominant rival, which sucks up the lion’s share of screen time among young people.

Snap’s decision to break rank with its tech peers and even its own industry group on KOSA echoes a similar move by Meta, then Facebook, to support a controversial pair of laws known as FOSTA-SESTA back in 2018. That legislation, touted as a solution to online sex trafficking, went on to become law, but years later FOSTA-SESTA is better known for driving sex workers away from safe online spaces than it is for disrupting sex trafficking.

Fan fiction writers rally fandoms against KOSA, the bill purporting to protect kids online

Women in AI: Irene Solaiman, head of global policy at Hugging Face

illustration of Irene Solaiman

Image Credits: Bryce Durbin / TechCrunch

To give AI-focused women academics and others their well-deserved — and overdue — time in the spotlight, TechCrunch is launching a series of interviews focusing on remarkable women who’ve contributed to the AI revolution. We’ll publish several pieces throughout the year as the AI boom continues, highlighting key work that often goes unrecognized. Read more profiles here.

Irene Solaiman began her career in AI as a researcher and public policy manager at OpenAI, where she led a new approach to the release of GPT-2, a predecessor to ChatGPT. After serving as an AI policy manager at Zillow for nearly a year, she joined Hugging Face as the head of global policy. Her responsibilities there range from building and leading company AI policy globally to conducting socio-technical research.

Solaiman also advises the Institute of Electrical and Electronics Engineers (IEEE), the professional association for electrical and electronics engineering, on AI issues, and is a recognized AI expert at the intergovernmental Organization for Economic Co-operation and Development (OECD).

Q&A

Briefly, how did you get your start in AI? What attracted you to the field?

A thoroughly nonlinear career path is commonplace in AI. My budding interest started in the same way many teenagers with awkward social skills find their passions: through sci-fi media. I originally studied human rights policy and then took computer science courses, as I viewed AI as a means of working on human rights and building a better future. Being able to do technical research and lead policy in a field with so many unanswered questions and untaken paths keeps my work exciting.

What work are you most proud of in the AI field?

I’m most proud of when my expertise resonates with people across the AI field, especially my writing on release considerations in the complex landscape of AI system releases and openness. Seeing my paper on an AI Release Gradient frame technical deployment discussions, prompt conversations among scientists and get used in government reports is affirming — and a good sign I’m working in the right direction! Personally, some of the work I’m most motivated by is on cultural value alignment, which is dedicated to ensuring that systems work best for the cultures in which they’re deployed. Working with my incredible co-author and now dear friend Christy Dennison on a Process for Adapting Language Models to Society was a whole-of-heart (and many-debugging-hours) project that has shaped safety and alignment work today.

How do you navigate the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?

I’ve found, and am still finding, my people — from working with incredible company leadership who care deeply about the same issues that I prioritize to great research co-authors with whom I can start every working session with a mini therapy session. Affinity groups are hugely helpful in building community and sharing tips. Intersectionality is important to highlight here; my communities of Muslim and BIPOC researchers are continually inspiring.

What advice would you give to women seeking to enter the AI field?

Have a support group whose success is your success. In youth terms, I believe this is a “girl’s girl.” The same women and allies I entered this field with are my favorite coffee dates and late-night panicked calls ahead of a deadline. One of the best pieces of career advice I’ve read came from Arvind Narayanan on the platform formerly known as Twitter, establishing the “Liam Neeson principle” of not being the smartest of them all, but having a particular set of skills.

What are some of the most pressing issues facing AI as it evolves?

The most pressing issues themselves evolve, so the meta answer is: International coordination for safer systems for all peoples. People who use and are affected by systems, even in the same country, have varying preferences and ideas of what is safest for themselves. And the issues that arise will depend not only on how AI evolves, but [also] on the environment into which they’re deployed; safety priorities and our definitions of capability differ regionally, such as a higher threat of cyberattacks to critical infrastructure in more digitized economies.

What are some issues AI users should be aware of?

Technical solutions rarely, if ever, address risks and harms holistically. While there are steps users can take to increase their AI literacy, it’s important to invest in a multitude of safeguards for risks as they evolve. For example, I’m excited about more research into watermarking as a technical tool, and we also need coordinated policymaker guidance on generated content distribution, especially on social media platforms.

What is the best way to responsibly build AI?

With the people affected and constantly reevaluating our methods for assessing and implementing safety techniques. Both beneficial applications and potential harms constantly evolve and require iterative feedback. The means by which we improve AI safety should be collectively examined as a field. The most popular evaluations for models in 2024 are much more robust than those I was running in 2019. Today, I’m much more bullish about technical evaluations than I am about red-teaming. I find human evaluations to be of extremely high utility, but as more evidence arises of the mental burden and disparate costs of human feedback, I’m increasingly bullish about standardizing evaluations.

How can investors better push for responsible AI?

They already are! I’m glad to see many investors and venture capital companies actively engaging in safety and policy conversations, including via open letters and Congressional testimonies. I’m eager to hear more investors’ expertise on what stimulates small businesses across sectors, especially as we’re seeing more AI use from fields outside the core tech industries.