Ex-Blue Origin leaders want to mine the moon

Image Credits: TechCrunch

Hello and welcome back to TechCrunch Space. Happy Monday, everyone!

Want to reach out with a tip? Email Aria at [email protected] or send me a message on Signal at 512-937-3988. You can also send a note to the whole TechCrunch crew at [email protected]. For more secure communications, click here to contact us; options include SecureDrop (instructions here) and links to encrypted messaging apps.

Story of the week

With its enormous lift capacity and payload volume, SpaceX’s Starship is already transforming mission planning. Case in point: Voyager and Airbus will launch their private space station Starlab on Starship, in a single mission.

The two companies announced the launch deal last week, though neither party disclosed the financial terms. In some ways, it isn’t much of a surprise: Starship is the only heavy-lift rocket under development that will be capable of accommodating the station’s eight-meter diameter in one go. But it’s nevertheless a welcome sign of healthy development, both for Starlab and Starship.

Voyager/Airbus Starlab. Image Credits: Starlab Space LLC

Scoop of the week

I uncovered more details about a secretive moon startup headed by ex-Blue Origin leaders. Interlune, a startup that’s been around for at least three years but has made almost zero public announcements about its tech, raised $15.5 million in new funding and aims to close another $2 million. It’s headed by Rob Meyerson, an aerospace executive and investor who was president of Blue Origin for 15 years.

What little is known of Interlune’s tech mostly comes from the abstract of a small SBIR grant the startup was awarded last year by the National Science Foundation. Under that award, the company said it will aim to “develop a core enabling technology for lunar in situ resource utilization: the ability to sort ‘moon dirt’ (lunar regolith) by particle size.”

“By enabling raw lunar regolith to be sorted into multiple streams by particle size, the technology will provide appropriate feedstocks for lunar oxygen extraction systems, lunar 3-dimensional printers, and other applications,” the abstract says.

NASA’s Artemis I Moon rocket sits at Launch Pad Complex 39B at Kennedy Space Center, in Cape Canaveral, Florida, on June 15, 2022. NASA is aiming for June 18 for the beginning of the next wet dress rehearsal test of the agency’s Space Launch System (SLS) at the Kennedy Space Center, with tanking operations on June 20. (Photo by EVA MARIE UZCATEGUI/AFP via Getty Images)

Launch highlights

SpaceX teamed up with Northrop Grumman to deliver more than 8,000 pounds of cargo, fresh food and scientific experiments to astronauts on the International Space Station.

The NG-20 resupply mission took off from Cape Canaveral Space Force Station in Florida on a SpaceX Falcon 9 rocket on January 30 and arrived at the ISS on February 1.

Northrop has been launching Cygnus resupply missions to the ISS on its own Antares rocket since 2013, with the exception of just two missions that flew on a United Launch Alliance Atlas V. But Northrop retired that version of Antares last year, and the next version — an all-American launch vehicle called Antares 330, which it is developing with Firefly Aerospace — will not be ready to fly until around mid-2025.

Both Northrop and SpaceX have multibillion-dollar contracts with NASA to fly cargo resupply missions to the ISS. Under its contract, SpaceX uses its Dragon capsule; this was the first time a Cygnus flew on a SpaceX rocket.

What we’re reading

Last week, I had a great time diving into this story predicting SpaceX’s 2024 revenue, authored by Payload co-founder Mo Islam and Jack Kuhr, Payload’s research director.

The TL;DR is that Payload is projecting SpaceX’s revenue will climb from $8.7 billion in 2023 to $13.3 billion in 2024, chiefly due to higher demand for Falcon 9 launches and more Starlink customers. But there’s tons more discussion on SpaceX’s business at the link above, and it’s worth checking out.

This week in space history

On February 5, 1971, Alan Shepard became the fifth astronaut to walk on the moon. Ad astra!

Alan Shepard on the moon. Image Credits: NASA

5 steps board members and startup leaders can take to prepare for a future shaped by GenAI

Image Credits: Roy Scott / Getty Images

Beena Ammanath, Contributor

Beena Ammanath is a global and U.S. Technology Trust Ethics Leader at the Deloitte AI Institute.

AI is on the minds of nearly every enterprise and startup leader today, challenging human decision-makers with a constant stream of “what if” scenarios for how we will work and live in the future. Generative AI, especially, is redefining what business can do with artificial intelligence — and presenting thorny questions about what business should do.

Managing risks and ensuring effective oversight of AI will need to become a central focus for boards, yet many organizations struggle to help their top leaders become more intelligent about artificial intelligence.

The urgency to educate board members is growing. Over the last decade, the use cases for machine learning and other types of AI have multiplied. So have the risks. For boards, the AI era has exposed new challenges when it comes to governance and risk management. A recent Deloitte survey found that most boards (72%) have at least one committee responsible for risk oversight, and more than 80% have at least one risk management expert. For all the attention and investment in managing other kinds of business risk, AI demands the same treatment.

AI risks abound. AI security risks, for example, can compromise sensitive data. Biased outputs can raise compliance problems. Irresponsible deployment of AI systems can have significant ramifications for the enterprise, consumers and society at large. All of these potential impacts should cause concern for board members — and prompt them to play a greater role in helping their organizations address AI risks.

A growing sense of urgency

The rise of generative AI makes the AI-risk challenge even more complex and urgent. Its capabilities have stunned users and opened the door to transformative use cases. Generative AI, including large language models (LLMs), image and audio generators and code-writing assistants, is giving more users tools that can boost productivity, generate previously overlooked insights and create opportunities to increase revenue. And almost anyone can use these tools. You do not need to have a PhD in data science to use an LLM-powered chatbot trained on enterprise data. And because the barriers to AI usage are quickly crumbling at the same time AI capabilities are rapidly growing, there’s a tremendous amount of work to be done when it comes to risk management.

Not only does generative AI amplify the risks associated with AI, but it also shortens the timeline for developing strategies that support AI risk mitigation. Today’s risks are real, and they will only grow as generative AI matures and its adoption grows. Boards have no time to spare in getting more savvy about generative AI and how it will influence risk management. The following five steps can help board members prepare their organizations for a future that will be shaped by generative AI.

1. Build the board’s AI literacy

Establishing a solid understanding of AI is essential. If board members are to become advocates and guides for AI risk management, they will have to know how to ask the right questions. That means they will need a certain level of AI literacy — beyond what they already know about AI. With generative AI, the need for AI literacy is even more crucial, given the new types of risk that the technology presents. Board members will need to understand new terminology (such as “hallucinated” outputs that are factually false), as well as how generative AI magnifies existing risks due to its scale. A GenAI-enabled call center, for example, could give biased outputs to a greater number of people.

To build a stronger foundation in generative AI risk management, board members can increase their AI literacy through traditional methods, such as bringing in speakers and subject matter experts and pursuing independent learning through classes, lectures and reading. But generative AI itself could also help. For example, board members could use simple prompts to have an LLM summarize and explain, in natural language, the complexities of how generative AI works, its limits and its capabilities.

2. Promote AI fluency in the C-suite

Boards and C-suites should be on the same page when it comes to generative AI and risk management. Having a common language, understanding and set of goals is essential. And while generative AI literacy in the boardroom is important, fluency in the C-suite will be even more so. Board members should use their position to urge executives to build generative AI fluency around not only the value and opportunities, but also the risks.

The power and allure of generative AI will continue to grow, along with the use cases. Business leaders will need knowledge and familiarity with the technology so they can responsibly shape AI programs. There are big decisions to be made around AI ethics, safety and security and accountability. All the factors that influence trust in AI flow from a baseline understanding of what generative AI is and what it can do. Board members have a responsibility to drive that understanding within the enterprise, encouraging others to build AI fluency and making it clear why it’s important.

3. Consider recruiting board members with AI experience

In many organizations, board members come from fields that are focused on finance and business management. That background allows them to be informed leaders on fiscal and competitiveness issues. But given that AI is a technical and complex field with its own unique collection of challenges and risks, boards should expand their in-house subject matter expertise. One way they can do that is by recruiting an AI professional to the board. Such a person should bring experience as an operational AI leader, with a track record of implementing successful AI projects in similar organizations.

Keep in mind that generative AI is a relatively new area. Some of the earliest use cases are only now being deployed. Adding board expertise sooner rather than later can help your organization get ahead of the game, and a professional with operational AI experience can provide essential insights boards will need for oversight and governance.

4. Orient the board for the future

Governance is a continuous need, not a one-time exercise. Boards will have to implement controls to guide the ethical and trustworthy use of generative AI. They may already stand up subcommittees to oversee vital enterprise activities, such as audits, succession planning and risk management related to finance and operations. They should support generative AI governance with a similar approach.

The future of generative AI is still in flux. The capabilities, risks, trajectory and even the lexicon for generative AI are all evolving as the technology matures. With a subcommittee or dedicated group for AI, a board can remain highly focused and informed on this complex, fast-changing technology. Another way boards can rise to the challenge is by extending the mandate for existing subcommittees to include generative AI components. For example, an audit committee’s mandate might include planning for algorithmic auditing.

5. Guide the organization as generative AI matures

Board members are important stakeholders with essential responsibilities, even though they may not work directly with generative AI. As enterprise leadership and lines of business explore how generative AI can enhance productivity and drive innovation, the board can take a higher-level, big-picture view of AI programs. It can focus on guiding the enterprise in the ethical and trustworthy deployment of generative AI. One way to do that is by leveraging a framework for assessing risk and trust, and understanding how those areas affect compliance and governance.

Deloitte’s Trustworthy AI™ framework is just one example, providing a way to help organizations assess risk and trust in any AI deployment. By deploying such a framework, organizations can help their board members make clear-eyed evaluations and guide the business toward the most valuable use of generative AI.

Entering new territory

The generative AI landscape is still new and exciting. And it will likely continue to be exciting, even though its future remains unwritten. No organization has been here before. All organizations are experiencing the early days of a new technology that will have a profound impact on business and society.

While these five important steps can help businesses prepare for the future, there’s even more that board members can do to position their organizations for the new era of generative AI. There’s no shortage of advisers that boards can turn to for assistance and guidance. Such advisers are already helping develop essential tactics and standards for generative AI governance and oversight, and they can provide critical insight that educates and informs boards.

Risk management will always be a moving target, but with greater literacy, focus, professional experience and a vision for the future, boards can guide their organizations through the uncertainty ahead and position their businesses to thrive in this new era of AI.


Ex-Blue Origin leaders' secretive lunar startup Interlune has moonshot mining plans


Image Credits: Interlune

Interlune, a stealth startup headed by ex-Blue Origin executives, is focused on mining the moon for a rare isotope of helium that could be used to scale quantum computing and eventually even fusion power, TechCrunch has learned.

Regulatory filings reported here last week showed that the company recently closed $15.5 million in new capital; before that, Interlune had raised a $2.69 million pre-seed round. But the rationale for raising the capital was poorly understood — until now.

Two of Interlune’s confidential pitch decks, dated spring 2022 and fall 2023 and viewed by TechCrunch, reveal that the startup was seeking that funding to build and test resource extraction hardware for lunar helium-3 (He-3). A representative for Interlune declined to comment.

Interlune says in the most recent pitch deck that it has developed a “breakthrough extraction method” for He-3 from lunar regolith, though the slides don’t go into greater detail. According to one slide, the startup is developing sedan-sized extractors combined with other hardware to effectively make scalable physical plants. There is no explanation of how the helium might be stored, or how it might be transported back to Earth, however.

He-3 is a stable isotope of helium. While Earth is shielded from the solar wind by its magnetic field, the moon is bombarded with it, and particles like He-3 are deposited onto the lunar surface. On Earth, the most common source of He-3 is the decay of tritium, a man-made isotope of hydrogen used in nuclear weapons. Interlune predicts an “exponential” rise in demand for He-3 in the coming years, driven by areas like quantum computing, medical imaging, in-space propellant and fusion, to the extent that it projects annual demand of 4,000 kilograms by 2040 (versus just 5 kilograms now).
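
To put those deck figures in perspective, here is a quick back-of-the-envelope calculation of the growth rate that projection implies. The 2024 baseline year is an assumption; the deck's figures only give "now" and 2040.

```python
# Implied compound annual growth rate (CAGR) of He-3 demand, using the
# figures in Interlune's deck: ~5 kg of annual demand today vs. a
# projected 4,000 kg by 2040 (a 2024 starting point is assumed here).
current_kg = 5
projected_kg = 4_000
years = 2040 - 2024  # 16-year horizon

growth_factor = projected_kg / current_kg            # 800x overall
cagr = growth_factor ** (1 / years) - 1              # ~52% per year
print(f"{growth_factor:.0f}x overall, or ~{cagr:.0%} per year")
```

In other words, demand would have to grow by roughly half again every year for 16 straight years — which gives a sense of just how aggressive the "exponential" framing is.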

The good news is that He-3 is as abundant on the moon as it is scarce on Earth. Mining the moon for He-3 is not a new concept: Data collected since the Apollo missions shows the isotope is plentiful there. But it has long been considered the stuff of science fiction: Scientists have never come close to developing the kind of extraction technology necessary to make such an endeavor worthwhile. He-3 could be used to power fusion reactors — an especially enticing concept, as the byproducts would not be radioactive — but while nuclear fusion research has made major gains in the past few years, it will take many more steps to make fusion a commercially viable energy source here on Earth (let alone in space).

Other countries have already started to look to our moon to resolve this problem. Most notably, China announced in 2022 that its Chang’e-5 robotic mission had collected a new moon mineral that contained He-3, suggesting even greater reserves than previously thought.

China’s interest in He-3 mining creates a national security imperative to secure vast tonnages of the resource on the moon — which could mean promising traction for Interlune, both from government agencies doling out non-dilutive contracts and from investors looking for a defense-focused angle.

Interlune’s executive team includes CEO Rob Meyerson, a prolific space industry investor and former president of Blue Origin; CTO Gary Lai, former chief architect at Blue Origin; and COO Indra Hornsby, who has industry experience at Rocket Lab, BlackSky and Spaceflight Industries. The startup has been in existence for at least three years, but beyond a few brief public statements, this is the first time the public has learned about its plans in any detail.

Image Credits: Interlune

The deck also says that Interlune is planning to demonstrate the tech on the moon as early as 2026, with a pilot plant extracting He-3 in 2028. Should the plans work out as the company hopes, it told investors it could see $500 million in annual recurring revenue from He-3 recovery by the start of the next decade — and only going up from there.

Still, this is an expensive plan: The company will need to pay for launch, secure a resource-return partner and build out all of the hardware necessary to start mining at scale. The economics, like the cost to mine a gram of helium, are also unclear. But if Interlune manages to pull it off, it will be in a category of its own: Other startups focused on in-space resource extraction are either using lunar resources purely for on-orbit applications (like Argo Space Corporation) or focused on minerals only (like AstroForge).