OpenAI tempers expectations with less bombastic, GPT-5-less DevDay this fall

OpenAI CEO Sam Altman smiles during the first OpenAI DevDay conference in San Francisco on November 6, 2023, where he delivered the keynote address.

Image Credits: Justin Sullivan / Getty Images

Last year, OpenAI held a splashy press event in San Francisco during which the company announced a bevy of new products and tools, including the ill-fated App Store-like GPT Store.

This year will be a quieter affair, however. On Monday, OpenAI said it’s changing the format of its DevDay conference from a tentpole event into a series of on-the-road developer engagement sessions. The company also confirmed that it won’t release its next major flagship model during DevDay, instead focusing on updates to its APIs and developer services.

“We’re not planning to announce our next model at DevDay,” an OpenAI spokesperson told TechCrunch. “We’ll be focused more on educating developers about what’s available and showcasing dev community stories.”

OpenAI’s DevDay events this year will take place in San Francisco on October 1, London on October 30, and Singapore on November 21. All will feature workshops, breakout sessions, demos with OpenAI product and engineering staff, and developer spotlights. Registration costs $450 (or $0 through scholarships for eligible attendees), and applications close on August 15.

OpenAI has in recent months taken more incremental steps than monumental leaps in generative AI, opting to hone and fine-tune its tools as it trains the successor to its current leading models, GPT-4o and GPT-4o mini. The company has refined approaches to improving its models’ overall performance and preventing them from going off the rails as often as they previously did, but OpenAI appears to have lost its technical lead in the generative AI race — at least according to some benchmarks.

One of the reasons could be the increasing challenge of finding high-quality training data.

OpenAI’s models, like most generative AI models, are trained on massive collections of web data — web data that many creators are choosing to gate over fears that their data will be plagiarized or that they won’t receive credit or pay. More than 35% of the world’s top 1,000 websites now block OpenAI’s web crawler, according to data from Originality.AI. And around 25% of data from “high-quality” sources has been restricted from the major datasets used to train AI models, a study by MIT’s Data Provenance Initiative found.
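For context, sites typically opt out by disallowing OpenAI’s documented crawler user agent, GPTBot, in their robots.txt files (a “User-agent: GPTBot” stanza followed by “Disallow: /”). Here’s a minimal sketch, using only Python’s standard library, of how one might check whether a given site does so; the function name and the example.com domain are illustrative, not part of Originality.AI’s methodology:

```python
# Minimal sketch (illustrative, not Originality.AI's methodology): check
# whether a site's robots.txt bars OpenAI's crawler, whose documented
# user agent is "GPTBot". "example.com" is a placeholder domain.
from urllib.robotparser import RobotFileParser

def blocks_gptbot(domain: str) -> bool:
    """Return True if the site's robots.txt disallows GPTBot at the root."""
    parser = RobotFileParser()
    parser.set_url(f"https://{domain}/robots.txt")
    parser.read()  # fetch and parse the live robots.txt file
    return not parser.can_fetch("GPTBot", f"https://{domain}/")

print(blocks_gptbot("example.com"))  # placeholder domain
```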

Should the current access-blocking trend continue, the research group Epoch AI predicts that developers will run out of data to train generative AI models between 2026 and 2032. That — and fear of copyright lawsuits — has forced OpenAI to enter costly licensing agreements with publishers and various data brokers.

OpenAI is said to have developed a reasoning technique that could improve its models’ responses to certain questions, particularly math questions, and the company’s CTO Mira Murati has promised a future model with “Ph.D.-level” intelligence. (OpenAI revealed in a blog post in May that it had begun training its next “frontier” model.) That’s a lot to promise, and there’s high pressure to deliver: OpenAI is reportedly hemorrhaging billions of dollars training its models and hiring top-paid research staff.

OpenAI still faces plenty of controversy, including over its use of copyrighted data for training, restrictive employee NDAs, and its effective ouster of safety researchers. The slower product cycle might have the beneficial side effect of countering the narrative that OpenAI has deprioritized AI safety in pursuit of more capable, powerful generative AI technologies.

TechCrunch Space: Spending less

Image Credits: TechCrunch

Hello, and welcome back to TechCrunch Space. Did you hear? Bridgit Mendler will be joining me onstage at this year’s TechCrunch Disrupt to talk all things ground stations. She’s just one of the incredible space industry entrepreneurs who will be coming this year. Find out more here. October 28-30 — see you there!

Want to reach out with a tip? Email Aria at [email protected] or send a message on Signal at 512-937-3988. You can also send a note to the whole TechCrunch crew at [email protected]. For more secure communications, click here to contact us, which includes SecureDrop instructions and links to encrypted messaging apps.

Story of the week

A pair of Rocket Lab-made spacecraft are about to embark on a two-step journey. The first step is the 55-hour, 2,500-mile stretch from California to the launch site at Cape Canaveral. The second step? Just 11 months and 230 million miles to Mars. 

Even more exciting: the mission, which was commissioned by NASA with scientific payloads from UC Berkeley’s Space Sciences Laboratory and satellite buses provided by Rocket Lab, will end up costing just one-tenth as much as other orbiter missions to the red planet. ESCAPADE will launch on Blue Origin’s New Glenn in October, but because it will be that vehicle’s first launch, the date could get pushed back. If the mission misses its window, we’ll have to wait 26 months for ESCAPADE to launch.

Image Credits: Rocket Lab

Scoop of the week

OK, this is NOT a scoop. But I didn’t see anyone else covering the topic, so you might say it’s a bit of an exclusive: the draft environmental assessment for Stoke Space’s launch pad that the U.S. Space Force released last month. These regulatory documents are long and can be difficult to get through, but they provide unique insights into a company’s near-term plans.

The gist of the document is that Stoke is pursuing a “phased program approach,” whereby the company first operates a fully expendable vehicle at a relatively low launch cadence (10 launches per year). Phase 2, which would require a supplemental environmental analysis and is not considered in this draft document, would involve the fully reusable rocket.

Stoke Space’s Hopper2
Image Credits: Stoke Space

What we’re reading

Now that the Starliner astronauts have been in orbit for over 70 days (even though the original mission was targeting just about a week), it can be hard to keep all the facts straight. I found this short explainer from CNBC’s Michael Sheetz really handy — send it to your inquiring friends and anyone else who is confused about just what is happening way up on the International Space Station.

Boeing Starliner docked to ISS
Image Credits: NASA

This week in space history

Instead of looking back, I want to call your attention to some contemporary astronomy news: The next full moon will be a Blue Supermoon. “Blue” because it’s the third full moon in a season of four full moons, and “super” because the moon will be within 90% of its closest approach to Earth (that is, in the nearest 10% of the range of distances it reaches in its orbit). That means it’s going to be HUGE.

The full moon sets over Homestead National Historic Park in Nebraska.
Image Credits: National Park Service/Homestead

Robots can make jobs less meaningful for human colleagues

An automatic mass production line with robots and automated machines running with no humans in control (3D illustration).

Image Credits: Thamrongpat Theerathammakorn / Getty Images

Much has been (and will continue to be) written about automation’s impact on the jobs market. In the short term, many employers have complained of an inability to fill roles and retain workers, further accelerating robot adoption. The long-term impact these sorts of sweeping changes will have on the job market remains to be seen.

One aspect of the conversation that is oft neglected, however, is how human workers feel about their robotic colleagues. There’s a lot to be said for systems that augment or remove the more backbreaking aspects of blue-collar work. But could the technology also have a negative impact on worker morale? Both things can certainly be true at once.

The Brookings Institution this week issued results gleaned from several surveys conducted over the past decade and a half to evaluate the impact that robotics has on job “meaningfulness.” The think tank defines the admittedly abstract notion thus:

In exploring what makes work meaningful, we rely on self-determination theory. According to this theory, satisfying three innate psychological needs — competence, autonomy, and relatedness — is key for motivating workers and enabling them to experience purpose through their work.

Data was culled from worker surveys carried out in 14 industries across 20 countries in Europe, cross-referenced with robot deployment data issued by the International Federation of Robotics. Industries surveyed included automotive, chemical products, food and beverage, and metal production, among others.

The institute reports a negative impact on worker-perceived meaningfulness and autonomy levels.

“If robot adoption in the food and beverages industry were to increase to match that of the automotive industry,” Brookings notes, “we estimate a staggering 6.8% decrease in work meaningfulness and a 7.5% decrease in autonomy.” The autonomy aspect speaks to an ongoing concern over whether the implementation of robotics in industrial settings will make the roles carried out by their human counterparts more robotic as well. Of course, the counterpoint has often been made that these systems effectively remove many of the most repetitive aspects of these roles.

The institute goes on to suggest that these sorts of impacts are felt across roles and demographics. “We find that the negative consequences of robotization for work meaningfulness are the same, regardless of workers’ education level, skill level, or the tasks they perform,” the paper notes.

As for how to address this shift, the answer likely isn’t going to be simply saying no to automation. As long as robots have a positive impact on a corporation’s bottom line, adoption will continue at a rapidly increasing clip.

Brookings’ Milena Nikolova does offer a seemingly straightforward solution, writing, “If firms have mechanisms in place to ensure that humans and machines cooperate, rather than compete, for tasks, machines can help improve workers’ well-being.”

This is one of the defining pushes behind those automation firms touting collaborative robotics, rather than outright worker replacement. Pitting humans against their robotic counterparts will almost certainly be a losing battle.