Apple iPad event 2024: Watch Apple unveil new iPads right here

Apple CEO Tim Cook introduces the latest additions to the iPad lineup during a special event at Apple Park.

Image Credits: Apple

We’re still well over a month out from WWDC, but Apple went ahead and snuck in another event. On Tuesday, May 7 at 7 a.m. PT/10 a.m. ET, the company is set to unveil the latest additions to the iPad line. According to the rumor mill, that list includes a new iPad Pro, iPad Air, Apple Pencil and a keyboard case.

More surprisingly, the event may also see the launch of the new M4 chip, a little over six months after the company unveiled three new M3 chips in one fell swoop. Why the quick silicon refresh? Well, for starters, word on the street is that Apple launched the M3 later than expected (likely owing to supply chain issues), forcing the company to launch all three chips at the same event.

Image Credits: Apple

Couple that with the fact that Microsoft is rumored to be launching its own ARM-based silicon at Build at the end of May, and you start to understand why the company opted not to wait. An announcement may be even more pressing, given that the Microsoft/ARM chips are said to offer “industry-leading performance” — an apparent shot across Apple’s bow. Could a new chip also mean new Macs? That would be a short refresh cycle for the current crop, but it’s certainly not out of the realm of possibility.

What does seem certain, however, is a new iPad Pro with an OLED display, a 12.9-inch iPad Air and new gestures for the Apple Pencil. Also, expect plenty of AI chatter. It’s 2024, after all. You can watch along live at the link below, and stay tuned to TechCrunch for news as it breaks.

OpenAI's ChatGPT announcement: Watch the GPT-4o reveal and demo here

Image Credits: TechCrunch

OpenAI’s livestreamed GPT announcement event happened at 10 a.m. PT Monday, but you can still catch up on the reveals.

The company described the event as “a chance to demo some ChatGPT and GPT-4 updates.” CEO Sam Altman, meanwhile, promoted the event with the message, “not gpt-5, not a search engine, but we’ve been hard at work on some new stuff we think people will love! feels like magic to me.”

As it turned out, the announcement was a new model called GPT-4o (the “o” stands for “omni”), which offers greater responsiveness to voice prompts, as well as better vision capabilities.

“GPT-4o reasons across voice, text and vision,” OpenAI CTO Mira Murati said during a keynote presentation at OpenAI’s offices in San Francisco. “And this is incredibly important, because we’re looking at the future of interaction between ourselves and machines.”

OpenAI also followed up on Monday’s event by showcasing a number of additional demos of GPT-4o’s capabilities on its YouTube channel, from improving visual accessibility through Be My Eyes to harmonizing with itself and handling real-time translation.

You can watch a replay on the OpenAI website.
