Made by Google 2024: How to watch Google unveil the Pixel 9, a new foldable and more

Sundar Pichai, CEO of Google Inc. speaks during an event in New Delhi

Image Credits: SAJJAD HUSSAIN/AFP / Getty Images

Google’s been on a consumer hardware jag in August. On Tuesday, the company announced the long-awaited new Nest Thermostat, along with Google TV Streamer, which replaces the 11-year-old Chromecast line. Next Tuesday at Made by Google 2024, the software giant is refreshing the Pixel line with a slew of new smartphones.

Previous years’ Made by Google events have taken place on the East Coast in October. This one is happening at the company’s Mountain View headquarters, kicking off at 10 a.m. PT/1 p.m. ET on August 13.

As ever, the easiest way to watch is through Google’s own livestream, conveniently embedded below.

As always, the gang at TechCrunch will be bringing you the news as it breaks.

Along with announcing new home devices and an upgrade to Google Assistant, the company has teased two new mobile devices: the Pixel 9 Pro and the Pixel 9 Pro Fold. It also removed any doubt that its generative AI platform will take center stage, with the tagline, “A (foldable) phone built for the Gemini era.”

We should be seeing a lot more of Android 15 at the event. The mobile operating system should debut on the Pixel 9 line, along with an upgraded in-house processor. The latest version of the Pixel Watch and Pixel Buds Pro are also rumored for release.

As AI becomes standard, watch for these 4 DevSecOps trends

Image of a magnifying glass above balls to represent identifying bias in AI.

Image Credits: Hiroshi Watanabe / Getty Images

David DeSanto

Contributor

David DeSanto is the chief product officer at GitLab Inc., where he leads GitLab’s product division to define and execute GitLab’s product vision and roadmap. David is responsible for ensuring the company builds, ships, and supports the platform that reinforces GitLab’s leadership in the DevSecOps platform market.

AI’s role in software development is reaching a pivotal moment — one that will compel organizations and their DevSecOps leaders to be more proactive in advocating for effective and responsible AI utilization.

Simultaneously, developers and the wider DevSecOps community must prepare to address four global trends in AI: the increased use of AI in code testing, ongoing threats to IP ownership and privacy, a rise in AI bias, and — despite all of these challenges — an increased reliance on AI technologies. Successfully aligning with these trends will position organizations and DevSecOps teams for success. Ignoring them could stifle innovation or, worse, derail your business strategy.

From luxury to standard: Organizations will embrace AI across the board

Integrating AI will become standard, not a luxury, for products and services across every industry, with teams leveraging DevSecOps to build AI functionality alongside the software that will use it. Harnessing AI to drive innovation and deliver enhanced customer value will be critical to staying competitive in the AI-driven marketplace.

Based on my conversations with GitLab customers and my reading of industry trends, I expect that, with organizations pushing the boundaries of efficiency through AI adoption, more than two-thirds of businesses will embed AI capabilities within their offerings by the end of 2024. Organizations are evolving from experimenting with AI to becoming AI-centric.

To prepare, organizations must invest in revising software development governance and emphasizing continuous learning and adaptation in AI technologies. This will require a cultural and strategic shift. It demands rethinking business processes, product development, and customer engagement strategies. And it requires training — which DevSecOps teams say they want and need. In our latest Global DevSecOps Report, 81% of respondents said they would like more training on how to use AI effectively.

As AI becomes more sophisticated and integral to business operations, companies will need to navigate the ethical implications and societal impacts of their AI-driven solutions, ensuring that they contribute positively to their customers and communities.

AI will dominate code-testing workflows

The evolution of AI in DevSecOps is already transforming code testing, and the trend is expected to accelerate. GitLab’s research found that only 41% of DevSecOps teams currently use AI for automated test generation as part of software development, but that number is expected to reach 80% by the end of 2024 and approach 100% within two years.

As organizations integrate AI tools into their workflows, they are grappling with the challenge of aligning their current processes with the efficiency and scalability gains AI can provide. This shift promises a radical increase in productivity and accuracy — but it also demands significant adjustments to traditional testing roles and practices. Adapting to AI-powered workflows requires training DevSecOps teams in AI oversight and fine-tuning AI systems so that their integration into code testing enhances software products’ overall quality and reliability.
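As a concrete illustration of that workflow shape (generate, run, then route to a human reviewer), here is a minimal Python sketch. The `suggest_unit_tests` function is a hypothetical placeholder for whatever AI assistant a team uses, not GitLab’s or any vendor’s real API, and nothing is merged without review.

```python
# Hypothetical sketch: stage AI-drafted tests for human review, never auto-merge.
import subprocess
from pathlib import Path


def suggest_unit_tests(source_code: str) -> str:
    """Placeholder for an AI code assistant that returns pytest-style tests."""
    raise NotImplementedError("Wire this to your team's AI tooling.")


def propose_tests(module_path: Path, out_dir: Path) -> Path:
    """Generate candidate tests, run them once, and leave the results for review."""
    draft = suggest_unit_tests(module_path.read_text())
    test_file = out_dir / f"test_{module_path.stem}_ai_draft.py"
    test_file.write_text(draft)

    # Run the draft so the reviewer sees pass/fail output alongside the diff.
    result = subprocess.run(["pytest", str(test_file), "-q"],
                            capture_output=True, text=True)
    (out_dir / f"{test_file.stem}.report.txt").write_text(result.stdout + result.stderr)

    # The human-oversight step: the draft and its report go into normal code review.
    return test_file
```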

Additionally, this trend will redefine the role of quality assurance professionals, requiring them to evolve their skills to oversee and enhance AI-based testing systems. It’s impossible to overstate the importance of human oversight, as AI systems will require continuous monitoring and guidance to be highly effective.

AI’s threat to IP and privacy in software security will accelerate

The growing adoption of AI-powered code creation increases the risk of AI-introduced vulnerabilities and the chance of widespread IP leakage and data privacy breaches affecting software security, corporate confidentiality, and customer data protection.

To mitigate those risks, businesses must prioritize robust IP and privacy protections in their AI adoption strategies and ensure that AI is implemented with full transparency about how it’s being used. Implementing stringent data governance policies and employing advanced detection systems will be crucial to identifying and addressing AI-related risks. Fostering heightened awareness of these issues through employee training and encouraging a proactive risk management culture is vital to safeguarding IP and data privacy.

The security challenges of AI also underscore the ongoing need to implement DevSecOps practices throughout the software development life cycle, where security and privacy are not afterthoughts but are integral parts of the development process from the outset. In short, businesses must keep security at the forefront when adopting AI — similar to the shift left concept within DevSecOps — to ensure that innovations leveraging AI do not come at the cost of security and privacy.

Brace for a rise in AI bias before we see better days

While 2023 was AI’s breakout year, its rise put a spotlight on bias in algorithms. AI tools that rely on internet data for training inherit the full range of biases expressed across online content. This development poses a dual challenge: exacerbating existing biases and creating new ones that impact the fairness and impartiality of AI in DevSecOps.

To counteract pervasive bias, developers must focus on diversifying their training datasets, incorporating fairness metrics, and deploying bias-detection tools in AI models, as well as exploring AI models designed for specific use cases. One promising avenue is using AI feedback to evaluate AI models against a clear set of principles, or a “constitution,” that establishes firm guidelines about what AI will and won’t do. Establishing ethical guidelines and training interventions is crucial to ensuring unbiased AI outputs.
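To make “fairness metrics” concrete, here is a minimal Python sketch of one such metric, demographic parity difference (the gap in positive-prediction rates across groups), computed on made-up data; real bias audits track several metrics like this, and the group labels here are purely illustrative.

```python
# Illustrative sketch: demographic parity difference on hypothetical model output.
from collections import defaultdict


def demographic_parity_difference(predictions, groups):
    """Largest gap in positive-prediction rate between any two groups (0 = parity)."""
    positives = defaultdict(int)
    totals = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred == 1)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())


# Hypothetical predictions tagged with an invented group attribute.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(preds, groups))  # group a: 0.75, group b: 0.25 -> 0.5
```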

Organizations must establish robust data governance frameworks to ensure the quality and reliability of the data in their AI systems. AI systems are only as good as the data they process, and bad data can lead to inaccurate outputs and poor decisions.

Developers and the broader tech community should demand and facilitate the development of unbiased AI through constitutional AI or reinforcement learning with human feedback aimed at reducing bias. This requires a concerted effort across AI providers and users to ensure responsible AI development that prioritizes fairness and transparency.

Preparing for the AI revolution in DevSecOps

As organizations ramp up their shift toward AI-centric business models, it’s not just about staying competitive — it’s also about survival. Business leaders and DevSecOps teams will need to confront the anticipated challenges amplified by using AI — whether they be threats to privacy, trust in what AI produces, or issues of cultural resistance.

Collectively, these developments represent a new era in software development and security. Navigating these changes requires a comprehensive approach encompassing ethical AI development and use, vigilant security and governance measures, and a commitment to preserving privacy. The actions organizations and DevSecOps teams take now will set the course for the long-term future of AI in DevSecOps, ensuring its ethical, secure, and beneficial deployment.

Smart molluscs – yes, smart molluscs – could watch our waterways 24/7 for pollution

The molluSCAN-eye system

Image Credits: molluSCAN

If the clams could speak, what would they say? Surely we all ask ourselves this question every day. But a French startup is going further, allowing bivalves like clams, mussels and oysters to act as all-natural water quality inspectors. MolluSCAN was showing off its tech this year at CES 2024 in Las Vegas.

The company began as a research project some 15 years ago at the University of Bordeaux. CEO and co-founder Ludovic Quinault and his team were looking into monitoring the health of bivalves, a category of marine animals found all over the world in both fresh and salt water. As largely stationary filter feeders, they are quite in tune with their surroundings, and their habits are affected by things like temperature, pollution and so on.

Quinault found that a simple, non-invasive sensor attached to the clam or oyster’s shell can monitor everything from feeding to reproduction and stress responses like suddenly shutting or failing to open at the normal time. These in turn are excellent predictors of various qualities in the water and can act as an early warning system for problems like toxic substances. The mollusc doesn’t know whether it’s closing because of crude oil residue or an algae bloom, but it intuits that the water is unsafe for life and shuts up. In fact, Quinault has found that they are extremely sensitive to small changes that chemical analysis may not even pick up reliably.
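To illustrate the general idea only (molluSCAN’s actual models are not public), here is a hypothetical Python sketch of how valve-gape readings could be turned into an alert when many animals at a site close relative to their own baselines; the thresholds and data are invented.

```python
# Hypothetical early-warning sketch: flag a site when most animals close their shells
# relative to their own normal behaviour. Thresholds and readings are invented.
def flag_site(recent_gape, baseline_gape, closed_ratio_alert=0.7):
    """recent_gape/baseline_gape: {animal_id: mean opening, 0.0 (shut) .. 1.0 (open)}."""
    closed = [
        animal
        for animal, gape in recent_gape.items()
        if gape < 0.3 * baseline_gape[animal]  # well below this animal's own baseline
    ]
    ratio = len(closed) / len(recent_gape)
    return ratio >= closed_ratio_alert, closed


recent = {"oyster1": 0.05, "oyster2": 0.10, "oyster3": 0.55, "oyster4": 0.08}
baseline = {"oyster1": 0.60, "oyster2": 0.65, "oyster3": 0.62, "oyster4": 0.58}
alert, which_animals = flag_site(recent, baseline)
print(alert, which_animals)  # True: three of four animals are shut vs. their baselines
```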

That’s one signal among many that can be gleaned by monitoring bivalves, and after more than a decade of research, Quinault and his team are aiming to commercialize the technology, having formed molluSCAN in March of 2023.

Water quality is of course very important to governments, park rangers and many industries, but sampling and testing it is rarely convenient. It’s usually impractical to put testing apparatus at multiple places in a body of water, so people typically have to go out and collect samples, then bring them to a central location to be analyzed.

Image Credits: molluSCAN

The molluSCAN-eye system won’t replace traditional water monitoring, but because the animals are a living part of the water ecosystem, their health and the health of their surroundings are closely linked. So oysters doing well in one branch of a river but not another, or mussels suddenly snapping shut in some places after a spill — both of these are complementary signals to ordinary testing and could also help direct resources to places where they are particularly needed. The system that monitors clusters of animals is totally self-contained and can operate without any maintenance for more than three years, he said.

Since its debut last spring, molluSCAN has landed two regular customers and is in talks with three more, and it also has over a dozen science-focused installations around Europe. Quinault is hoping that municipalities and natural resource authorities will shell out for the tech as a totally natural, harmless and low-touch way to watch their waterways.

Read more about CES 2024 on TechCrunch

Apple's fix for the Apple Watch Series 9 and Ultra 2 sales ban could be disabling a useless feature

Apple Watch Series 9, blue

Image Credits: Darrell Etherington/TechCrunch

Apple is readying a more permanent fix for the ITC ruling that temporarily blocked sales of its Apple Watch Ultra 2 and Series 9 models in the U.S. (both are now back on sale — though again, potentially temporarily). The proposed solution, spotted in a brief legal filing (via 9to5Mac) by the lawyers representing Masimo, Apple’s opponent in the dispute, involves entirely disabling the pulse oximetry features via software on these models going forward — a change that should honestly have nil impact on anyone who ends up buying one of these in the future.

Note: The Apple Watch Ultra 2 and Series 9 continue to be offered for sale with the pulse oximeter feature included, and that will remain the case at least until the U.S. Court of Appeals for the Federal Circuit rules on Apple’s filing for a stay that covers its entire appeal period — and could apply afterward, too, if Apple wins the appeal. This proposed software fix seems to be something discussed in the event that none of that goes Apple’s way.

Even if you have an Apple Watch model that includes the pulse oximeter feature, which was introduced way back in 2020, you’d be forgiven for not knowing it was there. The feature ostensibly provides a reading of your blood oxygen levels, though anyone who’s had much experience with Apple’s implementation of the sensor knows that it’s hardly accurate, and not something that you can really use for deriving any genuinely useful insights about your health.

Pulse oximeters, including consumer blood oxygen monitors (typically ones that clip to the end of your fingertip, which you may have encountered in a drugstore or medical setting), have been in use for a long time and can indeed provide crucial, even life-saving information about the level of oxygen in your blood. If it dips dangerously low, that’s a good indicator that something is seriously wrong and that you need to seek immediate help. Blood oxygen levels arguably got their breakout moment in the general public consciousness as a key indicator of when COVID cases went from bad to worse, requiring emergency medical intervention.

To be fair to Apple, it has never marketed the blood oxygen detection features of the Apple Watch as designed for any “medical” use, and instead bills it as strictly one of the many “wellness” features of the Apple Watch. And it’s also entirely plausible that by watching your measurements over time for upward or downward trends, you could combine that info with other wellness signals to be made aware of some change to your well-being that is impacting you negatively. But in general, the Apple Watch’s pulse oximetry feature is a gilding of the lily that definitely isn’t worth suffering through a U.S.-wide device sales ban for, or even suffering a modest patent-licensing agreement with Masimo over.
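As a hedged illustration of that trend-watching idea (with made-up numbers, and no connection to Apple’s actual algorithms or any medical guidance), here is a short Python sketch that fits a slope to a week of readings and flags a sustained decline:

```python
# Illustrative only: flag a downward trend across recent blood-oxygen readings.
def trend_slope(values):
    """Least-squares slope of values against their index (change per reading)."""
    n = len(values)
    mean_x = (n - 1) / 2
    mean_y = sum(values) / n
    cov = sum((i - mean_x) * (v - mean_y) for i, v in enumerate(values))
    var = sum((i - mean_x) ** 2 for i in range(n))
    return cov / var


week_of_spo2 = [97, 97, 96, 96, 95, 94, 94]  # hypothetical daily averages, in percent
slope = trend_slope(week_of_spo2)
if slope < -0.3:  # arbitrary illustrative threshold
    print(f"Downward trend of {slope:.2f} points/day - worth checking other wellness signals")
```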

Yes, users will have one less graph in their Health app dashboards, but it’s one that was hardly useful in isolation anyways — especially now that the pandemic-fueled fascination with blood oxygen levels specifically has mostly subsided. Apple, unlike Samsung, also isn’t shy about rolling back features once they prove unpopular with users or questionably useful, so this isn’t even all that unusual — except that Masimo forced its hand, of course.

Watch SpaceX launch Axiom Space's third private astronaut mission live

Axiom Space Ax-3 mission crew

Image Credits: Axiom Space

Update: The launch is now scheduled for January 19. The below text has been updated to reflect the new launch date.

Axiom Space is gearing up to launch its third fully private astronaut mission to the International Space Station. Here’s all you need to know.

The crew of four will take off from NASA’s Kennedy Space Center on board a SpaceX Falcon 9 at 4:49 PM EST on Friday, January 19. 

The crew is notable for being so international: it includes former NASA astronaut and Axiom employee Michael López-Alegria; Italian Air Force Colonel Walter Villadei; Alper Gezeravci, Turkey’s first astronaut; and Marcus Wandt, an astronaut with the European Space Agency. They will be traveling in a Crew Dragon spacecraft, the same capsule built by SpaceX that ferries NASA astronauts to and from the ISS.

It will be López-Alegria’s sixth spaceflight and his second time traveling to the station with Axiom. Per NASA rules, all private missions to the ISS must be led by a former NASA astronaut. Villadei previously flew on Virgin Galactic’s first commercial suborbital flight, Galactic 01, last summer.

The Dragon capsule is scheduled to autonomously dock with the ISS on Saturday. The crew will stay on the station for 14 days, where they’ll conduct more than 30 scientific experiments and demonstrations. Axiom’s fourth mission is scheduled for as soon as October of this year.

The mission, called Ax-3, was originally scheduled for November 2023, but slipped due to weather and other scheduling issues with SpaceX. Houston-based Axiom’s first private mission launched in April 2022 and the second followed in May 2023. 

But Axiom is not stopping at private astronaut missions — as if that wasn’t ambitious enough. The company aims to eventually attach commercial modules that it owns and operates to the ISS; these would detach by the end of the decade to become a free-flying Axiom Space Station. The first section, which is being developed by European aerospace manufacturer Thales Alenia Space, is scheduled to launch in 2026. While there are other private space station projects under development, notably by Blue Origin and Voyager Space, Axiom’s is the only one that will connect with the ISS before it is decommissioned in 2030.

NBCUniversal's Peacock will let you watch 4 livestreams at once for 2024 Paris Olympics

Image Credits: Peacock

Today, during NBCUniversal’s annual technology conference, One24, the company revealed a slew of features coming to its streaming service Peacock ahead of the 2024 Paris Olympics in July.

The most notable feature to launch on Peacock is multiview, which allows subscribers to view up to four matches at once. Next to picture-in-picture mode, many sports fans agree that multiview has been one of the greatest advancements in sports streaming tech in years, since it offers a more convenient way to follow multiple games simultaneously instead of constantly switching streams.

The company also announced a new interactive “Live Actions” button to let fans choose which events they want to follow, a new way to search for specific athletes, and other features designed to help subscribers navigate over 5,000 hours of live coverage for the upcoming Summer Olympics.

Some subscribers have complained about the way Peacock has broadcast the Olympics in the past, so it’s critical that the streamer provides an adequate viewing experience this year. For instance, during the 2022 Winter Olympic Games, Peacock made the questionable choice of revealing some of the winners in its Highlights rail. This will be the first time the service has livestreamed all Summer Olympic events in full, so we bet Peacock is feeling the pressure to get it right.

Peacock reported 31 million subscribers as of the fourth quarter of 2023.

Image Credits: Peacock

Two multiview options

Although YouTube TV and Apple both offer multiview features, Peacock told TechCrunch during a briefing on Tuesday that it’s the first stand-alone streaming service to offer web support for multiview. Google-owned YouTube TV rolled out a multiview feature last year that’s only available on smart TVs. In May 2023, Apple began offering multiview on the Apple TV 4K for select sports content, such as Major League Soccer and Major League Baseball streams.

Peacock also hopes to stand out among its competitors by offering two multiview options: “Discovery Multiview,” which gives fans a four-screen overview of the live events currently happening, and a more traditional multiview experience where viewers can choose which four matches they want to watch (this second option is only available for Olympic sports with multiple simultaneous streams, such as soccer, wrestling and track and field.) Both options are customizable, meaning viewers can move around the screens and seamlessly switch between audio feeds.

Since up to 40 Olympic events will be happening simultaneously, the unique offering helps viewers determine which four events are the most important. Plus, the feature will showcase tags and descriptions for each match to inform fans which ones have a first-time Olympian or defending champion or if there’s an elimination risk.

“With up to 40 events happening at the same time, we want to avoid users having decision paralysis.… [Peacock Discovery Multiview is] the perfect option for fans who want to lean back and let Peacock be their guide to the best of the Olympic Games,” Peacock SVP of Product John Jelley told TechCrunch.

Peacock’s multiview feature is available on the web, as well as on smart TVs, streaming devices, and tablets. However, the company explained to us that it isn’t rolling out multiview to mobile devices because the smaller screen size makes it difficult to navigate between events.

“It ultimately comes down to the screen size, and we’ve found that multiview on mobile doesn’t deliver the best viewing experience,” Jelley said. “For users who want to watch on the go, multiview is available on tablets, and of course across all other platforms.”

YouTube TV recently confirmed to 9to5Google that it’s launching multiview support on iOS devices, but the feature will likely be less advanced than the TV version.

Peacock will begin testing multiview during select events this spring.

Image Credits: Peacock

In addition to multiview, the streaming service’s “Live Actions” will prompt fans to select a “Keep Watching” button if they want to continue viewing live coverage or switch to whip-around coverage. They can also add events to their “My Stuff” list to watch later.

A new “Search by Star Athlete” feature allows viewers to narrow down their search to their favorite athletes. Previously, they could only search by sport, event, team and country.

Peacock is also expanding its “Catch Up with Key Plays” feature to basketball, golf and soccer. The feature lets fans watch highlights of a game to quickly catch up without having to exit out of the main screen. It initially launched as a feature for Premier League games.

The company noted that multiview and Live Actions will extend to other live sporting events after the Olympics.