Google releases new 'open' AI models with a focus on safety

The Google Inc. logo

Image Credits: David Paul Morris/Bloomberg / Getty Images

Google has released a trio of new, “open” generative AI models that it’s calling “safer,” “smaller” and “more transparent” than most — a bold claim, to be sure.

They’re additions to Google’s Gemma 2 family of generative models, which debuted back in May. The new models, Gemma 2 2B, ShieldGemma and Gemma Scope, are designed for slightly different applications and use cases, but all share a safety bent.

Google’s Gemma series of models is different from its Gemini models in that Google doesn’t make the source code available for Gemini, which is used by Google’s own products as well as being available to developers. Rather, Gemma is Google’s effort to foster goodwill within the developer community, much like Meta is attempting to do with Llama.

Gemma 2 2B is a lightweight model for generating and analyzing text that can run on a range of hardware, including laptops and edge devices. It’s licensed for certain research and commercial applications and can be downloaded from sources such as Google’s Vertex AI model library, the data science platform Kaggle and Google’s AI Studio toolkit.

As for ShieldGemma, it’s a collection of “safety classifiers” that attempt to detect toxicity like hate speech, harassment and sexually explicit content. Built on top of Gemma 2, ShieldGemma can be used to filter prompts to a generative model as well as content that the model generates.

Lastly, Gemma Scope allows developers to “zoom in” on specific points within a Gemma 2 model and make its inner workings more interpretable. Here’s how Google describes it in a blog post: “[Gemma Scope is made up of] specialized neural networks that help us unpack the dense, complex information processed by Gemma 2, expanding it into a form that’s easier to analyze and understand. By studying these expanded views, researchers can gain valuable insights into how Gemma 2 identifies patterns, processes information and ultimately makes predictions.”

The release of the new Gemma 2 models comes shortly after the U.S. Commerce Department endorsed open AI models in a preliminary report. Open models broaden generative AI’s availability to smaller companies, researchers, nonprofits and individual developers, the report said, while also highlighting the need for capabilities to monitor such models for potential risks.

After nine years, Google's Nest Learning Thermostat gets an AI makeover

Clock Face Silver Nest Learning Thermostat

Image Credits: Google Nest

After nine long years, Google is finally refreshing the device that gave Nest its name. The company on Tuesday announced the launch of the Nest Learning Thermostat 4: 13 years after the release of the original, nearly a decade after the Learning Thermostat 3 and just ahead of next week’s Made by Google 2024 event.

Google hopes this release will usher in a new era for its smart home play. The last several years saw a marked slowdown from the company, leading many to believe the category was all but dead in the water. The Nest line’s stasis coincided with a period of relative quiet for Amazon’s Echo line.

It’s no coincidence that the new Learning Thermostat arrives as Google is amping up work on its generative AI model, Gemini. While the system appears to replace Google Assistant on Pixel and other Android devices, the branding is sticking around for the smart home line — albeit powered by many of Google’s new LLM-based models.

Gemini will effectively boost Assistant’s conversational capabilities. Generative AI can power the kinds of natural language interactions that Google and Amazon have spent more than a decade trying to achieve.

Google notes in a release, “We’re thrilled to unveil how we’re using Gemini models to make our devices smarter and simpler to use than ever, starting with cameras and home automation. We’re also using Gemini models to make Google Assistant much more natural and helpful on your Nest speakers and displays.”

Image Credits: Google Nest

The fourth-generation Learning Thermostat refines the line’s familiar design with thinner and sleeker hardware. The always-on display is more customizable, launching with a choice of four faces that offer up more contextual information once someone comes closer. Each features a combination of time, temperature and air quality.

Google opted to keep touch functionality off the display, instead maintaining the familiar rotating dial hardware. The screen itself is 60% larger than the gen 3’s, with an edge-to-edge design that finally ditches the thick black bezel.

In addition to a more conversational Assistant, new AI models are being leveraged for what Google calls “micro-adjustments,” based on the user’s habits. That’s the whole “learning” part of the product name. The refinements also utilize outside temperature to determine adjustments, all in a bid to save on energy consumption.

The $280 smart thermostat comes with an additional Temperature Sensor in-box. The pebble-like piece of hardware can be placed in any key spot in the home to give the system a better overall notion of average temperature. Additional sensors can be purchased at $40 apiece or $99 for a three-pack.

The third-gen Learning Thermostat will remain on shelves until the stock is fully depleted. The more budget-focused Thermostat E, which is currently priced at $130, is staying put.

Preorders open today for the new Nest Learning Thermostat. It hits shelves August 20.

Breaking up Google would offer a chance to remodel the web

a young sundar pichai

Image Credits: AFP

Just for a minute, as we digest the information that Google has been found to operate an illegal monopoly, can you imagine a web without Google? An internet without Google Search, Chrome, Gmail, Maps and so on would — very obviously — be a different place. But would such a change have implications for utility — or something else? Something bigger?

Alternatives to Google’s popular freemium products exist. You can use DuckDuckGo for search, Brave to browse the web and Proton Mail for webmail, to name a few of the non-Google options for key digital tools out there. There’s even a web beta of Apple Maps these days. Or — hey — why not switch straight to the community mapping open data project OpenStreetMap? All of these services can be accessed for free, too.

What would be different in a web without Google is much bigger than mere utility.

The real issue here is about the business model underpinning service delivery. And the opportunity, if we can imagine for a minute a web that’s not dominated by Google, for different models of service delivery — ones that prioritize the interests of web users and the public infosphere — to achieve scale and thrive.

Such alternatives do already exist, as the list above shows. But on a web dominated by Google’s model of tracking-based advertising it’s extremely hard for pro-user approaches to thrive. That’s the real harm flowing from Google’s monopoly.

Google likes to paint its company “mission” as “organizing the world’s information and making it universally accessible and useful,” as its marketing puts it. But this grandiose claim has always been a fig-leaf atop a business model that makes vast amounts of money by organizing data — most especially information about people — so it can make money from microtargeted advertising.

Tracking web users’ activity feeds Google’s ability to profile the online population and profit from services related to selling highly targeted advertising. And it makes truly staggering amounts of money from this business: Alphabet, the brand Google devised almost a decade ago to pop a corporate wrapper around Google, reported full-year revenue of $307.39 billion for 2023, the vast majority of which is earned from ads.

Whether from pay-per-click ads displayed on Google search or YouTube; or through ads Google displays elsewhere on publishers’ websites; or other programmatic ad services it offers, including via its AdX exchange; or its mobile advertising platform for app developers; or through Google’s ad campaign management, marketing and analytics tools, that’s all revenue flowing to Google.

The simple truth is Google is making your information “useful” so it can feed Google’s bottom line because it’s in the advertising business. Put another way, its “mission” is chain-linked to a business model based on tracking and profiling web users. Organizing the world’s information doesn’t sound so benign now, does it?

Consider how Google’s incentives to structure data to mesh with its commercial priorities extend to making user-hostile changes to how it displays information. See, for example, the endless dark pattern design tricks it’s used to make it harder for users of Google Search to distinguish between organic search results and ads.

Every confused user clicking an ad thinking it’s genuine information drives Google’s revenue engine. Useful to Google, obviously, but frustrating (at best) to web users trying to find a particular piece of information (tl;dr: your time being wasted is precious to Google’s profits).

Consider, also, a more recent example: Just last month Google was accused by Italy’s competition and consumer watchdog of “misleading and aggressive” commercial practices, including providing users with “inadequate, incomplete and misleading information” (emphasis ours) about choices they should be able to exercise — thanks to a variety of EU laws — over the company’s ability to track and profile them, such as by denying it the ability to link their activity across different Google-owned accounts.

Organizing this type of “information” — about the legal rights European users have to choose not to be tracked and profiled for Google’s profit — and making this info about how you can avoid being tracked “universally accessible and useful” does not appear to be a priority for Google, the adtech giant. Quite the opposite: Google stands accused of impeding users’ legal right to information that could help them protect themselves from Google’s surveillance. Oh.

Google’s market power

Google’s market power is linked to its ownership of so much information about user intention which flows from its dominance of online search.

Its market share of search in Europe is consistently above 90%. In the U.S., Google tends to hold a slightly lower but still dominant share. And — critically — on mobile it’s been able to ensure its search engine (or, from an ads perspective, its user intention data funnel) remains the default on Apple’s rival mobile platform because it pays the iPhone maker billions for the placement every year.

A New York Times report last fall suggested Google pays Apple $18 billion a year. During the antitrust trial Google also disclosed it shares a whopping 36% — more than a third! — of search ad revenue from Safari with Apple.

This is a core grievance of the U.S. antitrust ruling finding Google operates an illegal monopoly, as we reported earlier. By paying Apple to be the default search on iOS, the judge decided Google had blocked competitors from being able to build up their own search engines to a scale that would enable them to access enough data and reach to compete with Google Search.

Such placement is important to Google because Apple’s iOS holds a dominant share of the mobile device market in the U.S. versus Google’s own Android platform (where Google typically gets to set all its own services as the default). Add to that, iOS users are generally more valuable targets for advertisers — so being able to keep accessing information about iPhone users’ intentions is strategically important to Google’s ad business.

No surprise, then, that Google is willing to fork over such a major chunk of revenue to Apple so it can keep squatting on iOS as the default search choice. But buying this spot is also about shielding its tracking-based business model.

Because Google pays Apple so much, Apple has little incentive to develop its own search engine to rival Google’s — meaning web users have missed out on the chance to try a web search product made in Cupertino. Given Apple puts such a premium on marketing privacy as a core brand value, you could at least imagine an Apple-designed search engine would do things differently and wouldn’t have to concern itself with perpetuating the mass tracking and profiling of web users as Google Search does.

It’s true Apple does have an advertising business of its own. But the device maker is not, as Google is, also the owner and operator of core adtech infrastructure that’s been used to bake tracking and profiling into the mainstream web for decades.

Add to that, if other search engines had the chance to gain more users because Google didn’t own the default iOS placement, there would be an opportunity for pro-privacy competitors, such as DuckDuckGo, to get in front of more humans and build greater momentum for alternative non-tracking-based business models.

Instead, we have a web that’s locked to tracking as the default because it’s in Google’s business interests.

Google’s ownership of Chrome gives it another key piece of infrastructure. Google’s browser holds a majority share of the market worldwide (currently around 65% per Statista). Its Chromium browser engine also underpins multiple rival browsers — such as Microsoft’s Edge browser, for example — meaning even lots of rival browsers to Google’s Chrome still use an engine that’s developed by Google. And the decisions it makes about browser infrastructure determine the business models that can fly.

In recent years, Google has been working on reformulating its adtech stack under a project it dubbed “Privacy Sandbox.” The effort is intended to shift the current adtech model that Chrome supports from cookie-based microtargeting of web users (i.e., individual-level tracking and profiling) to a new form of browser-level, interest-based targeting that Google claims would be less bad for privacy.

We can debate whether Privacy Sandbox would actually be a positive evolution of the tracking ads business model — the technical solution Google has devised may, technically, be less harmful to individual privacy, if it ends the mass insecure sharing of data about web users that currently takes place via real-time programmatic ad auctions. But the alternative infrastructure it’s devised is still designed to allow targeted manipulation of web users at scale — just based on organizing browser users’ into interest-based buckets for targeting. Regardless, one thing is crystal clear: It’s Google’s dominance that’s driving decisions about the future of web business models.

Other mainstream browsers have already blocked tracking cookies. Google hasn’t yet, not only because of its commercial interests over the years but also because its browser is dominant. That means all sorts of other players (publishers, advertisers, smaller adtechs etc.) are attached to the tracking data flows involved — dependent on Google’s infrastructure continuing to allow this spice through. This is why Google’s Privacy Sandbox has been closely supervised by regulators in Europe.

Principally, the U.K.’s Competition and Markets Authority (CMA) stepped in. In early 2022, it accepted a series of commitments on how Google would undertake the planned migration from tracking-cookie-based adtech to the reformulated interest-based targeting alternative, following complaints that the end of support for tracking cookies would be harmful to online publishers and advertisers reliant on the tracking ads business model.

What’s happened as a result of this close regulatory scrutiny led by a competition authority? Google’s timeline to deprecate cookies got delayed. And then, just last month, it announced it was abandoning the move — saying it was instead proposing that regulators accept an alternative whereby Chrome users would be shown some form of a choice screen. (Presumably this would let them decide whether to accept cookie-based tracking or choose Google’s interest-based alternative but Google hasn’t shared further details yet.)

Google’s self-interested approach to displaying information might be one reason not to trust the design of any such consent pop-up it devised. But the wider point here is that Google’s dominance of web infrastructure is so entrenched — the company’s model is so utterly baked into the mainstream web — that even Google can’t simply make a change that might allow web users slightly more privacy. Because in flicking such levers, the knock-on impact on other businesses that are dependent on its adtech infrastructure risks being a competition harm in itself.

An alternative approach

If there was ever a definition of a company that got too big — so big it basically owns and operates the web — then surely it’s Google.

We can dream what a web without Google would look like. But it’s not easy to imagine, given how thoroughly it’s ingrained in web infrastructure. Not so much Mountain View as the whole mountain.

Writing in the wake of the Google antitrust decision, Matt Stoller, author of the antitrust-focused newsletter Big, has a go at imagining a post-Google web in the latest edition of his publication.

“I think there’s a vision tucked in an April speech by Federal Trade Commission consumer protection chief Sam Levine on how the internet didn’t have to become the cesspool that it is today,” Stoller writes. “He sketched out what the internet could become if well-regulated, a place where we have zones of privacy, where not everything operates like a casino, and where AI works for us. This [Google antitrust] case brings us a step closer to Levine’s vision, because it means that people who want to build better safer products now have the chance to compete.”

I think you can also see glimpses of the better web that’s possible in some of the great alternative products of our age. The private messaging provided by Signal, for example. Or the strongly encrypted email, calendar, collaborative documents and other privacy-safe productivity tools being developed by Proton. Though it’s notable that both have had to be structured as nonprofit foundations in a bid to ensure they can keep providing free access to pro-user products that don’t generate revenue by data-mining their users.

In an age of monopoly power driving wall-to-wall digital surveillance that unpleasant reality remains the mainstream web rule.

“I believe our digital economy can get better,” wrote Levine. “Not because our tech giants will voluntarily change their ways, or because markets will magically fix themselves. But because, at long last, there is momentum across government — state and federal, Republicans and Democrats — to push back against unchecked surveillance.”

The decision Monday by Judge Amit P. Mehta of the U.S. District Court for the District of Columbia to find Google a monopolist could be the first brick ripped out of the surveillance wall. If Google’s appeal fails, and remedies are imposed — just imagine! — a corporate break-up that forces the fig-leaf Alphabet to divest key Google infrastructure. Such an outcome could finally upend Google’s decades-long grip on web data flows and reboot the default model, setting this place free for users, startups and communities to reimagine and rebuild anew.

Google says it's fixed Gemini's people-generating feature

Gemini stage presentation at Made by Google 24

Image Credits: Maxwell Zeff

Back in February, Google paused its AI-powered chatbot Gemini’s ability to generate images of people after users complained of historical inaccuracies. Told to depict “a Roman legion,” for example, Gemini would show an anachronistic group of racially diverse soldiers while rendering “Zulu warriors” as stereotypically Black.

Google CEO Sundar Pichai apologized, and Demis Hassabis, the co-founder of Google’s AI research division DeepMind, said that a fix should arrive “in very short order” — within the next couple of weeks. It ended up taking much, much longer than that (despite some Googlers pulling 120-hour workweeks!). But in the coming days, Gemini will once again be able to create pics showing people.

Well… sort of.

Only certain users — specifically those signed up for one of Google’s paid Gemini plans, Gemini Advanced, Business or Enterprise — will regain Gemini’s people-generating feature as part of an early access, English-language-only test.

Google wouldn’t say when the test will expand to the free Gemini tier and other languages.

“Gemini Advanced gives our users priority access to our latest features,” a Google spokesperson told TechCrunch. “This helps us gather valuable feedback while delivering a highly anticipated feature first to our premium subscribers.”

So what fixes did Google implement for people generation? According to the company, Imagen 3, the latest image-generating model built into Gemini, contains mitigations to make the people images Gemini produces more “fair.” For example, Imagen 3 was trained on AI-generated captions designed to “improve the variety and diversity of concepts associated with images in [its] training data,” according to a technical paper shared with TechCrunch. And the model’s training data was filtered for “safety,” plus “review[ed] … with consideration to fairness issues,” claims Google.

We asked for more details about Imagen 3’s training data, but the spokesperson would only say that the model was trained on “a large dataset comprising images, text and associated annotations.”

“We’ve significantly reduced the potential for undesirable responses through extensive internal and external red-teaming testing, collaborating with independent experts to ensure ongoing improvement,” the spokesperson continued. “Our focus has been on rigorously testing people generation before turning it back on.”

Imagen 3 and Gems

In a spot of better news, all Gemini users will get Imagen 3 within the week — minus people generation for those not subscribed to the premium Gemini tiers.

Google says that Imagen 3 can more accurately understand the text prompts that it translates into images versus its predecessor, Imagen 2, and is more “creative and detailed” in its generations. In addition, the model produces fewer artifacts and errors, Google claims, and is the best Imagen model yet for rendering text.

Google Imagen 3
A sample from Google’s Imagen 3.
Image Credits: Google

To allay concerns about the potential for deepfakes, Imagen 3 will use SynthID, an approach developed by DeepMind to apply invisible, cryptographic watermarks to various forms of AI-originated media. Google previously announced Imagen 3 would use SynthID, so this doesn’t come as much surprise. But I’ll note that the contrast between how Google’s treating image generation in Gemini and how it’s treating it in other products, like its Pixel Studio, is a bit curious.

Google Imagen 3
Another sample from Imagen 3.
Image Credits: Google

Alongside Imagen 3, Google’s rolling out Gems for Gemini — albeit only for Gemini Advanced, Business and Enterprise users. Like OpenAI’s GPTs, Gems are custom-tailored versions of Gemini that can act as “experts” on particular topics (e.g. vegetarian cooking).

Here’s how Google describes them in a blog post: “With Gems, you can create a team of experts to help you think through a challenging project, brainstorm ideas for an upcoming event, or write the perfect caption for a social media post. Your Gem can also remember a detailed set of instructions to help you save time on tedious, repetitive, or difficult tasks.”

To create a Gem, users write instructions, give it a name and they’re off to the races.

Gems are available on desktop and mobile in 150 countries and “most languages,” Google says (but not supported in Gemini Live just yet). There are several examples at launch, including a “learning coach,” a “career guide,” a “brainstormer” and a “coding partner.”

Gemini Gems
Image Credits: Google

We asked Google if it had any plans for ways to let users publish and use other users’ Gems, similar to GPTs on OpenAI’s GPT Store. The answer was “no,” basically.

“Right now, we’re focused on learning how people will use Gems for creativity and productivity,” the spokesperson said. “Nothing further to share at this time.”

Google DeepMind develops a ‘solidly amateur’ table tennis robot

Image Credits: Google DeepMind Robotics

Sports have long served as an important test for robots. The best-known example of the phenomenon may be the annual RoboCup soccer competition, which dates back to the mid-1990s. Table tennis has played a key role in benchmarking robot arms since a decade before that. The sport requires speed, responsiveness and strategy, among other things.

In a newly published paper titled “Achieving Human Level Competitive Robot Table Tennis,” Google’s DeepMind Robotics team is showcasing its own work on the game. The researchers have effectively developed a “solidly amateur human-level player” when pitted against a human opponent.

During testing, the table tennis bot was able to beat all of the beginner-level players it faced. With intermediate players, the robot won 55% of matches. It’s not ready to take on pros, however. The robot lost every time it faced an advanced player. All told, the system won 45% of the 29 games it played.

“This is the first robot agent capable of playing a sport with humans at human level and represents a milestone in robot learning and control,” the paper claims. “However, it is also only a small step towards a long-standing goal in robotics of achieving human level performance on many useful real world skills. A lot of work remains in order to consistently achieve human-level performance on single tasks, and then beyond, in building generalist robots that are capable of performing many useful tasks, skillfully and safely interacting with humans in the real world.”

The system’s biggest shortcoming is its inability to react to fast balls. DeepMind suggests the key reasons for this are system latency, mandatory resets between shots and a lack of useful data.

Image Credits: Google DeepMind Robotics

“To address the latency constraints that hinder the robot’s reaction time to fast balls, we propose investigating advanced control algorithms and hardware optimizations,” the researchers note. “These could include exploring predictive models to anticipate ball trajectories or implementing faster communication protocols between the robot’s sensors and actuators.”

Other exploitable weaknesses include high and low balls, backhand shots and reading the spin on an incoming ball.

As far as how such research could affect robotics beyond the very limited usefulness of table tennis, DeepMind cites policy architecture, its use of simulation to operate in real games, and its ability to adapt its strategy in real time.

How to ask Google to remove deepfake porn results from Google Search

Shot of a group of shocked-looking young men staring at a monitor in a dark room

Image Credits: Getty Images

The internet is full of deepfakes — and most of them are nudes.

According to a report from Home Security Heroes, deepfake porn makes up 98% of all deepfake videos online. Thanks to easy-to-use and freely available generative AI tools, the number of deepfakes online — many of which aren’t consensual — skyrocketed 550% from 2019 to 2023.

While laws against nonconsensual deepfakes are lagging behind, at least in the U.S., it’s becoming a little bit easier to get deepfakes removed, thanks to new tools in Google Search.

Google recently introduced changes to Search to combat deepfake porn, including adjustments to the Search ranking algorithm designed to lower deepfake content in searches. The company also rolled out an expedited way to process requests for removal of nonconsensual deepfake porn results from Search.

Here’s how to use it.

Requesting a removal

The easiest way to request that a nonconsensual deepfake porn result — a webpage, image or video — be removed from Google Search is to use this web form. Note that there’s a separate form for child sexual abuse imagery, and the target content has to meet Google’s criteria for removal, as follows:

It’s nude, intimate, or sexually explicit (for example, images or videos of you) and is distributed without permission; OR
It’s fake or falsely depicts you as nude or in a sexually explicit situation; OR
It incorrectly associates you or your name with sex work.

Click the “Content contains nudity or sexual material” option, then proceed to the next page.

Google Search deepfakes removal
Image Credits: Google

At this stage, select “Content falsely portrays me in a sexual act, or in an intimate state. (This is sometimes known as a ‘deep fake’ or ‘fake pornography.’):”

Google Search deepfakes removal
Image Credits: Google

At the final page in the form, after entering your name, country of residence and contact email, you’ll have to indicate whether it’s you or someone else depicted in the deepfake content to be removed. Google allows others to remove content on someone’s behalf, but only if that person is an “authorized representative” who explains how they have that authority.

Google Search deepfakes removal
Image Credits: Google

Next is the content information section. Here, you’ll need to provide the URLs to the deepfake results to be removed (up to a maximum of 1,000), the URLs to the Google Search results where the content appears (again, up to a maximum of 1,000) and search terms that return the deepfakes. Lastly, you’ll have to upload one or more screenshots of the content you’re reporting and any additional info that might help explain the situation.

Steps after submitting a request

After submitting a request, you’ll get an automated email confirmation. The request will be reviewed, after which Google may request more information (like additional URLs). You’ll get a notification of any action taken, and, if the request didn’t meet Google’s requirements for removal, a follow-up message explaining why.

Requests that are denied can be re-submitted with new supporting materials.

Google says that when someone successfully requests the removal of nonconsensual deepfake porn results in Search, the company’s systems will also aim to filter explicit results on all similar searches about that person. In addition, Google says, when an image is removed from Search under Google’s policies, its systems will scan for — and remove — any duplicates of that image they find.

“These protections have already proven to be successful in addressing other types of non-consensual imagery, and we’ve now built the same capabilities for fake explicit images as well,” Google writes in a blog post. “These efforts are designed to give people added peace of mind, especially if they’re concerned about similar content about them popping up in the future.”

Google Gemini is the Pixel 9’s default assistant

James Manyika Google Gemini onstage at Made by Google 2024

Image Credits: Google

Tuesday’s Made by Google event cemented Gemini’s place as the Pixel’s default assistant. The company had previously allowed users to opt in to replace Google Assistant with the generative AI platform, and now the newly announced Pixel 9 phones are the first devices to ship that way by default.

Google notes that if users are unsatisfied with a hallucination-prone platform that might not yet be fully baked, they can roll their new handset back to what the company has taken to calling its “legacy assistant.” The title isn’t entirely apt, however. Google recently reconfirmed that Assistant will live on as part of its Nest/Home operations.

That side of Google’s hardware division recently received its own shot in the arm, courtesy of an overdue update to the Nest Learning Thermostat, the Chromecast-replacing Google TV Streamer and new AI capabilities under the hood.

Gemini currently has both the higher ceiling and the lower floor. The last few generations of neural networks have proven to be extremely impressive for a wide range of tasks, from natural language conversations to image generation. The black-box model is still prone to hiccups, however, leading some to question whether the current hype cycle has made companies like Google overly aggressive in rolling out their solutions.

Google has already showcased a number of extremely impressive AI tools, particularly on the imaging side, including features like Magic Eraser and the new Add Me editing feature. The Pixel 9’s arrival brings additional AI features, including Gemini Live, powered by Gemini 1.5 Pro, which brings more human-like conversations to the handset.

Following the new Pixels’ arrival, Google is no doubt eyeing a broader Android-wide Gemini assistant. That adoption will come down to Google’s update timeframe and the device-makers themselves. Some, like Samsung, have been working on their own take on generative AI, though there’s no evidence that offerings like Galaxy AI have a hope of eclipsing Gemini in a meaningful way.

Also key to the timeline is whether or not Google intends to continue supporting the “legacy” Assistant indefinitely on mobile devices — and whether lower-end devices will be up to the task of adopting Gemini.

Pixel 9 devices start shipping August 22.

Google’s Pixel Watch 3 comes in two sizes

Image Credits: Google

Choice is good, especially when it comes to wearables. Human bodies come in all shapes and sizes, and there’s no such thing as one size fits all. Until Tuesday’s Made by Google 2024 event, however, the Pixel Watch had only been available in one size: 41mm.

Announced Tuesday, the Pixel Watch 3 adds some much-welcomed choice to the line. In addition to the 41mm model, the smartwatch will also be available in 45mm. Both versions sport larger screens than the Pixel Watch 2, owing in part to smaller bezels.

The display is now brighter as well, jumping from a peak of 1,000 to 2,000 nits — a nice improvement for a device designed to be checked in daylight. The AMOLED display packs a 320 ppi density, with a refresh rate up to 60 Hz.

The chip remains unchanged from last year’s model. It’s a Qualcomm Snapdragon Wear 5100, with a Cortex M33 co-processor. The battery on the 41mm model is the same size as well, at 306 mAh, while the 45mm version packs 420 mAh. Google is claiming the same 24 hours of battery life with the always-on display enabled. With Battery Saver mode, that jumps to 36 hours.

That’s a nice bump over the Apple Watch’s stated 18 hours of life. Battery continues to be that product’s biggest sticking point. The OnePlus Watch 2, meanwhile, is on the other end of the spectrum at up to 100 hours. That comes courtesy of a dual-engine architecture, which switches processors to dramatically decrease power consumption.

Image Credits: Google

The other noteworthy bits are on the software side. Fitness is a core feature, as Google’s 2021 Fitbit acquisition continues to be foundational for the watch. The company is getting more serious about appealing to the running community with the Watch 3. It uses a combination of motion sensing and machine learning to form a fuller picture of things like cadence, stride length and vertical oscillation.

A new running dashboard maintains all of those metrics in a single spot.

“Create a variety of running routines — add timed warmups and cool downs, set target pace, heart rate, times, and distances, or even set up interval routines with repeats,” Google writes. “Plan, execute, and reflect to beat your best. Then execute your saved run routines with real-time on-wrist guidance via audio and haptic cues.”

The company is still trying to upsell “serious” runners on the $10-a-month Fitbit Premium membership. That upgrade leverages Google AI, combined with data from past runs, to create personalized workout goals.

The Fitbit app now offers a Morning Brief feature as well. That includes sleep metrics, a “readiness score,” weekly goals and other health numbers. Weather’s in there as well, for a better picture of what the morning run will look like.

The 41mm starts at $350 for the Wi-Fi model and $450 for LTE. The 45mm version runs $400 for Wi-Fi and $500 for LTE.

Google takes on OpenAI with Gemini Live

Gemini Live

Image Credits: Google

Made by Google was this week, featuring a full range of reveals from Google’s biggest hardware event. Google unveiled its new lineup of Pixel 9 phones, including the $1,799 Pixel 9 Pro Fold; advanced AI-powered photo-editing tools; and the new Pixel Buds Pro 2, which are infused with Gemini AI. The company also announced Gemini Live, a conversational AI voice assistant to compete with OpenAI’s Advanced Voice Mode, though the live demo had a few hiccups.

Epic Games launched its rival iOS app store in the European Union. It’s launching with games like Fortnite, Rocket League, Sideswipe and Fall Guys, and is working with developers to bring their games to the Epic Games Store in the future. Fortnite’s return to iOS comes more than four years after Apple first removed the game from its App Store, following years of legal battles and the regulatory changes brought by the EU’s Digital Markets Act.

xAI launched Grok-2 and Grok-2 mini in beta with improved reasoning. The new Grok AI model can now generate images on X, though access is currently limited to the social network’s Premium and Premium+ users. However, Grok’s image-generation feature doesn’t seem to have any guardrails around creating images of political figures like similar products do — and many users are taking advantage of it.


This is TechCrunch’s Week in Review, where we recap the week’s biggest news. Want this delivered as a newsletter to your inbox every Saturday? Sign up here.


News

CrowdStrike president Michael Sentonas accepting a pwnie award at Def Con in Vegas in 2024
Image Credits: Lorenzo Franceschi-Bicchierai / TechCrunch

The “most epic fail” award goes to…: CrowdStrike accepted the award for Most Epic Fail at Def Con’s Pwnie Awards, just a few weeks after its software update triggered a global IT meltdown. At least they were a good sport about it. Read more

Waymo takes its driverless robotaxis to the freeway: Waymo will start testing its fully autonomous robotaxis on freeways in the San Francisco Bay Area after California regulators granted it approval to charge for autonomous freeway rides. Read more

20 years of competing with Google Maps: OpenStreetMap is a community-driven platform that serves companies and software developers with maps so they can rely a little less on proprietary products like Google Maps — and it just celebrated its 20th birthday. Read more

Productivity your way: If you want to stay productive while distancing yourself from the usual Big Tech players, we put together some open source alternatives to popular productivity apps like Calendly, Zoom and Substack. Read more

The FBI goes after Radar: The FBI seized the servers of a ransomware and extortion gang called Radar (aka Dispossessor). It’s a rare win for the FBI, which has struggled to contain and curtail the rising threat from ransomware. Read more

Score shuts down: The dating app for people with good to excellent credit shut down in early August, the company told TechCrunch. What was only supposed to be a pop-up app received so much user interest that it stayed live for six months before finally shutting down. Read more

Apple goes after Patreon: Apple threatened to remove Patreon from the App Store if creators use unsupported third-party billing options or disable transactions on iOS, instead of using its in-app purchasing system for Patreon’s subscriptions. Read more

California supports digital IDs: Residents of California will soon be able to store their driver’s license or state ID in their Apple Wallet or Google Wallet apps, as the state works to launch support for digital IDs in the coming months. Read more

More bad news for Byju’s: India’s top court has put on hold a tribunal ruling that halted Byju’s insolvency proceedings — a win for U.S. creditors that are seeking to recover $1 billion from the once-celebrated edtech startup that has since fallen from grace. Read more

Make money on Telegram: Telegram announced new ways for creators to make money on its platform, including monthly paid subscriptions that users can purchase using the app’s digital currency in order to get access to a creator’s extra content. Read more

That’s a yikes from me: Palo Alto Networks is getting a lot of grief for a recent trade show event in which two women posed with lampshades on their heads. CEO Nikesh Arora apologized in a LinkedIn post, saying it was not “consistent with our values.” Read more

Analysis

Wooden old movie clapperboard pattern with hard shadow on pink background. Concept of film industry, cinema, entertainment, and Hollywood.
Image Credits: DBenitostock / Getty Images

Will AI change art as we know it? The latest AI models can produce great demos, but will they really change how people make movies and TV? A panel at SIGGRAPH explored the potential of generative AI and other systems to change the way media is created today. While filmmakers and VFX experts think these tools could prove genuinely useful to filmmaking in the short term, they could also change the medium beyond recognition in the long term. Read more

Pour one out for CrowdTangle: Journalists, researchers and politicians are mourning Meta’s shutdown of CrowdTangle, a tool used for tracking the spread of disinformation on Facebook and Instagram. Its replacement is less accessible and has fewer features, critics say, leading many people to question why the company axed the useful tool just three months before a contentious U.S. election that is already threatened by AI and misinformation. Read more