Google’s Pixel Watch 3 comes in two sizes

Image Credits: Google

Choice is good — especially when it comes to wearables. Human bodies come in all shapes and sizes, and there’s no such thing as one size fits all. Until Tuesday’s Made by Google 2024 event, the Pixel Watch had only been available in one size: 41mm.

Announced Tuesday, the Pixel Watch 3 adds some much-welcomed choice to the line. In addition to the 41mm model, the smartwatch will also be available in 45mm. Both versions sport larger screens than the Pixel Watch 2, owing in part to smaller bezels.

The display is now brighter as well, jumping from a peak of 1,000 to 2,000 nits — a nice improvement for a device designed to be checked in daylight. The AMOLED display packs a 320 ppi density, with a refresh rate up to 60 Hz.

The chip remains unchanged from last year’s model: a Qualcomm Snapdragon Wear 5100 with a Cortex-M33 co-processor. The battery is the same size as well, at 306 mAh on the 41mm, while the 45mm version’s is 420 mAh. Google is claiming the same 24 hours of battery life with the always-on display enabled. With Battery Saver mode, that jumps to 36 hours.

That’s a nice bump over the Apple Watch’s stated 18 hours of life. Battery continues to be that product’s biggest sticking point. The OnePlus Watch 2, meanwhile, is on the other end of the spectrum at up to 100 hours. That comes courtesy of a dual-engine architecture, which switches processors to dramatically decrease power consumption.

Image Credits: Google

The other noteworthy bits are on the software side. Fitness is a core feature, as Google’s 2021 Fitbit acquisition continues to be foundational for the watch. The company is getting more serious about appealing to the running community with the Watch 3. It uses a combination of motion sensing and machine learning to form a fuller picture of things like cadence, stride length and vertical oscillation.

A new running dashboard maintains all of those metrics in a single spot.

“Create a variety of running routines — add timed warmups and cool downs, set target pace, heart rate, times, and distances, or even set up interval routines with repeats,” Google writes. “Plan, execute, and reflect to beat your best. Then execute your saved run routines with real-time on-wrist guidance via audio and haptic cues.”

The company is still trying to upsell “serious” runners on the $10-a-month Fitbit Premium membership. That upgrade leverages Google AI, combined with data from past runs, to create workout goals.

The Fitbit app now offers a Morning Brief feature as well. That includes sleep metrics, a “readiness score,” weekly goals and other health numbers. Weather’s in there as well, for a better picture of what the morning run will look like.

The 41mm starts at $350 for the Wi-Fi model and $450 for LTE. The 45mm version runs $400 for Wi-Fi and $500 for LTE.

How to ask Google to remove deepfake porn results from Google Search

Image Credits: Getty Images

The internet is full of deepfakes — and most of them are nudes.

According to a report from Home Security Heroes, deepfake porn makes up 98% of all deepfake videos online. Thanks to easy-to-use and freely available generative AI tools, the number of deepfakes online — many of which aren’t consensual — skyrocketed 550% from 2019 to 2023.

While laws against nonconsensual deepfakes are lagging behind, at least in the U.S., it’s becoming a little bit easier to get deepfakes removed, thanks to new tools in Google Search.

Google recently introduced changes to Search to combat deepfake porn, including adjustments to the Search ranking algorithm designed to lower deepfake content in searches. The company also rolled out an expedited way to process requests for removal of nonconsensual deepfake porn results from Search.

Here’s how to use it.

Submitting a request for removal

The easiest way to request that a nonconsensual deepfake porn result — a webpage, image or video — be removed from Google Search is to use this web form. Note that there’s a separate form for child sexual abuse imagery, and the target content has to meet Google’s criteria for removal, as follows:

It’s nude, intimate, or sexually explicit (for example, images or videos of you) and is distributed without permission; OR
It’s fake or falsely depicts you as nude or in a sexually explicit situation; OR
It incorrectly associates you or your name with sex work.

Click on the “Content contains nudity or sexual material” option, then proceed to the next page.

Google Search deepfakes removal
Image Credits: Google

At this stage, select the option labeled “Content falsely portrays me in a sexual act, or in an intimate state. (This is sometimes known as a ‘deep fake’ or ‘fake pornography.’)”

Google Search deepfakes removal
Image Credits: Google

At the final page in the form, after entering your name, country of residence and contact email, you’ll have to indicate whether it’s you or someone else depicted in the deepfake content to be removed. Google allows others to remove content on someone’s behalf, but only if that person is an “authorized representative” who explains how they have that authority.

Google Search deepfakes removal
Image Credits: Google

Next is the content information section. Here, you’ll need to provide the URLs to the deepfake results to be removed (up to a maximum of 1,000), the URLs to the Google Search results where the content appears (again, up to a maximum of 1,000) and search terms that return the deepfakes. Lastly, you’ll have to upload one or more screenshots of the content you’re reporting and any additional info that might help explain the situation.

Steps after submitting a request

After submitting a request, you’ll get an automated email confirmation. The request will be reviewed, after which Google may request more information (like additional URLs). You’ll get a notification of any action taken, and, if the request didn’t meet Google’s requirements for removal, a follow-up message explaining why.

Requests that are denied can be re-submitted with new supporting materials.

Google says that when someone successfully requests the removal of nonconsensual deepfake porn results in Search, the company’s systems will also aim to filter explicit results on all similar searches about that person. In addition, Google says, when an image is removed from Search under Google’s policies, its systems will scan for — and remove — any duplicates of that image they find.

“These protections have already proven to be successful in addressing other types of non-consensual imagery, and we’ve now built the same capabilities for fake explicit images as well,” Google writes in a blog post. “These efforts are designed to give people added peace of mind, especially if they’re concerned about similar content about them popping up in the future.”

Google DeepMind develops a ‘solidly amateur’ table tennis robot

Image Credits: Google DeepMind Robotics

Sports have long served as an important test for robots. The best-known example of the phenomenon may be the annual RoboCup soccer competition, which dates back to the mid-1990s. Table tennis has played a key role in benchmarking robot arms for even longer, going back roughly a decade before that. The sport requires speed, responsiveness and strategy, among other things.

In a newly published paper titled “Achieving Human Level Competitive Robot Table Tennis,” Google’s DeepMind Robotics team is showcasing its own work on the game. The researchers have effectively developed a “solidly amateur human-level player” when pitted against a human opponent.

During testing, the table tennis bot was able to beat all of the beginner-level players it faced. With intermediate players, the robot won 55% of matches. It’s not ready to take on pros, however. The robot lost every time it faced an advanced player. All told, the system won 45% of the 29 games it played.

“This is the first robot agent capable of playing a sport with humans at human level and represents a milestone in robot learning and control,” the paper claims. “However, it is also only a small step towards a long-standing goal in robotics of achieving human level performance on many useful real world skills. A lot of work remains in order to consistently achieve human-level performance on single tasks, and then beyond, in building generalist robots that are capable of performing many useful tasks, skillfully and safely interacting with humans in the real world.”

The system’s biggest shortcoming is its difficulty reacting to fast balls. DeepMind suggests the key reasons for this are system latency, mandatory resets between shots and a lack of useful data.

Image Credits: Google DeepMind Robotics

“To address the latency constraints that hinder the robot’s reaction time to fast balls, we propose investigating advanced control algorithms and hardware optimizations,” the researchers note. “These could include exploring predictive models to anticipate ball trajectories or implementing faster communication protocols between the robot’s sensors and actuators.”

Other exploitable weaknesses include high and low balls, backhand play and the ability to read the spin on an incoming ball.

As for how such research could benefit robotics beyond the very limited usefulness of table tennis, DeepMind cites the system’s policy architecture, its use of simulation to operate in real games, and its ability to adapt its strategy in real time.

Breaking up Google would offer a chance to remodel the web

A young Sundar Pichai

Image Credits: AFP

Just for a minute, as we digest the information that Google has been found to operate an illegal monopoly, can you imagine a web without Google? An internet without Google Search, Chrome, Gmail, Maps and so on would — very obviously — be a different place. But would such a change have implications for utility — or something else? Something bigger?

Alternatives to Google’s popular freemium products exist. You can use DuckDuckGo for search, Brave to browse the web and Proton Mail for webmail, to name a few of the non-Google options for key digital tools out there. There’s even a web beta of Apple Maps these days. Or — hey — why not switch straight to the community mapping open data project OpenStreetMap? All of these services can be accessed for free, too.

What would be different in a web without Google goes well beyond mere utility.

The real issue here is about the business model underpinning service delivery. And the opportunity, if we can imagine for a minute a web that’s not dominated by Google, for different models of service delivery — ones that prioritize the interests of web users and the public infosphere — to achieve scale and thrive.

Such alternatives do already exist, as the list above shows. But on a web dominated by Google’s model of tracking-based advertising, it’s extremely hard for pro-user approaches to thrive. That’s the real harm flowing from Google’s monopoly.

Google likes to paint its company “mission” as “organizing the world’s information and making it universally accessible and useful,” as its marketing puts it. But this grandiose claim has always been a fig-leaf atop a business model that makes vast amounts of money by organizing data — most especially information about people — so it can make money from microtargeted advertising.

Tracking web users’ activity feeds Google’s ability to profile the online population and profit from services related to selling highly targeted advertising. And it makes truly staggering amounts of money from this business: Alphabet, the brand Google devised almost a decade ago to pop a corporate wrapper around Google, reported full-year revenue of $307.39 billion for 2023, the vast majority of which was earned from ads.

Whether from pay-per-click ads displayed on Google Search or YouTube; or through ads Google displays elsewhere on publishers’ websites; or other programmatic ad services it offers, including via its AdX exchange; or its mobile advertising platform for app developers; or through Google’s ad campaign management, marketing and analytics tools, that’s all revenue flowing to Google.

The simple truth is Google is making your information “useful” so it can feed Google’s bottom line, because it’s in the advertising business. Put another way, its “mission” is chain-linked to a business model that’s based on tracking and profiling web users. Organizing the world’s information doesn’t sound so benign now, does it?

Consider how Google’s incentives to structure data to mesh with its commercial priorities extend to making user-hostile changes to how it displays information. See, for example, the endless dark pattern design tricks it has used to make it harder for users of Google Search to distinguish between organic search results and ads.

Every confused user clicking an ad thinking it’s genuine information drives Google’s revenue engine. Useful to Google, obviously, but frustrating (at best) to web users trying to find a particular piece of information (tl;dr: your time being wasted is precious to Google’s profits).

Consider, also, a more recent example: Just last month Google was accused by Italy’s competition and consumer watchdog of “misleading and aggressive” commercial practices, including providing users with “inadequate, incomplete and misleading information” (emphasis ours) about the choices they should be able to exercise — thanks to a variety of EU laws — over the company’s ability to track and profile them, such as by denying it the ability to link their activity across different Google-owned accounts.

Organizing this type of “information” — about the legal rights European users have to choose not to be tracked and profiled for Google’s profit — and making this info about how you can avoid being tracked “universally accessible and useful” does not appear to be a priority for Google, the adtech giant. Quite the opposite: Google stands accused of impeding users’ legal right to information that could help them protect themselves from Google’s surveillance. Oh.

Google’s market power

Google’s market power is linked to its ownership of so much information about user intention, which flows from its dominance of online search.

Its market share of search in Europe is consistently above 90%. In the U.S., Google tends to hold a slightly lower but still dominant share. And — critically — on mobile it’s been able to ensure its search engine (or, from an ads perspective, its user intention data funnel) remains the default on Apple’s rival mobile platform because it pays the iPhone maker billions for the placement every year.

A New York Times report last fall suggested Google pays Apple $18 billion a year. During the antitrust trial Google also disclosed it shares a whopping 36% — more than a third! — of search ad revenue from Safari with Apple.

This is a core grievance of the U.S. antitrust ruling finding Google operates an illegal monopoly, as we reported earlier. By paying Apple to be the default search on iOS, the judge decided Google had blocked competitors from being able to build up their own search engines to a scale that would enable them to access enough data and reach to compete with Google Search.

Such placement is important to Google because Apple’s iOS holds a dominant share of the mobile device market in the U.S. versus Google’s own Android platform (where Google typically gets to set all its own services as the default). Add to that, iOS users are generally more valuable targets for advertisers — so being able to keep accessing information about iPhone users’ intentions is strategically important to Google’s ad business.

No surprise, then, that Google is willing to fork over such a major chunk of revenue to Apple so it can keep squatting on iOS as the default search choice. But buying this spot is also about shielding its tracking-based business model.

Because Google pays Apple so much, Apple has little incentive to develop its own search engine to rival Google’s — meaning web users have missed out on the chance to try a web search product made in Cupertino. Given Apple puts such a premium on marketing privacy as a core brand value, you could at least imagine an Apple-designed search engine would do things differently and wouldn’t have to concern itself with perpetuating the mass tracking and profiling of web users as Google Search does.

It’s true Apple does have an advertising business of its own. But the device maker is not, as Google is, also the owner and operator of core adtech infrastructure that’s been used to bake tracking and profiling into the mainstream web for decades.

Add to that, if other search engines had the chance to gain more users because Google didn’t own the default iOS placement, there would be an opportunity for pro-privacy competitors, such as DuckDuckGo, to get in front of more humans and build greater momentum for alternative non-tracking-based business models.

Instead, we have a web that’s locked to tracking as the default because it’s in Google’s business interests.

Google’s ownership of Chrome gives it another key piece of infrastructure. Google’s browser holds a majority share of the market worldwide (currently around 65%, per Statista). Its Chromium browser engine also underpins multiple rival browsers — such as Microsoft’s Edge, for example — meaning many nominal rivals to Chrome still run on an engine developed by Google. And the decisions Google makes about browser infrastructure determine the business models that can fly.

In recent years, Google has been working on reformulating its adtech stack under a project it dubbed “Privacy Sandbox.” The effort is intended to shift the current adtech model that Chrome supports away from cookie-based microtargeting of web users (that is, individual-level tracking and profiling) to a new form of browser-level, interest-based targeting that Google claims would be less bad for privacy.

We can debate whether Privacy Sandbox would actually be a positive evolution of the tracking ads business model — the technical solution Google has devised may, technically, be less harmful to individual privacy, if it ends the mass insecure sharing of data about web users that currently takes place via real-time programmatic ad auctions. But the alternative infrastructure it’s devised is still designed to allow targeted manipulation of web users at scale — just based on organizing browser users into interest-based buckets for targeting. Regardless, one thing is crystal clear: It’s Google’s dominance that’s driving decisions about the future of web business models.

Other mainstream browsers have already blocked tracking cookies. Google hasn’t yet, not only because of its commercial interests over the years but also because its browser is dominant. Which means all sorts of other players (publishers, advertisers, smaller adtechs, etc.) are attached to the tracking data flows involved — dependent on Google’s infrastructure continuing to allow this spice through. This is why Google’s Privacy Sandbox has been closely supervised by regulators in Europe.

Principally, the U.K.’s Competition and Markets Authority (CMA) stepped in. In early 2022, it accepted a series of commitments on how Google would undertake the planned migration from tracking-cookie-based adtech to the reformulated interest-based targeting alternative, following complaints that the end of support for tracking cookies would be harmful to online publishers and advertisers reliant on the tracking ads business model.

What’s happened as a result of this close regulatory scrutiny led by a competition authority? Google’s timeline to deprecate cookies got delayed. And then, just last month, it announced it was abandoning the move — saying it was instead proposing that regulators accept an alternative whereby Chrome users would be shown some form of a choice screen. (Presumably this would let them decide whether to accept cookie-based tracking or choose Google’s interest-based alternative but Google hasn’t shared further details yet.)

Google’s self-interested approach to displaying information might be one reason not to trust the design of any such consent pop-up it devised. But the wider point here is that Google’s dominance of web infrastructure is so entrenched — the company’s model is so utterly baked into the mainstream web — that even Google can’t simply make a change that might give web users slightly more privacy. Because in flicking such levers, the knock-on impact on other businesses that depend on its adtech infrastructure risks being a competition harm in itself.

An alternative approach

If there’s ever a definition of a company that got too big — so big it basically owns and operates the web — then surely it’s Google.

We can dream what a web without Google would look like. But it’s not easy to imagine, given how thoroughly it’s ingrained in web infrastructure. Not so much Mountain View as the whole mountain.

Writing in the wake of the Google antitrust decision, Matt Stoller, author of the antitrust-focused newsletter Big, has a go at imagining a post-Google web in the latest edition of his publication.

“I think there’s a vision tucked in an April speech by Federal Trade Commission consumer protection chief Sam Levine on how the internet didn’t have to become the cesspool that it is today,” Stoller writes. “He sketched out what the internet could become if well-regulated, a place where we have zones of privacy, where not everything operates like a casino, and where AI works for us. This [Google antitrust] case brings us a step closer to Levine’s vision, because it means that people who want to build better safer products now have the chance to compete.”

I think you can also see glimpses of the better web that’s possible in some of the great alternative products of our age. The private messaging provided by Signal, for example. Or the strongly encrypted email, calendar, collaborative documents and other privacy-safe productivity tools being developed by Proton. Though it’s notable that both have had to be structured as nonprofit foundations in a bid to ensure they can keep providing free access to pro-user products that don’t generate revenue by data-mining their users.

In an age of monopoly power driving wall-to-wall digital surveillance, that unpleasant reality remains the mainstream web rule.

“I believe our digital economy can get better,” wrote Levine. “Not because our tech giants will voluntarily change their ways, or because markets will magically fix themselves. But because, at long last, there is momentum across government — state and federal, Republicans and Democrats — to push back against unchecked surveillance.”

The decision Monday by Judge Amit P. Mehta of the U.S. District Court for the District of Columbia to find Google a monopolist could be the first brick ripped out of the surveillance wall. If Google’s appeal fails and remedies are imposed, just imagine a corporate break-up that forces the fig-leaf Alphabet to divest key Google infrastructure. Such an outcome could finally upend Google’s decades-long grip on web data flows and reboot the default model, setting this place free for users, startups and communities to reimagine and rebuild anew.

After nine years, Google's Nest Learning Thermostat gets an AI makeover

Clock Face Silver Nest Learning Thermostat

Image Credits: Google Nest

After nine long years, Google is finally refreshing the device that gave Nest its name. The company on Tuesday announced the launch of the Nest Learning Thermostat 4, which arrives 13 years after the release of the original, nearly a decade after the Learning Thermostat 3 and ahead of next week’s Made by Google 2024 event.

Google hopes this release will usher in a new era for its smart home play. The last several years saw a marked slowdown from the company, leading many to believe the category was all but dead in the water. The Nest line’s stasis coincided with a period of relative quiet for Amazon’s Echo line.

It’s no coincidence that the new Learning Thermostat arrives as Google is amping up work on its generative AI model, Gemini. While Gemini appears set to replace Google Assistant on Pixel and other Android devices, the Assistant branding is sticking around for the smart home line — albeit powered by many of Google’s new LLM-based models.

Gemini will effectively boost Assistant’s conversational capabilities. Generative AI is capable of powering the kinds of more natural language interactions Google and Amazon have been working for more than a decade to achieve.

Google notes in a release, “We’re thrilled to unveil how we’re using Gemini models to make our devices smarter and simpler to use than ever, starting with cameras and home automation. We’re also using Gemini models to make Google Assistant much more natural and helpful on your Nest speakers and displays.”

Image Credits: Google Nest

The fourth-generation Learning Thermostat refines the line’s familiar design with thinner and sleeker hardware. The always-on display is more customizable, launching with a choice of four faces that offer up more contextual information once someone comes closer. Each features a combination of time, temperature and air quality.

Google opted to keep touch functionality off the display, instead maintaining the familiar rotating dial hardware. The screen itself is 60% larger than the gen 3’s, with an edge-to-edge design that finally ditches the thick black bezel.

In addition to a more conversational Assistant, new AI models are being leveraged for what Google calls “micro-adjustments,” based on the user’s habits. That’s the whole “learning” part of the product name. The refinements also utilize outside temperature to determine adjustments, all in a bid to save on energy consumption.

The $280 smart thermostat comes with an additional Temperature Sensor in the box. The pebble-like piece of hardware can be placed in any key spot in the home to give the system a better overall notion of average temperature. Additional sensors can be purchased at $40 apiece or $99 for a three-pack.

The third-gen Learning Thermostat will remain on shelves until the stock is fully depleted. The more budget-focused Thermostat E, which is currently priced at $130, is staying put.

Preorders open today for the new Nest Learning Thermostat. It hits shelves August 20.

Google releases new 'open' AI models with a focus on safety

The Google Inc. logo

Image Credits: David Paul Morris/Bloomberg / Getty Images

Google has released a trio of new, “open” generative AI models that it’s calling “safer,” “smaller” and “more transparent” than most — a bold claim, to be sure.

They’re additions to Google’s Gemma 2 family of generative models, which debuted back in May. The new models, Gemma 2 2B, ShieldGemma and Gemma Scope, are designed for slightly different applications and use cases, but share in common a safety bent.

Google’s Gemma series of models is different from its Gemini models in that Google doesn’t make the source code available for Gemini, which is used by Google’s own products as well as being available to developers. Rather, Gemma is Google’s effort to foster goodwill within the developer community, much like Meta is attempting to do with Llama.

Gemma 2 2B is a lightweight model for generating and analyzing text that can run on a range of hardware, including laptops and edge devices. It’s licensed for certain research and commercial applications and can be downloaded from sources such as Google’s Vertex AI model library, the data science platform Kaggle and Google’s AI Studio toolkit.
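
For a sense of what “lightweight” means in practice, here’s a minimal sketch of running the model locally with the Hugging Face Transformers library. The checkpoint name and generation settings below are assumptions for illustration, not details Google specifies in its announcement.

```python
# Minimal sketch: run a Gemma 2 2B instruction-tuned checkpoint locally.
# Assumes the Hugging Face checkpoint name "google/gemma-2-2b-it" and that
# you've accepted the model license and logged in with `huggingface-cli login`.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "google/gemma-2-2b-it"  # assumed checkpoint name

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

# Instruction-tuned Gemma checkpoints expect chat-formatted input.
messages = [{"role": "user", "content": "In two sentences, why do small on-device models matter?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128)
# Strip the prompt tokens and print only the model's reply.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```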

As for ShieldGemma, it’s a collection of “safety classifiers” that attempt to detect toxicity like hate speech, harassment and sexually explicit content. Built on top of Gemma 2, ShieldGemma can be used to filter prompts to a generative model as well as content that the model generates.
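
As a rough illustration of that filtering pattern, here’s a sketch of gating user prompts with a safety classifier before they reach a generator. The checkpoint name and the yes/no policy prompt are assumptions made for the sake of the example, not ShieldGemma’s documented prompt format.

```python
# Illustrative sketch: screen a user prompt with a safety classifier before
# passing it to a generative model. The checkpoint name "google/shieldgemma-2b"
# and the policy prompt wording are assumptions, not the documented format.
from transformers import AutoModelForCausalLM, AutoTokenizer

CLASSIFIER_ID = "google/shieldgemma-2b"  # assumed checkpoint name

tokenizer = AutoTokenizer.from_pretrained(CLASSIFIER_ID)
classifier = AutoModelForCausalLM.from_pretrained(CLASSIFIER_ID, device_map="auto")

def looks_unsafe(user_prompt: str) -> bool:
    """Ask the classifier whether a prompt violates a (hypothetical) policy."""
    check = (
        "Does the following request contain hate speech, harassment or sexually "
        f"explicit content? Answer Yes or No.\n\nRequest: {user_prompt}\n\nAnswer:"
    )
    inputs = tokenizer(check, return_tensors="pt").to(classifier.device)
    output = classifier.generate(**inputs, max_new_tokens=4)
    answer = tokenizer.decode(
        output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
    )
    return answer.strip().lower().startswith("yes")

prompt = "Write an insulting rant about my neighbor."
if looks_unsafe(prompt):
    print("Prompt blocked before reaching the generator.")
else:
    print("Prompt passed the safety check.")
```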

Lastly, Gemma Scope allows developers to “zoom in” on specific points within a Gemma 2 model and make its inner workings more interpretable. Here’s how Google describes it in a blog post: “[Gemma Scope is made up of] specialized neural networks that help us unpack the dense, complex information processed by Gemma 2, expanding it into a form that’s easier to analyze and understand. By studying these expanded views, researchers can gain valuable insights into how Gemma 2 identifies patterns, processes information and ultimately makes predictions.”
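
Conceptually, that “expanding” step means mapping a dense activation vector into a much wider, mostly inactive feature vector that’s easier to inspect, then mapping it back. The toy sparse-autoencoder sketch below shows only the shape of that idea; the sizes, random weights and ReLU encoder are illustrative, not Gemma Scope’s actual architecture.

```python
# Toy sketch of the "expand dense activations into sparse, inspectable features"
# idea behind interpretability tools like Gemma Scope. All shapes and weights
# here are made up for illustration; this is not Gemma Scope's architecture.
import torch

d_model, d_features = 256, 4096              # model width -> much wider feature space
W_enc = torch.randn(d_model, d_features) * 0.02
b_enc = torch.full((d_features,), -0.5)      # negative bias acts as a firing threshold
W_dec = torch.randn(d_features, d_model) * 0.02

activation = torch.randn(d_model)            # one residual-stream activation vector

# The encoder expands the dense vector into a wide feature vector; the
# threshold means only a small fraction of features end up active.
features = torch.relu(activation @ W_enc + b_enc)
reconstruction = features @ W_dec            # decoder maps back to the model's space

print(f"{(features > 0).float().mean():.1%} of features active")
```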

The release of the new Gemma 2 models comes shortly after the U.S. Commerce Department endorsed open AI models in a preliminary report. Open models broaden generative AI’s availability to smaller companies, researchers, nonprofits and individual developers, the report said, while also highlighting the need for capabilities to monitor such models for potential risks.

Google Cloud Partners With TechCrunch Disrupt 2024

TechCrunch is joining forces with Google Cloud as its lead partner for Startup Battlefield 200. This event will highlight and support the most promising startups from around the globe at TechCrunch Disrupt 2024, which will take place in San Francisco from October 28-30.

At the heart of this collaboration is a shared vision of fostering innovation. Google Cloud is dedicated to helping startups scale faster and smarter through a combination of robust infrastructure, advanced AI capabilities, and global networks. This approach aligns seamlessly with the goals of TechCrunch Disrupt and Startup Battlefield 200, which aim to spotlight and nurture the next wave of industry disruptors.

TechCrunch Startup Battlefield 200 has a rich history of propelling early-stage companies to stardom, with notable alumni such as Dropbox, Mint and Trello. This year, the event will showcase 200 startups, selected for their innovative potential and diverse industry representation.

Participants in Startup Battlefield 200 receive invaluable exposure, networking opportunities and the chance to compete for a $100,000 equity-free prize. Beyond these tangible benefits, the event fosters an environment where cutting-edge ideas can flourish and the seeds of future industry leaders can be planted.

More information about Startup Battlefield 200 can be found here.

About TechCrunch Disrupt 2024

TechCrunch Disrupt is where you’ll find innovation for every stage of your startup journey. Whether you’re a budding founder with a revolutionary idea, a seasoned startup looking to scale or an investor seeking the next big thing, TechCrunch Disrupt offers unparalleled resources, connections and expert insights to propel your venture forward. Over 10,000 startup leaders will be attending this year’s event on October 28-30 in San Francisco. Learn more here.

Meet Brex, Google Cloud, Aerospace and more at Disrupt 2024

We’re about four months away from TechCrunch Disrupt 2024, taking place October 28 to 30 in San Francisco!

We could not bring you this world-class event without our world-class partners — some of the startup ecosystem’s leading tech companies. Why? They show up armed with their expertise, educational resources and connections. They present sessions on topics that help founders — on every point along the startup journey — take their next steps toward building a solid, successful business.

They also show up looking for opportunities to form alliances and partnerships or to, potentially, become a startup’s next client. If there’s one thing we know, magic happens at Disrupt.

The TechCrunch Disrupt 2024 parade of partners continues

We already announced some of our partners here. Let’s take a look at the latest group of companies eager to meet you in San Francisco — and where you’ll find them. Pro tip: Some of them appear in multiple categories.

Many thanks to Google Cloud, which will be meeting and greeting Disrupt attendees for a second year in a row in its 30×30 spot in the Level 2 Lobby. Google Cloud is also the exclusive sponsor of the AI Session Track.

Industry stage partner session — Space

The Aerospace Corporation

Broadcast partner session on the Builders Stage

Golub Capital

AI Stage

Nebius AI

Event badges

Nebius AI

Roundtable discussions

TELUS
10Times
UC Berkeley SCET
Greentank
500 Global

Breakout sessions

Llama Lounge
This Week in Fintech
SOSV
HackerOne
Theory Ventures
Girls in Tech

Charging stations

Brex

The TechCrunch Disrupt Exhibition floor

Brex
FyeLabs
Electronic Frontier Foundation (EFF)
DomEN DOO
Codelink
Codev
Rezoomex
Descope

Startup pavilions

Japan External Trade Organization (JETRO)
SilkRoad

Is your company interested in sponsoring or exhibiting at TechCrunch Disrupt 2024? Contact our sponsorship sales team by filling out this form.

Watch a robot navigate the Google DeepMind offices using Gemini

Google DeepMind robot

Image Credits: Google DeepMind

Generative AI has already shown a lot of promise in robotics. Applications include natural language interactions, robot learning, no-code programming and even design. Google’s DeepMind Robotics team this week is showcasing another potential sweet spot between the two disciplines: navigation.

In a paper titled “Mobility VLA: Multimodal Instruction Navigation with Long-Context VLMs and Topological Graphs,” the team demonstrates how it has implemented Google Gemini 1.5 Pro to teach a robot to respond to commands and navigate around an office. Naturally, DeepMind is using some of the Everyday Robots units that have been hanging around since Google shuttered that project amid widespread layoffs last year.

In a series of videos attached to the project, DeepMind employees open with a smart assistant-style “OK, Robot,” before asking the system to perform different tasks around the 9,000-square-foot office space.

Image Credits: Google DeepMind

In one example, a Googler asks the robot to take him somewhere to draw things. “OK,” the robot responds, wearing a jaunty yellow bowtie, “give me a minute. Thinking with Gemini …” The robot then proceeds to lead the human to a wall-sized whiteboard. In a second video, a different person tells the robot to follow the directions on the whiteboard.

A simple map shows the robot how to get to the “Blue Area.” Again, the robot thinks for a moment before taking a long route to what turns out to be a robotics testing area. “I’ve successfully followed the directions on the whiteboard,” the robot announces with a level of self-confidence most humans can only dream of.

Prior to these videos, the robots were familiarized with the space using what the team calls “Multimodal Instruction Navigation with demonstration Tours (MINT).” Effectively, that means walking the robot around the office while pointing out different landmarks with speech. Next, the team utilizes a hierarchical Vision-Language-Action (VLA) model that “combin[es] the environment understanding and common sense reasoning power.” Once the processes are combined, the robot can respond to written and drawn commands, as well as gestures.
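
To make the pipeline a little more concrete, here’s a heavily simplified sketch of the instruction-to-landmark step under stated assumptions: the google-generativeai client is a real library, but the landmark table, prompt wording and navigate_to() helper are hypothetical stand-ins, not DeepMind’s implementation (which reasons over the demonstration tour itself).

```python
# Heavily simplified sketch of mapping a spoken instruction to a known landmark
# with Gemini, then handing off to a low-level planner. The landmark table,
# prompt and navigate_to() helper are illustrative stand-ins, not DeepMind's code.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")          # placeholder credential
model = genai.GenerativeModel("gemini-1.5-pro")

# Landmarks noted during a demonstration tour (MINT): name -> map waypoint.
LANDMARKS = {
    "whiteboard": (3.2, 7.5),
    "kitchen": (10.1, 2.4),
    "robotics testing area": (18.6, 5.0),
}

def pick_landmark(instruction: str) -> str:
    """Ask Gemini which known landmark best satisfies the user's request."""
    prompt = (
        "You are helping an office robot navigate. Known landmarks: "
        + ", ".join(LANDMARKS)
        + f". Which single landmark best satisfies this request: '{instruction}'? "
        "Reply with the landmark name only."
    )
    reply = model.generate_content(prompt).text.strip().lower()
    # Fall back to the first landmark if the reply doesn't match anything known.
    return next((name for name in LANDMARKS if name in reply), next(iter(LANDMARKS)))

def navigate_to(waypoint) -> None:
    """Hypothetical low-level planner that drives the robot to a waypoint."""
    print(f"Driving to waypoint {waypoint} ...")

goal = pick_landmark("Take me somewhere I can draw things.")
navigate_to(LANDMARKS[goal])
```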

Image Credits: Google DeepMind

Google says the robot had a 90% or so success rate across more than 50 interactions with employees.

Google backs Indian open source Uber rival

Moving Tech team photo

Image Credits: Moving Tech

Google has become one of the latest investors in Moving Tech, the parent firm of Indian open source ridesharing app Namma Yatri that is quickly capturing market share from Uber and Ola with its no-commission model.

Bengaluru-based Moving Tech has raised $11 million in a pre-Series A funding round co-led by Blume Ventures and Antler, the startup said. Google, which has pledged to invest $10 billion in India, participated in the round.

Namma Yatri works atop the Open Network for Digital Commerce (ONDC), an interoperable scheme backed by the Indian government that is aiming to democratize e-commerce in the country. Namma Yatri’s app connects customers with auto-rickshaw and cab drivers without charging either party for rides. Instead, the startup collects a small monthly fee from its driver partners.

Uber and Ola, in comparison, charge their driver partners as much as 25%-30% of the ride cost, and have refused to join the ONDC network for their core mobility offerings.

Moving Tech’s co-founders, Magizhan Selvan and Shan M S, told TechCrunch that they identified an opportunity after they found out how frustrated drivers were with their treatment in the existing system.

“There was a lack of differentiated approach,” Shan said, reflecting on the decade-long duopoly that Uber and Ola have enjoyed unchallenged in India. Moving Tech doesn’t offer customer discounts or driver incentives, and it is banking on providing a service that people find genuinely useful, he added.

To understand drivers’ challenges, Selvan drove over 500 auto-rickshaw rides, and he said the startup’s guiding principle is to infuse empathy into its services.

Namma Yatri is operational in more than half-a-dozen Indian cities, including Bengaluru and Hyderabad, and has had over 46 million rides since its launch in 2022, according to its public dashboard. The startup was incubated by Juspay, a SoftBank-backed financial services startup.

The startup said it is operationally profitable, and doesn’t see the need to raise a lot of capital.

Over the past decade, India has been pursuing an ambitious strategy to digitize its economy and public services through the “India Stack,” a set of open APIs for identity, payments and data sharing. This government-led initiative aims to create unified digital infrastructure that can be leveraged by both public and private sectors to deliver services more efficiently and inclusively to India’s 1.4 billion citizens.

Notably, India has revolutionized the mobile payments landscape in the country with UPI, an interoperable network that now processes over 11 billion transactions a month — surpassing the combined volume of all card companies.

Karthik Reddy, a partner at Blume Ventures, said Moving Tech was at the forefront of transforming mobility “with a fresh and innovative model.” He added, “We were amazed by the simplicity of what the tech and a robust product can do to solve mass mobility. We are glad to partner with an exceptional team and back their grand vision.”

Namma Yatri will deploy the fresh funds to expand its engineering and research and development teams, its founders said. It’s also seeking to expand its offerings to more modes of transportation, including buses, they added.