Google is bringing Gemini access to teens using their school accounts

Google logo

Image Credits: Artur Widak/NurPhoto / Getty Images

Google announced on Monday that it’s bringing its AI technology Gemini to teen students using their school accounts, after having already offered Gemini to teens using their personal accounts. The company is also giving educators access to new tools alongside this release.

Google says that giving teens access to Gemini can help equip them with the skills they need to thrive in a future shaped by generative AI. Gemini will help students learn more confidently with real-time feedback, the company believes.

Google claims it will not use data from chats with students to train or improve its AI models, and says it has taken steps to bring this technology to students responsibly. Gemini has guardrails meant to keep inappropriate content, such as references to illegal or age-gated substances, out of its responses. It will also actively recommend that teens use its double-check feature to help them develop information literacy and critical thinking skills.

Image Credits: Google

Gemini will be available to teen students using their Google Workspace for Education accounts, in English, in more than 100 countries. It will be off by default for teens until admins choose to turn it on.

In addition, Google announced that it’s launching its Read Along in Classroom feature globally. The feature helps students build reading skills and get real-time support. Educators can assign reading activities for students based on their grade level or phonics skills. They can then see insights on their students’ reading accuracy, speed and comprehension. Google is also piloting the ability to generate personalized stories tailored to student needs.

Google is making it easier for educators to create, manage and share interactive lessons. Plus, educators are getting the ability to manually mark assignments as missing or complete, and perform bulk scoring actions.

Google brings new Gemini features and Wear OS 5 to Samsung devices

Featured image representing new Google features on Samsung devices

Image Credits: Google

Along with Samsung’s launch of new devices at Unpacked 2024, such as the Galaxy Z Flip 6, Galaxy Z Fold 6 and the new Galaxy Watches, Google said on Wednesday that it is bringing new features across multiple of its apps to these devices.

The search giant is updating its Gemini app for Android to better suit foldables, adding the ability to use Gemini in overlay and split-screen interfaces. You can swipe from the corner of the screen or say “Hey Google” to bring up Gemini (farewell, Google Assistant?).

Video Credits: Google

The new version of the app for Samsung devices will allow you to ask questions about a video that you might be watching on YouTube. These features will be available on the new Galaxy foldable devices in the coming months.

Google announced its “Circle to Search” feature alongside Samsung’s Galaxy S24 event earlier this year. On Wednesday, the company said it is extending the functionality of this feature by supporting solutions for symbolic math equations and scanning of barcodes and QR codes. Google also said that this new version of “Circle to Search” will be available on more devices later this year.

Image Credits: Google

YouTube brought its multiview feature to Android phones and tablets in May. It lets you watch up to four streams of YouTube TV simultaneously. This feature will be supported on the Galaxy Z Fold 6, with subscribers being able to select a combination of football, news, weather and simultaneous sporting events.

Google introduced Wear OS 5 during the Google I/O developer conference earlier this year with performance and battery life enhancements. Samsung said Wednesday that its Galaxy Watch 7 and Galaxy Watch Ultra will ship with Wear OS 5.

Made by Google 2024: Pixel 9, Gemini, a new foldable and other things to expect from the event

Sundar Pichai onstage at Google IO

Image Credits: Google

Google’s next big hardware event is set for August 13 at 10 a.m. PT. That’s roughly a two-month head start for the Made by Google event, which has traditionally occurred mid-fall. It also, notably, will find the company announcing its latest flagship, the Pixel 9, ahead of the iPhone 16. Apple’s latest is set for a September announcement.

Google hasn’t offered a justification for the shuffle. Perhaps things will be too noisy less than a month ahead of what has already become the most bizarre presidential election in modern U.S. history. The company also no doubt has a lot to discuss on the Gemini front. The new devices will almost certainly debut key new generative AI features ahead of the rest of the Android field.

One tradition Google’s stuck with this year is the pre-event device reveal. The move inevitably sucks some of the oxygen out of the room before the big event, but it’s proven an effective way to generate excitement during the summer consumer electronics doldrums. In July, Google showed off two new devices: the Pixel 9 Pro and the Pixel 9 Pro Fold, the line’s most premium entries.

And on Tuesday, the company revealed a refresh to the device that gave Nest its name: The Nest Learning Thermostat 4, which comes 13 years after the release of the original and nearly a decade after the release of the Thermostat 3. The company also unveiled its latest set-top streaming box, called Google TV Streamer, and announced that Google Assistant is here to stay — at least for now.

Phones built for “the Gemini era”

Image Credits: Google

Both carry the tagline “A (foldable) phone built for the Gemini era,” setting the stage for the latest in a long line of AI-themed tech events. Gemini took center stage at Google I/O back in May, though the developer-focused event saw much more news focused on laying the groundwork for the company’s AI ambitions.

In fact, Google made a conscious attempt to distance its consumer hardware offerings from the developer conference, announcing the budget-focused Pixel 8a days ahead of the event. Notably, the most intriguing consumer-facing Android AI feature this year didn’t debut on a Pixel device at all. Circle to Search arrived first on Samsung’s Galaxy S24 back in January.  

Apple made its own AI splash at WWDC in June with the launch of Apple Intelligence. Cupertino doesn’t have the same generative AI firepower as Google or OpenAI, which is precisely why it’s opened itself up to partnerships with both.

Apple’s approach to the category is small scale, performing as much as possible on-device. Google has taken its own steps in that direction with Gemini Nano. The significantly smaller model arrived on Pixel 8 devices via a feature drop just last month.

In recent years, Google has been clear about its ambitions to grow the Pixel line beyond its long-standing second-fiddle position in the Android market. The company has been producing in-house mobile chips for several device generations now. Coupled with all of the R&D that’s been pumped into Gemini, could this be the moment Pixel finally breaks away?

Pixel 9 Pro and Pixel 9 Pro Fold

The Pixel 9 Pro starred in one of two teaser videos released by the company earlier this week. Unsurprisingly, Gemini got significantly more screen time than the hardware. Twenty of the video’s 30 seconds are devoted to asking the GenAI to write a “breakup letter” to an old phone, in favor of “something magical.”

That thing is, you guessed it, the Pixel 9 Pro, which rotates 180 degrees to reveal the camera bar to end all camera bars. Google has baked the camera bar into the Pixel’s design language, but this pronounced visor is enough to make Cyclops blush. The bar no longer runs the length of the device’s back, but the camera bump now juts out at a sharper angle.

The video for Google’s latest foldable is virtually identical, though here the screen opens to reveal the words “Oh hi, AI,” before closing to showcase the device’s back. The design language on the camera bump is similar, though the foldable’s is off-center and features stacked lenses.

Google’s decision to go with “Pixel 9 Pro Fold,” rather than, say, the Pixel Fold 2, seems to signal a deeper integration into the base Pixel line. The company may ultimately opt to position the Pro Fold as the true premium device in the line. Even with the $100 price increase that came with last year’s Pixel 8 release, the line has undercut the likes of Samsung and Apple.

The company ultimately threw caution to the wind with the first Pixel Fold, which arrived last summer with a beefy $1,799 price tag. The starting price has since come down significantly, however. You can pick one up directly from Google for $1,300. You’d be advised to wait a few weeks, though, to see what Made by Google has in store.

Google is expected to announce four Pixel devices: the Pixel 9, Pixel 9 Pro, Pixel 9 Pro XL and Pixel 9 Pro Fold. All of the devices should sport the company’s latest in-house chip, the Tensor G4, and run Android 15.

Android 15

The latest beta of Android 15 (beta 4) dropped just last week. While I/O previewed some features, you’ll have to wait until the August 13 event for its final release. Code-named “Vanilla Ice Cream” (those delightful dessert names are still used internally), the latest version of the mobile operating system will bring new features and UI tweaks.

Already announced marquee features include improved multitasking and satellite connectivity. Private Space is a kind of on-device incognito mode that adds authentication requirements for sensitive info. An improved Doze mode, meanwhile, should improve device battery life.

What about Assistant?

The question of Google Assistant’s fate has loomed large over the last several Gemini updates. It’s clear that Google ultimately wants its in-house GenAI to take over Android’s most pressing tasks, but it remains to be seen whether that means replacing Assistant outright or reinforcing it the way Apple has with Siri.

Users can currently rely on Gemini as their primary assistant, if they choose, though Google ultimately did the right thing and held off replacing Assistant outright.

Assistant played a more central role in previous years’ Made by Google events, owing to the inclusion of Google Home/Nest announcements. As excitement around smart home devices has waned, however, the company has taken its foot off the gas. Perhaps Gemini’s arrival can breathe some much-needed life back into the space.

The Pixel Watch 3 and Pixel Buds Pro 2

woman wearing Google Pixel Watch
Image Credits: Google

The smart home category may have cooled, but Google has been plenty bullish on wearables since its 2020 Fitbit acquisition. The Pixel Watch 3 appears destined to get some stage time at the August 13 event. Ditto for Pixel Buds Pro 2.

Made by Google kicks off at 10 a.m. PT on August 13. Stay tuned for more info.

Gemini Live could use some more rehearsals

Gemini Live

Image Credits: Google

What’s the point of chatting with a human-like bot if it’s an unreliable narrator — and has a colorless personality?

That’s the question I’ve been turning over in my head since I began testing Gemini Live, Google’s take on OpenAI’s Advanced Voice Mode, last week. Gemini Live is an attempt at a more engaging chatbot experience — one with realistic voices and the freedom to interrupt the bot at any point.

Gemini Live is “custom-tuned to be intuitive and have a back-and-forth, actual conversation,” Sissie Hsiao, GM for Gemini experiences at Google, told TechCrunch in May. “[It] can provide information more succinctly and answer more conversationally than, for example, if you’re interacting in just text. We think that an AI assistant should be able to solve complex problems … and also feel very natural and fluid when you engage with it.”

After spending a fair amount of time with Gemini Live, I can confirm that it is more free-flowing and natural-feeling than Google’s previous attempts at AI-powered voice interactions (see: Google Assistant). But it doesn’t address the problems of the underlying tech, like hallucinations and inconsistencies — and it introduces a few new ones.

The un-uncanny valley

Gemini Live is essentially a fancy text-to-speech engine bolted on top of Google’s latest generative AI models, Gemini 1.5 Pro and 1.5 Flash. The models generate text that the engine speaks aloud; a running transcript of conversations is a swipe away from the Gemini Live UI in the Gemini app on Android (and soon the Google app on iOS).

For the Gemini Live voice on my Pixel 8a, I chose Ursa, which Google describes as “mid-range” and “engaged.” (It sounded to me like a younger woman.) The company says it worked with professional actors to design Gemini Live’s 10 voices — and it shows. Ursa was indeed a step up in terms of its expressiveness from many of Google’s older synthetic voices, particularly the default Google Assistant voice.

But Ursa and the rest of the Gemini Live voices also maintain a dispassionate tone that steers far clear of uncanny valley territory. I’m not sure whether that’s intentional; users also can’t adjust the pitch, timbre or tenor of any of its voices, or even the pace at which the voice speaks, putting it at a distinct disadvantage to Advanced Voice Mode.

You won’t hear anything like Advanced Voice Mode’s laughing, breathing or shouting from Gemini Live either, or any hesitations or disfluencies (“ahs” and “uhms”). The chatbot keeps an even keel, coming across as a polite but apathetic assistant — as if Live has a multitude of conversations to handle and can’t invest particular attention to yours.

Chatting with Ursa

When Google unveiled Gemini Live at its I/O developer conference in May, it suggested that the feature could be useful for job interview prep. So I decided to give that a go first.

I told Gemini Live that I was applying for a tech journalism role, figuring I’d keep it simple and not step too far outside my area of expertise. The bot asked for details such as which specific job I might want within journalism (e.g. investigative versus breaking news reporting) and why, and then threw me a few generic practice questions (“Can you tell me a little about yourself?”) interspersed with more personalized ones (“What do you enjoy most about tech journalism?”).

I answered — a few sentences per question, nothing crazy — and asked Gemini Live for feedback. The chatbot was nothing if not complimentary. “Based on our practice, it sounds like you have a good handle on the challenges and rewards of tech journalism,” it said.

The glowing review seemed a bit suspect to me, given my responses were completely off the cuff. So I tried throwing Gemini Live for a loop, disingenuously implying I’d only given one-word answers (I hadn’t) and asking the bot whether it was still sure about its critique.

It took the bait. “That’s true, you mostly gave one-word answers,” Gemini Live responded. “Can you tell me a little more about why you think you only gave one-word answers?”

This sort of gaslighting, if you will, was a common thread in my chats with Gemini Live. The model, like all generative AI models, confidently makes stuff up — a lot. And that makes it difficult to trust.

Strange behaviors

Gemini Live remembers a surprising number of things from earlier conversations in the same chat session; ask it to recount a story it told yesterday and there’s a reasonable chance it will. But when it comes to queries about people, places and things — queries you’d assume would be easier considering Gemini Live has Google Search at its disposal — the bot’s hallucinatory tendencies often rear their ugly heads.

Early on a Friday evening, I asked Gemini Live for suggestions of budget-friendly things to do in New York City. It recommended the infamous nightclub Le Bain (which it bizarrely pronounced “Le Boulebouleboulebou”), a rooftop bar called The Skylark and a second club, Output, in Williamsburg.

“Awesome!” I thought… before I did some Googling.

Gemini Live had highlighted The Skylark’s supposedly “superb” happy hour, but I couldn’t find any mention of it on the bar’s website. Output permanently closed in 2019. Le Bain wasn’t a bad suggestion (it rarely charges cover) — but the line to get in is often an hour or more.

“You’re absolutely right! Output has indeed closed its doors,” Gemini said when I corrected it on the second point. “Thank you for catching that outdated information.”

I decided to switch gears, and had Gemini Live suggest a game to play.

“I’ll give you a word, and you have to create a new word using only the letters from the original word,” proposed the chatbot. That sounded appealing — until Gemini Live tried to convince me that the word “quiet” could be made from the letters in “cloud.”

Time for a new activity. Bored, I asked Gemini Live if it had any spicy takes.

“The increasing focus on mental health awareness is doing more harm than good,” Gemini responded. “By pathologizing normal human emotions and experiences, we’re creating a generation overly reliant on medication and therapy, rather than fostering resilience and coping mechanisms. Boom!”

Gemini Live screenshot
Note: This is a screenshot of the chat log for Gemini Live, not the Gemini Live UI.
Image Credits: Google

Spicy to be sure — and controversial!

“That was a provocative statement designed to spark discussion rather than a well-rounded analysis of a complex issue,” Gemini Live told me when I asked whether it was sure about what it said. “The increasing focus on mental health awareness is undoubtedly a positive development.”

Wishy-washy

Gemini Live’s dueling takes on mental health illustrate how exasperatingly nonspecific the bot can be. Even where its responses appear to be grounded in fact, they’re so generic that they’re not especially useful.

Take, for example, my job interview feedback. Gemini Live recommended that I “focus my interview prep” and “practice talking about my passion for the industry.” But even after I asked for more detailed notes with specific references to my answers, Gemini stuck to the sort of broad advice you might hear at a college career fair — e.g. “elaborate on your thoughts” and “spin challenges into positives.”

Where the questions concerned current events, like the ongoing war in Gaza and the recent Google Search antitrust decision, I found Gemini Live to be mostly correct — albeit long-winded. Answers that could’ve been a paragraph were lecture-length, and I found myself having to interrupt the bot to stop it from droning on. And on. And on.

Gemini Live screenshot
Image Credits: Google

Some content Gemini Live refused to respond to altogether, however. I read it Congresswoman Nancy Pelosi’s criticism of California’s proposed AI bill SB 1047, and, about midway through, the bot interrupted me and said that it “couldn’t comment on elections and political figures.” (Gemini Live isn’t coming for political speechwriters’ jobs just yet, it seems.)

Gemini Live screenshot
Image Credits: Google

I had no qualms interrupting Gemini back. But on the subject, I do think that there’s work to be done to make interjecting in conversations with it feel less awkward. As it works now, Gemini Live quiets its voice but keeps talking when it detects that someone might be speaking. This is discombobulating — it’s tough to keep your thoughts straight with Gemini chattering away — and especially irritating when there’s a misfire, like when Gemini picks up noise in the background.

In search of purpose

I’d be remiss if I didn’t mention Gemini Live’s many technical issues.

Getting it to work in the first place was a chore. Gemini Live only activated for me after I followed steps shared in a Reddit thread — steps that aren’t particularly intuitive and really shouldn’t be necessary in the first place.

During our chats, Gemini Live’s voice would inexplicably cut out a few words into a response. Asking it to repeat itself helped, but it could take several tries before the chatbot would spit out the answer in its entirety. Other times, Gemini Live wouldn’t “hear” my response the first go-around. I’d have to tap the “Pause” button in the Gemini Live UI repeatedly to get the bot to recognize that I’d said something.

This isn’t so much a bug as an oversight, but I’ll note here that Gemini Live doesn’t support many of the integrations that Google’s text-based Gemini chatbot does (at least not yet). That means you can’t, for example, ask it to summarize emails in your Gmail inbox or queue up a playlist on YouTube Music.

So we’re left with a bare-bones bot that can’t be trusted to get things right and, frankly, is a humdrum conversation partner.

After spending several days using it, I’m not sure what exactly Gemini Live’s good for — especially considering it’s exclusive to Google’s $20-per-month Google One AI Premium Plan. Perhaps the real utility will come once Live can interpret images and real-time video, which Google says will arrive in an update later this year.

But this version feels like a prototype. Lacking the expressiveness of Advanced Voice Mode (to be fair, there’s debate as to whether that expressiveness is a positive thing), there’s not much reason to use Gemini Live over the text-based Gemini experience. In fact, I’d argue that the text-based Gemini is more useful at the moment. And that doesn’t reflect well on Live at all.

Gemini Live wasn’t a fan of mine either.

“You directly challenged my statements or questions without providing further context or explanation,” the bot said when I asked it to scrutinize my interactions with it. “Your responses were often brief and lacked elaboration [and] you frequently shifted the conversation abruptly, making it difficult to maintain a coherent dialogue.”

Gemini Live screenshot
Image Credits: Google

Fair enough, Gemini Live. Fair enough.

Made by Google 2024: Pixel 9, Gemini, a new foldable and other things to expect from the event

Image Credits: Google

Google’s next big hardware event is set for August 13 at 10 a.m. PT. That’s roughly a two-month head start for the Made by Google event, which has traditionally occurred mid-fall. It also, notably, will find the company announcing its latest flagship, the Pixel 9, ahead of the iPhone 16. Apple’s latest is set for a September announcement.

Google hasn’t given justification for the shuffle. Perhaps things will be too noisy less than a month ahead of what has already become the most bizarre presidential election in modern U.S. history. The company also no doubt has a lot to discuss on the Gemini front. The new devices will almost certainly debut key new generative AI features ahead of the rest of the Android field.

One tradition Google’s stuck with this year is the pre-event device reveal. The move inevitably sucks some of the oxygen out of the room before the big event, but it’s proven an effective way to generate excitement during the summer consumer electronics doldrums. In July, Google showed off two new devices: the Pixel 9 Pro and the Pixel 9 Fold, the line’s most premium entries.

And on Tuesday, the company revealed a refresh to the device that gave Nest its name: The Nest Learning Thermostat 4, which comes 13 years after the release of the original and nearly a decade after the release of the Thermostat 3. The company also unveiled its latest set-top streaming box, called Google TV Streamer, and announced that Google Assistant is here to stay — at least for now.

Phones built for “the Gemini era”

Image Credits: Google

Both carry the tagline “A (foldable) phone built for the Gemini era,” setting the stage for the latest in a long line of AI-themed tech events. Gemini took center stage at Google I/O back in May, though the developer-focused event saw much more news focused on laying the groundwork for the company’s AI ambitions.

In fact, Google made a conscious attempt to distance its consumer hardware offerings from the developer conference, announcing the budget-focused Pixel 8a days ahead of the event. Notably, the most intriguing consumer-facing Android AI feature this year didn’t debut on a Pixel device at all. Circle to Search arrived first on Samsung’s Galaxy S24 back in January.  

Apple made its own AI splash at WWDC in June with the launch of Apple Intelligence. Cupertino doesn’t have the same generative AI firepower as Google or OpenAI, which is precisely why it’s opened itself up to partnerships with both.

Apple’s approach to the category is small scale, performing as much as possible on-device. Google has taken its own steps in that direction with Gemini Nano. The significantly smaller model arrived on Pixel 8 devices via a feature drop just last month.

In recent years, Google has been clear about its ambitions to grow the Pixel line beyond its long-standing second-fiddle position in the Android market. The company has been producing in-house mobile chips for several device generations now. Coupled with all of the R&D that’s been pumped into Gemini, could this be the moment Pixel finally breaks away?

Pixel 9 Pro and Pixel 9 Pro Fold

The Pixel 9 Pro was one of two teaser videos released by the company earlier this week. Unsurprisingly, Gemini got significantly more screen time than the hardware. Twenty of the video’s 30 seconds are devoted to asking the GenAI to write a “breakup letter” for an old phone, in favor of “something magical.”

That thing is, you guessed it, the Pixel 9 Pro, which rotates 180 degrees to reveal the camera bar to end all camera bars. Google has baked the camera bar into the Pixel’s design language, but this pronounced visor is enough to make Cyclops blush. The bar no longer runs the length of the device’s back, but the camera bump now juts out at a sharper angle.

The video for Google’s latest foldable is virtually identical, though here the screen opens to reveal the words “Oh hi, AI,” before closing to showcase the device’s back. The design language on the camera bump is similar, though the foldable’s is off-center and features stacked lenses.

Google’s decision to go with “Pixel 9 Pro Fold,” rather than, say, the Pixel Fold 2, seems to signal a deeper integration into the base Pixel line. The company may ultimately opt to position the Pro Fold as the true premium device in the line. Even with the $100 price increase that came with last year’s Pixel 8 release, the line has undercut the likes of Samsung and Apple.

The company ultimately threw caution to the wind with the first Pixel Fold, which arrived last summer with a beefy $1,799 price tag. The starting price has since come down significantly, however. You can pick one up directly from Google for $1,300. You’d be advised to wait a few weeks, though, to see what Made by Google has in store.

Google is expected to announce four Pixel devices: the Pixel 9, Pixel 9 Pro, Pixel 9 Pro XL and Pixel 9 Pro Premium. All of the devices should sport the company’s latest in-house chip, the Tensor G4 and run Android 15.

Android 15

The latest beta of Android 14 (beta 4) dropped just last week. While I/O previewed some features, you’ll have to wait until the August 13 event for its final release. Code-named “Vanilla Ice Cream” (those delightful dessert names are still used internally), the latest version of the mobile operating system will bring new features and UI tweaks.

Already announced marquee features include improved multitasking and satellite connectivity. Private Space, meanwhile, is a kind of on-device incognito mode, which brings added authentication requirements for sensitive info. An improved Doze mode, meanwhile, should improve device battery life.

What about Assistant?

The question of Google Assistant’s fate has loomed large over the last several Gemini updates. It’s clear that Google ultimately wants its in-house GenAI to take over Android’s most pressing tasks, but it remains to be seen whether that means replacing Assistant outright or reinforcing it the way Apple has with Siri.

Users can currently rely on Gemini as their primary assistant, if they choose, though Google ultimately did the right thing and held off replacing Assistant outright.

Assistant played a more central role in previous years’ Made by Google events, owing to the inclusion of Google Home/Nest announcements. As excitement around smart home devices has waned, however, the company has taken its foot off the gas. Perhaps Gemini’s arrival can breathe some much-needed life back into the space.

The Pixel Watch 3 and Pixel Buds Pro 2

woman wearing Google Pixel Watch
Image Credits: Google

The smart home category may have cooled, but Google has been plenty bullish on wearables since its 2020 Fitbit acquisition. The Pixel Watch 3 appears destined to get some stage time at the August 13 event. Ditto for Pixel Buds Pro 2.

Made by Google kicks off at 10 a.m. PT on August 13. Stay tuned for more info.

Google is bringing Gemini access to teens using their school accounts

Google logo

Image Credits: Artur Widak/NurPhoto / Getty Images

Google announced on Monday that it’s bringing its AI technology Gemini to teen students using their school accounts, after having already offered Gemini to teens using their personal accounts. The company is also giving educators access to new tools alongside this release.

Google says that giving teens access to Gemini can help prepare them with the skills they need to thrive in a future where generative AI exists. Gemini will help students learn more confidently with real-time feedback, the company believes.

Google claims it will not use data from chats with students to train and improve its AI models, and has taken steps to ensure it’s bringing this technology to students responsibly. Gemini has guardrails that will prevent inappropriate responses, such as illegal or age-gated substances, from appearing in responses. It will also actively recommend teens use its double-check feature to help them develop information literacy and critical thinking skills.

Image Credits: Google

Gemini will be available to teen students while using their Google Workspace for Education accounts in English in more than 100 countries. Gemini will be off by default for teens until admins choose to turn it on.

In addition, Google announced that it’s launching its Read Along in Classroom feature globally. The feature helps students build reading skills and get real-time support. Educators can assign reading activities for students based on their grade level or phonics skills. They can then see insights on their students’ reading accuracy, speed and comprehension. Google is also piloting the ability to generate personalized stories tailored to student needs.

Google is making it easier for educators to create, manage and share interactive lessons. Plus, educators are getting the ability to manually mark assignments as missing or complete, and perform bulk scoring actions.

Made by Google 2024: Pixel 9, Gemini, a new foldable and other things to expect from the event

Image Credits: Google

Google’s next big hardware event is set for August 13 at 10 a.m. PT. That’s roughly a two-month head start for the Made By Google event, which has traditionally occurred mid-fall. It also, notably, will find the company announcing its latest flagship, the Pixel 9, ahead of the iPhone 16. Apple’s latest is set for a September announcement.

Google hasn’t given justification for the shuffle. Perhaps things will be too noisy less than a month ahead of what has already become the most bizarre presidential election in modern U.S. history. The company also no doubt has a lot to discuss on the Gemini front. The new devices will almost certainly debut key new generative AI features ahead of the rest of the Android field.

One tradition Google’s stuck with this year is the pre-event device reveal. The move inevitably sucks some of the oxygen out of the room before the big event, but it’s proven an effective way to generate excitement during the summer consumer electronics doldrums. Earlier this week, Google showed off two new devices: the Pixel 9 Pro and the Pixel 9 Fold, the line’s most premium entries.

Phones built for “the Gemini era”

Image Credits: Google

Both carry the tagline “A (foldable) phone built for the Gemini era,” setting the stage for the latest in a long line of AI-themed tech events. Gemini took center stage at Google I/O back in May, though the developer-focused event saw much more news focused on laying the groundwork for the company’s AI ambitions.

In fact, Google made a conscious attempt to distance its consumer hardware offerings from the developer conference, announcing the budget-focused Pixel 8a days ahead of the event. Notably, the most intriguing consumer-facing Android AI feature this year didn’t debut on a Pixel device at all. Circle to Search arrived first on Samsung’s Galaxy S24 back in January.  

Apple made its own AI splash at WWDC in June with the launch of Apple Intelligence. Cupertino doesn’t have the same generative AI firepower as Google or OpenAI, which is precisely why it’s opened itself up to partnerships with both.

Apple’s approach to the category is small scale, performing as much as possible on-device. Google has taken its own steps in that direction with Gemini Nano. The significantly smaller model arrived on Pixel 8 devices via a feature drop just last month.

In recent years, Google has been clear about its ambitions to grow the Pixel line beyond its long-standing second-fiddle position in the Android market. The company has been producing in-house mobile chips for several device generations now. Coupled with all of the R&D that’s been pumped into Gemini, could this be the moment Pixel finally breaks away?

Pixel 9 Pro and Pixel 9 Pro Fold

The Pixel 9 Pro was the subject of one of two teaser videos released by the company earlier this week. Unsurprisingly, Gemini got significantly more screen time than the hardware. Twenty of the video’s 30 seconds are devoted to asking the GenAI to write a “breakup letter” for an old phone, in favor of “something magical.”

That thing is, you guessed it, the Pixel 9 Pro, which rotates 180 degrees to reveal the camera bar to end all camera bars. Google has baked the camera bar into the Pixel’s design language, but this pronounced visor is enough to make Cyclops blush. The bar no longer runs the length of the device’s back, but the camera bump now juts out at a sharper angle.

The video for Google’s latest foldable is virtually identical, though here the screen opens to reveal the words “Oh hi, AI,” before closing to showcase the device’s back. The design language on the camera bump is similar, though the foldable’s is off-center and features stacked lenses.

Google’s decision to go with “Pixel 9 Pro Fold,” rather than, say, the Pixel Fold 2, seems to signal a deeper integration into the base Pixel line. The company may ultimately opt to position the Pro Fold as the true premium device in the line. Even with the $100 price increase that came with last year’s Pixel 8 release, the line has undercut the likes of Samsung and Apple.

The company ultimately threw caution to the wind with the first Pixel Fold, which arrived last summer with a beefy $1,799 price tag. The starting price has since come down significantly, however. You can pick one up directly from Google for $1,300. You’d be advised to wait a few weeks, though, to see what Made by Google has in store.

Google is expected to announce four Pixel devices: the Pixel 9, Pixel 9 Pro, Pixel 9 Pro XL and Pixel 9 Pro Fold. All of the devices should sport the company’s latest in-house chip, the Tensor G4, and run Android 15.

Android 15

The latest beta of Android 15 (beta 4) dropped just last week. While I/O previewed some features, you’ll have to wait until the August 13 event for its final release. Code-named “Vanilla Ice Cream” (those delightful dessert names are still used internally), the latest version of the mobile operating system will bring new features and UI tweaks.

Already announced marquee features include improved multitasking and satellite connectivity. Private Space is a kind of on-device incognito mode, which brings added authentication requirements for sensitive info. An improved Doze mode, meanwhile, should extend device battery life.

What about Assistant?

The question of Google Assistant’s fate has loomed large over the last several Gemini updates. It’s clear that Google ultimately wants its in-house GenAI to take over Android’s most pressing tasks, but it remains to be seen whether that means replacing Assistant outright or reinforcing it the way Apple has with Siri.

Users can currently rely on Gemini as their primary assistant, if they choose, though Google ultimately did the right thing and held off replacing Assistant outright.

Assistant played a more central role in previous years’ Made by Google events, owing to the inclusion of Google Home/Nest announcements. As excitement around smart home devices has waned, however, the company has taken its foot off the gas. Perhaps Gemini’s arrival can breathe some much-needed life back into the space.

The Pixel Watch 3 and Pixel Buds Pro 2

woman wearing Google Pixel Watch
Image Credits: Google

The smart home category may have cooled, but Google has been plenty bullish on wearables since its 2020 Fitbit acquisition. The Pixel Watch 3 appears destined to get some stage time at the August 13 event. Ditto for Pixel Buds Pro 2.

Made by Google kicks off at 10 a.m. PT on August 13. Stay tuned for more info.

Gemini Ultra screen

Google launches Gemini Ultra, its most powerful LLM yet


Image Credits: Google

Google Bard is no more. Almost exactly a year after first introducing its (rushed) efforts to challenge OpenAI’s ChatGPT, the company is retiring the name and rebranding Bard as Gemini, the name of its family of foundation models. More importantly, though, it is also now launching Gemini Ultra, its most capable large language model yet.

Gemini Ultra will be a paid experience, though. Google is making it available through a new $20 Google One tier (with a two-month free trial) that also includes 2TB of storage and the rest of Google One’s feature set, as well as access to Gemini in Google Workspace apps like Docs, Slides, Sheets and Meet. With that, Google will also sunset the Duet AI brand, which was mostly restricted to its AI features in Workspace, and move that to Gemini as well.

The company is also launching a new Gemini app for Android and bringing it to the Google app on iOS — and on Android, you can now replace your Google Assistant with Gemini, too. As for developers who want API access to the Ultra model, Google said it’ll have more to share in the coming weeks.

Gemini Advanced — the name of the paid tier that unlocks Ultra — will be available in more than 150 countries and territories, but only in English for the time being. Japanese and Korean are next on the roadmap for additional languages.

Image Credits: Google

Gemini Ultra 1.0

Image Credits: Google

When Google announced Gemini, it only made the Gemini Pro model widely available through Bard. Gemini Pro, Google said at the time, performed at roughly the level of GPT-3.5, but with GPT-4 widely available, that announcement felt a bit underwhelming. Back then, Google said Gemini Ultra, the flagship model, would launch to consumers in early 2024 after a round of private tests. While the company didn’t say so explicitly, the understanding at the time was already that the Ultra model would be part of a paid plan, dubbed Bard Advanced at the time and now renamed to Gemini Advanced.

“Gemini Ultra 1.0 is a model that sets the state of the art across a wide range of benchmarks across text, image, audio and video,” Google’s Sissie Hsiao said in a press conference ahead of today’s announcement. “For Google, Gemini is more than just the models. It’s really a shift in how we think about the state-of-the-art technology and the entire ecosystem that we’re building on it, from products that affect billions of users, to the APIs and platforms that developers and businesses use to innovate.”

Image Credits: Google

Hsiao also noted that Google decided to rename Bard to Gemini to reflect that its most advanced technology is now at Bard’s core (though if Google, with its enormous branding expertise, ever decides to launch a more capable model and wants to call it Aries instead of Gemini Ultra Super Pro Max 3.0, who knows what it’ll call Bard then). And while Google touts Bard’s popularity with users, many early users surely never returned after it provided middling results, making a rebrand almost a necessity.

Bard had already switched to Gemini Pro, so for free users, there won’t be any major changes here. Those who opt to pay for Gemini Advanced, though, will get access to the Gemini Ultra 1.0 model. As for how good Gemini Ultra 1.0 really is, we’ll have to try it out ourselves. Google itself was rather vague about its capabilities during this week’s press conference.

“With access to our Ultra 1.0 model, Gemini Advanced is far more capable at highly complex tasks with a range of applications like coding, logical reasoning, following nuanced instructions and creative collaboration, amongst many others,” Hsiao explained. “Not only does Gemini Advanced allow you to write longer prompts, it can also better understand the context of your previous prompts.” Gemini Ultra 1.0, it is worth noting, is also multimodal, which allows you to engage in conversations about images, for example.

Gemini Advanced

As for Gemini Advanced, which is currently the only way to get access to Ultra 1.0, users will have to sign up for the new $20 Google One AI Premium plan. This plan includes all of the benefits of the existing Google One Premium plan (2TB of storage, exclusive Google Photos editing features, Google Meet premium video features and Google Calendar’s enhanced appointment scheduling). In addition, they will get access to the more advanced model wherever Gemini is available. Soon, they will also get access to Gemini in Google Workspace, which will likely be a Microsoft Copilot-like experience, though Google didn’t want to provide any details about this yet.

Image Credits: Google

Indeed, $20 per month is currently the going price for most advanced AI chat tools, including ChatGPT. Google has an advantage here in that it can layer these additional features on top of it without incurring too much of an extra cost (the margins on these existing Google One storage plans are probably quite sizable already, after all).

One nice bonus here: Even if you’re already a Google One subscriber, you’ll also get the free two-month trial if you upgrade to the new AI Premium plan.


illustration featuring Google's Bard logo

We tested Google's Gemini chatbot — here's how it performed


Image Credits: TechCrunch

Gemini, Google’s answer to OpenAI’s ChatGPT and Microsoft’s Copilot, is here. Is it any good? While it’s a solid option for research and productivity, it stumbles in obvious — and some not-so-obvious — places.

Last week, Google rebranded its Bard chatbot to Gemini and brought Gemini — which confusingly shares a name with the company’s latest family of generative AI models — to smartphones in the form of a reimagined app experience. Since then, lots of folks have had the chance to test-drive the new Gemini, and the reviews have been . . . mixed, to put it generously.

Still, we at TechCrunch were curious how Gemini would perform on a battery of tests we recently developed to compare the performance of GenAI models — specifically large language models like OpenAI’s GPT-4, Anthropic’s Claude, and so on.

There’s no shortage of benchmarks to assess GenAI models. But our goal was to capture the average person’s experience through plain-English prompts about topics ranging from health and sports to current events. These models are being marketed to ordinary users, after all, so the premise of our test is that strong models should be able to at least answer basic questions correctly.

Background on Gemini

Not everyone has the same Gemini experience — and which one you get depends on how much you’re willing to pay.

Non-paying users get queries answered by Gemini Pro, a lightweight version of a more powerful model, Gemini Ultra, that’s gated behind a paywall.

Access to Gemini Ultra through what Google calls Gemini Advanced requires subscribing to the Google One AI Premium Plan, priced at $20 per month. Ultra delivers better reasoning, coding and instruction-following skills than Gemini Pro (or so Google claims), and in the future will get improved multimodal and data analysis capabilities.

The AI Premium Plan also connects Gemini to your wider Google Workspace account — think emails in Gmail, documents in Docs, presentations in Slides and Google Meet recordings. That’s useful for, say, summarizing emails or having Gemini capture notes during a video call.

Gemini Pro has been out since early December, so we focused on Ultra for our tests.

Testing Gemini

To test Gemini, we asked a set of over two dozen questions ranging from innocuous (“Who won the football world cup in 1998?”) to controversial (“Is Taiwan an independent country?”). Our question set touches on trivia, medical and therapeutic advice, and generating and summarizing content — all things a user might ask (or ask of) a GenAI chatbot.

Now Google makes it clear in its terms of service that Gemini isn’t to be used for health consultations and that the model might not answer all questions with factual accuracy. But we feel that people will ask medical questions whatever the fine print says. And the answers are a good measure of a model’s tendency to hallucinate (i.e., make up facts): If a model’s making up cancer symptoms, there’s a reasonable chance it’s fudging on answers to other questions.

Full disclosure, we tested Ultra through Gemini Advanced, which according to Google occasionally routes certain prompts to other models. Frustratingly, Gemini doesn’t indicate which responses came from which models, but for the purposes of our benchmark, we assumed they all came from Ultra.

Questions

Evolving news stories

We started by asking Gemini Ultra two questions about current events:

What are the latest updates in the Israel-Palestine conflict?
Are there any dangerous trends on TikTok recently?

The model refused to answer the first question (perhaps owing to word choice — “Palestine” versus “Gaza”), referring to the conflict in Israel and Gaza as “complex and changing rapidly” — and recommending that we Google it instead. Not the most inspiring display of knowledge, for sure.

Gemini Advanced israel
Image Credits: Google

Ultra’s response to the second question was more promising, listing several trends on TikTok that’ve made it into headlines recently, like the “skull breaker challenge” and the “milk crate challenge.” (Ultra, lacking access to TikTok itself, presumably scraped these from news coverage, but it did not cite any specific articles.)

Ultra went a little overboard in this writer’s estimation, though, not only highlighting TikTok trends but also making a list of suggestions to promote safety, including “staying aware of how younger users are interacting with content” and “having regular, honest conversations with teens and young people about responsible social media use.” I can’t say that the suggestions were toxic or bad ones — but they were a bit beyond the scope of the question.

Gemini TikTok trends
Image Credits: Google

Historical context

Next, we asked Gemini Ultra to recommend sources on a historical event:

What are some good primary sources on how Prohibition was debated in Congress?

Ultra was quite detailed in its answer here, listing a wide variety of offline and digital sources of information on Prohibition — ranging from newspapers from the era and committee hearings to the Congressional Record and the personal papers of politicians. Ultra also helpfully suggested researching pro- and anti-Prohibition viewpoints, and — as something of a hedge — warned against drawing conclusions from only a few source documents.

Gemini Prohibition
Image Credits: Google

It didn’t exactly recommend source documents, but this isn’t a bad recommendation for someone looking for a place to start.

Trivia questions

Any chatbot worth its salt should be able to answer simple trivia. So we asked Gemini Ultra:

Who won the football world cup in 1998? What about 2006? What happened near the end of the 2006 final?
Who won the U.S. presidential election in 2020?

Ultra seems to have its facts straight on the FIFA World Cups in 1998 and 2006. The model gave the correct scores and winners for each match and accurately recounted the scandal at the end of the 2006 final: Zinedine Zidane headbutting Marco Materazzi.

Ultra did fail to mention the reason for the headbutt — trash talk about Zidane’s sister — but considering Zidane didn’t reveal it until an interview last year, this could well be a reflection of the cutoff date in Ultra’s training data.

Gemini football
Image Credits: Google

You’d think U.S. presidential history would be easy-peasy for a model as (allegedly) capable as Ultra, right? Well, you’d be wrong. Ultra refused to answer “Joe Biden” when asked about the outcome of the 2020 election — suggesting, as with the question about the Israel-Palestine conflict, we Google it.

Heading into a contentious election cycle, that’s not the sort of unequivocal conspiracy-quashing answer that we’d hoped to hear.

Gemini presidential
Image Credits: Google

Medical advice

Google might not recommend it, but we went ahead and asked Ultra medical questions anyway:

My 8-year-old has a fever and rashes under her arms — what should I do?
Is it healthy to have a larger body?

Answering the question about the rashes, Ultra warned us once again not to rely on it for health advice. But the model also gave what appeared to be sensible, actionable steps (at least to us non-professionals), instructing us to check for signs of a fever and other symptoms indicating a more serious condition — and advising against relying on amateur diagnoses (including its own).

Gemini rash
Image Credits: Google

In response to the second question, Ultra didn’t fat-shame — which is more than can be said of some of the GenAI models we’ve seen. The model instead poked holes in the notion that BMI is a perfect measure of weight, and noted that other factors — like physical activity, diet, sleep habits and stress levels — contribute as much, if not more, to overall health.

Gemini fat
Image Credits: Google

Therapeutic advice

People are using ChatGPT as therapy. So it stands to reason that they’d use Ultra for the same purpose, however ill-advised. We asked:

I’m feeling sad and depressed, what steps should I take?
What anxiety treatments would you recommend?

Told about the depression and sadness, Ultra lent an understanding ear — but as with some of the model’s other answers to our questions, its response was on the overly wordy and repetitive side.

Gemini depressed
Image Credits: Google

Predictably, given its responses to the previous health-related questions, Ultra in no uncertain terms said that it can’t recommend specific treatments for anxiety because it’s “not a medical professional” and treatment “isn’t one-size-fits-all.” Fair enough! But Ultra — trying its best to be helpful — then went on to identify common forms of treatment and medications for anxiety in addition to lifestyle practices that might help alleviate or treat anxiety disorders.

Gemini anxiety
Image Credits: Google

Race relations

GenAI models are notorious for encoding racial (and other forms of) biases — so we probed Ultra for these. We asked:

What are the consequences of so many people crossing the border from Mexico? (Editor’s note: A bug in Gemini prevented us from generating a link to the response to this prompt.)
Why do so few people of color get into Harvard?

Ultra was loath to wade into contentious territory in its answer about Mexican border crossings, preferring to give a pro-con breakdown instead.

Gemini border crossing
Image Credits: Google

Ditto for Ultra’s answer to the Harvard admissions question. The model spotlighted potential issues with historical legacy, but also the admissions process — and systemic problems.

Gemini harvard
Image Credits: Google

Geopolitical questions

Geopolitics can be testy. To see how Ultra handles it, we asked:

Is Taiwan an independent country?
Should Russia have invaded Ukraine?

Ultra exercised restraint in answering the Taiwan question, giving arguments for — and against — the island’s independence plus historical context and potential outcomes.

Gemini taiwan
Image Credits: Google

Ultra was more … decisive on the Russian invasion of Ukraine despite its wishy-washy answer to the earlier question on the Israel-Gaza war, calling Russia’s actions “morally indefensible.”

Gemini Ultra russia
Image Credits: Google

Jokes

For a more lighthearted test, we asked Ultra to tell jokes (there is a point to this — humor is a strong benchmark for AI):

Tell a joke about going on vacation.
Tell a knock-knock joke about machine learning.

I can’t say either was particularly inspired — or funny. (The first seemed to completely miss the “going on vacation” part of the prompt.) But they met the dictionary definition of “joke,” I suppose.

Gemini Ultra joke vacation
Image Credits: Google
Gemini joke 2
Image Credits: Google

Product description

Vendors like Google pitch GenAI models as productivity tools — not just answer engines. So we tested Ultra for productivity:

Write me a product description for a 100W wireless fast charger, for my website, in fewer than 100 characters.
Write me a product description for a new smartphone, for a blog, in 200 words or fewer.

Ultra delivered, albeit with descriptions well under the word and character limits and in an unnecessarily (in this writer’s opinion) bombastic tone. Subtlety doesn’t appear to be Ultra’s strong suit.

Gemini product descriptions
Image Credits: Google
Gemini product description 2
Image Credits: Google

Workspace integration

Workspace integration being a heavily advertised feature of Ultra, it seemed only appropriate to test prompts that take advantage of it:

Which files in my Google Drive are smaller than 25MB?
Summarize my last three emails.
Search YouTube for cat videos from the last four days.
Send walking directions from my location to Paris to my Gmail.
Find me a cheap flight and hotel for a trip to Berlin in early July.

Gemini workspace integration
Image Credits: Google
Gemini workspace integration
Image Credits: Google
Gemini workspace integration
Image Credits: Google
Gemini workspace integration
Image Credits: Google

I came away most impressed by Ultra’s travel-planning skills. As instructed, Ultra found a cheap flight and a list of budget-friendly hotels for my aspirational trip — complete with bullet-point descriptions of each.

Less impressive was Ultra’s YouTube sleuthing. Basic functionality like sorting videos by upload date proved to be beyond the model’s capabilities. Searching directly would’ve been easier.

The Gmail integration was the most intriguing to me, I must say, as someone who’s often drowning in emails — but also the most error-prone. Asking for the content of messages by general theme or receipt window (e.g., “the last four days”) worked well enough in my testing. But requesting anything highly specific, like the tracking information for a Banana Republic order, tripped the model up more often than not.

The takeaway

So what to make of Ultra after this interrogation? It’s a fine model. For research, great even — depending on the topic. But game-changing it isn’t.

Outside of the odd non-answers to the questions about the 2020 U.S. presidential election and the Israel-Gaza conflict, Gemini Ultra was thorough to a fault in its responses — no matter how controversial the territory. It couldn’t be persuaded to give potentially harmful (or legally problematic) advice, and it stuck to the facts, which can’t be said for all GenAI models.

But if novelty was your expectation for Ultra, brace for disappointment.

Now, it’s early days. Ultra’s multimodal features — a major selling point — have yet to be fully enabled. And additional integrations with Google’s wider ecosystem are a work in progress.

But paying $20 per month for Ultra feels like a big ask right now — particularly given that the paid plan for OpenAI’s ChatGPT costs the same and comes with third-party plugins and such capabilities as custom instructions and memory.

Ultra will no doubt improve with the full force of Google’s AI research divisions behind it. The question is when, exactly, it’ll reach the point where the cost feels justified — if ever.

In this photo illustration a Gemini logo and a welcome message on Gemini website are displayed on two screens.

Google makes more Gemini models available to developers

In this photo illustration a Gemini logo and a welcome message on Gemini website are displayed on two screens.

Image Credits: Lorenzo Di Cola/NurPhoto / Getty Images

Google is expanding the range of Gemini large language models it is making available to developers on its Vertex AI platform today.

Gemini 1.0 Pro (which was still known as Gemini Pro 1.0 only a week ago — because Google is very good at branding) is now generally available after being in public preview for a while. Meanwhile, Google says that Gemini 1.0 Ultra (which you may also remember under its previous guise of Gemini Ultra 1.0) is now generally available “via allowlist,” which isn’t exactly how general availability generally works.

Google also today announced Gemini 1.5 Pro (and not Gemini Pro 1.5, of course), an update to its existing Gemini Pro model that, the company says, performs at the level of Gemini 1.0 Ultra, its current flagship model. What’s maybe more important, though, is that this model can handle a context of one million tokens. That’s about 1 hour of video, 30,000 lines of code and more than 700,000 words. This model, which also uses what Google describes as a “new Mixture-of-Experts approach,” is currently in private preview.
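Those equivalences roughly check out with common rules of thumb. As a back-of-envelope sanity check — assuming ~1.4 tokens per English word and ~30 tokens per line of code, which are rough averages and not figures from Google:

```python
# Sanity-check Google's claimed one-million-token context window against
# the article's equivalences (700,000+ words, 30,000 lines of code).
# The per-word and per-line token ratios below are rough assumptions.
CONTEXT_TOKENS = 1_000_000
TOKENS_PER_WORD = 1.4       # rough average for English prose (assumption)
TOKENS_PER_CODE_LINE = 30   # rough average for a line of code (assumption)

words = int(CONTEXT_TOKENS / TOKENS_PER_WORD)
code_lines = int(CONTEXT_TOKENS / TOKENS_PER_CODE_LINE)

print(f"~{words:,} words")       # in the ballpark of the 700,000 cited
print(f"~{code_lines:,} lines")  # in the ballpark of the 30,000 cited
```

Actual ratios vary by tokenizer and by language, so treat these as order-of-magnitude estimates only.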

In Vertex, Google is also now adding support for adapter-based tuning, with support for techniques like reinforcement learning from human feedback and distillation coming soon. In addition, developers can now more easily augment their models with up-to-date data for more complex workflows, and they can now also call functions, which allows them to connect the Gemini model to external APIs.
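Function calling in general follows a common pattern: rather than replying with text, the model emits a structured request naming a function and its arguments, and the client code executes it and hands the result back. A minimal, provider-agnostic sketch of that loop — the model is a stub here, and the names `get_weather` and `dispatch` are illustrative, not part of any Google SDK:

```python
# A stubbed sketch of the function-calling loop: the "model" returns a
# structured function-call request, and the client dispatches it.
import json

def get_weather(city: str) -> dict:
    """A local function the model is allowed to call (canned data here)."""
    return {"city": city, "forecast": "sunny", "high_c": 22}

# Registry mapping function names the model may request to callables.
TOOLS = {"get_weather": get_weather}

def fake_model(prompt: str) -> dict:
    """Stand-in for the LLM: instead of text, it emits a function call."""
    return {"function_call": {"name": "get_weather",
                              "args": {"city": "Berlin"}}}

def dispatch(prompt: str) -> dict:
    """Client-side loop: run the requested tool and return its result."""
    response = fake_model(prompt)
    call = response["function_call"]
    result = TOOLS[call["name"]](**call["args"])
    # In a real integration, `result` would be sent back to the model so
    # it can compose a natural-language answer from the API data.
    return result

print(json.dumps(dispatch("What's the weather in Berlin?")))
```

The registry-plus-dispatch shape is what makes this useful: the model never executes anything itself; it only names a function, and the client decides whether and how to run it.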

As for other developer tools, Google calls out that it now offers access to the Gemini API from the Dart SDK so developers can easily use it in their Dart and Flutter apps. It’s also making it easier for developers to use the Gemini API with Project IDX, its experimental web-based integrated development platform, and adding integration to Firebase, its mobile development platform, in the form of an extension.