MWC 2024: Nothing enters the budget range with Phone (2a)

Image Credits: Nothing

Nothing isn’t one to be quiet about new releases. The London-based phone company’s media push largely relies on trickling out information about devices bit by bit. It’s been a solid strategy thus far (if a bit annoying for those of us who cover this world), as so many of its announcements have been first-gen products, each generating buzz beyond the company’s loyal fanbase.

Nothing Phone (2a) certainly fits the bill. While it’s actually the company’s third handset, it’s aimed squarely at a different demographic than the flagship Phone (1) and Phone (2). The “2a” bit, as you’ve likely gathered from other handset lines, implies a budget focus. In recent years, that’s mostly been a game of deciding which flagship features can be sacrificed to reduce the price, while keeping the device as close to a premium feel as possible.

After various teases and a handful of official image releases, the Phone (2a) finally saw the light of day (well, the warm glow of a Barcelona night) at MWC 2024. More specifically, it was a guest of honor at last night’s Nothing after-show party, glowing up in all of its low-priced glory inside a glass box. Otherwise, Nothing has been lying low at the big mobile trade show, opting out of a floor presence.

To quote Operation Ivy paraphrasing Plato’s account of Socrates, “all I know is that I don’t know Nothing.” Details are few and far between at the moment. That said, the design does tell us a good amount about the product. For starters, Nothing has unsurprisingly retained some of the transparent aesthetic of the rest of the line. The light-up glyphs are back as well — though they cover a lot less surface area than the other models, relegated to a trio of bands up top.

Phone (2a) keeps the Phone (2)’s dual-camera setup, though it’s been moved to the center. I’m curious to hear whether that’s primarily a pragmatic decision or an aesthetic one. With Nothing being so focused on design, I wouldn’t be surprised if it was moved simply to distinguish the device from its flagships. Whatever the case, this is a good-looking and (it appears) solidly built budget phone. The rear may be a bit busy for some, but — as ever — I appreciate what Nothing has done to break away from the same design most manufacturers have settled into.

We don’t know specifics on the camera setup beyond number and orientation, but I wouldn’t be surprised if it’s a step down from the Phone (2), as camera configurations certainly contribute to manufacturing price. We do know, however, that the phone will be powered by a MediaTek Dimensity 7200 Pro chip — a variant built specifically for the device.

Price is very much still an open question — and an important one at that.


Google's Gemini 1.5 Pro enters public preview on Vertex AI

Google Gemini 1.5 Pro presentation onstage at Google Cloud Next

Image Credits: Frederic Lardinois/TechCrunch

Gemini 1.5 Pro, Google’s most capable generative AI model, is now available in public preview on Vertex AI, Google’s enterprise-focused AI development platform. The company announced the news during its annual Cloud Next conference, which is taking place in Las Vegas this week.

Gemini 1.5 Pro launched in February, joining Google’s Gemini family of generative AI models. Undoubtedly its headlining feature is the amount of context it can process: from 128,000 tokens up to 1 million tokens, where “tokens” refers to subdivided bits of raw data (like the syllables “fan,” “tas” and “tic” in the word “fantastic”).

One million tokens is equivalent to around 700,000 words or around 30,000 lines of code. That’s about four times the amount of data that Anthropic’s flagship model, Claude 3, can take as input and about eight times the maximum context of OpenAI’s GPT-4 Turbo.
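Those conversions can be turned into a back-of-the-envelope estimator. The ratios below are derived solely from the figures Google cites (1 million tokens ≈ 700,000 words ≈ 30,000 lines of code); this is an illustrative sketch, not an official tokenizer:

```python
# Ratios implied by the stated equivalences:
# 1,000,000 tokens ~= 700,000 words ~= 30,000 lines of code.
WORDS_PER_TOKEN = 700_000 / 1_000_000   # ~0.7 words per token
TOKENS_PER_LINE = 1_000_000 / 30_000    # ~33 tokens per line of code

def tokens_for_words(word_count: int) -> int:
    """Rough estimate of how many tokens a body of prose consumes."""
    return round(word_count / WORDS_PER_TOKEN)

def tokens_for_code(line_count: int) -> int:
    """Rough estimate of how many tokens a codebase consumes."""
    return round(line_count * TOKENS_PER_LINE)

# A 90,000-word novel comes to roughly 129,000 tokens -- already
# brushing up against a 128K window, but a fraction of 1 million.
print(tokens_for_words(90_000))
print(tokens_for_code(10_000))
```

By this rough math, even a 20,000-line code library stays comfortably inside the 1 million-token window.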

A model’s context, or context window, refers to the initial set of data (e.g. text) the model considers before generating output (e.g. additional text). A simple question — “Who won the 2020 U.S. presidential election?” — can serve as context, as can a movie script, email, essay or e-book.

Models with small context windows tend to “forget” the content of even very recent conversations, leading them to veer off topic. This isn’t necessarily so with models with large contexts. And, as an added upside, large-context models can better grasp the narrative flow of data they take in, generate contextually richer responses and reduce the need for fine-tuning and factual grounding — hypothetically, at least.
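The “forgetting” described above falls out of a simple mechanic: when a conversation outgrows the window, the oldest turns are typically dropped before the model ever sees them. A minimal sketch of that sliding-window truncation, using hypothetical helper names and a toy one-token-per-word tokenizer:

```python
from collections import deque

def truncate_history(turns: list[str], max_tokens: int) -> list[str]:
    """Keep only the most recent turns that fit in the context window.

    Toy tokenizer: one token per whitespace-separated word.
    """
    kept: deque[str] = deque()
    budget = max_tokens
    # Walk the conversation newest-first, keeping turns while they fit.
    for turn in reversed(turns):
        cost = len(turn.split())
        if cost > budget:
            break
        kept.appendleft(turn)
        budget -= cost
    return list(kept)

history = [
    "User: My name is Ada and I work on compilers.",
    "Bot: Nice to meet you, Ada!",
    "User: What optimizations should I study first?",
    "Bot: Start with constant folding and dead-code elimination.",
    "User: What was my name again?",
]

# With a tiny window, the turn that stated the name is dropped --
# the model literally never receives it, hence the "forgetting".
print(truncate_history(history, max_tokens=20))
```

With a window large enough to hold the whole exchange, nothing is dropped and the earlier turns remain available to the model.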

So what specifically can one do with a 1 million-token context window? Lots of things, Google promises, like analyzing a code library, “reasoning across” lengthy documents and holding long conversations with a chatbot.

Because Gemini 1.5 Pro is multilingual — and multimodal in the sense that it’s able to understand images and videos and, as of Tuesday, audio streams in addition to text — the model can also analyze and compare content in media like TV shows, movies, radio broadcasts, conference call recordings and more across different languages. One million tokens translates to about an hour of video or around 11 hours of audio.
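Google’s stated equivalences (1 million tokens for about an hour of video or around 11 hours of audio) imply per-second token rates, which make for a quick sanity check on whether a recording fits in the window. The arithmetic below simply restates those figures; it is not an official Gemini accounting:

```python
# Per-second rates implied by the stated equivalences:
# 1,000,000 tokens ~= 1 hour of video ~= 11 hours of audio.
VIDEO_TOKENS_PER_SEC = 1_000_000 / 3_600          # ~278 tokens/sec
AUDIO_TOKENS_PER_SEC = 1_000_000 / (11 * 3_600)   # ~25 tokens/sec

def fits_in_window(seconds: float, kind: str, window: int = 1_000_000) -> bool:
    """Rough check: does a clip of this length fit in the context window?"""
    rate = VIDEO_TOKENS_PER_SEC if kind == "video" else AUDIO_TOKENS_PER_SEC
    return seconds * rate <= window

print(fits_in_window(45 * 60, "video"))    # a 45-minute video
print(fits_in_window(12 * 3600, "audio"))  # a 12-hour audio recording
```

By these rates, a feature-length film fits in a single prompt, while a 12-hour recording would need to be split up.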

Thanks to its audio-processing capabilities, Gemini 1.5 Pro can generate transcriptions for video clips, as well, although the jury’s out on the quality of those transcriptions.

In a prerecorded demo earlier this year, Google showed Gemini 1.5 Pro searching the transcript of the Apollo 11 moon landing telecast (which comes to about 400 pages) for quotes containing jokes, and then finding a scene in movie footage that looked similar to a pencil sketch.

Google says that early users of Gemini 1.5 Pro — including United Wholesale Mortgage, TBS and Replit — are leveraging the large context window for tasks spanning mortgage underwriting; automating metadata tagging on media archives; and generating, explaining and transforming code.

Gemini 1.5 Pro doesn’t process a million tokens at the snap of a finger. In the aforementioned demo, each search took between 20 seconds and a minute to complete — far longer than the average ChatGPT query.

Google previously said that latency is an area of focus, though, and that it’s working to “optimize” Gemini 1.5 Pro as time goes on.

Of note, Gemini 1.5 Pro is slowly making its way to other parts of Google’s corporate product ecosystem, with the company announcing Tuesday that the model (in private preview) will power new features in Code Assist, Google’s generative AI coding assistance tool. Developers can now perform “large-scale” changes across codebases, Google says, for example updating cross-file dependencies and reviewing large chunks of code.