LinkedIn rolls out new job search features to make it easier to find relevant opportunities

photo illustration a screen displays LinkedIn logo

Image Credits: Ali Balikci / Anadolu Agency (opens in a new window) / Getty Images

LinkedIn is rolling out new tools that are designed to help people find relevant job opportunities, the company announced on Wednesday. A new “Job Collections” feature will allow users to expand their job options by exploring collections of relevant jobs across a variety of industries and companies that they may have been unaware of. Plus, the platform is launching a new “Preferences” page that makes it easier to select and manage your preferences.

In a blog post, LinkedIn noted that job applications are up an average of 16% per person per job, and that competition is heating up. The platform’s new Job Collections feature aims to make it easier for people to find opportunities that are a fit for them. To get started, visit the Jobs tab on LinkedIn and look for “Explore with Job Collections.” From there, click on any of the collections that align with your interests, such as jobs that offer remote work or good parental leave.

Scrolling through LinkedIn's new job collections feature
Image Credits: LinkedIn

You also can look at industry-specific collections, such as food & beverage, healthcare, media, pro sports and more. If you’re someone who has always worked in large companies but want to try something new, you can choose to look at job opportunities at startups and small businesses.

As for the new preferences page, LinkedIn says users can now manage their preferences in one place to ensure the best possible chances of an ideal job match. The page can be found at the top of the Jobs tab on mobile and in the left rail on desktop. Once you set your preferences, LinkedIn will highlight them in green on every job details page so you can quickly determine whether a role aligns with what you’re looking for.

The preferences you can set include employment type (full-time, part-time, contract, etc.) and location type (remote, hybrid, on-site). Plus, you can set a minimum pay preference if you’re in the United States. LinkedIn plans to add more preferences to the page in the future.

LinkedIn is also introducing a new “I’m Interested” button that allows you to privately express interest in working for a company without having to apply for a specific role. You can do so even if there aren’t any open roles at the company. After you have signaled your interest, recruiters at the company may look at your profile when looking for candidates. LinkedIn recommends expressing your interest for your top 10-20 companies.

Today’s announcements come as LinkedIn is beginning to roll out a new AI-powered LinkedIn Premium experience that is designed to help people quickly assess if a job opening is a good fit for them. The tool can also help users identify the best way to position themselves for any job, while also learning more about the company and industry.

LinkedIn is rolling out these new tools as a new wave of layoffs has hit the tech industry. Last week, Google laid off over 1,000 employees across its Google Assistant division and the team that manages Pixel, Nest and Fitbit hardware. Also last week, Audible laid off 5% of its workforce, Discord laid off 17% of its staff and Amazon laid off “several hundreds” of employees at Prime Video and MGM Studios.

LinkedIn, now at 1B users, turns on OpenAI-powered reading and writing tools

Brave integrates its own search results with its Leo AI assistant

Brave search Leo integration

Image Credits: Brave

Privacy-focused search engine and web browser company Brave Software is integrating search results into its Leo chatbot. Search results are based on the Brave Search API and Leo is integrated into the company’s browser. The company said that this integration will help users find more up-to-date information.

People can use this integration to fetch information like the latest scores, pull more context related to the topic while reading an article and search recent and relevant topics to create social media posts.

Image Credits: Brave

Brave emphasized that the integration is privacy-forward because the company doesn’t require users to log in. Plus, it doesn’t store your conversations with the AI chatbot on its server. Brave said it doesn’t use the responses to train its model. The company added that it sends requests to an anonymization server first to hide a user’s identity.
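The relay described above can be illustrated with a short sketch. This is not Brave's actual implementation, just a minimal example of the pattern: an intermediary strips identifying metadata before forwarding a request, so the server answering the query sees what was asked but not who asked.

```python
# Illustrative sketch, not Brave's actual implementation: an anonymizing
# relay drops identifying headers before forwarding a chat request, so the
# upstream model server sees the query but not the user's identity.

IDENTIFYING_HEADERS = {"cookie", "authorization", "x-forwarded-for", "user-agent"}

def anonymize_request(headers: dict, body: str) -> dict:
    """Drop identifying headers and return a request safe to forward upstream."""
    clean = {k: v for k, v in headers.items() if k.lower() not in IDENTIFYING_HEADERS}
    return {"headers": clean, "body": body}

req = anonymize_request(
    {"Cookie": "session=abc", "User-Agent": "Brave/1.61", "Accept": "application/json"},
    "latest scores in today's match",
)
print(req["headers"])  # {'Accept': 'application/json'}
```

The key design point is that identity-bearing metadata is removed at the relay, before the request ever reaches the server that generates the answer.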

You can also purchase Leo Premium for $14.99 per month, which gives you higher rate limits and access to the latest models. The company said it issues unlinkable tokens when you buy the subscription so the purchase can’t be tied to your identity.

Brave has tried to build its AI capability to attract more users to its browser and search engine. The company launched AI-powered summarization for its search product last year to show users the gist of answers to a search query. It also made the Leo AI Assistant available to everyone in November 2023. In April, the company introduced an AI Answer engine to search to answer users’ queries.

Other browsers such as Edge, Opera, Arc and SigmaOS have integrated AI to varying degrees. Brave likely wants to use its advantage of owning both a browser and search stack to refine its product.

Google introduces 'Circle to Search,' a new way to search from anywhere on Android using gestures

Image Credits: Samsung

Alongside Samsung’s launch event today, Google announced a new way to search on Android phones dubbed “Circle to Search.” The feature will allow users to search from anywhere on their phone by using gestures like circling, highlighting, scribbling or tapping. The addition, Google explains, is designed to make it more natural to engage with Google Search at any time a question arises — like when watching a video, viewing a photo inside a social app or having a conversation with a friend over messaging, for example.

Image Credits: Google

Circle to Search is something of a misnomer for the new Android capability, as it’s more about interacting with the text or image on the screen to kick off the search…and not always via a “circling” gesture.

The circling gesture is just one option you can use to initiate a search — such as when you want to identify something in a video or photo. For example, if you’re watching a food video featuring a Korean corn dog, you could ask “Why are these so popular?” after circling the item.

Image Credits: Google
Image Credits: Google

The feature can be engaged through other gestures, as well. If you’re chatting in a messaging app with a friend about a restaurant, you could simply tap on the name of the restaurant to see more details about it. Or you could swipe across a series of words to turn that string into a search, the company explains, like the term “thrift flip” that appears when watching a YouTube Shorts video about thrifting, another example shows.

Image Credits: Google

When interested in something visual on your screen, you can circle or scribble across the item. For instance, Google suggests that you could circle the sunglasses a creator wore in their video or scribble on their boots to look up related items on Google without needing to switch between different apps. The scribble gesture can be used on both images and words, Google also notes.

The search results users will see will differ based on their query and the Google Labs products they opt into. For a simple text search, you may see traditional search results while a query that combines an image and text — or “multisearch” as Google calls it — uses generative AI. And if the user is participating in Google’s Search Generative Experience (SGE) experiment offered via Google Labs, they’ll be offered AI-powered answers, as with other SGE queries.

Image Credits: Google

The company believes the ability to access search from any app will be meaningful, as users will no longer have to stop what they’re doing to start a search or take a screenshot as a reminder to search for something later.

However, the feature is also arriving at a time when Google Search’s influence is waning. The web has been taken over by SEO-optimized pages and spam, making it more difficult to find answers via search. At the same time, generative AI chatbots are now being used to augment or even supplant traditional searches. The latter could negatively impact Google’s core advertising business if more people begin to get their answers elsewhere.

Image Credits: Google

Turning the entire Android phone platform into a surface for search, then, is more than just a “meaningful” change for consumers — it’s something of an admission that Google’s Search business needs shoring up by deeper integration with the smartphone OS itself.

The feature was one of several Google AI announcements across Gemini, Google Messages and Android Auto announced at today’s event. It also arrives alongside a new AI-powered overview feature for multisearch in Google Lens.

Circle to Search will launch on January 31 on the new Galaxy S24 Series smartphones, announced today at the Samsung event, as well as on premium Android smartphones, including the Pixel 8 and Pixel 8 Pro. It will be available in all languages and locations where the phones are available. Over time, more Android smartphones will support the feature, Google says.

Samsung’s Galaxy S24 line arrives with camera improvements and generative AI tricks

Walmart debuts generative AI search and AI replenishment features at CES

seen in Landover, Maryland,

Image Credits: SAUL LOEB/AFP / Getty Images

In a keynote address at the Consumer Electronics Show in Las Vegas, Walmart president and CEO Doug McMillon and other Walmart execs offered a glimpse as to how the retail giant was putting new technologies, including augmented reality (AR), drones, generative AI and other artificial intelligence tech to work in order to improve the shopping experience for customers.

At the trade show, the company revealed a handful of new products, including two AI-powered tools for managing product search and replenishment, as well as a new beta AR social commerce platform called “Shop with Friends.” It also highlighted how it was using AI in other areas of its business, including within Sam’s Club and in apps used by store associates.

Most notably, Walmart is launching a new generative AI search feature on iOS that will allow customers to search for products by use cases, instead of by product or brand names. For example, you could ask Walmart to return search results for things needed for a “football watch party,” instead of specifically typing in searches for chips, wings, drinks or a 90-inch TV. These enhanced search results will span categories, rivaling Google’s SGE (Search Generative Experience), which can recommend products and show various factors to consider, along with reviews, prices, images and more.
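To make the idea concrete, here is a toy illustration of use-case search. It is not Walmart's system: the hand-curated dictionary below stands in for the generative model that would actually map a broad intent to products across categories.

```python
# Toy illustration (not Walmart's implementation) of use-case search:
# one broad intent maps to items spanning categories, instead of a
# separate query per product. The mapping is a hypothetical stand-in
# for a generative model.

USE_CASES = {
    "football watch party": ["chips", "wings", "soda", "90-inch TV"],
    "unicorn-themed birthday party": ["unicorn napkins", "balloons", "streamers"],
}

def use_case_search(query: str) -> list[str]:
    """Return items for the first known use case mentioned in the query."""
    q = query.lower()
    for intent, items in USE_CASES.items():
        if intent in q:
            return items
    return []

print(use_case_search("everything I need for a football watch party"))
# ['chips', 'wings', 'soda', '90-inch TV']
```

The point of the pattern is that the query expresses an occasion, and the system does the category-spanning decomposition on the shopper's behalf.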

Image Credits: Walmart

Ahead of CES, the company had demonstrated an AI shopping assistant that would let customers interact with a chatbot as they shopped, to ask questions and receive personalized product suggestions, as well. At the time, Walmart teased that a generative AI-powered search feature was also in the works. It suggested customers could ask for things like a “unicorn-themed birthday party” and get results like unicorn-themed napkins, balloons, streamers and more. Now the feature is rolling out on mobile devices, iOS first.

Another potentially promising use of AI involves the replenishment of frequently ordered items.

Walmart will initially test this use case with Walmart InHome Replenishment, which combines AI with the company’s existing replenishment expertise to automatically build online shopping carts stocked with items customers regularly order. Because the feature is only available through the InHome program, those items are delivered to a customer’s fridge in their kitchen or garage using the smart lock-powered InHome delivery service. Walmart notes it will not be a subscription service.

The company also noted that customers will be able to remove items from their basket as needed, and the service will adjust to customers’ changing needs over time.

Image Credits: Walmart

However, if the feature works well, it’s not hard to imagine how it could be put to use to offer replenishment of other household items as well, similar to Amazon’s Subscribe & Save.

Surprisingly, Amazon has not yet leveraged AI to do the same (i.e. to augment or replace Dash Replenishment). However, the online retailer has been putting AI to work in other ways, including by helping connect customers with the right product by summarizing product reviews, highlighting key attributes or helping them find clothes that fit. 

Image Credits: Walmart

Another new Walmart product making a debut at CES is “Shop with Friends,” an AR shopping tool that lets customers share virtual outfits they create with their friends and then get feedback on their finds. The tool combines Walmart’s AI-powered virtual try-on tech, launched last year, with social features.

Image Credits: Walmart

CEO Doug McMillon referred to the suite of new products as something he called “adaptive retail” — that is, retail experiences that are personalized and flexible.

“While omnichannel retail has been around for decades, this new type of retail — adaptive retail — takes it a step further,” said Suresh Kumar, global chief technology officer and chief development officer at Walmart Inc., in a statement shared ahead of the CES keynote. “It’s retail that is not only e-commerce or in-store, but a single, unified retail experience that seamlessly blends the best aspects of all channels. And for Walmart, adaptive retail is rooted in a clear focus on people,” he said.

The company touched on other ways it’s employing AI, as well.

Walmart’s Sam’s Club will introduce an AI and computer vision-powered technology that helps solve the problem of waiting in line for receipt verification when exiting the store. The pilot, currently running in 10 locations, will confirm members have paid for their items without requiring a store associate to check their carts. Instead, computer vision tech will capture images of customers’ carts and AI will speed the process of matching cart items to sales. Walmart expects to bring the tech to its nearly 600 clubs by year-end.

Image Credits: Walmart

In another area, Walmart’s generative AI tool for store associates, My Assistant, will be expanded to 11 countries outside the U.S. in 2024, where it will work in employees’ native languages. Already, the tool has become available in Canada, Mexico, Chile, Costa Rica, El Salvador, Honduras, Guatemala and Nicaragua and is on track for launches in India and South Africa. My Assistant helps employees with writing, summarizing large documents and offering “thought starters” to spark creativity, Walmart says.

Image Credits: Walmart

On the matter of AI, McMillon stressed that the company wouldn’t prioritize the technology without considering the potential implications. Instead, Walmart’s “underlying principle is that we should use technology to serve people and not the other way around,” he said.

Still, McMillon admitted that AI will mean some jobs will be eliminated.

“No doubt some tasks will go away and some roles will change. And some of them should, like the ones that involve lifting heavy weights or doing repetitive tasks,” the exec explained. “As that’s happening, we’re designing new roles that our associates tell us are more enjoyable and satisfying, and also often result in higher pay. So we’re investing to help our associates transition to this shared future,” McMillon added.

During the keynote, McMillon also brought Microsoft CEO Satya Nadella onstage, after announcing that Walmart used large language models from Azure OpenAI alongside its own retail-specific models.

Nadella spoke broadly about the breakthroughs made possible with generative AI, including in areas like coding, productivity apps, like Microsoft’s own, healthcare, education and more, adding that with new technology, “one has to be mindful that you want to be able to amplify the opportunity with it…and then also be very mindful of the unintended consequences of this technology.”

Outside of AI, Walmart is looking to other new technology for faster deliveries.

The company announced it’s expanding its drone delivery service in the Dallas-Ft. Worth metro to 1.8 million households, or 75% of the metroplex area. The deliveries, which take place in 30 minutes or less, are powered by Wing and Zipline. Walmart also notes that 75% of the 120,000 items in a Walmart Supercenter meet the size and weight requirements for drone delivery. To date, Walmart has done over 20,000 drone deliveries in its two-year trial.

Read more about CES 2024 on TechCrunch

Walmart experiments with generative AI tools that can help you plan a party or decorate

Slack adds AI-fueled search and summarization to the platform

Cubes with Slack name and logo stacked on one another with city view in background.

Image Credits: Ron Miller

As an enterprise communications platform, Slack has become a de facto storage repository for institutional knowledge, but getting at that information has been challenging with conventional search tools. Today Slack introduced a couple of new features designed to make that information more accessible, including a new AI-fueled search tool and the ability to summarize information inside channels.

Noah Weiss, the chief product officer at Slack, says the platform naturally gathers corporate information in an informal and unstructured way. The problem is finding a way to surface that hidden trove of knowledge. “The punchline to all of this is that now this new wave of generative AI capabilities allows us to extract a whole new set of meaning and intelligence out of all that analysis that has been created for years [on our platform],” Weiss told TechCrunch.

Last May Slack announced that it was incorporating generative AI into the platform at the Salesforce World Tour in New York City. It was more of a generalized call to action with the creation of SlackGPT, its own flavor of generative AI designed specifically for content on the Slack platform.

Today’s announcement is about putting that to work in more specific ways. Weiss says being able to summarize channel content helps employees catch up after time off, or simply avoid having to read a long thread to get the gist of the conversation. With channel summaries, you can ask for a summary and Slack’s AI model generates a summary of all the topics discussed along with references to show how the model created each part of the summary, which Weiss says was an essential part of the design of this feature.

“You can drill into any area and we show you all the detailed context. So we were really thinking about transparency, building trust, making sure that we show our work and giving people the ability to drill in to learn more if they want to,” he said.

Gif showing how the Slack Summarize AI feature works.
Image Credits: Slack

The company also allows users to ask questions in a natural way, just as with ChatGPT, but it uses Slack content instead of more generalized internet content, so a user could ask a question like ‘What is Project Gizmo?’ Slack AI then delivers an answer, again with sourcing, to let people see where the answer came from and if they can trust it.

Slack AI search asking what is project gizmo and giving an answer derived from the Slack answer archive.
Image Credits: Slack

Each answer includes a quality check, where users can say if the answer was good, bad or neutral, so the model can learn about the quality of the responses, and the system engineers can see how well the model is performing.
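The retrieve-then-cite pattern described above can be sketched in a few lines. This is a hypothetical illustration, not Slack's implementation: Slack AI uses large language models, while simple keyword overlap stands in for retrieval here. The essential behavior is the same, an answer is always returned alongside its sources so readers can verify it.

```python
import re

# Hypothetical sketch of retrieve-then-cite Q&A over a message archive:
# score each message by keyword overlap with the question, answer from
# the top hit, and return the source channels alongside the answer.

def search_with_sources(question: str, archive: list[dict]) -> dict:
    terms = set(re.findall(r"\w+", question.lower()))
    scored = sorted(
        ((len(terms & set(re.findall(r"\w+", m["text"].lower()))), m) for m in archive),
        key=lambda pair: -pair[0],
    )
    hits = [m for score, m in scored if score > 0]
    return {
        "answer": hits[0]["text"] if hits else "No relevant messages found.",
        "sources": [m["channel"] for m in hits],
    }

archive = [
    {"channel": "#eng", "text": "Project Gizmo is our Q3 billing rewrite"},
    {"channel": "#random", "text": "lunch at noon?"},
]
result = search_with_sources("What is Project Gizmo?", archive)
print(result["answer"])   # Project Gizmo is our Q3 billing rewrite
print(result["sources"])  # ['#eng']
```

Returning the sources with every answer is what enables the transparency Weiss describes: users can drill into the cited messages instead of trusting the answer blindly.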

Weiss wouldn’t get into specifics about the underlying model, saying only that it was a mix of large language models. “What we found is that they all perform in different ways and with different speed and quality characteristics. We spent a lot of time fine-tuning the models for the data that we actually have in Slack, and also doing a lot on the prompt engineering side.”

Slack AI with search and summarization is an add-on product for enterprise plans, meaning that it will cost additional money over and above the normal license cost. Slack did not provide cost details, but it’s available today in the U.S. and U.K., English only for now, but will be available in additional languages in the near future, according to the company.

WhatsApp now lets you search conversations by date on Android

WhatsApp filter

Image Credits: WhatsApp

WhatsApp announced today that it is rolling out a “search by date” function for individual and group chats on Android devices. The feature has been available on other platforms, including iOS, Mac desktop and WhatsApp Web.

Mark Zuckerberg shared the announcement on his WhatsApp channel with a video of him searching for an old chat about karaoke. Note that users can only search for chats from a specific date; specifying a date range isn’t supported.

To use the feature, users must go to one-on-one or group chat details by tapping on the contact or the group name. To search by date, they have to tap on the search button and then tap the calendar icon.

WhatsApp has a search-by-date feature for individual and group chats now. Image Credits: WhatsApp

Users can already search through conversations by media type such as links, media and docs through the conversation detail page.

WhatsApp has been testing other search-related features as well. In December, WABetaInfo noted that the company is experimenting with chat filters such as “All,” “Unread,” “Contacts” and “Groups.” Earlier this month, the blog said that the company was developing another filter called “Favorites” to quickly check starred messages.

Last week, WhatsApp announced support for new text formatting options such as bulleted lists, numbered lists, block quotes and inline code for individual and group chat along with Channels.

Danti's natural language search engine for Earth data soars with $5M in new funding

Image Credits: Danti (opens in a new window)

Danti, an artificial intelligence company building a superpower search engine for Earth data, has brought on prominent defense tech investor Shield Capital as it looks to scale its technology for government customers.

Founded by Jesse Kallman in early 2023, Danti has developed a natural language search engine for data that has historically been highly siloed, like satellite imagery, collating it with other commercial and government sources to report back across multiple sources and domains.

For example, an analyst can pose a complex question in simple language, like “What are the latest tank movements in Eastern Ukraine?” and receive in turn straightforward answers collated across data sources.

The idea is to empower a single analyst to do more, Kallman said in a recent interview. While American adversaries are throwing manpower at the problem of analyzing huge amounts of data, Danti aims to help “one analyst do the work of 10 or 15,” he said. It means that a relatively straightforward question — where is a particular ship off the coast of Lagos, Nigeria, for example — can potentially be answered in seconds, rather than hours.

“We’re not replacing the analysts,” Kallman clarified. “We’re helping them do their work way faster, so that they can get to the part that humans are way better at, which is synthesizing and deciding, ‘What do I now do about this information? How do I want to report on it?’”

Among the startup’s early customers is the U.S. Space Force, which is using Danti’s product to help officers easily search and share data. The use of natural language models in the search engine means an intuitive, straightforward user experience; no doubt this is paramount in high-pressure situations where analysts must make complex decisions but have little time to trawl through reams of satellite or drone data.

Right now, Danti is squarely focused on government, though in the longer term it plans to roll out a version of its product for commercial industry. This version would focus on property records, parcel information, and risk data, to serve markets like electric utilities and insurance, Kallman said. Customers will also be able to connect their own information into Danti’s engine to use its natural language processing to query their own data.

The $5 million round was led by Shield Capital and includes participation from the startup’s existing investors Tech Square Ventures, Humba Ventures and Leo Polovets, Space.VC, and Radius Capital. Kallman said the startup looked deliberately for a defense-focused fund to lead its next round, particularly as the company looks to execute its government go-to-market plan and scale its engineering team.

Since last summer, when the company announced its $2.75 million pre-seed, the team has grown to over 20 people, and Kallman said the engineering team will grow even more with the new injection of funds.

Circle to Search is now a better homework helper

person holding phone using Google Circle to Search

Image Credits: Google

It’s a teacher’s worst nightmare: The AI is doing the kids’ homework. At the Google I/O 2024 developer conference on Tuesday, the company announced that its AI-powered Circle to Search feature, which allows Android users to get instant answers using gestures like circling, will now be able to solve more complex problems, including physics and math word problems.

The new capabilities are made possible thanks to Google’s new family of AI models for learning, LearnLM. 

The addition expands the capabilities of the new search feature, first introduced at Samsung’s Unpacked event in January. Circle to Search is designed to make it more natural to engage with Google Search from anywhere on the phone by taking some action — like circling, highlighting, scribbling, or tapping. But it’s also a way to ensure users engage with Google Search over other information-retrieval tools in the age of AI.

Today, Circle to Search will be able to better help kids with their homework directly from supported Android phones and tablets. Now if they get stuck on a particularly hard problem, they’ll be able to use Circle to Search to pull up step-by-step instructions that guide them through solving it. Google says the feature will be able to handle problems such as those with symbolic formulas, diagrams, graphs, and more.

Since launching earlier this year, Circle to Search has expanded to more Samsung and Google Pixel devices, making the feature available to more than 100 million devices in total. Google says it expects to double that number by year-end. 

Read more about Google I/O 2024 on TechCrunch