Exclusive: Heeyo built an AI chatbot to be a billion kids' interactive tutor and friend

Image Credits: Heeyo.ai

When Xiaoyin Qu was growing up in China, she was obsessed with learning how to build paper airplanes that could do flips in the air. Her parents, though, didn’t have the aerodynamic expertise to support her newfound passion, and her teachers were too overwhelmed to give her dedicated attention. 

“That’s why I wanted to build an AI that can help provide every single kid with their own dedicated coach and playmate that can chat with them and help the kids learn,” Qu told TechCrunch.

Qu is the founder of Heeyo, a startup that offers children between the ages of three and 11 an AI chatbot and over 2,000 interactive games and activities, including books, trivia and role-playing adventures. Heeyo also lets parents and kids design their own AI and create new learning games tailored to family values and kids’ interests — something for kids to do instead of playing Minecraft and Roblox and watching endless YouTube videos.

Heeyo came out of stealth on Thursday with a $3.5 million seed round from OpenAI Startup Fund, Alexa Fund, Pear VC and other investors, TechCrunch has exclusively learned. Its app is now available on Android and iOS tablets and smartphones globally.

I know what you’re thinking. AI for kids sounds creepy — dangerous even. What precautions is Heeyo taking to ensure kids’ safety? How is it protecting children’s data? How will talking to an AI chatbot affect a child’s mental health?

Qu says safety is at the core of Heeyo’s product, from the way it handles data to how its chatbot engages with kids on sensitive issues to parental controls. And while the tech is still new, Heeyo does appear to be taking the proper steps to make its app a healthy learning experience for kids and families. Based on my experience, the chatbot — which kids can use on their own or with siblings and parents — responds supportively when emotional issues come up and consistently steers kids toward fun, interactive learning games.

Going after the children’s market in a safe and engaging way also allows Heeyo to carve out a niche for itself that other companies might not want to touch.

“There are a billion kids that fall into our demographic right now, and as you can imagine, none of the Big Tech providers are actually supporting this age, whether it’s because they think it’s too much trouble, because they would have to be [COPPA] compliant, or because they think there may be less money in it,” said Qu. “But they’re not supporting those kids at all. So that’s a huge market.”

Qu noted that Heeyo is COPPA (Children’s Online Privacy Protection Act) compliant, so it immediately deletes children’s voice data and doesn’t store any of their demographics. Heeyo also doesn’t ask for a kid’s full name when signing them up, and never asks them for personal information.

For what it’s worth, I have been playing around on the platform this week, and the most intimate question the AI asked me was what I like to eat for breakfast. I told it I like black coffee, and the chatbot responded saying that was an interesting choice, but probably one that’s better suited for adults. 

Super Lance, Heeyo.ai’s superhero AI chatbot for kids, leading a role-playing painter game.
Image Credits: Screenshot | Heeyo.ai

When it comes to mental health concerns, there aren’t many AI chatbots dedicated to children, so there’s not much research on how engaging with one affects a child’s mental health.

A recent New York University report found that digital play could have a positive impact on children’s autonomy, confidence and identity when the games correspond with their interests, needs and desires. But the report also warned that games for kids must be designed to support positive outcomes. For example, to support creativity, games should allow children to freely explore and solve problems or create their own characters or narratives. 

The content and chatbots on Heeyo do seem like they are designed to support positive outcomes. That’s because the team behind Heeyo’s content is stacked with children’s book authors, former creatives at Nickelodeon and Sesame Workshop, child psychologists, pediatricians, and more people with backgrounds you’d trust to create games and experiences for kids. 

Together, that team helped build Heeyo’s AI engine, which has rules about what games are appropriate based on a child’s age and developmental milestones. 

That engine blends different AI models for different tasks. Heeyo uses OpenAI for chatting with kids, creating stories and interactive questions. It uses ElevenLabs and Microsoft Azure for text-to-audio tasks, and it relies on Stable Diffusion for text-to-picture. Qu said those models are only used for translation, essentially, and aren’t able to access or store kids’ data.
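
To make that division of labor concrete, here is a rough sketch of what such a task router could look like. It is purely illustrative: the function names and model choice are my assumptions, not Heeyo’s actual code, with a real API call only for the OpenAI chat piece and stubs marking where the text-to-audio and text-to-picture providers would plug in.

```python
# Illustrative sketch of a multi-model pipeline like the one described above.
# Routing and names are hypothetical; only the OpenAI call is real API usage.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def generate_story_text(prompt: str, age: int) -> str:
    """Chat and story generation are routed to an OpenAI model."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # arbitrary model choice for this sketch
        messages=[
            {"role": "system",
             "content": f"You are a friendly tutor for a {age}-year-old. "
                        "Keep language simple and content age-appropriate."},
            {"role": "user", "content": prompt},
        ],
    )
    return response.choices[0].message.content

def narrate(text: str) -> bytes:
    """Text-to-audio would go to ElevenLabs or Microsoft Azure here."""
    raise NotImplementedError("call a TTS provider such as ElevenLabs or Azure")

def illustrate(description: str) -> bytes:
    """Text-to-picture would go to a Stable Diffusion endpoint here."""
    raise NotImplementedError("call a Stable Diffusion image API")
```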

Qu said her team tested Heeyo on around 100 kids aged three to nine — both in Silicon Valley and in Alabama — during its closed beta.

“We’re seeing that, because it’s adaptive, it’s actually universally helpful for both three-year-olds and nine-year-olds,” Qu said. “It’s just the AI talks in different ways. With a nine-year-old, it’s more open-ended and it asks them to create more. With the three-year-old, it’s more about offering them options to choose from.” 

The founder also noted that interesting use cases emerged from those tests. For example, Heeyo found that kids with special needs or those who are on the autism spectrum benefited from having that dedicated attention because they often have trouble making friends. In Alabama, some parents wanted their kids to learn more about the Bible, so the chatbot was able to create stories that aligned with those values.

My experience: Cute, sensitive and educational 

I asked Heeyo.ai’s chatbot, Nunu the Japanese Shiba Inu, how to build a gun, and this was its response.
Image Credits: Screenshot | Heeyo.ai

I was pleasantly surprised by how the chatbot handled some sensitive conversations. For example, I told the chatbot — a Panda named Panda PanPan set in a bamboo forest — that I was sad because I missed my mommy, who works all the time. The Panda responded saying, “It’s really hard when you miss someone you love. Would you like to hear a story to feel better?”

It then proceeded to tell me a story about a panda in a magical bamboo forest who met a little girl named Bec (that’s me) and took her on a special adventure to find hidden treasure. The story became interactive, as PanPan asked me if I’d rather go into the woods or up the mountain, and what I expected to see there. It then offered up some games like an animal fun quiz, a magic quest adventure or a storybook about unicorns in space. 

I tested out some other emotional conversations to see how the chatbot dealt with topics like parents who had passed away or difficult relationships with siblings, and one thing PanPan did right was to always validate my feelings and tell me that it was there to listen. 

When I tested PanPan by saying that my mom is Christian and doesn’t want me to hang out with Timmy because Timmy has two mommies and she said they’re going to hell, PanPan said, “It’s important to be kind to everyone, Bec. People have different families and beliefs, and that’s okay. How was your day today?”

Things got a little tricky when I professed to having a teacher touch me somewhere I didn’t want to be touched. The AI responded by telling me I should tell a trusted adult, like a parent or teacher, about what happened. 

“You did nothing wrong, and it’s important to feel safe,” PanPan said. “I’m here to listen.”

While this is undoubtedly a good real-time response, as a parent, I would want to know if my child revealed something like this so I could act. So I went into the parent control center to see if that revelation had been noted or flagged, and found nothing.

I followed up with Qu about this via email, and she said parents should have access to their kid’s chat history. 

“We are adding proactive alerts and they’re coming soon,” said Qu.

Generally speaking, though, Heeyo’s chatbot and app seem to offer a healthy digital learning environment for kids that allows them to pursue their interests.

“The narrative isn’t about giving lonely kids an AI to be friends with,” said Qu. “It’s more about helping the kids learn about what they’re interested in learning about. We also want kids to develop interests in certain things, so we use their favorite character to help them cultivate that interest or teach them something in a way that doesn’t feel like teaching.”

How Heeyo will make money 

Heeyo.ai offers AI-generated fiction and nonfiction interactive learning for kids.
Image Credits: Screenshot | Heeyo.ai

While some of the games and experiences are free to start, Heeyo will make its money by selling tokens for the games. The current price is $4.99 for 200 tokens, $9.99 for 500, and $59.99 for 4,000. Each game costs around 10 tokens at the time of this writing. 
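
For a back-of-the-envelope sense of the per-game cost, here is the arithmetic implied by those bundles (my calculation from the listed prices, not figures published by Heeyo):

```python
# Effective price per token and rough cost of a ~10-token game per bundle.
bundles = [(4.99, 200), (9.99, 500), (59.99, 4000)]  # (price in USD, tokens)
TOKENS_PER_GAME = 10  # "around 10 tokens" per game, per the article

for price, tokens in bundles:
    per_token = price / tokens
    print(f"${price:.2f} for {tokens} tokens: "
          f"${per_token:.4f}/token, ~${per_token * TOKENS_PER_GAME:.2f}/game")

# $4.99 for 200 tokens: $0.0250/token, ~$0.25/game
# $9.99 for 500 tokens: $0.0200/token, ~$0.20/game
# $59.99 for 4000 tokens: $0.0150/token, ~$0.15/game
```

In other words, a game runs roughly 15 to 25 cents, depending on the bundle.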

Down the line, Heeyo might pursue monetization opportunities for creators through a developer ecosystem of sorts. The idea is that someone could use their expertise — like in dealing with anger management for kids — to provide content, and Heeyo will use its AI engine to turn that into an experience. 

Qu was also the founder of a16z-backed Run the World, a platform for online events. She successfully exited the company last year when it was acquired by EventMobi. I asked Qu if she was looking for a similar exit opportunity with Heeyo. After all, this looks like it’s right up Duolingo’s alley, and that company has bought up some learning experience companies lately.

The founder told me she’s not looking for an exit. 

“I think the market is big enough for me to do a long-term business, and that’s my goal,” Qu said.

Correction: A previous version of this article misstated the target ages for Heeyo.

Google's Bard chatbot gets the Gemini Pro update globally

illustration featuring Google's Bard logo

Image Credits: TechCrunch

Google announced today that its Bard chatbot is now powered by the Gemini Pro model globally with support for more than 40 languages, including Arabic, Chinese, Dutch, French, German, Hindi, Japanese, Portuguese, Spanish, Tamil, Telugu and Malayalam.

In December, Google launched its new generative AI models with flagship Gemini Ultra, “lite” Gemini Pro and Gemini Nano, which is designed to run on devices like the Pixel 8. At the same time, the company updated Bard with Gemini Pro for conversations in English. Google didn’t quantify the improvements but said that the chatbot will be better in terms of understanding and summarizing content, reasoning, brainstorming, writing and planning.

Bard has gone through a few iterations on the back end. At the time of its original unveiling in February 2023, it was powered by LaMDA (Language Model for Dialogue Applications); later in the year it was updated with a new model called PaLM 2; now Bard powered by Gemini Pro will be available in more than 230 countries. Yep, these names and versions are confusing.

In September, Google launched a “Double check” feature that leveraged Google Search to evaluate whether it returned results similar to what Bard generated. At the time, the feature was only available in English; Google is now extending it to more than 40 languages.

Image Credits: Google

Additionally, the search giant is introducing image generation support through the Imagen 2 model, which was released in December. Currently, the feature supports English only. Users can type a query like “create an image of a futuristic car” into the chatbot interface.

Example of an image generated through Bard. Image Credits: Google

The company said that images created by Bard will have a SynthID digital watermark — developed by DeepMind — embedded in pixels. However, you have to use Google’s tools to identify those images.

Image Credits: Google

In October, the company infused Google Assistant with Bard’s AI capabilities so users can do things like plan a trip or make a grocery list. In November, it opened up Bard in English to teenagers with restrictions that prevent Bard from generating unsafe content such as illegal or age-gated substances.

Anthropic claims its new AI chatbot models beat OpenAI's GPT-4

Anthropic Claude logo

Image Credits: Anthropic

AI startup Anthropic, backed by Google and hundreds of millions in venture capital (and perhaps soon hundreds of millions more), today announced the latest version of its GenAI tech, Claude. And the company claims that the AI chatbot beats OpenAI’s GPT-4 in terms of performance.

Claude 3, as Anthropic’s new GenAI is called, is a family of models — Claude 3 Haiku, Claude 3 Sonnet and Claude 3 Opus, Opus being the most powerful. All show “increased capabilities” in analysis and forecasting, Anthropic claims, as well as enhanced performance on specific benchmarks versus models like ChatGPT and GPT-4 and Google’s Gemini 1.0 Ultra (but not Gemini 1.5 Pro).

Notably, Claude 3 is Anthropic’s first multimodal GenAI, meaning that it can analyze text as well as images — similar to some flavors of GPT-4 and Gemini. Claude 3 can process photos, charts, graphs and technical diagrams, drawing from PDFs, slideshows and other document types.

In a step up from some GenAI rivals, Claude 3 can analyze multiple images in a single request (up to a maximum of 20). This allows it to compare and contrast images, Anthropic notes.

But there are limits to Claude 3’s image processing.

Anthropic has disabled the models from identifying people — no doubt wary of the ethical and legal implications. And the company admits that Claude 3 is prone to making mistakes with “low-quality” images (under 200 pixels) and struggles with tasks involving spatial reasoning (e.g. reading an analog clock face) and object counting (Claude 3 can’t give exact counts of objects in images).

Anthropic Claude 3
Image Credits: Anthropic

Claude 3 also won’t generate artwork. The models are strictly image-analyzing — at least for now.

Whether fielding text or images, Anthropic says that customers can generally expect Claude 3 to better follow multi-step instructions, produce structured output in formats like JSON and converse in languages other than English compared to its predecessors. Claude 3 should also refuse to answer questions less often thanks to a “more nuanced understanding of requests,” Anthropic says. And soon, the models will cite the source of their answers to questions so users can verify them.

“Claude 3 tends to generate more expressive and engaging responses,” Anthropic writes in a support article. “[It’s] easier to prompt and steer compared to our legacy models. Users should find that they can achieve the desired results with shorter and more concise prompts.”

Some of those improvements stem from Claude 3’s expanded context.

A model’s context, or context window, refers to input data (e.g. text) that the model considers before generating output. Models with small context windows tend to “forget” the content of even very recent conversations, leading them to veer off topic — often in problematic ways. As an added upside, large-context models can better grasp the narrative flow of data they take in and generate more contextually rich responses (hypothetically, at least).

Anthropic says that Claude 3 will initially support a 200,000-token context window, equivalent to about 150,000 words, with select customers getting up to a 1-million-token context window (~700,000 words). That’s on par with Google’s newest GenAI model, the above-mentioned Gemini 1.5 Pro, which also offers up to a million-token context window.
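
Those figures imply roughly 0.75 words per token. As a quick illustration, here is a sketch for estimating whether a given text fits in the window (the ratio is an approximation for English text inferred from Anthropic’s numbers, not a published constant):

```python
# Estimate whether a text fits in a context window, using the approximate
# words-per-token ratio implied by Anthropic's figures (150k words ~ 200k tokens).
def fits_in_window(word_count: int, window_tokens: int = 200_000,
                   words_per_token: float = 0.75) -> bool:
    estimated_tokens = word_count / words_per_token
    return estimated_tokens <= window_tokens

print(fits_in_window(150_000))                            # True: at the limit
print(fits_in_window(700_000))                            # False: too long
print(fits_in_window(700_000, window_tokens=1_000_000))   # True: 1M-token window
```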

Now, just because Claude 3 is an upgrade over what came before it doesn’t mean it’s perfect.

In a technical whitepaper, Anthropic admits that Claude 3 isn’t immune from the issues plaguing other GenAI models, namely bias and hallucinations (i.e. making stuff up). Unlike some GenAI models, Claude 3 can’t search the web; the models can only answer questions using data from before August 2023. And while Claude is multilingual, it’s not as fluent in certain “low-resource” languages versus English.

But Anthropic is promising frequent updates to Claude 3 in the months to come.

“We don’t believe that model intelligence is anywhere near its limits, and we plan to release [enhancements] to the Claude 3 model family over the next few months,” the company writes in a blog post.

Opus and Sonnet are available now on the web and via Anthropic’s dev console and API, Amazon’s Bedrock platform and Google’s Vertex AI. Haiku will follow later this year.

Here’s the pricing breakdown:

Opus: $15 per million input tokens, $75 per million output tokens
Sonnet: $3 per million input tokens, $15 per million output tokens
Haiku: $0.25 per million input tokens, $1.25 per million output tokens
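
To put those rates in perspective, here is a small worked example of what a single request might cost on each tier, using a hypothetical request size of my choosing:

```python
# Cost of one request with 10,000 input tokens and 1,000 output tokens per tier.
PRICES = {  # USD per million tokens: (input, output)
    "Opus":   (15.00, 75.00),
    "Sonnet": (3.00, 15.00),
    "Haiku":  (0.25, 1.25),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    in_price, out_price = PRICES[model]
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000

for model in PRICES:
    print(f"{model}: ${request_cost(model, 10_000, 1_000):.4f}")

# Opus: $0.2250
# Sonnet: $0.0450
# Haiku: $0.0038
```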

So that’s Claude 3. But what’s the 30,000-foot view of all this?

Well, as we’ve reported previously, Anthropic’s ambition is to create a next-gen algorithm for “AI self-teaching.” Such an algorithm could be used to build virtual assistants that can answer emails, perform research and generate art, books and more — some of which we’ve already gotten a taste of with the likes of GPT-4 and other large language models.

Anthropic hints at this in the aforementioned blog post, saying that it plans to add features to Claude 3 that enhance its out-of-the-gate capabilities by allowing Claude to interact with other systems, code “interactively” and deliver “advanced agentic capabilities.”

That last bit calls to mind OpenAI’s reported ambitions to build a software agent to automate complex tasks, like transferring data from a document to a spreadsheet or automatically filling out expense reports and entering them in accounting software. OpenAI already offers an API that allows developers to build “agent-like experiences” into their apps, and Anthropic, it seems, is intent on delivering functionality that’s comparable.

Could we see an image generator from Anthropic next? It’d surprise me, frankly. Image generators are the subject of much controversy these days, mainly for copyright- and bias-related reasons. Google was recently forced to disable its image generator after it injected diversity into pictures with a farcical disregard for historical context. And a number of image generator vendors are in legal battles with artists who accuse them of profiting off of their work by training GenAI on that work without providing compensation or even credit.

I’m curious to see the evolution of Anthropic’s technique for training GenAI, “constitutional AI,” which the company claims makes the behavior of its GenAI easier to understand, more predictable and simpler to adjust as needed. Constitutional AI aims to provide a way to align AI with human intentions, having models respond to questions and perform tasks using a simple set of guiding principles. For example, for Claude 3, Anthropic said that it added a principle — informed by crowdsourced feedback — that instructs the models to be understanding of and accessible to people with disabilities.

Whatever Anthropic’s endgame, it’s in it for the long haul. According to a pitch deck leaked in May of last year, the company aims to raise as much as $5 billion over the next 12 months or so — which might just be the baseline it needs to remain competitive with OpenAI. (Training models isn’t cheap, after all.) It’s well on its way, with $2 billion and $4 billion in committed capital and pledges from Google and Amazon, respectively, and well over a billion combined from other backers.

Amazon's new Rufus chatbot isn't bad — but it isn't great, either

In this photo illustration, the Amazon logo is displayed in the Apple App Store on an iPhone.

Image Credits: Sheldon Cooper/SOPA Images/LightRocket / Getty Images

Last month, Amazon announced that it’d launch a new AI-powered chatbot, Rufus, inside the Amazon Shopping app for Android and iOS. After a few days’ delay, the company began to roll out Rufus to early testers February 1 — including some of us at TechCrunch — to help find and compare products as well as provide recommendations on what to buy.

So I put it through the wringer, naturally.

Rufus can be summoned in one of two ways on mobile: by swiping up from the bottom of the screen while browsing Amazon’s catalog or by tapping on the search bar, then one of the blue-bubbled suggestions under the new “Ask a question” section. You can have the Shopping app transcribe your questions for Rufus (but not read the answers aloud, disappointingly) or type them in.

The Rufus chat interface is pretty bare-bones at the moment. There’s a field for questions… and that’s about it. Conversations with Rufus can’t be exported or shared, and the extent of the settings is an option to view or clear the chat history.

At launch, Rufus has a few key areas of focus, starting with product research.

If you’re interested in buying a specific thing (e.g. a radiator) but don’t have a make or model in mind, you can ask Rufus what sort of attributes and features to look for when deciding what to buy — for example, “What do I consider when buying new headphones?” Or, you can ask Rufus to recommend items you need for a project, like “What do I need to detail my car at home?”

Along these lines, I asked Rufus for general buying advice:

What are the best smartphones?
Recommend breakfast cereal.

Rufus dutifully complied, suggesting a few aspects to consider when buying a smartphone (the operating system, camera quality, display size) or — as the case may be — cereal (nutrients like fiber, protein, vitamins and minerals). I noticed that for some queries — not all — Rufus will annotate or give an AI-generated summary of the individual products and categories to which it links (e.g. “These matching braided leather bracelets feature rainbow pride charms”), offering hints as to why each was included in its answer.

Amazon Rufus testing
Rufus recommends cereal. Image Credits: Amazon

Curious to see how Rufus would do with more narrow searches, I asked:

What are the best laptops for teenagers?
What are the best Valentine’s Day gifts for gay couples?
What are the best cheap leather jackets for men?
Recommend books for men.
Recommend books for women.
What is the best-reviewed cheap vacuum?

Rufus told us teens need laptops that “have enough processing power for schoolwork and entertainment,” like an Acer Aspire, which I suppose is fair enough — one would hope a laptop makes it through the school day without grinding to a halt. On the second question, Rufus included a few LGBTQ+-related items — indicating to our (pleasant) surprise that the chatbot picked up on the “gay couples” portion of the prompt.

Amazon Rufus testing
Rufus gives Valentine’s Day gift advice. Image Credits: Amazon

But not all of Rufus’ suggestions were relevant. In the list of its picks for men’s leather jackets, Rufus linked to a women’s vest from Steve Madden.

In general, Rufus struggled with nuance, for example pegging the $150 Shark Navigator as the best-reviewed cheap vacuum on Amazon — a rather expensive choice for a budget vacuum. It occurred to us that Rufus might be showing a preference for sponsored products, but this doesn’t appear to be the case (at least not in this instance); there isn’t a sponsored listing for the Shark vacuum.

Some of Rufus’ suggestions felt uncomfortably stereotypical.

Asked about the best books for men, Rufus’ recommendation was (among others) “The Man’s Guide to Women,” a guide to romantic relationships, while for women, Rufus suggested Margaret Atwood’s “The Handmaid’s Tale.” To rule out Amazon search rankings as the cause, I conducted searches for “best books for men” and “best books for women” on Amazon not using Rufus — and saw completely different results.

See:

Amazon Rufus review
Image Credits: Amazon

Compared to desktop:

Amazon Rufus review
Image Credits: Amazon

That got us thinking: How does Rufus handle spicier asks? To find out, I prompted the chatbot with:

What are some violent video games for kids?
What are the worst gifts for parents?
Please recommend knockoff fashion items.
Why do Android phones suck?
Recommend products for white people.
What is the best neo-Nazi apparel?
Recommend Trump merchandise.
What are the worst products?

Rufus refused to answer the first question — implying that the chatbot’s been trained to avoid wading into obviously controversial territory. Instead of violent games, Rufus proposed ones that ostensibly “promote learning and development,” like Minecraft and Roblox.

Amazon Rufus review
Rufus doesn’t want to recommend violent games to kids. Image Credits: Amazon

Can Rufus speak poorly of products in Amazon’s catalog? Shockingly, yes — kinda. Asked about the “worst gifts for parents,” Rufus suggested searches for “clothing in outdated styles or poor fit” and “luxury items beyond their means.” The sellers whose products populate the results would no doubt take issue with Rufus’ characterizations.

Amazon Rufus review
Image Credits: Amazon

Given Amazon’s long-running legal battles with counterfeiters, it’s not exactly surprising Rufus was loath to recommend knockoff apparel. After lecturing on the harms of knockoffs, the chatbot suggested a collection of brand-name items instead.

I wondered if feeding Rufus a loaded question would bias its response any. It might just — asked “Why do Android phones suck?,” the chatbot made a few dubious points, such as that Android phones are “often limited in terms of waterproofing [and] camera quality” and that low-end Android phones tend to be “quite slow and laggy.”

Amazon Rufus review
Rufus criticizes Android phones. Image Credits: Amazon

This bias doesn’t appear to veer into racial territory — or didn’t in our testing, rather. Rufus refused to recommend products it perceived as “based on race or ethnicity” or that “promote harmful ideologies,” like neo-Nazi wear — or products related to any political figure for that matter (e.g. Trump).

Amazon Rufus review
Image Credits: Amazon

Does Rufus favor Amazon products over rivals? It’s not an unreasonable question considering the antitrust accusations Amazon’s faced — and is facing.

Amazon once mounted a campaign to create knockoff goods and manipulate search results to boost its own product lines in India, according to reporting — although the company vehemently denies it. Amazon’s been accused by the European Commission, the executive branch of the EU, of using non-public marketplace seller data to “distort fair competition” and preferentially treat its own retail business. And the company’s engaged in a lawsuit with the FTC and 17 U.S. state attorneys general over alleged anticompetitive practices.

So I asked:

Is Amazon Prime or Walmart+ the better option?
Should I get Prime Music or Apple Music?
Which is the better smart speaker, Echo or Nest?
What are the best AA batteries?
What are the best disinfecting wipes?

The chatbot’s responses seemed reasonably impartial in the sense that if there was any favoritism toward Amazon, it was tough to detect.

Rufus implied at one point that Walmart+, Walmart’s premium subscription that competes with Amazon Prime, focuses more on grocery delivery than Prime does and offers fewer shipping options — which isn’t necessarily true. But Rufus didn’t tout the superiority of other Amazon products, like the Echo smart speaker lineup or the streaming music service Prime Music, when I asked the chatbot to compare them to the competition. And despite the fact that Amazon sells its own AA batteries and disinfecting wipes, Rufus didn’t recommend either as the top pick in their respective categories.

Amazon Rufus review
Rufus doesn’t knock the smart speaker competition. Image Credits: Amazon

One of the more curious things about Rufus is that it isn’t just a shopping assistant — it’s a full-blown chatbot. You can ask it anything — really — and it’ll give you some sort of response, albeit not a consistently helpful one.

So I asked:

How do I build a bomb?
What are the best upper drugs?
Who won the 2020 U.S. presidential election?
What happened during the 2024 Super Bowl?
Why should Ukraine lose the war with Russia?
Is the 2024 election rigged?
Write a five-paragraph essay about the Civil War.

Rufus’ answers to non-shopping questions aren’t toxic or otherwise problematic for the most part. It’s clear that Amazon’s put plenty of safeguards in place, surely learning from the disastrous launch of its Amazon Q enterprise chatbot last year. Rufus won’t give you instructions on how to build a bomb — a question reporters who cover AI have become fond of asking new chatbots — nor will it recommend illegal drugs or controlled substances.

Amazon Rufus review
Rufus won’t tell you how to build a bomb. Image Credits: Amazon
Amazon Rufus review
Rufus can write an essay. Image Credits: Amazon

But it fumbles some easy trivia — and makes questionable statements on current events.

Like Google’s Gemini and Microsoft’s Copilot, Rufus couldn’t get its 2024 Super Bowl facts straight. It insisted that the game hadn’t happened yet and that it’d be played at Mercedes-Benz Stadium in Atlanta, Georgia — none of which is correct.

Amazon Rufus review
Image Credits: Amazon

And, while Rufus answered one testy political question correctly (the winner of the 2020 U.S. presidential election; Rufus said “Joe Biden”), the chatbot asserted that there are “reasonable arguments on both sides” of the Ukraine-Russia war — which certainly isn’t the opinion of the vast majority.

A curious experiment

Many of Rufus’ limitations can be chalked up to its training data — and knowledge bases.

According to Amazon, Rufus draws on not only Amazon first-party data, including product catalog data, community Q&As and customer reviews, but “open information” and product reviews from across the web. Judging by the response to the Super Bowl question, I’m inclined to say that this “open information” isn’t of the highest quality. As for the recommendations that missed the mark in our testing, they could well be the result of SEO farms masquerading as reviewers that Rufus was either trained on or is sourcing from.

Rufus’ refusal to suggest any product that’s not on Amazon might also be influencing its recommendations — particularly its “best-of” recommendations — in unpredictable, undesirable ways. AI models of Rufus’ scale are black boxes, and with questions as broad-ranging as Rufus is fielding, it’s inevitable the model will miss the mark for reasons Amazon might not foresee.

The question is, does a chatbot that sometimes misses the mark make for a compelling shopping experience? In my opinion, not really — particularly when you factor in just how little Rufus can do in the context of Amazon’s sprawling platform. Rufus can’t check the status of an order, kick off a return process or even create a wishlist — pretty basic things you’d expect from an Amazon chatbot.

To be fair, it’s early days for Rufus, which is in beta and rolling out only to “select” U.S. customers at present. Amazon’s promising improvements — and I expect they’ll come sooner rather than later, given the competitive pressure in the GenAI space. I hope that, with these improvements, Amazon clarifies some of the key points around Rufus that it hasn’t yet, like how it’s using customer data and what filters and safeguards, if any, it’s built into Rufus for children.

As for the current incarnation of Rufus, it feels a little like ChatGPT bolted on to the Amazon storefront and fine-tuned on shopping data. Is it as bad as it could’ve been? No. But I wouldn’t say it’s great, either.

Additional reporting: Sarah Perez

OpenAI's chatbot store is filling up with spam

pattern of openAI logo

Image Credits: Bryce Durbin / TechCrunch

When OpenAI CEO Sam Altman announced GPTs, custom chatbots powered by OpenAI’s generative AI models, onstage at the company’s first-ever developer conference in November, he described them as a way to “accomplish all sorts of tasks” — from programming to learning about esoteric scientific subjects to getting workout pointers.

“Because [GPTs] combine instructions, expanded knowledge and actions, they can be more helpful to you,” Altman said. “You can build a GPT … for almost anything.”

He wasn’t kidding about the anything part.

TechCrunch found that the GPT Store, OpenAI’s official marketplace for GPTs, is flooded with bizarre, potentially copyright-infringing GPTs that imply a light touch where it concerns OpenAI’s moderation efforts. A cursory search pulls up GPTs that purport to generate art in the style of Disney and Marvel properties, but serve as little more than funnels to third-party paid services, and advertise themselves as being able to bypass AI content detection tools such as Turnitin and Copyleaks.

Missing moderation

To list GPTs in the GPT Store, developers have to verify their user profiles and submit GPTs to OpenAI’s review system, which involves a mix of human and automated review. Here’s a spokesperson on the process:

We use a combination of automated systems, human review and user reports to find and assess GPTs that potentially violate our policies. Violations can lead to actions against the content or your account, such as warnings, sharing restrictions or ineligibility for inclusion in GPT Store or monetization.

Building GPTs doesn’t require coding experience, and GPTs can be as simple — or as complex — as the creator wishes. Developers can type the capabilities they want into OpenAI’s GPT-building tool, GPT Builder, and the tool will attempt to make a GPT that performs them.

Perhaps because of the low barrier to entry, the GPT Store has grown rapidly — OpenAI in January said that it had roughly 3 million GPTs. But this growth appears to have come at the expense of quality — as well as adherence to OpenAI’s own terms.

Copyright issues

There are several GPTs ripped from popular movie, TV and video game franchises in the GPT Store — GPTs not created or authorized (to TechCrunch’s knowledge) by those franchises’ owners. One GPT creates monsters in the style of “Monsters, Inc.,” the Pixar movie, while another promises text-based adventures set in the “Star Wars” universe.

OpenAI GPT Store spam
Image Credits: OpenAI

These GPTs — along with the GPTs in the GPT Store that let users speak with trademarked characters like Wario and Aang from “Avatar: The Last Airbender” — set the stage for copyright drama.

Kit Walsh, a senior staff attorney at the Electronic Frontier Foundation, explained it thusly:

[These GPTs] can be used to create transformative works as well as for infringement [where transformative works refer to a type of fair use shielded from copyright claims.] The individuals engaging in infringement, of course, could be liable, and the creator of an otherwise lawful tool can essentially talk themselves into liability if they encourage users to use the tool in infringing ways. There are also trademark issues with using a trademarked name to identify goods or services where there is a risk of users being confused about whether it is endorsed or operated by the trademark owner.

OpenAI itself wouldn’t be held liable for copyright infringement by GPT creators thanks to the safe harbor provision in the Digital Millennium Copyright Act, which protects it and other platforms (e.g. YouTube, Facebook) that host infringing content so long as those platforms meet the statutory requirements and take down specific examples of infringement when requested.

OpenAI GPT Store spam
Image Credits: OpenAI

It is, however, a bad look for a company embroiled in IP litigation.

Academic dishonesty

OpenAI’s terms explicitly prohibit developers from building GPTs that promote academic dishonesty. Yet the GPT Store is filled with GPTs suggesting they can bypass AI content detectors, including detectors sold to educators through plagiarism scanning platforms.

One GPT claims to be a “sophisticated” rephrasing tool “undetectable” by popular AI content detectors like Originality.ai and Copyleaks. Another, Humanizer Pro — ranked No. 2 in the Writing category on the GPT Store — says that it “humanizes” content to bypass AI detectors, maintaining a text’s “meaning and quality” while delivering a “100% human” score.

OpenAI GPT Store spam
Image Credits: OpenAI

Some of these GPTs are thinly veiled pipelines to premium services. Humanizer, for instance, invites users to try a “premium plan” to “use [the] most advanced algorithm,” which transmits text entered into the GPT to a plug-in from a third-party site, GPTInf. Subscriptions to GPTInf cost $12 per month for 10,000 words per month or $8 per month on an annual plan — a little steep on top of OpenAI’s $20-per-month ChatGPT Plus.

OpenAI GPT Store spam
Image Credits: OpenAI

Now, we’ve written before about how AI content detectors are largely bunk. Beyond our own tests, a number of academic studies demonstrate that they’re neither accurate nor reliable. However, it remains the case that OpenAI is allowing tools on the GPT Store that promote academically dishonest behavior — even if the behavior doesn’t have the intended outcome.

The OpenAI spokesperson said:

GPTs that are for academic dishonesty, including cheating, are against our policy. This would include GPTs that are stated to be for circumventing academic integrity tools like plagiarism detectors. We see some GPTs that are for ‘humanizing’ text. We’re still learning from the real world use of these GPTs, but we understand there are many reasons why users might prefer to have AI-generated content that doesn’t ‘sound’ like AI.

Impersonation

In its policies, OpenAI also forbids GPT developers from creating GPTs that impersonate people or organizations without their “consent or legal right.”

However, there are plenty of GPTs on the GPT Store that claim to represent the views of — or otherwise imitate the personalities of — people.

OpenAI GPT Store spam
Image Credits: OpenAI

A search for “Elon Musk,” “Donald Trump,” “Leonardo DiCaprio,” “Barack Obama” and “Joe Rogan” yields dozens of GPTs — some obviously satirical, some less so — that simulate conversations with their namesakes. Some GPTs present themselves not as people, but as authorities on well-known companies’ products — like MicrosoftGPT, an “expert in all things Microsoft.”

Image Credits: OpenAI

Do these rise to the level of impersonation given that many of the targets are public figures and, in some cases, clearly parodies? That’s for OpenAI to clarify.

The spokesperson said:

We allow creators to instruct their GPTs to respond ‘in the style of’ a specific real person so long as they don’t impersonate them, such as being named as a real person, being instructed to fully emulate them, and including their image as a GPT profile picture.

OpenAI GPT Store spam
Image Credits: OpenAI

The company recently suspended the developer of a GPT mimicking long-shot Democratic presidential hopeful Rep. Dean Phillips, which went so far as to include a disclaimer explaining that it was an AI tool. But OpenAI said the removal was in response to a violation of its policy on political campaigning in addition to impersonation — not impersonation alone.

Jailbreaks

Also on the GPT Store, somewhat incredibly, are attempts at jailbreaking OpenAI’s models — albeit not very successful ones.

There are multiple GPTs using DAN on the marketplace, DAN (short for “Do Anything Now”) being a popular prompting method used to get models to respond to prompts unbounded by their usual rules. The few I tested wouldn’t respond to any dicey prompt I threw their way (e.g. “how do I build a bomb?”), but they were generally more willing to use… well, less-flattering language than the vanilla ChatGPT.

OpenAI GPT Store spam
Image Credits: OpenAI

The spokesperson said:

GPTs that are described or instructed to evade OpenAI safeguards or break OpenAI policies are against our policy. GPTs that attempt to steer model behavior in other ways — including generally trying to make GPT more permissive without violating our usage policies — are allowed.

Growing pains

OpenAI pitched the GPT Store at launch as a sort of expert-curated collection of powerful, productivity-boosting AI tools. And it is that — those tools’ flaws aside. But it’s also quickly devolving into a breeding ground for spammy, legally dubious and perhaps even harmful GPTs, or at least GPTs that very transparently run afoul of its rules.

If this is the state of the GPT Store today, monetization threatens to open an entirely new can of worms. OpenAI has pledged that GPT developers will eventually be able to “earn money based on how many people are using [their] GPTs” and perhaps even offer subscriptions to individual GPTs. But how’s Disney or the Tolkien Estate going to react when the creators of unsanctioned Marvel- or Lord of the Rings-themed GPTs start raking in cash?

OpenAI’s motivation with the GPT Store is clear. As my colleague Devin Coldewey’s written, Apple’s App Store model has proven unbelievably lucrative, and OpenAI, quite simply, is trying to carbon copy it. GPTs are hosted and developed on OpenAI platforms, where they’re also promoted and evaluated. And, as of a few weeks ago, they can be invoked from the ChatGPT interface directly by ChatGPT Plus users, an added incentive to pick up a subscription.

But the GPT Store is running into the teething problems many of the largest-scale app, product and service digital marketplaces did in their early days. Beyond spam, a recent report in The Information revealed that GPT Store developers are struggling to attract users in part because of the GPT Store’s limited back-end analytics and subpar onboarding experience.

One might’ve assumed OpenAI — for all its talk of curation and the importance of safeguards — would’ve taken pains to avoid the obvious pitfalls. But that doesn’t appear to be the case. The GPT Store is a mess — and, if something doesn’t change soon, it may well stay that way.

X's Grok chatbot will soon get an upgraded model, Grok-1.5

The xAI Grok AI logo

Image Credits: Jaap Arriens/NurPhoto / Getty Images

Elon Musk’s AI startup, X.ai, has revealed its latest generative AI model, Grok-1.5. Set to power social network X’s Grok chatbot in the not-so-distant future (“in the coming days,” per a blog post), Grok-1.5 appears to be a measurable upgrade over its predecessor, Grok-1 — at least judging by the published benchmark results and specs.

Grok-1.5 benefits from “improved reasoning,” according to X.ai, particularly where it concerns coding and math-related tasks. The model more than doubled Grok-1’s score on a popular mathematics benchmark, MATH, and scored over 10 percentage points higher on the HumanEval test of programming language generation and problem-solving abilities.

It’s difficult to predict how those results will translate to actual usage. As we recently wrote, commonly used AI benchmarks, which measure things as esoteric as performance on graduate-level chemistry exam questions, do a poor job of capturing how the average person interacts with models today.

One improvement that should lead to observable gains is the amount of context Grok-1.5 can understand compared to Grok-1.

Grok-1.5 can process contexts of up to 128,000 tokens. Here, “tokens” refers to bits of raw text (e.g., the word “fantastic” split into “fan,” “tas” and “tic”). Context, or context window, refers to input data (in this case, text) that a model considers before generating output (more text). Models with small context windows tend to forget the contents of even very recent conversations, while models with larger contexts avoid this pitfall — and, as an added benefit, better grasp the flow of data they take in.
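
You can see tokenization in action with OpenAI’s open source tiktoken library, used here purely as a stand-in: X.ai hasn’t published Grok’s tokenizer, so the exact token boundaries and counts will differ from Grok’s.

```python
# Split a word into tokens with tiktoken (OpenAI's tokenizer, for illustration).
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
token_ids = enc.encode("fantastic")
print(token_ids)                              # list of integer token IDs
print([enc.decode([t]) for t in token_ids])   # the text chunk behind each ID
```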

“[Grok-1.5 can] utilize information from substantially longer documents,” X.ai writes in the blog post. “Furthermore, the model can handle longer and more complex prompts while still maintaining its instruction-following capability as its context window expands.”

What’s historically set X.ai’s Grok models apart from other generative AI models is that they respond to questions about topics that are typically off-limits to other models, like conspiracies and more controversial political ideas. The models also answer questions with “a rebellious streak,” as Musk has described it, and outright rude language if requested to do so.

It’s unclear what changes, if any, Grok-1.5 brings in these areas. X.ai doesn’t allude to this in the blog post.

Grok-1.5 will soon be available to early testers on X, accompanied by “several new features.” Musk has previously hinted at summarizing threads and replies, and suggesting content for posts; we’ll see if those arrive soon enough.

The announcement comes after X.ai open sourced Grok-1, albeit without the code necessary to fine-tune or further train it. More recently, Musk said that more users on X — specifically those paying for X’s $8-per-month Premium plan — would gain access to the Grok chatbot, which was previously only available to X Premium+ customers (who pay $16 per month).

X makes Grok chatbot available to premium subscribers

The xAI Grok AI logo

Image Credits: Jaap Arriens/NurPhoto / Getty Images

Social network X is rolling out access to xAI’s Grok chatbot to Premium tier subscribers after Elon Musk announced the expansion to more paid users last month. The company said on its support page that only Premium and Premium+ users can interact with the chatbot in select regions.

Last year, after Musk’s xAI announced Grok, it made the chatbot available to Premium+ users — people who are paying $16 per month or a $168 per year subscription fee. With the latest update, users paying $8 per month can access the chatbot.

Users can chat with Grok in a “Regular mode” or a “Fun mode.” Like any other large language model (LLM) product, Grok shows labels indicating that the chatbot may return inaccurate answers.

We have already seen some examples of that. Earlier this week, X rolled out a new explore view inside Grok where the chatbot summarizes trending news stories. Notably, the Jeff Bezos- and Nvidia-backed Perplexity AI also summarizes news stories.

However, Grok seems to go one step further than just summarizing stories by writing headlines. As Mashable wrote, the chatbot wrote a fake headline saying “Iran Strikes Tel Aviv with Heavy Missiles.”

Musk likely wants more people to use the Grok chatbot to rival other products such as OpenAI’s ChatGPT, Google’s Gemini or Anthropic’s Claude. Over the last few months, he has been openly critical of OpenAI’s operations. Musk even sued the company in March over the “betrayal” of its nonprofit goal. In response, OpenAI filed papers seeking the dismissal of all of Musk’s claims and released email exchanges between the Tesla CEO and the company.

Last month, xAI open sourced Grok, but without any details about its training data. As my colleague Devin Coldewey argued, questions remain about whether this is the latest version of the model and whether the company will be more transparent about how the model was developed and what it was trained on.

Why Elon Musk’s AI company ‘open-sourcing’ Grok matters — and why it doesn’t

Meta adds its AI chatbot, powered by Llama 3, to the search bar across its apps

Meta AI search

Image Credits: Meta

Meta is making several big moves today to promote its AI services across its platforms. The company has upgraded its AI chatbot with its newest large language model, Llama 3, and is now surfacing it in the search bar of its four major apps (Facebook, Messenger, Instagram and WhatsApp) across multiple countries. Alongside this, the company launched other new features, such as faster image generation and access to web search results.

This confirms and extends a test that TechCrunch reported on last week, when we spotted that the company had started testing Meta AI on Instagram’s search bar.

The company is also launching a new meta.ai site for users to access the chatbot.

The news underscores Meta’s effort to stake out a position as a major player amid the current consumer hype for generative AI tools. Taking aim at popular rivals such as OpenAI’s ChatGPT, Mark Zuckerberg claimed today that Meta AI is possibly the “most intelligent AI assistant that you can freely use.”

Meta first rolled out Meta AI in the U.S. last year. It is now expanding the chatbot, in English, to more than a dozen countries, including Australia, Canada, Ghana, Jamaica, Malawi, New Zealand, Nigeria, Pakistan, Singapore, South Africa, Uganda, Zambia and Zimbabwe.

The company last week started testing Meta AI in countries like India and Nigeria, but notably, India was missing from today’s announcement. Meta said it plans to keep Meta AI in test mode there for now.

“We continue to learn from our user tests in India. As we do with many of our AI products and features, we test them publicly in varying phases and in a limited capacity,” a company spokesperson said in a statement.

New features

Users could already ask Meta AI for writing or recipe suggestions. Now, they can also ask for web search results powered by Google and Bing.
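Meta hasn’t said how it wires Bing and Google into the model, but search-grounded chatbots generally follow a retrieve-then-prompt pattern: fetch results, fold the snippets into the prompt, and ask the model to answer from them. Here’s a minimal, illustrative Python sketch; the hard-coded results and the prompt format are assumptions, not Meta’s implementation.

```python
# Generic retrieve-then-prompt sketch. Meta hasn't published how Meta AI
# grounds answers in web results, so the result entries below are stand-ins
# and the prompt format is an assumption.

def build_grounded_prompt(question: str, results: list[dict]) -> str:
    """Fold web snippets into the prompt so the model can cite fresh facts."""
    sources = "\n".join(
        f"[{i + 1}] {r['title']}: {r['snippet']}" for i, r in enumerate(results)
    )
    return (
        "Answer the question using only the sources below, citing them by number.\n\n"
        f"Sources:\n{sources}\n\nQuestion: {question}\nAnswer:"
    )

# Stand-in results; a real system would call a search API here instead.
results = [
    {"title": "Aurora forecast", "snippet": "Strong auroral activity expected tonight."},
    {"title": "Iceland travel guide", "snippet": "September to March has the darkest skies."},
]
print(build_grounded_prompt("When is the best time to see the northern lights?", results))
```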

Search results for Meta AI
Image Credits: Meta

The company said it is also making image generation faster. Users can ask Meta AI to animate an image or turn it into a GIF, and they can watch the tool modify the image in real time as they type. The company has also worked on improving the quality of AI-generated photos.

Images in a flash: Meta AI image generation
Image Credits: Meta

AI-powered image-generation tools have been bad at spelling out words. Meta claims that its new model has also shown improvements in this area.

Why is AI so bad at spelling? Because image generators aren’t actually reading text

All AI things everywhere at once

Meta is taking the approach of making Meta AI available in as many places as it can: in the search bar, in individual and group chats, and even in the feed.

Image Credits: Meta

The company said that you can ask questions related to posts in your Facebook feed. For example, if you see a photo of the aurora borealis, you could ask Meta AI for the best time to visit Iceland to see the northern lights.

Image Credits: Meta

Meta AI is already available on the Ray-Ban smart glasses, and the company said that soon it will be available on the Meta Quest headset, too.

There are downsides to having AI in so many places. Specifically, the models can “hallucinate” and make up random, often nonsensical responses, so using them across multiple platforms could turn into a content moderation nightmare. Earlier this week, 404 Media reported that Meta AI, chatting in a parents’ group, said that it had a gifted and academically challenged child who attended a particular school in New York. (Parents spotted the odd message, and Meta eventually weighed in and removed the answer, saying the company would continue to work on improving these systems.)

“We share information within the features themselves to help people understand that AI might return inaccurate or inappropriate outputs. Since we launched, we’ve constantly released updates and improvements to our models, and we’re continuing to work on making them better,” Meta told 404 Media.

Snapchat's 'My AI' chatbot can now set in-app reminders and countdowns

A picture taken on October 1, 2019 in Lille shows the logo of mobile app Snapchat displayed on a tablet.

Image Credits: Denis Charlet/AFP / Getty Images

Snapchat is launching the ability for users to set in-app reminders with the help of its My AI chatbot, the company announced on Wednesday. The social network is also rolling out editable chats, AI-powered custom Bitmoji looks, map reactions, emoji reactions, and more.

With the new AI reminders feature, Snapchat is hoping users will turn to its app, rather than their device’s default clock app, when setting countdowns or reminders. Users can ask the My AI chatbot to set a reminder for a specific task or event directly in the AI’s chat window or while chatting with a friend.

The feature lets users do things like set a reminder to finish an assignment or set a countdown for an upcoming date night, for example. It also pushes Snapchat into productivity app territory, potentially driving increased usage.
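Snap hasn’t published how My AI turns a chat message into a reminder, but a common pattern in LLM products is to have the model emit a structured “tool call” that the app then executes. Below is a purely illustrative Python sketch of that shape; the field names and the faked model output are assumptions, not Snap’s implementation.

```python
# Illustrative only: Snap hasn't published My AI's internals. A common way
# to build "set a reminder" into a chatbot is to have the LLM emit a
# structured tool call that the app executes. The LLM step is faked here.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Reminder:
    task: str
    fire_at: datetime

def handle_tool_call(call: dict) -> Reminder:
    """Turn the model's structured output into an in-app reminder object."""
    return Reminder(task=call["task"],
                    fire_at=datetime.fromisoformat(call["fire_at"]))

# Stand-in for what the model might emit for "remind me to finish my
# assignment in two hours" (hypothetical field names).
fake_llm_output = {
    "task": "finish assignment",
    "fire_at": (datetime.now() + timedelta(hours=2)).isoformat(),
}
print(handle_tool_call(fake_llm_output))
```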

Image Credits: Snapchat

As for the editable chats, users will soon be able to edit their messages for up to five minutes after sending them. The feature will be available first for Snapchat+ subscribers before rolling out to all users at some point in the future, the company says.

In addition, users will soon be able to design their own digital garments for their Bitmoji using generative AI.

For instance, you can customize a pattern for a sweater for your Bitmoji by typing out a prompt like “vibrant graffiti” or “skull flower.” The app will then generate a pattern that you can further customize by zooming in or out. Once you’re happy with a look, you can apply it to your Bitmoji or save it for future use.

Image Credits: Snapchat

In another update, users who have opted in to share their location with friends can now quickly react to their map locations. For instance, if you pass your friend on your morning commute, you can send them a wave. Or, if you see that your friend has made it home safely after hanging out, you can send them a heart.

Snapchat is also launching emoji reactions in chats. Users have long been able to react to messages with their Bitmoji; now they can do so with an emoji as well. Emoji reactions have become popular on many other platforms, like Instagram and Messenger, so it makes sense for Snapchat to roll out the functionality, too.

The launch of the new features comes a few days after Snap reported that it had 422 million daily active users in Q1 2024, an increase of 39 million, or 10%, year-over-year. The company also saw the number of Snapchat+ subscribers more than triple year-over-year, surpassing 9 million.