Former Riot Games employees leverage generative AI to power NPCs in new video game

Jam & Tea team

Image Credits: Jam & Tea

Jam & Tea Studios is the latest gaming startup implementing generative AI to transform the way players interact with non-playable characters (NPCs) in video games. 

Traditionally, video game NPCs follow predetermined scripts, which can feel repetitive, unrealistic and boring. Scripting also limits the range of experiences available to players. With generative AI involved, players can engage in casual conversation and interact with NPCs however they want (within reason).

Founded by gaming veterans of Riot Games and Wizards of the Coast, the publisher of Magic: The Gathering, the company announced its first game on Friday. The title will use generative AI tools for gameplay mechanics, content generation, dialogue and item creation.

Jam & Tea’s debut game, Retail Mage, is a roleplaying game that allows players to take on the role of a wizard working as a salesperson at a magical furniture store. The main goal of the game is to earn five-star reviews by helping customers. But it’s really up to the players to decide if they actually want to work or cause chaos. With AI NPCs as customers and human players being able to say and do almost whatever they want, the possible outcomes should vary widely.

In Retail Mage, players are approached by customers who each have their own requests. Instead of selecting from preset phrases, players can type in the text generator how they’d like to respond. The player can ask the AI to “say something charming,” and it will offer four different dialogue options. 

Image Credits: Jam & Tea

Jam & Tea is among several companies competing in the AI-powered NPC space, alongside Artificial Agency, Inworld and Nvidia. Ubisoft’s AI-powered “Ghostwriter” tool writes NPC dialogue for some of its games. 

The new game also comes at a time when there’s concern among creatives about the potential challenges posed by the prevalence of generative AI. Last month, SAG-AFTRA — the union representing voice actors and other talent — initiated a strike against major game publishers over AI concerns.

However, Jam & Tea claims it’s taking a balanced approach to the inclusion of AI, and wants to protect artists, writers and other creatives working in game design. 

“Our philosophy is that we believe creatives are going to be only more essential as we move forward in using this technology and in bringing new experiences to players,” co-founder and chief creative officer M. Yichao, a former narrative designer on Guild Wars 2, League of Legends and other titles, told TechCrunch.

“AI will generate all this dialogue, and you can talk to characters endlessly… but it’s going to take the creative eye and lens to really add meaning to that and to craft that into an experience that matters into something with impact, depth and emotion that carries through stories. That’s going to become more important than ever,” Yichao added.  

He explained that creatives are heavily involved throughout the development process, including when it comes to crafting NPCs, giving them motivation, interests and backstory, as well as providing example lines to help the AI mimic the tone and generate lines in real-time.
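Jam & Tea hasn’t published its implementation, but the workflow Yichao describes (a human-written character sheet of motivation, interests, backstory and example lines that steers the model’s tone) can be sketched roughly like this. The `CharacterSheet` class, the prompt format and the sample character are illustrative assumptions, not the studio’s actual code:

```python
from dataclasses import dataclass, field

@dataclass
class CharacterSheet:
    """Designer-authored spec: every field is written by a human, not generated."""
    name: str
    motivation: str
    interests: list[str]
    backstory: str
    example_lines: list[str] = field(default_factory=list)  # few-shot tone samples

def build_system_prompt(sheet: CharacterSheet) -> str:
    """Flatten the sheet into a system prompt for whatever LLM backs the NPC."""
    examples = "\n".join(f'- "{line}"' for line in sheet.example_lines)
    return (
        f"You are {sheet.name}, an NPC in a magical furniture store.\n"
        f"Motivation: {sheet.motivation}\n"
        f"Interests: {', '.join(sheet.interests)}\n"
        f"Backstory: {sheet.backstory}\n"
        f"Stay in character. Match the tone of these example lines:\n{examples}"
    )

# Hypothetical character sheet, loosely based on the Noreen NPC from the demo.
noreen = CharacterSheet(
    name="Noreen",
    motivation="find the perfect antelope-shaped plush",
    interests=["soft furnishings", "wildlife"],
    backstory="A regular customer with very particular taste.",
    example_lines=["Ooh, is that new stock?", "It simply must be antelope-shaped."],
)
prompt = build_system_prompt(noreen)
```

The point of the sketch is the division of labor: every field in the sheet is authored by a writer, and the model only improvises within it.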

Limitations of AI NPCs

Despite its advantages, generative AI in NPCs has its limitations. One major concern is unpredictability: an NPC’s behavior can become so erratic that it frustrates the player. AI can also hallucinate answers, so an NPC could say something that’s wrong or that doesn’t exist in the game’s world.

Continuously improving the AI engine will help mitigate unpredictable NPCs, Yichao believes. Players can also rate the characters’ responses, which provides data to help improve the characters’ behavior. Plus, Jam & Tea claims to have put guardrails in place to prevent inappropriate conversations. 
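Jam & Tea hasn’t described its feedback pipeline, but a player-rating loop of the kind mentioned above could be as simple as aggregating scores per NPC and flagging low performers for a designer to review. The class and sample data below are a hypothetical sketch, not the studio’s system:

```python
from collections import defaultdict

class ResponseRatings:
    """Aggregate per-NPC player ratings (1-5) so low scorers can be reviewed."""

    def __init__(self) -> None:
        self._scores: dict[str, list[int]] = defaultdict(list)

    def rate(self, npc: str, score: int) -> None:
        if not 1 <= score <= 5:
            raise ValueError("rating must be between 1 and 5")
        self._scores[npc].append(score)

    def average(self, npc: str) -> float:
        scores = self._scores[npc]
        return sum(scores) / len(scores) if scores else 0.0

    def flagged(self, threshold: float = 2.5) -> list[str]:
        """NPCs whose average rating falls below the threshold."""
        return [npc for npc in self._scores if self.average(npc) < threshold]

ratings = ResponseRatings()
ratings.rate("Noreen", 5)
ratings.rate("Noreen", 4)
ratings.rate("Gus", 2)
ratings.rate("Gus", 1)
```

In a real pipeline, the flagged NPCs’ transcripts would feed back into prompt revisions or fine-tuning data rather than just a review list.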

Players are still encouraged to be creative, allowing for inventive and spontaneous interactions. For example, instead of helping a customer, players can choose other activities, like playing hide and seek — a real scenario that occurred during playtesting.

“Our lead engineer was playtesting one night and went up to the NPCs and just said, ‘I’m bored.’ And the NPC responded by saying, ‘Well, why don’t we play a game? Let’s play hide and seek.’ And so the other NPCs heard and said, ‘Oh, we’re playing too,’” shared co-founder and CTO Aaron Farr. The NPCs proceeded to follow the rules of the game, with one seeker walking throughout the store to find all the hiders. 

“None of that was programmed; all of that was emergent behavior. That is part of the delight of when we have what a player wants to do combined with its experience to modify the experience in real-time,” added Farr, a former engineering leader at Riot Games and Singularity 6. 

The company has been experimenting with various large language models (LLMs) throughout the testing phase, including OpenAI’s models, Google’s Gemma, Mistral AI’s models and Meta’s Llama, among other open models. It hasn’t yet decided which LLM the final version of the game will use, but it is fine-tuning the model to produce responses that are better and more “in character.”

Generate items out of thin air 

Jam & Tea’s AI engine goes beyond dialogue generation. Players can also interact with any object in the game and state their intentions with that object, such as picking it up or dismantling it for parts. They can even create items from scratch. Depending on what they want to do, the game interprets that intention and determines if they’re successful or not. 
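The game presumably uses an LLM to interpret free-text intent, but the overall shape of the mechanic (map an intent to an action type, then decide whether it succeeds) can be illustrated with a deliberately naive keyword matcher and a probability roll. The verbs and probabilities here are invented for illustration:

```python
import random

# Invented verb -> success-probability table; the real game presumably uses an
# LLM to classify free-text intent rather than keyword matching.
ACTIONS = {
    "pick up": 0.95,
    "dismantle": 0.60,
    "create": 0.50,
}

def resolve_intent(text: str, rng: random.Random) -> tuple[str, bool]:
    """Map free-text player intent to an action type and roll for success."""
    lowered = text.lower()
    for verb, success_prob in ACTIONS.items():
        if verb in lowered:
            return verb, rng.random() < success_prob
    return "unknown", False
```

Passing in an explicit `random.Random` keeps the roll reproducible in tests; a shipping game would likely weight the roll by context (the item, the NPC’s mood, prior actions) rather than a flat per-verb probability.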

In a demo shown to TechCrunch, Yichao interacted with an NPC named Noreen, who asked for an antelope-shaped plush. He then typed a command into an action box and retrieved a pillow resembling an antelope from a crate. The game recognized his action as successful and added the item to his inventory. 

Because the item didn’t previously exist in the game, players won’t actually see an antelope-shaped plush appear; it simply shows up in the player’s inventory as a default image of a pillow. If the player wants to perform an action, like sitting in a chair, a notification appears on the screen indicating that the action was performed.

“One of the things that’s really exciting about this technology is it allows for open-ended creative expression. Like, I can take a piece of meat and say, what if I put it in the bowl and I make a delicious fish stew? We might not have a fish stew [image], but one of the things that I’m working with our artists on is coming up with a creative ability to represent that item in a way that’s satisfying in the world and allows the player’s imagination to fill in some of those blanks, and gives players maximum creative freedom to make things that are unexpected,” Yichao said.  

AI technology won’t be used for 2D or 3D asset generation; human artists will create the images.

Image Credits: Jam & Tea

Retail Mage is a relatively basic game compared to others, and the company promises that the launch version will be more advanced than the test build TechCrunch saw during the demo.

Jam & Tea says the game is primarily intended to demonstrate the technology as the company continues to experiment. Beyond Retail Mage, it is also developing another game — internally referred to as “Project Emily” — that will showcase its broader ambitions, with more environments and a more sophisticated storyline.

The startup’s scrappy team of eight has a lot of work ahead to reach the level of bigger gaming companies. However, taking action now while there is momentum allows the company to adapt and grow as AI models advance. 

Jam & Tea raised $3.15 million in seed funding from London Venture Partners with participation from Sisu Game Ventures and 1Up Ventures. It plans to raise another round later this year. 

As for the business model, Jam & Tea will charge $15 to buy the game and offer extra game packs that players can purchase separately. It’ll launch on PCs initially, but the company aims to enable cross-platform functionality within the next few years.

Retail Mage is slated to be released to the public later this fall. 

Amazon extends generative AI-powered product listings to Europe

Concept illustration depicting online seller.

Image Credits: Worayuth Kamonsuwan via Getty / Worayuth Kamonsuwan via Getty Images

Amazon is bringing its generative AI listing smarts to more sellers, revealing today that those in France, Germany, Italy, Spain and the U.K. can now access tools designed to improve product listings by generating product descriptions, titles and associated details.

Additionally, sellers can “enrich” existing product listings by automatically adding missing information.

The launch comes nine months after Amazon first revealed plans to bring generative AI technology to sellers. The company hasn’t been overly forthcoming about which markets the tech will be available in, but it has largely been limited to the U.S. so far. That said, the company did quietly launch the tools in the U.K. earlier this month, according to an Amazon forum post.

In its blog post on Thursday, the company said it rolled out this feature in the U.K. and some EU markets “a few weeks ago,” and more than 30,000 of its sellers are apparently using these AI-enabled listing tools.

Amazon pitches these new tools as a way to help sellers list goods more quickly. Sellers can head to their List Your Products page as usual, where they can enter some relevant keywords that describe their product and simply hit the Create button to formulate the basis of a new listing. The seller can also generate a listing by uploading a photo via the Product Image tab.
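Amazon hasn’t detailed how the tool works internally, but the keyword-to-listing flow it describes can be sketched as prompt assembly plus a structured draft. The model call below is stubbed out, and every field format is an assumption for illustration:

```python
def draft_listing(keywords: list[str]) -> dict:
    """Build the prompt a listing generator might send, with the model call stubbed."""
    prompt = (
        "Write an e-commerce product listing (title, bullet points, description) "
        "for a product described by these keywords: " + ", ".join(keywords)
    )
    # Placeholder draft standing in for the model's response; a real integration
    # would send `prompt` to an LLM, parse its structured output, and then let
    # the seller review and edit every field before publishing.
    return {
        "prompt": prompt,
        "title": " ".join(k.capitalize() for k in keywords),
        "bullets": [f"Features {k}" for k in keywords],
        "description": f"A product offering {', '.join(keywords)}.",
    }

draft = draft_listing(["wireless", "earbuds"])
```

The seller-review step is the important part of the design: the draft is a starting point, not a finished listing.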

Amazon marketing image for generative AI-powered listings.
Image Credits: Amazon

Amazon will then magic up a product title, bullet points and a description that the seller can edit if they want to. However, given the propensity of large language models (LLMs) to hallucinate, it wouldn’t be prudent to post a listing unchecked — Amazon acknowledges as much by recommending that the seller review the copy “thoroughly” to ensure everything is correct.

“Our generative AI tools are constantly learning and evolving,” the company said on its U.K. forum two weeks back. “We’re actively developing powerful new capabilities to make generated listings more effective, and make it even easier for you to list products.”

Earlier this year, Amazon launched a new tool that allows sellers to generate product listings by posting a URL to their existing website. It’s not clear when, or if, Amazon will be extending this functionality to Europe or other markets outside the U.S.

The data question

While Amazon is no stranger to AI and machine learning across its vast e-commerce empire, bringing any form of AI to European markets raises some potential issues around regulation. There’s GDPR on the data privacy side for starters, not to mention the Digital Services Act (DSA) on the algorithmic risk side, with Amazon’s online store designated as a Very Large Online Platform (VLOP) for the purposes of ensuring transparency in the application of AI.

For context, Meta last week was forced to pause plans to train its AI on European users’ public posts. Amazon itself has faced the wrath of EU regulators in the past over its misuse of merchant data, when it was alleged that Amazon tapped non-public data from third-party sellers to benefit its own competing business as a retailer. And just this month, U.K. retailers hit Amazon with a £1.1 billion lawsuit over similar accusations.

While Amazon’s latest foray into generative AI is a different proposition, its LLMs have to be trained on some sort of data — what data this is, precisely, isn’t clear. In its initial announcement last September, Amazon shared a quote from its VP of selection and catalog systems, Robert Tekiela, who referred to “diverse sources of information.”

With our new generative AI models, we can infer, improve, and enrich product knowledge at an unprecedented scale and with dramatic improvement in quality, performance, and efficiency. Our models learn to infer product information through the diverse sources of information, latent knowledge, and logical reasoning that they learn. For example, they can infer a table is round if specifications list a diameter or infer the collar style of a shirt from its image.

Robert Tekiela, VP of Amazon Selection and Catalog Systems

TechCrunch has reached out to Amazon for comment on these various issues, and will update when we hear back.

The RIAA's lawsuit against generative music startups will be the bloodbath AI needs

Wooden gavel with brass engraving band and golden alphabets AI on a round wood sound block. Illustration of the concept of legislation of artificial intelligence act and rules

Image Credits: Dragon Claws (opens in a new window) / Getty Images

Like many AI companies, music generation startups Udio and Suno appear to have relied on unauthorized scrapes of copyrighted works in order to train their models. This is by their own and investors’ admission, as well as according to new lawsuits filed against them by music companies. If these suits go before a jury, the trial could be both a damaging exposé and a highly useful precedent for similarly sticky-fingered AI companies facing certain legal peril.

The lawsuits, filed by the Recording Industry Association of America (RIAA), put us all in the uncomfortable position of rooting for the RIAA, which for decades has been the bogeyman of digital media. I myself have received nastygrams from them! The case is simply that clear.

The gist of the two lawsuits, which are extremely similar in content, is that Suno and Udio (strictly speaking, Uncharted Labs doing business as Udio) indiscriminately pillaged more or less the entire history of recorded music to form datasets, which they then used to train a music-generating AI.

And here let us quickly note that these AIs don’t “generate” so much as match the user’s prompt to patterns from their training data and then attempt to complete that pattern. In a way, all these models do is perform covers or mashups of the songs they ingested.

That Suno and Udio did ingest said copyrighted data seems, for all intents and purposes (including legal ones), very likely. The companies’ leadership and investors have been unwisely loose-lipped about the copyright challenges of the space.

They have admitted that the only way to create a good music generation model is to ingest a large amount of high-quality music. It is very simply a necessary step for creating machine learning models of this type.

Then they said that they did so without the permission of music labels. Investor Antonio Rodriguez of Matrix Partners told Rolling Stone just a few months ago:

Honestly, if we had deals with labels when this company got started, I probably wouldn’t have invested in it. I think that they needed to make this product without the constraints.

The companies told the RIAA’s lawyers that they believe the media it has ingested falls under fair-use doctrine — which fundamentally only comes into play in the unauthorized use of a work. Now, fair use is admittedly a complex and hazy concept in idea and execution, but the companies’ use does appear to stray somewhat outside the intended safe harbor of, say, a seventh grader using a Pearl Jam song in the background of their video on global warming.

To be blunt, it looks like these companies’ goose is cooked. They might have hoped that they could take a page from OpenAI’s playbook, using evasive language and misdirection to stall their less deep-pocketed critics, like authors and journalists. (If, by the time the AI companies’ skulduggery is revealed, they’re the only option for distribution, it no longer matters.)

But it’s harder to pull off when there’s a smoking gun in your hand. And unfortunately for Udio and Suno, the RIAA says in its lawsuit that it has a few thousand smoking guns and that songs it owns are clearly being regurgitated by the music models. Its claim: that whether Jackson 5 or Maroon 5, the “generated” songs are lightly garbled versions of the originals — something that would be impossible if the original were not included in the training data.

The nature of LLMs — specifically, their tendency to hallucinate and lose the plot the more they write — precludes regurgitation of, for example, entire books. This has likely mooted a lawsuit by authors against OpenAI, since the latter can plausibly claim the snippets its model does quote were grabbed from reviews, first pages available online and so on. (The latest goalpost move is that they did use copyright works early on but have since stopped, which is funny because it’s like saying you only juiced the orange once but have since stopped.)

What you can’t do is plausibly claim that your music generator only heard a few bars of “Great Balls of Fire” and somehow managed to spit out the rest word for word and chord for chord. Any judge or jury would laugh in your face, and with luck a court artist will have their chance at illustrating that.


This is not only intuitively obvious but legally consequential as well, as the re-creation of entire works (garbled, but quite obviously based on the originals) opens up a new avenue for relief. If the RIAA can convince the judge that Udio and Suno are doing real and major harm to the business of the copyright holders and artists, it can ask the court to shut down the AI companies’ whole operation at the outset of the trial with an injunction.

Opening paragraphs of your book coming out of an LLM? That’s an intellectual issue to be discussed at length. Dollar-store “Call Me Maybe” generated on demand? Shut it down. I’m not saying it’s right, but it’s likely.

The predictable response from the companies has been that the system is not intended to replicate copyrighted works: a desperate, naked attempt to offload liability onto users under DMCA-style safe harbor provisions. That is, the same way Instagram isn’t liable if you use a copyrighted song to back your Reel. Here, the argument seems unlikely to gain traction, partly because of the aforementioned admissions that the companies themselves ignored copyright to begin with.

What will be the consequence of these lawsuits? As with all things AI, it’s quite impossible to say ahead of time, since there is little in the way of precedent or applicable, settled doctrine.

My prediction is that the companies will be forced to expose their training data and methods, these things being of clear evidentiary interest. And if this evidence shows that they are indeed misusing copyrighted material, we’ll see an attempt to settle or avoid trial, and/or a speedy judgment against Udio and Suno. It’s likely that at least one of the two will attempt to continue onward, using legal (or at least legal-adjacent) sources of music, but the resulting model would (by their own standards for training data) almost certainly result in a huge step down in quality, and users would flee.

Investors? Ideally, they’ll lose their shirts, having placed their bets on something that was in all likelihood illegal and certainly unethical, and not just in the eyes of nebbish author associations but according to the legal minds at the infamously and ruthlessly litigious RIAA.

The consequences may be far-reaching: If investors in a hot new generative media startup suddenly see a hundred million dollars vaporized due to the fundamental nature of generative media, suddenly a different level of diligence will seem appropriate.

Companies may learn from the trial or settlement documents what can be said — or perhaps more importantly, what should not be said — to avoid liability and keep copyright holders guessing.

Though this particular suit seems almost a foregone conclusion, it will not be a playbook for prosecuting or squeezing settlements out of other generative AI companies but an object lesson in hubris.

It’s good to have one of those every once in a while, even if the teacher happens to be the RIAA.

Music video-sharing app Popster uses generative AI and lets artists remix videos

Popster splash screen

Image Credits: Popster

As more music streaming apps and creation tools emerge to compete for users’ attention, social music-sharing app Popster is getting two new features to grow its user base: an AI image generator for cover art and a collaboration capability where artists can remix another user’s song. 

Initially launched last year as a song-creation tool and music video platform, Popster allows artists to engage with other musicians, create original songs and music videos, and share them on social media. Users can record video and voice directly in the app and add stickers and color backgrounds. The app also offers a selection of vocal effects (created in-house) and a community section for artists to interact with each other.

The app has, naturally, jumped on the generative AI bandwagon as well. For instance, it provides ways for artists to generate ideas for lyrics as well as create new beats to record vocals on top. (Popster also uses AI tech to enhance the audio if there’s background noise.) 

Image Credits: Popster

One notable AI-powered tool is the “Add a beat” feature. Users can select a genre (Lofi Hip Hop, R&B, Indie Pop, Slow Ballad and so on) and a vibe like “Smooth” or “Normal” to compose a backing track that singers can record their vocals over.

Popster uses Mubert’s library of royalty-free pre-made tracks, distinguishing itself from AI music apps Udio and Suno, both of which recently faced lawsuits for allegedly using copyrighted music without authorization. 

“The issue with AI right now is that many people create songs that are trained from original songs, so you don’t know who is the original creator, and there’s not this concept of creativity,” co-founder and CEO Themis Drakonakis told TechCrunch. “We believe that if you put AI next to the artist as a creative partner, you can experiment with [different sounds], unlock different ideas, and get your creativity to another level.” 

Image Credits: Popster

Popster’s new artwork generator, “Albums,” is the newest addition to its generative AI tools (this one powered by OpenAI). In addition to being able to record and upload videos, Popster now allows artists to enter a prompt to generate an image that can be displayed like a sticker overlay on top of an artist’s short-form video. This adds an extra layer of sophistication for up-and-coming artists trying to introduce their new songs to the world.

Another of Popster’s new features is its take on TikTok’s “Stitch” and “Duet” tools, which artists frequently use to combine their videos with other creators’ to add vocals, harmonize or play instruments. Popster’s new “Mashup” feature lets artists create remixes and collaborate with other artists: users can click the “Mashup” button under another person’s video and record their own, which will appear side by side.

Popster co-founders Themis Drakonakis (left) and Sotiris Kaniras (right)
Image Credits: Popster

Popster is still in its early days, with only a few thousand users, but its latest features may be what it needs to attract more people. So far, nearly 10,000 original songs have been created on the app, and Drakonakis told us that users spend an average of 1.5 hours in the app daily.

The startup was co-founded by Drakonakis and Sotiris Kaniras (CTO). They previously created three other apps: Nup, an anonymous chat app; Self’it, a location-based photo-sharing app; and UniPad, a collaboration app for college students. 

Popster raised $280,000 from the Realize Tech Fund and is in the midst of raising a pre-seed funding round, which will help it grow its team and enhance its video server. Other future plans include launching paid features and teaming up with music labels. 

The app is available for download on the App Store.

Updated 7/3/24 at 3:30 pm ET with the correction that the beat generator is not powered by OpenAI. Popster also doesn’t have an app for Android devices.

CIOs' concerns over generative AI echo those of the early days of cloud computing

Group of employees standing in futuristic environment.

Image Credits: gremlin / Getty Images

When I attended the MIT Sloan CIO Symposium in May, listening to CIOs talk about the latest technology — in this case generative AI — reminded me of the same symposium around 2010, when the talk was all about the cloud.

It was notable how similar the concerns over AI were to the ones that I heard about the fledgling cloud all those years ago: Companies were concerned about governance (check), security (check) and responsible use of a new technology (check).

But 2010 was just at the edge of the consumerization of IT, when workers wanted the same kind of experience at work that they had at home. Soon, they would resort to “shadow IT” to find those solutions on their own when IT said no (and no was the default in those days). It was easy enough for employees to go off on their own unless things went into total lockdown.

Today, CIOs recognize if they just say no to generative AI, employees are probably going to find a way to use these tools anyway. There are plenty of legitimate concerns when it comes to this technology — like hallucinations or who owns the IP — but there are also concerns about security, compliance and controls, especially around data, that large organizations demand and require.

But CIOs speaking at the conference were much more realistic than they had been 15 years ago, even if they had similar concerns.

“You know, everything’s out there and democratized,” said Mathematica CIO Akira Bell, speaking on a panel called “Sustaining Competitive Advantage in the Age of AI.”

“I think somebody else this morning already said, ‘You know, we can’t control this moment.’ We cannot and don’t want to be ‘the agents of no,’ to tell everybody what they can and cannot do, but what we can do is make sure people understand the responsibility they have as actors and users of these tools.”

Bell said that today, instead of saying no, she’s pushing responsible use of the technology and looking for ways to enhance their customers’ experience with AI. “So one is about governing, making sure our data is ready to be used, making sure our employees understand what best practices exist as they go on and use them.”

She said that the second piece is really thinking about how they use generative AI to enhance their core capabilities, and how they might use it on behalf of clients to create or amplify or change existing service offerings to their customers.

Bell said security must also be considered; all of these things matter. Her organization can offer guidance on how to use these tools in a way that is consistent with the values of the company without shutting down access.

Angelica Tritzo, CIO at GE Vernova, a new spinout from GE focused on alternative energy, is taking a deliberate approach to implementing generative AI. “We have a number of pilots in different maturity stages. We probably, like many others, do not fully understand the full potential, so the cost and the benefit is not always fully aligned,” Tritzo told TechCrunch. “We are finding our way with all the pieces of technology, how much to partner with others versus what we need to do ourselves.” But the process is helping her learn what works and what doesn’t and how to proceed while helping employees get familiar with it.

Chris Bedi, who was CDIO (chief digital information officer) at ServiceNow, said that things will change in the coming years as employees start demanding access to AI tools. “From a talent standpoint, as organizations look to retain talent, which is a hot topic, it doesn’t matter what job function, people want their job talent to stay. I think it’ll be unthinkable to ask your company employees to do their jobs without GenAI,” Bedi told TechCrunch. What’s more, he believes the talent will start demanding it and question why you would want them to do work manually. (Bedi’s title recently changed to chief customer officer.)

To that end, Bedi says his company is committed to teaching its employees about AI and how to create an AI-literate workforce because people won’t necessarily understand without guidance how to make best use of this technology.

“We created some learning pathways, so everybody in the company had to take their AI 101,” he said. “We created that and selectively [levels] 201 and 301 because we know the future is AI, and so we have to get our whole workforce comfortable with it,” he said.

All of this suggests that while the concerns may be the same as they were in the last wave of technological change, IT executives have perhaps learned some lessons along the way. They understand now that you can’t just lock it down. Instead they have to find ways to help employees use generative AI tools safely and effectively because if they don’t, employees will probably start using them anyway.

Artists' lawsuit against generative AI makers can go forward, judge says

AI text on illuminated background

Image Credits: Eugene Mymrin / Getty Images

A class action lawsuit filed by artists who allege that Stability, Runway and DeviantArt illegally trained their AIs on copyrighted works can move forward, but only in part, the presiding judge decided on Monday. In a mixed ruling, several of the plaintiffs’ claims were dismissed while others survived, meaning the suit could end up at trial. That’s bad news for the AI makers: Even if they win, it’s a costly, drawn-out process where a lot of dirty laundry will be put on display. And they aren’t the only companies fighting off copyright claims — not by a long shot.

Procreate takes a stand against generative AI, vows to never incorporate the tech into its products

scene created using iPad design app Procreate

Image Credits: Procreate

Popular iPad design app Procreate is coming out against generative AI, and has vowed never to introduce generative AI features into its products. The company said on its website that although machine learning is a “compelling technology with a lot of merit,” the current path that generative AI is on is wrong for its platform. 

Procreate goes on to say that it’s not chasing a technology that is a threat to human creativity, even though this may make the company “seem at risk of being left behind.”

Procreate CEO James Cuda released an even stronger statement against the technology in a video posted to X on Monday. 

“I really f****** hate generative AI,” Cuda said in the video. “I don’t like what’s happening in the industry, and I don’t like what it’s doing to artists. We’re not going to be introducing any generative AI into our products. Our products are always designed and developed with the idea that a human will be creating something.”

The company’s stance has attracted widespread praise from digital artists online, many of whom are unhappy with the way other digital art and illustration apps have embraced the technology. 

For instance, illustration app Clip Studio Paint walked back its plans to release an image generator tool after facing backlash from its user base back in 2022. 

Adobe, which arguably has the most popular suite of design tools, has released several generative AI features into its products. In addition, Adobe recently came under fire after its updated terms of service seemed to imply that it would train AI models on users’ content. The company later had to clarify that it doesn’t train AI models on customers’ content. 

At a time when digital art platforms are embracing AI left and right, it’s interesting to see a popular app go against the crowd. Given that Procreate’s announcement has led to significant praise from artists and designers, it will be interesting to see if other companies follow suit.

“We don’t exactly know where this story is going to go, how it ends. But, we believe that we’re on the right path, supporting human creativity,” Cuda said.

Artists' lawsuit against generative AI makers can go forward, judge says

AI text on illuminated background

Image Credits: Eugene Mymrin / Getty Images

A class action lawsuit filed by artists who allege that Stability, Runway and DeviantArt illegally trained their AIs on copyrighted works can move forward, but only in part, the presiding judge decided on Monday. In a mixed ruling, several of the plaintiffs’ claims were dismissed while others survived, meaning the suit could end up at trial. That’s bad news for the AI makers: Even if they win, it’s a costly, drawn-out process where a lot of dirty laundry will be put on display. And they aren’t the only companies fighting off copyright claims — not by a long shot.

Amazon extends generative AI-powered product listings to Europe

Concept illustration depicting online seller.

Image Credits: Worayuth Kamonsuwan via Getty / Worayuth Kamonsuwan via Getty Images

Amazon is bringing its generative AI listing smarts to more sellers, revealing today that those in France, Germany, Italy, Spain and the U.K. can now access tools designed to improve product listings by generating product descriptions, titles and associated details.

Additionally, sellers can “enrich” existing product listings by automatically adding missing information.

The launch comes nine months after Amazon first revealed plans to bring generative AI technology to sellers. The company hasn’t been overly forthcoming about which markets the tech will be available in, but it has largely been limited to the U.S. so far. That said, the company did quietly launch the tools in the U.K. earlier this month, according to an Amazon forum post.

In its blog post on Thursday, the company said it rolled out this feature in the U.K. and some EU markets “a few weeks ago,” and more than 30,000 of its sellers are apparently using these AI-enabled listing tools.

Amazon pitches these new tools as a way to help sellers list goods more quickly. Sellers can head to their List Your Products page as usual, where they can enter some relevant keywords that describe their product and simply hit the Create button to formulate the basis of a new listing. The seller can also generate a listing by uploading a photo via the Product Image tab.

Amazon marketing image for generative AI-powered listings.
Image Credits: Amazon

Amazon will then magic up a product title, bullet points and a description that the seller can edit if they want to. However, given the propensity for large language models (LLMs) to hallucinate, it wouldn’t be prudent to post a listing unchecked — Amazon acknowledges that point by recommending that the seller review the copy “thoroughly” to ensure everything is correct.

“Our generative AI tools are constantly learning and evolving,” the company said on its U.K. forum two weeks back. “We’re actively developing powerful new capabilities to make generated listings more effective, and make it even easier for you to list products.”

Earlier this year, Amazon launched a new tool that allows sellers to generate product listings by posting a URL to their existing website. It’s not clear when, or if, Amazon will be extending this functionality to Europe or other markets outside the U.S.

The data question

While Amazon is no stranger to AI and machine learning across its vast e-commerce empire, bringing any form of AI to European markets raises some potential issues around regulation. There’s GDPR on the data privacy side for starters, not to mention the Digital Services Act (DSA) on the algorithmic risk side, with Amazon’s online store designated as a Very Large Online Platform (VLOP) for the purposes of ensuring transparency in the application of AI.

For context, Meta last week was forced to pause plans to train its AI on European users’ public posts. Amazon itself has faced the wrath of EU regulators in the past over its misuse of merchant data, when it was alleged that Amazon tapped non-public data from third-party sellers to benefit its own competing business as a retailer. And just this month, U.K. retailers hit Amazon with a £1.1 billion lawsuit over similar accusations.

While Amazon’s latest foray into generative AI is a different proposition, its LLMs have to be trained on some sort of data — what data this is, precisely, isn’t clear. In its initial announcement last September, Amazon shared a quote from its VP of selection and catalog systems, Robert Tekiela, who referred to “diverse sources of information.”

With our new generative AI models, we can infer, improve, and enrich product knowledge at an unprecedented scale and with dramatic improvement in quality, performance, and efficiency. Our models learn to infer product information through the diverse sources of information, latent knowledge, and logical reasoning that they learn. For example, they can infer a table is round if specifications list a diameter or infer the collar style of a shirt from its image.

Robert Tekiela, VP of Amazon Selection and Catalog Systems

TechCrunch has reached out to Amazon for comment on these various issues, and will update when we hear back.