Picsart partners with Getty Images to develop a custom AI model


Image Credits: Pavlo Gonchar/SOPA Images/LightRocket / Getty Images

Picsart, a photo-editing startup backed by SoftBank, announced on Thursday that it’s partnering with Getty Images to develop a custom AI image model for its 150 million users. The company says the model will bring responsible AI imagery to the creators, marketers and small businesses that use its platform.

The model will be built from scratch and will be trained exclusively on Getty Images’ licensed creative content. The company says the partnership will enable Picsart subscribers to generate their own unique images, with full commercial rights. Users will be able to use any of Picsart’s editing tools to add to or customize the assets. 

By creating its own model trained exclusively on licensed content, Picsart plans to give its users access to safe AI creative tools at a time when there are rising concerns regarding AI-generated images and copyright issues.

Image Credits: Picsart x Getty Images AI Image

Picsart’s AI lab, PAIR, is building the model, which will also be accessible through the company’s own API services.

“Picsart offers endless customization, content, and editing tools for everything from social media ads to website graphics, and this partnership will enable commercially usable AI-generated imagery from a world-class brand,” said Picsart CEO and founder Hovhannes Avoyan, in a statement. “We are thrilled to partner with Getty Images, the most prestigious commercial library out there, to bring this to market.”

Picsart plans to launch the model later this year. 

The company is also integrating Getty Images video content into Picsart’s platform and making it available to Plus members.

Picsart isn’t the first startup Getty Images has partnered with on responsible AI imagery: it has also teamed up with AI image generator Bria and with Runway, a startup building generative AI for content creators.

Snap plans to add watermarks to images created with its AI-powered tools


Image Credits: Alexander Shatov/Unsplash

Social media service Snap said on Tuesday that it plans to add watermarks to AI-generated images on its platform.

The watermark is a translucent version of the Snap logo with a sparkle emoji, and it will be added to any AI-generated image exported from the app or saved to the camera roll.

The watermark, which is Snap’s logo with a sparkle, denotes AI-generated images created using Snap’s tools. Image Credits: Snap

On its support page, the company said removing the watermark from images will violate its terms of use. It’s unclear how Snap will detect the removal of these watermarks. We have asked the company for more details and will update this story when we hear back.

Other tech giants like Microsoft, Meta and Google have also taken steps to label or identify images created with AI-powered tools.

Currently, Snap allows paying subscribers to create or edit AI-generated images using Snap AI. Its selfie-focused feature, Dreams, also lets users spice up their pictures with AI.

In a blog post outlining its safety and transparency practices around AI, the company explained that it shows AI-powered features, like Lenses, with a visual marker that resembles a sparkle emoji.

Snap lists indicators for features powered by generative AI. Image Credits: Snap

The company said it has also added context cards to AI-generated images created with tools like Dreams to better inform users.

In February, Snap partnered with HackerOne to launch a bug bounty program aimed at stress-testing its AI image-generation tools.

“We want Snapchatters from all walks of life to have equitable access and expectations when using all features within our app, particularly our AI-powered experiences. With this in mind, we’re implementing additional testing to minimize potentially biased AI results,” the company said at the time.

Snapchat’s efforts to improve AI safety and moderation come after its “My AI” chatbot spurred controversy at launch in March 2023, when some users managed to get the bot to discuss sex, drinking and other potentially unsafe subjects. The company later rolled out controls in the Family Center that let parents and guardians monitor and restrict their children’s interactions with AI.