YouTube now lets you request removal of AI-generated content that simulates your face or voice

YouTube logo

Image Credits: Olly Curtis/Future / Getty Images

Meta is not the only company grappling with the rise in AI-generated content and how it affects its platform. YouTube also quietly rolled out a policy change in June that allows people to request the takedown of AI-generated or other synthetic content that simulates their face or voice. The change lets people request the removal of this type of AI content under YouTube’s privacy request process, and it expands on the responsible AI agenda the company first introduced in November.

Instead of requesting the content be taken down for being misleading, like a deepfake, YouTube wants the affected parties to request the content’s removal directly as a privacy violation. According to YouTube’s recently updated Help documentation on the topic, the company requires first-party claims, with a handful of exceptions, such as when the affected individual is a minor, doesn’t have access to a computer, or is deceased.

Simply submitting the request for a takedown doesn’t necessarily mean the content will be removed, however. YouTube cautions that it will make its own judgment about the complaint based on a variety of factors.

For instance, it may consider whether the content is disclosed as being synthetic or made with AI, whether it uniquely identifies a person, and whether the content could be considered parody, satire, or something else of value in the public interest. The company additionally notes that it may consider whether the AI content features a public figure or other well-known individual, and whether or not it shows them engaging in “sensitive behavior” like criminal activity, violence, or endorsing a product or political candidate. The latter is particularly concerning in an election year, when AI-generated endorsements could potentially swing votes.

YouTube says it will also give the content’s uploader 48 hours to act on the complaint. If the content is removed before that time has passed, the complaint is closed. Otherwise, YouTube will initiate a review. The company also warns users that removal means fully removing the video from the site and, if applicable, removing the individual’s name and personal information from the video’s title, description, and tags. Users can also blur out the faces of people in their videos, but they can’t simply make the video private to comply with the removal request, as the video could be set back to public status at any time.
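The complaint lifecycle described above can be modeled as a tiny state machine: removal within the 48-hour window closes the complaint, otherwise YouTube begins its own review. This is an illustrative sketch only; the function and state names are hypothetical, not part of any YouTube API.

```python
from datetime import datetime, timedelta, timezone

UPLOADER_WINDOW = timedelta(hours=48)  # per YouTube's stated policy

def complaint_status(filed_at, removed_at=None, now=None):
    """Hypothetical model of the privacy-complaint flow:
    the uploader gets 48 hours to act; removing the content in time
    closes the complaint, otherwise YouTube initiates a review."""
    now = now or datetime.now(timezone.utc)
    deadline = filed_at + UPLOADER_WINDOW
    if removed_at is not None and removed_at <= deadline:
        return "closed"             # uploader removed the content in time
    if now < deadline:
        return "awaiting_uploader"  # window still open
    return "under_review"           # window expired; YouTube reviews

t0 = datetime(2024, 7, 1, tzinfo=timezone.utc)
print(complaint_status(t0, now=t0 + timedelta(hours=12)))        # awaiting_uploader
print(complaint_status(t0, removed_at=t0 + timedelta(hours=5)))  # closed
print(complaint_status(t0, now=t0 + timedelta(hours=72)))        # under_review
```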

The company didn’t broadly advertise the change in policy, though in March it introduced a tool in Creator Studio that allowed creators to disclose when realistic-looking content was made with altered or synthetic media, including generative AI. It also more recently began a test of a feature that would allow users to add crowdsourced notes that provide additional context on videos, like whether it’s meant to be a parody or if it’s misleading in some way.

YouTube is not against the use of AI, having already experimented with generative AI itself, including with a comments summarizer and conversational tool for asking questions about a video or getting recommendations. However, the company has previously warned that simply labeling AI content as such won’t necessarily protect it from removal, as it will still have to comply with YouTube’s Community Guidelines.

In the case of privacy complaints over AI material, YouTube won’t jump to penalize the original content creator.

“For creators, if you receive notice of a privacy complaint, keep in mind that privacy violations are separate from Community Guidelines strikes and receiving a privacy complaint will not automatically result in a strike,” a company representative last month shared on the YouTube Community site where the company updates creators directly on new policies and features.

In other words, YouTube’s Privacy Guidelines are different from its Community Guidelines, and some content may be removed from YouTube as the result of a privacy request even if it does not violate the Community Guidelines. While the company won’t apply a penalty, like an upload restriction, when a creator’s video is removed following a privacy complaint, YouTube tells us it may take action against accounts with repeated violations.

Updated, 7/1/24, 4:17 p.m. ET with more information about the actions YouTube may take for privacy violations.

Apple introduces AI-powered object removal in photos with the latest iOS update

In this photo illustration, the 'Apple' logo is displayed on a mobile phone screen in front of a computer screen displaying Apple Intelligence logo.

Image Credits: Hakan Nural/Anadolu / Getty Images

Apple released the new developer betas for iOS 18.1, iPadOS 18.1, and macOS 15.1 Sequoia. With this update, the company is launching new Apple Intelligence features, including the ability to remove objects from photos.

The feature, called Clean Up, lets users identify and remove an object from a photo without affecting the rest of the picture. The system uses AI to generate a background to fill in the space the object leaves behind. Apple says the system can even detect an object’s shadow or reflection and remove it while reconstructing the background.

Users can select an object using the smart detection feature to remove it with just one tap. People can also circle or brush over any unwanted objects to delete them from the image.
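Apple’s actual model is a proprietary generative system, but the underlying problem is the classic one of image inpainting: given a mask over the unwanted object, synthesize plausible pixels from the surroundings. Purely as a toy illustration of that idea, here is a minimal NumPy sketch of the classical diffusion approach, which repeatedly averages each masked pixel with its neighbors so the surrounding colors bleed inward.

```python
import numpy as np

def inpaint(image, mask, iters=200):
    """Naive diffusion inpainting: repeatedly replace masked pixels
    with the average of their four neighbors, so surrounding colors
    diffuse into the hole. `image` is a 2-D float array; `mask` is
    True where the removed object was."""
    out = image.copy()
    out[mask] = 0.0
    for _ in range(iters):
        # 4-neighbor average via shifted copies of the array
        avg = (np.roll(out, -1, 0) + np.roll(out, 1, 0) +
               np.roll(out, -1, 1) + np.roll(out, 1, 1)) / 4.0
        out[mask] = avg[mask]  # only masked pixels are updated
    return out

# Toy example: a flat gray background with a bright "object" in the middle.
img = np.full((9, 9), 0.5)
img[3:6, 3:6] = 1.0                 # the unwanted object
obj = np.zeros_like(img, dtype=bool)
obj[3:6, 3:6] = True
clean = inpaint(img, obj)
print(float(clean[4, 4]))           # converges toward the 0.5 background
```

Diffusion fills holes with smooth color but cannot invent texture or structure, which is why production tools like Clean Up or Magic Eraser rely on learned generative models instead.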

Apple’s rival Google made a similar feature called Magic Eraser available to all Google Photos users for free earlier this year.

In July, Apple rolled out the first set of Apple Intelligence features with the iOS 18.1 developer beta. These included writing tools, notification summaries for SMS and Mail, natural language search and memory creation in Photos, transcription for calls and for voice recordings in Notes, and summaries and smart replies in Mail. Apple Intelligence is available only to users whose device language is set to English and whose region is set to the U.S.
