YouTube Empowers Users to Remove AI-Generated Face and Voice Simulations
- YouTube has updated its policy to allow individuals to request the removal of AI-generated content that mimics their face or voice as a privacy violation, rather than for being misleading.
- This policy requires first-party claims, evaluates requests based on various factors, and will not automatically penalize the content creator for privacy complaints.
Meta isn’t alone when it comes to AI-generated content issues. YouTube has quietly introduced an update that lets individuals request the removal of AI-generated or synthetic material that imitates their face or voice. The change expands its privacy request process and builds on the responsible AI agenda it launched last November.
Under YouTube’s earlier policies, such content could be flagged for removal as misleading; these claims now fall under privacy violations instead. Requests must be submitted as first-party claims, with exceptions for minors, people without computer access, deceased individuals, and other special circumstances.
Submitting a takedown request doesn’t guarantee removal, however. YouTube evaluates each complaint against several factors: whether the content is labeled as synthetic, whether it uniquely identifies someone, and whether it qualifies as parody, satire, or public-interest material. It also considers whether the content features public figures or other well-known individuals and whether it depicts “sensitive behavior,” such as criminal activity, violence, or endorsing a product or political candidate. That last factor becomes especially important during election years, when AI-generated endorsements could sway how people vote.
YouTube gives the content uploader 48 hours to address a complaint; if the content is taken down within that window, the complaint is closed. Otherwise, YouTube begins an internal review, which can result in the video being removed or in specific details, such as names and other personal data, being edited out of titles, descriptions, and tags. Edits can also include blurring faces; simply making the video private, however, does not suffice, since it could be switched back to public later.
YouTube did not widely announce this policy change despite its significance. In March, however, it introduced a Creator Studio tool that helps creators identify content containing altered or synthetic media, including generative AI, and it more recently began testing a feature that lets users add crowdsourced notes flagging videos as parody or potentially misleading.
YouTube does not view AI negatively and has tested tools of its own, such as an AI comment summarizer and a conversational tool that answers questions about videos and offers recommendations. Nonetheless, YouTube makes clear that labeling AI content does not exempt it from removal; it must still comply with the Community Guidelines. And for the creators targeted by privacy complaints, penalties do not automatically follow.
“For creators, if you receive notice of a privacy complaint, keep in mind that privacy violations are separate from Community Guidelines strikes and receiving a privacy complaint will not automatically result in a strike,” a company representative explained on the YouTube Community site.
YouTube’s Privacy Guidelines differ significantly from its Community Guidelines: content may be removed over a privacy complaint even if it does not violate the latter. While no penalties such as upload restrictions apply when content is removed after a privacy complaint, YouTube may still take action against accounts with repeated violations.