YouTube is under renewed scrutiny after experts warned that a tool designed to help creators remove AI-generated deepfake videos also grants Google the ability to use those creators' biometrics under its broad privacy policy. The concern surfaced as the platform expands its likeness detection system, which flags manipulated videos that misuse a creator's face.

YouTube told CNBC that Google has never trained artificial intelligence models on biometric data from creators and said the company is reviewing the wording on the sign-up form to prevent misunderstandings. However, YouTube confirmed that it will not adjust the underlying policy that links the feature to Google's wider data practices.

The disagreement exposes a growing divide inside Alphabet as Google accelerates its AI efforts while YouTube works to protect trust with creators and rights holders who rely on the platform for their livelihoods.

The likeness detection feature, launched in October, scans newly uploaded videos across YouTube to identify whether a creator's face has been altered or generated by artificial intelligence. Creators can request the removal of flagged videos. To use the tool, they must upload a government-issued ID and a biometric video of their face, which experts say introduces potential long-term risk. Biometrics are physical measurements used to confirm identity.

Because the tool is tied to Google’s privacy policy, experts say the company could use public biometric content to help train its AI models. The policy allows the use of public data to support AI development and product features.

“Likeness detection is a completely optional feature, but does require a visual reference to work,” YouTube spokesperson Jack Malon said in a statement to CNBC. “Our approach to that data is not changing. As our Help Center has stated since the launch, the data provided for the likeness detection tool is only used for identity verification purposes and to power this specific safety feature.”

YouTube said it is "considering ways to make the in-product language clearer," though it did not specify when changes might appear.

Experts say they warned YouTube about the risks months ago. Dan Neely, CEO of Vermillio, said creators should be mindful of handing over sensitive data at a time when AI training material is viewed as “strategic gold.” He added, “Your likeness will be one of the most valuable assets in the AI era, and once you give that control away, you may never get it back.”

Others echoed similar concerns. Loti CEO Luke Arrigoni said the current policy creates “enormous” risks because attaching a name to a biometric signature gives companies the technical ability to generate synthetic versions of a person’s face.

Both Neely and Arrigoni said they would not advise their clients to enroll in the system at this time.

YouTube’s head of creator product, Amjad Hanif, said the tool was built to operate “at the scale of YouTube,” where hundreds of hours of footage are uploaded every minute. He said the feature will reach more than 3 million creators in the YouTube Partner Program by the end of January. “We do well when creators do well,” Hanif told CNBC.

The expansion comes as AI-generated videos continue to improve, raising new challenges for creators whose public visibility makes them particularly vulnerable. YouTube creator Mikhail Varshavski, known as Doctor Mike, said he now reviews dozens of deepfake clips each week through the tool.

Varshavski, who has over 14 million subscribers, said one deepfake promoted a “miracle” supplement on TikTok and could have endangered viewers who trust his medical expertise. “It obviously freaked me out,” he said. “To see someone use my likeness in order to trick someone into buying something they don’t need or that can potentially hurt them, scared everything about me in that situation.”

AI video generation platforms like Google’s Veo 3 and OpenAI’s Sora are making it easier to create deepfakes of public figures because their images often appear in training datasets. Veo 3 is trained on a subset of the more than 20 billion videos uploaded to YouTube, a volume that could include hundreds of hours from creators like Varshavski.

Deepfake activity has “become more widespread and proliferative,” Varshavski said. He noted that some channels now operate entirely on AI generated impersonations used to sell products or harass individuals.

Creators currently have no way to earn revenue from unauthorized use of their likeness, unlike YouTube's Content ID system, which lets rights holders monetize copyrighted content. Hanif said YouTube is exploring what such a model might look like for AI-manipulated likenesses in the future.

He added that takedown rates remain low, saying many creators choose not to remove flagged videos. Agents and rights advocates dispute that characterization, arguing the low numbers stem from confusion and limited awareness rather than comfort with AI content.

As YouTube pushes forward with AI tools and policy updates, concerns around biometric privacy signal a growing tension between innovation and the protection of creators whose identities form the core of their work.
