Advisers to Meta are urging the company to improve how it handles misleading videos created with artificial intelligence across its platforms.
The warning came from Meta’s Oversight Board, which criticized the company for leaving up an AI-generated video that falsely showed heavy damage in the Israeli city of Haifa during the conflict with Iran. The video circulated on Facebook and gained significant attention before moderators reviewed it.
The board said Meta should update its policies and label AI-generated content more aggressively, particularly during armed conflicts when misinformation can spread quickly. Members warned that the growing number of synthetic videos online is making it harder for the public to separate real footage from fabricated content.
Meta launched the Oversight Board in 2020 to review major content moderation decisions on platforms such as Facebook, Instagram, and Threads. Although the board only issues recommendations, it frequently disagrees with how the company handles sensitive cases.
According to the board, Meta’s current system relies heavily on users to disclose when their posts are generated with AI tools. If they do not, the company typically waits for complaints before deciding whether to add a label.
The board said this approach is not strong enough to manage the growing volume of AI-generated media, especially during global crises when misleading content spreads quickly and attracts high engagement.
The case began after a Facebook account that described itself as a news source posted the fabricated video last year. Despite several user complaints, Meta initially decided that the clip did not require removal or labeling because it did not directly risk physical harm.
The Oversight Board disagreed and said the video should have carried a high-risk AI label to help viewers understand that the footage was not real. Meta said it would apply a label to the video within seven days.