Artificial intelligence is beginning to reshape how information spreads during global conflicts. A new investigation by the BBC reveals that AI-generated videos depicting the Israel–Iran conflict are circulating widely across social media, sometimes attracting thousands of views despite being completely fabricated.
According to the investigation conducted by the broadcaster’s fact-checking unit BBC Verify, several viral clips showing missile strikes and explosions in Israeli cities were not real battlefield footage but AI-generated simulations.
Many of these clips have been repeatedly reposted by different accounts, turning fabricated scenes into widely shared content that looks increasingly authentic with each iteration.
The trend highlights a growing challenge for both news organizations and social platforms: distinguishing between real war coverage and convincing synthetic media.
BBC Verify analysts identified a number of AI-generated clips that appear to show missiles striking Tel Aviv, complete with dramatic explosion sounds and night-time cityscapes.
Despite appearing realistic, these videos were created using generative AI tools rather than recorded from real events.
One of the most widely circulated clips was reused in more than 300 social media posts across multiple platforms. Collectively, those posts accumulated tens of thousands of views and shares.
Because many users encounter the clips without context, the videos often appear to be genuine war footage.
When reposted repeatedly across different accounts, the synthetic videos can quickly take on the appearance of authentic eyewitness recordings.

Beyond misinformation, the BBC report highlights another troubling aspect of the trend: some social media creators appear to be profiting from AI-generated war content.
Accounts posting the videos frequently gain followers, views, and engagement that can translate into advertising revenue or platform incentives.
In practice, this means fabricated clips of missile attacks or explosions can become a form of viral content designed primarily to attract attention rather than document real events.
The approach mirrors broader patterns seen across social platforms, where dramatic or shocking visuals often spread faster than verified information.
When those visuals are generated by AI rather than captured by cameras, the potential for manipulation increases significantly.
The BBC investigation also found that some users attempted to verify the viral videos by asking AI chat assistants whether the footage was authentic.
In several cases, people turned to Grok, the AI assistant integrated into the social platform X.
However, the responses were not always accurate.
In some instances, the AI assistant incorrectly suggested that the videos were real. Rather than resolving uncertainty, those responses sometimes amplified confusion among users trying to verify the footage.
The episode illustrates one of the current limitations of AI assistants: they can struggle to determine whether a viral video is genuine without reliable metadata or verification sources.
The rise of AI-generated war content is forcing social media platforms to reconsider how they moderate synthetic media.
Some platforms have begun experimenting with labels for AI-generated videos or implementing rules that require creators to disclose synthetic content.
However, policies remain inconsistent across the industry.
According to the BBC report, the broadcaster contacted companies including TikTok and Meta to ask whether they would introduce additional safeguards for AI-generated war footage.
At the time the report was published, neither company had provided a response.
Generative AI tools have improved rapidly in recent years, making it possible to create highly realistic video simulations of events that never happened.
These systems can generate city skylines, explosions, smoke clouds, and nighttime lighting effects that resemble real footage captured by smartphones or security cameras.
When paired with dramatic sound effects and uploaded to fast-moving social media feeds, the clips can easily pass as authentic recordings for viewers scrolling quickly through their timelines.
The challenge becomes even greater during ongoing conflicts, when real footage is scarce or slow to emerge and audiences are eager for updates.
The spread of AI-generated conflict videos signals a broader shift in how misinformation may evolve in the coming years.
Traditional disinformation campaigns often relied on edited images or misleading captions. AI video tools now allow creators to generate entirely fictional scenes that mimic real events.
For journalists, researchers, and platform moderators, verifying footage increasingly requires forensic analysis, location verification, and comparison with known imagery.
Organizations like BBC Verify have begun developing new workflows specifically designed to identify synthetic media.
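One basic step in such workflows is reverse-matching: checking whether a "new" clip is byte-for-byte identical to a file that has already been debunked, which catches direct re-uploads of the same fabricated video. The sketch below is purely illustrative and is not BBC Verify's actual tooling; the function names and the digest set are hypothetical, and exact hashing only detects identical files, since reposts that recompress or crop a clip would require perceptual hashing or frame-level analysis instead.

```python
import hashlib

def file_sha256(path):
    """Return the SHA-256 hex digest of a file, read in chunks
    so large video files are not loaded into memory at once."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def is_known_fake(path, known_fake_digests):
    """Check a clip against a (hypothetical) set of digests of
    videos already identified as AI-generated fabrications."""
    return file_sha256(path) in known_fake_digests
```

A matching digest confirms an exact re-upload of a previously debunked file; a non-match proves nothing about authenticity, which is why this check only complements, rather than replaces, forensic and location-based verification.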
But as AI tools continue improving, experts expect the challenge of distinguishing authentic war reporting from fabricated content to become even more complex.
The emergence of AI-generated war videos illustrates how rapidly the information environment surrounding conflicts is changing.
While the technology can be used for entertainment, storytelling, or education, it also creates opportunities for manipulation and viral misinformation.
For audiences, the lesson is becoming clear: not every dramatic video circulating during a conflict is necessarily real.
As generative AI tools become more accessible, the responsibility to verify and question viral footage will increasingly fall on audiences themselves.