In recent months, the world has seen a surge in the use of artificial intelligence on media platforms. From AI news anchors and turning yourself into a Studio Ghibli character to the horrific "Grok undress her" spree promoting the sexual abuse of women, the AI boom has brought many banes with it: the latest is misinformation in war scenarios.
In the past five days, the United States of America, along with Israel, launched an attack on the Islamic Republic of Iran, an attack that has brought the West Asia region to a standstill and pushed it to the brink of a regional war.
As the conflict continues to escalate, social media remains flooded with images from Iran, Lebanon, Israel, the UAE, Qatar, Kuwait, Bahrain and other countries in the Middle East as the US, Israel and Iran continue to trade attacks.
Amid the conflict, Iran's newspaper Tehran Times made a post on social media platform X purportedly showing damage to an American radar system in Qatar from an Iranian drone strike.
However, an in-depth analysis by the Financial Times found that the image circulated by the official account of the Iranian daily had been altered using AI.
The FT analysis revealed that the picture was an AI-altered image of an area in Bahrain. While satellite imagery from Planet Labs did show damage to the American system, the image circulating online was fabricated.
Despite this, the post on Tehran Times garnered nearly one million views on the social media platform.
AI aiding misinformation
A similar spree was seen during the 12-day war between Israel and Iran in June 2025. As per a report by BBC, several AI videos boasting of Iran’s military capabilities, damage to Israeli sites and more were circulated.
Over in Israel, pro-Israel accounts also spread disinformation online, sharing old clips of protests and gatherings in Iran and claiming that Iranians were protesting against the Khamenei regime.
Online verification group GeoConfirmed highlighted the rise in fake and unrelated videos being shared under the pretext of the 12-day war between the two rival nations. Now, during the current conflict, the group is back at work.
Fake satellite imagery, videos and other false images also circulated during the May 2025 standoff between India and Pakistan, the war in Ukraine, and Israel's war on Gaza.
GeoConfirmed's most recent post debunks a fake tweet claiming that the strike on the Minab girls' school was a failed IRGC launch rather than a US or Israeli attack.
“This claim, with almost 11k likes, 5k retweets and 750.000+ views is WRONG based on GeoConfirmed geolocations,” the group wrote on X.
In the news industry, amid the rush to be first, fact-checking tends to take a back seat. Multiple AI-generated videos have aired on TV news channels; one such video purportedly shows an Iranian ballistic missile hitting Tel Aviv in Israel.
However, Indian journalist, fact-checker and Alt-News founder Mohammed Zubair debunked the clip, revealing that it had been generated using artificial intelligence.
AI-generated videos and images are not the only problem during wartime. Old footage from unrelated incidents is also shared, creating more panic among the general public.
One such video on X alleged that Tel Aviv had been struck by Iranian missiles, showing collapsed buildings and broken roads. However, it was soon learnt that the viral clip was in fact from the 2024 Turkey earthquakes.
With easy access to AI tools such as ChatGPT, Gemini, Grok and more, anyone can alter an image, generate a new image or video with a simple prompt.
Falsifying satellite images
Fake images and videos aside, a new problem has arisen with altered satellite images. Speaking to the FT, Brady Africk, an independent open-source intelligence researcher and director of media relations at the American Enterprise Institute, said a manipulated satellite image is more difficult to identify.
Satellite images carry a large trust factor because of the complex technology used to capture them and the nature of their content.
However, even satellite imagery has not been spared by AI, and its manipulation could make conflict mapping only more difficult in the future.
“With a satellite image, you’re looking at buildings, roads, terrain — things that don’t have these inherent cues. And most people have no idea what a genuine satellite image is supposed to look like from a specific sensor at a specific resolution,” Henk van Ess, an expert in online research methods and author of the Digital Digging newsletter, told the UK-based publication.
X takes action against AI-generated content
In response to the surge of fake videos on the social media platform, X’s head of product Nikita Bier said the Musk-led company would step up efforts to curb AI-generated content.
“Starting now, users who post AI-generated videos of an armed conflict—without adding a disclosure that it was made with AI—will be suspended from Creator Revenue Sharing for 90 days. Subsequent violations will result in a permanent suspension from the program,” Bier wrote on X.
X has also ramped up its Community Notes feature, which helps fact-check viral videos and images on the platform. The feature also alerts users who interacted with a flagged post about any false information it may have spread.
Several countries have also taken action against the spread of false information and images. In the United Arab Emirates, Dubai police have warned against the spread of rumours and disinformation, adding that anyone found guilty would face a fine of no less than 200,000 dirhams.