Pakistan man hacked 31 accounts on X to post fake AI videos during US-Iran war

As the conflict between the United States, Israel and Iran escalates, social media platforms are grappling with a surge of misinformation, including AI-generated war videos. X revealed it recently dismantled a network operating from Pakistan that had been posting fabricated conflict footage.

A damaged car is removed following an Iranian missile barrage, amid the US-Israel conflict with Iran, in Bnei Brak, Israel, March 3, 2026. (Reuters)

Nikita Bier, head of product at X, said the platform identified a user in Pakistan who had been running a coordinated network of accounts spreading artificial intelligence-generated war videos.

“Last night, we found a guy in Pakistan that was managing 31 accounts posting AI war videos,” Bier wrote on X. “All were hacked and the usernames were changed on Feb 27 to ‘Iran War Monitor’ or some derivative.”

According to Bier, the accounts were quickly taken down as part of the platform’s growing effort to detect and curb coordinated misinformation campaigns. “We are getting much faster at detecting this – and also eliminating the incentive to do this,” he added.

His response came in reply to an X post containing a deepfake of an Iranian rocket striking a ship in Tel Aviv, Israel. The account's bio identified the person behind the video, Ahmed Hamdan, as a journalist from Gaza.

AI misinformation surges during conflict

The revelation comes amid a broader explosion of AI-driven misinformation during the ongoing West Asia crisis, in which the United States and Israel launched strikes on Iran, triggering retaliatory attacks that have spread across the region.

As military exchanges intensify, social media platforms have been flooded with images and videos claiming to show strikes and damage across Iran, Israel and other parts of the Middle East. However, investigators and fact-checkers say many of these posts contain manipulated or entirely fabricated content.

In one case highlighted by the Financial Times, a satellite image that circulated online, including on the official X account of Iran's Tehran Times, claimed to show damage to an American radar system in Qatar following an Iranian drone strike.

Analysis by the newspaper found the image had been altered using artificial intelligence. While genuine satellite imagery confirmed that the radar site had suffered damage, the widely shared image was actually an AI-modified picture of a location in Bahrain.

Despite being false, the post attracted nearly one million views on X and remained online for more than two days.

Fake images, recycled footage add to confusion

The current conflict is not the first time AI-generated media has spread widely during wartime.

During the 12-day Israel-Iran conflict in June 2025, several AI-generated videos claiming to show Iranian military strength and damage to Israeli infrastructure circulated online, according to a BBC report.

Pro-Israel accounts also shared misleading posts, including old videos of protests that were falsely presented as demonstrations against Iran’s leadership.

Verification group GeoConfirmed has repeatedly flagged fake or mislabelled clips during the current fighting as well. In one recent case, the group debunked a viral claim that the deadly strike on a girls’ school in Minab was caused by a failed Iranian Revolutionary Guard missile launch rather than an attack by the United States and Israel.

The misleading post had already garnered nearly 11,000 likes and more than 750,000 views before being challenged.

In other cases, old footage has resurfaced as supposed evidence of new attacks. One widely shared video claimed to show Iranian missiles hitting Tel Aviv, but investigators later determined that the clip was actually from the 2024 earthquakes in Turkey.

AI tools lowering barriers to misinformation

Experts say the rapid development of generative AI tools has dramatically lowered the barrier to producing convincing fake content.

“With a satellite image, you’re looking at buildings, roads, terrain – things that don’t have inherent cues,” said Henk van Ess, an expert in online research methods. “Most people have no idea what a genuine satellite image is supposed to look like from a specific sensor at a specific resolution.”

Brady Africk, an open-source intelligence researcher at the American Enterprise Institute, warned that manipulated satellite imagery could become a major problem for journalists and analysts trying to track conflicts.

“Satellite imagery can be manipulated just like other images. AI has made that tremendously easier,” Africk said.

Platforms and governments respond

In response to the spread of fake content, X said it is tightening enforcement against AI-generated war media posted without disclosure.

Under the new rules, users who post AI-generated videos of armed conflict without labelling them as such will be barred from the platform's Creator Revenue Sharing programme for 90 days. Repeat violations could result in permanent removal from the programme.

The company has also expanded the use of its “Community Notes” feature, which allows users to add context and fact-checks to misleading posts.

Governments have also begun warning against the spread of misinformation. In the United Arab Emirates, Dubai Police cautioned residents against sharing rumours or unverified images related to security incidents, saying offenders could face fines of at least 200,000 dirhams.
