While some of this content is being shared and promoted organically, it is also being boosted by Iran's professional disinformation agents. Last year, Radware, an Israeli cybersecurity firm,
documented how Tehran is using AI to enhance its networks of social media bots (e.g., creating AI-generated personas and deepfakes) and influence public opinion. Last Wednesday, the firm
released an update indicating that these networks have been deployed to support pro-Iranian narratives in the current war.
Several days ago, I
posted a video of the beaches of Tel Aviv that debunked the "Israel is destroyed" narrative by showing people resiliently socializing, exercising and playing volleyball. It quickly went viral, amassing millions of views, and drew a deluge of commenters claiming it was fake, outdated or AI-generated.
Some of these people seemed to be bad actors, claiming, impossibly, that they had seen the footage months or years before. Others seemed earnestly suspicious; they had simply lost the capacity to trust online content.
But then something strange happened: many commenters
asked AI chatbots to verify whether the video was authentic. When the chatbots, unequipped for this task, gave inconsistent, conjecture-laden answers, that was taken as evidence of forgery. So not only is AI flooding the internet with deepfakes; it is also being used as a shoddy fact-checker, and in this way it has found yet another means of eroding reality.