The troll farms and bot armies of Russia and Iran are taking over MAGA’s online world

A new report from the Network Contagion Research Institute (NCRI) details a coordinated influence operation by Russian and Iranian actors aimed at U.S. conservative audiences — especially online communities that identify with the MAGA movement. The campaign deploys inauthentic accounts, false-flag conspiracy narratives, and a handful of high-visibility American influencers to steer debate and widen ideological rifts.

From 1 May to 10 June 2025, researchers logged more than 675,000 posts promoting false-flag explanations for shootings, bombings and other violence. These claims usually allege that the U.S. government, Israel, or “globalists” staged events to tighten control or discredit conservatives. Most amplification appeared on X, Telegram and TikTok feeds with large pro-Trump followings.

The surge almost always begins with anonymous bot accounts. Many sport profile pictures of U.S. soldiers or bald eagles but show machine-translated phrasing typical of Russian sources. After a few days of generic pro-Trump memes, the accounts pivot to sharper disinformation wrapped in anti-Biden or anti-NATO rhetoric.

Within an hour of the 24 May murders of Israeli diplomats in Washington, MAGA-branded accounts—later tied to Russian or Iranian IP ranges—pushed the idea that the killings were a Mossad self-attack meant to drag America into war. Those messages drew 200,000 engagements before any official statement appeared.

The pattern repeated on 3 June, after a Jewish rally in Boulder was firebombed. Bot clusters blamed the attack on a fictitious “Zionist op” intended to criminalize Christianity, and many of the memes recycled the same watermark found on Russian Telegram channels linked to the GRU’s cyber unit.

NCRI also points to real personalities who reinforce the campaign.

  • Draven Noctis, a U.S. Army veteran with 180,000 followers, praises Russia and urges viewers to “see through the system.”
  • Jackson Hinkle, a rising X commentator, called the embassy attack “a CIA op to provoke Iran.”

Hinkle’s video was retweeted by more than 1,000 low-reputation accounts within ten minutes, a pattern NCRI reads as strong evidence of scheduled bot activity.

Accounts typically spend weeks posting routine conservative content—Second Amendment slogans, anti-inflation memes—before unveiling more radical claims: that Trump is controlled by Zionists or that Ukrainian biolabs harvest organs. Engagement spikes once the tone shifts.

Early June brought a deepfake video purporting to show Trump inside Jeffrey Epstein’s mansion. Digital forensics tied the clip to a Tehran-based bot farm; it trended for 36 hours on Telegram and X before removal. The episode underscored how the network alternates between boosting and degrading Trump, depending on which approach stirs the most friction.

The objective is fracture, not electioneering. By making extreme views appear viral and pitting factions of the right against one another, foreign operators erode collective action. NCRI labels this “asset-adjacency”: real influencers act as catalytic nodes, while bots supply the numbers needed to game algorithms.

Iran’s contribution is smaller but growing. Its accounts emphasize Israel-related news, blending Gaza footage with MAGA captions before segueing into domestic conspiracies. Several were previously used in efforts to suppress U.S. voter turnout during the 2020 election.

Techniques are evolving. Bots now mimic U.S. slang and even reference local sports scores. AI-generated avatars replace stolen photos, and posts are timed to exploit platform recommendation windows.

NCRI concludes that each crisis adds another layer of doubt—first about officials, then media, then fellow conservatives. Counter-measures lag because individual posts skirt explicit policy violations, relying on implication and repetition.

With another election cycle looming, researchers expect the tempo to rise. The playbook is set: infiltrate, imitate, amplify, polarize. Whether the flashpoint is a school shooting, a foreign-policy setback or a verdict, the same network will be ready with a pre-packaged false-flag narrative—complete with patriotic avatars and an unseen Kremlin or Tehran return address.

NCRI’s timeline lists at least five other flashpoints this spring where the same network surged: the Tennessee elementary-school shooting, sabotage of a North Carolina power substation, a freight-train derailment outside Baltimore, an evacuation during Independence Day parade rehearsals, and the attempted assassination of a federal judge in Texas. In each case, false-flag threads surfaced within thirty minutes, many tagging a distinctive three-emoji stack—⚠️🎯🇺🇸—first documented in Russia’s 2016 disinformation playbook.

Engagement patterns were nearly identical: a burst of 200–400 bot accounts posting the same link, a second wave of graphic memes, and finally a share by one or two real influencers that pushed the story into mainstream visibility. NCRI notes that 61 percent of those early-burst accounts have since been suspended, yet the narratives they introduced remain searchable and continue to circulate in smaller forums and private channels. Investigators also traced several custom URL-shorteners used in the campaigns to a Moscow reseller that once hosted content for the Internet Research Agency.

A separate technical annex shows how the same infrastructure now targets pro-life and Second Amendment advocacy groups. Researchers found 27 Facebook pages that recycled NRA talking points by day and, after midnight Moscow time, pivoted to stories claiming U.S. border agents were stockpiling children for organ trafficking. The pages bought low-cost ads through Qiwi wallets funded via a St Petersburg crypto exchange. NCRI argues that this mix of authentic-looking issue advocacy and lurid conspiracism is designed to push committed activists toward more radical echo chambers.

Altogether, the report frames the current phase of Russian and Iranian information warfare as less about electing allies than about eroding trust at the granular level—one hashtag, meme and influencer clip at a time.
