
Screenshot of AI-generated “Kirkinator” meme from Instagram Reels. | Source: Instagram
* Offensive AI content is being weaponised for crypto scams.
* High engagement from outrage or humour boosts visibility.
* Looser content moderation on major platforms may be enabling the trend.
A growing ecosystem of crypto scams is being promoted through deliberately offensive, AI-generated images and videos on social media platforms, particularly Instagram.
As the volume of offensive imagery accelerates heading into the new year, questions are being raised about how scammers are being rewarded for this behaviour — and why little appears to be done to stop it.
The Offensive Meme-to-Memecoin Pipeline
Anonymous creators on social media are rapidly generating violent or racist parody characters, many of which are directly linked to newly launched memecoins.
“These memes are not emerging organically,” said an internet culture researcher writing in the Financial Times under the name Etymology Nerd.
“They are being manufactured specifically to manipulate engagement and, ultimately, to extract money from unsuspecting investors.”
According to the report, scammers exploit a feedback loop in which shocking imagery generates high levels of comments, shares, and watch time — all metrics favoured by social media algorithms.
That visibility is then used to promote new tokens, often registered within minutes on low-barrier platforms such as Pump.fun.
Shock-Factor Rug Pulls
Among the most prominent examples are AI-generated characters such as “Kirkinator,” a cyborg parody depicting the late U.S. political activist Charlie Kirk.
Another example is a fictional android version of George Floyd, known as “George Droyd,” which gained significant traction across social media platforms.
Both characters were used to promote short-lived tokens that spiked sharply in value before collapsing through rug pulls.
“The more disturbing the content, the more likely it is to spread,” the Financial Times writer said.
“Violence, slurs, and transgressive imagery are being used as tools to game the algorithm.”
Even users with no interest in cryptocurrency inadvertently assist scammers by reacting with shock or amusement.
Increased engagement drives visibility, making it more likely that the associated token will reach active traders.
AI Memes and Adult Content
The rise of offensive AI-generated memes coincides with broader discussions about loosening restrictions on generative AI models.
In October, OpenAI CEO Sam Altman announced that the company would soon release a version of ChatGPT capable of providing “erotica for verified adults.”
While Altman reiterated that guardrails and safety remain a priority, some observers worry that reduced restrictions could enable more harmful uses of AI in the future.
Content moderation policies on platforms such as X and Instagram also shape how far this material spreads.
Meta, the parent company of Instagram and Facebook, rolled back some moderation policies this year following Donald Trump’s return to the presidency.
These changes focused on allowing more “free speech” and fewer restrictions on culture-war-related topics.
Pump.fun and Scam Activity
In May, a report from Solidus Labs found that 98.6% of tokens launched on Pump.fun were created for rug pulls or pump-and-dump schemes.
Since launching at the beginning of 2024, the platform has seen more than seven million tokens created. Of those, only 97,000 maintained liquidity above $1,000.
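The two figures are consistent with each other: a quick back-of-the-envelope check (using the round numbers reported above, which are assumptions for illustration) shows that roughly 1.4% of tokens retained meaningful liquidity, leaving about 98.6% — in line with the Solidus Labs estimate:

```python
# Sanity check of the cited Solidus Labs figures (rounded numbers assumed).
total_tokens = 7_000_000   # tokens launched on Pump.fun since early 2024 (per the report)
liquid_tokens = 97_000     # tokens that maintained liquidity above $1,000

surviving_pct = liquid_tokens / total_tokens * 100
print(f"Tokens above $1,000 liquidity: {surviving_pct:.1f}%")  # ≈ 1.4%
print(f"Remainder: {100 - surviving_pct:.1f}%")                # ≈ 98.6%
```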
Following the report, Pump.fun spokesperson Troy Gravitt disputed the findings:
“What Solidus Labs lacks is a basic understanding of memecoins,” he said. “98% of memecoins — like NFTs, tweets, Instagram posts, trading cards, and most art — are worth little in the long run.”
“That’s precisely the point. What is important is the availability of a functioning marketplace connecting motivated buyers and sellers, and the underlying cultural expression to which the market assigns value over time.”
Read more on CCN – Capital & Celeb News

