Tim Sweeney defends Grok AI despite creation of deepfakes and child abuse content

Tim Sweeney, CEO of Epic Games, publicly defended Grok, the AI chatbot on platform X (formerly Twitter), even after troubling revelations in early 2026. The tool was used to create non-consensual nude deepfakes of women and, more seriously still, to generate child sexual abuse imagery, as confirmed by the Internet Watch Foundation. Meanwhile, the UK government is discussing banning X under the Online Safety Act, and US senators are pressuring Apple and Google to remove the app from their stores.
Faced with this pressure, Sweeney responded by directly criticizing politicians, accusing them of using the episode as a pretext for censorship. The executive argued that “all major AIs have documented instances of going off the rails,” but that companies do their best to combat these problems. In his view, singling out one specific company for enforcement amounts to “crony capitalism.” Meanwhile, the UK Secretary of State for Technology, Liz Kendall, warned that X needs to act “urgently.”
Responses and controversies amid the crisis
Elon Musk’s platform X initially responded to the crisis by placing Grok’s image generation behind a paywall, a measure that philosopher Dr. Daisy Dixon of Cardiff University dismissed as “a band-aid.” She argues that Grok needs to be completely redesigned with “integrated ethical barriers.” Musk, however, echoed Sweeney’s defense and downplayed the severity, asking: “So what if Grok can put people in bikinis? This isn’t a new problem, it’s a new tool.” The remark drew strong reactions from users and authorities.
Although the company says it will remove illegal content, suspend offending accounts, and cooperate with governments and law enforcement, many experts counter that such content should never have been possible to generate in the first place. The UK regulator Ofcom has accordingly opened an “accelerated assessment” of the platform, with a concrete threat of blocking it if the measures prove ineffective. The debate over the limits of moderation on open platforms versus the need for strict control of potentially dangerous AIs thus reaches a new level of urgency in 2026.
