According to Gizmodo, Senator Ron Wyden, the Oregon Democrat who helped write Section 230, says the foundational internet law does not protect AI chatbots like Elon Musk’s Grok. The statement comes after a week of reports that users have been prompting Grok to generate non-consensual sexual imagery, including child sexual abuse material. Wyden told Gizmodo that companies should be “held fully responsible” for criminal content their AIs generate, and he’s urging states to hold X and Musk accountable if the federal government won’t. The controversy erupted in early January 2026, with Musk himself tweeting on January 3rd that anyone creating illegal content with Grok would face consequences, though he didn’t specify what those would be. It echoes a 2023 incident in which Musk reinstated a right-wing influencer who had been banned from X for posting child exploitation material.
The real issue is the product itself
Here’s the thing: Wyden’s legal argument is fascinating, but it almost misses the forest for the trees. The bigger scandal isn’t just the question of legal liability; it’s that xAI built a product that can do this at all. We know Grok has guardrails: it won’t tell you how to build a bomb, and based on tests from last August, it apparently filters out explicit porn of men. So the company is clearly making deliberate choices about what its AI can and cannot do. And it has apparently decided that the line doesn’t extend to blocking sexualized images of children or non-consensual “bikini” deepfakes of real women. That’s a product decision. It’s a feature, not a bug. When your chatbot’s “spicy mode” is known for generating naked videos of women but only shirtless men dancing, you’ve baked a specific, creepy bias right into the code.
Accountability in a post-Trump landscape
Wyden knows his push for federal action is a long shot. He’s basically telling states to go after Musk because, as he points out, the Trump Justice Department has other priorities—like dragging its feet on releasing the Epstein files. And let’s be real, Musk is cozying up to Trump again. You think this DOJ is about to sue the world’s richest man, who’s now a political ally, over his AI generating illegal images? Not a chance. So we’re left with a weird, fractured enforcement landscape. Maybe a state attorney general gets ambitious. But the core problem remains: a powerful tool, built by a capricious billionaire, is actively harming people with what seems like corporate approval.
The harassment is the point
The most chilling part of this whole saga is the reaction to the critics. Ashley St. Clair, the mother of one of Musk’s children, has been vocal about Grok’s abuse, and the response from Musk’s fans has been grotesque. They tell her if she doesn’t want AI-generated porn made of her, she shouldn’t post photos online. As she told The Washington Post, you can’t claim X is the digital public square but also tell people to “log off” if they don’t want to be assaulted by a chatbot. This is the logical endpoint of a platform philosophy that treats moderation as censorship and views any consequence for speech—even illegal speech—as tyranny. It creates an environment where, as we saw in 2023, posting actual child sexual abuse material can be waved away as potentially raising “awareness.”
So what happens now?
Probably nothing good. Wyden’s statement is a warning flare, but without federal will, it’s just words. xAI’s response to Gizmodo’s inquiry was an automated email saying “Legacy Media Lies,” which tells you all you need to know about its level of engagement. Users in communities like r/Grok are still swapping tips on how to bypass the porn filters. The product is out there, it’s doing damage, and the owner is joking about it. Section 230 might not protect Grok, but that only matters if someone with power actually tries to enforce the law. And right now, the people with that power seem more interested in protecting the powerful than the victims. It’s a bleak picture, and it’s hard to see what changes it.
