OpenAI’s API Gets Weaponized in Sneaky Backdoor Attack

According to Infosecurity Magazine, Microsoft’s Detection and Response Team discovered in July 2025 that threat actors have been weaponizing OpenAI’s Assistants API to deploy a sophisticated backdoor called SesameOp. The attackers maintained a presence in compromised environments for several months using a complex arrangement of internal web shells and compromised Microsoft Visual Studio utilities. SesameOp consists of a heavily obfuscated DLL loader called Netapi64.dll and a .NET-based backdoor named OpenAIAgent.Netapi64 that leverages OpenAI as its command-and-control channel. Instead of using traditional methods, the backdoor exploits OpenAI’s legitimate infrastructure to fetch commands, execute them locally, and send results back as encrypted messages. Microsoft published its findings on November 3, while OpenAI plans to deprecate the Assistants API in August 2026 in favor of the Responses API.

When Legitimate AI Services Become Attack Vectors

Here’s the thing that should worry every security team: attackers are getting smarter about hiding in plain sight. Using OpenAI’s legitimate API means this traffic blends right in with normal business operations. It’s basically like hiding your illegal activities in a crowded shopping mall instead of a dark alley. The malware doesn’t even use OpenAI’s agent SDKs or model execution features – it’s just exploiting the API as a communication channel. And that’s the real genius here. Who’s going to block traffic to OpenAI? Most companies are actively encouraging their employees to use these services.
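
Microsoft hasn’t published the exact requests SesameOp issues, but a minimal sketch of a documented Assistants API call shows why this traffic is so hard to flag. The API key and thread ID below are placeholders; the point is that a backdoor polling a thread for “commands” and a legitimate chatbot integration reading its own conversation history look identical at the proxy.

```python
# Minimal sketch: reading messages from an Assistants API thread.
# The endpoint and headers are the documented Assistants v2 surface;
# the key and thread ID are placeholders. Nothing on the wire marks
# this as anything other than ordinary HTTPS to api.openai.com.
import requests

API_KEY = "sk-..."               # placeholder account key
THREAD_ID = "thread_abc123"      # placeholder thread ID

resp = requests.get(
    f"https://api.openai.com/v1/threads/{THREAD_ID}/messages",
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "OpenAI-Beta": "assistants=v2",
    },
    timeout=30,
)
for msg in resp.json().get("data", []):
    print(msg["role"], msg["content"])
```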

How This Thing Actually Stays Hidden

The technical details are pretty clever, I have to admit. The DLL gets loaded via .NET AppDomainManager injection, a sneaky defense evasion method that abuses a legitimate runtime extension point to pull attacker code into a trusted .NET process. Then there’s Eazfuscator.NET for heavy obfuscation, compression to keep the payload small, and layered encryption combining symmetric and asymmetric crypto. Basically, they’re making sure both the incoming commands and outgoing results stay completely hidden from security tools. It’s like they wrapped the whole operation in multiple layers of stealth technology.
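
Microsoft’s write-up doesn’t name the exact algorithms, so treat this as a conceptual sketch of the compress-then-layer pattern it describes, not SesameOp’s actual scheme. Here zlib stands in for the compression step, AES-GCM for the symmetric layer, and RSA-OAEP for the asymmetric layer that wraps the per-message key (requires the `cryptography` package):

```python
# Illustrative compress-then-hybrid-encrypt pattern. Algorithm choices
# (zlib, AES-GCM, RSA-OAEP) are assumptions, not SesameOp's actual scheme.
import os
import zlib

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def wrap_result(plaintext: bytes, operator_public_key) -> dict:
    compressed = zlib.compress(plaintext)        # shrink the payload first
    key = AESGCM.generate_key(bit_length=256)    # fresh symmetric key per message
    nonce = os.urandom(12)
    ciphertext = AESGCM(key).encrypt(nonce, compressed, None)
    wrapped_key = operator_public_key.encrypt(   # asymmetric layer: only the
        key,                                     # operator's private key can
        padding.OAEP(                            # recover the AES key
            mgf=padding.MGF1(algorithm=hashes.SHA256()),
            algorithm=hashes.SHA256(),
            label=None,
        ),
    )
    return {"key": wrapped_key, "nonce": nonce, "data": ciphertext}

# Usage: the operator holds the private key, so even a defender who captures
# both the traffic and the malware sees only wrapped blobs.
operator_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
blob = wrap_result(b"command output...", operator_key.public_key())
```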

What This Means for Cloud Security

This represents a scary shift in attack methodology. We’ve seen attackers abuse cloud storage services before, but now they’re moving up the stack to AI infrastructure. And think about it – if they’re using OpenAI today, what’s stopping them from targeting other AI services tomorrow? Microsoft’s own Azure AI services, Google’s Vertex AI, Amazon Bedrock – they could all become potential attack vectors. The bigger question is whether security teams are prepared to monitor AI service traffic with the same scrutiny they apply to traditional network traffic. Probably not.
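
What would that scrutiny look like in practice? A first step is simply inventorying which machines talk to AI API endpoints at all. A toy sketch, where the log columns, hostnames, and domain list are all hypothetical and would need to match your own proxy or firewall export:

```python
# Toy baseline: flag AI API traffic from machines with no sanctioned reason
# to produce it. Column names, hosts, and domains are hypothetical examples.
import csv

AI_API_DOMAINS = {
    "api.openai.com",
    "aiplatform.googleapis.com",                  # Vertex AI
    "bedrock-runtime.us-east-1.amazonaws.com",    # Amazon Bedrock (per-region)
}
SANCTIONED_HOSTS = {"build-server-01", "data-science-vm-03"}  # example inventory

def flag_unsanctioned(proxy_log_csv: str):
    """Yield (host, domain) pairs for AI API traffic from unexpected machines."""
    with open(proxy_log_csv, newline="") as f:
        for row in csv.DictReader(f):  # assumes src_host / dest_domain columns
            if row["dest_domain"] in AI_API_DOMAINS and row["src_host"] not in SANCTIONED_HOSTS:
                yield row["src_host"], row["dest_domain"]

for host, domain in flag_unsanctioned("proxy_export.csv"):
    print(f"review: {host} -> {domain}")
```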

The Deprecation Question

OpenAI planning to deprecate the Assistants API in August 2026 feels like closing the barn door after the horses have already escaped. By then, attackers will have had nearly a year to refine this technique or move on to exploiting the replacement Responses API. Microsoft’s recommendations are helpful, but they’re playing catch-up. The real lesson here? Security teams need to assume that any cloud service – even the most legitimate, business-critical ones – can and will be weaponized. It’s not about blocking these services anymore, it’s about monitoring them more intelligently.
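
One concrete version of “monitoring more intelligently”: humans using a chatbot generate bursty, irregular requests, while a backdoor polling for commands tends to beacon on a fixed timer. A toy heuristic along those lines, with an illustrative jitter threshold rather than a tuned detection value:

```python
# Toy beaconing heuristic: near-constant gaps between requests to an AI API
# endpoint suggest automated polling rather than human use. The 0.1 jitter
# threshold is illustrative, not a tuned detection value.
from statistics import mean, pstdev

def looks_like_beaconing(timestamps: list[float], max_jitter: float = 0.1) -> bool:
    """True if inter-request gaps have a low coefficient of variation."""
    if len(timestamps) < 5:
        return False                         # too few requests to judge
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    avg = mean(gaps)
    return avg > 0 and (pstdev(gaps) / avg) < max_jitter

# Requests every ~5 minutes for half an hour look automated, not human.
polls = [0, 300, 601, 899, 1200, 1501]
print(looks_like_beaconing(polls))  # True
```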
