Most of your team is using AI wrong, says IBM, AWS veteran


According to Fortune, Allie K. Miller, CEO of Open Machine and a veteran of IBM and Amazon Web Services, delivered a stark critique at the Fortune Brainstorm AI conference in San Francisco last week. She said 90% of employees are stuck using AI as a simple “microtasker,” for things like rewriting emails. The gap is striking: 80% of workers now use AI on the job, but fewer than half have received proper training for it. Miller argued this rudimentary use renders expensive annual subscriptions to tools like ChatGPT and Gemini essentially worthless, blocking true productivity gains. Her solution involves shifting to more advanced interaction modes and a concept she calls “Minimum Viable Autonomy.”


The four gears of AI

Miller’s whole argument hinges on a simple framework: AI has four interaction modes, and we’re mostly idling in first gear. The first is the “Microtasker”—that’s the glorified search engine, the polite-email writer. Most people never shift out of this. The next level is “Companion,” which is a bit more interactive and contextual. Then you have “Delegate,” where the AI can handle a substantive chunk of work, like managing an inbox for 40 minutes. But the real shift is into fourth gear: “AI as a Teammate.” Here’s the thing: this isn’t about you giving it a prompt. It’s ambient. It’s sitting in your systems, maybe even in your Slack channel like OpenAI’s engineers do with Codex, lifting up the entire group’s workflow. The goal flips. We stop prompting it; it starts prompting us with insights and actions.
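Miller's four modes form an ordered ladder of autonomy. As an illustration only (the names and the `can_initiate` helper are our own shorthand, not anything from her talk), the framework can be sketched like this:

```python
from enum import IntEnum

class AIMode(IntEnum):
    """Miller's four interaction modes, from least to most autonomous."""
    MICROTASKER = 1  # glorified search engine, polite-email writer
    COMPANION = 2    # more interactive and contextual
    DELEGATE = 3     # handles a substantive chunk of work, e.g. an inbox
    TEAMMATE = 4     # ambient, sits in team systems, lifts the whole group

def can_initiate(mode: AIMode) -> bool:
    """Only a Teammate flips the goal: it prompts us, not the reverse."""
    return mode is AIMode.TEAMMATE
```

The ordering is the point: the first three gears all wait for a human prompt; only the fourth initiates on its own.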

From perfect prompts to goals

So how do you get from gear one to gear four? Miller introduced the concept of “Minimum Viable Autonomy” (MVA). Basically, it’s a mindset shift. Stop treating AI like a chatbot that needs a perfect, 18-page prompt. Start treating it like goal-oriented software: you provide the objective, the boundaries, and the rules of engagement, and the AI works backwards from the goal. To make this safe, you need “agent protocols”—strict guidelines sorting tasks into “always do” these automatically, “please ask first” for these, and “never do” these. She even suggests a risk portfolio: 70% of AI agents on low-risk tasks, 20% on complex cross-team work, and a brave 10% on strategic tasks that could change how the company operates.
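An agent protocol of the kind Miller describes is essentially a policy table the agent consults before acting. This is a minimal sketch under our own assumptions (the task names and the three-way split are hypothetical examples, not from her talk):

```python
# Hypothetical agent protocol: every task maps to one of three rules.
PROTOCOL = {
    "summarize_inbox": "always",         # low-risk, run automatically
    "send_external_email": "ask_first",  # needs human sign-off
    "change_pricing": "never",           # out of bounds for the agent
}

# Miller's suggested risk portfolio for a fleet of agents.
RISK_PORTFOLIO = {"low_risk": 0.70, "cross_team": 0.20, "strategic": 0.10}

def may_run_unattended(task: str) -> bool:
    """Allow a task without approval only if explicitly marked 'always'.

    Unknown tasks default to 'never'—deny by default is the safe posture.
    """
    return PROTOCOL.get(task, "never") == "always"
```

The deny-by-default lookup is the key design choice: anything not explicitly whitelisted requires a human, which is what keeps "minimum viable" autonomy minimal.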

The impending autonomous future

Miller’s predictions for the near future are pretty aggressive. She forecasts that within months, AI will be capable of working autonomously for over eight hours straight. And as costs plummet, companies won’t just run a single query for a market launch—they’ll run hundreds of thousands of simulations. But there’s a big caveat here for leadership. The old way of evaluating software is dead. The new essential question isn’t if the features work, but whether the AI is “good or not.” Its judgment, its reasoning, its reliability become the product requirement. Her closing line is the kicker: “AI is not just a tool, and the organizations who continue to treat it like one are going to wonder over the next decade what happened.” It’s a warning. The companies winning won’t be the ones with the shiniest ChatGPT subscription. They’ll be the ones that stopped asking it to write emails and started letting it run parts of the business.
