According to The Verge, AI companies are aggressively targeting students with free access to premium tools while largely avoiding responsibility for how those tools enable cheating. OpenAI offered students free ChatGPT Plus during finals, Perplexity pays $20 referral bonuses for student downloads, and Google gives students free access to its AI products. Meanwhile, AI agents can now automatically complete assignments on platforms like Canvas, with Perplexity even running ads that show its tool doing multiple-choice homework. When confronted, the companies deflect responsibility: Perplexity’s CEO joked “Absolutely don’t do this” while reposting videos of his product cheating.
The business strategy behind student access
Here’s the thing: student giveaways aren’t charity; they’re a brilliant customer-acquisition strategy. Tech companies know that if they hook students young, they create lifelong users who’ll eventually pay for these services in their careers. Perplexity’s campus partner program and its $20 referral bonuses aren’t about education; they’re about market penetration. Get them while they’re young and broke, keep them when they’re employed and paying.
And the timing is everything. These companies are rolling out student programs right as AI use among teens has doubled in just two years. They’re building their user base at the exact moment when education systems are completely unprepared for this technology. It’s like selling cigarettes in school playgrounds while arguing “we’re not responsible if kids smoke.”
The great responsibility dodge
What’s fascinating is how the companies position themselves. OpenAI adds a “study mode” that withholds direct answers while its VP of education insists AI shouldn’t be an “answer machine.” But its core product literally exists to provide answers! Perplexity’s CEO can repost cheating videos with a wink while its spokesperson says “cheaters only cheat themselves.” It’s corporate gaslighting at its finest.
Instructure, which runs the Canvas platform used by “every Ivy League school,” has basically thrown up its hands. The company told educators it can’t block AI agents and that this is a “philosophical” problem, not a technical one. But wait: it is the platform. It absolutely could build detection systems if it wanted to. Instead, it’s partnering with OpenAI while teachers struggle.
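To make that “could build detection” claim concrete, here’s a minimal sketch of the kind of server-side timing heuristic a platform could run over its own submission logs. To be clear, this is hypothetical: the Submission record, its field names, and the thresholds are invented for illustration and are not part of Canvas’s actual API.

```python
from dataclasses import dataclass
from statistics import pstdev

@dataclass
class Submission:
    """Hypothetical per-quiz telemetry; field names are illustrative,
    not real Canvas API objects."""
    seconds_open: float        # time from opening the quiz to submitting it
    answer_gaps: list[float]   # seconds between consecutive answers

def looks_automated(sub: Submission,
                    min_human_seconds: float = 60.0,
                    min_gap_jitter: float = 0.5) -> bool:
    """Flag submissions completed implausibly fast or with machine-regular
    pacing. Both thresholds are placeholders that a real platform would
    calibrate against its own data."""
    # Heuristic 1: an entire quiz finished in under a minute is suspect.
    if sub.seconds_open < min_human_seconds:
        return True
    # Heuristic 2: humans pause unevenly between questions; an agent
    # clicking through tends to produce near-constant gaps.
    if len(sub.answer_gaps) >= 3 and pstdev(sub.answer_gaps) < min_gap_jitter:
        return True
    return False

if __name__ == "__main__":
    human = Submission(seconds_open=540.0, answer_gaps=[22.1, 48.7, 9.3, 31.0])
    agent = Submission(seconds_open=41.0, answer_gaps=[1.1, 1.0, 1.1, 1.0])
    print(looks_automated(human))  # False
    print(looks_automated(agent))  # True
```

A production system would have to calibrate those thresholds against real telemetry and contend with agents that randomize their pacing, but that’s an engineering arms race, not a philosophical impasse.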
Who actually pays the price?
The real victims here are the education system and the students themselves. Teachers are now expected to act as AI police while the companies profit from the chaos. As early research suggests, students who lean on these tools risk never developing critical thinking skills. They’re being set up for failure in a real world where you actually need to know how to think, not just generate answers.
And let’s be honest—the companies know exactly what they’re doing. When Perplexity runs ads showing their product doing homework, when Google markets Lens as a “lifesaver for school,” they’re not encouraging learning. They’re selling cheating as a feature. The cognitive dissonance is staggering.
Where does this leave us?
Basically, we’re in the worst of both worlds. Companies get to claim they’re “transforming education” while avoiding any responsibility for the consequences. Educators get stuck with enforcement while having zero control over the tools. And students? They’re the guinea pigs in an experiment where the companies running it have no liability for the outcomes.
The proposed solution, “collaborative efforts” and “defining responsible use,” is corporate speak for “we’re not changing anything, but we’ll talk about it.” Meanwhile, the products are already in wide distribution, the cheating is happening at scale, and teachers are left holding the bag. It’s a perfect storm of profit motives meeting educational neglect, and students are caught in the middle.
