The Shortcut That Costs Us Everything
On the seductive logic of letting machines do the writing
There’s a seductive logic circulating in education right now, one that sounds progressive and even pragmatic: If AI can generate the writing, why not have students simply analyze it instead?
The argument takes various forms, but the core move is the same: let the machine handle creation while students focus on critique. Some frame this as “reversing Bloom’s taxonomy”: the AI generates, and students do the higher-order work of evaluation and synthesis. Students become editors rather than authors. Consumers rather than producers.
To be clear: having students evaluate and critique AI-generated writing isn’t inherently problematic. I’ve seen teachers use it effectively as one strategy among many. The problem arises when this approach becomes the dominant mode, displacing rather than supplementing the generative work students need to do themselves. In too many contexts, what started as an occasional exercise is quietly becoming the default, diluting writing-focused strategies in favour of something that feels more manageable in an impossible situation.
It sounds elegant. It sounds efficient. And I understand why so many thoughtful educators have embraced it. They are trying to adapt, often heroically, to a volatile and bewildering context. This isn’t about blame. But the profession needs to pause and ask whether this particular adaptation, when relied upon too heavily, might cost us more than we realize.
Indeed, this may be one of the most dangerous ideas we’ve entertained in a generation. And it rests on a set of assumptions that deserve interrogation.
Assumption one: Students already possess the foundational literacies that make critical consumption possible.
Here’s what the “reverse Bloom’s” crowd gets wrong about the taxonomy itself: they tend to treat creation as though it can be cleanly excised from the other levels. In Bloom’s framework, creation sits at the top precisely because it integrates and depends upon the capacities below it. But the relationship runs both ways. Analysis and evaluation are not abstract skills that exist independent of experience. The capacity to analyze writing emerges from having written. The capacity to evaluate arguments develops through having constructed them. Remove creation from the equation, and the other levels lose their grounding.
Anyone who has wrestled with a stubborn paragraph, who has discovered that their argument collapses under the weight of its own contradictions only by trying to write it out, knows this intimately. The friction of creation is the thinking. When we ask students to critique AI-generated text without having themselves engaged in the process of composition, we’re asking them to recognize good writing without ever having discovered how difficult good writing is.
Today’s students can still do this, at least in some measure, because they arrived in our classrooms having developed at least some writing capacity before the machines got good. But what happens to the student five or ten years from now who has been “reverse-Blooming” their way through school since grade three? What foundation will they stand on when they attempt to evaluate the very process they’ve been systematically excused from?
Assumption two: Critical thinking can exist independently of creative production.
This is perhaps the deeper error. We’ve somehow convinced ourselves that analysis and creation are separate cognitive acts, that we can nurture one while neglecting the other.
When I write, I am constantly evaluating: Does this word serve the idea? Does this argument hold? Have I considered the counterpoint?
Or consider what just happened in that last paragraph. I initially wrote “Does this argument work?” and then paused. Work felt too mechanical, too transactional for what I was trying to describe. I wanted something that conveyed structural integrity, something with more weight. I tried stand, but that felt static. Then hold, which suggested both strength and fragility, the sense that an argument either bears the weight you place on it or buckles. That single word choice took me somewhere I hadn’t anticipated, forced me to think about arguments as structures under tension. That’s the kind of micro-struggle that shapes thinking, and it only happens when the words are mine to wrestle with. If I had simply edited AI-generated prose, I would have encountered a work or hold already chosen for me. I might have nodded and moved on. The thinking that emerged from that friction would never have occurred.
The internal editor emerges through the practice of editing one’s own work, through the lived experience of failure and revision. Outsource the creation, and you outsource the very process through which critical capacity develops.
Assumption three: The training data will take care of itself.
Here’s an irony that should keep us up at night: The large language models we’re proposing to rely on have been trained predominantly on human-generated text. Every essay, article, story, and argument that made these systems possible was produced by humans who learned to write by actually writing. The models are, in effect, compressed distillations of centuries of human creative effort.
If we stop producing, if we systematically excuse generation after generation from the cognitive work of composition, what will the machines train on? AI-generated text, presumably. And then the next generation of AI will train on that AI-generated text. Researchers have a name for this: Model Autophagy Disorder, or MAD. The term evokes mad cow disease, which spread when cattle were fed the processed remains of other cattle. The parallel is grimly apt. Studies show that models trained on their own outputs begin to degrade, collapsing into narrower and narrower patterns, losing the richness and diversity of the original training data. Over just a few generations, the outputs become bland, repetitive, and eventually incoherent.
We’re not just opting out of a pedagogical practice. We’re potentially undermining the very substrate that makes these tools useful in the first place.
The synthetic landscape
There’s a deeper concern here, one that extends beyond the classroom.
Our students are growing up in a media environment of unprecedented synthetic saturation. They will encounter AI-generated articles, AI-generated images, AI-generated videos, AI-generated voices, all depicting things that never happened, spoken by people who never existed. The deepfake is no longer an edge case; it’s becoming the default texture of the information landscape.
In this environment, critical media literacy isn’t just a nice academic skill. It’s a survival capacity. And we’re proposing to develop it by removing the very experiences that might allow students to understand, at a visceral level, what synthetic content lacks. The student who has never struggled to find their own voice will have no ear for the synthetic. The student who has never tried to make something true will have no instinct for detecting the fabricated.
We keep saying “seeing is no longer believing.” And then we design pedagogies that leave students with nothing but their eyes to rely on.
Where is the student?
If you know my work, you know I’m not arguing for some nostalgic retreat from technology. The question isn’t whether to use these tools but how, and more importantly, which human capacities we are unwilling to surrender in the process.
Some capacities are worth preserving precisely because they are difficult, precisely because they require struggle, precisely because there is no shortcut. Writing, the act of wrestling thought into language, of discovering what you think by attempting to articulate it, is one of these capacities.
The solution isn’t to ban AI or to ignore it. The solution is to ask: where is the student in this process? Learning requires presence. It requires that the learner be genuinely engaged with the material, wrestling with it, making decisions, encountering friction. That friction can be supported, scaffolded, accommodated in a thousand different ways. But it cannot be entirely eliminated without eliminating the learning itself.
What might we do instead?
It might mean designing assignments where AI is a collaborator in revision but not a replacement for initial generation. It might mean portfolios that document thinking over time, making the process visible. It might mean oral examinations, live demonstrations, and other forms of assessment that require presence and embodiment. It might mean being honest with students about why we’re asking them to do difficult things, not because we’re Luddites, but because we respect them enough to believe their minds are worth developing.
None of this is easy to assess. But that’s precisely the point. As William Bruce Cameron put it, “Not everything that can be counted counts, and not everything that counts can be counted.” We have a long history in education of defaulting to what is easy to measure rather than what matters. The capacity to think, to struggle, to find one’s voice through the friction of composition: these matter profoundly, even if they resist tidy quantification. The difficulty of assessment is not an argument for abandoning the thing being assessed.
Most of all, it means resisting the seductive efficiency of the shortcut. Yes, AI can generate text. Yes, students can analyze that text. But the student who only ever analyzes will become a critic without craft, a consumer without creation, a judge without judgment.
What we’re saying to learners
I keep thinking about what we’re signalling to young people when we tell them that their creative output doesn’t matter, that the machine can handle that part, that their job is simply to evaluate what the machines produce. What a strange message: Your thoughts aren’t worth the trouble of expression. Let the system speak for you.
We should be doing exactly the opposite. We should be telling them: Your voice matters. Your struggle to find it matters. The world needs what you, specifically, irreplaceably you, have to say. The machines can help. The machines cannot replace this.
The long arc
We’ve been here before, in a sense. For decades, we’ve insisted that writing in school was never really about the product, the essay, the poem. It was always about what the process did to the mind. Writing was a tool for thinking, a way of discovering what we believe by trying to articulate it. We said this so often it became almost cliché.
And now, at the very moment these tools might matter most, we’re considering tossing them away.
The shortcut that skips creation to arrive at critique is not a shortcut at all. It’s a path toward a kind of collective cognitive atrophy, a slow erosion of the very capacities we claim to be developing. If we take this seductive path now, we set in motion a ripple effect: students who never develop the struggle for voice, who grow into adults ill-equipped to navigate a synthetic landscape, who raise children in a world where the very idea of original expression seems quaint.
This is what’s at stake. Not efficiency. Not convenience. But the long arc of human literacy itself.
We still have a choice. And the choice we make now will echo forward, shaping not just what our students can do, but what they believe is worth doing at all.

