The Art of Looking Twice
A viral MIT study, what it actually found, and the delicious irony of how we consume research.
Tuesday morning. Coffee in hand. My LinkedIn feed's going mental: "MIT study proves: AI makes you dumb!" Finally, validation for that nagging feeling I'd had. I hit 'like' immediately.
Only problem: When something sounds too good to be true, it usually is.
I wanted the full story: methodology, limitations, sample size, the works.
What I discovered was rather amusing.
The great 9-student scare
They started with 54 students, split into three groups for three essay-writing sessions: a brain-only group, a ChatGPT group, and a Google Search group. Then everyone switched approaches: those who'd started with AI now had to write without it.
By session four, only 18 students remained.
The final analysis is based on 9 students who suddenly had to manage without AI. Unsurprisingly, they found it tough going.
In the limitations section (which nobody reads), it says: "cannot generalize," "limited sample size," "specific geographical area."
Here's what gets me: knowing these limitations, would people have made such a fuss about the paper on social media? Probably not. Which makes me think most of them just used AI to summarise the paper without actually looking at the details. The irony is absolutely delicious.
How I actually tackled the MIT study
Step one: get the lay of the land. This thing's 200 pages, so I used a prompt to map out the territory:
You must condense this document without summarizing, without deleting key examples, tone, or causal logic, while maintaining logical flow and emotional resonance. Fidelity to meaning and tone always outweighs brevity.
Rather than a generic AI summary, you get something closer to a careful editor's condensation: the full shape of the argument, with the key examples and reasoning intact.
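If you'd rather script this than paste 200 pages into a chat window, here's a minimal sketch of the preparation step. The chunk size, the overlap, and the way I wrap the prompt around each chunk are my own assumptions for illustration, not part of the original workflow; the actual model call is left to whichever API you use.

```python
# Sketch: prepare a long paper for the condense-don't-summarize prompt.
# Chunk size and overlap are arbitrary assumptions; tune for your model.

CONDENSE_PROMPT = (
    "You must condense this document without summarizing, without deleting "
    "key examples, tone, or causal logic, while maintaining logical flow "
    "and emotional resonance. Fidelity to meaning and tone always outweighs "
    "brevity."
)

def chunk_text(text: str, chunk_chars: int = 12000, overlap: int = 500) -> list[str]:
    """Split text into overlapping chunks so context isn't cut off blind."""
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_chars])
        start += chunk_chars - overlap
    return chunks

def build_requests(paper_text: str) -> list[str]:
    """Pair each chunk with the condensing prompt, ready to send to an LLM."""
    return [f"{CONDENSE_PROMPT}\n\n---\n\n{chunk}" for chunk in chunk_text(paper_text)]
```

Feed each string in `build_requests(...)` to your model of choice, then stitch the condensed chunks back together and read that.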
Step two: read the sections that matter to me. Experiment design, prompts, participants, limitations... The devil is always in the details.
What the study actually reveals
Here's what did catch my attention: when the ChatGPT group had to write without AI in the fourth session, they still wrote like ChatGPT without even realizing it. In the study they called it the "contamination effect."
Kind of scary. What if I'm unconsciously slipping into ChatGPT-speak? All those "Let me delve into..." and "it's not X, it's Y" constructions. Someone please stage an intervention if I start sounding like that.
Also fascinating: students felt less ownership over their own writing. Makes perfect sense. Delegate the thinking, delegate the responsibility. In business, this becomes a proper nightmare: who's actually accountable for AI output?
Bottom line: AI as sparring partner? Brilliant. AI as brain substitute? Disaster.
The real issue
The MIT study shows one thing clearly: people who outsource their essays to ChatGPT lose the ability to write. It's like e-bikes: rely entirely on the motor and you'll be gasping when the battery dies.
We've all been there, though: sharing studies about AI's dangers while using AI to understand them in the first place.
Use AI to map the territory, then actually explore it yourself. Make it your research assistant, not your replacement brain.
Though judging by my cat's expression, this should be blindingly obvious to anyone with functioning neurons.
Cheers,
Steffi