Your Brain on ChatGPT: What Happens When Students Let AI Write for Them?


Can ChatGPT make you smarter, or just make you feel smarter?
A recent study from MIT Media Lab and partner universities dives deep into this question by investigating what happens inside our brains when we use large language models (LLMs) like GPT-4 to write essays. The findings are eye-opening, revealing not just how we write with AI, but how it reshapes our cognitive engagement, memory, and sense of ownership.

The Study

Title: Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing

Researchers: MIT Media Lab, Wellesley College, MassArt

Link: Read the full paper

This wasn’t just a casual survey or opinion piece. The team designed a rigorous, multimodal experiment:

Participants: 54 college students divided into three groups:

  • LLM Group: Used ChatGPT (GPT-4o)

  • Search Group: Used Google (no AI tools)

  • Brain-only Group: No external tools

Method:

  • Students wrote SAT-style essays across three sessions.

  • In a fourth session, some participants switched conditions (e.g., from LLM to Brain-only).

  • During writing, their brain activity was recorded using EEG.

  • Essays were scored by human teachers and by AI, and analyzed with natural language processing (NLP) tools (a brief sketch of this kind of analysis follows this list).

  • Participants were also interviewed afterward.
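
To make "analyzed with NLP tools" concrete, here is a minimal sketch, assuming Python with scikit-learn, of one common first-pass technique: vectorizing essays over word n-grams and measuring how similar their phrasing is. This is an illustration only, not the authors' actual pipeline, and the essay snippets are invented.

    # Minimal sketch (not the study's pipeline): vectorize essays with
    # TF-IDF over word unigrams and bigrams, then compare them pairwise.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # Invented snippets standing in for real essays.
    essays = [
        "Art allows societies to preserve memory and transmit values.",
        "Art allows communities to preserve memory and transmit values.",
        "When I paint, I am mostly arguing with my own assumptions.",
    ]

    vectorizer = TfidfVectorizer(ngram_range=(1, 2), stop_words="english")
    tfidf = vectorizer.fit_transform(essays)

    # cosine_similarity returns an N x N matrix; entry [i][j] is the
    # phrasing similarity of essay i and essay j (1.0 on the diagonal).
    print(cosine_similarity(tfidf).round(2))

In a design like this study's, unusually high pairwise similarity within a group's essays would be one rough signal of formulaic, rubric-shaped writing, a theme the findings below return to.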

Key Findings

1. Tool Use = Lower Brain Engagement

The study revealed a clear gradient: the more automated the tool, the less the brain had to work.

  • The Brain-only group showed the highest neural activity, especially in regions tied to memory and attention.

  • The Search group was intermediate.

  • The LLM group had the weakest cognitive signals, indicating reduced attention and mental effort.

2. Cognitive “Debt” Builds Over Time

In the fourth session, participants who started in the LLM group and were then asked to write without assistance continued to show low brain engagement. Even without the tool, they didn’t mentally re-engage at the same level. The researchers called this “cognitive debt”: the lingering cognitive cost of tool dependency.

3. AI-Generated Text Feels Less Like Yours

LLM users:

  • Had lower recall of their own writing,

  • Struggled to accurately quote from essays they had “written” with AI,

  • And expressed less personal ownership over the content.

By contrast, the Brain-only group had a stronger sense of authorship and memory of their ideas.

4. Apparent Strengths vs. Real Engagement

Although not the central claim of the study, the LLM-assisted essays were noticeably more polished: grammatically cleaner, more structurally coherent, and closely aligned with standard academic rubrics. These are the traits most grading systems prioritise.

Meanwhile, the essays from the Brain-only group were more diverse, exploratory, and conceptually original, and more likely to be remembered by their authors.

This contrast raises a critical systemic question:

Are we building (or maintaining) academic systems that reward fluency and structural conformity more than intellectual risk and original thought?

If an LLM, trained on past patterns and tuned to rubrics, can outperform a student who takes creative, unorthodox approaches, even when that student demonstrates real insight, what does that say about the values encoded in our grading practices?

The Systemic Contradiction: Rewarding Conformity, Penalising Originality

One of the more troubling implications, though not directly stated in the paper, is how educational systems may reward conformity over originality, even when we claim the opposite. The study shows that LLM-assisted essays were more polished and rubric-aligned, while Brain-only essays were more original and cognitively engaging. If graders believe all essays are student-written, the LLM ones may receive higher scores, not because they reflect deeper learning, but because they align better with what grading systems reward.

This reflects a deeper contradiction:

We say we value deep thinking, creativity, and intellectual agency. But we assess work through standards that often privilege format, fluency, and familiar structure: traits that AI can mimic easily.

The risk is systemic: students who take intellectual risks may be complimented for their ideas but penalised for “going off-rubric.” Meanwhile, students who prompt ChatGPT well enough to hit the expected shape of an answer may receive top marks, even with minimal effort.

This isn’t just a comment on one study—it’s a challenge to the educational model itself. If LLMs can outperform original thought within existing standards, maybe it’s time we reconsider what those standards are actually measuring.

What This Means for Education

This study isn’t just about AI. It is about how AI interacts with flawed systems. It shows that:

  • Polished, AI-assisted outputs don’t equate to deeper learning.

  • Students using LLMs may look like better writers while becoming less cognitively engaged.

  • There’s a long-term cost in memory, agency, and critical thinking when effort is outsourced too early or too often.

Key Takeaways

  • Cognitive effort matters. The process of writing without AI produces stronger mental traces and deeper learning.

  • Tool use isn't neutral. LLMs don’t just change outputs. They reshape how we think and remember.

  • We must rethink our rubrics. If we reward fluency alone, we may be training students to disengage from thinking itself.

Areej Abdulaziz

Areej Aljarba is a creative writer, visual artist, and UX professional.

https://www.areejalution.com/