Cognitive UX in the Age of AI: What Designers Need to Know
A recap of three groundbreaking studies that reveal how AI reshapes human thought — and why design must go beyond ethics to meet this challenge.
We’re building AI systems that seem smarter, more fluent, more helpful. But behind the impressive performance lies something more troubling — and three powerful research papers help illuminate what’s really going on. Here’s a distilled recap of what these papers say. No interpretations yet — that’s for the next post.
Paper: “Outsourcing Cognitive Control to AI” (MIT)
Authors: Varma et al.
Main Finding: Using AI makes our brains less active.
People offload difficult thinking to AI tools (like ChatGPT), even when the AI is less accurate than they would be on their own.
Neuroimaging shows reduced activation in key brain areas responsible for cognitive control.
Result? We conserve effort, but lose engagement — like using GPS until we forget how to navigate.
This is not about laziness; it's about cognitive cost. When AI is "good enough," our brains disengage.
Read previous post: Your Brain on ChatGPT
Paper: “The Illusion of Thinking” (Apple, 2025)
Authors: Shojaee, Mirzadeh, Farajtabar et al.
Main Finding: LLMs sound smarter than humans — even when they’re not.
In blind tests, GPT-4 was preferred over real expert answers — including by AI researchers.
People consistently rated LLM responses as higher quality due to their style, not substance.
LLMs win on polish, coherence, and confidence — not depth.
This is the fluency illusion: We mistake smooth language for smart reasoning.
Read previous post: The Death of Depth
Paper: “Philosophy Eats AI” (Schrage & Kiron, MIT Sloan Management Review, 2025)
Authors: Michael Schrage & David Kiron
Main Argument: Philosophy — not just ethics — is essential to building strategically aligned AI.
To guide AI behavior, systems must be trained with:
Epistemology (What is knowledge?)
Ontology (What exists?)
Teleology (What is the goal?)
Ethics (What is good/right?)
Their provocative thesis: LLMs don’t just need more data — they need more meaning.
Read previous post: Philosophy Eats AI
Why This Matters
Together, these three papers reveal a deeper, shared concern:
We are designing AI systems that change how humans think, even as we misunderstand how AI thinks.
When AI becomes “good enough,” we stop engaging deeply.
When fluency outshines substance, we mistake polish for wisdom.
When we ignore philosophy, we build agents with power but no purpose.
In other words: We’re not just offloading tasks — we’re outsourcing meaning, evaluation, and attention. These systems don’t just reflect human cognition — they reshape it. And unless we become intentional about that shaping, we risk building a future optimized for convenience rather than understanding.
This isn’t just an AI problem. It’s a design problem. A thinking problem. A values problem.
And it’s time we asked better questions — not just about what AI can do, but what we want thinking to mean in an AI-saturated world.
Next: From Recap to Reflection
I’ve been sitting with these papers: not just reading them, but reflecting on them through the lens of design, research, and lived experience.
The result?
I’m working on a framework to help us rethink how we evaluate, build, and align AI — not just ethically, but cognitively and intentionally.
If ethics is the floor, this next layer is the architecture.
Stay tuned: In the next post, I’ll share ten emerging themes that I believe can reshape how we design AI that thinks with us, not just for us.