The Real Risk of AI: Redefining Human Cognition
- Neuroscientist Anil Seth argues that AI systems lack subjective experience: they are intelligent without being conscious, occupying an axis orthogonal to human consciousness.
- The primary risk is 'cognitive borrowing,' in which habitual reliance on AI erodes our inherent mental capabilities.
- Offloading routine cognitive tasks risks replacing genuine understanding with mere algorithmic fluency and superficial output.
For months, the discourse surrounding artificial intelligence has been dominated by a singular, hyperbolic concern: will these systems eventually become conscious? We frequently project human traits onto large language models, reading sentience into patterns and responses that mimic our own interior lives. However, in a compelling analysis, neuroscientist Anil Seth recently articulated a crucial distinction that should shift our focus: AI systems may be incredibly intelligent, but they lack the foundational biological architecture required for consciousness. They are not merely 'lesser' humans; they are something fundamentally different, operating on an entirely separate axis of existence.
Seth maps this divergence using a compelling two-axis framework: one for consciousness and one for intelligence. Humans exist along both, while AI systems extend almost exclusively along the intelligence axis, remaining entirely flat on the consciousness axis. These systems are, in a mathematical sense, orthogonal to us. They operate in a dimension that does not overlap with lived human experience. This is not a deficiency to be fixed, but a structural reality of the technology. The danger, therefore, is not that the machine will 'wake up' and threaten us, but that our unchecked reliance on it will slowly put our own cognition to sleep.
The real risk lies in what some researchers describe as 'cognitive borrowing.' When we offload the mental equivalent of 'activities of daily living'—such as writing, complex decision-making, and structural organization—to an AI, we create a void where our own processing should be. Much as muscles atrophy when the body stops moving, our intellectual capacities degrade when the friction required for genuine understanding is removed. We mistake fluency for insight, and output for formation, prioritizing the ease of the generated response over the uneven, effortful path of true thought.
For students and lifelong learners, this is an urgent observation. Education is fundamentally a process of friction—the act of struggling with difficult concepts until they become part of your internal mental model. When AI provides an immediate, coherent, and seemingly finished product, it removes that necessary struggle. If you allow the system to do the cognitive heavy lifting, you are not just saving time; you are potentially outsourcing the very development of your independent judgment. The system rewards the path of least resistance, but true human thought requires exactly the resistance we are now learning to bypass.
We must stop worrying about the 'machine mind' and start auditing our own. As AI becomes deeply integrated into our digital environments, the question is not whether the machine will become human, but whether we will remain so. The challenge of the coming decade is to integrate these powerful tools without letting them alter the conditions that sustain our unique capacity for lived experience. If we forget that understanding requires cost, we may find ourselves with fluent machines and increasingly stagnant human minds.