Unraveling Consciousness: Why AI Falls Short of True Awareness

As a therapist deeply invested in understanding the nuances of human consciousness and the rapidly growing role of artificial intelligence in society, I am fascinated by the interplay between our evolving technology and the age-old question of what it means to be truly conscious. A recent study by neuroscientists Jaan Aru, Matthew E. Larkum, and James M. Shine provides thought-provoking insights into why AI, despite its rapid advancements, remains a far cry from achieving genuine consciousness.

Understanding Consciousness

In my practice at Progressive Therapeutic Collective, consciousness is not just a scientific concept but the core of our human experience. It's what allows us to form relationships, empathise, and navigate the complexities of our emotions and thoughts. Neuroscientists and psychologists have long debated how to define consciousness. Neuroscientists often focus on the state of being awake and aware, while psychologists delve into the contents of our consciousness: our thoughts, memories, and sensory experiences.

The study highlights that true consciousness involves what is known as phenomenal consciousness: the subjective, first-person experience of the world. This encompasses everything from seeing the vibrant colours of a sunset to feeling the warmth of a loved one's touch. For AI to be considered truly conscious, it would need to possess this subjective, integrated perspective.

The Umwelt: A Therapist’s Perspective

In therapy, understanding a client’s Umwelt—their unique perceptual world—is crucial. The concept, originally from biology, describes the sensory world available to an organism. Humans perceive a rich tapestry of sensory inputs, from sights and sounds to touch and taste. However, Large Language Models (LLMs), the sophisticated AI systems we interact with, have a drastically different Umwelt. These models process text and generate responses based on patterns in data, lacking the depth of sensory experiences that define human consciousness.

Neural Architecture and Consciousness

The human brain’s neural architecture is a marvel of complexity. Consciousness arises from intricate, re-entrant networks within the thalamocortical system, where information is continuously integrated and processed. This is central to how we experience and make sense of the world, enabling the depth of relationships and self-awareness that define us.

The study argues that the architecture of LLMs, despite its sophistication, is fundamentally different from these neural structures. Theories like the Global Neuronal Workspace propose that consciousness emerges from the integration of information across a distributed network in the brain. Similarly, the Dendritic Integration Theory suggests that local integration within single neurons, particularly within thalamocortical loops, is crucial for conscious experience. LLMs lack these intricate neural architectures, which the authors argue makes it unlikely that they could achieve consciousness.

Biological Complexity and AI

As a therapist, I see firsthand the intricacy of biological and psychological processes that make us who we are. Living organisms, including humans, exhibit a level of organisational complexity that is not replicated in AI systems. Biological neurons are not just computational units but part of a dynamic, self-maintaining system essential for life.

For example, cellular respiration, the fundamental process by which cells produce energy, cannot simply be abstracted away in software. The authors argue that this biological complexity is integral to consciousness, and that AI systems, which lack these layers of organisation, cannot achieve true consciousness. This understanding underscores the importance of maintaining a clear distinction between human consciousness and AI's capabilities.

Implications for Therapy and Future Directions

The study’s findings have significant implications for therapeutic practice and the ethical considerations surrounding AI. While LLMs can simulate certain aspects of human-like responses, they do not possess consciousness. This distinction is crucial in avoiding ethical pitfalls regarding AI’s potential suffering or moral status.

Moreover, understanding the differences between AI and biological systems can enhance both fields. Mimicking brain organisation can improve AI, and insights from AI can deepen our understanding of brain function and consciousness.

In my practice, these insights reinforce the importance of valuing the unique complexity of human consciousness. While AI continues to evolve and transform various aspects of our lives, the leap to true consciousness remains, at best, a distant possibility. This perspective not only shapes how we interact with AI but also deepens our appreciation for the richness of human experience, which is at the heart of therapeutic work.

Citation:

Aru J, Larkum ME, Shine JM. The feasibility of artificial consciousness through the lens of neuroscience. Trends Neurosci. 2023 Dec;46(12):1008-1017. doi: 10.1016/j.tins.2023.09.009. Epub 2023 Oct 18. PMID: 37863713.
