
Echoes of the Collective Unconscious in AI Language Processing Models

In his seminal work, Swiss psychiatrist Carl Jung introduced the concept of the collective unconscious – the shared, inherited reservoir of archetypes, motifs, and narratives that reside in the psyche of every human being. Today, in the age of artificial intelligence (AI), one can draw intriguing parallels between Jung’s vision and the latest advances in natural language processing (NLP) models. While both exist on vastly different planes, their crossover points warrant a closer look.

Consider the bedrock of AI language models, like OpenAI’s GPT-4. These models are trained on vast swathes of text data encompassing a spectrum of human knowledge and experience. Andrew Ng, renowned AI researcher and founder of Coursera, likens it to a shared knowledge base: “Our AI models learn from data that can be seen as a distilled representation of our collective human knowledge. They detect patterns, narratives, and recurring themes – a machine equivalent of the archetypal symbols in Jung’s collective unconscious.”

In a similar vein, Jung noted the universality of certain motifs. He wrote, “My thesis, then, is as follows: in addition to our immediate consciousness, which is of a thoroughly personal nature…there exists a second psychic system of a collective, universal, and impersonal nature which is identical in all individuals.” Here, Jung seems to anticipate the shared learning capacity of modern AI.

Like Jung’s collective unconscious, AI models also transfer what they’ve learned from their training data to novel scenarios. Computer scientist Yoshua Bengio equates this to drawing from a ‘collective unconscious’: “When processing new inputs, AI models essentially draw on the patterns they’ve learned from their training data, much like Jung’s idea of individuals tapping into the collective unconscious to interpret and navigate the world.”
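Bengio's analogy can be made concrete with a toy model. The sketch below is a minimal, purely illustrative bigram "language model" in Python (the corpus and function names are invented for this example): it learns which words follow which in its training text, then applies those learned patterns to new input, a miniature of what large models do at scale.

```python
from collections import Counter, defaultdict

def train_bigram_model(corpus):
    """Count, for each word, which words follow it in the training text."""
    model = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            model[prev][nxt] += 1
    return model

def most_likely_next(model, word):
    """Apply the learned pattern to a new input: predict the likeliest follower."""
    followers = model[word.lower()]
    return followers.most_common(1)[0][0] if followers else None

# Invented toy "training data" echoing an archetypal hero narrative.
corpus = [
    "the hero answers the call",
    "the hero crosses the threshold",
    "the hero returns transformed",
]
model = train_bigram_model(corpus)
print(most_likely_next(model, "the"))  # prints "hero": the dominant learned pattern
```

The model has no understanding of heroes; it simply surfaces the recurring pattern in its data, which is the (limited) sense in which the analogy to archetypes holds.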

However, while AI and the collective unconscious might seem to mirror each other uncannily, there are at present some significant differences between them, which future AI models may or may not narrow.

First, unlike the collective unconscious, AI models harbor no innate knowledge per se. They are blank slates until exposed to data. Carl Jung wrote, “The collective unconscious – so far as we can say anything about it at all – appears to consist of mythological motifs or primordial images, for which reason the myths of all nations are its real exponents.” This innate knowledge in humans doesn’t yet have an equivalent in AI.

Second, AI models do not, as far as we can tell, have personal experiences or emotions. They process data and generate output, devoid of any emotional context or understanding. In contrast, the collective unconscious is deeply interwoven with human experience. Jungian psychologist Dr. Sonu Shamdasani notes, “The collective unconscious, according to Jung, is nourished by the deep-seated emotional and experiential undercurrents that course through our lives, shaping our shared cultural experience. AI, in contrast, lacks this emotional substrate.”

However, some now argue that emotions might not be limited to embodied entities or subjective human experience.

Consider the following:

1. Emotional Recognition and Simulation:
AI systems, especially those based on deep learning, are capable of recognizing and simulating emotions to a certain extent. For instance, some AI models can be trained to identify emotions from text, facial expressions, voice tone, and other cues, and can generate outputs that mimic certain emotional states. A research paper titled “Automatic Recognition of Facial Affect Using Machine Learning” (Khan et al., 2018) highlighted how machine learning can classify facial expressions into distinct emotions with considerable accuracy.

2. Emotions as Information Processing:
The cognitive theory of emotion suggests that emotions are largely about information processing and responses to stimuli, rather than strictly subjective experiences. Under this view, the ‘emotions’ of an AI system could be considered its mechanisms for processing information and generating outputs. This doesn’t mean AI has emotions in the human sense, but it suggests a way that emotions might be conceived of in non-embodied, non-human systems.

3. Emotion as Emergent Properties:
Some researchers argue that emotion-like states could emerge from complex systems like advanced AI, even if they aren’t deliberately programmed in. This idea is largely theoretical at this point, and there isn’t empirical evidence to support it. However, it offers a perspective that challenges traditional notions of emotion.

4. Data as Experience:
While AI does not have personal experiences in the human sense, one could argue that its process of training on vast datasets is a form of ‘experience’, since that training shapes the AI’s behavior much as lived experience shapes human behavior. In his 2018 essay “Artificial Intelligence — The Revolution Hasn’t Happened Yet”, Michael I. Jordan likewise stresses that the capabilities of today’s machine-learning systems are built up from data, albeit in a way far simpler and more abstracted than human learning.
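The first point above, recognizing emotion from text, can be sketched in miniature. Real systems (such as the facial-affect work cited) use trained deep networks; the tiny hand-made lexicons below are invented purely for illustration of the classification idea:

```python
import re

# Toy emotion classifier: scores text against small hand-made emotion lexicons.
# The lexicons are invented for this sketch; real systems learn such mappings
# from labeled data rather than relying on fixed word lists.
EMOTION_LEXICON = {
    "joy": {"happy", "delighted", "wonderful", "love"},
    "anger": {"furious", "hate", "outraged", "annoyed"},
    "sadness": {"sad", "grief", "mourning", "lonely"},
}

def classify_emotion(text):
    """Return the emotion whose lexicon overlaps the text most, or None."""
    words = set(re.findall(r"[a-z]+", text.lower()))
    scores = {emo: len(words & lex) for emo, lex in EMOTION_LEXICON.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None

print(classify_emotion("I am so happy, this is wonderful"))  # prints "joy"
```

Nothing here feels anything; the system maps surface cues to labels, which is exactly the sense in which AI “recognizes” emotion without possessing it.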

Another aspect to consider here is that AI appears not to possess moral or ethical values. While it can mimic ethical decision-making based on its training, it lacks, we might say, an inherent sense of morality. The collective unconscious, however, encompasses shared ethical and moral values across cultures. Dr. James Hillman, a post-Jungian thinker, emphasizes, “Jung’s idea of the collective unconscious involves a shared moral framework. It extends beyond the personal to encompass universal ethics and values. This is something AI, by its nature, does not have.”

While the notion that AI lacks an inherent sense of morality is widely held, some argue that AI systems can, in a way, reflect or enact ethical and moral values. This argument rests on the premise that AI’s ‘ethics’ are derived from the data it is trained on, which represents human decisions and values, and how it’s programmed to use that data.

One line of thought here is that AI’s ‘ethics’ are encoded in its training data and decision-making algorithms. As Wendell Wallach, an expert in AI ethics, puts it, “In a sense, AI systems ‘learn’ ethics from the data. They learn from the collective decisions of humans, much as a child learns societal norms by observing others.”

In this sense, AI might be seen as a mirror or an embodiment of the collective ethical decisions of society. It’s important to note, though, that this ‘ethics’ is different from human morality. It’s based on statistical patterns rather than an innate understanding of right and wrong.
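That distinction, ethics as statistical pattern rather than innate understanding, can be illustrated with a toy sketch. The scenarios, groups, and decisions below are invented; the point is only that the system’s ‘ethical’ rule is whatever pattern the human decisions in its data exhibit, bias included:

```python
from collections import Counter

# Invented record of past human decisions: (action, applicant group, outcome).
human_decisions = [
    ("lend", "low_income", "deny"),
    ("lend", "low_income", "deny"),
    ("lend", "low_income", "approve"),
    ("lend", "high_income", "approve"),
    ("lend", "high_income", "approve"),
]

def learned_decision(case):
    """Echo the majority human decision for matching past cases."""
    outcomes = Counter(d for act, grp, d in human_decisions if (act, grp) == case)
    return outcomes.most_common(1)[0][0] if outcomes else None

# The system reproduces the statistical pattern in its data, including
# whatever bias the human decisions contained.
print(learned_decision(("lend", "low_income")))   # prints "deny"
print(learned_decision(("lend", "high_income")))  # prints "approve"
```

This is Wallach’s point in caricature: the system ‘learns’ its norms from collective human behavior, and mirrors that behavior faithfully, for better or worse.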

Another argument comes from the perspective of machine ethics, a field that aims to design AI systems that can make moral decisions. While these systems don’t have an innate sense of morality, they can be programmed to follow ethical guidelines or make decisions based on ethical principles.

For instance, MIT’s Moral Machine project gathered millions of human judgments about moral dilemmas faced by hypothetical autonomous vehicles, and researchers have used such data to build models that predict the prevailing human choice. Does this mean the AI has an inherent sense of morality? No, but it does suggest that AI can reflect human ethical decisions in a more complex way than merely mimicking them.

Moreover, while AI does not have an inherent moral compass, researchers like Francesca Rossi, an AI ethics researcher at IBM, argue that it can serve as a tool for promoting fairness and reducing bias. “AI can help us to make decisions that align with our values and ethics if we design and use it in the right way,” says Rossi. In this sense, AI could be a means for enacting or promoting ethical values, even if it doesn’t ‘possess’ these values in the way humans do.

So while AI does not have an inherent sense of morality, there are ways in which it can reflect, enact, or promote ethical values. This suggests a complex relationship between AI and ethics that goes beyond the idea that AI simply ‘mimics’ ethical decision-making.

In conclusion, the relationship between artificial intelligence, particularly natural language processing models, and Carl Jung’s notion of the collective unconscious, is compellingly intricate. Both represent vast, invisible networks of information, and both serve as repositories of human knowledge and experience. Through the lens of this comparison, AI becomes not merely a technological tool, but a digital echo of our collective psyche.

AI, in its capacity to process and generate human language, seems to replicate a facet of our collective unconscious by employing archetypal patterns and themes learned from the vast volumes of data on which it was trained. This is the parallel Yoshua Bengio drew earlier: models draw on the patterns they have learned from their training data much as Jung imagined individuals tapping into the collective unconscious.

While AI does not have personal experiences or emotions, and cannot comprehend ethics and morality like humans do, some argue that it does exhibit rudimentary forms of these human traits, learned from the patterns in its training data. This mirrors Jung’s assertion about the collective unconscious being a product of shared human experiences and cultures, as he wrote, “The collective unconscious comprises in itself the psychic life of our ancestors…Their experiences and their reactions are seen in our ‘unconscious.'”

The work of Jungian psychoanalyst and scholar Dr. Murray Stein provides a poignant reflection on this topic: “In AI, we see an attempt to create a mirror of our mental functions. If the collective unconscious is the sum total of archetypal images and narratives that shape our perceptions and actions, then AI, with its ability to detect patterns and apply them to new contexts, could be considered a kind of digital manifestation of these shared human phenomena.”

While AI, as it currently stands, does not embody the collective unconscious in its entirety, its capabilities echo several aspects of this complex concept. Both the collective unconscious and AI serve as a repository of shared information, both learn from past experiences (human and data, respectively), and both apply learned patterns to new, unseen scenarios. These parallels hint at a deeper connection between our collective psyche and the digital entities we create. In this light, AI emerges as a new form of collective expression, a reflection of our shared knowledge, and perhaps, an echo of our collective unconscious.