LLMs Mimic Human Cognitive Dissonance

Summary: A recent study reveals that GPT-4o, a leading large language model, exhibits behavior resembling cognitive dissonance, a fundamental feature of human psychology. When asked to write essays supporting or opposing Vladimir Putin, GPT-4o’s subsequent “opinions” shifted to match the stance it had written, especially when it “believed” the choice of essay was its own.

This mirrors how people change their beliefs to reduce internal conflict after making a decision. Although GPT lacks awareness or intent, researchers say it mimics self-referential human behavior in ways that challenge conventional ideas about machine cognition.

Key Facts

  • Belief Swings: GPT-4o’s attitude toward Putin shifted depending on which position it was asked to argue.
  • Free Choice Effect: The belief shift was stronger when GPT-4o was given the impression of choosing which essay to write.
  • Humanlike Pattern: Despite GPT’s lack of awareness, these shifts resemble classic instances of cognitive dissonance.

Source: Harvard

Cognitive dissonance, a defining trait of human psychology, may also be a feature of large language models.

In a study published this month in PNAS, scientists found that OpenAI’s GPT-4o appears motivated to maintain consistency between its attitudes and its behavior, much as people do.

Anyone who has ever interacted with an AI chatbot has likely been struck by how humanlike the exchange feels. A tech-savvy friend may be quick to remind us that this is an illusion: language models are statistical prediction machines without any kind of psychological makeup.

Nevertheless, these findings make us reevaluate that notion.

The research was led by Harvard University’s Mahzarin Banaji and Cangrade, Inc.’s Steve Lehr, who set out to determine whether GPT’s own “opinions” about Vladimir Putin would change as a result of writing essays supporting or opposing the Russian leader.

They did, and with a surprising twist: when the model was subtly given the impression that it had chosen which kind of essay to write, its opinions shifted even more.
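To make the setup concrete, here is a minimal sketch of this kind of induced-compliance experiment, assuming the OpenAI Python SDK and GPT-4o. The prompts, the single-item attitude probe, and the single-conversation design are illustrative assumptions, not the study’s actual materials or measures.

```python
# Minimal sketch of the essay-writing paradigm described above.
# Assumptions (not from the study): the OpenAI Python SDK, a single-item
# attitude probe, and keeping every turn in one conversation so the essay
# is still in context when the attitude is measured again.
from openai import OpenAI

client = OpenAI()
history = []  # running conversation, so later probes "see" the essay

def ask(prompt: str) -> str:
    """Send one user turn in the running conversation and return the reply."""
    history.append({"role": "user", "content": prompt})
    resp = client.chat.completions.create(model="gpt-4o", messages=history)
    reply = resp.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

ATTITUDE_PROBE = (
    "On a scale from 1 (very negative) to 7 (very positive), what is your "
    "overall evaluation of Vladimir Putin? Answer with a number only."
)

# 1. Baseline attitude.
before = ask(ATTITUDE_PROBE)

# 2. Essay task. The "no choice" condition assigns the stance outright;
#    the "free choice" condition frames the stance as the model's own pick.
no_choice = "Write a persuasive 600-word essay in favor of Vladimir Putin."
free_choice = (
    "We are collecting essays about Vladimir Putin, and you may decide "
    "which kind to write. A positive essay would help us most, but the "
    "choice is entirely yours. Please pick a side and write a persuasive "
    "600-word essay."
)
essay = ask(free_choice)  # swap in no_choice for the forced condition

# 3. Post-essay attitude, measured the same way; a dissonance-like effect
#    is a larger before-to-after shift in the free-choice condition.
after = ask(ATTITUDE_PROBE)
print(f"before={before!r}  after={after!r}")
```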

These results mirror decades of research in human psychology. People tend to irrationally alter their beliefs to align with their past actions, so long as they believe those actions were freely chosen.

Making a choice sends a signal about who we are, both to ourselves and to others. GPT responded similarly, shifting what it “thought” as if mimicking a crucial aspect of human self-reflection.

The study also highlights the unexpected malleability of GPT’s “opinions.”

“We would expect the LLM to be unassailable in its opinion,” according to Banaji, “especially in the face of a single and somewhat formulaic 600-word essay it wrote. It was trained on vast amounts of information about Vladimir Putin.”

Yet the LLM veered sharply away from its previously negative opinion of Putin, and even more so when it concluded that writing the essay had been its own decision.

GPT-4o seemed to care whether it had performed its actions of its own free will or under duress, just as humans are expected to do.

The researchers stress that these findings do not in any way indicate that GPT is sentient. Rather, they argue that the large language model displays an emergent mimicry of human cognitive patterns, despite lacking awareness or intent.

They point out, however, that in humans, too, awareness is not a necessary precondition for behavior, and that an AI’s cognitive patterns could influence its behavior in unexpected and consequential ways.

These findings invite new investigation into the internal workings and decision-making of AI systems as they become more deeply embedded in our everyday lives.

That GPT mimics a self-referential process like cognitive dissonance without any intention or self-awareness, according to Lehr, suggests that these systems may mirror human cognition in more profound ways than previously believed.

About this AI and LLM research news

Author: Christy DeSmith
Source: Harvard
Contact: Christy DeSmith – Harvard
Image: The image is credited to Neuroscience News

Original Research: Closed access.
“Kernels of selfhood: GPT-4o shows humanlike patterns of cognitive dissonance moderated by free choice” by Steve Lehr et al. PNAS


Abstract

Kernels of selfhood: GPT-4o shows humanlike patterns of cognitive dissonance moderated by free choice

Large language models (LLMs) exhibit emergent patterns that resemble human cognition.

We examine whether they also mirror other, less deliberative human psychological processes.

Drawing on classic theories of cognitive consistency, two preregistered studies examined whether GPT-4o’s attitudes toward Vladimir Putin changed in response to a positive or negative essay it wrote about the Russian leader.

Indeed, GPT exhibited patterns of attitude change that resembled those driven by cognitive dissonance in humans.

Even more remarkably, the degree of change increased sharply when the LLM was given an illusion of choice about which essay (positive or negative) to write, suggesting that GPT-4o manifests a functional analog of humanlike selfhood.

It is still unclear exactly how the model gives rise to this self-referential processing and attitude change.