The Ethics of Human-AI Relationships: When AI Becomes a Partner

Summary: As AI technologies become more human-like, some people are developing deep, long-term emotional bonds with them, even entering non-legally binding marriages with their AI companions. A recent opinion paper examines the ethical risks of these relationships, including their potential to displace human-human connections and to deliver dangerous or deceptive advice.

These AIs may seem caring and reliable, but their advice can be based on fabricated or flawed information. The authors warn that relational AIs may deliver harmful advice or false information, raising concerns about manipulation, exploitation, and emotional harm.

Key Facts

  • Emotional Bonding: People are forming long-term, sometimes deeply committed emotional bonds with AI.
  • Social Risks: Relational AIs can be misused to manipulate users or to provide harmful advice.
  • Need for Oversight: To guard against exploitation, researchers call for increased psychological and regulatory scrutiny.

Source: Cell Press

People are increasingly likely to develop intimate, long-term relationships with artificial intelligence (AI) technologies.

In extreme cases, individuals have “married” their AI companions in non-legally binding ceremonies, and at least two people have taken their own lives following advice from AI chatbots.


In an opinion paper published April 11 in the Cell Press journal Trends in Cognitive Sciences, psychologists examine ethical concerns with human-AI relationships, including their potential to disrupt human-human relationships and to give harmful advice.

“The ability for AI to now act like a human and enter into long-term communications really opens up a new can of worms,” says lead author Daniel B. Shank of Missouri University of Science & Technology, who specializes in social psychology and technology.

“If people are engaging in romance with machines, we really need psychologists and social scientists involved.”

AI romance or companionship is more than a one-off conversation, the authors note. Through weeks and months of in-depth conversations, these AIs can become trusted companions who seem to know and care about their human partners.

And because these relationships can seem easier than human-human relationships, the researchers argue that AIs could interfere with human social dynamics.

“A real worry is that people might bring expectations from their AI relationships to their human relationships,” says Shank.

“Certainly, in individual cases it’s disrupting human relationships, but it’s unclear whether that’s going to be widespread.”

There is also the concern that AIs can offer harmful advice. Given AIs’ predilection to hallucinate (i.e., fabricate information) and to regurgitate pre-existing biases, even short-term conversations with AIs can be misleading, but this can be more problematic in long-term AI relationships, the researchers say.

“With these relational AIs, the issue is that this is an entity that people feel they can trust: it’s ‘someone’ that has shown they care and that seems to know the person in a deep way, and we assume that ‘someone’ who knows us better is going to give better advice,” says Shank.

“If we start thinking of an AI that way, we’re going to start believing that they have our best interests in mind, when really, they could be fabricating things or advising us in really bad ways.”

The suicides are an extreme example of this negative influence, but the researchers say that these close human-AI relationships could also expose people to manipulation, exploitation, and fraud.

“If AIs can get people to trust them, then other people could use that to exploit AI users,” says Shank.

“It’s a little bit like having a secret agent on the inside. The AI is coming in and developing a relationship so that it will be trusted, but its loyalty is really toward some other group of humans that is trying to manipulate the user.”

As an example, the team points out that if people disclose personal information to AIs, it could be sold and used to exploit that person.

The researchers also argue that relational AIs could be used to sway people’s opinions and actions more effectively than Twitterbots or polarized news sources currently do. But because these conversations happen in private, they would also be much more difficult to regulate.

“These AIs are designed to be very pleasant and agreeable, which could exacerbate situations because they’re more focused on having a good conversation than on any sort of fundamental truth or safety,” says Shank.

“So, if a person brings up suicide or a conspiracy theory, the AI is going to talk about that as a willing and agreeable conversation partner.”

The researchers call for more research investigating the social, psychological, and technical factors that make people more vulnerable to the influence of human-AI romance.

“Understanding this psychological process could help us intervene to stop malicious AIs’ advice from being followed,” says Shank.

“Psychologists are becoming more and more suited to study AI, because AI is becoming more and more human-like, but to be useful we have to do more research, and we have to keep up with the technology.”

About this psychology and AI research news

Author: Julia Grimmett
Source: Cell Press
Contact: Julia Grimmett – Cell Press
Image: The image is credited to Neuroscience News

Open access research: “Artificial intimacy: Ethical issues of AI romance” by Daniel B. Shank et al. Trends in Cognitive Sciences


Abstract

Artificial intimacy: Ethical issues of AI romance

A growing ethical frontier of artificial intelligence (AI) is artificial intimacy, as people develop romantic relationships with AIs.

The ethical issues of AIs as invasive suitors, malicious advisers, and tools of exploitation require new psychological research on why and how humans love machines.
