Robot Deception: Some Lies Accepted, Some Rejected

Summary: A recent study examined how people perceive different types of deception by robots, finding that some lies are accepted more readily than others. Researchers presented almost 500 participants with scenarios in which robots engaged in external, hidden, and superficial state deceptions in medical, cleaning, and retail settings.

Participants disapproved most of hidden state deceptions, such as a housecleaning robot covertly filming, while external state deceptions, like sparing a person emotional pain, were viewed more favorably. The research highlights the ethical complexities of robot deception and suggests that regulation will be needed as robots become more integrated into human life.

Key Facts:

  • Hidden state deceptions, like covert filming, were the most disapproved.
  • External state deceptions, like lying to spare someone’s feelings, were the most accepted.
  • According to the study, legislation is needed to protect users from harmful deceptions.

Source: Frontiers

Honesty is the best policy … most of the time. Social norms help people understand when to tell the truth and when to lie, whether to spare someone’s feelings or to protect themselves from harm. But how do these norms apply to robots, which are increasingly working alongside people?

To find out whether humans can accept robots telling lies, scientists surveyed almost 500 people on how they assessed and justified different types of robot deception.

Participants tended to blame these unacceptable deceptions, especially hidden state deceptions, on robot developers or owners. Credit: Neuroscience News

Andres Rosero, PhD candidate at George Mason University and lead author of the article in Frontiers in Robotics and AI, said: “I wanted to explore an understudied facet of robot ethics, to contribute to our understanding of mistrust towards emerging technologies and their developers.

“With the advent of generative AI, I felt it was important to begin examining possible cases in which anthropomorphic design and behavior sets could be utilized to manipulate users.”

Three types of lie

The scientists selected three scenarios reflecting situations where robots already work (medical, cleaning, and retail settings) and three distinct deception behaviors. These were external state deceptions, which lie about the world beyond the robot; hidden state deceptions, where a robot’s design conceals its capabilities; and superficial state deceptions, where a robot’s design overstates its capabilities.

In the external state deception scenario, a robot working as a caretaker for an elderly woman lies that her late husband will soon be home. In the hidden state deception scenario, a woman visits a house where a robot housekeeper is cleaning, unaware that the robot is also filming.

Finally, in the superficial state deception scenario, a robot working in a shop as part of a study on human-robot relations complains of feeling pain while moving furniture, prompting a human to ask someone else to take the robot’s place.

What a tangled web we weave

The scientists asked 498 participants to read one of the scenarios and then answer a questionnaire. This asked participants whether they approved of the robot’s behavior, how deceptive it was, if it could be justified, and if anyone else was responsible for the deception. The researchers coded and analyzed these responses to identify common themes.

Participants disapproved most of the hidden state deception, the housecleaning robot with the undisclosed camera, which they also considered the most deceptive. Although they rated the external state deception and the superficial state deception as moderately deceptive, they disapproved more of the superficial state deception, where a robot pretended it felt pain. This may have been perceived as manipulative.

Participants approved most of the external state deception, where the robot lied to the patient. They justified the robot’s behavior by saying that it protected the patient from unnecessary pain, prioritizing the norm of sparing someone’s feelings over honesty.

The ghost in the machine

Although participants were able to provide justifications for all three deceptions (for instance, some people suggested the housecleaning robot might film for security reasons), most participants stated that the hidden state deception could not be justified.

Similarly, about half the participants responding to the superficial state deception said it was unjustifiable. Participants tended to blame these unacceptable deceptions, especially hidden state deceptions, on robot developers or owners.

“I think we should be concerned about any technology that is capable of withholding the true nature of its capabilities, because it could lead to users being manipulated by that technology in ways the user (and perhaps the developer) never intended,” said Rosero.

“We’ve already seen examples of companies using artificial intelligence chatbots and web design principles to entice users to take particular actions. We need regulation to protect ourselves from these harmful deceptions.”

However, the scientists cautioned that this research should be extended with experiments that could model real-life reactions better, for example videos or short roleplays.

“The benefit of using a cross-sectional study with vignettes is that we can obtain a large number of participant attitudes and perceptions in a cost-controlled manner,” Rosero explained.

“Vignette studies provide baseline findings that can be corroborated or contested through further investigation. Experiments with in-person or simulated human-robot interactions are likely to provide greater insight into how humans actually perceive these robot deception behaviors.”

About this psychology and robotics research news

Author: Angharad Brewer Gillham
Source: Frontiers
Contact: Angharad Brewer Gillham – Frontiers
Image: The image is credited to Neuroscience News

Original Research: Open access.
“Exploratory analysis of human perceptions of social robot deception behaviors” by Andres Rosero et al. Frontiers in Robotics and AI


Abstract

Exploratory analysis of human perceptions of social robot deception behaviors

Introduction: 

Robots are being introduced into increasingly social environments. These robots will have to adhere to the social norms that govern human interactions as they become more ingrained in social spaces. At times, however, robots will violate norms and perhaps even deceive their human interaction partners.

This study provides some of the first empirical research into how people perceive and evaluate robot deception, specifically three types of deception behaviors theorized in the technology ethics literature: external state deception (cues that intentionally misrepresent or omit details about the world beyond the robot), hidden state deception (cues designed to conceal or obscure a robot’s capacity or internal state), and superficial state deception (cues that suggest a robot has some capacity or internal state that it lacks).

Methods: 

Participants (N = 498) were assigned to read one of three vignettes, each corresponding to one of the deceptive behavior types. Participants responded to quantitative and qualitative questions that measured whether people thought the behaviors were deceptive, whether they were justified, and whether other agents were involved in the robots’ deceptive behavior.

Results: 

Participants rated hidden state deception as the most deceptive of the three deception types, and they also approved of it the least. Although participants rated superficial state deception and external state deception as comparably deceptive, they generally disapproved of superficial state deception while generally approving of external state deception. Participants in the hidden state condition frequently attributed the deception to agents other than the robot.

Conclusion: 

This study provides some of the first evidence of how people perceive and evaluate different types of robot deception behaviors. It suggests that people distinguish between the three deception behaviors and approve of them to different degrees, and that they attribute responsibility for at least the hidden state deception more to the robot’s developers or owners than to the robot itself.
