When Does Election Debunking Work? New Model Provides Insight

Summary: A study based on a computational model examines the factors that influence whether attempts to debunk claims about election results will persuade people to change their beliefs. The model shows that debunking is more successful when people are less certain of their original beliefs and when they perceive the authority as unbiased and motivated by accuracy.

Debunking is most effective when an authority contradicts its own perceived bias, such as a typically partisan news outlet confirming an unexpected result. These insights could help foster productive public discussion about the legitimacy of votes in upcoming elections.

Important Facts:

  • Debunking is more effective when people are less certain of their beliefs.
  • An authority that is perceived as impartial, or that contradicts its own perceived bias, can shift beliefs.
  • The model shows that debunking usually fails, but can succeed under certain conditions.

Source: MIT

When an election result is disputed, people who are skeptical of the outcome may be influenced by authority figures who come down on one side or the other. Those figures can be independent monitors, political figures, or media organizations.

However, these “debunking” efforts don’t always have the desired effect, and in some cases they can lead people to cling even more tightly to their original position.

Neuroscientists and social scientists at MIT and the University of California at Berkeley have developed a computational model that examines the factors that influence whether debunking efforts will persuade people to change their beliefs about the validity of an election.

Their findings suggest that although debunking fails most of the time, it can succeed under the right conditions.

For example, the model showed that successful debunking is more likely when people are less certain of their original beliefs and when they believe the authority is unbiased or strongly motivated by a desire for accuracy.

It also helps when an authority endorses a result that goes against its perceived bias, as when Fox News declared that Joseph R. Biden had won the 2020 U.S. presidential election.

“When people see an act of debunking, they treat it as a human action and understand it the way they understand human actions — that is, as something someone did for their own reasons,” says Rebecca Saxe, the John W. Jarve Professor of Brain and Cognitive Sciences, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.

“We’ve used a very simple, basic model of how people understand other people’s behavior, and found that that’s all you need to explain this complicated phenomenon.”

The findings could prove relevant as the country prepares for the Nov. 5 presidential election, as they help to identify the factors most likely to influence whether voters accept the outcome.

MIT graduate student Setayesh Radkani is the lead author of the paper, which appears today in a special election-themed issue of the journal PNAS Nexus. Marika Landau-Wells, a former MIT postdoc who is now an associate professor of political science at the University of California at Berkeley, is also an author of the study.

Modeling motivation

In their work on election debunking, the MIT team took a novel approach, building on Saxe’s extensive work studying “theory of mind” — how people think about the thoughts and motivations of other people.

In her PhD thesis work, Radkani has been developing a computational model of the cognitive processes that take place when people see others being punished by an authority. Not everyone interprets punitive actions the same way, depending on their prior beliefs about the action and the authority.

Some may believe that the authority is acting legitimately to punish a wrongdoing, while others may believe that the authority is overreaching and issuing an unjust punishment.

Last year, after taking part in an MIT workshop on societal polarization, Saxe and Radkani had the idea to apply the model to how people react to an authority trying to influence their political beliefs.

Landau-Wells, who earned her PhD in political science before working as a postdoc in Saxe’s lab, suggested applying the model to efforts to debunk beliefs about the legitimacy of an election result.

Radkani’s computational model is based on Bayesian inference, meaning it continually updates its predictions of people’s beliefs as they receive new information. In this framework, debunking is treated as an action that a person takes for their own reasons.

People who witness the authority’s debunking then interpret why the authority said what it said. Based on that interpretation, they may or may not change their own beliefs about the election result.
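To make that inference concrete, here is a minimal sketch of the kind of joint Bayesian update described above, using illustrative numbers rather than the authors’ published code or parameters: an observer who hears an authority declare the election legitimate revises beliefs about the outcome and about the authority’s motive at the same time.

```python
# A minimal sketch (illustrative assumptions, not the authors' published code)
# of a joint Bayesian update: after hearing an authority assert "the election
# was legitimate," an observer revises beliefs about the outcome and about the
# authority's motive simultaneously.

import itertools

OUTCOMES = ("legitimate", "stolen")
MOTIVES = ("accuracy", "biased")

# Hypothetical priors for one observer who leans toward "stolen"
# but is unsure about the authority's motive.
prior_outcome = {"legitimate": 0.4, "stolen": 0.6}
prior_motive = {"accuracy": 0.5, "biased": 0.5}

def p_statement(outcome, motive):
    """Assumed probability that the authority declares the election legitimate,
    given the true outcome and the authority's motive."""
    if motive == "accuracy":
        return 0.9 if outcome == "legitimate" else 0.1
    return 0.8  # a biased authority endorses this result regardless of the truth

# Joint posterior over (outcome, motive) after one debunking statement.
joint = {
    (o, m): prior_outcome[o] * prior_motive[m] * p_statement(o, m)
    for o, m in itertools.product(OUTCOMES, MOTIVES)
}
z = sum(joint.values())

posterior_outcome = {o: sum(joint[o, m] for m in MOTIVES) / z for o in OUTCOMES}
posterior_motive = {m: sum(joint[o, m] for o in OUTCOMES) / z for m in MOTIVES}

print(posterior_outcome)  # P(legitimate) rises from 0.40 to about 0.56
print(posterior_motive)   # P(accuracy-driven) falls from 0.50 to about 0.34
```

With these assumed numbers, a single statement nudges this observer toward accepting the result while also making the authority look somewhat more biased; how those two effects trade off depends entirely on the observer’s prior beliefs.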

Additionally, the model does not assume that any beliefs are necessarily incorrect or that any group of people is acting irrationally.

According to Radkani, “the only assumption that we made is that there are two groups in society who hold different opinions on a topic: one believes that the election was stolen, and the other believes that it was not.”

“Other than that, these groups are similar. They share their beliefs about the authority: what the different motives of the authority could be, and how motivated the authority is by each of those motives.”

The researchers modeled more than 200 different scenarios in which an authority attempts to debunk a belief held by one group regarding the validity of an election outcome.

In each scenario, the researchers varied the level of certainty of the groups’ original beliefs, and they varied the groups’ perceptions of the authority’s motivations. In some cases, groups believed that the authority was motivated by promoting accuracy; in others, they did not.

They also varied the groups’ perceptions of whether the authority was biased toward a particular viewpoint, and how strongly the groups held those perceptions.

Building consensus

In each case, the researchers used the model to predict how each group would respond to a series of five statements from an authority aimed at persuading them that the election had been legitimate.
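As a rough illustration of how such a scenario might unfold (again with made-up numbers, not the paper’s actual parameters), the toy simulation below applies a joint Bayesian update of the kind sketched earlier five times, once per statement, to two observers who differ only in how certain they initially are that the election was stolen.

```python
# Toy sweep (illustrative assumptions only): two observers hear five consecutive
# "the election was legitimate" statements and jointly update beliefs about the
# outcome and about whether the authority is accuracy-driven.

def update(p_legit, p_accurate,
           p_say_if_true=0.9, p_say_if_false=0.1, p_say_if_biased=0.8):
    """One joint Bayesian update after a single debunking statement."""
    joint = {
        ("legitimate", "accuracy"): p_legit * p_accurate * p_say_if_true,
        ("legitimate", "biased"): p_legit * (1 - p_accurate) * p_say_if_biased,
        ("stolen", "accuracy"): (1 - p_legit) * p_accurate * p_say_if_false,
        ("stolen", "biased"): (1 - p_legit) * (1 - p_accurate) * p_say_if_biased,
    }
    z = sum(joint.values())
    new_legit = (joint["legitimate", "accuracy"] + joint["legitimate", "biased"]) / z
    new_accurate = (joint["legitimate", "accuracy"] + joint["stolen", "accuracy"]) / z
    return new_legit, new_accurate

for label, p_legit in [("uncertain observer", 0.45), ("firmly 'stolen' observer", 0.05)]:
    p_accurate = 0.5
    for _ in range(5):  # five statements, as in the scenarios described above
        p_legit, p_accurate = update(p_legit, p_accurate)
    print(f"{label}: P(legitimate) = {p_legit:.2f}, "
          f"P(authority accuracy-driven) = {p_accurate:.2f}")
```

In this toy run, the initially uncertain observer ends up largely accepting the result, while the firmly committed observer barely moves and instead concludes that the authority is biased, mirroring the overall pattern the researchers report below.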

In the majority of the scenarios studied, the researchers found that beliefs remained polarized, and in some cases became even more polarized. This polarization, they note, could also extend to new topics unrelated to the original context of the election.

However, under some circumstances, the debunking did succeed, and beliefs converged on an accepted outcome. This was more likely to occur when people were initially less certain of their original beliefs.

“When people are very, very certain, they become hard to move. So, in essence, a lot of this authority debunking doesn’t matter,” Landau-Wells says. “However, there are a lot of people who are in this uncertain band.

“They have doubts, but they don’t have firm beliefs. One of the lessons from this paper is that we’re in a space where the model says you can affect people’s beliefs and move them toward the truth.”

Another factor that can lead to belief convergence is the belief that the authority is unbiased and highly motivated by accuracy. An authority’s claim is even more persuasive when it conflicts with that authority’s perceived bias, for example when Republican governors state that elections in their states were fair even though the Democratic candidate won.

As the 2024 presidential election draws near, grassroots efforts have been underway to train nonpartisan election observers who can vouch for the legitimacy of an election. Organizations of this kind may be well-positioned to help sway people who have doubts about the election’s legitimacy, the researchers say.

“They’re trying to train people to be independent, unbiased, and committed to the truth of the outcome more than anything else. That’s the kind of thing you want. We want them to succeed in being seen as independent.

“We want them to succeed in being seen as truthful, because in this space of uncertainty, those are the voices that can move people toward an accurate outcome,” Landau-Wells says.

Funding: The research was funded, in part, by the Patrick J. McGovern Foundation and the Guggenheim Foundation.

About this computational neuroscience research news

Author: Abby Abazorius
Source: MIT
Contact: Abby Abazorius – MIT
Image: The image is credited to Neuroscience News

Original Research: Open access.
“How rational inference about authority debunking can curtail, sustain, or spread belief polarization” by Rebecca Saxe et al. PNAS Nexus


Abstract

How rational inference about authority debunking can curtail, sustain, or spread belief polarization

In polarized societies, divided groups of people hold divergent viewpoints on a variety of topics. Aiming to reduce polarization, authorities may use debunking to lend support to one perspective over another.

Debunking by authorities gives all observers shared information, which could reduce disagreement. In practice, however, debunking may have no effect or could even contribute to further polarization of beliefs.

We developed a cognitively inspired model of observers’ rational inferences from an authority’s debunking. After observing each debunking attempt, simulated observers simultaneously update their beliefs about the perspective underlying the debunked claims and about the authority’s motives, using an intuitive causal model of the authority’s decision-making process.

We varied the observers’ prior beliefs and uncertainty systematically. Simulations generated a range of outcomes, from belief convergence (less common) to persistent divergence (more common).

In many simulations, observers who initially held similar views of the authority came to hold polarized beliefs about the authority’s biases and commitment to the truth.

These polarized beliefs limited the authority’s influence over novel topics, allowing belief polarization to spread. We discuss the model’s implications in the context of beliefs about elections.
