Extreme Political Views Drive Higher Belief in Misinformation

Summary: A recent study reveals that people with extreme political views are more likely to see and believe misinformation online. According to the study, misinformation spreads across the political spectrum, but exposure and belief are most pronounced among those at the conservative and liberal extremes.

These individuals also tend to encounter false news early in its spread, making rapid interventions crucial. According to the findings, efforts to combat misinformation should focus on the users most susceptible to it and be implemented as quickly as possible.

Important Facts:

  • Users with politically extreme views are more likely to see and believe false information.
  • Misinformation reaches these users earlier, making rapid interventions crucial.
  • Targeted interventions reduce misinformation more efficiently than broad approaches.

Source: NYU

Social media observers have been alarmed by the rise of online misinformation, a concern that has grown as Election Day approaches. However, while the spread of false news poses real dangers, a new study finds that its impact is not uniform: users with extreme political views are more likely than others to see and believe false information.

“Misinformation is a serious issue on social media, but its impact is not uniform,” says Christopher K. Tokita, the lead author of the study, which was conducted by New York University’s Center for Social Media and Politics (CSMaP).

One lesson from these simulations: the earlier interventions were deployed, the more likely they were to be effective. Credit: Neuroscience News

The results, published in the journal PNAS Nexus, also indicate that the most effective way to stop the spread of misinformation is to apply interventions quickly and to target them toward the users most likely to be susceptible to these falsehoods.

Current social media interventions are usually too slow to prevent exposure among those most susceptible to misinformation, because these extreme users tend to see it early in its spread, according to Zeve Sanderson, executive director of CSMaP.

Existing methods for estimating exposure to and the impact of online misinformation rely on analyzing views or shares. But these fail to fully capture misinformation’s real impact, which depends not just on how far it spreads, but also on whether people actually believe the misleading information.

To address this gap, Tokita, Sanderson, and their colleagues used Twitter (now “X”) data to estimate not just how many users were exposed to a particular news story, but also how many were likely to believe it.

“What is particularly novel about our approach to this study is that it combines social media data tracking the spread of both true information and misinformation on Twitter with surveys that assessed whether Americans believed the content of these articles,” says Joshua A. Tucker, co-director of CSMaP, NYU professor of politics, and one of the paper’s authors.

“This allows us to track both the spread of false information and the susceptibility to believing it for the same articles in the same study.”

The researchers analyzed 139 popular news articles published between November 2019 and February 2020, of which 102 were rated as true and 37 were rated as false or misleading by professional fact-checkers, and tracked each article’s spread across Twitter from the time of its initial publication.

This sample of popular articles was drawn from five types of news streams: mainstream left-leaning publications, mainstream right-leaning publications, low-quality left-leaning publications, low-quality right-leaning publications, and low-quality publications without an apparent ideological lean.

Within 48 hours of publication, each article was sent to a team of professional fact-checkers to verify its accuracy. The fact-checkers rated each article as “true” or “false/misleading.”

The researchers combined two different types of data to assess the extent of exposure to and belief in these articles. First, they used Twitter data to determine which users were potentially exposed to each of the articles. Using an established method, they inferred each exposed user’s ideology from the well-known news and political accounts that user follows.
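
The release does not spell out that method, but the core idea can be sketched simply: assign each well-known news or political account a pre-estimated ideology score, then place a user at the average of the elite accounts they follow. The sketch below is a minimal illustration under that assumption; the account handles, scores, and `infer_ideology` function are invented for this example, and the method actually used in the study is more sophisticated.

```python
# Illustrative sketch (not the authors' code): inferring a user's ideology
# from the elite news/political accounts they follow. Real follower-based
# methods use statistical ideal-point estimation; here we simply average
# hypothetical pre-estimated scores for known elite accounts.

# Hypothetical ideal points on a left (-) to right (+) scale.
ELITE_SCORES = {
    "nytimes": -0.6,
    "foxnews": 0.8,
    "maddow": -1.2,
    "seanhannity": 1.3,
    "ap": -0.1,
}

def infer_ideology(followed_accounts):
    """Return a crude ideology estimate: the mean score of followed elite accounts."""
    scores = [ELITE_SCORES[a] for a in followed_accounts if a in ELITE_SCORES]
    if not scores:
        return None  # not enough signal to place this user on the scale
    return sum(scores) / len(scores)

print(infer_ideology(["nytimes", "maddow"]))       # -0.9 (left-leaning)
print(infer_ideology(["foxnews", "seanhannity"]))  # 1.05 (right-leaning)
```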

Second, they conducted real-time surveys as each article began circulating online to assess the likelihood that exposed users would accept its claims as true. These surveys asked ordinary American internet users to judge whether the article was true or false and to provide demographic data, including their ideology.

From this survey data, the authors derived the proportion of people within each ideological category who believed the article to be true. Combining these figures with the exposure estimates for each article, they could determine how many Twitter users were both exposed to it and likely to accept its claims as true.
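
To make that combination concrete, here is a minimal sketch with invented numbers: the count of exposed users in each ideological bin is weighted by that bin’s survey-measured belief rate, yielding the expected number of exposed-and-receptive users. Every figure below is a hypothetical stand-in, not data from the study.

```python
# Illustrative sketch (hypothetical numbers): combine per-ideology exposure
# counts from Twitter with per-ideology belief rates from surveys to estimate
# how many exposed users were likely to believe a given article.

# Exposed users per ideology bin for one article (from the Twitter data).
exposed = {"far_left": 40_000, "center_left": 120_000,
           "center_right": 90_000, "far_right": 60_000}

# Share of survey respondents in each bin who judged the article "true".
belief_rate = {"far_left": 0.35, "center_left": 0.15,
               "center_right": 0.20, "far_right": 0.45}

# Expected "receptive exposure": exposed users weighted by belief probability.
receptive = {bin_: n * belief_rate[bin_] for bin_, n in exposed.items()}

total_exposed = sum(exposed.values())
total_receptive = sum(receptive.values())
print(f"Exposed: {total_exposed:,}  Receptive: {total_receptive:,.0f}")
# Receptive exposure is far smaller than raw exposure and, in this toy
# example, concentrated at the ideological extremes, mirroring the study's
# headline result.
```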

Overall, the findings revealed that while false news was widely disseminated across the political spectrum, those with more extreme ideologies (both conservative and liberal) were much more likely to both see and believe it. Crucially, these users, who are most receptive to misinformation, tend to encounter it early in its spread through Twitter.

Using this research design, the study’s authors were able to simulate the effects of various types of interventions designed to stop the spread of misinformation. One takeaway from these simulations was that the earlier interventions were deployed, the more likely they were to be effective.

Another was that “visibility” interventions, which reduce the prominence of misinformation in users’ feeds and thus make users less likely to see and share it, were more effective at reducing the reach of misinformation to susceptible users.
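
A toy simulation, not the paper’s actual model, can illustrate why timing matters so much: if receptive users encounter an article mostly in its first hours, an intervention that suppresses later visibility prevents far more receptive exposures when applied early. The exposure decay rate and the 90% suppression factor below are assumptions made purely for illustration.

```python
# Illustrative simulation sketch (toy dynamics, not the paper's model):
# earlier interventions cut cumulative receptive exposure more, because
# exposure among receptive users is assumed to be skewed toward an
# article's first hours.

import math

def receptive_exposure(intervention_hour, horizon=72, rate=1000.0, decay=0.1):
    """Cumulative receptive exposure over `horizon` hours.

    Hourly exposure among receptive users decays exponentially (they see the
    article early); after the intervention, exposure is suppressed by 90%.
    """
    total = 0.0
    for hour in range(horizon):
        hourly = rate * math.exp(-decay * hour)  # early-skewed exposure
        if hour >= intervention_hour:
            hourly *= 0.1  # visibility intervention suppresses most exposure
        total += hourly
    return total

for t in (1, 6, 24, 48):
    print(f"intervene at hour {t:>2}: {receptive_exposure(t):8.0f} receptive exposures")
# Because receptive users encounter misinformation early, delaying the
# intervention past the first few hours forfeits most of its benefit.
```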

Tokita, now a data scientist in the tech industry, believes that understanding who is most likely to be receptive to misinformation, not just who is exposed to it, is essential to developing better strategies for combating misinformation online.

Other authors of the study included CSMaP researchers Jonathan Nagler and Richard Bonneau, as well as Kevin Aslett, a CSMaP postdoctoral researcher and University of Central Florida professor at the time of the study, now a researcher in the tech sector, and William P. Godel, then an NYU graduate student and now also a researcher in the tech sector.

A graduate research fellowship from the National Science Foundation (DGE1656466) provided funding for the study.

About this psychology research news

Author: James Devitt
Source: NYU
Contact: James Devitt – NYU
Image: The image is credited to Neuroscience News

Original Research: Closed access.
“Measuring receptivity to misinformation at scale on a social media platform” by Christopher K. Tokita et al. PNAS Nexus


Abstract

Measuring receptivity to misinformation at scale on a social media platform

Measuring the impact of online misinformation is challenging. Traditional measures, like user views or shares on social media, are insufficient because not everyone who is exposed to misinformation is equally likely to believe it.

To address this problem, we created a method that combines observational Twitter data with survey data to estimate the number of people who are both exposed to a particular news story and likely to believe it.

We tested this method by applying it to 139 viral news articles, finding that users who are both exposed and receptive to false news have more extreme ideologies.

These receptive users are also more likely to encounter misinformation early in its spread than users who are unlikely to believe it.

This disparity between overall user exposure and receptive user exposure highlights the limitations of relying solely on exposure or interaction data to assess the impact of misinformation, as well as the difficulty of designing effective interventions.

We then conducted data-driven simulations of common interventions used by social media platforms to demonstrate how our approach can address this challenge.

We find that these interventions are only moderately effective at reducing receptive users’ exposure to misinformation, and that their effectiveness quickly declines unless they are implemented soon after misinformation begins to spread.

By focusing on the exposure of users who are likely to believe misinformation, our paper provides more accurate estimates of its impact and offers recommendations for effective social media mitigation tactics.
