Summary: AI-generated summaries make scientific studies more accessible and increase public trust in scientists. Researchers used GPT-4 to create simplified summaries that were easier to read and comprehend than human-written ones.
Scientists whose work was described in plainer terms were rated as more credible and trustworthy by participants. While promising, using AI in science communication raises ethical concerns about accuracy, transparency, and possible oversimplification.
Key Facts:
- AI-generated summaries help readers understand complex research more effectively.
- Simpler language boosts scientists' credibility and trustworthiness.
- AI use raises ethical concerns, including loss of nuance and the need for transparency.
Source: Michigan State University
Have you ever read about a scientific breakthrough and felt as though it was written in another language?
If you’re like most Americans, new scientific information can prove challenging to understand, especially if you try to tackle a research article in a scientific journal.
The ability to communicate and comprehend complex information is more important than ever, at a time when scientific literacy is essential for informed decision-making. The difficulty of understanding scientific jargon may be one reason trust in science has been declining for years.
New research from David Markowitz, associate professor of communication at Michigan State University, points to a potential answer: using artificial intelligence, or AI, to improve science communication.
His work demonstrates that AI-generated summaries can help restore trust in scientists and, in turn, encourage greater public engagement with scientific issues, simply by making scientific information more approachable.
This matters because people frequently rely on research to make decisions in their everyday lives, from choosing what foods to eat to making crucial health care decisions.
What follows are excerpts from an article previously published in The Conversation.
How did simpler, AI-generated summaries affect the general public’s understanding of academic research?
Artificial intelligence can generate summaries of academic papers that make complex information more understandable for the public than human-written summaries, according to Markowitz’s new study, which was published in PNAS Nexus.
Summaries created by AI improved perceptions of scientists as well as public understanding of science.
Markowitz used a popular large language model, GPT-4 by OpenAI, to create simple summaries of scientific papers; this kind of text is often called a significance statement.
Compared with the summaries produced by the researchers who had done the work, the AI-generated summaries used simpler language: according to a readability index, they were easier to read and used more common words, such as “job” rather than “occupation.”
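The excerpt does not say which readability index the study used; a common choice for this kind of comparison is the Flesch Reading Ease score, which rewards shorter sentences and shorter words. A minimal sketch in Python, with a rough syllable-counting heuristic and invented example sentences for illustration:

```python
import re

def count_syllables(word: str) -> int:
    """Rough syllable count: number of contiguous vowel groups, minimum 1."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_reading_ease(text: str) -> float:
    """Flesch Reading Ease: higher scores indicate easier-to-read text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

simple = "We found that this job helps people sleep better."
complex_ = ("Our investigation demonstrated that this occupation "
            "facilitates superior sleep quality.")
print(flesch_reading_ease(simple) > flesch_reading_ease(complex_))  # prints True
```

The shorter words and plainer vocabulary of the first sentence give it a much higher score, which is the kind of difference a readability index would pick up between lay and expert summaries.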
In one experiment, he found that readers of the AI-generated statements had a better understanding of the science, and they provided more detailed, accurate summaries of the content than readers of the human-written statements.
How did simpler, AI-generated summaries affect the general public’s perception of scientists?
In another experiment, participants rated the scientists whose work was described in simpler terms as more credible and trustworthy than those whose work was described in more complex terms.
In both experiments, participants did not know who wrote each summary. The simpler texts were always AI-generated, and the complex texts were always human-generated. Ironically, when I asked participants who they thought wrote each summary, they replied that the more complex ones were written by AI and the simpler ones were written by people.
What do we still need to learn about AI and science communication?
As AI develops, its influence on science communication may grow, especially if generative AI is used more frequently or accepted by journals. Indeed, the academic publishing field is still establishing norms regarding the use of AI. By reducing the complexity of the material, AI could encourage more people to engage with science.
Although the benefits of AI-produced science communication are perhaps obvious, ethical considerations must also be taken into account. Relying on AI to simplify scientific content risks stripping away nuance, which could lead to misunderstanding or oversimplification.
There’s always the chance of errors, too, if no one pays close attention. Transparency is also critical: to guard against potential biases, readers should be told when AI is used to create summaries.
Simple science summaries are preferable to complex ones, and AI tools can aid in producing them. But AI is not required; scientists could accomplish the same objectives by working harder to reduce jargon and communicate clearly.
About this research on AI and science communication
Author: Alex Tekip
Source: Michigan State University
Contact: Alex Tekip – Michigan State University
Image: The image is credited to Neuroscience News
Original Research: Open access.
“From complexity to clarity: How AI enhances perceptions of scientists and the public’s understanding of science” by David Markowitz et al. PNAS Nexus
Abstract
From complexity to clarity: How AI enhances perceptions of scientists and the public’s understanding of science
This article evaluated the impact of using generative AI to simplify science communication and improve public understanding of science.
The study first examined differences in linguistic simplicity between lay summaries of journal articles produced by AI and those published in PNAS, then tested their effects in follow-up experiments.
Specifically, study 1a analyzed simplicity features of PNAS abstracts (scientific summaries) and significance statements (lay summaries), observing that lay summaries were indeed linguistically simpler, but effect size differences were small.
Without fine-tuning, study 1b used a large language model, GPT-4, to generate significance statements based on paper abstracts.
Study 2 experimentally demonstrated that simply written generative pre-trained transformer (GPT) summaries facilitated more favorable perceptions of scientists (they were perceived as more credible and trustworthy, but less intelligent) than more complexly written human PNAS summaries.
Crucially, study 3 experimentally demonstrated that participants comprehended scientific writing better after reading simple GPT summaries compared to complex PNAS summaries.
After reading GPT summaries, participants also summarized the scientific papers in their own words in a more detailed and accurate manner.
AI has the potential to engage scientific communities and the general public through a simplicity heuristic, supporting its integration into scientific dissemination.