AI Admissions Essays Match Privileged Male Writing Styles

Summary: Researchers analyzed AI-generated and human-written college admissions essays, finding that the AI-generated essays most resemble those written by male students from privileged backgrounds. AI essays typically used longer words and showed less variation in writing style than human essays, most closely resembling those of private-school applicants.

The research raises questions about how AI can muddy a person’s sense of authenticity when writing admissions essays. Students are encouraged to use AI as a tool to enhance, not replace, their personal narrative in their writing.

Key Facts:

  • AI-generated essays most closely match the writing of male students from privileged backgrounds.
  • Compared with human-written essays, AI essays used longer words and showed less stylistic variety.
  • Students are advised to use AI to enhance their personal expression, not replace it.

Source: Cornell University

In a comparison of AI-generated essays with thousands of human-written college admissions essays, researchers found that the AI-generated essays most resemble those written by male students with higher socioeconomic status and higher levels of social privilege.

The report, published in the Journal of Big Data, also found that AI-generated writing is less varied than writing by humans.

AJ Alvero, associate research professor of information science at Cornell University and co-corresponding author of the study, said the team wanted to find out what the patterns seen in human-written essays look like in a ChatGPT world: How does the strong connection between human writing and identity compare in AI-written essays?

Alvero and the team compared the writing styles of more than 150,000 college admissions essays submitted to the University of California system and to an engineering program at an elite East Coast private university with a corpus of more than 25,000 essays generated by GPT-3.5 and GPT-4, which were prompted to respond to the same essay questions as the human applicants.
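As a rough illustration of that setup, the sketch below generates essays from both models using the official OpenAI Python client; the essay question, the prompt wording, and the omission of sampling parameters are illustrative assumptions, not the study’s actual generation protocol.

```python
# Illustrative sketch only: generate synthetic admissions essays with
# GPT-3.5 and GPT-4 via the official OpenAI Python client. The question
# below is a hypothetical stand-in for the real application prompts.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

ESSAY_QUESTION = (
    "Describe an example of your leadership experience in which you have "
    "positively influenced others."  # hypothetical UC-style prompt
)

def generate_essay(model: str, question: str) -> str:
    """Prompt the model to answer the same essay question a human applicant saw."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user",
                   "content": f"Write a college admissions essay answering: {question}"}],
    )
    return response.choices[0].message.content

synthetic_corpus = [generate_essay(m, ESSAY_QUESTION)
                    for m in ("gpt-3.5-turbo", "gpt-4")]
```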

For their analysis, the researchers used Linguistic Inquiry and Word Count (LIWC), a program that counts the frequencies of writing features, such as punctuation and pronoun usage, by cross-referencing the text against a dictionary of word categories.
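LIWC itself is commercial software with a licensed dictionary, so the following is only a toy sketch of the same idea, dictionary-based feature counting over an essay; the pronoun list and the three rates are invented placeholders, not LIWC’s real categories.

```python
# Toy stand-in for LIWC-style analysis: count the relative frequency of a
# few writing features per essay. The pronoun list and feature names are
# illustrative placeholders, not LIWC's licensed dictionary.
import re
from collections import Counter

PRONOUNS = {"i", "me", "my", "we", "our", "you", "they", "them"}

def liwc_style_features(text: str) -> dict[str, float]:
    words = re.findall(r"[a-z']+", text.lower())
    total = max(len(words), 1)  # avoid division by zero on empty text
    counts = Counter(words)
    return {
        "pronoun_rate": sum(counts[w] for w in PRONOUNS) / total,
        "punctuation_rate": sum(text.count(p) for p in ".,;:!?") / total,
        "long_word_rate": sum(1 for w in words if len(w) >= 6) / total,
    }

print(liwc_style_features("My team and I organized a fundraiser; it changed me."))
```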

Alvero and the team discovered that while the writing styles of large language models (LLMs) don’t exactly represent any particular group, in terms of word choice and usage they “sound” most like male students from more privileged backgrounds.

For example, AI was found on average to use longer words (six or more letters) than human writers. AI-generated writing also tended to have less variety than essays written by humans, though it more closely resembled essays from private-school applicants than those from public-school students.
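As a back-of-the-envelope illustration of how those two comparisons could be made, the sketch below computes each essay’s share of long words and compares the mean and spread across corpora; the sample essays are invented, and a smaller standard deviation corresponds to the lower variety reported for AI writing.

```python
# Sketch: compare word length and stylistic spread between two corpora.
# The essays below are invented placeholders for the study's 150,000+
# human essays and 25,000+ GPT-generated essays.
import statistics

def long_word_rate(text: str) -> float:
    """Fraction of words with six or more letters."""
    words = text.split()
    return sum(1 for w in words if len(w.strip(".,;:!?")) >= 6) / max(len(words), 1)

human_essays = ["I led our robotics club to a state title.",
                "Working two jobs taught me discipline and grit."]
ai_essays = ["Throughout my formative experiences, perseverance consistently shaped me.",
             "Throughout my academic journey, determination consistently guided me."]

for name, essays in [("human", human_essays), ("AI", ai_essays)]:
    rates = [long_word_rate(e) for e in essays]
    # Higher mean -> longer words; lower stdev -> more homogeneous style.
    print(f"{name}: mean={statistics.mean(rates):.3f}, sd={statistics.stdev(rates):.3f}")
```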

Additionally, humans and AI tend to write about affiliations (with groups, people, organizations and friends) at similar rates, despite the AI not actually having any affiliations.

As they become more popular and sophisticated, LLMs like ChatGPT will likely be used in all kinds of settings, including college admissions.

Students are likely to be using AI to help them write these essays, according to Rene Kizilcec, associate professor of information science at Cornell and co-author of the paper. “Probably not asking it to just write the entire thing, but rather asking it for help and feedback,” Kizilcec said.

“But even then, the suggestions these models make may not be well aligned with students’ values or with the kind of language those students would use to express themselves genuinely.”

It’s important to keep in mind that if you use an AI to write an essay, it’s likely going to sound more generic and less like you, he said. Students should also be aware that those reading these essays won’t find it difficult to identify applicants who have used AI extensively. The key is to use it to support students’ own narratives and enhance the messages they want to convey, not to replace their own voices.

About this AI and LLM research news

Author: Becka Bowyer
Source: Cornell University
Contact: Becka Bowyer – Cornell University
Image: The image is credited to Neuroscience News

Original Research: Open access.
“Large language models, social demography, and hegemony: comparing authorship in human and synthetic text” by AJ Alvero et al. Journal of Big Data


Abstract

Large language models, social demography, and hegemony: comparing authorship in human and synthetic text

Large language models have gained popularity in a short period of time because they can produce text that resembles human writing across a range of domains and tasks. Their widespread use and popularity position this technology to fundamentally change how written language is perceived and interpreted.

At the same time, language has long been used to maintain social power and hegemony, particularly through notions of “correct” language forms and their connections to social identity.

As human communication shifts increasingly toward text and writing, it is crucial to understand how these processes might change and who is more likely to see their writing styles reflected back at them through contemporary AI.

We therefore ask the following question: who does generative AI write like?

To address this, we compare the writing styles found in over 150,000 college admissions essays submitted to a large public university system and to an engineering program at an elite private university with a corpus of over 25,000 essays generated by GPT-3.5 and GPT-4 in response to the same essay prompts.

We find that human-authored essays exhibit more variability across various individual writing style features (e.g., verb usage) than AI-generated essays. Overall, we find that the AI-generated essays most closely resemble those written by men with higher social status.

These findings highlight significant authorship differences between humans and AI that could affect how writing is evaluated, and they motivate further research on AI alignment.
