Bias of AI-Generated Content: An Examination of News Produced by Large Language Models

by   Xiao Fang, et al.

Large language models (LLMs) have the potential to transform our lives and work through the content they generate, known as AI-Generated Content (AIGC). To harness this transformation, we need to understand the limitations of LLMs. Here, we investigate the bias of AIGC produced by seven representative LLMs, including ChatGPT and LLaMA. We collect news articles from The New York Times and Reuters, both known for their dedication to providing unbiased news. We then prompt each examined LLM with the headlines of these articles to generate news content, and evaluate the gender and racial biases of the AIGC produced by the LLM by comparing the AIGC with the original news articles. We further analyze the gender bias of each LLM under biased prompts by adding gender-biased messages to prompts constructed from these news headlines. Our study reveals that the AIGC produced by each examined LLM demonstrates substantial gender and racial biases. Moreover, the AIGC generated by each LLM exhibits notable discrimination against females and Black individuals. Among the LLMs, the AIGC generated by ChatGPT demonstrates the lowest level of bias, and ChatGPT is the sole model capable of declining content generation when provided with biased prompts.
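The comparison described above can be illustrated with a minimal sketch. The paper's actual bias metrics and word lexicons are not given here; the word lists, function names, and the female-share metric below are all illustrative assumptions, not the authors' method.

```python
import re

# Hypothetical gendered-word lexicons (assumed for illustration only).
FEMALE_WORDS = {"she", "her", "hers", "woman", "women", "female"}
MALE_WORDS = {"he", "him", "his", "man", "men", "male"}

def gendered_counts(text):
    """Count female- and male-associated words in a text."""
    tokens = re.findall(r"[a-z']+", text.lower())
    female = sum(t in FEMALE_WORDS for t in tokens)
    male = sum(t in MALE_WORDS for t in tokens)
    return female, male

def gender_bias(generated, original):
    """Difference in the female share of gendered words between the
    LLM-generated text and the original article. Negative values mean
    the generated text mentions females relatively less often."""
    fg, mg = gendered_counts(generated)
    fo, mo = gendered_counts(original)
    share_gen = fg / (fg + mg) if fg + mg else 0.5
    share_orig = fo / (fo + mo) if fo + mo else 0.5
    return share_gen - share_orig
```

In this sketch, aggregating the per-article differences over a corpus of headline-prompted generations would give a corpus-level bias estimate in the spirit of the comparison the abstract describes.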
