Cognitive network science reveals bias in GPT-3, ChatGPT, and GPT-4 mirroring math anxiety in high-school students

05/22/2023
by Katherine Abramski et al.

Large language models (LLMs) are becoming increasingly integrated into our lives, so it is important to understand the biases in their outputs in order to avoid perpetuating harmful stereotypes, which originate in our own flawed ways of thinking. This challenge requires developing new benchmarks and methods for quantifying affective and semantic bias, keeping in mind that LLMs act as psycho-social mirrors reflecting the views and tendencies prevalent in society. One such harmful tendency is the global phenomenon of anxiety toward math and STEM subjects. Here, we investigate how cutting-edge language models, namely GPT-3, ChatGPT, and GPT-4, perceive math and STEM fields by applying an approach from network science and cognitive psychology. Specifically, we use behavioral forma mentis networks (BFMNs) to understand how these LLMs frame math and STEM disciplines in relation to other concepts, drawing on data obtained by probing the three models with a language generation task previously administered to humans. Our findings indicate that LLMs have an overall negative perception of math and STEM fields, with math perceived most negatively, and we observe significant differences across the three models: newer versions (i.e. GPT-4) produce richer, more complex, and less negative perceptions than both older versions and N=159 high-school students. These findings suggest that advances in LLM architecture may yield progressively less biased models, which could perhaps one day help reduce harmful stereotypes in society rather than perpetuate them.
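To make the BFMN approach concrete, below is a minimal sketch (not the authors' code) of how a forma mentis network might be built from cue-response free associations and how the affective valence of a concept's neighborhood could be scored. The association data, the valence ratings, and the helper neighborhood_valence are all hypothetical illustrations; the paper's actual elicitation task, rating scale, and pipeline may differ.

```python
import networkx as nx

# Hypothetical free associations elicited from an LLM (cue -> responses).
# In the paper, associations come from a language generation task given to
# GPT-3, ChatGPT, and GPT-4; these entries are illustrative placeholders.
associations = {
    "math": ["anxiety", "numbers", "logic", "boring"],
    "science": ["discovery", "experiment", "curiosity"],
    "anxiety": ["fear", "stress"],
}

# Hypothetical valence ratings on a 1 (very negative) to 5 (very positive)
# scale, e.g. from human raters or an affective norms lexicon.
valence = {
    "math": 2.5, "anxiety": 1.5, "numbers": 3.0, "logic": 3.5,
    "boring": 1.8, "science": 3.8, "discovery": 4.2, "experiment": 3.6,
    "curiosity": 4.0, "fear": 1.3, "stress": 1.4,
}

# Build the forma mentis network: an undirected graph linking each cue
# to the responses it evoked.
G = nx.Graph()
for cue, responses in associations.items():
    G.add_edges_from((cue, r) for r in responses)

def neighborhood_valence(graph, concept, ratings):
    """Mean valence of the concepts directly linked to `concept`."""
    neighbors = list(graph.neighbors(concept))
    return sum(ratings[n] for n in neighbors) / len(neighbors)

# A low score suggests the model frames the concept in negative terms.
print(f"'math' neighborhood valence: {neighborhood_valence(G, 'math', valence):.2f}")
```

Under these toy data, "math" sits in a neighborhood with a mean valence well below the scale midpoint, which is the kind of signal the BFMN analysis uses to characterize a negative framing of a concept.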

Related research

02/24/2022
Capturing Failures of Large Language Models via Human Cognitive Biases
Large language models generate complex, open-ended outputs: instead of o...

08/24/2023
Mind vs. Mouth: On Measuring Re-judge Inconsistency of Social Bias in Large Language Models
Recent research indicates that pre-trained Large Language Models (LLMs)...

07/18/2020
Mapping computational thinking mindsets between educational levels with cognitive network science
Computational thinking is a way of reasoning about the world in terms of...

08/20/2022
Cognitive Modeling of Semantic Fluency Using Transformers
Can deep language models be explanatory models of human cognition? If so...

09/15/2023
Casteist but Not Racist? Quantifying Disparities in Large Language Model Bias between India and the West
Large Language Models (LLMs), now used daily by millions of users, can e...

09/20/2019
Quantifying the Impact of Cognitive Biases in Question-Answering Systems
Crowdsourcing can identify high-quality solutions to problems; however, ...
