Towards Measuring the Representation of Subjective Global Opinions in Language Models

by Esin Durmus et al.

Large language models (LLMs) may not equitably represent diverse global perspectives on societal issues. In this paper, we develop a quantitative framework to evaluate whose opinions model-generated responses are most similar to. We first build a dataset, GlobalOpinionQA, comprising questions and answers from cross-national surveys designed to capture diverse opinions on global issues across different countries. Next, we define a metric that quantifies the similarity between LLM-generated survey responses and human responses, conditioned on country. With our framework, we run three experiments on an LLM trained to be helpful, honest, and harmless with Constitutional AI. By default, LLM responses tend to be more similar to the opinions of certain populations, such as those from the USA and some European and South American countries, highlighting the potential for biases. When we prompt the model to consider a particular country's perspective, responses shift to be more similar to the opinions of the prompted populations, but can reflect harmful cultural stereotypes. When we translate GlobalOpinionQA questions into a target language, the model's responses do not necessarily become most similar to the opinions of speakers of that language. We release our dataset for others to use and build on, along with an interactive visualization.
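The similarity metric described above can be sketched in a few lines. The sketch below assumes one plausible instantiation: score each country as 1 minus the Jensen-Shannon distance between the model's distribution over a question's answer options and that country's human response distribution. The example question, option probabilities, and country names are hypothetical, for illustration only.

```python
import math

def jensen_shannon_distance(p, q):
    """Jensen-Shannon distance (base 2) between two discrete distributions."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    def kl(a, b):
        # KL divergence; terms with a_i == 0 contribute nothing.
        return sum(ai * math.log2(ai / bi) for ai, bi in zip(a, b) if ai > 0)
    return math.sqrt((kl(p, m) + kl(q, m)) / 2)

def country_similarity(model_dist, country_dists):
    """Per-country similarity: 1 - JS distance between the model's answer
    distribution and each country's human response distribution."""
    return {country: 1 - jensen_shannon_distance(model_dist, human_dist)
            for country, human_dist in country_dists.items()}

# Hypothetical four-option survey question.
model = [0.6, 0.2, 0.1, 0.1]          # model's probabilities over options
humans = {                            # aggregated human responses by country
    "Country A": [0.55, 0.25, 0.10, 0.10],
    "Country B": [0.10, 0.20, 0.30, 0.40],
}
scores = country_similarity(model, humans)
closest = max(scores, key=scores.get)  # country the model's response is most similar to
```

Higher scores indicate closer agreement; aggregating these per-question scores across the dataset yields a per-country similarity profile of the kind the experiments report.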


Whose Opinions Do Language Models Reflect?

Language models (LMs) are increasingly being used in open-ended contexts...

Speaking Multiple Languages Affects the Moral Bias of Language Models

Pre-trained multilingual language models (PMLMs) are commonly used when ...

Knowledge of cultural moral norms in large language models

Moral norms vary across cultures. A recent line of work suggests that En...

Questioning the Survey Responses of Large Language Models

As large language models increase in capability, researchers have starte...

Measuring Asymmetric Opinions on Online Social Interrelationship with Language and Network Features

Instead of studying the properties of social relationship from an object...

AI-Augmented Surveys: Leveraging Large Language Models for Opinion Prediction in Nationally Representative Surveys

How can we use large language models (LLMs) to augment surveys? This pap...

ChatGPT-Crawler: Find out if ChatGPT really knows what it's talking about

Large language models have gained considerable interest for their impres...
