Whose Opinions Do Language Models Reflect?

The focus on ChatGPT has made it more clear than ever that language models have major social, economic, and political ramifications.
— Whose Opinions Do Language Models Reflect? (2023), Stanford Human-Centered Artificial Intelligence (HAI)

A new report from Stanford Human-Centered Artificial Intelligence (HAI) highlights key concerns. Since the introduction of ChatGPT in November 2022, language models have garnered significant attention across sectors. As chatbots and AI language models become increasingly integrated into daily life, critical questions are arising about whose opinions these models actually reflect.

According to the report, language models can, with the right training, mimic the tendencies of certain demographic groups. For example, they can be conditioned to support the views of a presidential candidate for whom those groups might vote. These are significant findings for researchers, users of language models, and policymakers.

The report stresses the urgent need for further research to refine the evaluation of language models so that policymakers and regulators can make quantitative assessments. Such research is crucial to understanding and addressing issues related to replication, subjectivity, bias, fairness, and toxicity.

In conclusion, as language models continue to evolve, their impact on society intensifies. Policymakers face the challenge of ensuring these models reflect the views of diverse demographic groups and do not inadvertently exacerbate polarization or create echo chambers. Rigorous research and evaluation are essential to guide these developments and ensure that the impact of language models remains in line with our societal values and expectations.

Key Points:

➜ Language models are shaped by a variety of inputs and opinions, from the individuals whose views are included in the training data to crowd workers who manually filter that data.

➜ The researchers found that language models fine-tuned with human feedback—meaning models that went through additional training with human input—were less representative of the general public’s opinions than models that were not fine-tuned.

➜ It is possible to steer a language model toward the opinions of a particular demographic group by asking the model to respond as if it were a member of that group (see the sketch after this list), but this can lead to undesirable side effects, such as exacerbating polarization and creating echo chambers.

➜ The report highlights the need for further research on the evaluation of language models that can help policymakers and regulators quantitatively assess language model behavior and compare it to human preferences and opinions.
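The mechanics behind two of these points can be made concrete with a small sketch. The Python snippet below illustrates, first, steering via a persona-style prompt and, second, one plausible way to quantitatively compare a model's answer distribution with a human survey distribution over ordered multiple-choice options (here, one minus a normalized 1-Wasserstein distance). Every name and number in it is a hypothetical illustration, not the report's actual method, data, or code.

```python
# Minimal sketch of (1) persona-style steering and (2) comparing a model's
# answer distribution to a human survey distribution. All names and numbers
# below are hypothetical illustrations.

def build_persona_prompt(question: str, options: list[str], group: str) -> str:
    """Prepend a demographic persona to a multiple-choice survey question."""
    choices = "\n".join(f"{chr(65 + i)}. {opt}" for i, opt in enumerate(options))
    return (
        f"Answer as if you were a member of the following group: {group}.\n\n"
        f"{question}\n{choices}\nAnswer with a single letter."
    )

def opinion_alignment(model_dist: list[float], human_dist: list[float]) -> float:
    """Similarity between two distributions over *ordered* answer options:
    1 minus a normalized 1-Wasserstein distance. Returns a value in [0, 1],
    where 1 means the two distributions match exactly."""
    assert len(model_dist) == len(human_dist) and len(model_dist) > 1
    # Cumulative-distribution formulation of the 1-Wasserstein distance.
    cm = ch = wasserstein = 0.0
    for pm, ph in zip(model_dist, human_dist):
        cm += pm
        ch += ph
        wasserstein += abs(cm - ch)
    max_distance = len(model_dist) - 1  # all mass at opposite ends of the scale
    return 1.0 - wasserstein / max_distance

if __name__ == "__main__":
    prompt = build_persona_prompt(
        "How concerned are you about data privacy?",
        ["Very concerned", "Somewhat concerned", "Not concerned"],
        group="adults aged 65 and older",  # hypothetical steering group
    )
    print(prompt)

    # Hypothetical answer distributions: model output vs. a survey subgroup.
    model_answers = [0.6, 0.3, 0.1]
    human_answers = [0.4, 0.4, 0.2]
    print(f"alignment = {opinion_alignment(model_answers, human_answers):.2f}")
```

In this toy example the alignment score is 0.85; repeating the comparison across many questions and demographic groups is the kind of quantitative assessment the report calls for, though the study's own metric and data may differ.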


Another report, the 2023 AI Index Report, tracks, collates, distills, and visualizes data related to artificial intelligence. One distinct pattern is that industry, led by American and Chinese companies, has taken over the development of AI. Until 2014, most significant machine learning models were released by academia; since then, industry has taken over. In 2022, there were 32 significant industry-produced machine learning models compared to just three produced by academia. Building state-of-the-art AI systems increasingly requires large amounts of data, compute, and money, resources that industry actors possess in far greater quantities than nonprofits and academia.

Key points from the AI Index Report:

  • The US and China continue to dominate AI research.

  • Industry races ahead of academia.

  • Policymaker interest in AI is on the rise, with a growing number of bills containing “artificial intelligence” being passed into law globally.

  • The number of incidents concerning the misuse of AI is rapidly rising.

  • The environmental impact of AI is a growing concern and is now being analyzed as part of the AI technical progress report.

  • Fairness, bias, and ethics in machine learning continue to be topics of interest among both researchers and practitioners.

  • AI-related education is on the rise.

Read the full AI Index Report
