Large language models (LLMs) as agents for augmented democracy

Jairo F. Gudiño, Umberto Grandi, César A. Hidalgo et al.

Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, Journal Year: 2024, Volume and Issue: 382(2285)

Published: Nov. 13, 2024

We explore an augmented democracy system built on off-the-shelf large language models (LLMs) fine-tuned to augment data on citizens' preferences elicited over policies extracted from the government programmes of the two main candidates in Brazil's 2022 presidential election. We use a train-test cross-validation set-up to estimate the accuracy with which the LLMs predict both: a subject's individual political choices and the aggregate preferences of the full sample of participants. At the individual level, we find that LLMs predict out-of-sample preferences more accurately than a 'bundle rule', which would assume that citizens always vote for the proposals of the candidate aligned with their self-reported political orientation. At the population level, we show that a probabilistic sample augmented by an LLM provides a more accurate estimate of the aggregate preferences of a population than the non-augmented probabilistic sample alone. Together, these results indicate that policy preference data augmented using LLMs can capture nuances that transcend party lines and represents a promising avenue of research for data augmentation. This article is part of the theme issue 'Co-creating the future: participatory cities and digital governance'.
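As a rough illustration of the individual-level comparison described in the abstract, the Python sketch below contrasts the out-of-sample accuracy of an LLM-based predictor with the 'bundle rule' baseline under k-fold cross-validation. It is not the authors' code: the data layout, the llm_predict callable and the bundle_rule helper are hypothetical stand-ins.

```python
# Hypothetical sketch, not the authors' code: individual-level out-of-sample
# accuracy of an LLM predictor vs. a 'bundle rule' baseline under k-fold
# cross-validation. Data layout and the llm_predict callable are assumptions.
import numpy as np
from sklearn.model_selection import KFold
from sklearn.metrics import accuracy_score

def bundle_rule(self_reported_candidate, policy_candidate):
    """Baseline: predict approval iff a policy comes from the candidate the
    respondent reports being aligned with (NumPy arrays of candidate ids)."""
    return (self_reported_candidate == policy_candidate).astype(int)

def evaluate(y, respondent_candidate, policy_candidate, llm_predict, k=5):
    """y[i] is the observed 0/1 choice for the i-th (respondent, policy) pair.
    llm_predict(train_idx, test_idx) stands in for a fine-tuned LLM returning
    0/1 predictions for the held-out pairs."""
    llm_acc, bundle_acc = [], []
    for train_idx, test_idx in KFold(n_splits=k, shuffle=True,
                                     random_state=0).split(y):
        llm_acc.append(accuracy_score(y[test_idx],
                                      llm_predict(train_idx, test_idx)))
        bundle_acc.append(accuracy_score(
            y[test_idx],
            bundle_rule(respondent_candidate[test_idx],
                        policy_candidate[test_idx])))
    return float(np.mean(llm_acc)), float(np.mean(bundle_acc))
```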

Language: English

Philosophical Transactions of the Royal Society of London. Series A, Mathematical and Physical Sciences

Jairo F. Gudiño, Umberto Grandi, César A. Hidalgo et al.

Published: Dec. 22, 2019

We explore the capabilities of an augmented democracy system built on off-the-shelf LLMs fine-tuned on data summarizing individual preferences across 67 policy proposals collected during the 2022 Brazilian presidential election. We use a train-test cross-validation setup to estimate the accuracy with which the LLMs predict both: a subject's political choices and the aggregate preferences of the full sample of participants. At the individual level, out-of-sample predictions lie in the range 69%-76% and are significantly better at predicting the preferences of liberal and college-educated participants. At the population level, we use an adaptation of the Borda score to compare the ranking obtained from a probabilistic sample of participants with the ranking obtained from data augmented using LLMs. We find that the augmented data predicts the preferences of the full population better than probabilistic samples alone when these represent less than 30% to 40% of the total population. These results indicate that LLMs are potentially useful for the construction of systems of augmented democracy.
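A minimal sketch of the population-level comparison, assuming respondents' preferences are available as per-policy ranks: it aggregates them with a simple Borda-style score and measures agreement between the full-sample ranking and the ranking from a (possibly LLM-augmented) subsample via Spearman correlation. The paper's exact adaptation of the Borda score is not reproduced here; all function names are illustrative.

```python
# Illustrative Borda-style aggregation and ranking comparison (NumPy arrays).
# This is a simplified stand-in for the paper's adaptation of the Borda score.
import numpy as np
from scipy.stats import spearmanr

def borda_points(rankings):
    """rankings: (n_respondents, n_policies) array where rankings[i, j] is the
    rank respondent i assigns policy j (0 = most preferred). Each policy earns
    n_policies - 1 - rank points per respondent; higher total = more preferred."""
    n_policies = rankings.shape[1]
    return (n_policies - 1 - rankings).sum(axis=0)

def ranking_agreement(full_rankings, sample_rankings):
    """Spearman correlation between the aggregate policy ordering of the full
    sample and that of a (possibly LLM-augmented) subsample; 1.0 = identical."""
    return spearmanr(borda_points(full_rankings),
                     borda_points(sample_rankings)).correlation
```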

Language: English

Citations

275

Robustness of large language models in moral judgements
Soyoung Oh, Vera Demberg

Royal Society Open Science, Journal Year: 2025, Volume and Issue: 12(4)

Published: April 1, 2025

With the advent of large language models (LLMs), there has been a growing interest in analysing the preferences encoded in LLMs in the context of morality. Recent work has tested LLMs on various moral judgement tasks and drawn conclusions regarding the alignment between LLMs and humans. The present contribution critically assesses the validity of the method and results employed in previous work for eliciting moral judgements from LLMs. We find that the results are confounded by biases in the presentation of the options and that LLM responses are highly sensitive to prompt formulation, with variants as simple as changing ‘Case 1’ and ‘Case 2’ to ‘(A)’ and ‘(B)’. Our results hence indicate that previous conclusions cannot be upheld. We make recommendations for more sound methodological setups in future studies.
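The label sensitivity the authors report can be probed with a simple check: present the same dilemma under different option labels and orderings and see whether the model's choice stays fixed. The sketch below assumes a hypothetical query_llm function that returns the chosen label; it is not the paper's evaluation code, and the dilemma text is invented for illustration.

```python
# Minimal robustness check sketch: vary option labels ('Case 1/2' vs. '(A)/(B)')
# and option order for the same moral dilemma, then record whether the model's
# chosen option is stable. query_llm is a placeholder, not a real API.
from itertools import product

DILEMMA = "Which action is morally preferable?"
OPTIONS = ["Tell the truth and hurt a friend's feelings.",
           "Tell a small lie to spare them."]
LABEL_SCHEMES = [("Case 1", "Case 2"), ("(A)", "(B)")]

def build_prompt(labels, options):
    return DILEMMA + "\n" + "\n".join(f"{l}: {o}" for l, o in zip(labels, options))

def robustness_check(query_llm):
    """Returns the set of options chosen across label schemes and orderings;
    a robust model should yield a single element."""
    chosen = set()
    for labels, swap in product(LABEL_SCHEMES, [False, True]):
        opts = OPTIONS[::-1] if swap else OPTIONS
        answer_label = query_llm(build_prompt(labels, opts))  # e.g. "Case 1"
        chosen.add(opts[labels.index(answer_label)])
    return chosen
```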

Language: English

Citations

0
