Adaptive political surveys and GPT-4: Tackling the cold start problem with simulated user interactions
Fynn Bachmann, Daan van der Weijden, Lucien Heitz et al.

PLoS ONE, 2025, 20(5): e0322690

Published: May 22, 2025

Adaptive questionnaires dynamically select the next question for a survey participant based on their previous answers. Due to digitalisation, they have become a viable alternative to traditional surveys in application areas such as political science. One limitation, however, is their dependency on data to train the model for question selection. Often, such training data (i.e., user interactions) are unavailable a priori. To address this problem, we (i) test whether Large Language Models (LLMs) can accurately generate such interaction data and (ii) explore whether these synthetic data can be used to pre-train the statistical model of an adaptive survey. To evaluate our approach, we utilise existing data from the Swiss Voting Advice Application (VAA) Smartvote in two ways: First, we compare the distribution of LLM-generated synthetic data to the real user data to assess its similarity. Second, we compare the performance of an adaptive questionnaire that is randomly initialised with one pre-trained on synthetic data to assess its suitability for training. We benchmark these results against an "oracle" questionnaire with perfect prior knowledge. We find that an off-the-shelf LLM (GPT-4) accurately generates answers from the perspective of different Swiss parties. Furthermore, we demonstrate that initialising the statistical model with synthetic data can significantly reduce the error in predicting user responses and increase the candidate recommendation accuracy of the VAA. Our work emphasises the considerable potential of LLMs to create training data and to improve the data collection process for LLM-affine surveys.

Language: English

Cited: 0
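The synthetic-data step the abstract describes (prompting GPT-4 to answer policy statements from a party's perspective, then mapping the replies onto the survey's response scale) could be sketched roughly as follows. This is a minimal illustration, not the authors' pipeline: the function names (`build_prompt`, `parse_answer`), the four-point answer scale, and the prompt wording are all assumptions.

```python
# Hypothetical sketch of LLM-based synthetic survey answers.
# The scale, prompt text, and helper names are illustrative assumptions,
# not taken from the paper.

SCALE = {"yes": 1.0, "rather yes": 0.75, "rather no": 0.25, "no": 0.0}

def build_prompt(party: str, statement: str) -> str:
    """Compose a persona prompt asking the model to answer as a given party."""
    options = ", ".join(SCALE)
    return (
        f"You are a typical candidate of the Swiss party '{party}'. "
        f"Answer the following policy statement with exactly one of: "
        f"{options}.\n\nStatement: {statement}\nAnswer:"
    )

def parse_answer(raw: str) -> float:
    """Map the model's free-text reply onto the numeric response scale."""
    text = raw.strip().lower().rstrip(".")
    if text not in SCALE:
        raise ValueError(f"unrecognised answer: {raw!r}")
    return SCALE[text]

if __name__ == "__main__":
    prompt = build_prompt("SP", "Switzerland should raise the retirement age.")
    print(prompt)
    print(parse_answer("Rather no."))  # -> 0.25
```

Responses parsed this way could then serve as pre-training data for the adaptive questionnaire's statistical model, replacing the random initialisation that the cold start otherwise forces.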

Thinking democracy in a digital age
Jennifer Forestal, Susan Bickford, Nicole Curato et al.

Contemporary Political Theory, 2025, issue unknown

Published: May 3, 2025

Language: English

Cited: 0
