Learning Chain of Counterfactual Thought for Bias-Robust Vision-Language Reasoning DOI
Yifeng Zhang, Ming Jiang, Qi Zhao

et al.

Lecture notes in computer science, Journal Year: 2024, Volume and Issue: unknown, P. 334 - 351

Published: Oct. 28, 2024

Language: English

Citations

0

TUN-GCA: A Novel Approach for Organ Segmentation in Nasopharyngeal Carcinoma CT Images DOI

W.Q. Che, Ziyuan Ye, Rongting Huang

et al.

Communications in computer and information science, Journal Year: 2025, Volume and Issue: unknown, P. 368 - 381

Published: Jan. 1, 2025

Language: English

Citations

0

The impact of different task contexts on emergency responders’ trust and usage intention of artificial intelligence DOI
Xiangying Zou, Pei‐Luen Patrick Rau, Zhangfei Bai

et al.

Ergonomics, Journal Year: 2025, Volume and Issue: unknown, P. 1 - 15

Published: May 15, 2025

Proper use of artificial intelligence (AI) can significantly enhance emergency responders' performance. However, responders do not always trust or appropriately use AI. This study examined trust in AI and usage intention under different levels of rescue pressure and uncertainty from the perspective of perceived capability. The study was conducted in two phases: first, questionnaire data were collected from 99 firefighters; second, semi-structured interviews were conducted with 12 participants. Results revealed that rescue pressure affected perceived AI capability, whereas uncertainty influenced perceived self-capability. These perceptions of capability subsequently impacted trust and, ultimately, usage intention. These findings explain the process through which task context impacts responders' willingness to use AI, explore the underlying psychological mechanisms, and provide valuable recommendations for designers to develop AI systems suitable for emergency responders.

Language: English

Citations

0

Large language models (LLMs) as research subjects: Status, opportunities and challenges DOI
Chenguang Zhao, Meirewuti Habule, Wei Zhang

et al.

New Ideas in Psychology, Journal Year: 2025, Volume and Issue: 79, P. 101167 - 101167

Published: May 24, 2025

Language: English

Citations

0

Bias Mitigation in Primary Healthcare Artificial Intelligence Models: A Scoping Review (Preprint) DOI Creative Commons
Maxime Sasseville, Steven Ouellet, Caroline Rhéaume

et al.

Journal of Medical Internet Research, Journal Year: 2024, Volume and Issue: 27, P. e60269 - e60269

Published: Nov. 7, 2024

Background: Artificial intelligence (AI) predictive models in primary health care have the potential to enhance population health by rapidly and accurately identifying individuals who should receive care and health services. However, these models also carry the risk of perpetuating or amplifying existing biases toward diverse groups. We identified a gap in the current understanding of the strategies used to assess and mitigate bias in algorithms with respect to individuals' personal and protected attributes.

Objective: This study aimed to describe the attempts, strategies, and methods used to mitigate bias in AI models within primary care, to identify the diverse groups and protected attributes considered, and to evaluate the results of these approaches on both bias reduction and model performance.

Methods: We conducted a scoping review following Joanna Briggs Institute (JBI) guidelines, searching the Medline (Ovid), CINAHL (EBSCO), PsycINFO, and Web of Science databases for studies published between January 1, 2017, and November 15, 2022. Pairs of reviewers independently screened titles and abstracts, applied the selection criteria, and performed full-text screening. Discrepancies regarding inclusion were resolved by consensus. Following reporting standards for AI in health care, we extracted data on study objectives, model features, targeted groups and attributes, mitigation strategies used, and results. Using a mixed methods appraisal tool, we appraised the quality of the included studies.

Results: After removing 585 duplicates, we screened 1018 titles and abstracts. From the remaining 189 full-text articles, we included 17 studies. The most frequently investigated protected attributes were race (or ethnicity), examined in 12 studies, and sex (often identified as gender), typically classified as “male versus female,” examined in 10 studies. Mitigation strategies were categorized into four clusters: (1) modifying datasets, (2) sourcing data from electronic health records, (3) developing tools with a “human-in-the-loop” approach, and (4) identifying ethical principles for informed decision-making. Algorithmic preprocessing methods, such as relabeling and reweighing data, along with natural language processing techniques that extract data from unstructured notes, showed the greatest potential for bias mitigation. Other methods aimed at enhancing fairness, such as group recalibration and application of the equalized odds metric, sometimes exacerbated prediction errors across groups or led to overall miscalibrations.

Conclusions: The results suggest that biases are more easily mitigated when data are open-sourced, multiple stakeholders are engaged, and mitigation takes place during the algorithm's preprocessing stage. Further empirical studies that include a broader range of groups, such as Indigenous peoples in Canada, are needed to validate and expand upon these findings.

Trial Registration: OSF Registry osf.io/9ngz5/; https://osf.io/9ngz5/

International Registered Report Identifier (IRRID): RR2-10.2196/46684
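As a concrete illustration of the preprocessing approach the review identifies as most promising, the following is a minimal sketch of reweighing (in the spirit of Kamiran and Calders), not code from any of the reviewed studies: each training example receives a weight that makes the protected attribute statistically independent of the outcome label, so a downstream model is trained on a debiased effective distribution. The column names and toy data are illustrative assumptions.

```python
# Minimal sketch of reweighing (Kamiran-and-Calders-style preprocessing), not taken
# from the reviewed studies. Each row gets weight P(group) * P(label) / P(group, label),
# which makes the protected attribute independent of the label in the weighted data.
# The column names "sex" and "received_service" are illustrative assumptions.
import pandas as pd

def reweighing_weights(df: pd.DataFrame, group_col: str, label_col: str) -> pd.Series:
    """Return one weight per row: P(group) * P(label) / P(group, label)."""
    n = len(df)
    p_group = df[group_col].value_counts(normalize=True)
    p_label = df[label_col].value_counts(normalize=True)
    p_joint = df.groupby([group_col, label_col]).size() / n

    def weight(row):
        g, y = row[group_col], row[label_col]
        return (p_group[g] * p_label[y]) / p_joint[(g, y)]

    return df.apply(weight, axis=1)

# Toy example: over-represented (group, label) combinations are down-weighted.
data = pd.DataFrame({
    "sex": ["F", "F", "F", "M", "M", "M", "M", "M"],
    "received_service": [0, 0, 1, 1, 1, 1, 0, 1],
})
data["weight"] = reweighing_weights(data, "sex", "received_service")
print(data)
```

The resulting weights can be passed to most scikit-learn estimators via their sample_weight argument; the group recalibration and equalized odds approaches mentioned above instead adjust predictions or decision thresholds per group after training.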

Language: English

Citations

2

Bias Mitigation in Primary Healthcare Artificial Intelligence Models: Scoping Review (Preprint) DOI Creative Commons
Maxime Sasseville, Steven Ouellet, Caroline Rhéaume

et al.

Published: May 6, 2024

BACKGROUND: Artificial intelligence (AI) predictive models in primary healthcare can potentially lead to benefits for population health. Algorithms can identify more rapidly and accurately who should receive care and health services, but they could also perpetuate or exacerbate existing biases toward diverse groups. We noticed a gap in actual knowledge about which strategies are deployed to assess and mitigate bias toward diverse groups, based on their personal or protected attributes, in these algorithms.

OBJECTIVE: To describe the attempts, strategies, and methods used to mitigate bias in artificial intelligence models in primary healthcare; to identify which diverse groups and protected attributes have been considered; and to evaluate the results of these methods on both bias attenuation and AI model performance.

METHODS: We conducted a scoping review informed by Joanna Briggs Institute (JBI) recommendations. An experienced librarian developed the search strategy for four databases (Medline (OVID), CINAHL (EBSCO), PsycInfo, and Web of Science), covering sources published between 2017-01-01 and 2022-11-15. We imported the records into Covidence, and pairs of reviewers independently screened titles and abstracts, applied the selection criteria, and performed full-text screening. Any discrepancies regarding the inclusion of studies were resolved through consensus. Based on reporting standards for AI in healthcare, we performed data extraction covering study objectives, the models' main features, the groups concerned, the mitigation strategies deployed, and the results. Using the Mixed-Methods Appraisal Tool (MMAT), we appraised the quality of the studies.

RESULTS: After removing 585 duplicates, we screened 1018 titles and abstracts. From the 189 remaining after exclusion, we excluded 172 full texts and included 17 studies. The most investigated protected attributes were Race (or Ethnicity) (12/17) and Sex (mostly identified as Gender in the studies), using a binary “male vs female” classification (10/17). We grouped the mitigation attempts into the following categories: 1) modifying datasets, 2) sourcing data such as Electronic Health Records, 3) developing tools with a “human-in-the-loop” approach, and 4) identifying ethical principles for decision-making. Mathematical and algorithmic preprocessing methods, such as changing labeling and reweighing, along with a natural language processing method extracting data from unstructured notes, showed the greatest potential. Other methods to enhance model fairness, including group recalibration and application of the equalized odds metric, either exacerbated prediction errors or resulted in overall miscalibrations.

CONCLUSIONS: The results suggest that bias can be more easily mitigated when data are open-sourced, multiple stakeholders are involved, and mitigation takes place during the algorithm's preprocessing stage. Further empirical studies, considering nonbinary gender identities and Indigenous peoples in Canada, are needed to confirm and expand this knowledge.

CLINICALTRIAL: OSF Registries qbph8; https://osf.io/qbph8

INTERNATIONAL REGISTERED REPORT: RR2-10.2196/46684

Language: English

Citations

0

Chatsos: Vector Database Augmented Generative Question Answering Assistant in Safety Engineering DOI
Haiyang Tang, Dongping Chen, Qingzhao Chu

et al.

Published: Jan. 1, 2024

Language: English

Citations

0

Human-like object concept representations emerge naturally in multimodal large language models DOI Creative Commons
Huiguang He, Changde Du, Kaicheng Fu

et al.

Research Square (Research Square), Journal Year: 2024, Volume and Issue: unknown

Published: Aug. 13, 2024

Abstract: The conceptualization and categorization of natural objects in the human mind have long intrigued cognitive scientists and neuroscientists, offering crucial insights into perception and cognition. Recently, the rapid development of Large Language Models (LLMs) has raised the attractive question of whether these models can also develop human-like object representations through exposure to vast amounts of linguistic and multimodal data. In this study, we combined behavioral and neuroimaging analysis methods to uncover how object concept representations in LLMs correlate with those of humans. By collecting large-scale datasets of 4.7 million triplet judgments from an LLM and a Multimodal LLM (MLLM), we were able to derive low-dimensional embeddings that capture the underlying similarity structure of 1,854 natural objects. The resulting 66-dimensional embeddings were found to be highly stable and predictive, and exhibited semantic clustering akin to human mental representations. Interestingly, the interpretability of the derived dimensions suggests that the LLM and MLLM developed human-like conceptual representations of objects. Further analysis demonstrated strong alignment between the identified model embeddings and neural activity patterns in many functionally defined brain ROIs (e.g., EBA, PPA, RSC, and FFA). This provides compelling evidence that the object representations in LLMs, while not identical to human ones, share fundamental commonalities that reflect key schemas of human conceptual knowledge. This study advances our understanding of machine intelligence and informs the development of more human-like artificial cognitive systems.
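As a rough sketch of the triplet-based embedding procedure described above (a generic illustration assuming a SPoSE-like odd-one-out model; the abstract does not specify the exact estimator, and this is not the authors' code), the snippet below learns low-dimensional object embeddings by gradient ascent on the likelihood of the pair judged most similar in each triplet. The object count, dimensionality, and triplets are toy assumptions; the study reports about 4.7 million triplets over 1,854 objects and 66 dimensions.

```python
# Generic sketch of learning object embeddings from odd-one-out triplet judgments,
# assuming a SPoSE-like softmax model over pairwise dot-product similarities.
# Sizes and data are toy assumptions, not the study's 4.7M triplets / 1,854 objects / 66 dims.
import numpy as np

rng = np.random.default_rng(0)
n_objects, n_dims = 50, 8
E = 0.01 * rng.standard_normal((n_objects, n_dims))  # object embeddings (rows)

# Triplet (i, j, k): k was picked as the odd one out, i.e. the pair (i, j)
# was judged the most similar of the three possible pairs.
triplets = np.array([rng.choice(n_objects, size=3, replace=False) for _ in range(2000)])

def epoch(E, triplets, lr=0.1):
    """One pass of gradient ascent on the log-likelihood of the chosen pair."""
    grad = np.zeros_like(E)
    for i, j, k in triplets:
        sims = np.array([E[i] @ E[j], E[i] @ E[k], E[j] @ E[k]])
        p = np.exp(sims - sims.max())
        p /= p.sum()                      # softmax over the three candidate pairs
        c = np.eye(3)[0] - p              # gradient of log p(pair (i, j)) w.r.t. sims
        grad[i] += c[0] * E[j] + c[1] * E[k]
        grad[j] += c[0] * E[i] + c[2] * E[k]
        grad[k] += c[1] * E[i] + c[2] * E[j]
    return E + lr * grad / len(triplets)

for _ in range(100):
    E = epoch(E, triplets)
# SPoSE additionally clips E at zero and applies an L1 penalty to obtain sparse,
# interpretable dimensions; each row of E is then an object's concept embedding.
```

In the study, embeddings of this kind were then compared with human behavioral judgments and with fMRI activity in category-selective regions such as EBA, PPA, RSC, and FFA.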

Language: English

Citations

0
