Acceptable Use Policies for Foundation Models

Kevin Klyman

Published: Nov. 26, 2024

Policymakers hoping to regulate foundation models have focused on preventing specific objectionable uses of AI systems, such as the creation of bioweapons, deepfakes, and child sexual abuse material. Effectively blocking these uses can be difficult because foundation models are general-purpose technologies that can in principle be used to generate any type of content. Nevertheless, model developers have been proactive in this area, adopting broad acceptable use policies that prohibit many dangerous uses they select themselves as part of their terms of service or licenses. As part of the 2023 Foundation Model Transparency Index, researchers at the Stanford Center for Research on Foundation Models catalogued the acceptable use policies of 10 leading developers. All of these companies publicly disclose which uses of their models are permitted, restricted, or prohibited, but there is little additional information available about how these policies are implemented. Only 3 developers disclose how they enforce their policy, while only 2 give a justification to users when they enforce it. We provide background on acceptable use policies, a preliminary analysis of 30 developers' policies, and a discussion of policy considerations related to attempts to restrict the uses of foundation models.

Language: English

"This Journey is Never Truly Over, For the Ball I Carry is Always Moving": Future Obituaries and End-of-Life First Design DOI
Jaydon Farao, Ajit G. Pillai, Hafeni Mthoko, et al.

Published: April 23, 2025

Language: English

Cited by: 0

Envisioning Possibilities and Challenges of AI for Personalized Cancer Care
Elaine Wei San Kong, Kuo-Ting Huang, Aakash Gautam, et al.

Published: Nov. 11, 2024

The use of Artificial Intelligence (AI) in healthcare, including caring for cancer survivors, has gained significant interest. However, gaps remain in our understanding of how such AI systems can provide care, especially for ethnic and racial minority groups who continue to face care disparities. Through interviews with six participants, we identify critical gaps in current healthcare, such as a lack of personalized care and insufficient cultural and linguistic accommodation. AI, when applied to care, was seen as a way to address these issues by enabling real-time, culturally aligned, and linguistically appropriate interactions. We also uncovered concerns about the implications of AI-driven personalization, including data privacy, the loss of human touch in caregiving, and the risk of echo chambers that limit exposure to diverse information. We conclude by discussing the trade-offs between AI-enhanced personalization and the need for structural changes that go beyond technological solutions, leading us to argue that we should begin by asking, "Why personalization?"

Language: English

Cited by: 0
