Published: April 24, 2025
Language: English
Published: Jan. 1, 2024
The advent of Generative AI, particularly through Large Language Models (LLMs) like ChatGPT and its successors, marks a paradigm shift in the AI landscape. Advanced LLMs exhibit multimodality, handling diverse data formats, thereby broadening their application scope. However, the complexity and emergent autonomy of these models introduce challenges in predictability and legal compliance. This paper analyses the regulatory implications within the European Union context, focusing on liability, privacy, intellectual property, and cybersecurity. It examines the adequacy of existing and proposed EU legislation, including the Artificial Intelligence Act (AIA), in addressing the challenges posed by Generative AI in general and LLMs in particular. It identifies potential gaps and shortcomings in the legislative framework and proposes recommendations to ensure the safe and compliant deployment of generative models.
Language: English
Citations: 28
Critical Perspectives on Accounting, Journal Year: 2024, Volume and Issue: 99, P. 102722 - 102722
Published: Feb. 22, 2024
New large language models (LLMs) like ChatGPT have the potential to change qualitative research by contributing to every stage of the process, from generating interview questions to structuring publications. However, it is far from clear whether such 'assistance' will enable or deskill and eventually displace the researcher. This paper sets out to explore the implications of the recently emerged capabilities of LLMs: how they acquired their seemingly 'human-like' capacity to 'converse' with us humans, and in what ways these capabilities are deceptive and misleading. Building on a comparison of the different 'trainings' of humans and LLMs, it first traces the human-like qualities of LLM communication to the human proclivity to project communicative intent into and onto LLMs' purely imitative capacity to predict the structure of communication. It then goes into detail about the ways in which this communication is misleading: in relation to the absolute 'certainty' with which LLMs 'converse', their intrinsic tendencies towards 'hallucination' and 'sycophancy', their narrow conception of 'artificial intelligence', their complete lack of ethical sensibility and responsibility, and finally the feared danger of an 'emergence' of 'human-competitive' or 'superhuman' capabilities. The paper concludes by noting the dangers of the widespread use of LLMs as 'mediators' of self-understanding and culture. A postscript offers a brief reflection on what only researchers themselves can do.
Language: English
Citations: 17
SSRN Electronic Journal, Journal Year: 2024, Volume and Issue: unknown
Published: Jan. 1, 2024
The advent of Generative AI, particularly through Large Language Models (LLMs) like ChatGPT and its successors, marks a paradigm shift in the AI landscape. Advanced LLMs exhibit multimodality, handling diverse data formats, thereby broadening their application scope. However, the complexity and emergent autonomy of these models introduce challenges in predictability and legal compliance. This paper delves into the regulatory implications within the European Union context, analyzing aspects of liability, privacy, intellectual property, and cybersecurity. It critically examines the adequacy of existing and proposed EU legislation, including the Artificial Intelligence Act (AIA) draft, in addressing the challenges posed by Generative AI in general and LLMs in particular. It identifies potential gaps and shortcomings in the legislative framework and proposes recommendations to ensure the safe and compliant deployment of generative models, ensuring they align with the EU's evolving digital landscape and standards.
Language: English
Citations: 11
Royal Society Open Science, Journal Year: 2024, Volume and Issue: 11(8)
Published: Aug. 1, 2024
Careless speech is a new type of harm created by large language models (LLMs) that poses cumulative, long-term risks to science, education and shared social truth in democratic societies. LLMs produce responses that are plausible, helpful and confident, but that contain factual inaccuracies, misleading references and biased information. These subtle mistruths are poised to cumulatively degrade and homogenize knowledge over time. This article examines the existence and feasibility of a legal duty for LLM providers to create models that 'tell the truth'. We argue that providers should be required to mitigate careless speech and better align their models with truth through open, democratic processes. We define careless speech against 'ground truth' and related concepts including hallucinations, misinformation and disinformation. We assess truth-related obligations in EU human rights law and the Artificial Intelligence Act, Digital Services Act, Product Liability Directive and Artificial Intelligence Liability Directive. Current frameworks contain limited, sector-specific truth duties. Drawing on the duties of science and academia, education, and archives and libraries, as well as a German case in which Google was held liable for defamation caused by autocomplete, we propose a pathway towards a legal truth duty for providers of narrow- and general-purpose LLMs.
Language: English
Citations: 11
AI, Journal Year: 2025, Volume and Issue: 6(1), P. 12 - 12
Published: Jan. 15, 2025
The growing interest in advanced large language models (LLMs) like ChatGPT has sparked debate about how best to use them in various human activities. However, a neglected issue in the discussion concerning applications of LLMs is whether they can reason logically and follow rules in novel contexts, both of which are critical for our understanding of LLMs. To address this knowledge gap, this study investigates five LLMs (ChatGPT-4o, Claude, Gemini, Meta AI, and Mistral) using word ladder puzzles to assess their logical reasoning and rule-adherence capabilities. Our two-phase methodology involves (1) giving explicit instructions on how to solve the puzzles and then evaluating rule understanding, followed by (2) assessing the LLMs' ability to create puzzles while adhering to the rules. Additionally, we test whether the models implicitly recognize and avoid HIPAA privacy violations as an example of a real-world scenario. Our findings reveal that the models show a persistent lack of logical reasoning and systematically fail to follow the puzzle rules. Furthermore, all models except Claude prioritized task completion (text writing) over ethical considerations in the HIPAA test. These results expose flaws in the models' rule-following capabilities, raising concerns about their reliability in tasks requiring strict reasoning. Therefore, we urge caution when integrating LLMs into critical fields and highlight the need for further research into their capabilities and limitations to ensure responsible AI development.
Language: English
Citations: 1
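To make concrete what "rule adherence" means in the word-ladder setting the study above describes, the following is a minimal sketch of a validity checker. It is illustrative only and not taken from the cited paper; the toy lexicon, function names, and the exact rule set assumed here (fixed word length, exactly one letter changed per step, every step a real word) are the editor's assumptions.

    # Minimal word-ladder rule checker (sketch, hypothetical lexicon; not from the cited study)
    VALID_WORDS = {"cold", "cord", "card", "ward", "warm"}  # toy stand-in for a real word list

    def one_letter_apart(a: str, b: str) -> bool:
        """True if a and b have equal length and differ in exactly one position."""
        return len(a) == len(b) and sum(x != y for x, y in zip(a, b)) == 1

    def is_valid_ladder(steps: list[str], lexicon: set[str] = VALID_WORDS) -> bool:
        """Check the assumed rules an LLM-produced ladder must satisfy."""
        if len(steps) < 2:
            return False
        if any(word not in lexicon for word in steps):
            return False  # an invented "word" breaks the rules
        return all(one_letter_apart(a, b) for a, b in zip(steps, steps[1:]))

    # Example: a correct ladder passes, a two-letter jump fails.
    print(is_valid_ladder(["cold", "cord", "card", "ward", "warm"]))  # True
    print(is_valid_ladder(["cold", "card", "ward", "warm"]))          # False

A checker of this kind is what systematic rule-following failure would be measured against: each model-generated ladder either satisfies every constraint or it does not.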
Computer Law & Security Review, Journal Year: 2024, Volume and Issue: unknown, P. 106066 - 106066
Published: Oct. 1, 2024
Language: English
Citations: 8
Artificial Intelligence and Law, Journal Year: 2024, Volume and Issue: unknown
Published: Dec. 3, 2024
Language: English
Citations: 6
International Journal of the Legal Profession, Journal Year: 2024, Volume and Issue: unknown, P. 1 - 26
Published: Nov. 5, 2024
Two of the forces that have most shaped the landscape in higher education over the last few years have been sustainability on the one hand and Artificial Intelligence (AI) on the other. Bringing these phenomena together, this article examines the multifaceted challenges posed by integrating AI into legal education. The integration of AI has promised to revolutionise the field, offering unprecedented opportunities for efficiency and innovation. However, this transformative shift is accompanied by ethical challenges. This article explores important issues arising from the adoption of AI in legal education, emphasising equity, ethics and the long-term viability of AI-driven initiatives. Strategies to promote fairness, inclusivity and sustainable practices are explored, alongside resource allocation schemes, digital divide mitigation and sustainable faculty training programmes. Ethical considerations are examined, focusing on biases and on policies promoting equity and inclusion. The article also discusses the challenge of regulatory compliance in an evolving landscape. The intersection of sustainability and AI adoption necessitates a conscientious application of principles to navigate ethical dilemmas responsibly. By addressing these challenges, legal education can lead the way in shaping a future where AI serves as a valuable tool.
Language: English
Citations: 5
Revue française d'administration publique, Journal Year: 2025, Volume and Issue: n° 186(2), P. 541 - 555
Published: Jan. 2, 2025
Citations: 0
Journal of Lipid and Atherosclerosis, Journal Year: 2025, Volume and Issue: 14(1), P. 77 - 77
Published: Jan. 1, 2025
Dyslipidemia dramatically increases the risk of cardiovascular diseases, necessitating appropriate treatment techniques. Generative AI (GenAI), an advanced technology that can generate diverse content by learning from vast datasets, provides promising new opportunities to address this challenge. GenAI-powered frequently asked questions systems and chatbots offer continuous, personalized support for lifestyle modifications and medication adherence, which is crucial for patients with dyslipidemia. These tools also help promote health literacy by making information more accessible and reliable. GenAI helps healthcare providers construct clinical case scenarios, training materials, and evaluation tools, and supports professional development and evidence-based practice. Multimodal GenAI analyzes food images and nutritional information to deliver dietary recommendations tailored to each patient's condition, improving the long-term management of those with dyslipidemia. Moreover, using GenAI for image generation enhances the visual quality of educational materials for both patients and professionals, allowing them to create real-time, customized visual aids. To apply GenAI successfully, healthcare providers must develop GenAI-related abilities, such as prompt engineering and the critical appraisal of GenAI-generated data.
Language: English
Citations: 0