Artificial Intelligence, the ChatGPT Large Language Model: Assessing the Accuracy of Responses to the Gynaecological Endoscopic Surgical Education and Assessment (GESEA) Level 1-2 knowledge tests
Matteo Pavone, Livia Palmieri, Nicolò Bizzarri et al.

Facts Views and Vision in ObGyn, Journal Year: 2024, Volume and Issue: 16(4), P. 449 - 456

Published: Dec. 1, 2024

Background: In 2022, OpenAI launched ChatGPT 3.5, which is now widely used in medical education, training, and research. Despite its valuable use for the generation of information, concerns persist about its authenticity and accuracy. Its undisclosed information sources and outdated dataset pose risks of misinformation. Although it is widely used, inaccuracies in AI-generated text raise doubts about its reliability. The ethical use of such technologies is crucial to uphold scientific accuracy. Objective: This study aimed to assess the accuracy of ChatGPT in doing the GESEA tests 1 and 2. Materials and Methods: 100 multiple-choice theoretical questions from GESEA certifications 1 and 2 were presented to ChatGPT, requesting the selection of the correct answer along with an explanation. Expert gynaecologists evaluated and graded the explanations. Main outcome measures: ChatGPT showed a 59% rate of correct responses, with 64% of them providing comprehensive explanations. It performed better on Level 1 (64% accuracy) than on Level 2 (54% accuracy) questions. Conclusions: ChatGPT is a versatile tool in medicine and research, offering knowledge and promoting evidence-based practice. Despite its widespread use, it has not been validated yet. This study found a 59% correct response rate, highlighting the need for validation and ethical considerations. Future research should investigate ChatGPT's truthfulness in subspecialty fields such as gynaecologic oncology and compare different versions of the chatbot for continuous improvement. What is new? Artificial intelligence (AI) has great potential. However, the validity of its outputs remains unverified. This study aims to evaluate the accuracy of responses generated by ChatGPT to enhance the critical use of this tool.

Language: English

Minimally Invasive Rectal Surgery: Current Status and Future Perspectives in the Era of Digital Surgery
Marta Goglia, Matteo Pavone, Vito D’Andrea et al.

Journal of Clinical Medicine, Journal Year: 2025, Volume and Issue: 14(4), P. 1234 - 1234

Published: Feb. 13, 2025

Over the past two decades, minimally invasive approaches in rectal surgery have changed the landscape of surgical interventions, impacting both malignant and benign pathologies. The dynamic nature of rectal cancer treatment owes much to innovations in surgical techniques, reflected in the expanding literature on available treatment modalities. Local excision, facilitated by minimally invasive surgery, offers curative potential for patients with early T1 rectal cancers and favorable pathologic features. For more complex cases, laparoscopic and robotic surgery have demonstrated significant efficacy, providing precise, durable outcomes while reducing perioperative morbidity and enhancing postoperative recovery. Additionally, advancements in imaging, instrumentation, and enhanced recovery protocols have further optimized patient care. The integration of multidisciplinary care has also emerged as a cornerstone of treatment, emphasizing collaboration among surgeons, oncologists, and radiologists to deliver personalized, evidence-based care. This narrative review aims to elucidate current minimally invasive surgical techniques for rectal pathologies, spanning malignant and benign conditions, and to explore future directions in the field, including the role of artificial intelligence and next-generation robotic platforms.

Language: English

Citations: 1

Using Large Language Models in the Diagnosis of Acute Cholecystitis: Assessing Accuracy and Guidelines Compliance
Marta Goglia, Arianna Cicolani, Francesco Maria Carrano et al.

The American Surgeon, Journal Year: 2025, Volume and Issue: unknown

Published: March 12, 2025

Background: Large language models (LLMs) are advanced tools capable of understanding and generating human-like text. This study evaluated the accuracy of several commercial LLMs in addressing clinical questions related to the diagnosis and management of acute cholecystitis, as outlined in the Tokyo Guidelines 2018 (TG18). We assessed their congruence with the expert panel discussions presented in the guidelines. Methods: ChatGPT4.0, Gemini Advanced, and GPTo1-preview were tested on ten clinical questions. Eight were derived from TG18, and two were formulated by the authors. Two authors independently rated each LLM’s responses on a four-point scale: (1) accurate and comprehensive, (2) accurate but not comprehensive, (3) partially accurate, partially inaccurate, and (4) entirely inaccurate. A third author resolved any scoring discrepancies. Then, we comparatively analyzed the performance of ChatGPT4.0 against the newer large language models (LLMs), specifically Gemini Advanced and GPTo1-preview, on the same set of questions to delineate their respective strengths and limitations. Results: ChatGPT4.0 provided consistent responses for 90% of the questions. It delivered “accurate and comprehensive” answers for 4/10 (40%) of the questions and “accurate but not comprehensive” answers for 5/10 (50%). One response (10%) was “partially accurate, partially inaccurate.” The newer models demonstrated higher accuracy on some questions but yielded a similar percentage of “partially accurate, partially inaccurate” responses. Notably, neither model produced “entirely inaccurate” answers. Discussion: LLMs such as ChatGPT demonstrate potential in accurately addressing clinical questions regarding acute cholecystitis. With awareness of their limitations, careful implementation, and ongoing refinement, LLMs could serve as valuable resources for physician education and patient information, potentially improving clinical decision-making in the future.

Language: English

Citations: 0

Exploring the integration of AI in library services: Perspectives and considerations
Nihar K. Patra, Panorea Gaitanou, Bolaji David Oladokun et al.

Business Information Review, Journal Year: 2025, Volume and Issue: unknown

Published: May 12, 2025

This study aims to explore the integration of ChatGPT and similar AI language models within library contexts, specifically focusing on their use by library professionals. It seeks to understand the demographic profile of users and the awareness, usage, and impact of these tools on library services. A questionnaire was employed as the primary methodology for data collection. This method was preferred because it offers the advantage of reaching a larger and more diverse sample size, which is particularly beneficial in research where the target population often includes a wide range of users with varying needs and backgrounds. The study begins with an analysis of the participants, followed by an examination of awareness and usage using the chi-square test. It further investigates encounters with prevalent AI-powered tools in libraries, highlighting their roles in enhancing various services, and explores the geographic distribution among participants. Most participants are male, aged 25-44, hold advanced degrees, and are primarily from India. There is a significant link between awareness and usage of Artificial Intelligence tools like virtual assistants and automated cataloguing systems. These tools are seen to enhance customer service (86.67%) and research assistance (58.67%), though privacy (65%) and ethical concerns (60%) remain prevalent. Despite minimal concern about AI replacing librarians (8%), there is strong support (57%) for integrating ChatGPT into next-generation library services. Overall, this study highlights a high level of optimism toward AI in libraries while considering high-level ethical and practical concerns. This thorough analysis provides insightful information on the state of the art, awareness, attitudes, and implications around AI in libraries. It clarifies the advantages as well as the difficulties, encompassing ethical issues and real-world effects on library operations.

Language: English

Citations: 0

Editorial: Future frontiers in the management of metastatic colorectal cancer
Francesco Giovinazzo, Gaetano Gallo, Marta Goglia et al.

Frontiers in Oncology, Journal Year: 2024, Volume and Issue: 14

Published: Oct. 4, 2024

Language: English

Citations: 0
