Assessing the documentation of publicly available medical image and signal datasets and their impact on bias using the BEAMRAD tool DOI Creative Commons

Maria Galanty, Dieuwertje Luitse, Sijm H. Noteboom

et al.

Scientific Reports, Journal Year: 2024, Volume and Issue: 14(1)

Published: Dec. 30, 2024

Medical datasets are vital for advancing Artificial Intelligence (AI) in healthcare. Yet biases in the datasets on which deep-learning models are trained can compromise their reliability. This study investigates bias stemming from dataset-creation practices. Drawing on existing guidelines, we first developed the BEAMRAD tool to assess the documentation of public Magnetic Resonance Imaging (MRI), Color Fundus Photography (CFP), and Electrocardiogram (ECG) datasets. In doing so, we provide an overview of the biases that may emerge due to inadequate dataset documentation. Second, we examine the current state of documentation for public medical image and signal data. Our research reveals that there is substantial variance in the documentation of image datasets, even though documentation guidelines have been developed for medical imaging. This indicates that documentation is subject to individual discretionary decisions. Furthermore, we find that aspects such as hardware and data acquisition details are commonly documented, while information regarding annotation practices, error quantification, or limitations is not consistently reported. This risks having considerable implications for the ability of data users to detect potential sources of bias through the respective documentation and to develop reliable and robust models that can be adapted to clinical practice.

Language: English

Segment anything model for medical images? DOI
Yuhao Huang, Xin Yang, Lian Liu

et al.

Medical Image Analysis, Journal Year: 2023, Volume and Issue: 92, P. 103061 - 103061

Published: Dec. 7, 2023

Language: English

Citations

200

SegRap2023: A benchmark of organs-at-risk and gross tumor volume Segmentation for Radiotherapy Planning of Nasopharyngeal Carcinoma DOI
Xiangde Luo, Jia Fu, Yunxin Zhong

et al.

Medical Image Analysis, Journal Year: 2025, Volume and Issue: 101, P. 103447 - 103447

Published: Jan. 2, 2025

Language: English

Citations

4

Towards more precise automatic analysis: a systematic review of deep learning-based multi-organ segmentation DOI Creative Commons

Xiaoyu Liu, Linhao Qu, Ziyue Xie

et al.

BioMedical Engineering OnLine, Journal Year: 2024, Volume and Issue: 23(1)

Published: June 8, 2024

Abstract Accurate segmentation of multiple organs in the head, neck, chest, and abdomen from medical images is an essential step in computer-aided diagnosis, surgical navigation, and radiation therapy. In the past few years, with their data-driven feature extraction and end-to-end training, automatic deep learning-based multi-organ segmentation methods have far outperformed traditional methods and have become a new research topic. This review systematically summarizes the latest research in this field. We searched Google Scholar for papers published from January 1, 2016 to December 31, 2023, using the keywords “multi-organ segmentation” and “deep learning”, resulting in 327 papers. We followed the PRISMA guidelines for paper selection, and 195 studies were deemed to be within the scope of this review. We summarized the two main aspects involved in multi-organ segmentation: datasets and methods. Regarding datasets, we provided an overview of existing public datasets and conducted an in-depth analysis. Concerning methods, we categorized existing approaches into three major classes: fully supervised, weakly supervised, and semi-supervised, based on whether they require complete label information, and summarized the achievements of these approaches in terms of segmentation accuracy. In the discussion and conclusion section, we outlined the current trends in multi-organ segmentation.

Language: English

Citations

10

HaN-Seg: The head and neck organ-at-risk CT and MR segmentation challenge DOI Creative Commons
Gašper Podobnik, Bulat Ibragimov, Elias Tappeiner

et al.

Radiotherapy and Oncology, Journal Year: 2024, Volume and Issue: 198, P. 110410 - 110410

Published: June 24, 2024

To promote the development of auto-segmentation methods for head and neck (HaN) radiation treatment (RT) planning that exploit information from both computed tomography (CT) and magnetic resonance (MR) imaging modalities, we organized HaN-Seg: The Head and Neck Organ-at-Risk CT and MR Segmentation Challenge. The challenge task was to automatically segment 30 organs-at-risk (OARs) of the HaN region in 14 withheld test cases, given the availability of 42 publicly available training cases. Each case consisted of one contrast-enhanced CT image and one T1-weighted MR image of the same patient, with corresponding reference OAR delineation masks. Performance was evaluated in terms of the Dice similarity coefficient (DSC) and the 95-percentile Hausdorff distance (HD95), and statistical ranking was applied for each metric by pairwise comparison of the submitted methods using the Wilcoxon signed-rank test. While 23 teams registered for the challenge, only seven submitted their solutions in the final phase. The top-performing team achieved a DSC of 76.9 % and an HD95 of 3.5 mm. All participating teams utilized architectures based on U-Net, with the winning team leveraging rigid registration combined with network entry-level concatenation of both modalities. The challenge simulated a real-world clinical scenario by providing non-registered images with varying fields-of-view and voxel sizes. Remarkably, the top-performing team achieved segmentation performance surpassing the inter-observer agreement on the same dataset. These results set a benchmark for future research on this dataset and on paired multi-modal segmentation in general.
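Both challenge metrics are standard overlap and surface-distance measures. The sketch below is a minimal illustration of how DSC and HD95 between two binary masks might be computed with NumPy and SciPy; it is not the challenge's official evaluation code, and the exact HD95 convention (here, the 95th percentile of the pooled symmetric surface distances) and the voxel-spacing handling are assumptions.

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two boolean masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return float(2.0 * np.logical_and(a, b).sum() / denom) if denom else 1.0

def hd95(a: np.ndarray, b: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    """95th-percentile symmetric surface distance (mm) between two non-empty masks."""
    a, b = a.astype(bool), b.astype(bool)
    surf_a = a & ~binary_erosion(a)          # surface voxels of mask a
    surf_b = b & ~binary_erosion(b)          # surface voxels of mask b
    # Distance of every voxel to the nearest surface voxel of the other mask.
    dist_to_a = distance_transform_edt(~surf_a, sampling=spacing)
    dist_to_b = distance_transform_edt(~surf_b, sampling=spacing)
    d = np.hstack([dist_to_b[surf_a], dist_to_a[surf_b]])
    return float(np.percentile(d, 95))
```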

Language: English

Citations

10

vOARiability: Interobserver and intermodality variability analysis in OAR contouring from head and neck CT and MR images DOI Creative Commons
Gašper Podobnik, Bulat Ibragimov, Primož Peterlin

et al.

Medical Physics, Journal Year: 2024, Volume and Issue: 51(3), P. 2175 - 2186

Published: Jan. 17, 2024

Abstract Background Accurate and consistent contouring of organs-at-risk (OARs) from medical images is a key step of radiotherapy (RT) cancer treatment planning. Most contouring approaches rely on computed tomography (CT) images, but the integration of the complementary magnetic resonance (MR) modality is highly recommended, especially from the perspective of OAR contouring, synthetic CT image generation from MR images for MR-only RT, and MR-guided RT. Although the MR modality has been recognized as valuable for contouring OARs in the head and neck (HaN) region, the accuracy and consistency of the resulting contours have not yet been objectively evaluated. Purpose To analyze the interobserver and intermodality variability in HaN OAR contouring performed by observers with a different level of experience on CT and MR images of the same patients. Methods In a final cohort of 27 patients, up to 31 OARs were contoured by a radiation oncology resident (junior observer, JO) and a board-certified radiation oncologist (senior observer, SO). The contours were then evaluated in terms of the interobserver variability, characterized as the agreement among observers (JO and SO) when contouring in a selected modality (CT or MR), and the intermodality variability, characterized as the agreement among modalities (CT and MR) when contours were produced by a selected observer (JO or SO), both by the Dice coefficient (DC) and the 95-percentile Hausdorff distance (HD95). Results The mean (± standard deviation) interobserver variability was 69.0 ± 20.2% and 5.1 ± 4.1 mm, while the mean intermodality variability was 61.6 ± 19.0% and 6.1 ± 4.3 mm in terms of DC and HD95, respectively, across all OARs. Statistically significant differences were found only for specific OARs. The CT and MR image registration resulted in a target registration error of 1.7 ± 0.5 mm, which was considered valid for the analysis of the intermodality variability. Conclusions The interobserver variability was, in general, similar for both modalities, and the level of observer experience did not considerably affect the contouring performance. However, the results indicate that an OAR that is difficult to contour remains difficult regardless of the image it is contoured in, which may be an important factor when OARs are deemed difficult to contour. Several differences can also be attributed to adherence to contouring guidelines and to the poor visibility of OARs without distinctive boundaries in either CT or MR images. As considerable intermodality variability was observed for several OARs, it can be concluded that the choice of modality affects the contours to an almost equal degree as the choice of observer, which works in favor of multimodal approaches to OAR contouring.
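For clarity, the short sketch below shows how the two variability analyses pair the available contours (hypothetical data layout and function names, not the study's code): interobserver variability compares the two observers within one modality, while intermodality variability compares the two modalities for one observer, on masks assumed to be resampled to a common grid after the reported rigid registration.

```python
def variability(contours: dict, metric) -> tuple[dict, dict]:
    """contours[(observer, modality)] -> boolean mask of one OAR for one patient,
    assumed resampled onto a common grid (CT and MR were rigidly registered).
    `metric` is any agreement function, e.g. the dice() sketch above."""
    interobserver = {m: metric(contours[("JO", m)], contours[("SO", m)])
                     for m in ("CT", "MR")}
    intermodality = {o: metric(contours[(o, "CT")], contours[(o, "MR")])
                     for o in ("JO", "SO")}
    return interobserver, intermodality
```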

Language: English

Citations

5

ScribblePrompt: Fast and Flexible Interactive Segmentation for Any Biomedical Image DOI
Hallee E. Wong, Marianne Rakic, John V. Guttag

et al.

Lecture notes in computer science, Journal Year: 2024, Volume and Issue: unknown, P. 207 - 229

Published: Nov. 9, 2024

Language: English

Citations

4

Automatic segmentation of MRI images for brain radiotherapy planning using deep ensemble learning DOI
SA Yoganathan, Tarraf Torfeh, Satheesh Paloor

et al.

Biomedical Physics & Engineering Express, Journal Year: 2025, Volume and Issue: 11(2), P. 025007 - 025007

Published: Jan. 17, 2025

Abstract Background and Purpose: This study aimed to develop and evaluate an efficient method to automatically segment T1- and T2-weighted brain magnetic resonance imaging (MRI) images. We specifically compared the segmentation performance of individual convolutional neural network (CNN) models against an ensemble approach to advance the accuracy of MRI-guided radiotherapy (RT) planning. Materials and Methods: The evaluation was conducted on a private clinical dataset and a publicly available dataset (HaN-Seg). Anonymized MRI data from 55 brain cancer patients, including T1-weighted, T1-weighted with contrast, and T2-weighted images, were used in the private dataset. The method employed an ensemble deep learning (EDL) strategy that integrated five independently trained 2D networks, each tailored for the precise segmentation of tumors and organs at risk (OARs) in brain MRI scans. Class probabilities were obtained by averaging the final-layer activations (Softmax outputs) of the networks using a weighted-average method and were then converted into discrete labels. Segmentation performance was evaluated using the Dice similarity coefficient (DSC) and the Hausdorff distance at 95% (HD95). The ensemble model was also tested on the HaN-Seg public dataset for comparison. Results: The ensemble demonstrated superior performance on both datasets. For the private dataset, the ensemble achieved an average DSC of 0.7 ± 0.2 and an HD95 of 4.5 ± 2.5 mm across all segmentations, significantly outperforming the individual networks, which yielded DSC values ≤0.6 and HD95 ≥14 mm. Similar improvements were observed on the HaN-Seg dataset. Conclusions: Our study shows that the ensemble approach consistently outperforms individual CNN models on both datasets, demonstrating the potential of ensemble deep learning to enhance segmentation accuracy. These findings underscore the value of ensemble methods in clinical applications, particularly for MRI-guided RT planning.
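The described fusion step (weighted averaging of the per-network Softmax outputs followed by conversion to discrete labels) can be sketched as below; the shapes, weights, and function name are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def ensemble_labels(softmax_maps: list[np.ndarray], weights: list[float]) -> np.ndarray:
    """softmax_maps: one (n_classes, H, W) probability map per trained 2D network."""
    w = np.asarray(weights, dtype=float)
    w /= w.sum()                                 # normalize the ensemble weights
    stacked = np.stack(softmax_maps, axis=0)     # (n_models, n_classes, H, W)
    fused = np.tensordot(w, stacked, axes=1)     # weighted average -> (n_classes, H, W)
    return fused.argmax(axis=0)                  # discrete label map, shape (H, W)
```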

Language: English

Citations

0

A systematic review of the role of artificial intelligence in automating computed tomography-based adaptive radiotherapy for head and neck cancer DOI Creative Commons
E. Mastella, Francesca Calderoni, Luigi Manco

et al.

Physics and Imaging in Radiation Oncology, Journal Year: 2025, Volume and Issue: 33, P. 100731 - 100731

Published: Jan. 1, 2025

Language: English

Citations

0

Rep-MedSAM: Towards Real-Time and Universal Medical Image Segmentation DOI
Mu-Xin Wei, Shuqing Chen, Silin Wu

et al.

Lecture notes in computer science, Journal Year: 2025, Volume and Issue: unknown, P. 57 - 69

Published: Jan. 1, 2025

Language: English

Citations

0

Enhanced nnU-Net Architectures for Automated MRI Segmentation of Head and Neck Tumors in Adaptive Radiation Therapy DOI Creative Commons

Jessica Kächele, Maximilian Zenk, Maximilian Rokuss

et al.

Lecture notes in computer science, Journal Year: 2025, Volume and Issue: unknown, P. 50 - 64

Published: Jan. 1, 2025

Language: English

Citations

0