Published: Jan. 1, 2024
Language: English
International Journal of Computer Assisted Radiology and Surgery, Journal year: 2024, Issue: 19(8), pp. 1615-1625
Published: June 20, 2024
Language: English
Cited: 7
Results in Engineering, Journal year: 2024, Issue: 23, pp. 102745-102745
Published: Aug. 18, 2024
Language: English
Cited: 4
Healthcare Technology Letters, Journal year: 2025, Issue: 12(1)
Published: Jan. 1, 2025
Abstract: Surgical scene segmentation is essential for enhancing surgical precision, yet it is frequently compromised by the scarcity and imbalance of available data. To address these challenges, semantic image synthesis methods based on generative adversarial networks and diffusion models have been developed. However, these methods often yield non-diverse images and fail to capture small, critical tissue classes, limiting their effectiveness. In response, a class-aware semantic diffusion model (CASDM), a novel approach that utilizes segmentation maps as conditions for image synthesis to tackle data scarcity and imbalance, is proposed. Novel class-aware mean squared error and class-aware self-perceptual loss functions are defined to prioritize critical, less visible classes, thereby improving generation quality and relevance. Furthermore, to the authors' knowledge, they are the first to generate multi-class segmentation maps using text prompts in a novel fashion to specify their contents. These maps are then used by CASDM to generate surgical scene images, augmenting the datasets used for training and validating segmentation models. The evaluation, which assesses both image quality and downstream segmentation performance, demonstrates strong effectiveness and generalisability in producing realistic image-map pairs, significantly advancing surgical scene segmentation across diverse and challenging datasets.
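The class-aware loss idea in this abstract (upweighting errors on small, critical tissue classes) can be sketched as follows. This is a minimal illustration only, not the authors' CASDM implementation; the weighting scheme and the helper name `class_aware_mse` are assumptions for the sake of the example.

```python
import numpy as np

def class_aware_mse(pred, target, seg_map, class_weights):
    """Pixel-wise MSE where each pixel's error is scaled by the
    weight of its class, so rare/critical classes dominate the loss.

    pred, target:  (H, W) float arrays (e.g. predicted vs. reference image)
    seg_map:       (H, W) int array of per-pixel class labels
    class_weights: dict mapping class id -> weight (assumed scheme:
                   larger weight for smaller, more critical classes)
    """
    weights = np.vectorize(class_weights.get)(seg_map).astype(float)
    return float(np.mean(weights * (pred - target) ** 2))

# Toy example: class 1 (a small, critical tissue class) is upweighted 5x.
pred = np.zeros((2, 2))
target = np.ones((2, 2))
seg = np.array([[0, 0], [0, 1]])
loss = class_aware_mse(pred, target, seg, {0: 1.0, 1: 5.0})
```

In the toy example every pixel has squared error 1, so the loss reduces to the mean of the weights: (1 + 1 + 1 + 5) / 4 = 2.0, versus 1.0 for plain MSE, showing how the minority class pulls the loss up.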
Language: English
Cited: 0
Algorithms, Journal year: 2025, Issue: 18(3), pp. 155-155
Published: March 9, 2025
The lack of extensive, varied, and thoroughly annotated datasets impedes the advancement of artificial intelligence (AI) for medical applications, especially colorectal cancer detection. Models trained with limited diversity often display biases, especially when utilized on disadvantaged groups. Generative models (e.g., DALL-E 2, Vector-Quantized Generative Adversarial Network (VQ-GAN)) have been used to generate images, but not colonoscopy data for intelligent augmentation. This study developed an effective method for producing synthetic colonoscopy image data, which can be used to train advanced diagnostic models for robust colorectal cancer detection and treatment. Text-to-image synthesis was performed using fine-tuned Visual Large Language Models (LLMs). Stable Diffusion with DreamBooth Low-Rank Adaptation produced images that look authentic, with an average Inception score of 2.36 across three datasets. The validation accuracy of various classification models, Big Transfer (BiT), Fixed Resolution Residual Next Generation (FixResNeXt), and Efficient Neural Network (EfficientNet), was 92%, 91%, and 86%, respectively. Vision Transformer (ViT) and Data-Efficient Image Transformers (DeiT) had an accuracy rate of 93%. Secondly, for the segmentation of polyps, ground truth masks are generated with the Segment Anything Model (SAM). Then, five segmentation models (U-Net, Pyramid Scene Parsing Network (PSNet), Feature Pyramid Network (FPN), Link Network (LinkNet), and Multi-scale Attention Network (MANet)) are adopted. FPN produced excellent results, with an Intersection over Union (IoU) of 0.64, an F1 score of 0.78, a recall of 0.75, and a Dice coefficient of 0.77. The model demonstrates strong performance in terms of both overlap and boundary metrics, with particularly balanced results, as shown by the high Dice coefficient. This highlights how AI-generated data can improve colonoscopy image analysis, which is critical for early detection.
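The overlap metrics reported in this abstract (IoU, Dice/F1, recall) can be computed from binary masks as sketched below; this is a generic illustration, not the study's evaluation code, and the function name `seg_metrics` is an assumption. Note the internal consistency of the reported figures: Dice = 2·IoU/(1 + IoU), so an IoU of 0.64 implies Dice ≈ 2·0.64/1.64 ≈ 0.78, matching the reported 0.77-0.78.

```python
import numpy as np

def seg_metrics(pred, gt):
    """IoU, Dice, and recall for a pair of binary (0/1) masks.
    For binary masks the Dice coefficient equals the F1 score."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()    # true positives
    fp = np.logical_and(pred, ~gt).sum()   # false positives
    fn = np.logical_and(~pred, gt).sum()   # false negatives
    iou = tp / (tp + fp + fn)
    dice = 2 * tp / (2 * tp + fp + fn)     # = 2*iou / (1 + iou)
    recall = tp / (tp + fn)
    return iou, dice, recall

# Tiny example: one overlapping pixel, one spurious, one missed.
pred = np.array([[1, 1], [0, 0]])
gt = np.array([[1, 0], [1, 0]])
iou, dice, recall = seg_metrics(pred, gt)  # 1/3, 0.5, 0.5
```

The Dice-from-IoU identity holds for any pair of masks, which makes it a quick sanity check on reported segmentation numbers.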
Language: English
Cited: 0
2022 International Joint Conference on Neural Networks (IJCNN), Journal year: 2024, Issue: unknown, pp. 1-7
Published: June 30, 2024
Language: English
Cited: 2
Lecture notes in computer science, Journal year: 2024, Issue: unknown, pp. 56-66
Published: Jan. 1, 2024
Language: English
Cited: 1
2022 International Joint Conference on Neural Networks (IJCNN), Journal year: 2024, Issue: 27, pp. 1-8
Published: June 30, 2024
Language: English
Cited: 0
Lecture notes in computer science, Journal year: 2024, Issue: unknown, pp. 647-658
Published: Jan. 1, 2024
Language: English
Cited: 0
Lecture notes in computer science, Journal year: 2024, Issue: unknown, pp. 47-56
Published: Jan. 1, 2024
Language: English
Cited: 0
Lecture notes in computer science, Journal year: 2024, Issue: unknown, pp. 732-742
Published: Jan. 1, 2024
Language: English
Cited: 0