Deep learning prediction of steep and flat corneal curvature using fundus photography in post-COVID telemedicine era DOI
Joon Yul Choi, Hyung-Su Kim, Jin Kuk Kim

et al.

Medical & Biological Engineering & Computing, Journal Year: 2023, Volume and Issue: 62(2), P. 449 - 463

Published: Oct. 27, 2023

Language: English

Deep Network-Based Comprehensive Parotid Gland Tumor Detection DOI
Kubilay Muhammed Sünnetci, Esat Kaba, Fatma Beyazal Çeliker

et al.

Academic Radiology, Journal Year: 2023, Volume and Issue: 31(1), P. 157 - 167

Published: June 3, 2023

Language: English

Citations

56

Impact of harmonization on the reproducibility of MRI radiomic features when using different scanners, acquisition parameters, and image pre-processing techniques: a phantom study DOI Creative Commons
Ghasem Hajianfar, Seyyed Ali Hosseini, Sara Bagherieh

et al.

Medical & Biological Engineering & Computing, Journal Year: 2024, Volume and Issue: 62(8), P. 2319 - 2332

Published: March 27, 2024

Abstract This study investigated the impact of ComBat harmonization on the reproducibility of radiomic features extracted from magnetic resonance images (MRI) acquired on different scanners, using various data acquisition parameters and multiple image pre-processing techniques, in a dedicated MRI phantom. Four scanners were used to acquire images of a nonanatomic phantom as part of the TCIA RIDER database. In the fast spin-echo inversion recovery (IR) sequences, several IR durations were employed, including 50, 100, 250, 500, 750, 1000, 1500, 2000, 2500, and 3000 ms. In addition, a 3D spoiled gradient recalled echo (FSPGR) sequence was used to investigate different flip angles (FA): 2, 5, 10, 15, 20, 25, and 30 degrees. Nineteen phantom compartments were manually segmented. Different approaches were used to pre-process each image: bin discretization, wavelet filter, Laplacian of Gaussian, logarithm, square, square root, and gradient. Overall, 92 first-, second-, and higher-order statistical features were extracted, and ComBat harmonization was also applied to the extracted features. Finally, the Intraclass Correlation Coefficient (ICC) and Kruskal-Wallis (KW) tests were implemented to assess the robustness of the features. The number of non-significant features in the KW test ranged between 0–5 and 29–74 for the different scanners, 31–91 and 37–92 for the three repeated tests, 0–33 and 34–90 for the different FAs, and 3–68 and 65–89 for the different IRs, before and after harmonization, with the different pre-processing techniques, respectively. The number of features with ICC over 90% ranged between 0–8 and 6–60 for the different scanners, 11–75 and 17–80 for the three repeated tests, 3–83 and 9–84 for the different FAs, and 3–49 and 3–63 for the different IRs, before and after harmonization, with the different pre-processing techniques, respectively. The use of different IRs and FAs has a great impact on feature reproducibility; however, the majority of scanner-robust features are also robust to IR and FA variation. Using a single scanner may have a negligible effect on the features, whereas other acquisition factors might affect them to a large extent; ComBat harmonization significantly improved feature reproducibility.
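As an illustration of the robustness screening described in this abstract, the following minimal sketch (not the authors' code) applies a Kruskal-Wallis test and a one-way ICC to a single synthetic radiomic feature measured for 19 phantom compartments under several acquisition settings; the data, the one-way ICC variant, and the thresholds used (p > 0.05, ICC > 0.90) are assumptions for illustration.

```python
# Sketch: screen one radiomic feature for robustness across acquisition settings.
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(0)
n_compartments, n_settings = 19, 7          # e.g., 7 flip angles (synthetic example)
# rows: segmented compartments, columns: acquisition settings
feature = (rng.normal(loc=100, scale=10, size=(n_compartments, 1))
           + rng.normal(scale=2, size=(n_compartments, n_settings)))

# Kruskal-Wallis: does the acquisition setting shift the feature distribution?
h_stat, p_value = kruskal(*[feature[:, j] for j in range(n_settings)])
kw_robust = p_value > 0.05                  # non-significant -> robust

# One-way random-effects ICC(1,1): agreement of repeated measurements per compartment
grand = feature.mean()
msb = n_settings * ((feature.mean(axis=1) - grand) ** 2).sum() / (n_compartments - 1)
msw = ((feature - feature.mean(axis=1, keepdims=True)) ** 2).sum() / (n_compartments * (n_settings - 1))
icc = (msb - msw) / (msb + (n_settings - 1) * msw)
icc_robust = icc > 0.90

print(f"KW p={p_value:.3f} (robust={kw_robust}), ICC={icc:.3f} (robust={icc_robust})")
```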

Language: English

Citations

10

Flight traits of dengue-infected Aedes aegypti mosquitoes DOI Creative Commons
Nouman Javed, Adam J. López-Denman, Prasad N. Paradkar

et al.

Computers in Biology and Medicine, Journal Year: 2024, Volume and Issue: 171, P. 108178 - 108178

Published: Feb. 19, 2024

Understanding the flight behaviour of dengue-infected mosquitoes can play a vital role in various contexts, including modelling disease risks and developing effective interventions against dengue. Studies on mosquito locomotor activity have often faced methodological challenges. Some studies used small tubes, which impacted the natural movement of mosquitoes, while others that used cages did not capture three-dimensional flights, despite mosquitoes naturally flying in three dimensions. In this study, we utilised Mask RCNN (Region-based Convolutional Neural Network) along with cubic spline interpolation to comprehensively track the flights of Aedes aegypti mosquitoes. The analysis considered a number of parameters as characteristics of mosquito flight, including duration, Euclidean distance, speed, and the volume (space) covered during flights. The accuracy achieved for detection and tracking was 98.34%, and 100% for resting mosquitoes. Notably, interpolated data accounted for only 0.31% of the total, underscoring the reliability of the results. The flight trait results revealed that exposure to dengue virus significantly increases flight duration (p-value 0.0135 × 10−3) and the number of flights (p-value 0.029), whilst decreasing the total volume covered, compared with uninfected mosquitoes. The study did not observe any evident impact of infection on the Euclidean distance (p-value 0.064) or speed of Aedes aegypti. These results highlight the intricate relationship between dengue infection and the flight traits of Aedes aegypti, providing valuable insights into transmission dynamics. This study focused on dengue-infected mosquitoes; future research could explore the impact of other arboviruses on flight behaviour.
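To make the post-detection step concrete, the sketch below (not the authors' pipeline) fills short gaps in a 3D mosquito trajectory with cubic spline interpolation and derives simple flight traits such as duration, path length, and mean speed; the detector output, frame rate, and trait definitions are assumptions for illustration and may differ from the paper's exact definitions.

```python
# Sketch: interpolate missing detections and compute basic flight traits.
import numpy as np
from scipy.interpolate import CubicSpline

fps = 25.0                                   # assumed camera frame rate
frames = np.array([0, 1, 2, 5, 6, 7, 8])     # frames where a mosquito was detected
xyz = np.array([[0.0, 0.0, 0.5],
                [0.4, 0.1, 0.6],
                [0.9, 0.3, 0.7],
                [2.1, 0.8, 1.0],
                [2.5, 0.9, 1.1],
                [2.8, 1.1, 1.1],
                [3.0, 1.2, 1.2]])            # mocked 3D positions (metres)

# Cubic spline over detected frames, evaluated on every frame to fill the gap (frames 3-4)
spline = CubicSpline(frames, xyz, axis=0)
all_frames = np.arange(frames[0], frames[-1] + 1)
track = spline(all_frames)

# Flight traits from the interpolated trajectory
duration_s = (all_frames[-1] - all_frames[0]) / fps
step_lengths = np.linalg.norm(np.diff(track, axis=0), axis=1)
path_length = step_lengths.sum()             # sum of per-frame Euclidean steps
mean_speed = path_length / duration_s

print(f"duration={duration_s:.2f}s, path length={path_length:.2f}m, "
      f"mean speed={mean_speed:.2f}m/s, interpolated frames={len(all_frames) - len(frames)}")
```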

Language: English

Citations

9

Reinforced Collaborative-Competitive Representation for Biomedical Image Recognition DOI
Junwei Jin, S. Kevin Zhou, Yanting Li

et al.

Interdisciplinary Sciences Computational Life Sciences, Journal Year: 2025, Volume and Issue: unknown

Published: Jan. 22, 2025

Language: English

Citations

1

Emotion detection from ECG signals with different learning algorithms and automated feature engineering DOI
Faruk Enes Oğuz, Ahmet Alkan, Thorsten Schöler

et al.

Signal Image and Video Processing, Journal Year: 2023, Volume and Issue: 17(7), P. 3783 - 3791

Published: May 30, 2023

Language: English

Citations

20

An adaptive ensemble deep learning framework for reliable detection of pandemic patients DOI
Muhammad Shahid Iqbal, Rizwan Ali Naqvi, Roohallah Alizadehsani

et al.

Computers in Biology and Medicine, Journal Year: 2023, Volume and Issue: 168, P. 107836 - 107836

Published: Dec. 7, 2023

Language: English

Citations

14

Classification of breast lesions in ultrasound images using deep convolutional neural networks: transfer learning versus automatic architecture design DOI Creative Commons
Alaa AlZoubi, Feng Lu, Yicheng Zhu

et al.

Medical & Biological Engineering & Computing, Journal Year: 2023, Volume and Issue: 62(1), P. 135 - 149

Published: Sept. 22, 2023

Abstract Deep convolutional neural networks (DCNNs) have demonstrated promising performance in classifying breast lesions in 2D ultrasound (US) images. Existing approaches typically use pre-trained models based on architectures designed for natural images with transfer learning. Fewer attempts have been made to design customized architectures specifically for this purpose. This paper presents a comprehensive evaluation of transfer learning solutions and automatically designed networks, analyzing the accuracy and robustness of the different recognition approaches in three folds. First, we develop six DCNN models based on transfer learning (BNet, GNet, SqNet, DsNet, RsNet, IncReNet). Second, we adapt a Bayesian optimization method to optimize a CNN network (BONet) for the classification of breast lesions. A retrospective dataset of 3034 US images collected from various hospitals is then used for the evaluation. Extensive tests show that BONet outperforms the other models, exhibiting higher accuracy (83.33%), a lower generalization gap (1.85%), shorter training time (66 min), and less model complexity (approximately 0.5 million weight parameters). We also compare the diagnostic performance of all models against that of experienced radiologists. Finally, we explore saliency maps to explain the classification decisions of the models. Our investigation shows that saliency maps can assist in comprehending the models' classification decisions.
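The transfer-learning baseline this abstract contrasts with the optimized BONet can be illustrated with the following minimal sketch (not the authors' BNet/GNet/.../BONet code): an ImageNet-pretrained backbone is frozen and only a new two-class head is trained for benign-vs-malignant lesion classification. The backbone choice, torchvision >= 0.13 weights API, and the dummy batch are assumptions for illustration.

```python
# Sketch: transfer learning with a frozen pretrained backbone and a new 2-class head.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights="IMAGENET1K_V1")    # pretrained on natural images
for p in model.parameters():                         # freeze the backbone
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)        # new head: benign / malignant

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of grayscale US images
# replicated to 3 channels (shapes and labels are placeholders).
images = torch.rand(8, 1, 224, 224).repeat(1, 3, 1, 1)
labels = torch.randint(0, 2, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"dummy-batch loss: {loss.item():.3f}")
```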

Language: English

Citations

11

Certainty weighted voting-based noise correction for crowdsourcing DOI
Huiru Li, Liangxiao Jiang, Chaoqun Li

et al.

Pattern Recognition, Journal Year: 2024, Volume and Issue: 150, P. 110325 - 110325

Published: Feb. 8, 2024

Language: English

Citations

4

Emotion Fusion-Sense (Emo Fu-Sense) – A novel multimodal emotion classification technique DOI
Muhammad Umair, Nasir Rashid, Umar Shahbaz Khan

et al.

Biomedical Signal Processing and Control, Journal Year: 2024, Volume and Issue: 94, P. 106224 - 106224

Published: March 28, 2024

Language: English

Citations

4

iDDCS: Cervical intraepithelial neoplasia severity detection during vagina birth using artificial intelligence approach DOI
Akanksha Kapruwan, Sachin Sharma, Himanshu Rai Goyal

et al.

Multidisciplinary Science Journal, Journal Year: 2025, Volume and Issue: 7(8), P. 2025192 - 2025192

Published: Feb. 2, 2025

To diagnose cervical cancer, this paper uses patient Pap tests to look for abnormal tissue growth in the cervix region of pregnant women. By undergoing routine tests to identify any precancers and treating them, cervical cancer can be avoided. The test scans cells for unusual or dysplasia-related alterations. However, traditional manual examination of smears under a microscope is vulnerable to error. To effectively categorize cells, a brand-new framework built on powerful features, combining support vector machine (SVM) models with convolutional neural networks, is proposed in this paper. The performance of the algorithms was assessed based on various evaluation metrics, including accuracy, sensitivity, specificity, false positive rate, negative predictive value, F score, error rate, and training time. Among the three CNN models tested, Faster R-CNN achieved an accuracy of 93.7%, SSD reached 95.9%, and YOLOv8 had the highest accuracy of 96.6% for image detection. For the SVM algorithm's classification and detection capabilities, the average rates for CIN1, CIN2, and CIN3 were 90%, 89%, and 81%, respectively, as per the dataset. The results suggested that the CNN-SVM model with robust features might be used to detect CIN and CIN-C at early stages. This novel method is the most effective among other unsupervised methods for diagnosis. We recommend a revolutionary approach that combines NLP technology with iDDCS. Another goal of the research is to use NLP to extract relevant information from medical records, pathology reports, and clinical statistics.
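The CNN-feature-plus-SVM idea described in this abstract can be sketched as follows (not the iDDCS implementation): an SVM is trained on feature vectors that are assumed to come from a CNN backbone, with synthetic features and CIN1/CIN2/CIN3 labels standing in for real Pap-smear embeddings.

```python
# Sketch: SVM classification of CNN-style feature vectors into CIN grades.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, classification_report

rng = np.random.default_rng(42)
n_per_class, n_features = 100, 128
# Synthetic stand-ins for CNN embeddings of Pap-smear cell images, one cluster per grade
X = np.vstack([rng.normal(loc=c, scale=1.0, size=(n_per_class, n_features))
               for c in (0.0, 1.0, 2.0)])
y = np.repeat(["CIN1", "CIN2", "CIN3"], n_per_class)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

clf = SVC(kernel="rbf", C=1.0, gamma="scale")   # SVM on the CNN-style features
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)

print(f"accuracy: {accuracy_score(y_test, y_pred):.3f}")
print(classification_report(y_test, y_pred))
```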

Language: English

Citations

0