SSRN Electronic Journal, Journal Year: 2024, Volume and Issue: unknown
Published: Jan. 1, 2024
Language: English
SSRN Electronic Journal, Journal Year: 2024, Volume and Issue: unknown
Published: Jan. 1, 2024
Language: English
Deleted Journal, Journal Year: 2024, Volume and Issue: 20(2), P. 573 - 583
Published: April 8, 2024
According to the characteristics of learning styles, this paper classifies traditional cultural classics by school. Audio, video, text, pictures and other forms of material are collected to match students' learning styles. The system uses a three-layer structure built with MyBatis + Spring MVC, Ajax + FreeMarker + JS, a MySQL database, the Apache Tomcat 8 server, the Eclipse development platform, and the WebStorm front-end tool. In addition, auxiliary tools such as Camtasia Studio and Photoshop CS5 are used to generate a prototype system for traditional culture learning. Its data processing is fast.
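As a rough illustration of the three-layer stack described in this abstract, the sketch below shows a Spring MVC controller delegating to a MyBatis mapper that selects learning resources by media type; the class, table, endpoint, and column names are assumptions made for illustration, not code from the paper.

```java
package demo.culture;

import java.util.List;
import org.apache.ibatis.annotations.Mapper;
import org.apache.ibatis.annotations.Select;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

// Data object for one classic-culture learning resource (audio, video, text, or picture).
class Resource {
    public Long id;
    public String title;
    public String mediaType;   // e.g. "audio", "video", "text", "picture"
    public String school;      // classical school the resource is filed under
}

// Persistence layer: MyBatis mapper backed by a MySQL table (table and column names assumed).
@Mapper
interface ResourceMapper {
    @Select("SELECT id, title, media_type AS mediaType, school "
          + "FROM resource WHERE media_type = #{mediaType}")
    List<Resource> findByMediaType(String mediaType);
}

// Presentation layer: Spring MVC controller answering Ajax requests with JSON.
@RestController
class ResourceController {
    private final ResourceMapper mapper;

    ResourceController(ResourceMapper mapper) {
        this.mapper = mapper;
    }

    // Example: GET /resources?mediaType=video serves visually oriented learning styles.
    @GetMapping("/resources")
    List<Resource> byMediaType(@RequestParam String mediaType) {
        return mapper.findByMediaType(mediaType);
    }
}
```

In this layering, the controller answers the Ajax requests issued by the FreeMarker/JS front end, while the mapper encapsulates access to the MySQL store.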
Language: English
Citations: 0
Published: Aug. 22, 2024
Language: English
Citations: 0
Journal of Engineering Research and Reports, Journal Year: 2024, Volume and Issue: 26(12), P. 24 - 46
Published: Nov. 27, 2024
This study investigates the efficacy of synthetic data in mitigating bias in artificial intelligence (AI) model training, focusing on demographic inclusivity and fairness. Using Generative Adversarial Networks (GANs), synthetic datasets were generated from the UCI Adult Dataset, the COMPAS Recidivism dataset, and the MIMIC-III Clinical Database. Logistic regression models were trained on both the original and synthetic data to evaluate fairness metrics and predictive accuracy. Fairness was assessed through demographic parity and equality of opportunity, which measure balanced prediction rates and equitable outcomes across groups. Fidelity and diversity were evaluated using statistical tests such as the Kolmogorov-Smirnov (KS) test and Kullback-Leibler (KL) divergence, along with the Inception Score, which quantifies the diversity of the generated data. The results revealed significant improvements in fairness for the synthetic datasets. For one dataset, demographic parity increased from 0.72 to 0.89 and equality of opportunity rose from 0.65 to 0.83, without compromising accuracy (0.82 AUC-ROC compared with 0.83 on the original data). Based on these findings, this research recommends employing GANs to generate synthetic data in bias-sensitive domains to enhance fairness and ensure inclusive AI models. Furthermore, integrating human-in-the-loop (HITL) systems is critical to monitor and address residual biases during data generation. Standardized validation frameworks, including fidelity and fairness tests, should be adopted to ensure transparency and consistency across applications. These practices can enable organizations to leverage synthetic data effectively while maintaining ethical standards in AI development and deployment.
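The two fairness metrics named in this abstract can be made concrete with a small sketch. The self-contained Java example below (an illustration using standard definitions and assumed array-based inputs, not code from the study) computes demographic parity and equal opportunity as ratios of positive-prediction and true-positive rates between a protected group and the rest.

```java
// Illustrative only; metric definitions follow common fairness literature, not the study's code.
public final class FairnessMetrics {

    // Demographic parity ratio: P(pred = 1 | group) / P(pred = 1 | not group).
    static double demographicParity(boolean[] pred, boolean[] group) {
        return rate(pred, group, null, true) / rate(pred, group, null, false);
    }

    // Equal opportunity ratio: true-positive rate of the group divided by the
    // true-positive rate of everyone else.
    static double equalOpportunity(boolean[] pred, boolean[] label, boolean[] group) {
        return rate(pred, group, label, true) / rate(pred, group, label, false);
    }

    // Positive-prediction rate for group members (inGroup == true) or non-members,
    // optionally restricted to samples whose true label is positive (for the TPR).
    private static double rate(boolean[] pred, boolean[] group, boolean[] label, boolean inGroup) {
        int positives = 0, total = 0;
        for (int i = 0; i < pred.length; i++) {
            if (group[i] != inGroup) continue;
            if (label != null && !label[i]) continue;  // keep only label == 1 for TPR
            total++;
            if (pred[i]) positives++;
        }
        return total == 0 ? Double.NaN : (double) positives / total;
    }

    public static void main(String[] args) {
        // Toy example: six samples with predictions, true labels, and protected-group membership.
        boolean[] pred  = {true, false, true, true, false, true};
        boolean[] label = {true, false, true, true, true, false};
        boolean[] group = {true, true, true, false, false, false};
        System.out.println("demographic parity ratio: " + demographicParity(pred, group));
        System.out.println("equal opportunity ratio:  " + equalOpportunity(pred, label, group));
    }
}
```

A ratio of 1.0 indicates equal rates across groups; values reported on a 0-1 scale, as in the abstract, typically divide the smaller rate by the larger one.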
Language: English
Citations: 0
SSRN Electronic Journal, Journal Year: 2024, Volume and Issue: unknown
Published: Jan. 1, 2024
Language: English
Citations: 0
SSRN Electronic Journal, Journal Year: 2024, Volume and Issue: unknown
Published: Jan. 1, 2024
Language: English
Citations: 0