Enhancing Cross Language for English-Telugu pairs through the Modified Transformer Model based Neural Machine Translation

Vaishnavi Sadula,

D. Ramesh

International Journal of Computational and Experimental Science and Engineering, Journal year: 2025, Issue: 11(2)

Published: April 16, 2025

Cross-Language Translation (CLT) refers to automated systems that generate translations between natural languages without human involvement. Because most resources are available in English, multilingual translation is badly needed for education to penetrate deep into society. Neural Machine Translation (NMT) is one such intelligent technique, usually deployed as an efficient process for translating from a source language into another language. However, NMT techniques require large parallel corpora to achieve improved results, a bottleneck that makes them harder to apply to mid-resource languages compared with their dominant English counterparts. Although some languages benefit from established systems, building NMT for low-resource languages remains a challenge due to their intricate morphology and the lack of parallel data. To overcome this problem, this article proposes a modified transformer architecture to improve the efficiency of NMT. The proposed framework consists of an enhanced Encoder-Decoder with multiple fast feed-forward networks and multi-headed soft-attention networks. The designed model extracts word patterns from a parallel corpus during training, forming an English–Telugu vocabulary from a Kaggle dataset, and its effectiveness is evaluated using measures such as Bilingual Evaluation Understudy (BLEU), character-level F-score (chrF), and Word Error Rate (WER). To demonstrate the strength of the model, an extensive comparison with existing architectures across these performance metrics is analysed. The outcomes show improved performance, achieving a BLEU score of 0.89 and a low WER compared with existing models. These experimental results hold strong promise for further experimentation based on the proposed model.
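
As a minimal illustration (not taken from the paper), the reported metrics BLEU, chrF, and WER can be computed with the sacrebleu and jiwer Python libraries; the sample sentences below are invented placeholders, not the English–Telugu corpus used in the study.

```python
# Sketch: computing BLEU, chrF, and WER for a translation hypothesis.
# The sentences are illustrative placeholders only.
import sacrebleu
from jiwer import wer

hypotheses = ["the cat sat on the mat"]          # model translations
references = ["the cat is sitting on the mat"]   # reference translations

bleu = sacrebleu.corpus_bleu(hypotheses, [references])   # reported on a 0-100 scale
chrf = sacrebleu.corpus_chrf(hypotheses, [references])
word_error_rate = wer(references[0], hypotheses[0])       # 0.0-1.0, lower is better

print(f"BLEU: {bleu.score:.2f}  chrF: {chrf.score:.2f}  WER: {word_error_rate:.2f}")
```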

Language: English

Deep Learning Fusion for Student Academic Prediction Using ARLMN Ensemble Model

B. Vaidehi,

K. Arunesh

International Journal of Computational and Experimental Science and Engineering, Journal year: 2025, Issue: 11(2)

Published: March 21, 2025

Accurate prediction of student performance in the educational domain is a critical capability for the timely implementation of intervention strategies and supplementary support mechanisms. This research proposes the Adaptive Recurrent Logistic Memory Network (ARLMN), an innovative approach to academic prediction. The ARLMN combines a Recurrent Neural Network (RNN), a Long Short-Term Memory (LSTM) network, and a Sigmoid Plus Adaptive Activation Function (S-AAF). The integrated system achieves an impressive accuracy of approximately 98%. By incorporating these methodologies, the model captures temporal dependencies and patterns in student data, including academic, demographic, and emotional information. Pre-processing involves standardizing the features before feeding them into the RNN and LSTM models, which are then combined using the S-AAF classifier for robust predictions. Experimental results demonstrate the effectiveness of the approach, achieving high forecasting performance. By identifying the factors that impact performance, the model empowers educators to intervene proactively and ensure student success.
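
For illustration only, the sketch below shows a simplified RNN + LSTM classification pipeline of the kind described, using Keras and scikit-learn; the layer sizes, the synthetic data, and the plain sigmoid output (standing in for the unspecified S-AAF) are assumptions, not the authors' implementation.

```python
# Sketch (assumptions): standardized features fed into an RNN + LSTM stack
# with a sigmoid classifier as a stand-in for the paper's S-AAF.
import numpy as np
from sklearn.preprocessing import StandardScaler
from tensorflow.keras import layers, models

# Toy data: 200 students, 8 time steps (e.g., terms), 5 features each.
X = np.random.rand(200, 8, 5).astype("float32")
y = np.random.randint(0, 2, size=(200,))          # pass / fail labels

# Standardize features before feeding them to the recurrent layers.
scaler = StandardScaler()
X = scaler.fit_transform(X.reshape(-1, 5)).reshape(200, 8, 5)

model = models.Sequential([
    layers.Input(shape=(8, 5)),
    layers.SimpleRNN(32, return_sequences=True),   # RNN branch
    layers.LSTM(32),                               # LSTM captures longer dependencies
    layers.Dense(1, activation="sigmoid"),         # stand-in for the S-AAF classifier
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
```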

Language: English

Cited by

2

AI-Driven Cybersecurity: Enhancing Threat Detection and Mitigation with Deep Learning
V. Saravanan, Khushboo Tripathi,

K. Santhosh,

et al.

International Journal of Computational and Experimental Science and Engineering, Journal year: 2025, Issue: 11(2)

Published: March 23, 2025

AI-driven cybersecurity has emerged as a transformative solution for combating increasingly sophisticated cyber threats. This research proposes an advanced deep learning-based framework aimed at enhancing threat detection and mitigation performance. Leveraging Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) architectures, the proposed model effectively identifies anomalies and classifies potential threats with high accuracy and minimal false positives. The model was rigorously evaluated on real-time network traffic datasets, demonstrating a notable increase in detection accuracy of 18.5%, reaching 97.4% compared with 78.6% for traditional machine learning methods. Additionally, response time was reduced by 25% and computational overhead decreased by 30%, improving overall system responsiveness. Experimental results further show a 40% reduction in downtime incidents due to faster threat identification. This proactive approach thus provides substantial improvements across security performance metrics, underscoring its robustness in dynamic threat landscapes.
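
A hedged sketch of a generic CNN + LSTM threat classifier of the kind the abstract describes is given below; the window length, feature count, and layer sizes are illustrative assumptions, and the synthetic arrays merely stand in for real network traffic data.

```python
# Sketch (assumptions): Conv1D layers extract local traffic patterns,
# an LSTM layer models temporal dependencies, and a sigmoid head scores threats.
import numpy as np
from tensorflow.keras import layers, models

# Toy input: 1000 traffic windows, each 50 time steps with 20 flow features.
X = np.random.rand(1000, 50, 20).astype("float32")
y = np.random.randint(0, 2, size=(1000,))          # 0 = benign, 1 = threat

model = models.Sequential([
    layers.Input(shape=(50, 20)),
    layers.Conv1D(64, kernel_size=3, activation="relu"),  # local pattern extraction
    layers.MaxPooling1D(pool_size=2),
    layers.LSTM(64),                                       # temporal dependencies
    layers.Dense(1, activation="sigmoid"),                 # threat probability
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=3, batch_size=64, verbose=0)
```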

Language: English

Cited by

2

Hyper Capsule LSTM-Gated GAN with Bayesian Optimized SVM for Cloud-based Stock Market Price Prediction in Big Data Environments

M. Nivetha,

C. Jeganathan, Abdelkrim Khadir

et al.

International Journal of Computational and Experimental Science and Engineering, Journal year: 2025, Issue: 11(2)

Published: March 25, 2025

In the modern era, big data is a brand-new and developing buzzword. With the significant expansion of finance and business growth forecasting, the stock market remains a dynamic, ever-evolving, unpredictable, and fascinatingly promising specialty. This study presents a novel approach for enhancing forecast accuracy through optimal feature selection combined with deep learning techniques. By employing an artificial intelligence method to identify and select the most relevant features influencing stock prices, we mitigate the risk of overfitting and improve model interpretability. We propose an advanced methodology called the Hyper Capsule LSTM Gated Generative Adversarial Network (HCG-GAN) with a Bayesian Optimized Support Vector Machine (BOSVM) for price prediction, which is well suited to time-series data. A comparative analysis is conducted to evaluate the performance of our model against traditional prediction methods. The preliminary process consists of price log normalization using min-max and z-score normalizers. Then the Active distinction impact rate (ASDIR) is estimated to find the scaling factor of mean changes. The proposed model is compared with the benchmark models CNN-LSTM, DLSTMNN, and ANN-RF on the evaluation metrics accuracy, precision, recall, F1-score, AUC-ROC, PR-AUC, and MCC. Results indicate that the integration not only boosts accuracy but also ensures robustness under market volatility. This work contributes to the growing body of literature on artificial intelligence applications in finance, offering insights that can significantly enhance trading strategies and investment decisions.
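
As a rough sketch under stated assumptions, the snippet below reproduces only the normalization step (log, min-max, z-score) and a Bayesian-optimized SVM regressor using scikit-learn and scikit-optimize; the HCG-GAN component and ASDIR scaling are not reproduced, and the synthetic data is purely illustrative.

```python
# Sketch (assumptions): price normalization followed by Bayesian hyperparameter
# search over an SVR, as a stand-in for the paper's BOSVM stage.
import numpy as np
from sklearn.preprocessing import MinMaxScaler, StandardScaler
from sklearn.svm import SVR
from skopt import BayesSearchCV

# Toy data: 500 samples with 10 lagged price features each.
X = np.random.rand(500, 10)
y = np.random.rand(500)

# Log normalization followed by min-max and z-score scaling.
X = np.log1p(X)
X = MinMaxScaler().fit_transform(X)
X = StandardScaler().fit_transform(X)

# Bayesian search over SVR hyperparameters.
search = BayesSearchCV(
    SVR(kernel="rbf"),
    {"C": (1e-2, 1e3, "log-uniform"), "gamma": (1e-4, 1e1, "log-uniform")},
    n_iter=25,
    cv=3,
)
search.fit(X, y)
print("Best SVR parameters:", search.best_params_)
```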

Language: English

Cited by

0
