BlockStream Solutions: Enhancing Cloud Storage Efficiency and Transparency through Blockchain Technology DOI Creative Commons

Rama Krishna K, M. Pounambal, Jaibir Singh et al.

International Journal of Electrical and Electronics Engineering, Journal Year: 2024, Volume and Issue: 11(7), P. 134 - 147

Published: July 31, 2024

This paper introduces the BlockStream model, a novel integration of blockchain technology into cloud storage systems aimed at addressing the core challenges of security, efficiency, and transparency. The research methodology encompasses comprehensive system design and implementation, using synthetic datasets for performance evaluation against traditional solutions. Key findings reveal that the model significantly improves data deduplication rates and space utilization, surpassing existing models by up to 15%. Moreover, it achieves a notable reduction in retrieval times, improving on the most efficient existing systems by 7.14%, and demonstrates superior security capabilities, particularly resistance to DDoS attacks and prevention of unauthorized access, markedly outperforming baseline models. The significance of this work lies in its potential to revolutionize cloud storage paradigms by offering a scalable, secure, and user-centric data management solution. Quantitatively, the model not only reduces average retrieval times from 400 ms to 320 ms compared with current leading solutions but also raises security robustness to previously unattained levels, marking a significant advancement in the field. These enhancements, underpinned by the decentralized, immutable, and transparent nature of blockchain, present a compelling case for rethinking the architecture and operation of cloud storage systems.

Language: English
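The abstract above does not spell out the deduplication mechanism behind the reported space savings; such gains are typically obtained through content-addressed chunk storage, where identical chunks hash to the same key and are written only once. A minimal sketch, assuming a generic chunked store (function names and chunking are illustrative, not from the paper):

```python
import hashlib

def dedupe_store(chunks, store=None):
    """Store chunks under their SHA-256 digest; duplicate chunks
    hash to an existing key and cost no additional space."""
    store = {} if store is None else store
    manifest = []                        # ordered digests reconstruct the file
    for chunk in chunks:
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)  # write only if unseen
        manifest.append(digest)
    return manifest, store

def restore(manifest, store):
    """Rebuild the original byte stream from its manifest."""
    return b"".join(store[d] for d in manifest)

# Two "files" sharing two chunks: only four unique chunks are stored.
file_a = [b"header", b"shared-block", b"tail-a"]
file_b = [b"header", b"shared-block", b"tail-b"]
m_a, store = dedupe_store(file_a)
m_b, store = dedupe_store(file_b, store)
print(len(store))                               # 4
print(restore(m_a, store) == b"".join(file_a))  # True
```

On a blockchain-backed design, the manifest (not the chunk data) is what would be anchored on-chain for integrity and transparency.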

CoralMatrix: A Scalable and Robust Secure Framework for Enhancing IoT Cybersecurity DOI Open Access

Srikanth Reddy Vutukuru, Srinivasa Chakravarthi Lade

International Journal of Computational and Experimental Science and Engineering, Journal Year: 2025, Volume and Issue: 11(1)

Published: Jan. 7, 2025

In the current age of digital transformation, the Internet of Things (IoT) has revolutionized everyday objects, and IoT gateways play a critical role in managing data flow within these networks. However, the dynamic and extensive nature of IoT networks presents significant cybersecurity challenges that necessitate the development of adaptive security systems to protect against evolving threats. This paper proposes the CoralMatrix Security framework, a novel approach that employs advanced machine learning algorithms. The framework incorporates the AdaptiNet Intelligence Model, which integrates deep reinforcement learning for effective real-time threat detection and response. To comprehensively evaluate the framework's performance, this study used the N-BaIoT dataset, and the resulting quantitative analysis provided valuable insights into the model's capabilities. The results demonstrate the framework's robustness across various dimensions of cybersecurity. Notably, it achieved a high accuracy rate of approximately 83.33%, highlighting its effectiveness in identifying and responding to threats in real time. Additionally, the research quantitatively assessed the framework's scalability, adaptability, and resource efficiency across diverse cyber-attack types to provide a comprehensive understanding of its behavior. Future work will optimize the framework for larger networks and adapt it continuously to emerging threats, with the aim of expanding its application scenarios. With the proposed algorithms, CoralMatrix emerges as a promising, efficient, effective, and scalable solution for IoT cybersecurity.

Language: English

Citations: 11
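The AdaptiNet model described above couples deep learning with reinforcement learning; the decision loop behind real-time threat response can be sketched with a far simpler tabular Q-learning bandit. The states, actions, and rewards below are invented for illustration and are not from the paper or the N-BaIoT dataset:

```python
import random

# Tabular sketch of the reinforcement idea: states are coarse traffic
# profiles, actions are allow/block, rewards favor blocking attacks.
random.seed(0)
STATES = ["benign", "scan", "flood"]
ACTIONS = ["allow", "block"]
REWARD = {("benign", "allow"): 1, ("benign", "block"): -1,
          ("scan", "allow"): -1,  ("scan", "block"): 1,
          ("flood", "allow"): -2, ("flood", "block"): 1}

q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha, epsilon = 0.5, 0.1
for _ in range(2000):
    s = random.choice(STATES)
    if random.random() < epsilon:
        a = random.choice(ACTIONS)                       # explore
    else:
        a = max(ACTIONS, key=lambda act: q[(s, act)])    # exploit
    q[(s, a)] += alpha * (REWARD[(s, a)] - q[(s, a)])    # one-step update

policy = {s: max(ACTIONS, key=lambda act: q[(s, act)]) for s in STATES}
print(policy)   # learned: allow benign traffic, block scans and floods
```

A deep variant replaces the Q-table with a network over raw traffic features, which is what lets the real framework generalize to unseen attack patterns.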

Lighting-Resilient Pedestrian Trajectory Prediction: A Hybrid Vision Transformer and Convolutional LSTM Approach with Dynamic Lighting Augmentation DOI Creative Commons

J. Premasagar, Sudha Pelluri

Research Square (Research Square), Journal Year: 2025, Volume and Issue: unknown

Published: March 10, 2025

Pedestrian trajectory prediction in dynamic and variably lit environments presents significant challenges for traditional models, which often struggle to maintain accuracy and robustness under such conditions. To address these limitations, we propose a novel hybrid model that integrates Vision Transformers (ViTs) with convolutional Long Short-Term Memory (ConvLSTM) networks. The model leverages the global contextual awareness of ViTs and the spatiotemporal modeling capabilities of ConvLSTM to enhance prediction accuracy. The proposed model is further strengthened by incorporating lighting-condition augmentation and contrastive learning, which improve its generalization across diverse real-world scenarios. Our extensive evaluation on the KAIST Multispectral Dataset demonstrates that the model significantly outperforms existing approaches, including Social-LSTM and S-GAN, on key performance metrics. Specifically, it achieves a low Mean Squared Error (MSE) of 0.035 and Root Mean Squared Error (RMSE) of 0.187, along with an Average Displacement Error (ADE) of 0.25 meters and a Final Displacement Error (FDE) of 0.40 meters. Additionally, the model's Trajectory Consistency Score (TCS) of 0.92 and Lighting Variability Robustness (LVR) score of 0.88 underscore its ability to make accurate, consistent predictions under varying illumination. Although the model sets a new benchmark for pedestrian trajectory prediction, it requires substantial computational resources for training and may need optimization for deployment in real-time applications. Future work will focus on handling extreme weather conditions and occlusions, as well as improving computational efficiency. This study contributes to the advancement of the field by offering a robust, adaptable solution for complex environments.

Language: English

Citations: 0
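The ADE and FDE figures quoted above are standard trajectory-prediction metrics: ADE averages the Euclidean error over every timestep, while FDE keeps only the final timestep. A minimal sketch of the computation, with toy coordinates rather than dataset values:

```python
import math

def ade_fde(pred, truth):
    """Average and Final Displacement Error for one 2-D trajectory.

    pred, truth: equal-length sequences of (x, y) positions.
    Returns (ADE, FDE): mean per-step Euclidean error, and the
    error at the last predicted step.
    """
    dists = [math.dist(p, t) for p, t in zip(pred, truth)]
    return sum(dists) / len(dists), dists[-1]

pred  = [(0.0, 0.0), (1.1, 0.1), (2.0, 0.3)]
truth = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
ade, fde = ade_fde(pred, truth)
print(round(ade, 3), round(fde, 3))   # 0.147 0.3
```

In benchmark practice both metrics are averaged over all test pedestrians, and multi-sample models report the best-of-N variant.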

Transformers-Based Multimodal Deep Learning for Real-Time Disaster Forecasting and Adaptive Climate Resilience Strategies DOI Open Access
Srinivasa Rao Dhanikonda, Madhavi Pingili, P Jayaselvi et al.

International Journal of Computational and Experimental Science and Engineering, Journal Year: 2025, Volume and Issue: 11(2)

Published: April 3, 2025

Real-time disaster forecasting must become more capable and more accessible as disasters increase in frequency and severity. Traditional predictions rely on conventional disaster-forecasting methods, namely numerical weather prediction (NWP) models and remote sensing techniques, which are computationally inefficient, data-sparse, and unable to adapt to dynamic environmental changes. To overcome these limitations, this research presents a Transformer-based multimodal deep learning model that combines multiple existing data sources, ranging from satellite imagery, IoT sensor networks, and meteorological observations to social media analytics. The model employs a multimodal fusion strategy, enabling feature selection and seamless integration of heterogeneous data streams. In contrast to conventional deep learning architectures such as CNNs and LSTMs, the Transformer-based model captures long-range dependencies well while reducing latency through lightweight inference and better computational efficiency. The results show 94% accuracy, 91% precision, and a 40% reduction in inference time, making the model suitable for real-time forecasting. This methodological advancement contributes to AI-driven climate resilience. Future work will explore model variants, further data integration, and explainable AI (XAI) techniques for interpretability and scalability. The findings point to the transformative potential of such models for climate adaptation and provide a robust foundation for next-generation early-warning systems and risk mitigation across sectors.

Language: English

Citations: 0
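The abstract above does not detail its multimodal fusion strategy; Transformer-style fusion generally rests on scaled dot-product attention, where a query re-weights the contribution of each source before the fused representation is formed. A minimal sketch; the modality names and embeddings below are toy values, not model outputs:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    """Scaled dot-product attention: score each key against the query,
    normalize, and blend the values by the resulting weights."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    fused = [sum(w * v[i] for w, v in zip(weights, values))
             for i in range(len(values[0]))]
    return weights, fused

# Toy 2-D embeddings for three data sources (illustrative only).
satellite, sensors, social = [0.9, 0.1], [0.2, 0.8], [0.5, 0.5]
weights, fused = attention(query=[1.0, 0.0],
                           keys=[satellite, sensors, social],
                           values=[satellite, sensors, social])
print([round(w, 2) for w in weights])  # satellite gets the largest weight
```

In a full model the queries, keys, and values are learned projections of each modality's encoder output, and many such heads run in parallel.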

Machine Learning-Based Optimization for 5G Resource Allocation Using Classification and Regression Techniques DOI Open Access

E. V. N. Jyothi, Jaibir Singh, Suman Rani et al.

International Journal of Computational and Experimental Science and Engineering, Journal Year: 2025, Volume and Issue: 11(2)

Published: April 24, 2025

The rapid evolution of 5G networks necessitates efficient, adaptive resource allocation strategies to enhance network performance, minimize latency, and optimize bandwidth utilization. This study systematically evaluates multiple machine learning (ML) models, including Neural Networks, Support Vector Machines (SVMs), Decision Trees, Ensemble Learning, and Regression-based approaches, to determine the most effective techniques for resource allocation. The classification-based models demonstrated superior performance in predicting congestion states, with Boosted Trees achieving the highest accuracy (94.1%), outperforming Bagged Trees (92.7%) and RUSBoosted Trees (93.8%). Among the SVM classifiers, the Gaussian kernel exhibited the best accuracy (92.3%), highlighting its robustness in handling non-linearly separable data. Levenberg-Marquardt-trained Neural Networks (93.4%) outperformed the SVMs in overall accuracy, emphasizing deep learning's effectiveness in hierarchical feature representation. Meanwhile, among the regression-based approaches, Gradient Boosting (R² = 0.96, MSE = 4.92) delivered the best predictive performance for continuous optimization, surpassing Random Forest (R² = 0.94, MSE = 6.85) and Polynomial Regression (R² = 0.92, MSE = 9.21). The integration of Self-Organizing Maps (SOMs) for unsupervised clustering further improved network segmentation. Future research should explore Deep Reinforcement Learning (DRL) for autonomous optimization and Explainable AI (XAI) for interpretability in real-world deployments.

Language: English

Citations: 0
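The R² and MSE figures used above to rank the regression models have standard definitions: MSE is the mean squared residual, and R² is one minus the ratio of residual to total variance. A small pure-Python sketch with hypothetical throughput values (not from the study):

```python
def mse(y_true, y_pred):
    """Mean squared error between targets and predictions."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def r2(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

# Hypothetical throughput targets (Mbps) vs. one model's predictions.
actual    = [50, 60, 70, 80, 90]
predicted = [52, 58, 71, 77, 91]
print(round(mse(actual, predicted), 2), round(r2(actual, predicted), 3))
# 3.8 0.981
```

Because R² is scale-free while MSE is in squared units, reporting both (as the study does) separates relative fit from absolute error magnitude.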

Enhancing Lossless Image Compression through Smart Partitioning, Selective Encoding, and Wavelet Analysis DOI Open Access

Sri Raghavendra M, Bindu Swetha Pasuluri, K. Sreenivasulu et al.

International Journal of Electronics and Communication Engineering, Journal Year: 2024, Volume and Issue: 11(5), P. 207 - 219

Published: May 31, 2024

This paper presents a cutting-edge algorithmic framework for lossless image compression, directly addressing the efficiency limitations and quality compromises inherent in existing compression models. Traditional approaches often fail to balance compression efficiency with quality retention across images of varying complexity, leading to degraded fidelity. Our proposed framework distinguishes itself by adeptly integrating smart partitioning, selective encoding, and wavelet coefficient analysis, thereby achieving marked improvements in compression without sacrificing quality. Essential to the framework's efficacy is a methodical approach to preprocessing, which ensures images are in an optimal state for processing. Through rigorous evaluation against industry standards such as JPEG2000 and PNG, the model demonstrated exceptional performance: compression ratios of up to 4.2:1, Peak Signal-to-Noise Ratios (PSNR) of up to 49 dB on low-complexity images, and Structural Similarity Index (SSIM) values as high as 0.99. These quantitative outcomes underline not only the model's superior compression capability but also its robustness in preserving the structural and perceptual quality of images of varying complexity. The significance of this research lies in its potential to redefine benchmarks within the domain, as evidenced by the reported metrics. Further exploration into machine-learning-driven partitioning automation, real-time adaptive encoding mechanisms, and expanded applicability promises to optimize the framework further. Ultimately, this study lays a foundation for future advances in digital image management, addressing the critical need for high-efficiency, quality-conserving compression solutions.

Language: English

Citations: 0
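For the compression above to stay lossless, the wavelet coefficient analysis must be exactly invertible in integer arithmetic; the integer Haar lifting step (the S-transform) is the textbook way to achieve this, and is shown here as a sketch with illustrative pixel values (the paper's actual wavelet is not specified in the abstract):

```python
def haar_forward(xs):
    """One level of integer Haar lifting (S-transform) on an
    even-length sequence: returns (approximations, details).
    Exactly invertible, so it supports lossless coding."""
    avgs, diffs = [], []
    for i in range(0, len(xs), 2):
        a, b = xs[i], xs[i + 1]
        d = a - b                 # detail coefficient
        s = b + (d >> 1)          # floor((a + b) / 2), integer average
        avgs.append(s)
        diffs.append(d)
    return avgs, diffs

def haar_inverse(avgs, diffs):
    """Exact inverse of haar_forward: recovers the original integers."""
    out = []
    for s, d in zip(avgs, diffs):
        b = s - (d >> 1)
        a = b + d
        out += [a, b]
    return out

pixels = [12, 10, 14, 200, 15, 13, 9, 9]
avgs, diffs = haar_forward(pixels)
print(diffs)                                 # [2, -186, 2, 0]
print(haar_inverse(avgs, diffs) == pixels)   # True
```

Smooth regions produce near-zero details that entropy-code cheaply, while the large detail flags the edge (14 to 200), which is exactly the property a selective-encoding stage can exploit.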
