Optimizing Building Energy Management with Deep Reinforcement Learning for Smart and Sustainable Infrastructure DOI Creative Commons

Nabeel S. Alsharafa,

R. Suguna,

Raguru Jaya Krishna

et al.

Journal of Machine and Computing, Journal Year: 2024, Volume and Issue: unknown, P. 381 - 391

Published: April 5, 2024

This study develops a new technique for optimising Energy Consumption (EC) and occupant satisfaction in business centres using Building Energy Management Systems (BEMS) that implement Deep Reinforcement Learning (DRL). Energy Management Models (EMM) are growing increasingly advanced and vital to intelligent power systems, driven by the demand for energy efficiency and the adoption of Renewable Sources (RES), which are subject to variability. Traditional BEMS typically suffer from unpredictability and a failure to adapt to changing environments. In the proposed investigation, a DRL framework is demonstrated that can evolve its decision-making in real time, controlling electricity use and HVAC operation for energy savings through feedback from the environment in which it operates. A pair of significant metrics, namely cost and room temperature stability, are employed to assess the effectiveness of the model compared with conventional rule-driven and predictive systems. Evaluated against different baseline models, the experimental findings show that the approach significantly reduced electricity use while maintaining stable levels of comfort.
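The abstract does not specify the agent architecture or reward design, so the following is only a minimal sketch of the general idea: a reinforcement-learning controller adjusting HVAC power against a reward that trades off electricity cost and temperature stability. The toy thermal model, tabular Q-learning update, price, and comfort weights are all assumptions for illustration, not the paper's method.

```python
# Minimal sketch of RL-based HVAC control: a toy building thermal model and a
# tabular Q-learning agent; the paper's actual DRL network, state features and
# reward weights are not specified here and are assumed.
import numpy as np

N_TEMP_BINS = 21              # discretised indoor temperatures (18..38 C)
ACTIONS = [-1.0, 0.0, 1.0]    # hypothetical cooling/heating power adjustments (kW)
TARGET_T, PRICE = 22.0, 0.15  # comfort setpoint (C) and electricity price (per kWh)

def step(temp, action, outdoor=10.0):
    """Toy thermal response: drift towards outdoor temperature plus HVAC effect."""
    new_temp = temp + 0.1 * (outdoor - temp) + 0.5 * action
    energy = abs(action) * 0.25                                   # kWh per step
    reward = -PRICE * energy - 0.1 * (new_temp - TARGET_T) ** 2   # cost + comfort penalty
    return new_temp, reward

def to_state(temp):
    return int(np.clip(round(temp - 18.0), 0, N_TEMP_BINS - 1))

Q = np.zeros((N_TEMP_BINS, len(ACTIONS)))
alpha, gamma, eps = 0.1, 0.95, 0.1
rng = np.random.default_rng(0)

for episode in range(2000):
    temp = rng.uniform(18.0, 28.0)
    for _ in range(96):                               # one day at 15-minute steps
        s = to_state(temp)
        a = rng.integers(len(ACTIONS)) if rng.random() < eps else int(Q[s].argmax())
        temp, r = step(temp, ACTIONS[a])
        Q[s, a] += alpha * (r + gamma * Q[to_state(temp)].max() - Q[s, a])

print("Greedy action per temperature bin:", [ACTIONS[int(a)] for a in Q.argmax(axis=1)])
```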

Language: English

Granular Ball Fuzzy Neighborhood Rough Sets- Based Feature Selection via Multiobjective Mayfly Optimization DOI
Lin Sun,

Hanbo Liang,

Weiping Ding

et al.

IEEE Transactions on Fuzzy Systems, Journal Year: 2024, Volume and Issue: 32(11), P. 6112 - 6124

Published: Aug. 8, 2024

Language: English

Citations

4

Energy-efficient 3D deployment of AUV-enabled mobile relay in underwater acoustic sensor networks DOI
Hengyu Xu, Fang Ye,

Qian Sun

et al.

Ocean Engineering, Journal Year: 2025, Volume and Issue: 325, P. 120795 - 120795

Published: March 5, 2025

Language: English

Citations

0

Latency aware computation offloading and throughput maximization in DL/UL for IoT applications in fog networks DOI
Rabeea Basir, Humayun Zubair Khan, Naveed Ahmad Chughtai

et al.

The Journal of Supercomputing, Journal Year: 2025, Volume and Issue: 81(5)

Published: March 29, 2025

Language: English

Citations

0

Self-Improved Optimization Aided Bi-Gru Model for Resource Deployment & Deep Learning-Based Attack Detection on Cloud Data Centers DOI
B. Prabha, Tiago Zonta, Mithileysh Sathiyanarayanan

et al.

Published: Jan. 1, 2025

Language: English

Citations

0

Multi-Objective Reinforcement Learning for Virtual Machines Placement in Cloud Computing DOI Open Access

Chayan Bhatt,

Sunita Singhal

International Journal of Advanced Computer Science and Applications, Journal Year: 2024, Volume and Issue: 15(3)

Published: Jan. 1, 2024

The rapid demand for cloud services has prompted providers to efficiently resolve the problem of Virtual Machine Placement in the cloud. This paper presents a VM placement technique using Reinforcement Learning (VMRL) that aims to provide optimal resource and energy management in data centers. It provides better decision-making because it resolves the complexity caused by the trade-off among objectives, and is therefore useful for mapping requested VMs onto a minimum number of Physical Machines. An enhanced Tournament-based selection strategy along with Roulette Wheel sampling has been applied to ensure that optimization goes through balanced exploration and exploitation, thereby improving solution quality. Two heuristics have been used for ordering VMs, considering the impact of CPU and memory utilization on placement. Moreover, the concept of a Pareto approximate set is considered so that both objectives are prioritized according to the perspective of users. The proposed technique is implemented in MATLAB 2020b. Simulation analysis showed that VMRL performed preferably well, showing improvements of 17%, 20% and 18% in terms of energy consumption, utilization and fragmentation, respectively, in comparison with other multi-objective algorithms.
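The two selection operators named in the abstract (tournament-based selection and roulette-wheel sampling) can be sketched as below. The fitness function, host capacities, and VM data are hypothetical stand-ins; the paper's actual multi-objective formulation and MATLAB implementation are not reproduced here.

```python
# Illustrative sketch of the two selection operators the abstract mentions:
# tournament selection (exploitation) and roulette-wheel sampling (exploration)
# over candidate VM-to-host placements. The fitness is a toy consolidation
# score, not the paper's exact objective.
import random

def fitness(placement, hosts):
    """Fewer active hosts (more spare capacity left idle) -> higher score."""
    used = set(placement.values())
    idle_capacity = sum(cap for h, cap in hosts.items() if h not in used)
    return 1.0 / (1 + len(used)) + 0.01 * idle_capacity

def tournament(population, hosts, k=3):
    """Pick the best of k random candidates."""
    contenders = random.sample(population, k)
    return max(contenders, key=lambda p: fitness(p, hosts))

def roulette(population, hosts):
    """Sample a candidate proportionally to its fitness."""
    scores = [fitness(p, hosts) for p in population]
    return random.choices(population, weights=scores, k=1)[0]

# Toy data: 4 VMs placed on 3 physical machines with given spare CPU capacity.
hosts = {"pm1": 8, "pm2": 16, "pm3": 8}
population = [{f"vm{i}": random.choice(list(hosts)) for i in range(4)} for _ in range(20)]

parent_a = tournament(population, hosts)   # biased towards the current best
parent_b = roulette(population, hosts)     # preserves diversity in the search
print(parent_a, parent_b, sep="\n")
```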

Language: English

Citations

1

Hybrid Crow Search and Particle Swarm Algorithmic optimization based CH Selection method to extend Wireless Sensor Network operation DOI Creative Commons

Vinoth Kumar P,

K. Venkatesh

Journal of Machine and Computing, Journal Year: 2024, Volume and Issue: unknown, P. 290 - 307

Published: April 5, 2024

In ad hoc wireless sensor networks, mobile nodes are deployed to gather data from a source and transfer it to the base station for reactive decision making. This forwarding process incurs a huge loss of energy, which can shorten network lifetime. In this context, a cluster-based topology is determined to be optimal for reducing energy consumption in WSNs, and the selection of Cluster Heads (CHs) using hybrid metaheuristic algorithms is identified as significant in mitigating the quick exhaustion of the entire network. This paper explores a Hybrid Crow Search and Particle Swarm Optimization Algorithm-based CH Selection (HCSPSO-CHS) mechanism, proposed by integrating the merits of the Flower Pollination Algorithm (FPA) with the Crow Search Algorithm (CSA) for efficient CH selection. It further adopts an improved PSO for achieving sink node mobility to improve the delivery of packets to the nodes. The HCSPSO-CHS approach assesses influential factors like residual energy, inter- and intra-cluster distances, and proximity grade during selection, facilitating a better search that converges towards the best global solution such that frequent re-selection is avoided to the maximum level. The simulation outcomes confirm improved performance, with the number of active nodes increased by 23.18%, node death prevented by 23.41%, and network lifetime augmented by 33.58%, independent of the number of transmission rounds.
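As a rough illustration of metaheuristic CH selection of this kind, the sketch below scores candidate CH sets by residual energy and mean intra-cluster distance and improves them with a simplified crow-search-style update. The weighting, the FPA/PSO components, and the sink-mobility logic of HCSPSO-CHS are omitted; everything here is an assumed toy setup.

```python
# Minimal sketch of metaheuristic cluster-head selection: fitness combines
# residual energy and intra-cluster distance; candidate CH sets move towards
# the best set found (simplified crow-search-style update). Weights and data
# are hypothetical, not taken from the paper.
import numpy as np

rng = np.random.default_rng(1)
N_NODES, N_CH, FIELD = 50, 5, 100.0
pos = rng.uniform(0, FIELD, (N_NODES, 2))     # node coordinates (m)
energy = rng.uniform(0.2, 1.0, N_NODES)       # residual energy (J)

def fitness(ch_idx):
    """Higher residual energy and shorter member-to-CH distances are better."""
    ch_idx = np.asarray(ch_idx, dtype=int)
    d = np.linalg.norm(pos[:, None, :] - pos[ch_idx][None, :, :], axis=2)
    intra = d.min(axis=1).mean()              # mean distance to the nearest CH
    return energy[ch_idx].mean() - 0.01 * intra

flock = [rng.choice(N_NODES, N_CH, replace=False) for _ in range(20)]
best = max(flock, key=fitness)
for _ in range(100):
    for i, crow in enumerate(flock):
        if rng.random() < 0.8:                # follow the best known CH set
            keep = rng.random(N_CH) < 0.5
            candidate = np.where(keep, crow, best)
        else:                                 # random exploration
            candidate = rng.choice(N_NODES, N_CH, replace=False)
        candidate = np.unique(candidate)
        while candidate.size < N_CH:          # repair duplicated CH indices
            candidate = np.unique(np.append(candidate, rng.integers(N_NODES)))
        if fitness(candidate) > fitness(flock[i]):
            flock[i] = candidate
    best = max(flock, key=fitness)

print("Selected cluster heads:", sorted(int(i) for i in best))
```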

Language: English

Citations

0
