Computers & Industrial Engineering, Journal year: 2021, Issue 161, pp. 107621 - 107621
Published: Aug. 13, 2021
Language: English
Computer-Aided Civil and Infrastructure Engineering, Journal year: 2023, Issue 39(5), pp. 656 - 678
Published: May 12, 2023
Abstract: During a deep excavation project, monitoring the structural health of adjacent buildings is crucial to ensure safety. Therefore, this study proposes a novel probabilistic deep reinforcement learning (PDRL) framework to optimize the monitoring plan and minimize both cost and excavation-induced risk. First, a Bayesian bi-directional general regression neural network is built as a model to describe the relationship between ground settlement of the foundation pit and the safety state of the building, along with monitoring actions, in a dynamic manner. Subsequently, a double Q-network method, which can capture realistic features of the management problem, is trained to form a closed decision loop for continuous strategies. Finally, the proposed PDRL approach is applied to a real-world case on the No. 14 line of the Shanghai Metro. The framework estimates the time-variant probability of damage occurrence and updates the maintenance plan of the building. According to the strategy obtained via PDRL, monitoring begins in the middle stage of the project rather than on the first day if there is full confidence in the quality of the data. When the uncertainty level of the data rises, the starting date might shift earlier. It is worth noting that the method is adequately robust to address uncertainties embedded in the environment model, thus contributing to optimizing the plan and achieving cost-effectiveness and risk mitigation.
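The paper's double Q-network is not reproduced here; as a minimal, generic sketch of the double-DQN target computation that such a method relies on (assuming PyTorch; `online_net` and `target_net` are hypothetical networks, not taken from the paper):

```python
import torch

def double_dqn_target(reward, next_state, done, online_net, target_net, gamma=0.99):
    """Double-DQN bootstrap target: select the next action with the online
    network, evaluate it with the target network to reduce overestimation."""
    with torch.no_grad():
        next_action = online_net(next_state).argmax(dim=1, keepdim=True)   # action choice
        next_q = target_net(next_state).gather(1, next_action).squeeze(1)  # action value
        return reward + gamma * (1.0 - done) * next_q
```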
Language: English
Cited by: 32
Advanced Intelligent Systems, Journal year: 2023, Issue 5(7)
Published: Mar. 15, 2023
The past decades have seen the rapid development of tactile sensors in materials, fabrication, and mechanical structure design. This advancement has heightened expectations of sensor functions and thus put forward a higher demand for data processing. However, conventional analysis techniques have not kept pace and still suffer from some severe drawbacks, such as cumbersome models, poor efficiency, and expensive costs. Machine learning, with its prominent ability to handle big data and its fast processing speed, can offer many possibilities for data analysis. Herein, the machine learning methods employed for tactile signals are reviewed. Supervised and unsupervised analysis of analog signals is covered, and spike signal processing is summarized. Furthermore, applications in robotic perception and human activity monitoring are presented. Finally, current challenges and future prospects of tactile sensors, data, algorithms, and benchmarks are discussed.
Language: English
Cited by: 25
Environmental Chemistry Letters, Journal year: 2024, Issue unknown
Published: Sep. 5, 2024
Language: English
Cited by: 11
Deleted Journal, Journal year: 2024, Issue 6(2)
Published: Feb. 2, 2024
Abstract: The rotary inverted pendulum system (RIPS) is an underactuated mechanical system with highly nonlinear dynamics, and it is difficult to control a RIPS using classic models. In the last few years, reinforcement learning (RL) has become a popular control method. RL has powerful potential for systems with high non-linearity and complex dynamics, such as the RIPS. Nevertheless, RL for the RIPS has not been well studied, and there is limited research on the development and evaluation of such controllers. In this paper, RL algorithms are developed for the swing-up and stabilization of a single-link rotary inverted pendulum (SLRIP) and compared with classical methods such as PID and LQR. A physical model of the SLRIP is created with the MATLAB/Simscape Toolbox and used for dynamic simulation in MATLAB/Simulink to train the agents. An agent trainer based on Q-learning (QL) and deep Q-network learning (DQNL) is proposed for data training. Furthermore, the actions are the actuating torques of the horizontal arm, and the states are the angles and velocities of the arm and pendulum. The reward is computed so that it approaches zero when the pendulum attends the upright position. The agent can be implemented without understanding of classical controllers. Finally, the outcome indicates the effectiveness of QL and DQNL over conventional PID and LQR controllers.
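As a minimal sketch of the tabular Q-learning update described above, with a reward that peaks at zero in the upright position (the state discretisation and hyperparameters are illustrative assumptions, not the paper's values):

```python
import numpy as np

N_STATES, N_ACTIONS = 600, 3          # assumed discretisation of (angle, velocity) x torque levels
Q = np.zeros((N_STATES, N_ACTIONS))   # tabular action-value estimates

def reward(pendulum_angle_rad):
    # Zero when the pendulum is upright (angle = 0), increasingly negative otherwise.
    return -abs(pendulum_angle_rad)

def q_update(s, a, r, s_next, alpha=0.1, gamma=0.99):
    # Standard Q-learning temporal-difference update.
    td_target = r + gamma * Q[s_next].max()
    Q[s, a] += alpha * (td_target - Q[s, a])
```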
Language: English
Cited by: 10
Robotics and Autonomous Systems, Journal year: 2025, Issue unknown, pp. 104913 - 104913
Published: Jan. 1, 2025
Language: English
Cited by: 2
Computers in Industry, Journal year: 2025, Issue 167, pp. 104263 - 104263
Published: Feb. 18, 2025
Cited by: 2
International Journal of Systems Science, Journal year: 2025, Issue unknown, pp. 1 - 30
Published: Mar. 2, 2025
Reinforcement learning (RL) is a machine learning methodology that develops the capability to make sequential decisions in intricate problems using trial-and-error techniques. RL has become increasingly prevalent for decision-making and control tasks in diverse fields such as industrial processes, biochemical systems, and energy management. This review paper presents a comprehensive examination of the development, models, algorithms, and practical uses of RL, with specific emphasis on its application to process control. The study examines fundamental theories and applications, classifying them into two categories: classical Markov decision processes (MDPs) and deep RL, viz., actor-critic methods. The topics of discussion span multiple industries, including chemical process control, energy systems, wastewater treatment, and the oil and gas sector. Nevertheless, the review also highlights the challenges that hinder wider acceptance, including the requirement for substantial computational resources, the complexity of simulating real-world settings, and the challenge of guaranteeing stability and resilience in dynamic, unpredictable environments. RL has demonstrated significant promise, but more research is needed to fully integrate it into process-control and environmental applications in order to solve the current challenges.
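For reference, the classical MDP formulation that the review builds on rests on the Bellman optimality equation (standard textbook notation, not specific to this paper):

```latex
V^{*}(s) \;=\; \max_{a \in \mathcal{A}} \Big[\, R(s,a) + \gamma \sum_{s' \in \mathcal{S}} P(s' \mid s, a)\, V^{*}(s') \,\Big]
```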
Language: English
Cited by: 2
Educational and Psychological Measurement, Journal year: 2025, Issue unknown
Published: Jan. 3, 2025
This study examines the performance of ChatGPT, developed by OpenAI and widely used as an AI-based conversational tool, as a data analysis tool through exploratory factor analysis (EFA). To this end, simulated data were generated under various conditions, including normal distribution, response category, sample size, test length, factor loading, and measurement models. The data were analyzed using ChatGPT-4o twice, with a 1-week interval and the same prompt, and the results were compared with those obtained with R code. In the analysis, the Kaiser–Meyer–Olkin (KMO) value, total variance explained, the number of factors estimated by the empirical Kaiser criterion, the Hull method, and the Kaiser–Guttman criterion, as well as factor loadings, were calculated. The findings from ChatGPT at the two different times were found to be consistent with each other and with R. Overall, ChatGPT demonstrated good performance for steps that require only computational decisions without involving researcher judgment or theoretical evaluation (such as KMO and factor loadings). However, for multidimensional structures, although consistency was observed across analyses, biases were also observed, suggesting that researchers should exercise caution in such decisions.
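As a rough sketch of the corresponding EFA computations outside ChatGPT (using the third-party Python package `factor_analyzer` rather than the R code used in the study; the input file name, the varimax rotation, and the Kaiser–Guttman retention rule shown here are assumptions for illustration):

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer, calculate_kmo

df = pd.read_csv("responses.csv")                 # respondents x items, hypothetical file

kmo_per_item, kmo_total = calculate_kmo(df)       # sampling adequacy (KMO)

fa = FactorAnalyzer(rotation=None)                # unrotated solution for eigenvalues
fa.fit(df)
eigenvalues, _ = fa.get_eigenvalues()
n_factors = int((eigenvalues > 1).sum())          # Kaiser-Guttman rule: eigenvalues > 1

fa = FactorAnalyzer(n_factors=n_factors, rotation="varimax")
fa.fit(df)
print("KMO:", round(kmo_total, 3), "| factors retained:", n_factors)
print(fa.loadings_)                               # item-by-factor loadings
```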
Language: English
Cited by: 1
Journal of Field Robotics, Journal year: 2025, Issue unknown
Published: Feb. 5, 2025
ABSTRACT: Tracked robots equipped with flippers and sensors are extensively employed in outdoor search and rescue scenarios. However, achieving precise motion control on complex terrains remains a significant challenge, often necessitating expert teleoperation. This stems from the high degree of freedom of the robot joints and the need for flipper coordination based on terrain roughness. To address this problem, we propose the Flipper-Track Robot Benchmark (FTR-Bench), a simulator featuring flipper-track robots tasked with crossing various obstacles using reinforcement learning (RL) algorithms. The primary objective is to enable autonomous locomotion in environments that are too remote or hazardous for humans, such as disaster zones or planetary surfaces. Built on Isaac Lab, FTR-Bench achieves efficient RL training at over 4000 FPS on an RTX 3070 GPU. Additionally, it integrates RL algorithms through the OpenAI Gym interface specifications, enabling fast secondary development and verification. On this basis, it provides a series of standardized RL-based benchmarking experiments and baselines for obstacle-crossing tasks, providing a solid foundation for subsequent algorithm design and performance comparison. Experimental results empirically indicate that SAC performs relatively well in single and mixed terrain traversal, but most algorithms struggle with multi-terrain traversal skills, which calls for more substantial development from the community. Our project is open-source at https://github.com/nubot-nudt/FTR-Benchmark .
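Because FTR-Bench exposes its environments through the OpenAI Gym interface specification, interaction follows the usual Gym loop; a minimal sketch with a random policy (the environment id "FlipperTrack-v0" is hypothetical, see the repository above for the actual registration names):

```python
import gym

env = gym.make("FlipperTrack-v0")       # hypothetical id; check the FTR-Bench repo for real names
obs = env.reset()
done = False
while not done:
    action = env.action_space.sample()  # random policy; a trained agent (e.g. SAC) would act here
    obs, reward, done, info = env.step(action)
env.close()
```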
Language: English
Cited by: 1
Food and Humanity, Journal year: 2025, Issue unknown, pp. 100587 - 100587
Published: Mar. 1, 2025
Language: English
Cited by: 1