Active Visual Localization for Multi-Agent Collaboration: A Data-Driven Approach DOI
Matthew Hanlon, Boyang Sun, Marc Pollefeys

et al.

Published: May 13, 2024

Language: English

Citations: 5

Walk Along: An Experiment on Controlling the Mobile Robot ‘Spot’ with Voice and Gestures DOI
Renchi Zhang, Jesse van der Linden, Dimitra Dodou

et al.

ACM Transactions on Human-Robot Interaction, Journal Year: 2025, Volume and Issue: unknown

Published: April 12, 2025

Robots are becoming more capable and can autonomously perform tasks such as navigating between locations. However, human oversight remains crucial. This study compared two touchless methods for directing mobile robots, voice control and gesture control, to investigate the efficiency of these methods and the preferences of users. We tested both methods in two conditions: one in which participants remained stationary and one in which they walked freely alongside the robot. We hypothesized that walking with the robot would result in higher intuitiveness ratings and improved task performance, based on the idea that walking promotes spatial alignment and reduces the effort required for mental rotation. In a 2×2 within-subject design, 218 participants guided the quadruped robot Spot along a circuitous route with multiple 90° turns using rotate left, rotate right, and walk forward commands. After each trial, participants rated the intuitiveness of the command mapping, while post-experiment interviews were used to gather participants' preferences. Results showed that gesture control combined with walking alongside the robot was the most favored and most intuitive, whereas standing caused confusion in the left/right command mapping. Nevertheless, 29% of participants preferred standing, citing increased engagement and visual congruence as reasons. An odometry-based analysis revealed that participants often followed behind Spot when allowed to walk. In conclusion, gesture control combined with walking produced the best outcomes. Improving physical ergonomics and adjusting the types of commands could make these control methods more effective.

Language: English

Citations: 0
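
The experiment above reduces robot control to three body-frame commands: rotate left, rotate right, walk forward. The snippet below is a minimal, hypothetical sketch of such a command mapping, not the authors' software and not the Boston Dynamics Spot SDK; the names (CommandMapper, VelocityCommand) and the speed constants are illustrative assumptions. It also hints at why the standing condition confused left and right: the commands are interpreted in the robot's body frame, not the operator's.

```python
# Hypothetical sketch: map recognized voice/gesture tokens from the
# "Walk Along" study onto body-frame velocity set-points for a generic
# mobile robot. Class names and constants are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class VelocityCommand:
    """Body-frame velocity set-point: forward speed (m/s) and yaw rate (rad/s)."""
    forward: float = 0.0
    yaw_rate: float = 0.0


class CommandMapper:
    """Translates recognized command tokens into velocity commands.

    Commands are expressed in the robot's body frame, so a stationary
    operator facing the robot must mentally rotate "left"/"right" --
    the likely source of the confusion reported for the standing condition.
    """

    FORWARD_SPEED = 0.5  # m/s, illustrative default
    YAW_RATE = 0.6       # rad/s, illustrative default

    def map(self, token: str) -> VelocityCommand:
        token = token.strip().lower()
        if token == "walk forward":
            return VelocityCommand(forward=self.FORWARD_SPEED)
        if token == "rotate left":
            return VelocityCommand(yaw_rate=+self.YAW_RATE)
        if token == "rotate right":
            return VelocityCommand(yaw_rate=-self.YAW_RATE)
        return VelocityCommand()  # unknown token -> stop


if __name__ == "__main__":
    mapper = CommandMapper()
    for spoken in ["walk forward", "rotate left", "rotate right", "sit"]:
        print(spoken, "->", mapper.map(spoken))
```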

RDSP-SLAM: Robust Object-Aware SLAM Based on Deep Shape Priors DOI Creative Commons
Moses Chukwuka Okonkwo, Junyou Yang, Yizhen Sun

et al.

IEEE Access, Journal Year: 2024, Volume and Issue: 12, P. 46764 - 46773

Published: Jan. 1, 2024

Object-aware SLAM systems such as Deep Shape Prior SLAM (DSP-SLAM) provide a feasible technique for creating sparse maps of the environment while representing scene objects as complete 3D models. They are a compelling solution for improving the intelligence of care robots and enriching the user experience in augmented reality (AR) applications. However, owing to the abrupt and unpredictable movements exhibited by users during AR engagements, and the real-time responses robots must make to changes in situations and commands, robustness and speed in sensor data processing are imperative. DSP-SLAM suffers from a low performance of 10-15 fps, even though it is based on ORB-SLAM2, which can run at 30 fps. This is mainly because its instance segmentation approach has an average latency of 53 ms (18.86 fps). To improve tracking robustness, keyframes must be processed at a fast rate. We use a state-of-the-art one-stage deep learning detector, which significantly reduces the wait time for detection-based association and keyframe creation, and finally present Robust Deep Shape Prior SLAM (RDSP-SLAM). The results show that detection was performed in 20 ms (50 fps), with object reconstruction quality the same as that of DSP-SLAM. RDSP-SLAM accepts sequential RGB images at 30 fps and tracks them at a mean rate of 38 fps.

Language: English

Citations: 0
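
The speed argument in the abstract comes down to simple throughput arithmetic: a stage in the keyframe path can keep up with a 30 fps camera only if its per-frame latency stays below the ~33.3 ms frame period. The sketch below is not code from RDSP-SLAM; it only replays the latency figures quoted above (53 ms vs. 20 ms), and the function and variable names are illustrative.

```python
# Back-of-the-envelope check of the latency figures quoted in the abstract
# (a sketch, not RDSP-SLAM code): a detector in the keyframe path keeps up
# with a 30 fps camera only if its latency is below the frame period.

CAMERA_FPS = 30.0
FRAME_PERIOD_MS = 1000.0 / CAMERA_FPS  # ~33.3 ms per incoming frame


def throughput_fps(latency_ms: float) -> float:
    """Maximum keyframe rate sustainable at the given per-frame latency."""
    return 1000.0 / latency_ms


for name, latency_ms in [
    ("DSP-SLAM instance segmentation", 53.0),  # ~18.9 fps, below camera rate
    ("RDSP-SLAM one-stage detector", 20.0),    # 50 fps, above camera rate
]:
    fps = throughput_fps(latency_ms)
    keeps_up = latency_ms <= FRAME_PERIOD_MS
    print(f"{name}: {latency_ms:.0f} ms -> {fps:.1f} fps "
          f"({'keeps up with' if keeps_up else 'falls behind'} a 30 fps camera)")
```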

Adaptive mixed reality robotic games for personalized consumer robot entertainment DOI
Ajmeera Kiran, J. Refonaa, Muhammad Nabeel

et al.

Entertainment Computing, Journal Year: 2024, Volume and Issue: 52, P. 100825 - 100825

Published: July 14, 2024

Language: English

Citations: 0