Leveraging geometric modeling-based computer vision for context-aware control in a hip exosuit


Bibliographic Details
Main Authors: Tricomi, Enrica (Author), Piccolo, Giuseppe (Author), Russo, Federica (Author), Zhang, Xiaohui (Author), Missiroli, Francesco (Author), Ferrari, Sandro (Author), Gionfrida, Letizia (Author), Ficuciello, Fanny (Author), Xiloyannis, Michele (Author), Masia, Lorenzo (Author)
Format: Article (Journal)
Language:English
Published: 2025
In: IEEE transactions on robotics
Year: 2025, Volume: 41, Pages: 3462-3479
ISSN:1941-0468
DOI:10.1109/TRO.2025.3567489
Online Access: Publisher (licensed access), full text: https://doi.org/10.1109/TRO.2025.3567489
Publisher (licensed access), full text: https://ieeexplore.ieee.org/document/10989543/metrics
Author Notes:Enrica Tricomi, Graduate Student Member, IEEE, Giuseppe Piccolo, Federica Russo, Xiaohui Zhang, Francesco Missiroli, Member, IEEE, Sandro Ferrari, Letizia Gionfrida, Fanny Ficuciello, Senior Member, IEEE, Michele Xiloyannis, and Lorenzo Masia, Senior Member, IEEE
Description
Summary: Human beings adapt their motor patterns in response to their surroundings, drawing on sensory modalities such as vision. This context-informed adaptive motor behavior has spurred interest in integrating computer vision (CV) algorithms into robotic assistive technologies, marking a shift toward context-aware control. However, such integration has rarely been achieved so far, and current methods rely mostly on data-driven approaches. In this study, we introduce a novel control framework for a soft hip exosuit that instead employs a physics-informed CV method grounded in geometric modeling of the captured scene to tune assistance during stair negotiation and level walking. This approach promises a viable solution that is more computationally efficient and does not depend on training examples. Evaluating the controller with six subjects on a path comprising level walking and stairs, we achieved an overall detection accuracy of 93.0 ± 1.1%. CV-based assistance provided significantly greater metabolic benefits than non-vision-based assistance, with larger energy reductions relative to the unassisted condition during stair ascent (−18.9 ± 4.1% versus −5.2 ± 4.1%) and descent (−10.1 ± 3.6% versus −4.7 ± 4.8%). This result follows from the adaptive nature of the device: the context-aware controller enabled more effective walking support, i.e., the assistive torque increased significantly while ascending stairs (+33.9 ± 8.8%) and decreased while descending stairs (−17.4 ± 6.0%) compared with a condition without vision-enabled assistance modulation. These results highlight the potential of the approach for effective real-time embedded applications in assistive robotics.
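The record does not describe the authors' geometric pipeline in detail. Purely as an illustrative sketch of the general idea (terrain classification from scene geometry driving assistance modulation), the following toy example fits a slope to a forward height profile and scales an assistance gain accordingly. All function names, thresholds, and the slope-fit heuristic are assumptions for illustration, not the method from the article; the up/down gain factors merely echo the reported torque trend and are placeholders.

```python
import numpy as np

def classify_terrain(forward_m, height_m, slope_thresh=0.2):
    """Toy geometric classifier: fit height = a*forward + b along the
    walking direction and label the terrain by the fitted slope a.
    A real system would work on depth-camera point clouds instead."""
    a, _b = np.polyfit(forward_m, height_m, 1)
    if a > slope_thresh:
        return "stairs_up"
    if a < -slope_thresh:
        return "stairs_down"
    return "level"

def assistance_gain(terrain, base_gain=1.0, up_boost=0.34, down_cut=0.17):
    """Scale the assistive-torque gain by detected context.
    The 0.34 / 0.17 factors are placeholders echoing the reported
    +33.9% / -17.4% torque trend, not values from the article."""
    if terrain == "stairs_up":
        return base_gain * (1.0 + up_boost)
    if terrain == "stairs_down":
        return base_gain * (1.0 - down_cut)
    return base_gain

# Example: a synthetic ascending profile (height rising with distance)
f = np.linspace(0.0, 1.0, 50)
terrain = classify_terrain(f, 0.5 * f)
gain = assistance_gain(terrain)
```

A training-free heuristic like this illustrates why a geometric approach can run efficiently on embedded hardware: it needs only a line fit and two comparisons per frame, with no learned model.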
Item Description: Viewed on 21.04.2026
Physical Description:Online Resource