This paper presents the design and implementation of an AI vision-controlled
orthotic hand exoskeleton to enhance rehabilitation and assistive functionality
for individuals with hand mobility impairments. The system leverages a Google
Coral Dev Board Micro with an Edge TPU to enable real-time object detection
using a customized MobileNet_V2 model trained on a six-class dataset. The
exoskeleton autonomously detects objects, estimates proximity, and triggers
pneumatic actuation for grasp-and-release tasks, eliminating the user-specific
calibration required by traditional EMG-based systems. The design prioritizes
compactness, featuring an internal 1300 mAh battery that provides an 8-hour
runtime. Experimental results demonstrate a 51 ms
inference speed, a significant improvement over prior iterations, though
challenges persist in model robustness under varying lighting conditions and
object orientations. While the most recent YOLO model (YOLOv11) showed
potential with 15.4 FPS performance, quantization issues hindered deployment.
The prototype underscores the viability of vision-controlled exoskeletons for
real-world assistive applications, balancing portability, efficiency, and
real-time responsiveness, while highlighting future directions for model
optimization and hardware miniaturization.
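
The sketch below illustrates the detection-to-actuation loop implied by the abstract: detect objects on the Edge TPU, estimate proximity from the detection box, and drive the pneumatic valve for grasp and release. All function names (capture_frame, run_detector, set_valve) and thresholds are illustrative assumptions, not the authors' implementation or the Coral SDK API.

import time

PROXIMITY_THRESHOLD = 0.6   # assumed: box-to-frame area ratio that counts as "close"
CONFIDENCE_THRESHOLD = 0.5  # assumed: minimum detection confidence to act on

def capture_frame():
    """Placeholder: grab an RGB frame from the on-board camera."""
    raise NotImplementedError

def run_detector(frame):
    """Placeholder: run the quantized MobileNet_V2 detector on the Edge TPU.
    Returns a list of (class_id, confidence, (x0, y0, x1, y1)) tuples."""
    raise NotImplementedError

def set_valve(open_valve):
    """Placeholder: drive the pneumatic valve that closes or opens the orthosis."""
    raise NotImplementedError

def estimate_proximity(box, frame_shape):
    """Rough proximity proxy: fraction of the frame covered by the detection box."""
    x0, y0, x1, y1 = box
    box_area = max(0, x1 - x0) * max(0, y1 - y0)
    frame_area = frame_shape[0] * frame_shape[1]
    return box_area / frame_area

def control_loop():
    grasping = False
    while True:
        frame = capture_frame()
        detections = run_detector(frame)
        near_object = any(
            conf >= CONFIDENCE_THRESHOLD
            and estimate_proximity(box, frame.shape[:2]) >= PROXIMITY_THRESHOLD
            for _, conf, box in detections
        )
        if near_object and not grasping:
            set_valve(True)    # grasp when a recognized object is close enough
            grasping = True
        elif not near_object and grasping:
            set_valve(False)   # release once the object is no longer detected nearby
            grasping = False
        time.sleep(0.05)       # roughly matches the 51 ms inference time reported above

Using box area as the proximity cue keeps the loop monocular and calibration-free, which is consistent with the abstract's claim of avoiding user-specific calibration; the actual proximity estimation method is not specified here and may differ.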