Abstract:
Dynamic gesture recognition systems face persistent challenges in achieving real-time performance and high recognition accuracy. This paper presents a novel framework that integrates computer vision techniques with machine learning algorithms to address these issues. The approach uses OpenCV for gesture detection: gesture contours are extracted via skin-color feature segmentation, static gestures are recognized through fingertip detection, and dynamic gestures are tracked with Hidden Markov Models (HMMs). Experimental results demonstrate high recognition accuracy of 95.8 ± 1.4% across various gesture types, with per-gesture rates ranging from 95.2 ± 1.6% to 98.5 ± 0.8%. Real-time performance is achieved with an average processing time of 82.71 ± 3.2 ms per frame (12.1 FPS). Our method achieves 1.15× to 1.49× speedups over state-of-the-art approaches while maintaining superior accuracy. Validation on the DHG-14/28 public benchmark confirms generalizability with 93.4 ± 1.6% accuracy. The framework remains robust under challenging conditions: low-light environments (92.4 ± 2.1%), fast motion (91.8 ± 1.9%), and complex backgrounds (93.2 ± 1.7%). Statistical significance was confirmed through comprehensive evaluation across diverse demographic groups. This research has implications for human-computer interaction applications, including smart home systems, augmented reality, and industrial automation.
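The dynamic-gesture stage described above scores an observation sequence against one HMM per gesture class and picks the most likely model. A minimal sketch of that likelihood test, using the standard forward algorithm over discrete observations with toy two-state models and hypothetical parameters (the gesture names, state counts, and probabilities below are illustrative, not the paper's trained models), could look like:

```python
def forward_likelihood(obs, pi, A, B):
    """Forward algorithm: P(obs | model) for a discrete-observation HMM.

    obs: sequence of observation symbol indices
    pi:  initial state probabilities, pi[i]
    A:   state transition probabilities, A[i][j]
    B:   emission probabilities, B[i][o]
    """
    n = len(pi)
    # Initialization: alpha_1(i) = pi_i * b_i(o_1)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    # Induction: alpha_{t+1}(j) = (sum_i alpha_t(i) * a_ij) * b_j(o_{t+1})
    for o in obs[1:]:
        alpha = [sum(alpha[i] * A[i][j] for i in range(n)) * B[j][o]
                 for j in range(n)]
    # Termination: P(obs) = sum_i alpha_T(i)
    return sum(alpha)


def classify_gesture(obs, models):
    """Return the gesture whose HMM assigns the sequence the highest likelihood."""
    return max(models, key=lambda name: forward_likelihood(obs, *models[name]))


# Hypothetical two-state models for two gesture classes (illustrative only).
models = {
    "swipe":  ([1.0, 0.0],
               [[0.6, 0.4], [0.0, 1.0]],
               [[0.9, 0.1], [0.1, 0.9]]),
    "circle": ([0.5, 0.5],
               [[0.5, 0.5], [0.5, 0.5]],
               [[0.5, 0.5], [0.5, 0.5]]),
}

observation = [0, 0, 1]  # quantized motion features per frame
print(classify_gesture(observation, models))  # → swipe
```

In practice each per-class HMM is trained on labeled gesture sequences, and log-probabilities are used in place of raw products to avoid numerical underflow on long sequences.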