
MyoPose: limb-position-robust neuromechanical features for enhanced hand gesture recognition in colocated sEMG-pFMG armbands

Verdini F.;
2025-01-01

Abstract

Objective. Surface electromyography (sEMG) and pressure-based force myography (pFMG) are two complementary modalities adopted in hand gesture recognition due to their ability to capture muscle electrical and mechanical activity, respectively. While sEMG carries rich neural information about intended gestures and has long been established as the primary control signal in myoelectric interfaces, pFMG has recently emerged as a stable modality that is less sensitive to sweat and can indicate motion onset earlier than sEMG, making their fusion promising for robust pattern recognition. However, gesture classification systems based on these signals often suffer from performance degradation due to limb position changes, which alter signal characteristics.

Approach. To address this, we introduce MyoPose, a novel and lightweight spatial synergy-based feature set for enhanced neuromechanical control. MyoPose effectively decodes colocated sEMG-pFMG information to improve hand gesture recognition under limb position variability while remaining computationally efficient for resource-constrained hardware.

Main results. The proposed MyoPose feature set, combined with linear discriminant analysis, achieved 87.7% accuracy (ACC) in a nine-gesture recognition task, outperforming standard myoelectric feature sets and performing comparably to a state-of-the-art decision-level multimodal-fusion parallel convolutional neural network. Notably, MyoPose maintained computational efficiency, achieving real-time feasibility with an estimated controller delay of 110.62 ms, within the 100-125 ms operational requirement, and an ultra-light memory footprint of 0.011 KB.

Significance. The novelty of this study lies in providing an effective feature set for multimodal-driven hand gesture recognition, handling limb position variations with robust ACC, and demonstrating real-time feasibility for human-machine interfaces without the need for deep learning.
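The abstract names the pipeline's components (a spatial synergy-based feature set over colocated sEMG-pFMG channels, classified with linear discriminant analysis) but not their implementation. The minimal sketch below is purely illustrative, on synthetic data: it assumes synergy-style spatial features can be modeled with nonnegative matrix factorization over stacked channel envelopes, which is a common synergy-extraction technique, not necessarily the paper's actual MyoPose computation.

```python
# Illustrative sketch only: the actual MyoPose feature computation is not
# specified in this abstract. Assumed pipeline: nonnegative matrix
# factorization (NMF) over colocated sEMG/pFMG channel envelopes as a
# spatial-synergy-style feature set, followed by LDA classification.
import numpy as np
from sklearn.decomposition import NMF
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

n_windows, n_semg, n_pfmg = 300, 8, 8   # channel counts are assumptions
n_classes = 9                           # nine-gesture task, as in the abstract

# Synthetic nonnegative per-window channel envelopes, both modalities stacked.
labels = rng.integers(0, n_classes, n_windows)
X = rng.random((n_windows, n_semg + n_pfmg)) + 0.5 * labels[:, None] / n_classes

# Low-dimensional "synergy" activations as the feature vector per window.
nmf = NMF(n_components=4, init="nndsvda", max_iter=500, random_state=0)
features = nmf.fit_transform(X)         # shape: (n_windows, 4)

# Classify with LDA, as the abstract pairs the feature set with LDA.
clf = LinearDiscriminantAnalysis().fit(features, labels)
print("training accuracy:", clf.score(features, labels))
```

In a real system the windowing would have to respect the controller-delay budget the abstract cites (100-125 ms), so features would be computed on short sliding windows of the streaming armband signals.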
Files for this record:
No files are associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11389/75395
