
EEG-based motor imagery recognition via novel explainable ensemble learning architecture

Antonio Luca Alfeo
;
2025-01-01

Abstract

Brain–computer interfaces (BCIs) enable interaction with machines through implicit neurophysiological signals, with applications ranging from medical rehabilitation to smart prostheses and entertainment. In this context, the need for high recognition performance demands increasingly complex machine learning (ML) architectures. Generally, the more complex the architecture, the less transparent its reasoning, making it difficult to justify its outputs and validate its internal model. Moreover, explainability is explicitly required by recent regulations on personal data processing, which advise against black-box modeling. Here, a novel ensemble learning model is proposed that aims to effectively balance recognition performance and explainability. The proposed architecture employs multiple multilayer perceptrons, each specialized in distinguishing a single pair of classes and in providing counterfactual explanations, i.e., the minimal feature changes that would result in a different classification. Their outcomes are then weighted to minimize the contribution of non-competent classifiers and combined to address a multiclass classification problem. Results were gathered from two publicly available datasets on multiclass electroencephalography-based motor imagery and demonstrate that the proposed architecture surpasses state-of-the-art recognition performance while providing information on the most discriminant brain areas and power bands. For the sake of reproducibility, the implementation of the proposed approach is made publicly available.
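The pairwise-specialist design described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the competence weight (each pairwise MLP's confidence distance from chance) and the greedy counterfactual search are assumptions introduced here, and all hyperparameters are arbitrary.

```python
from itertools import combinations
import numpy as np
from sklearn.neural_network import MLPClassifier

class PairwiseMLPEnsemble:
    """One-vs-one ensemble of MLPs with confidence-weighted voting (sketch)."""

    def __init__(self, hidden=(32,), seed=0):
        self.hidden, self.seed, self.models = hidden, seed, {}

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        # One specialist per class pair, trained only on that pair's samples.
        for a, b in combinations(self.classes_, 2):
            mask = np.isin(y, [a, b])
            clf = MLPClassifier(hidden_layer_sizes=self.hidden,
                                max_iter=2000, random_state=self.seed)
            clf.fit(X[mask], y[mask])
            self.models[(a, b)] = clf
        return self

    def predict(self, X):
        idx = {c: i for i, c in enumerate(self.classes_)}
        scores = np.zeros((len(X), len(self.classes_)))
        for clf in self.models.values():
            proba = clf.predict_proba(X)          # columns follow clf.classes_
            # Competence weight: distance from chance level (0.5) for this pair,
            # so non-competent specialists contribute little to the vote.
            weight = np.abs(proba[:, 0] - 0.5) * 2
            pred = clf.classes_[np.argmax(proba, axis=1)]
            for i, (p, w) in enumerate(zip(pred, weight)):
                scores[i, idx[p]] += w
        return self.classes_[np.argmax(scores, axis=1)]

def counterfactual_search(clf, x, step=0.1, max_steps=300):
    """Greedy sketch of a minimal-change counterfactual for a pairwise MLP:
    repeatedly nudge the single most influential feature until the
    prediction flips (or the step budget runs out)."""
    x = np.asarray(x, dtype=float).copy()
    orig = clf.predict(x[None])[0]
    goal = -1.0 if orig == clf.classes_[0] else 1.0  # push P(classes_[0]) down/up
    for _ in range(max_steps):
        if clf.predict(x[None])[0] != orig:
            break
        base = clf.predict_proba(x[None])[0, 0]
        # Finite-difference sensitivity of P(classes_[0]) for each feature.
        diffs = np.array([clf.predict_proba((x + step * np.eye(len(x))[j])[None])[0, 0]
                          - base for j in range(len(x))])
        if np.all(np.abs(diffs) < 1e-12):
            break                                  # probabilities saturated; give up
        j = int(np.argmax(np.abs(diffs)))
        x[j] += step * np.sign(diffs[j]) * goal
    return x
```

On a toy multiclass problem the ensemble can be fit and queried like any scikit-learn-style estimator; the counterfactual returned for a sample shows which features (e.g., which channels or power bands, in the EEG setting) would have to change to shift the pairwise decision.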


Use this identifier to cite or link to this document: https://hdl.handle.net/11389/79175