TY - JOUR
T1 - Robust Fusion of c-VEP and Gaze
AU - Kadioglu, Berkan
AU - Yildiz, Ilkay
AU - Closas, Pau
AU - Fried-Oken, Melanie B.
AU - Erdogmus, Deniz
N1 - Funding Information:
NIH under Grant R01DC009834.
Publisher Copyright:
© 2017 IEEE.
PY - 2019/1
Y1 - 2019/1
N2 - Brain-computer interfaces (BCIs) are an emerging technology that serves as a communication interface for people with neuromuscular disorders. Electroencephalography (EEG) and gaze signals are among the most commonly used inputs for the user intent classification problem arising in BCIs. Fusing different input modalities, i.e., EEG and gaze, is an obvious but effective way to achieve high performance on this problem. Although simplistic approaches exist for fusing these two sources of evidence, a more effective method is required to reach the classification performance and speed suitable for real-life scenarios. One of the main problems left unaddressed is highly noisy real-life data. In the context of the BCI framework utilized in this article, noisy data stem from user error in the form of tracking a nontarget stimulus, which in turn yields misleading EEG and gaze signals. We propose a method for fusing the aforementioned evidence in a probabilistic manner that is highly robust against noisy data. We demonstrate the performance of the proposed method on real EEG and gaze data for different configurations of noise control variables. Compared to the regular fusion method, the robust method achieves up to 15% higher classification accuracy.
AB - Brain-computer interfaces (BCIs) are an emerging technology that serves as a communication interface for people with neuromuscular disorders. Electroencephalography (EEG) and gaze signals are among the most commonly used inputs for the user intent classification problem arising in BCIs. Fusing different input modalities, i.e., EEG and gaze, is an obvious but effective way to achieve high performance on this problem. Although simplistic approaches exist for fusing these two sources of evidence, a more effective method is required to reach the classification performance and speed suitable for real-life scenarios. One of the main problems left unaddressed is highly noisy real-life data. In the context of the BCI framework utilized in this article, noisy data stem from user error in the form of tracking a nontarget stimulus, which in turn yields misleading EEG and gaze signals. We propose a method for fusing the aforementioned evidence in a probabilistic manner that is highly robust against noisy data. We demonstrate the performance of the proposed method on real EEG and gaze data for different configurations of noise control variables. Compared to the regular fusion method, the robust method achieves up to 15% higher classification accuracy.
KW - Bayesian fusion
KW - M-estimation
KW - brain-computer interfaces (BCIs)
KW - code-based VEP (c-VEP)
KW - eye tracking
KW - multimodal fusion
UR - http://www.scopus.com/inward/record.url?scp=85082633166&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85082633166&partnerID=8YFLogxK
U2 - 10.1109/LSENS.2018.2878705
DO - 10.1109/LSENS.2018.2878705
M3 - Article
AN - SCOPUS:85082633166
SN - 2475-1472
VL - 3
JO - IEEE Sensors Letters
JF - IEEE Sensors Letters
IS - 1
M1 - 8515115
ER -