2017
DOI: 10.1088/1741-2552/aa6639
Improving zero-training brain-computer interfaces by mixing model estimators

Abstract: Improving the accuracy and subsequent reliability of calibrationless BCIs makes these systems more appealing for frequent use.

Cited by 23 publications (11 citation statements)
References 28 publications
“…Following this hypothesis, a combined approach could lead to a faster ramp-up behaviour and a more robust classifier compared to the traditional EM algorithm or the standalone LLP. This idea was picked up by Verhoeven and colleagues [50], who show that the performance of a combined approach can even transcend the performance of each individual classifier at almost any time. Second, LLP could be used in a transfer-learning scenario where one starts with a general classifier obtained on several other subjects and utilizes LLP as an unsupervised adaptation method with guarantees.…”
Section: Possible Extensions
confidence: 99%
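The mixing idea behind the cited work can be illustrated generically: when two estimators (e.g. an EM-based and an LLP-based one) target the same quantity, a variance-weighted convex combination is never worse than the better of the two in expectation. The sketch below is a minimal illustration of inverse-variance weighting, not the paper's exact estimator; the function name and toy values are assumptions for demonstration.

```python
import numpy as np

def mix_estimators(mu_a, var_a, mu_b, var_b):
    """Combine two estimates of the same quantity by inverse-variance
    weighting: the lower-variance estimator receives the larger weight."""
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    return (w_a * mu_a + w_b * mu_b) / (w_a + w_b)

# Toy example: two noisy estimates of a class-mean component.
rng = np.random.default_rng(0)
est_em = 1.0 + rng.normal(0.0, 0.3)    # stand-in for an EM-based estimate
est_llp = 1.0 + rng.normal(0.0, 0.1)   # stand-in for an LLP-based estimate
mixed = mix_estimators(est_em, 0.3**2, est_llp, 0.1**2)
```

With equal variances the combination reduces to a plain average; as one estimator's variance shrinks, the mixture converges to that estimator, which matches the cited observation that the combined approach tracks whichever component classifier is currently better.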
“…For a more practical application, the training mode should be minimized or removed. Numerous studies in the field aim to construct such a general classifier (Kindermans et al., 2014a,b; Verhoeven et al., 2017; Eldeib et al., 2018; Lee et al., 2020). Usually, however, a general classifier requires a significant number of data samples, which can be achieved through transfer learning using data from one domain for another.…”
Section: Discussion
confidence: 99%
“…BCI studies to date typically require extensive training over a period of several days or weeks, with a 20- to 30-min recalibration necessary prior to each session [141]. However, recent studies suggest that initial training time can be reduced to less than a minute and that subsequent calibration time can be eliminated entirely [141][142][143]. Replicating such low initial training and subsequent calibration times in speech BCI would be highly desirable.…”
Section: Neural Decoding Performance
confidence: 99%