2022
DOI: 10.1016/j.jneumeth.2022.109726
Application of rapid invisible frequency tagging for brain computer interfaces

Cited by 14 publications (9 citation statements)
Citation types: 0 supporting, 9 mentioning, 0 contrasting
References 41 publications
“…Indeed, the classification accuracy for the two-class problem reached 93.7% and 81.1% for the control and periliminal conditions, respectively. This classification performance surpasses the 60% classification accuracy achieved using high-frequency flickers (56 and 60 Hz) displayed with high-refresh-rate projectors to elicit SSVEP responses recorded using MEG (Brickwedde et al., 2022). The improved user experience, coupled with the high classification performance achieved by periliminal flickers, has the potential to enhance initial engagement and user retention in future SSVEP-based applications.…”
Section: Discussion (mentioning)
Confidence: 86%
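The two-class readout described in this statement can be illustrated with a minimal sketch: decide which of two tagging frequencies (here the 56 and 60 Hz flickers mentioned above) dominates the narrow-band spectral power of a trial. The function name, the channel-by-sample input layout, and the synthetic data are assumptions for illustration; this is not the classification pipeline used in Brickwedde et al. (2022) or in the citing study.

```python
# Minimal sketch of a two-class frequency-tagging readout: pick the tagging
# frequency with the most narrow-band power in a trial. Illustrative only;
# the input layout and parameters are assumptions, not the cited pipeline.
import numpy as np
from scipy.signal import welch

def classify_trial(trial, fs, tag_freqs=(56.0, 60.0)):
    """Return the index of the tagging frequency with the largest power.

    trial : array, shape (n_channels, n_samples) -- hypothetical MEG/EEG trial
    fs    : sampling rate in Hz
    """
    freqs, psd = welch(trial, fs=fs, nperseg=2 * fs, axis=-1)  # 0.5 Hz bins
    psd = psd.mean(axis=0)                                     # average sensors
    band_power = [psd[np.abs(freqs - f) <= 0.5].mean() for f in tag_freqs]
    return int(np.argmax(band_power))

# Synthetic check: a weak 60 Hz tagged response buried in sensor noise
fs = 1000
t = np.arange(0, 2.0, 1 / fs)
trial = 0.5 * np.sin(2 * np.pi * 60 * t) + np.random.randn(32, t.size)
print(classify_trial(trial, fs))  # expected: 1, i.e. the 60 Hz class
```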
“…Though it may be possible to evoke a stronger RIFT response at higher frequencies in future tasks, as described above, currently we only see usable responses to 60 Hz and 64 Hz tagging. However, given that previous RIFT work has also made use of 56 Hz (Brickwedde et al., 2022) without any resulting concerns over perceptibility of the flicker, and that different choices during the analysis stage (narrower bandpass filtering for coherence) may allow increased frequency resolution for tagging, we infer that RIFT-EEG is capable of uniquely tagging at least two individual stimuli/regions, with a strong likelihood of being able to tag at least three.…”
Section: Dependence on Tagged Frequency (mentioning)
Confidence: 97%
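The remark about narrower bandpass filtering and frequency resolution can be made concrete: when the tagged response is quantified as coherence between the tagging (luminance) signal and a recorded channel, the segment length used for the spectral estimate sets the frequency resolution, which is what allows nearby tags such as 60 and 64 Hz to be read out separately. The sketch below is an assumption-laden illustration (hypothetical function name, parameters, and synthetic data), not the citing authors' analysis.

```python
# Sketch: coherence between the tagging signal and one EEG channel, read out
# at the tagging frequency. Segment length (seg_sec) sets the frequency
# resolution (1 / seg_sec Hz). Parameters and data are illustrative.
import numpy as np
from scipy.signal import coherence

def tagging_coherence(eeg, tagging_signal, fs, tag_freq, seg_sec=2.0):
    """Coherence between an EEG channel and the tagging signal at tag_freq."""
    nperseg = int(seg_sec * fs)
    freqs, coh = coherence(eeg, tagging_signal, fs=fs, nperseg=nperseg)
    return coh[np.argmin(np.abs(freqs - tag_freq))]

# Synthetic check: an EEG channel carrying a weak copy of a 64 Hz tag
fs = 1000
t = np.arange(0, 10.0, 1 / fs)
tagger = np.sin(2 * np.pi * 64 * t)
eeg = 0.2 * tagger + np.random.randn(t.size)
print(tagging_coherence(eeg, tagger, fs, tag_freq=64.0))  # high at the 64 Hz bin
```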
“…While the existing framework has clearly delivered novel insights for cognitive neuroscience, as described above, the low number of MEG setups worldwide, combined with both the running costs of MEG experiments and the initial capital required, encourages the exploration of viable, more accessible alternatives. Additionally, as the potential of RIFT as a communication medium for Brain-Computer Interfaces begins to be explored (Brickwedde et al., 2022) as an improvement upon existing SSVEP Brain-Computer Interfaces (Zhu et al., 2010), the need to assess the possibility of a more portable RIFT framework rises further. Electroencephalography (EEG) has been shown to be sensitive to periodic stimulation in the RIFT frequency range using flickering LEDs (Gulbinaite et al., 2019; Herrmann, 2001).…”
Section: Introduction (mentioning)
Confidence: 99%
“…To address this limitation, we developed the rapid invisible frequency tagging (RIFT) technique, which involves flickering visual stimuli at a frequency above 60 Hz, making it invisible and non-disruptive to the ongoing task. Responses to RIFT have been shown to increase with the allocation of attention to the stimulus bearing the visual flicker (Brickwedde et al., 2022; Drijvers et al., 2021; Duecker et al., 2021; Ferrante et al., 2023; Gutteling et al., 2022; Zhigalov et al., 2021, 2019; Zhigalov and Jensen, 2022, 2020). In our previous study, we adapted RIFT to a natural reading task and found temporally precise evidence for parafoveal processing at the lexical level (Pan et al., 2021).…”
Section: Introduction (mentioning)
Confidence: 99%
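As a rough illustration of what flickering a stimulus above 60 Hz amounts to in practice, the sketch below generates a per-frame sinusoidal luminance modulation sampled at the display refresh rate. The 1440 Hz refresh rate, the function name, and the modulation-depth parameter are assumptions for illustration, not a description of the stimulus code used in the studies cited above.

```python
# Sketch of a RIFT-style luminance signal: a sinusoidal modulation at a
# tagging frequency above 60 Hz, sampled once per display frame. The refresh
# rate and modulation depth below are illustrative assumptions.
import numpy as np

def rift_luminance(tag_freq, duration, refresh_rate=1440, depth=1.0):
    """Per-frame luminance multipliers in [0, 1], flickering at tag_freq Hz."""
    n_frames = int(duration * refresh_rate)
    frame_times = np.arange(n_frames) / refresh_rate
    # Sinusoid rescaled from [-1, 1] to [0, 1]; 'depth' sets the modulation depth
    return 0.5 + 0.5 * depth * np.sin(2 * np.pi * tag_freq * frame_times)

# Two stimuli tagged at different high frequencies, as in the work cited above
lum_a = rift_luminance(tag_freq=60, duration=1.0)
lum_b = rift_luminance(tag_freq=64, duration=1.0)
```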