2022
DOI: 10.1002/mrm.29251
Tracking of rigid head motion during MRI using an EEG system

Abstract: To demonstrate a novel method for tracking of head movements during MRI using electroencephalography (EEG) hardware for recording signals induced by native imaging gradients. Theory and Methods: Gradient switching during simultaneous EEG-fMRI induces distortions in EEG signals, which depend on subject head position and orientation. When EEG electrodes are interconnected with high-impedance carbon wire loops, the induced voltages are linear combinations of the temporal gradient waveform derivatives. We introduc…
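The core relationship stated in the abstract — loop voltages as linear combinations of the temporal gradient waveform derivatives, with coefficients that depend on head pose — suggests that the mixing matrix can be estimated by ordinary least squares from a window of data. The sketch below is illustrative only (not the authors' implementation); all signal names, shapes, and noise levels are assumptions:

```python
# Illustrative sketch, assuming the linear model V = dG @ A + noise,
# where A is a pose-dependent mixing matrix. Re-fitting A over sliding
# windows would then reveal changes in head position/orientation.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_gradients, n_loops = 1000, 3, 8

# Temporal derivatives of the x/y/z gradient waveforms, assumed known
# from the pulse sequence; shape (n_samples, n_gradients).
dG = rng.standard_normal((n_samples, n_gradients))

# Hypothetical pose-dependent mixing matrix (changes when the head moves).
A_true = rng.standard_normal((n_gradients, n_loops))

# Simulated carbon-wire-loop voltages with measurement noise.
V = dG @ A_true + 0.01 * rng.standard_normal((n_samples, n_loops))

# Least-squares estimate of the mixing matrix from this data window.
A_hat, *_ = np.linalg.lstsq(dG, V, rcond=None)
```

With enough samples per window, `A_hat` recovers the mixing matrix accurately; tracking motion would amount to monitoring how this estimate evolves between windows.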

Cited by 12 publications (9 citation statements)
References 50 publications
“…In this setting, the results corresponded well with our theoretical formulas. We believe that the generalization to the other two remaining axes is straightforward, as demonstrated by Laustsen et al [ 22 ], which can be explored in future work. Because we used conventional EEG acquisition equipment with hardware lowpass filters, we were limited to relatively low-frequency stimulation gradients, which added time to the imaging sequence.…”
Section: Discussion
confidence: 83%
“…In general, this does not hold, as in Equation (11), gradients and positions occur in a multivariate relationship. In addition, we developed a rigid motion algorithm to specifically take advantage of the new sensors, which has the benefit of not requiring a subject-specific calibration [ 22 ]. Our method does assume that movements are small; however, this assumption is not significantly restrictive in practice, since one can analyze smaller time differences if needed to ensure that this condition holds.…”
Section: Discussion
confidence: 99%
“…This is because the existing motion-capture systems are very expensive and require large spaces for installation, while most tracking systems only need a web camera. Also, some have bridged the gap by developing human facial expression synthesis systems [60] and facial expression reconstruction systems [61] via existing facial tracking systems, which opens a path for transferring motion from a tracking system directly to a corresponding synthesis system. For instance, if a system wants to synthesize happy facial expressions for a virtual avatar, it can directly reconstruct the intended facial expression for that avatar by using the motion [62], which makes the capture of motion possible without cameras, albeit only with a specialist head-mounted display (HMD) that can detect one's brain signals.…”
Section: Technical Approaches
confidence: 99%
“…For both prospective and retrospective correction of motion and $\delta B_0$, estimates of these time-varying parameters are needed. As a first category, external tracking devices, with or without markers, have been proposed to track motion or $\delta B_0$ at very high temporal resolution [11–15]. As a second category, data-driven estimation methods estimate changes in motion and $\delta B_0$ by fitting a motion- and $\delta B_0$-informed signal model to the acquired imaging data [2,7,16,17].…”
Section: Introduction
confidence: 99%