Although promising for numerous applications, current brain-computer interfaces (BCIs) still suffer from a number of limitations. In particular, they are sensitive to noise, outliers and the non-stationarity of electroencephalographic (EEG) signals, they require long calibration times and they are not reliable. New approaches and tools, notably at the level of EEG signal processing and classification, are therefore needed to address these limitations. Riemannian approaches, spearheaded by the use of covariance matrices, are one such promising tool, now being adopted by a growing number of researchers. This article, after a brief introduction to Riemannian geometry and a presentation of the manifolds relevant to BCI, reviews how these approaches have been used for EEG-based BCI, in particular for feature representation and learning, classifier design and calibration time reduction. Finally, relevant challenges and promising research directions for EEG signal classification in BCIs are identified, such as feature tracking on the manifold or multi-task learning.
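To make the covariance-based idea concrete, the following minimal sketch computes the affine-invariant Riemannian distance between the spatial covariance matrices of two EEG trials. The channel and sample counts, the random "trials" and the function name are illustrative assumptions, not the article's pipeline.

```python
import numpy as np
from scipy.linalg import eigvalsh

def riemannian_distance(A, B):
    """Affine-invariant Riemannian distance between SPD matrices A and B.

    d(A, B) = sqrt(sum_i log(lambda_i)^2), where lambda_i are the
    generalized eigenvalues of the pencil (A, B).
    """
    lam = eigvalsh(A, B)                      # all positive when A and B are SPD
    return np.sqrt(np.sum(np.log(lam) ** 2))

# Illustrative trials (8 channels x 500 samples); real EEG epochs would be used instead.
rng = np.random.default_rng(0)
X1 = rng.standard_normal((8, 500))
X2 = rng.standard_normal((8, 500))
C1 = X1 @ X1.T / X1.shape[1]                  # spatial covariance of trial 1
C2 = X2 @ X2.T / X2.shape[1]                  # spatial covariance of trial 2
print(riemannian_distance(C1, C2))
```

One of the simplest classifiers built on this distance, minimum distance to mean, assigns a new trial to the class whose mean covariance matrix is closest under the Riemannian metric.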
This paper is devoted to the construction of a complete database intended to improve the implementation and evaluation of automated facial reconstruction. This growing database currently comprises 85 head CT-scans of healthy European subjects aged 20-65 years. It also includes the triangulated surfaces of the face and the skull of each subject. These surfaces are extracted from the CT-scans using an original combination of image-processing techniques, which are presented in the paper. In addition, a set of 39 referenced anatomical skull landmarks was located manually on each scan. Using the geometrical information provided by the triangulated surfaces, we compute facial soft-tissue depths at each known landmark position. We report the average thickness values at each landmark and compare our measures to those of the traditional charts of [J. Rhine, C.E. Moore, Facial Tissue Thickness of American Caucasoids, Maxwell Museum of Anthropology, Albuquerque, New Mexico, 1982] and of several recent in vivo studies [M.H. Manhein, G.A. Listi, R.E. Barsley, et al., In vivo facial tissue depth measurements for children and adults, Journal of Forensic Sciences 45 (1) (2000) 48-60; S. De Greef, P. Claes, D. Vandermeulen, et al., Large-scale in vivo Caucasian facial soft tissue thickness database for craniofacial reconstruction, Forensic Science International 159S (2006) S126-S146; R. Helmer, Schädelidentifizierung durch elektronische Bildmischung, Kriminalistik Verlag GmbH, Heidelberg, 1984].
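As a rough illustration of the soft-tissue depth measurement, the sketch below approximates the depth at one skull landmark as the distance to the nearest vertex of the facial surface mesh. The database may instead measure depth along a specific direction (e.g. the landmark's surface normal), so this nearest-vertex shortcut and the function name are assumptions for illustration only.

```python
import numpy as np

def soft_tissue_depth(skull_landmark, face_vertices):
    """Approximate soft-tissue depth at one skull landmark (illustrative only).

    skull_landmark: (3,) coordinates of the landmark on the skull surface.
    face_vertices:  (n, 3) vertices of the triangulated facial surface.
    Returns the distance to the closest facial vertex.
    """
    d = np.linalg.norm(face_vertices - skull_landmark, axis=1)
    return float(d.min())
```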
In this paper, we present a computer-assisted method for facial reconstruction. This method provides an estimate of the facial shape associated with unidentified skeletal remains. Current computer-assisted methods based on a statistical framework rely on a common set of extracted points located on the bone and soft-tissue surfaces. Most facial reconstruction methods then consist in predicting the positions of the soft-tissue surface points when the positions of the bone surface points are known. We propose to use latent root regression for this prediction. The results obtained are then compared to those given by linear models based on principal component analysis (PCA). In addition, we evaluate the influence of the number of skull landmarks used. Anatomical skull landmarks are completed iteratively by points located upon the geodesics that link these anatomical landmarks, enabling us to artificially increase the number of skull points. Facial points are obtained using a mesh-matching algorithm between a common reference mesh and the individual soft-tissue surface meshes. The proposed method is validated in terms of accuracy, based on a leave-one-out cross-validation test applied to a homogeneous database; accuracy measures are obtained by computing the distance between the original face surface and its reconstruction. Finally, these results are discussed with reference to current computer-assisted facial reconstruction techniques.
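The sketch below conveys the flavour of the prediction step and the leave-one-out evaluation described above, using a plain least-squares regressor as a simplified stand-in for the PCA and latent root regression models; the array shapes, intercept handling and function names are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def fit_face_predictor(skull_coords, face_coords):
    """Least-squares linear map from flattened skull-point coordinates
    (n_subjects, n_skull * 3) to flattened face-point coordinates
    (n_subjects, n_face * 3); simplified stand-in for the statistical models."""
    X = np.hstack([skull_coords, np.ones((skull_coords.shape[0], 1))])  # add intercept
    W, *_ = np.linalg.lstsq(X, face_coords, rcond=None)
    return W

def predict_face(skull_new, W):
    return np.append(skull_new, 1.0) @ W

def leave_one_out_error(skull_coords, face_coords):
    """Mean reconstruction error over leave-one-out cross-validation."""
    errors = []
    for i in range(skull_coords.shape[0]):
        train = np.arange(skull_coords.shape[0]) != i
        W = fit_face_predictor(skull_coords[train], face_coords[train])
        pred = predict_face(skull_coords[i], W)
        # Per-point Euclidean distance between original and reconstructed face points.
        diff = (pred - face_coords[i]).reshape(-1, 3)
        errors.append(np.linalg.norm(diff, axis=1).mean())
    return float(np.mean(errors))
```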
We propose a new online approach for multimodal dictionary learning. The method developed in this work addresses the challenges posed by computational resource constraints in dynamic environments when dealing with large-scale tensor sequences. Given a sequence of tensors, i.e. a set of equal-size tensors, the proposed approach infers a basis of latent factors that generate these tensors by sequentially processing a small number of data samples instead of using the whole sequence at once. Our technique is based on block coordinate descent, gradient descent and recursive computations of the gradient. A theoretical result is provided and numerical experiments on both real and synthetic data sets are reported.
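For intuition, here is a hedged sketch of one online update in the much simpler matrix (vector-sample) setting: a sparse-coding step followed by a single gradient step on the dictionary using only the new sample. The tensor factorization, block coordinate scheme and recursive gradient computations of the actual method are not reproduced, and all names and parameters are illustrative.

```python
import numpy as np

def online_dictionary_step(D, x, lam=0.1, lr=0.01, n_inner=50):
    """One online update of a simplified (matrix) dictionary-learning scheme.

    D: (m, k) dictionary, x: (m,) newly arrived sample.
    1) Sparse-code x against D with ISTA, 2) take one gradient step on D.
    """
    a = np.zeros(D.shape[1])
    L = np.linalg.norm(D, 2) ** 2 + 1e-12            # Lipschitz constant of the smooth part
    for _ in range(n_inner):                         # ISTA for 0.5||x - Da||^2 + lam*||a||_1
        a -= D.T @ (D @ a - x) / L
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)
    D = D - lr * np.outer(D @ a - x, a)              # gradient step using this sample only
    D /= np.maximum(np.linalg.norm(D, axis=0, keepdims=True), 1.0)  # keep atom norms <= 1
    return D, a

# Streaming usage: update the dictionary one sample at a time.
rng = np.random.default_rng(0)
D = rng.standard_normal((20, 10))
D /= np.linalg.norm(D, axis=0, keepdims=True)
for _ in range(100):
    x = rng.standard_normal(20)
    D, code = online_dictionary_step(D, x)
```

Processing one sample per step keeps memory usage independent of the sequence length, which is the point of the online formulation.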