The PASCAL Visual Object Classes Challenge ran from February to March 2005. The goal of the challenge was to recognize objects from a number of visual object classes in realistic scenes (i.e. not pre-segmented objects). Four object classes were selected: motorbikes, bicycles, cars and people. Twelve teams entered the challenge. In this chapter we provide details of the datasets, the algorithms used by the teams, the evaluation criteria, and the results achieved.
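The chapter itself specifies the exact evaluation protocol; purely as an illustration of the kind of precision/recall criterion used for the detection task, the sketch below computes an average-precision score for one object class. The function name, the simplified integration, and the toy numbers are ours, not taken from the challenge.

```python
import numpy as np

def average_precision(scores, is_true_positive, n_positives):
    """Rank detections by confidence and integrate precision over recall (simplified)."""
    order = np.argsort(-np.asarray(scores, dtype=float))
    hits = np.asarray(is_true_positive, dtype=float)[order]
    tp = np.cumsum(hits)                      # cumulative true positives
    fp = np.cumsum(1.0 - hits)                # cumulative false positives
    recall = np.concatenate(([0.0], tp / n_positives))
    precision = np.concatenate(([1.0], tp / (tp + fp)))
    return float(np.sum(np.diff(recall) * precision[1:]))

# Toy example: four detections for the "cars" class, three ground-truth objects.
print(average_precision([0.9, 0.8, 0.6, 0.3], [True, False, True, True], 3))
```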
For quantitative PET information, correction of tissue photon attenuation is mandatory. In conventional PET, the attenuation map is generally obtained from a transmission scan, which uses a rotating radionuclide source, or from the CT scan in a combined PET/CT scanner. In the PET/MRI scanners currently under development, there is insufficient space for a rotating source; the attenuation map must instead be calculated from the MR image. This task is challenging because MR intensities correlate with proton densities and tissue-relaxation properties, rather than with attenuation-related mass density. Methods: We used a combination of local pattern recognition and atlas registration, which captures global variation of anatomy, to predict pseudo-CT images from a given MR image. These pseudo-CT images were then used for attenuation correction, as the process would be performed in a PET/CT scanner. Results: For human brain scans, we show on a database of 17 MR/CT image pairs that our method reliably estimates a pseudo-CT image from the MR image alone. On additional datasets of MRI/PET/CT triplets of human brain scans, we compare MRI-based attenuation correction with CT-based correction. Our approach enables PET quantification with a mean error of 3.2% for predefined regions of interest, which we found to be not clinically significant. Moreover, our method is not specific to brain imaging, and we show promising initial results on one whole-body animal dataset. Conclusion: This method allows reliable MRI-based attenuation correction for human brain scans. Further work is necessary to validate the method for whole-body imaging.
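The abstract does not spell out how a predicted pseudo-CT is turned into an attenuation map, but the step is standard in PET/CT practice: Hounsfield units are mapped to linear attenuation coefficients at 511 keV, typically with a bilinear conversion. The sketch below is only an illustration of that conversion; the break point and slopes are assumptions drawn from common practice, not the authors' pipeline.

```python
import numpy as np

MU_WATER_511 = 0.096  # cm^-1, approximate linear attenuation of water at 511 keV

def pseudo_ct_to_mu_map(hu, bone_threshold=50.0, bone_slope=5.1e-5):
    """Bilinear conversion of a pseudo-CT (Hounsfield units) to a 511 keV mu-map."""
    hu = np.asarray(hu, dtype=float)
    soft = MU_WATER_511 * (1.0 + hu / 1000.0)            # air / soft-tissue segment
    bone = MU_WATER_511 * (1.0 + bone_threshold / 1000.0) \
           + bone_slope * (hu - bone_threshold)          # denser-than-water segment
    return np.clip(np.where(hu < bone_threshold, soft, bone), 0.0, None)

# Example voxels: air, water, soft tissue, bone-like
print(pseudo_ct_to_mu_map([-1000.0, 0.0, 40.0, 1000.0]))
```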
Brain-computer interfaces (BCIs) have attracted much attention recently, triggered by new scientific progress in understanding brain function and by impressive applications. The aim of this review is to give an overview of the various steps in the BCI cycle, i.e., the loop from the measurement of brain activity, through classification of the data and feedback to the subject, to the effect of that feedback on brain activity. In this article we review the critical steps of the BCI cycle, the present issues and state-of-the-art results. Moreover, we develop a vision of how recently obtained results may contribute to new insights in neurocognition and, in particular, in the neural representation of perceived stimuli, intended actions and emotions. Now is the right time to explore what can be gained by embracing real-time, online BCI and by adding it to the set of experimental tools already available to the cognitive neuroscientist. We close by pointing out some unresolved issues and presenting our view on how BCI could become an important new tool for probing human cognition.
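As a concrete, minimal sketch of the BCI cycle described above (measure, classify, feed back), the snippet below wires the steps together with stub functions. All names and parameters are placeholders of ours, not the API of any particular acquisition or analysis toolbox; a real system would replace the stubs with an EEG/MEG driver and a trained classifier.

```python
import numpy as np

def acquire_window(n_channels=32, n_samples=256):
    # stub for the measurement step: one window of brain activity
    return np.random.randn(n_channels, n_samples)

def extract_features(window):
    # stub feature extraction: log signal power per channel
    return np.log(np.mean(window ** 2, axis=1))

def classify(features, weights, bias=0.0):
    # stub linear classifier producing a control signal in [-1, 1]
    return float(np.tanh(features @ weights + bias))

def present_feedback(control):
    # stub feedback step: the subject sees the decoded output, closing the loop
    print(f"feedback: {control:+.2f}")

weights = np.random.randn(32) * 0.1
for _ in range(5):                      # the closed BCI loop
    present_feedback(classify(extract_features(acquire_window()), weights))
```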
We reveal the presence of refractory and overlap effects in the event-related potentials of visual P300 speller datasets and show their negative impact on the performance of the system. This finding has important implications for how to encode the letters that can be selected for communication. However, we show that these effects depend on the stimulus parameters: an alternative stimulus type based on apparent motion suffers less from refractory effects and leads to improved letter-prediction performance.
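To make the overlap effect concrete, the sketch below simulates what happens when the stimulus onset asynchrony (SOA) is much shorter than the evoked response: the epoch extracted for one stimulus also contains the responses to its neighbours. The waveform shape and timing values are arbitrary illustrations of ours, not taken from the datasets analysed in the paper.

```python
import numpy as np

fs = 256                                  # sampling rate (Hz)
erp = np.hanning(int(0.5 * fs))           # idealised 500 ms evoked response
soa_samples = int(0.125 * fs)             # 125 ms SOA, much shorter than the ERP
onsets = np.arange(10) * soa_samples      # ten consecutive stimuli

signal = np.zeros(onsets[-1] + len(erp))
for onset in onsets:                      # responses superimpose -> overlap effect
    signal[onset:onset + len(erp)] += erp

epoch = signal[onsets[3]:onsets[3] + len(erp)]   # epoch time-locked to stimulus 4
print(f"peak of single response: {erp.max():.2f}, "
      f"peak of extracted epoch: {epoch.max():.2f}")  # inflated by neighbouring responses
```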
The Farwell and Donchin matrix speller is well known as one of the highest-performing brain-computer interfaces (BCIs) currently available. However, its use of visual stimulation limits its applicability to users with normal eyesight. Alternative BCI spelling systems that rely on non-visual stimulation, e.g. auditory or tactile, tend to perform much more poorly and/or can be very difficult to use. In this paper we present a novel extension of the matrix speller, based on flipping the letter matrix, which allows the same interface to be used with visual, auditory or simultaneous visual and auditory stimuli. In this way we aim to allow users to utilize the best available input modality for their situation, that is, use visual + auditory for best performance and move smoothly to purely auditory when necessary, e.g. when disease causes the user's eyesight to deteriorate. Our results on seven healthy subjects demonstrate the effectiveness of this approach, with our modified visual + auditory stimulation slightly outperforming the classic matrix speller. The purely auditory system's performance was lower than for visual stimulation, but comparable to other auditory BCI systems.
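For readers unfamiliar with matrix spellers, the sketch below shows how a letter is decoded in the classic Farwell and Donchin scheme: a classifier assigns a target-likeness score to the ERP epoch following each row or column stimulation, scores are accumulated over repetitions, and the predicted letter lies at the intersection of the best row and column. The matrix layout and scores are invented for illustration; this is not the paper's flipped, multimodal variant.

```python
import numpy as np

matrix = np.array([list("ABCDEF"), list("GHIJKL"), list("MNOPQR"),
                   list("STUVWX"), list("YZ1234"), list("56789_")])

row_scores = np.array([0.1, 0.2, 1.4, 0.0, 0.3, 0.1])   # summed over repetitions
col_scores = np.array([0.2, 0.1, 0.0, 1.1, 0.4, 0.2])

predicted = matrix[np.argmax(row_scores), np.argmax(col_scores)]
print(predicted)   # -> 'P'
```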