Conventional analysis of electroencephalography (EEG) and magnetoencephalography (MEG) often relies on averaging over multiple trials to extract statistically relevant differences between two or more experimental conditions. In this article we demonstrate single-trial detection by linearly integrating information over multiple spatially distributed sensors within a predefined time window. We report an average single-trial discrimination performance of Az ≈ 0.80 and fraction correct between 0.70 and 0.80 across three distinct encephalographic data sets. We restrict our approach to linear integration, as it allows the computation of a spatial distribution of the discriminating component activity. In the present set of experiments the resulting component activity distributions are shown to correspond to the functional neuroanatomy consistent with the task (e.g., contralateral sensorimotor cortex and anterior cingulate). Our work demonstrates how a purely data-driven method for learning an optimal spatial weighting of encephalographic activity can be validated against the functional neuroanatomy.
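A minimal sketch of the idea of a learned linear spatial weighting, assuming a plain logistic-regression discriminator on synthetic multi-sensor data (the sensor count, noise levels, and training loop here are illustrative choices, not the authors' exact pipeline); Az is estimated as the area under the ROC curve via pairwise comparisons:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "sensor" data: n_trials x n_sensors, already averaged within a
# time window. One source with a fixed spatial pattern differs by condition.
n_trials, n_sensors = 200, 16
pattern = rng.normal(size=n_sensors)
labels = rng.integers(0, 2, n_trials)              # condition 0 vs. 1
source = labels + 0.5 * rng.normal(size=n_trials)  # task-related activity
X = np.outer(source, pattern) + rng.normal(size=(n_trials, n_sensors))

# Logistic regression via plain gradient ascent learns a spatial weighting w.
w = np.zeros(n_sensors)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    w += 0.01 * X.T @ (labels - p) / n_trials

# Single-trial discriminating component and its Az (area under ROC),
# computed here as the fraction of correctly ordered trial pairs.
comp = X @ w
pos, neg = comp[labels == 1], comp[labels == 0]
az = (pos[:, None] > neg[None, :]).mean()
print(f"Az = {az:.2f}")
```

Because Az is scale-invariant, even a coarsely converged weighting in the right direction discriminates well; the learned w also yields a spatial distribution that can be compared against the expected neuroanatomy.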
Convolutive blind source separation and adaptive beamforming have a similar goal: extracting a source of interest (or multiple sources) while reducing undesired interference. A benefit of source separation is that it overcomes the conventional cross-talk or leakage problem of adaptive beamforming. Beamforming, on the other hand, exploits geometric information which is often readily available but not utilized in blind algorithms. In this work we propose to join these benefits by combining the cross-power minimization of second-order source separation with the geometric linear constraints used in adaptive beamforming. We find that the geometric constraints resolve some of the ambiguities inherent in the independence criterion, such as frequency permutations and the degrees of freedom provided by additional sensors. We demonstrate the new method in performance comparisons for actual room recordings of two and three simultaneous acoustic sources.
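A minimal sketch of the kind of geometric linear constraint used in adaptive beamforming, assuming a narrowband MVDR beamformer on a synthetic uniform linear array (the array geometry, source directions, and noise level are illustrative assumptions, not the paper's room recordings): power is minimized subject to unit gain in the look direction, which suppresses the interferer while preserving the target.

```python
import numpy as np

rng = np.random.default_rng(2)

# Narrowband model: M sensors on a half-wavelength uniform linear array.
M = 8
def steering(theta, M):
    return np.exp(1j * np.pi * np.arange(M) * np.sin(theta))

d_target = steering(0.0, M)   # look direction (the geometric constraint)
d_interf = steering(0.6, M)   # interfering source

n = 5000
s = rng.normal(size=n)        # target signal
v = rng.normal(size=n)        # interferer
noise = 0.1 * (rng.normal(size=(M, n)) + 1j * rng.normal(size=(M, n)))
X = np.outer(d_target, s) + np.outer(d_interf, v) + noise

# MVDR: minimize w^H R w subject to w^H d_target = 1.
R = X @ X.conj().T / n
Rinv = np.linalg.inv(R)
w = Rinv @ d_target / (d_target.conj() @ Rinv @ d_target)

gain_target = abs(w.conj() @ d_target)  # held at 1 by the constraint
gain_interf = abs(w.conj() @ d_interf)  # driven toward zero
print(gain_target, gain_interf)
```

The paper's contribution can be read against this backdrop: the same style of linear constraint is imposed on a cross-power-minimizing separation criterion, which fixes the frequency-permutation ambiguity that a purely blind criterion leaves open.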
The Mumford-Shah functional has had a major impact on a variety of image analysis problems, including image segmentation and filtering, and, despite being introduced over two decades ago, it is still in widespread use. Present-day optimization of the Mumford-Shah functional is dominated by active contour methods. Until recently, these formulations necessitated optimizing the contour by evolving it via gradient descent, which is known for its overdependence on initialization and its tendency to produce undesirable local minima. To reduce these problems, we reformulate the corresponding Mumford-Shah functional on an arbitrary graph and apply techniques of combinatorial optimization to produce a fast, low-energy solution. In contrast to traditional optimization methods, use of these combinatorial techniques necessitates consideration of the reconstructed image outside of its usual boundary, additionally requiring regularization to generate these values. The energy of the solution provided by this graph formulation is compared with the energy of the solution computed via traditional gradient descent-based narrow-band level set methods. This comparison demonstrates that our graph formulation and optimization produce lower-energy solutions than traditional gradient descent-based contour evolution methods in significantly less time. Finally, we demonstrate the usefulness of the graph formulation by applying the Mumford-Shah functional to new applications such as point clustering and the filtering of nonuniformly sampled images.
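A minimal sketch of what "lower energy" means for the piecewise-constant Mumford-Shah functional, assuming a 1D signal where the boundary-length term reduces to counting boundaries (the signal, the `nu` weight, and the candidate segmentations are illustrative, not the paper's graph formulation): the energy trades data fidelity of the best constant fit per segment against the number of boundaries.

```python
import numpy as np

def ms_energy(f, boundaries, nu=1.0):
    """Piecewise-constant Mumford-Shah energy of a 1D signal.

    f          : observed signal samples
    boundaries : sorted indices where one segment ends and the next begins
    nu         : penalty per boundary (the contour-length term in 1D)
    """
    edges = [0] + list(boundaries) + [len(f)]
    data_term = 0.0
    for a, b in zip(edges[:-1], edges[1:]):
        seg = f[a:b]
        data_term += np.sum((seg - seg.mean()) ** 2)  # best constant fit
    return data_term + nu * len(boundaries)

# A noisy two-level step signal with its jump at index 50.
rng = np.random.default_rng(1)
f = np.concatenate([np.zeros(50), np.ones(50)]) + 0.1 * rng.normal(size=100)

good = ms_energy(f, [50])  # boundary at the true step
bad  = ms_energy(f, [25])  # misplaced boundary
none = ms_energy(f, [])    # one segment, no boundary
print(good, bad, none)
```

Any optimizer, whether contour evolution or the paper's combinatorial graph approach, is searching for the segmentation that minimizes exactly this kind of energy; the paper's claim is that the graph formulation finds lower-energy minima faster.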
Computed tomography (CT) is used widely to image patients for medical diagnosis and to scan baggage for threatening materials. Automated reading of these images can reduce the cost of human operators, extract quantitative information from the images, or support an operator's judgements. Object quantification requires image segmentation to measure object size, material composition and morphology. Medical applications mostly require the segmentation of prespecified objects, such as specific organs or lesions, which allows the use of customized algorithms that take advantage of training data to provide orientation and anatomical context for the segmentation targets. In contrast, baggage screening requires the segmentation algorithm to segment an unspecified number of objects with enormous variability in size, shape, appearance and spatial context. Furthermore, security systems demand 3D segmentation algorithms that can quickly and reliably detect threats. To address this problem, we present a segmentation algorithm for 3D CT images that makes no assumptions on the number of objects in the image or on the composition of these objects. The algorithm features a new Automatic QUality Measure (AQUA) model that measures the segmentation confidence for any single object (from any segmentation method) and uses this confidence measure both to control splitting and to optimize the segmentation parameters at runtime for each dataset. The algorithm is tested on 27 bags that were packed with a large variety of different objects.
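A minimal sketch of confidence-driven splitting in the spirit described above, on a 1D toy signal (the quality measure here is a hand-written homogeneity score, the split heuristic is the sharpest intensity edge, and all thresholds are illustrative assumptions; the actual AQUA model is a learned confidence measure, not shown here): a region is recursively split until every piece scores above a quality threshold.

```python
import numpy as np

def quality(region):
    # Toy stand-in for a per-object quality measure: intensity homogeneity
    # (1.0 = perfectly uniform). Hypothetical, not the AQUA model itself.
    return 1.0 / (1.0 + np.var(region))

def split_segment(f, lo, hi, q_min=0.5, min_size=4):
    """Recursively split [lo, hi) until each piece scores above q_min."""
    region = f[lo:hi]
    if quality(region) >= q_min or hi - lo < 2 * min_size:
        return [(lo, hi)]
    # Split at the sharpest intensity edge, kept away from the ends.
    cut = lo + int(np.argmax(np.abs(np.diff(region)))) + 1
    cut = min(max(cut, lo + min_size), hi - min_size)
    return (split_segment(f, lo, cut, q_min, min_size)
            + split_segment(f, cut, hi, q_min, min_size))

# Three "objects" of distinct intensity fused into one initial region.
f = np.concatenate([np.zeros(20), np.full(20, 3.0), np.full(20, 6.0)])
segs = split_segment(f, 0, len(f))
print(segs)
```

The point of the design is that the same confidence score serves two roles: it decides when a candidate object must be split further, and it gives a runtime objective for tuning segmentation parameters per dataset.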