Molecular analysis at the single-cell level is a rapidly growing field in the life sciences. While bulk analysis of a pool of cells provides a general molecular profile, it is blind to heterogeneity between individual cells. This heterogeneity, however, is an inherent property of every cell population, and its analysis is fundamental to understanding the development, function, and role of cells of the same genotype that display different phenotypic properties. Single-cell mass spectrometry (MS) aims to provide broad molecular information for a sufficiently large number of cells to help decipher cellular heterogeneity through statistical analysis. Here, we present a sensitive approach to single-cell MS based on high-resolution MALDI-2-MS imaging in combination with MALDI-compatible staining and optical microscopy. Our approach allowed the analysis of large numbers of unperturbed cells directly from the growth chamber. Confident coregistration of the two modalities enabled a reliable compilation of single-cell mass spectra and a straightforward inclusion of both optical and mass spectrometric features in the interpretation of the data. The resulting multimodal datasets permit the use of statistical methods such as machine learning–driven classification and multivariate analysis based on molecular profiles, and they establish a direct link between MS data and microscopy information for individual cells. Displaying the data as histograms of individual signal intensities helps to investigate the heterogeneous expression of specific lipids within the cell culture and to identify subpopulations intuitively. Ultimately, t-MALDI-2-MSI measurements at 2-µm pixel sizes deliver a glimpse of intracellular lipid distributions and reveal molecular profiles for subcellular domains.
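For illustration only, the sketch below shows how per-cell intensities of a single lipid feature might be compiled and displayed as a histogram to expose subpopulations, as described in the abstract. The variable names (`cell_intensities`, `feature_index`) and the simulated two-population data are assumptions for this example, not data or code from the study.

```python
# Hypothetical sketch: inspecting one lipid signal across compiled single-cell spectra.
# `cell_intensities` is an assumed (n_cells x n_features) matrix of per-cell signal
# intensities; `feature_index` marks one m/z feature of interest.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
# Simulated data standing in for compiled single-cell intensities (two subpopulations).
cell_intensities = np.concatenate([
    rng.normal(1.0, 0.2, size=(300, 1)),   # subpopulation with low expression
    rng.normal(2.5, 0.3, size=(150, 1)),   # subpopulation with high expression
])
feature_index = 0

plt.hist(cell_intensities[:, feature_index], bins=40, color="steelblue")
plt.xlabel("signal intensity (arb. units)")
plt.ylabel("number of cells")
plt.title("Per-cell intensity of one lipid feature")
plt.show()
```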
Heading estimation from optic flow is crucial for safe locomotion but becomes inaccurate when independent object motion is present. In ecological settings, such motion typically comes from other animals or humans walking across the scene. An independently walking person produces a local disturbance of the flow field, and this disturbance moves across the field as the walker traverses the scene. Is the bias in heading estimation produced by the local disturbance of the flow field or by the movement of the walker through the scene? We present a novel flow field stimulus in which the local flow disturbance and the movement of the walker can be pitted against each other. Each frame of this stimulus consists of a structureless random dot distribution. Across frames, the body shape of a walker is delineated by presenting different flow dynamics within and outside the body shape. In different experimental conditions, the flow within the body shape can be congruent with the walker's movement, incongruent with it, or congruent with the background flow. We show that heading inaccuracy results from the local flow disturbance rather than from the movement through the scene. Moreover, we show that the local disturbances of the optic flow can be used to segment the walker and support biological motion perception to some degree. The dichotomous result that the walker can be segmented from the scene but that heading perception is nonetheless influenced by the flow the walker produces confirms separate visual pathways for heading estimation, object segmentation, and biological motion perception.
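As a rough illustration of the stimulus logic (not the authors' implementation), the sketch below assigns different flow vectors to random dots depending on whether they fall inside a walker silhouette on a given frame. An ellipse stands in for the body shape, and simple translational vectors stand in for the background flow and the walker-congruent flow; all names and parameter values are assumptions.

```python
# Minimal sketch: per-frame dot displacements depend on whether a dot lies inside
# the walker's silhouette. Every frame remains a structureless random-dot field.
import numpy as np

rng = np.random.default_rng(1)
n_dots = 2000
dots = rng.uniform(-1.0, 1.0, size=(n_dots, 2))        # dot positions in a unit frame

walker_center = np.array([0.2, 0.0])                    # assumed silhouette position
inside = (((dots - walker_center) / np.array([0.15, 0.4])) ** 2).sum(axis=1) < 1.0

background_flow = np.array([-0.01, 0.0])                # flow induced by self-motion
walker_flow = np.array([0.02, 0.0])                     # congruent / incongruent / background

displacement = np.where(inside[:, None], walker_flow, background_flow)
next_dots = dots + displacement                          # dot positions on the next frame
```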
Nonrigid materials such as liquids or smoke deform over time. Little is known about the visual perception of nonrigid motion, other than that many motion cues associated with rigid motion perception are unreliable for nonrigid motion. Nonrigid motion patterns lack clear borders, and their movement can be inconsistent with the motion of their parts. We developed a novel stimulus that creates a nonrigid vortex motion pattern in a random dot distribution and decouples the movement of the vortex from the first-order motion of the dots. We presented three moving vortices that contained progressively fewer motion cues, eliminating occlusion, motion borders, and velocity field gradients in turn. Subjects were able to report the end position and travel path reliably in all cases, showing that nonrigid motion is perceived through an analysis of the temporal evolution of visual motion patterns and does not require borders or speed differences. Adding a coherent global motion did not hamper perception, but adding local noise did, indicating that the visual system uses mid-level features operating on a local scale. We also found that participants judged the movement of the nonrigid motion patterns to be slower than that of a rigid control, revealing that speed perception was based on a combination of the motion of the parts and the movement of the pattern. We propose that the visual system uses the temporal evolution of a motion pattern for the perception of nonrigid motion and suggest a plausible mechanism based on the curl of the motion field.
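The suggested curl-based mechanism can be illustrated with a short sketch: for a two-dimensional velocity field (u, v), the scalar curl ∂v/∂x − ∂u/∂y peaks at the vortex center, so tracking that peak over time would recover the vortex's travel path. The analytic vortex field below is an assumed stand-in for the stimulus, not the authors' model.

```python
# Hedged illustration of a curl-based vortex localizer on a synthetic velocity field.
import numpy as np

x, y = np.meshgrid(np.linspace(-1, 1, 200), np.linspace(-1, 1, 200))
cx, cy = 0.3, -0.2                                      # assumed vortex center
dx, dy = x - cx, y - cy
r2 = dx**2 + dy**2 + 1e-6
u = -dy * np.exp(-r2 / 0.05)                            # velocity components of a
v = dx * np.exp(-r2 / 0.05)                             # localized rotational flow

# np.gradient returns derivatives along axis 0 (rows, y) and axis 1 (columns, x).
dv_dy, dv_dx = np.gradient(v, y[:, 0], x[0, :])
du_dy, du_dx = np.gradient(u, y[:, 0], x[0, :])
curl = dv_dx - du_dy

peak = np.unravel_index(np.argmax(curl), curl.shape)
print("estimated vortex center:", x[peak], y[peak])     # close to (0.3, -0.2)
```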