Background: Ultrasound measurement of dynamic changes in inferior vena cava (IVC) diameter can be used to assess intravascular volume status in critically ill patients, but published studies vary in both accuracy and recommended diagnostic cutoffs. Part of this variability may be related to movement of the vessel relative to the transducer during the respiratory cycle, which results in the unintended comparison of different points of the IVC at end-expiration and end-inspiration and may introduce error related to variations in normal anatomy. The objective of this study was to quantify the craniocaudal and mediolateral movements of the IVC, as well as the vessel's axis of collapse, during respirophasic ultrasound imaging.

Methods: Patients were enrolled from a single urban academic emergency department, with ultrasound examinations performed by sonographers experienced in IVC ultrasound. The IVC was imaged from the level of the diaphragm along its entire course to its bifurcation, with diameter and respiratory collapse measured at a single point inferior to the confluence of the hepatic veins. While imaging the vessel in its long axis, craniocaudal movement during respiration was measured by tracking the movement of a fixed point across the field of view. Likewise, imaging the short axis of the IVC allowed measurement of mediolateral displacement as well as the vessel's angle of collapse relative to vertical.

Results: Seventy patients were enrolled over a 6-month period. The average diameter of the IVC was 13.8 mm (95% CI 8.41 to 19.2 mm), with a mean respiratory collapse of 34.8% (95% CI 19.5% to 50.2%). Movement of the vessel relative to the transducer occurred in both the mediolateral and craniocaudal directions and was greater in the craniocaudal direction (21.7 mm) than in the mediolateral direction (3.9 mm, p < 0.001). The angle of collapse assessed in the transverse plane averaged 115° (95% CI 112° to 118°).

Conclusions: The IVC moves in both the mediolateral and craniocaudal directions during respirophasic ultrasound imaging. Furthermore, the vessel collapses not at true vertical (90°) but 25° off this axis. The technical approach to IVC assessment should be tailored to account for these factors.
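For readers unfamiliar with how a respiratory collapse percentage like the one above is typically derived, the following Python sketch computes the standard IVC collapsibility (caval) index. The example diameters are hypothetical and chosen only to roughly reproduce the mean collapse reported above; they are not taken from the study data.

```python
def collapsibility_index(d_expiration_mm, d_inspiration_mm):
    """Percent respiratory collapse of the IVC (caval index).

    Assumes the standard definition (D_max - D_min) / D_max * 100, with the
    maximum diameter at end-expiration and the minimum at end-inspiration.
    """
    return (d_expiration_mm - d_inspiration_mm) / d_expiration_mm * 100.0

# Hypothetical diameters: a 13.8 mm IVC narrowing to 9.0 mm on inspiration
# gives roughly the 34.8% mean collapse reported in the abstract.
print(f"{collapsibility_index(13.8, 9.0):.1f}%")  # -> 34.8%
```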
A core goal of visual neuroscience is to predict human perceptual performance from natural signals. Performance in any natural task can be limited by at least three sources of uncertainty: stimulus variability, internal noise, and suboptimal computations. Determining the relative importance of these factors has been a focus of interest for decades but requires methods for predicting the fundamental limits imposed by stimulus variability on sensory-perceptual precision. Most successes have been limited to simple stimuli and simple tasks. But perception science ultimately aims to understand how vision works with natural stimuli. Successes in this domain have proven elusive. Here, we develop a model of human observers based on an image-computable (images in, estimates out) Bayesian ideal observer. Given biological constraints, the ideal observer optimally uses the statistics relating local intensity patterns in moving images to speed, specifying the fundamental limits imposed by natural stimuli. Next, we propose a theoretical link between two key decision-theoretic quantities that suggests how to experimentally disentangle the impacts of internal noise and deterministic suboptimal computations. In several interlocking discrimination experiments with three male observers, we confirm this link and determine the quantitative impact of each candidate performance-limiting factor. Human performance is near-exclusively limited by natural stimulus variability and internal noise, and humans use near-optimal computations to estimate speed from naturalistic image movies. The findings indicate that the partition of behavioral variability can be predicted from a principled analysis of natural images and scenes. The approach should be extendable to studies of neural variability with natural signals.
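One conventional way to formalize how the performance-limiting factors above can be disentangled is a signal-detection sketch in which the decision variable on each trial is a stimulus-driven component plus internal noise; discrimination sensitivity and the decision-variable correlation across repeated presentations then depend on the two variance sources in complementary ways. The Python snippet below is illustrative only: the symbols and numbers are assumptions, not the paper's fitted quantities.

```python
import numpy as np

# Illustrative partition of decision-variable variance (hypothetical numbers).
# Assume the decision variable is a stimulus-driven speed estimate (variance set
# by natural-stimulus variability) plus internal noise.
delta_mu  = 1.0   # hypothetical mean decision-variable difference between speeds
var_stim  = 0.5   # variance attributable to natural stimulus variability
var_noise = 0.3   # variance attributable to internal noise

# Discrimination sensitivity: signal difference over total decision-variable SD.
d_prime = delta_mu / np.sqrt(var_stim + var_noise)

# Decision-variable correlation across repeated presentations of the same movie:
# the stimulus-driven component repeats, the internal noise does not.
rho = var_stim / (var_stim + var_noise)

print(f"d' = {d_prime:.2f}, decision-variable correlation = {rho:.2f}")
```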
Temporal differences in visual information processing between the eyes can cause dramatic misperceptions of motion and depth. Processing delays between the eyes cause the Pulfrich effect: oscillating targets in the frontal plane are misperceived as moving along near-elliptical motion trajectories in depth (Pulfrich, 1922). Here, we explain a previously reported but poorly understood variant: the anomalous Pulfrich effect. When this variant is perceived, the illusory motion trajectory appears oriented left- or right-side back in depth, rather than aligned with the true direction of motion. Our data indicate that this perceived misalignment is due to interocular differences in neural temporal integration periods, as opposed to interocular differences in delay. For oscillating motion, differences in the duration of temporal integration dampen the effective motion amplitude in one eye relative to the other. In a dynamic analog of the Geometric effect in stereo-surface-orientation perception (Ogle, 1950), the different motion amplitudes cause the perceived misorientation of the motion trajectories. Forced-choice psychophysical experiments, conducted both with different spatial frequencies and with different onscreen motion damping in the two eyes, show that the perceived misorientation in depth is associated with the eye that has the greater motion damping. A target-tracking experiment provides more direct evidence that the anomalous Pulfrich effect is caused by interocular differences in temporal integration and delay. These findings highlight the computational hurdles posed to the visual system by temporal differences in sensory processing. Future work will explore how the visual system overcomes these challenges to achieve accurate perception.
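As a rough illustration of the geometry described above, the Python sketch below simulates a target oscillating in the frontal plane and treats perceived depth as simply proportional to the instantaneous interocular disparity. A pure interocular delay produces the classic near-elliptical Pulfrich path lying in the frontal plane, whereas motion damping in one eye rotates the path left- or right-side back, as in the anomalous variant. All parameter values and the proportional depth mapping are illustrative assumptions, not the study's stimuli or model.

```python
import numpy as np

# Minimal simulation of classic vs. anomalous Pulfrich geometry (illustrative).
t     = np.linspace(0.0, 1.0, 1000)   # one second of oscillation (s)
freq  = 1.0                            # oscillation frequency (Hz)
amp   = 1.0                            # onscreen motion amplitude (a.u.)
delay = 0.01                           # interocular delay: left-eye image lags (s)
damp  = 0.8                            # relative motion damping in the left eye

def trajectory(delay_s, damping):
    """Effective motion path given a left-eye delay and/or motion damping."""
    x_left  = damping * amp * np.sin(2 * np.pi * freq * (t - delay_s))
    x_right = amp * np.sin(2 * np.pi * freq * t)
    lateral = 0.5 * (x_left + x_right)   # perceived lateral position
    depth   = x_left - x_right           # depth ~ disparity (sign convention arbitrary)
    return lateral, depth

# Classic Pulfrich: pure delay -> near-elliptical path aligned with the frontal plane.
x_c, z_c = trajectory(delay, 1.0)
# Anomalous variant: motion damping in one eye -> path rotated in depth.
x_a, z_a = trajectory(0.0, damp)

for name, x, z in [("delay only", x_c, z_c), ("damping only", x_a, z_a)]:
    tilt = np.degrees(np.arctan(np.polyfit(x, z, 1)[0]))  # slope of best-fit line
    print(f"{name}: path tilted {tilt:.1f} deg out of the frontal plane")
```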
Learned visual categorical perception (CP) effects were assessed using three different measures (similarity rating, same-different judgment, and an XAB task) and two sets of stimuli differing in discriminability and varying on one category-relevant and one category-irrelevant dimension. Participant scores were converted to a common scale to allow the assessment method to serve as an independent variable. Two different analyses using the Bayes Factor approach produced patterns of results consistent with learned CP effects: compared to a control group, participants trained on the category distinction could better discriminate between-category pairs of stimuli and were more sensitive to the category-relevant dimension. In addition, performance was better in general for the more highly discriminable stimuli, but stimulus discriminability did not influence the pattern of observed CP effects. Furthermore, these results were consistent regardless of how performance was assessed. This suggests that, for these methods at least, learned CP effects are robust across substantially different performance measures. Four different kinds of learned CP effects are reported in the literature singly or in combination: greater sensitivity between categories, reduced sensitivity within categories, increased sensitivity to category-relevant dimensions, and decreased sensitivity to category-irrelevant dimensions. The results of the current study suggest that these different patterns of CP effects are not due to either stimulus discriminability or assessment task. Other possible causes of the differences in reported CP findings are discussed.
A core goal of visual neuroscience is to be able to predict human perceptual performance from natural signals. In principle, performance in any natural task can be limited by at least three sources of uncertainty: stimulus variability, internal noise, and sub-optimal computations. Determining the relative importance of these factors has been a focus of interest for decades, but most successes have been achieved with simple tasks and simple stimuli. Drawing quantitative links directly from natural signals to perceptual performance has proven a substantial challenge. Here, we develop an image-computable Bayesian ideal observer that makes optimal use of the statistics relating image movies to speed. We then use this ideal observer to predict and model the behavioral signatures of each performance-limiting factor, and test the predictions in an interlocking series of speed discrimination experiments with naturalistic image movies. The critical experiment presents each unique trial twice. A model observer, based on the ideal and constrained by the previous experiments, tightly predicts human response consistency across repeated presentations without free parameters. This result implies that human observers use near-optimal computations, and that human performance is near-exclusively limited by natural stimulus variability and internal noise. The results demonstrate that human performance can be tightly predicted from a task-specific statistical analysis of naturalistic stimuli, show that image-computable ideal observer analysis can be generalized from simple to natural stimuli, and encourage similar analyses in other domains.
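The double-pass logic described above can be illustrated with a small simulation: the stimulus-driven part of the decision variable repeats across the two presentations of the same movie while internal noise does not, so sensitivity and response agreement jointly constrain the two variance sources. The numbers below are hypothetical, and the decision rule is a generic signal-detection sketch rather than the paper's constrained model observer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative double-pass simulation (hypothetical parameter values).
n_trials   = 2000    # unique stimulus movies, each shown twice
delta_mu   = 1.0     # mean decision-variable difference between the two speeds
sigma_stim = 0.7     # SD of the stimulus-driven component (repeats across passes)
sigma_int  = 0.5     # SD of internal noise (fresh on every presentation)

# Stimulus-driven component is fixed per unique movie; internal noise is not.
stim_component = rng.normal(delta_mu, sigma_stim, n_trials)
pass1 = stim_component + rng.normal(0.0, sigma_int, n_trials)
pass2 = stim_component + rng.normal(0.0, sigma_int, n_trials)

# Binary speed judgments ("comparison faster") on each pass, and their agreement.
resp1, resp2 = pass1 > 0, pass2 > 0
agreement = np.mean(resp1 == resp2)

# The same variance components also fix discrimination sensitivity.
d_prime = delta_mu / np.hypot(sigma_stim, sigma_int)
print(f"d' = {d_prime:.2f}, double-pass agreement = {agreement:.3f}")
```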