Purpose To characterize intraocular pressure (IOP) dynamics by identifying the sources of transient IOP fluctuations and quantifying their frequency, magnitude, associated cumulative IOP-related mechanical energy, and temporal distribution. Methods IOP was monitored at 500 Hz for periods of 16 to 451 days in nine normal eyes of six conscious, unrestrained nonhuman primates using a validated, fully implanted wireless telemetry system. IOP transducers were calibrated every two weeks via anterior chamber cannulation manometry. Analysis of time-synchronized, high-definition video was used to identify the sources of transient IOP fluctuations. Results The distribution of IOP in individual eyes is broad and changes at multiple timescales, from second-to-second to day-to-day. Transient IOP fluctuations arise from blinks, saccades, and ocular pulse amplitude, and reach as high as 14 mm Hg (>100%) above the momentary baseline. Transient IOP fluctuations occur ∼10,000 times per waking hour, with ∼2000 to 5000 fluctuations per hour greater than 5 mm Hg (∼40%) above baseline, and account for up to 17% (mean of 12%) of the total cumulative IOP-related mechanical energy that the eye must withstand during waking hours. Conclusions Transient IOP fluctuations occur frequently and constitute a substantial portion of the total IOP loading in the eye. They should therefore be considered in future studies of cell mechanotransduction, ocular biomechanics, and clinical outcomes in which transient IOP fluctuations may be important. If IOP dynamics are similar in humans, clinical snapshot IOP measurements are insufficient to capture true IOP.
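The core quantities described in this abstract, counts of transient fluctuations relative to a momentary baseline and their share of the cumulative pressure-time burden, can be sketched in a few lines of Python. This is not the authors' published pipeline: the rolling-median baseline, the window length, the detection threshold, and the function name summarize_iop are illustrative assumptions.

```python
# Minimal sketch (not the authors' published pipeline): detect transient
# IOP fluctuations relative to a momentary baseline and estimate how much
# of the pressure-time integral they contribute, from a 500 Hz trace.
import numpy as np
import pandas as pd

FS = 500  # sampling rate (Hz), as described for the telemetry system

def summarize_iop(iop_mmhg, baseline_window_s=1.0, threshold_mmhg=5.0):
    """Count fluctuations above `threshold_mmhg` and estimate their share
    of the cumulative pressure-time integral. The rolling-median baseline
    and window length are illustrative assumptions."""
    iop = pd.Series(np.asarray(iop_mmhg, dtype=float))
    # Momentary baseline: rolling median over a short window (assumption).
    baseline = iop.rolling(int(FS * baseline_window_s), center=True,
                           min_periods=1).median()
    excursion = (iop - baseline).to_numpy()

    # A fluctuation is a contiguous run of samples above the threshold;
    # count its onsets (rising edges).
    above = excursion > threshold_mmhg
    n_fluct = int(np.count_nonzero(above[1:] & ~above[:-1]) + above[0])

    # Pressure-time integrals in mmHg*s: total trace vs. the excess above
    # baseline (a proxy for the "transient" share of IOP energy).
    dt = 1.0 / FS
    total_impulse = float(iop.sum() * dt)
    transient_impulse = float(np.clip(excursion, 0, None).sum() * dt)

    return {"fluctuations": n_fluct,
            "peak_excursion_mmhg": float(excursion.max()),
            "transient_share": transient_impulse / total_impulse}
```

Applied hour by hour to a telemetry trace, a summary of this kind would yield the per-hour fluctuation counts and energy fractions reported in the Results, under the stated baseline and threshold assumptions.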
Background and Aims Therapeutic, clinical trial entry, and stratification decisions for hepatocellular carcinoma (HCC) are based on prognostic assessments using clinical staging systems built from small numbers of empirically selected variables that insufficiently account for differences in the biological characteristics of individual patients' disease. Approach and Results We propose an approach for constructing risk scores from circulating biomarkers that produce a global biological characterization of an individual patient's disease. Plasma samples were collected prospectively from 767 patients with HCC and 200 controls, and 317 proteins were quantified in a Clinical Laboratory Improvement Amendments–certified biomarker testing laboratory. We constructed a circulating biomarker aberration score for each patient, a score between 0 and 1 that measures the degree of aberration of his or her biomarker panel relative to normal, which we call HepatoScore. We used log‐rank tests to assess its ability to substratify patients within existing staging systems/prognostic factors. To enhance clinical application, we constructed a single‐sample score, HepatoScore‐14, which requires only a subset of 14 representative proteins encompassing the global biological effects. Patients with HCC were split into three distinct groups (low, medium, and high HepatoScore) with vastly different prognoses (median overall survival 38.2/18.3/7.1 months; P < 0.0001). Furthermore, HepatoScore accurately substratified patients within levels of existing prognostic factors and staging systems (P < 0.0001 for nearly all), providing substantial and sometimes dramatic refinement of expected patient outcomes with strong therapeutic implications. These results were recapitulated by HepatoScore‐14, rigorously validated in repeated training/test splits, concordant across Myriad RBM (Austin, TX) and enzyme‐linked immunosorbent assay kits, and established as an independent prognostic factor. Conclusions HepatoScore‐14 augments existing HCC staging systems, dramatically refining patient prognostic assessments and therapeutic decision making and enrollment in clinical trials. The underlying strategy provides a global biological characterization of disease and can be applied broadly to other disease settings and biological media.
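As a rough illustration of how a 0-to-1 aberration score can substratify survival, the sketch below groups patients by score and runs a log-rank test with the lifelines package, the same class of test named in the abstract. The input file, column names, and cut-points are hypothetical; the actual construction of HepatoScore from the 317-protein panel is not reproduced here.

```python
# Illustrative sketch only: substratify patients by a 0-1 biomarker
# aberration score and compare survival across the resulting groups with
# a log-rank test. Cut-points, file, and column names are hypothetical.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import multivariate_logrank_test

# Hypothetical cohort table with columns:
#   hepatoscore (0-1), os_months (follow-up time), event (1 = death observed)
df = pd.read_csv("hcc_cohort.csv")

# Split into low / medium / high score groups (illustrative cut-points).
df["score_group"] = pd.cut(df["hepatoscore"], bins=[0, 0.33, 0.67, 1.0],
                           labels=["low", "medium", "high"],
                           include_lowest=True)

# Log-rank test across the three groups, analogous to the tests used to
# assess substratification within existing staging systems.
result = multivariate_logrank_test(df["os_months"], df["score_group"],
                                   df["event"])
print(f"log-rank p = {result.p_value:.2e}")

# Median overall survival per group via Kaplan-Meier fits.
for name, grp in df.groupby("score_group", observed=True):
    kmf = KaplanMeierFitter().fit(grp["os_months"], grp["event"], label=name)
    print(name, "median OS (months):", kmf.median_survival_time_)
```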
Background The COVID-19 pandemic has caused major health and socio-economic disruptions worldwide. Accurate investigation of emerging data is crucial to inform policy makers as they construct viral mitigation strategies. Complications such as variable testing rates and time lags in counting cases, hospitalizations, and deaths make it challenging to accurately track and identify true infectious surges from available data and require a multimodal approach that simultaneously considers testing, incidence, hospitalizations, and deaths. Although many websites and applications report a subset of these data, none of them provide graphical displays capable of comparing different states or countries on all these measures as well as various useful quantities derived from them. Here we introduce a freely available dynamic representation tool, COVID-TRACK, that allows the user to simultaneously assess time trends in these measures and compare various states or countries, equipping users with a tool to investigate the potential effects of the different mitigation strategies and timelines used by various jurisdictions. Findings COVID-TRACK is a Python-based web application that provides a platform for tracking testing, incidence, hospitalizations, and deaths related to COVID-19, along with various derived quantities. Our application makes comparisons across states in the USA and countries around the world easy to explore, with useful transformation options including per capita, log scale, and/or moving averages. We illustrate its use by assessing various viral trends in the USA and Europe. Conclusion The COVID-TRACK web application is a user-friendly analytical tool to compare data and trends related to the COVID-19 pandemic across areas in the United States and worldwide. Our tracking tool provides a unique platform where trends can be monitored across geographical areas to watch how the pandemic waxes and wanes over time at different locations around the USA and the globe.
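The transformation options the abstract lists (per capita, log scale, moving averages) correspond to standard dataframe operations; the sketch below shows their general shape in pandas. This is not the application's source code, and the example data, column names, and population figure are placeholders for illustration.

```python
# Sketch of the kinds of transformations COVID-TRACK exposes (per capita,
# log scale, moving averages); not the application's actual code.
import numpy as np
import pandas as pd

cases = pd.DataFrame({
    "date": pd.date_range("2020-03-01", periods=10, freq="D"),
    "state": "NY",
    "new_cases": [120, 340, 310, 560, 480, 610, 720, 650, 800, 760],
})
populations = {"NY": 19_450_000}  # illustrative population figure

# Per-capita view (cases per 100,000 residents).
cases["per_100k"] = cases["new_cases"] / cases["state"].map(populations) * 1e5
# 7-day moving average to smooth reporting artifacts.
cases["avg_7d"] = cases["new_cases"].rolling(7, min_periods=1).mean()
# Log scale (clipped at 1 to avoid log of zero).
cases["log10_cases"] = np.log10(cases["new_cases"].clip(lower=1))
print(cases.tail())
```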
The optimal approach for continuous measurement of intraocular pressure (IOP), including pressure transducer location and measurement frequency, is currently unknown. This study assessed the capability of extraocular (EO) and intraocular (IO) pressure transducers, using different IOP sampling rates and duty cycles, to characterize IOP dynamics. Transient IOP fluctuations were measured and quantified in 7 eyes of 4 male rhesus macaques (nonhuman primates, NHPs) using the Konigsberg EO system (continuous at 500 Hz), 12 eyes of 8 NHPs with the Stellar EO system, and 16 eyes of 12 NHPs with the Stellar IO system (both of which measure at 200 Hz for 15 s of every 150 s period). IOP transducers were calibrated every two weeks via anterior chamber manometry. Linear mixed-effects models assessed differences in hourly transient IOP impulse and in transient IOP fluctuation frequency and magnitude between systems and transducer placements (EO versus IO). All systems measured 8000–12,000 transient IOP fluctuations > 0.6 mmHg per hour during waking hours and 5000–6500 per hour during sleep, representing 8–16% and 4–8%, respectively, of the total IOP energy the eye must withstand. Differences between sampling frequency/duty cycle regimens and between transducer placements were statistically significant (p < 0.05), but the effect sizes were small and not clinically meaningful. IOP dynamics can be accurately captured by sampling IOP at 200 Hz on a 10% duty cycle using either IO or EO transducers.
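To make the duty-cycle comparison concrete, the sketch below reduces a continuous trace to the Stellar systems' schedule (samples from the first 15 s of every 150 s period) and counts threshold crossings under both schemes. The synthetic trace, the global-median baseline, and the simple edge-count detector are assumptions for illustration, not the study's analysis code.

```python
# Sketch (with stated assumptions) of reducing a continuous IOP trace to a
# 10% duty cycle -- the first 15 s of every 150 s period -- and comparing
# hourly fluctuation counts between the two sampling schemes.
import numpy as np

FS = 500                      # Hz, continuous sampling rate
PERIOD_S, ON_S = 150, 15      # 15 s of every 150 s (10% duty cycle)

def duty_cycle_mask(n_samples, fs=FS):
    """Boolean mask selecting the 'on' windows of the duty cycle."""
    t = np.arange(n_samples) / fs
    return (t % PERIOD_S) < ON_S

def count_fluctuations(iop, threshold=0.6):
    """Count rising edges above a global-median baseline (a crude stand-in
    for the momentary baseline used in the telemetry studies)."""
    above = (iop - np.median(iop)) > threshold
    return int(np.count_nonzero(above[1:] & ~above[:-1]) + above[0])

# Synthetic one-hour trace: 15 mmHg baseline, ocular-pulse-like oscillation,
# noise, and blink-like spikes. Purely illustrative, not real telemetry data.
rng = np.random.default_rng(0)
t = np.arange(3600 * FS) / FS
iop = 15 + 0.4 * np.sin(2 * np.pi * 1.5 * t) + rng.normal(0, 0.1, t.size)
iop[rng.choice(t.size, 5000, replace=False)] += rng.uniform(1, 10, 5000)

on = duty_cycle_mask(t.size)
hourly_full = count_fluctuations(iop)
# Duty-cycled estimate, scaled back up by the inverse duty cycle. (The
# 500 -> 200 Hz resampling step is omitted here for brevity.)
hourly_duty = count_fluctuations(iop[on]) * (PERIOD_S / ON_S)
print(hourly_full, hourly_duty)
```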