Background: Axial involvement constitutes a specific domain of psoriatic arthritis (PsA). Interleukin (IL)-23 inhibitors have demonstrated improvement in axial PsA (axPsA) symptoms but have not shown efficacy in ankylosing spondylitis (AS), suggesting that axPsA and AS may differ in underlying disease processes and treatment responses. In a post hoc, pooled analysis of patients with investigator- and imaging-confirmed sacroiliitis in two phase 3, randomized, placebo-controlled studies (DISCOVER-1 and DISCOVER-2), patients treated with guselkumab, an IL-23p19 inhibitor, had greater axial symptom improvements than those receiving placebo. Confirmatory imaging at baseline was restricted to the sacroiliac (SI) joints, occurred prior to or at screening, and was read locally.

Methods: The STAR study will prospectively assess efficacy outcomes in PsA patients with magnetic resonance imaging (MRI)-confirmed axial inflammation. Eligible biologic-naïve patients (N = 405) with PsA for ≥ 6 months and active disease (≥ 3 swollen and ≥ 3 tender joints, C-reactive protein [CRP] ≥ 0.3 mg/dL) despite prior non-biologic disease-modifying antirheumatic drugs, apremilast, and/or nonsteroidal anti-inflammatory drugs will be randomized (1:1:1) to guselkumab every 4 weeks (Q4W); guselkumab at week (W) 0, W4, then every 8 weeks (Q8W); or placebo with crossover to guselkumab at W24, W28, then Q8W. Patients will have a Bath Ankylosing Spondylitis Disease Activity Index (BASDAI) score ≥ 4, a spinal pain component score (0–10 visual analog scale) ≥ 4, and screening MRI-confirmed axial involvement (positive spine and/or SI joints according to a centrally read Spondyloarthritis Research Consortium of Canada [SPARCC] score ≥ 3 in ≥ 1 region). The primary endpoint is mean change from baseline in BASDAI at W24; multiplicity-controlled secondary endpoints at W24 include AS Disease Activity Score employing CRP (ASDAS), Disease Activity Index for PsA (DAPSA), Health Assessment Questionnaire – Disability Index (HAQ-DI), Investigator’s Global Assessment of skin disease (IGA), and mean changes from baseline in MRI SI joint SPARCC scores. Centrally read MRIs of the spine and SI joints (scored using SPARCC) will be obtained at W0, W24, and W52, with readers blinded to treatment group and timepoint. Treatment group comparisons will be performed using a Cochran-Mantel-Haenszel or chi-square test for binary endpoints and analysis of covariance, mixed models for repeated measures, or constrained longitudinal data analysis for continuous endpoints.

Discussion: This study will evaluate the ability of guselkumab to reduce both axial symptoms and inflammation in patients with active PsA.

Trial registration: ClinicalTrials.gov, NCT04929210, registered on 18 June 2021. Protocol version: Version 1.0 dated 14 April 2021.
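As an illustration of the continuous-endpoint comparisons named above, the following is a minimal sketch of an analysis-of-covariance model for change from baseline in BASDAI at W24, adjusting for the baseline score. The data frame, column names, and group labels are hypothetical; the actual STAR analysis plan additionally specifies mixed models for repeated measures, constrained longitudinal data analysis, and multiplicity control.

```python
# Illustrative sketch only: ANCOVA-style comparison of week 24 change from
# baseline in BASDAI between treatment groups, adjusting for baseline BASDAI.
# The dataset and column names below are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical analysis dataset: one row per patient.
df = pd.DataFrame({
    "basdai_change_w24": [-2.1, -1.8, -0.4, -2.5, -0.7, -1.9],
    "basdai_baseline":   [6.2, 5.8, 6.9, 7.1, 6.0, 6.4],
    "treatment":         ["Q4W", "Q8W", "PBO", "Q4W", "PBO", "Q8W"],
})

# Change from baseline modelled on treatment group, with placebo as reference.
model = smf.ols(
    "basdai_change_w24 ~ C(treatment, Treatment(reference='PBO')) + basdai_baseline",
    data=df,
).fit()
print(model.summary())
```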
The transition from manual to robotic high throughput screening (HTS) in the last few years has made it feasible to screen hundreds of thousands of chemical entities against a biological target in less than a month. This rate of HTS has increased the visibility of bottlenecks, one of which is assay optimization. In many organizations, experimental methods are generated by therapeutic teams associated with specific targets and passed on to the HTS group. The resulting assays frequently need to be further optimized to withstand the rigors and time frames inherent in robotic handling. Issues such as protein aggregation, ligand instability, and cellular viability are common variables in the optimization process. The availability of robotics capable of performing rapid random access tasks has made it possible to design optimization experiments that would be either very difficult or impossible for a person to carry out. Our approach to reducing the assay optimization bottleneck has been to unify the highly specific fields of statistics, biochemistry, and robotics. The product of these endeavors is a process we have named automated assay optimization (AAO). This has enabled us to determine final optimized assay conditions, which are often a composite of variables that we would not have arrived at by examining each variable independently. We have applied this approach to both radioligand binding and enzymatic assays and have realized benefits in both time and performance that we would not have predicted a priori. The fully developed AAO process encompasses the ability to download information to a robot and have liquid handling methods automatically created. This evolution in smart robotics has proven to be an invaluable tool for maintaining high-quality data in the context of increasing HTS demands.
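To make the idea of a composite optimum concrete, the sketch below scores assay performance over a full-factorial grid of conditions and selects the best combination, rather than varying one factor at a time. The variables, levels, and scoring function are hypothetical placeholders, not the published AAO process; in practice the robot would dispense each condition and the score would come from the plate reader.

```python
# Minimal sketch of composite condition optimization over a factorial grid.
# Variable names, levels, and the toy scoring function are assumptions.
from itertools import product

# Hypothetical assay variables and levels to be dispensed by the robot.
variables = {
    "buffer_pH":    [6.5, 7.0, 7.4, 8.0],
    "NaCl_mM":      [50, 100, 150],
    "DMSO_percent": [0.5, 1.0, 2.0],
}

def signal_to_background(condition):
    """Placeholder for the measured readout at this condition; in a real run
    this value would come from the plate reader after robotic liquid handling."""
    ph, nacl, dmso = condition
    return 10 - abs(ph - 7.4) * 3 - abs(nacl - 100) * 0.02 - dmso  # toy model

grid = list(product(*variables.values()))
best = max(grid, key=signal_to_background)
print(dict(zip(variables, best)), signal_to_background(best))
```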
People are living longer than ever due to advances in healthcare, and this has prompted many healthcare providers to look towards remote patient care as a means to meet the needs of the future. It is now a priority to enable people to reside in their own homes rather than in overburdened facilities whenever possible. The increasing maturity of IoT technologies and the falling costs of connected sensors have made the deployment of remote healthcare at scale an increasingly attractive prospect. In this work we demonstrate that we can measure the consistency and regularity of a household's behaviour using sensor readings generated from interaction with the home environment. We show that we can track changes in this behavioural regularity longitudinally and detect changes that may be related to significant life events or trends that may be medically significant. We achieve this using periodicity analysis on water usage readings sampled from the main household water meter every 15 minutes for over 8 months. We utilise an IoT Application Enablement Platform in conjunction with low-cost LoRa-enabled sensors and a Low Power Wide Area Network in order to validate a data collection methodology that could be deployed at large scale in future. We envision the statistical methods described here being applied to data streams from the homes of elderly and at-risk groups, both as a means of early illness detection and for monitoring the well-being of those with known illnesses.
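A minimal sketch of this kind of periodicity analysis on 15-minute water-meter readings (96 samples per day) is shown below. The series is synthetic, and the regularity index used here (autocorrelation at a 24-hour lag) is one simple choice assumed for illustration; the study's actual statistical methods may differ.

```python
# Sketch: autocorrelation at a 24-hour lag as a simple behavioural-regularity
# index for 15-minute water-usage readings. The usage series is synthetic.
import numpy as np

SAMPLES_PER_DAY = 96  # 24 h at one reading every 15 minutes

rng = np.random.default_rng(0)
days = 56
t = np.arange(days * SAMPLES_PER_DAY)
# Synthetic usage: a repeating daily pattern plus noise, clipped at zero.
usage = np.maximum(0, np.sin(2 * np.pi * t / SAMPLES_PER_DAY) + rng.normal(0, 0.5, t.size))

def regularity_index(series, lag=SAMPLES_PER_DAY):
    """Autocorrelation of the series at the given lag (closer to 1 = more regular)."""
    s = series - series.mean()
    return float(np.dot(s[:-lag], s[lag:]) / np.dot(s, s))

# Track regularity in weekly windows to look for longitudinal changes.
window = 7 * SAMPLES_PER_DAY
weekly = [regularity_index(usage[i:i + window])
          for i in range(0, usage.size - window + 1, window)]
print(weekly)
```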
A major issue in machine learning is the availability of training data. While this historically referred to the availability of a sufficient volume of training data, recently it has shifted to the availability of sufficient unbiased training data. In this paper we focus on the effect of training data bias on an emerging multimedia application, the automatic captioning of short video clips. We use subsets of the same training data to generate different models for video captioning using the same machine learning technique, and we evaluate the performance of models trained on these different subsets using a well-known video caption benchmark, TRECVid. We train using the MSR-VTT video-caption pairs and prune this training set so that the captions describing each video are either more homogeneous (more similar to one another) or more diverse, or we prune at random. We then assess the effectiveness of caption-generating models trained with these variations using automatic metrics as well as direct assessment by human assessors. Our findings are preliminary and show that randomly pruning captions from the training data yields the worst performance, and that pruning to make the data more homogeneous or more diverse improves performance slightly compared with random pruning. Our work points to the need for more training data: both more video clips and, more importantly, more captions for those videos.
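As an illustration of the caption-pruning strategies compared above, the sketch below keeps a fixed number of captions per video chosen to be mutually similar (homogeneous), mutually dissimilar (diverse), or random. The Jaccard token-overlap similarity is an assumption made purely for illustration and is not necessarily the measure used in the paper.

```python
# Sketch of per-video caption pruning: homogeneous, diverse, or random subsets.
# The similarity measure (token Jaccard) and example captions are assumptions.
import random

def jaccard(a, b):
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def avg_sim(caption, others):
    return sum(jaccard(caption, o) for o in others) / len(others)

def prune(captions, keep, mode):
    """Return `keep` captions for one video under the chosen pruning strategy."""
    if mode == "random":
        return random.sample(captions, keep)
    # Score each caption by its average similarity to the other captions;
    # keep the most similar ones (homogeneous) or the least similar (diverse).
    scored = sorted(
        captions,
        key=lambda c: avg_sim(c, [o for o in captions if o is not c]),
        reverse=(mode == "homogeneous"),
    )
    return scored[:keep]

captions = [
    "a man is cooking in a kitchen",
    "someone prepares food on a stove",
    "a chef cooks a meal in the kitchen",
    "a person talks about a recipe",
    "a man fries vegetables in a pan",
]
print(prune(captions, 3, "homogeneous"))
print(prune(captions, 3, "diverse"))
```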