The search for early biomarkers of mild cognitive impairment (MCI) has been central to the Alzheimer's Disease (AD) and dementia research community in recent years. To identify MCI status at the earliest possible point, recent studies have shown that linguistic markers such as word choice and utterance and sentence structure can potentially serve as preclinical behavioral markers. Here we present an adaptive dialogue algorithm (an AI-enabled dialogue agent) that identifies sequences of questions (a dialogue policy) that distinguish MCI from normal (NL) cognitive status. Our AI agent adapts its questioning strategy based on the user's previous responses, arriving at an individualized conversational strategy for each user. Because the agent is adaptive and scales favorably with additional data, our method offers a potential avenue for large-scale preclinical screening of neurocognitive decline as a new digital biomarker, as well as for longitudinal tracking of aging patterns in the outpatient setting.
Reinforcement learning (RL) provides a theoretical framework for continuously improving an agent's behavior via trial and error. However, efficiently learning policies from scratch can be very difficult, particularly for tasks with exploration challenges. In such settings, it might be desirable to initialize RL with an existing policy, offline data, or demonstrations. However, naively performing such initialization in RL often works poorly, especially for value-based methods. In this paper, we present a meta algorithm that can use offline data, demonstrations, or a pre-existing policy to initialize an RL policy, and is compatible with any RL approach. In particular, we propose Jump-Start Reinforcement Learning (JSRL), an algorithm that employs two policies to solve tasks: a guide-policy, and an exploration-policy. By using the guide-policy to form a curriculum of starting states for the exploration-policy, we are able to efficiently improve performance on a set of simulated robotic tasks. We show via experiments that JSRL is able to significantly outperform existing imitation and reinforcement learning algorithms, particularly in the small-data regime. In addition, we provide an upper bound on the sample complexity of JSRL and show that with the help of a guide-policy, one can improve the sample complexity for non-optimism exploration methods from exponential in horizon to polynomial.
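The guide/exploration split described above can be sketched in a few lines. The environment interface, curriculum schedule, and policy signatures below are illustrative assumptions for exposition, not the paper's actual implementation:

```python
def jsrl_rollout(env_reset, env_step, guide_policy, explore_policy,
                 guide_steps, horizon):
    """One episode in the JSRL style: the guide-policy acts for the first
    `guide_steps` steps, then the exploration-policy takes over from the
    state the guide reached. Returns the episode's total reward."""
    state = env_reset()
    total_reward = 0.0
    for t in range(horizon):
        policy = guide_policy if t < guide_steps else explore_policy
        action = policy(state)
        state, reward, done = env_step(state, action)
        total_reward += reward
        if done:
            break
    return total_reward

def jsrl_curriculum(horizon, n_stages):
    """Curriculum of guide-policy horizons: start with full guidance and
    shrink toward zero, so the exploration-policy faces progressively
    earlier (harder) starting states as its performance improves."""
    return [round(horizon * (1 - i / n_stages)) for i in range(n_stages + 1)]
```

In practice one would advance to the next (shorter) guide horizon only once the exploration-policy's return at the current stage crosses a threshold; the list above just makes that schedule explicit.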
Background The search for early biomarkers of mild cognitive impairment (MCI) has been central to the Alzheimer's Disease (AD) and dementia research community in recent years. While there exist in-vivo biomarkers (e.g., beta-amyloid and tau) that can serve as indicators of pathological progression toward AD, biomarker screenings are prohibitively expensive to scale if widely used among pre-symptomatic individuals in the outpatient setting. Behavioral and social markers such as language, speech, and conversational behaviors reflect cognitive changes that may precede physical changes and offer a far more cost-effective option for preclinical MCI detection, especially if they can be extracted in a non-clinical setting. Method We developed a prototype AI conversational agent that conducts screening conversations with participants. Specifically, this AI agent must learn to ask the right sequence of questions to distinguish the conversational characteristics of participants with MCI from those with normal cognition (NL). Using transcribed data from recorded conversations between participants and trained interviewers in a recently completed clinical trial, and applying supervised learning models to these data, we developed a novel reinforcement learning (RL) pipeline and a dialogue simulation environment to train an efficient dialogue agent that explores a range of semi-structured questions. We train and validate our AI dialogue agent on transcribed data from a randomized controlled behavioral intervention study, using transcripts from 41 subjects (14 MCI, 27 NL), each contributing an average of 35 dialogue turns. Result The results show that, using only a few turns of conversation, our framework can significantly outperform the state-of-the-art supervised learning approaches used in a past study.
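The adaptive questioning idea can be illustrated with a greedy selection rule: among questions not yet asked, pick the one whose estimated diagnostic value, given the responses so far, is highest. This is a hypothetical sketch; `question_scorers` stands in for learned value estimators, and the study's actual policy is trained with RL on transcribed dialogues:

```python
def next_question(asked, responses, question_scorers):
    """Greedy adaptive dialogue policy (illustrative sketch).

    asked            -- set of question identifiers already asked
    responses        -- the participant's responses so far
    question_scorers -- maps each question id to a (hypothetical) learned
                        estimator of its diagnostic value given `responses`
    Returns the next question to ask, or None when all have been asked."""
    remaining = [q for q in question_scorers if q not in asked]
    if not remaining:
        return None
    return max(remaining, key=lambda q: question_scorers[q](responses))
```

The point of the sketch is only the control flow: each turn's choice is conditioned on the accumulated responses, which is what makes the resulting conversational strategy individualized.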
With 30 turns of dialogue, the AI agent achieves an area under the receiver operating characteristic curve (AUC) above 0.853, and 0.809 AUC with 20 turns, compared to 0.811 AUC with the full set of dialogue turns. Conclusion Our dialogue-based AI agent is a step toward using AI to extend clinical care beyond classical hospital and clinic settings, and we find that AI-generated dialogues produce more predictive linguistic markers.
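The AUC values reported above have a direct probabilistic reading: the chance that a randomly chosen MCI subject receives a higher classifier score than a randomly chosen NL subject (with ties counted as half). A minimal sketch of that computation:

```python
def auc(labels, scores):
    """AUC via the Mann-Whitney pairwise-comparison formulation.
    labels -- 1 for positive (e.g., MCI), 0 for negative (e.g., NL)
    scores -- classifier scores, higher meaning more likely positive"""
    pos = [s for label, s in zip(labels, scores) if label == 1]
    neg = [s for label, s in zip(labels, scores) if label == 0]
    # Count positive-over-negative wins, crediting ties with 0.5.
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 0.5 corresponds to chance-level ranking, and 1.0 to a perfect separation of the two groups.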
Machine learning (ML) has become a prevalent approach to tame the complexity of design space exploration for domain-specific architectures. While appealing, using ML for design space exploration poses several challenges. First, it is not straightforward to identify the most suitable algorithm from an ever-increasing pool of ML methods. Second, assessing the trade-offs between performance and sample efficiency across these methods is inconclusive. Finally, the lack of a holistic framework for fair, reproducible, and objective comparison across these methods hinders progress in adopting ML-aided architecture design space exploration and impedes creating repeatable artifacts. To mitigate these challenges, we introduce ArchGym, an open-source gymnasium and easy-to-extend framework that connects a diverse range of search algorithms to architecture simulators. To demonstrate its utility, we evaluate ArchGym across multiple vanilla and domain-specific search algorithms in the design of a custom memory controller, deep neural network accelerators, and a custom SoC for AR/VR workloads, collectively encompassing over 21K experiments. The results suggest that with an unlimited number of samples, ML algorithms are equally favorable to meet the user-defined target specification if their hyperparameters are tuned thoroughly; no one solution is necessarily better than another.
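The decoupling ArchGym describes, with search algorithms on one side and architecture simulators on the other, can be illustrated with a gym-like interface. All names and the toy latency model below are invented for illustration and are not ArchGym's actual API:

```python
class SimulatorEnv:
    """Toy stand-in for an architecture simulator behind a uniform
    evaluate() interface (illustrative only)."""
    def __init__(self, target_latency):
        self.target = target_latency

    def evaluate(self, params):
        # Pretend cost model: more buffers -> lower latency.
        latency = 100.0 / (1 + params["num_buffers"])
        # Reward is closeness to the user-defined latency target.
        return -abs(latency - self.target)

def grid_search(env, candidates):
    """A vanilla search algorithm. It touches only env.evaluate(), so
    swapping in RL, Bayesian optimization, or a genetic algorithm leaves
    the simulator side of the interface unchanged."""
    best_params, best_reward = None, float("-inf")
    for params in candidates:
        reward = env.evaluate(params)
        if reward > best_reward:
            best_params, best_reward = params, reward
    return best_params, best_reward
```

Keeping every search algorithm behind the same interface is what makes the comparisons fair and reproducible: the simulator, target specification, and sample budget are held fixed while only the algorithm varies.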