Developing a knowledge-driven contemporaneous health index (CHI) that can precisely reflect the underlying patient condition across the course of the condition's progression holds unique value, such as facilitating a range of clinical decision-making opportunities. This is particularly important for monitoring degenerative conditions such as Alzheimer's disease (AD), where the patient's condition decays over time. Detecting early symptoms and progression signs, as well as continuously evaluating severity, are all essential for disease management. While a few methods have been developed in the literature, uncertainty quantification of those health index models has been largely neglected. To ensure continuity of care, we should be more explicit about the level of confidence in model outputs. Ideally, decision-makers should be provided with recommendations that are robust in the face of substantial uncertainty about future outcomes. In this paper, we aim to fill this gap by developing an uncertainty-quantification-based contemporaneous longitudinal index, named UQ-CHI, with a particular focus on continuous patient monitoring of degenerative conditions. Our method combines convex optimization and Bayesian learning using the maximum entropy learning (MEL) framework, integrating uncertainty on the labels as well. Our methodology also provides closed-form solutions for some important decision-making tasks, such as predicting the label of a new sample. Numerical studies demonstrate the effectiveness of the proposed UQ-CHI method in prediction accuracy and monitoring efficacy, and the unique advantages when uncertainty quantification is enabled in practice.

Such an index can also improve communications between clinicians, healthcare providers, and patients. It will also be a crucial enabling factor for the development of many envisioned AI systems that implement adaptive interventions for better healthcare management, as such systems need a representation of the dynamic evolution of the patient's condition. Thus, to ensure continuity of care, we should be more explicit about our level of confidence in model outputs. Ideally, decision-makers should be provided with recommendations that are robust in the face of substantial uncertainty about future outcomes. However, computational models are abstractions of clinical observations; as such, they are usually built on analytically tractable assumptions that may oversimplify the real-world problem. Also, most of these models are estimated from