Continuous electrocardiographic (ECG) monitoring was first introduced into hospitals in the 1960s, initially into critical care as bedside monitors and eventually into step-down units with telemetry capabilities. Although the initial use was rather simplistic (ie, heart rate and rhythm assessment), the capabilities of these devices and associated physiologic (vital sign) monitors have expanded considerably. Current bedside monitors include sophisticated ECG software designed to identify myocardial ischemia (ie, ST-segment monitoring), QT-interval prolongation, and a myriad of other cardiac arrhythmia types. Physiologic monitoring has seen similar advances, from noninvasive assessment of core vital signs (blood pressure, respiratory rate, oxygen saturation) to invasive monitoring including arterial blood pressure, temperature, central venous pressure, intracranial pressure, carbon dioxide, and many others. The benefit of these monitoring devices is that continuous, real-time information is displayed and can be configured to alarm, alerting nurses to a change in a patient’s condition. I think it is fair to say that critical and high-acuity care nurses see these devices as having a positive impact on patient care. However, this enthusiasm has been somewhat dampened in the past decade by research highlighting the shortcomings and unanticipated consequences of these devices, namely alarm and alert fatigue. In this article, which is associated with the American Association of Critical-Care Nurses’ Distinguished Research Lecture, I describe my 36-year journey from clinical nurse to nurse scientist and the trajectory of my program of research, focused primarily on ECG and physiologic monitoring. Specifically, I discuss the good, the not so good, and the untapped potential of these monitoring systems in clinical care. I also describe my experiences with community-based research in patients with acute coronary syndrome and/or heart failure.