Background: Over a tenth of preventable adverse events in health care are caused by failures in information flow. These failures are tangible in clinical handover; even with good verbal handover, from two-thirds to all of this information is lost after 3-5 shifts if notes are taken by hand, or not at all. Speech recognition and information extraction provide a way to fill out a handover form for clinical proofing and sign-off.
Objective: The objective of the study was to provide a recorded spoken handover, annotated verbatim transcriptions, and evaluations to support research in spoken and written natural language processing for filling out a clinical handover form. This dataset is based on synthetic patient profiles, thereby avoiding ethical and legal restrictions while maintaining efficacy for research in speech-to-text conversion and information extraction based on realistic clinical scenarios. We also introduce a Web app to demonstrate the system design and workflow.
Methods: We experiment with Dragon Medical 11.0 for speech recognition and CRF++ for information extraction. To compute features for information extraction, we also apply CoreNLP, MetaMap, and Ontoserver. Our evaluation uses cross-validation techniques to measure processing correctness.
Results: The data provided were a simulation of nursing handover, as recorded using a mobile device, built from simulated patient records and handover scripts, spoken by an Australian registered nurse. Speech recognition correctly recognized 5276 of the 7277 words in our 100 test documents. We considered 50 mutually exclusive categories in information extraction and achieved an F1 (ie, the harmonic mean of Precision and Recall) of 0.86 in the category for irrelevant text and a macro-averaged F1 of 0.70 over the remaining 35 nonempty categories of the form in our 101 test documents.
Conclusions: The significance of this study hinges on opening our data, together with the related performance benchmarks and some processing software, to the research and development community for studying clinical documentation and language processing. The data are used in the CLEFeHealth 2015 evaluation laboratory for a shared task on speech recognition.
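The evaluation metric reported in the Results can be made concrete with a short sketch. The Python example below shows how per-category F1 (the harmonic mean of precision and recall) and the macro-averaged F1 over the form categories would be computed from true-positive, false-positive, and false-negative counts; the category names and counts are purely illustrative assumptions, not values from the released data.

```python
# Minimal sketch of per-category F1 and macro-averaged F1, assuming
# hypothetical (tp, fp, fn) counts per form category.

def f1(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall; 0 if both are 0."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

def macro_f1(per_category: dict) -> float:
    """Macro-average: each category contributes equally, regardless of size."""
    scores = []
    for tp, fp, fn in per_category.values():
        p = tp / (tp + fp) if tp + fp else 0.0
        r = tp / (tp + fn) if tp + fn else 0.0
        scores.append(f1(p, r))
    return sum(scores) / len(scores)

# Hypothetical counts for three form categories (illustrative only):
counts = {
    "PatientIntroduction/GivenName": (90, 10, 15),
    "Medication/Dosage": (40, 20, 25),
    "Appointment/Procedure": (30, 12, 18),
}
print(round(macro_f1(counts), 2))
```

Macro-averaging weights each of the nonempty form categories equally, so performance on rare categories counts as much as performance on frequent ones.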
Comprehensive statistical characterizations of the dynamic narrowband on-body area and on-body to off-body area channels are presented. These characterizations are based on real-time measurements of the time domain channel response at carrier frequencies near the 900- and 2400-MHz industrial, scientific, and medical bands and at a carrier frequency near the 402-MHz medical implant communications band. We consider varying amounts of body movement, numerous transmit-receive pair locations on the human body, and various bandwidths. We also consider long periods, i.e., hours of everyday activity (predominantly indoor scenarios), for on-body channel characterization. Various adult human test subjects are used. It is shown, by applying the Akaike information criterion, that the Weibull and Gamma distributions generally fit agglomerates of received signal amplitude data and that in various individual cases the Lognormal distribution provides a good fit. We also characterize fade duration and fade depth with direct matching to second-order temporal statistics. These first- and second-order characterizations have important utility in the design and evaluation of body area communications systems. Parts of this work appeared in [1-7].
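To make the model-selection step concrete, the sketch below (not the authors' code) fits the three candidate distributions to an amplitude sample by maximum likelihood with SciPy and ranks them by the Akaike information criterion; the synthetically generated array stands in for measured received-signal amplitude data.

```python
# Minimal sketch of AIC-based distribution selection for received-signal
# amplitudes, assuming synthetic data in place of channel measurements.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Stand-in for measured amplitude samples (Weibull-like fading is assumed here).
amplitudes = stats.weibull_min.rvs(c=1.7, scale=1.0, size=5000, random_state=rng)

candidates = {
    "Weibull": stats.weibull_min,
    "Gamma": stats.gamma,
    "Lognormal": stats.lognorm,
}

aic = {}
for name, dist in candidates.items():
    # Fix the location at zero so only shape and scale are estimated.
    params = dist.fit(amplitudes, floc=0)
    loglik = np.sum(dist.logpdf(amplitudes, *params))
    k = len(params) - 1          # loc is fixed, so it is not a free parameter
    aic[name] = 2 * k - 2 * loglik  # lower AIC indicates a better fit

for name, value in sorted(aic.items(), key=lambda kv: kv[1]):
    print(f"{name:10s} AIC = {value:.1f}")
```

Ranking by AIC rather than raw log-likelihood penalizes models with more free parameters, which is why it is a common choice when comparing distribution families of similar complexity.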