Liver cancer is the fifth most common form of cancer worldwide [1], with an incidence rate that almost equals the mortality rate; it ranks 3rd among causes of cancer-related death [2]. The coexistence of two life-threatening conditions, cancer and liver cirrhosis, makes staging challenging. However, there are staging systems, e.g. the Barcelona staging system for hepatocellular carcinoma (HCC) [3], that suggest treatment options and management. Whereas diagnosis at an early stage gives hope for a curative outcome, the treatment regimen for the roughly 80% [2] of patients classified into severe stages is geared only towards palliation [4]. Radioembolisation (RE), an intra-arterial radiation approach, is widely applied as one of these palliative options. Although RE generally shows promising results in intermediate- and advanced-stage HCC [5], individual treatment outcomes are currently unpredictable, and the corresponding stratification criteria are still unclear. We hypothesised that individual radioresistance/radiosensitivity may play a crucial role in the treatment response to RE, strongly influencing individual outcomes. Further, HCC patients represent a highly heterogeneous group, which requires stratification according to clear criteria before treatment algorithms can be applied individually. A multilevel diagnostic approach (MLDA) is considered helpful for setting up an optimal predictive and prognostic biomarker panel for the individualised application of radioembolisation. Besides comprehensive medical imaging, our MLDA includes non-invasive multi-omics and sub-cellular imaging. Individual patient profiles are expected to give clues about shifted molecular pathways to target, individual RE susceptibility, and treatment response. For instance, dysregulation of the detoxification pathway (SOD2/catalase) might indicate possible adverse effects of RE, while highly increased systemic activities of matrix metalloproteinases indicate enhanced tumour aggressiveness and provide insights into molecular mechanisms/targets. Consequently, an optimal set-up of predictive and prognostic biomarker panels may shift the treatment paradigm from an untargeted "treat and wait" approach to a cost-effective predictive, preventive and personalised one, improving the quality of life and life expectancy of HCC patients.

Keywords: Market access, Value, Strategy, Companion diagnostics, Cost-effectiveness, Reimbursement, Health technology assessment, Economic models, Predictive preventive personalized medicine

Achieving and sustaining seamless "drug - companion diagnostic" market access requires a sound strategy throughout the product life cycle, one that enables timely creation, substantiation and communication of value to key stakeholders [1, 2]. The study aims at understanding the root causes of companies' market-access inefficiencies by looking at the "Rx-CDx" co-development process through the prism of "value", and at developing a perfect co-development scenario based on a literature review and discussions with the ...
An efficient acoustic event detection system, EAR-TUKE, is presented in this paper. The system is capable of processing a continuous input audio stream in order to detect potentially dangerous acoustic events, specifically gunshots or breaking glass. The system is programmed entirely in C++ (with core math functions in C) and was designed to be self-sufficient, without requiring additional dependencies. In the design and development process, the main focus was put on easy support of new acoustic events, a low memory profile, low computational requirements so that the system can operate on devices with limited resources, and long-term operation with continuous input-stream monitoring without any maintenance. To satisfy these requirements, EAR-TUKE is based on a custom approach to the detection and classification of acoustic events. The system uses acoustic models of events based on Hidden Markov Models (HMMs) and a modified Viterbi decoding process with an additional module that allows continuous monitoring. Cepstral Mean Normalization (CMN) and our proposed removal of basic coefficients from the feature vectors are applied to increase robustness. This paper also presents the development process and results evaluating the final design of the system.
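The modified Viterbi decoding mentioned above is not detailed in the abstract; as a point of reference, the following minimal sketch shows standard Viterbi decoding over a left-to-right event HMM with precomputed per-frame state log-likelihoods, written in C++ since that is the system's implementation language. All names, the data layout, and the single-start-state assumption are illustrative, not the EAR-TUKE code.

```cpp
// Minimal Viterbi decoding sketch for an HMM-based event detector.
// Assumes a left-to-right HMM that starts in state 0 and precomputed
// observation log-likelihoods; none of this mirrors the EAR-TUKE API.
#include <algorithm>
#include <cstddef>
#include <limits>
#include <vector>

struct Hmm {
    std::size_t numStates;                      // states in the event model
    std::vector<std::vector<double>> logTrans;  // logTrans[i][j] = log P(j | i)
};

// obsLogLik[t][s] = log p(o_t | state s); returns the best state path.
std::vector<std::size_t> viterbi(const Hmm& hmm,
                                 const std::vector<std::vector<double>>& obsLogLik)
{
    const double NEG_INF = -std::numeric_limits<double>::infinity();
    const std::size_t T = obsLogLik.size();   // number of frames (assumed > 0)
    const std::size_t N = hmm.numStates;

    std::vector<std::vector<double>> delta(T, std::vector<double>(N, NEG_INF));
    std::vector<std::vector<std::size_t>> psi(T, std::vector<std::size_t>(N, 0));
    delta[0][0] = obsLogLik[0][0];            // left-to-right: start in state 0

    for (std::size_t t = 1; t < T; ++t) {
        for (std::size_t j = 0; j < N; ++j) {
            for (std::size_t i = 0; i < N; ++i) {
                const double score = delta[t - 1][i] + hmm.logTrans[i][j];
                if (score > delta[t][j]) { delta[t][j] = score; psi[t][j] = i; }
            }
            delta[t][j] += obsLogLik[t][j];   // fold in the emission score
        }
    }

    // Backtrack from the best-scoring final state.
    std::vector<std::size_t> path(T);
    path[T - 1] = static_cast<std::size_t>(
        std::max_element(delta[T - 1].begin(), delta[T - 1].end())
        - delta[T - 1].begin());
    for (std::size_t t = T - 1; t > 0; --t)
        path[t - 1] = psi[t][path[t]];
    return path;
}
```

In a continuous-monitoring setting, such a decoder would be re-run or updated incrementally over a sliding window of frames, with a background model competing against the event models; presumably this is what the additional monitoring module addresses.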
The robustness of n-gram language models depends on the quality of the text data on which they have been trained. Text corpora collected from various resources, such as web pages or electronic documents, cover many possible topics. In order to build efficient and robust domain-specific language models, it is necessary to separate domain-oriented segments from the large amount of text data; the remaining out-of-domain data can then be used only for updating existing in-domain n-gram probability estimates. In this paper, we describe the process of classifying heterogeneous text data into two classes, in-domain and out-of-domain data, mainly for language modeling in task-oriented speech recognition in the judicial domain. The proposed algorithm for text classification is based on detecting the theme of short text segments from the most frequent key phrases. In the next step, each text segment is represented in a vector space model as a feature vector with term weighting. To classify these text segments into the in-domain and out-of-domain areas, document similarity with automatic thresholding is used. The experimental results of modeling the Slovak language and adapting to the judicial domain show a significant improvement in model perplexity and an increase in the performance of the Slovak transcription and dictation system.
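To make the classification step concrete, here is a minimal sketch (in C++, for consistency with the other examples) of comparing a term-weighted segment vector against an in-domain centroid by cosine similarity and applying a threshold. The names and the plain threshold parameter are assumptions; the paper derives its threshold automatically.

```cpp
// Illustrative in-domain/out-of-domain decision by cosine similarity
// between tf-idf style term-weight vectors. Not the paper's code.
#include <cmath>
#include <string>
#include <unordered_map>

using TermVector = std::unordered_map<std::string, double>;  // term -> weight

double cosineSimilarity(const TermVector& a, const TermVector& b)
{
    double dot = 0.0, normA = 0.0, normB = 0.0;
    for (const auto& [term, w] : a) {
        normA += w * w;
        auto it = b.find(term);
        if (it != b.end()) dot += w * it->second;  // shared terms only
    }
    for (const auto& kv : b) normB += kv.second * kv.second;
    if (normA == 0.0 || normB == 0.0) return 0.0;
    return dot / (std::sqrt(normA) * std::sqrt(normB));
}

// A segment is taken as in-domain when its similarity to the domain
// centroid reaches the threshold (determined automatically in the paper).
bool isInDomain(const TermVector& segment, const TermVector& domainCentroid,
                double threshold)
{
    return cosineSimilarity(segment, domainCentroid) >= threshold;
}
```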
Large databases of scanned documents (medical records, legal texts, historical documents) require natural language processing for retrieval and structured information extraction. Errors caused by the optical character recognition (OCR) system increase the ambiguity of the recognized text and decrease the performance of natural language processing. This paper proposes an OCR post-correction system with a parametrized string distance metric. The correction system learns specific error patterns from incorrect words and common sequences of correct words. A smoothing technique is proposed to assign non-zero probability to edit operations not present in the training corpus. Spelling correction accuracy is measured on a database of OCR-processed legal documents in English. The language model and the learned string metric with smoothing improve the Viterbi-based search for the best sequence of corrections and increase the performance of the spelling correction system.
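One way to picture a parametrized string distance with smoothing is as a weighted edit distance whose operation costs are negative log-probabilities learned from observed OCR error patterns, with add-one (Laplace) smoothing covering edits absent from the training corpus. The sketch below (C++, with illustrative names and data structures, not the paper's implementation) follows that reading.

```cpp
// Weighted edit distance with learned, Laplace-smoothed operation costs.
// '\0' marks an insertion or deletion slot. Illustrative sketch only.
#include <algorithm>
#include <cmath>
#include <map>
#include <string>
#include <utility>
#include <vector>

struct EditModel {
    std::map<std::pair<char, char>, double> count;  // observed (from,to) counts
    double total = 0.0;                             // total observed edits
    double alphabetSize = 128.0;                    // assumed ASCII alphabet

    // Negative log-probability cost of rewriting 'a' as 'b'.
    double cost(char a, char b) const {
        auto it = count.find({a, b});
        const double c = (it != count.end()) ? it->second : 0.0;
        // Add-one smoothing keeps unseen edit operations at non-zero probability.
        const double p = (c + 1.0) / (total + alphabetSize * alphabetSize);
        return -std::log(p);
    }
};

double learnedDistance(const std::string& s, const std::string& t,
                       const EditModel& m)
{
    const std::size_t n = s.size(), k = t.size();
    std::vector<std::vector<double>> d(n + 1, std::vector<double>(k + 1, 0.0));
    for (std::size_t i = 1; i <= n; ++i) d[i][0] = d[i - 1][0] + m.cost(s[i - 1], '\0');
    for (std::size_t j = 1; j <= k; ++j) d[0][j] = d[0][j - 1] + m.cost('\0', t[j - 1]);
    for (std::size_t i = 1; i <= n; ++i) {
        for (std::size_t j = 1; j <= k; ++j) {
            const double sub = d[i - 1][j - 1] +
                (s[i - 1] == t[j - 1] ? 0.0 : m.cost(s[i - 1], t[j - 1]));
            const double del = d[i - 1][j] + m.cost(s[i - 1], '\0');
            const double ins = d[i][j - 1] + m.cost('\0', t[j - 1]);
            d[i][j] = std::min({sub, del, ins});
        }
    }
    return d[n][k];
}
```

Such a distance would then rank candidate corrections inside the Viterbi-based search over correction sequences, with the language model scoring word context, matching the pipeline the abstract describes.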