Knowledge of clinical medicine can be elicited and encoded automatically, efficiently, and effectively as cases. The resulting knowledge base often contains some redundancy, but this cost can be tolerated because it affects neither the performance of the system nor the integrity of the knowledge. The advantages are simplicity of development and reliability of performance.
The rapidly growing volume of new scientific publications increases the need for a reliable and effective automatic machine translation (AMT) system that translates from English, the common language of publication, into other languages. A statistical machine translation (SMT) model crafted for a certain text domain often fails when applied to another domain. This paper addresses the characterization of language domains and their behavior in SMT, and experiments with adapting an SMT model to translate scientific text collected from artificial intelligence publications. The effectiveness of a bilingual language model is tested against the typical N-gram language model, and the fill-up and back-off techniques are used to combine phrase tables built from different domains. Just as a human translator needs strong knowledge of the field to translate an artificial intelligence book, we suggest that for AMT to handle different domains it must be trained on in-domain parallel data, adjusting the weights of words across domains so that the model learns to distinguish the different meanings the same word takes in different domains.
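To make the fill-up idea concrete, the sketch below shows one simple way to combine an in-domain phrase table with a general-domain one: in-domain entries are kept as-is, and general-domain entries are used only to fill gaps in coverage. This is a minimal illustration under stated assumptions, not the paper's implementation; the phrase entries, scores, and function names are invented for the example.

from typing import Dict, Tuple

# A phrase table here maps a source phrase to (target phrase, translation score).
PhraseTable = Dict[str, Tuple[str, float]]

def fill_up(in_domain: PhraseTable, general: PhraseTable) -> PhraseTable:
    """Prefer in-domain entries; use general-domain entries only to fill gaps."""
    combined = dict(general)      # start from the general-domain table
    combined.update(in_domain)    # in-domain entries override on conflict
    return combined

if __name__ == "__main__":
    # Hypothetical entries: a technical term may have a domain-specific
    # translation that a general-domain table misses or renders literally.
    ai_table = {"neural network": ("TGT_neural_network_AI", 0.82)}
    news_table = {
        "network": ("TGT_network_general", 0.65),
        "neural network": ("TGT_neural_network_literal", 0.40),
    }

    merged = fill_up(ai_table, news_table)
    print(merged["neural network"])  # in-domain translation wins
    print(merged["network"])         # general-domain entry fills the gap

A back-off combination would differ only in when the general table is consulted: at decoding time, the in-domain table is queried first and the general table is used as a fallback for uncovered phrases, rather than being merged in advance.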