Medical imaging raises several open problems, and this paper presents our solution for the efficient retrieval of medical images. Depending on the user, the same image can be described through different views. In essence, an image can be described on the basis of low-level properties, such as texture or color; contextual data, such as date of acquisition or author; or semantic content, such as real-world objects and their relations. Our approach provides a multi-faceted description model capable of integrating these different facets (or views) of a medical image, covering both the storage and retrieval processes. Few proposed solutions take into consideration the heterogeneity of user competence (physician, researcher, student, etc.) or the need for high expressive power in medical image description. For example, spatial relationships are decisive in the surgical or radiation therapy of brain tumors, because the location of a tumor has profound implications for the therapeutic decision. Visual retrieval solutions are recommended and are the most appropriate for users who are not computer scientists. However, current visual languages suffer from several problems, notably ambiguities generated by the user and/or the system at different levels of image description, imprecision, and violations of the integrity of spatial relations. In this paper, we expose our solution and show how these problems of spatial precision and ambiguity can be resolved. An implementation called the Medical Image Management System (MIMS) has been realized to prove our proposition, and a set of tests has been carried out to validate the prototype.
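As a rough illustration of the multi-faceted description idea (not part of the original paper or of MIMS), the three views of a medical image might be sketched as a simple data structure; all class, field, and value names below are hypothetical.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional, Tuple

# Hypothetical sketch: one description record grouping the three facets
# mentioned in the abstract (low-level signature, context, semantics).

@dataclass
class SignatureFacet:
    """Low-level properties such as color and texture."""
    dominant_colors: List[str] = field(default_factory=list)
    texture_descriptor: List[float] = field(default_factory=list)

@dataclass
class ContextFacet:
    """Contextual data such as acquisition date and author."""
    acquisition_date: Optional[date] = None
    author: str = ""
    modality: str = ""  # e.g. "MRI" or "CT" (illustrative values)

@dataclass
class SemanticFacet:
    """Real-world objects and their spatial relations."""
    objects: List[str] = field(default_factory=list)
    # Spatial relations as (subject, relation, object) triples.
    spatial_relations: List[Tuple[str, str, str]] = field(default_factory=list)

@dataclass
class MedicalImageDescription:
    image_id: str
    signature: SignatureFacet = field(default_factory=SignatureFacet)
    context: ContextFacet = field(default_factory=ContextFacet)
    semantics: SemanticFacet = field(default_factory=SemanticFacet)

# Example: a brain image whose semantic facet records the tumor location,
# the kind of spatial information the abstract identifies as decisive.
desc = MedicalImageDescription(
    image_id="img-001",
    context=ContextFacet(acquisition_date=date(2005, 3, 14), modality="MRI"),
    semantics=SemanticFacet(
        objects=["tumor", "left temporal lobe"],
        spatial_relations=[("tumor", "inside", "left temporal lobe")],
    ),
)
print(desc.semantics.spatial_relations)
```

Such a structure only illustrates how the facets could coexist in one record; the actual model and query language used by MIMS are defined in the paper itself.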