Advances in sensors and imaging technologies are contributing to rapidly expanding data repositories that contain interrelated information from different modalities. The extraction and visualisation of knowledge from these repositories is a major challenge in the modern, digital world. In the medical domain, images are routinely acquired for a variety of tasks, including diagnosis and patient monitoring. Advances in imaging technologies have resulted in devices capable of acquiring images in multiple dimensions (volumetric and dynamic) as well as from multiple modalities. One example of a widely used volumetric and multi-modality image is combined positron emission tomography and computed tomography (PET-CT), which presents physicians with complementary functional and anatomical features and spatial relationships. In clinical practice, PET-CT imaging has already proven its ability to improve cancer diagnosis, localisation, and staging compared to its single-modality counterparts.

The clinical benefits provided by medical imaging have spurred increases in the volume of data acquired in clinical environments. As such, massive medical imaging collections offer the opportunity for search-based applications in evidence-based diagnosis, physician training, and biomedical research. However, conventional search techniques that operate upon manually assigned textual annotations are not feasible for the volume of data acquired in modern hospitals. Qualitative text descriptions are also limited in their capacity to quantitatively describe the rich information inherent in medical images.
Content-based image retrieval (CBIR) is an image search technique that utilises visual features as search criteria. CBIR has already demonstrated benefits for evidence-based diagnosis, physician training, and biomedical research by allowing clinical staff to consider relevant knowledge from retrieved cases. The majority of medical CBIR research has focused on single-modality medical images, leaving a clear deficiency in the retrieval of multi-modality images. In particular, images like PET-CT offer the ability for retrieval based upon the relationships between regions in different modalities, such as the location of tumour features (from PET) in relation to organ features (from CT). The challenge of multi-modality image retrieval for cancer patients lies in representing these complementary geometric and topological attributes between tumours and organs. A secondary challenge lies in the human aspect of retrieval: effectively communicating the retrieved results to users and facilitating a better understanding of the similarity between the query and the retrieved multi-modality images.

As such, in this thesis we propose a new graph representation for multi-modality images. Our representation preserves the spatial relationships between modalities by emphasising the inherent characteristics of these images that are used for disease staging and classification. This is done by structurally constraining the graph based on image features, e.g., the spatial relationships between tumours and organs.
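To make this concrete, the sketch below illustrates one plausible form such a structurally constrained graph could take; it is not the construction developed in this thesis. The use of networkx, the centroid-based nearest-organ constraint, and the attribute names are all illustrative assumptions.

```python
import math
import networkx as nx

def build_pet_ct_graph(tumours, organs):
    """Sketch of a multi-modality region graph: tumour nodes (from PET),
    organ nodes (from CT), and edges recording their spatial relationship.
    All attribute names here are hypothetical placeholders."""
    g = nx.Graph()
    for i, t in enumerate(tumours):
        g.add_node(("tumour", i), modality="PET", **t)
    for j, o in enumerate(organs):
        g.add_node(("organ", j), modality="CT", **o)
    # Structural constraint (an assumption for this sketch): each tumour is
    # linked only to its nearest organ, so the graph itself encodes the
    # tumour-to-organ localisation that is relevant to disease staging.
    for i, t in enumerate(tumours):
        j, o = min(enumerate(organs),
                   key=lambda jo: math.dist(t["centroid"], jo[1]["centroid"]))
        g.add_edge(("tumour", i), ("organ", j),
                   distance=math.dist(t["centroid"], o["centroid"]))
    return g

# Hypothetical usage, with centroids in scanner coordinates (mm):
tumours = [{"centroid": (40.0, 12.0, 300.0), "volume": 4.2}]
organs = [{"centroid": (40.0, 10.0, 310.0), "volume": 1500.0},
          {"centroid": (-60.0, 10.0, 310.0), "volume": 1450.0}]
g = build_pet_ct_graph(tumours, organs)
```

Under this assumed constraint, two PET-CT studies could then be compared by matching their graphs rather than their raw voxels, so that similarity reflects tumour-to-organ relationships rather than appearance alone.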