In some pattern analysis problems, expert knowledge is available in addition to the original data involved in the classification process. The vast majority of existing approaches simply ignore such auxiliary (privileged) knowledge. Recently a new paradigm, learning using privileged information, was introduced in the framework of SVM+. This approach is formulated for binary classification and, as is typical for many kernel-based methods, can scale unfavourably with the number of training examples. While speeding up SVM+ training and extending it to multiclass problems are possible, in this paper we present a novel, more direct methodology for incorporating valuable privileged knowledge in the model construction phase, formulated primarily in the framework of generalized matrix learning vector quantization. This is done by changing the global metric in the input space, based on distance relations revealed by the privileged information. Hence, unlike in SVM+, any convenient classifier can be used after such metric modification, bringing more flexibility to the problem of incorporating privileged information during training. Experiments demonstrate that manipulating the input-space metric based on privileged data improves classification accuracy. Moreover, our methods can achieve competitive performance against the SVM+ formulations.
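The core idea, adapting the input-space metric from distance relations in the privileged space, can be illustrated with a minimal sketch. The helper `privileged_metric` below is a hypothetical name and a deliberately simplified rule, not the paper's GMLVQ formulation: it fits non-negative per-feature weights so that weighted squared distances in the input space approximate pairwise squared distances in the privileged space, after which any classifier can use the resulting weighted metric.

```python
import numpy as np

def privileged_metric(X, X_star):
    """Learn diagonal (per-feature) metric weights for the input space X
    so that weighted squared distances mimic pairwise squared distances
    in the privileged space X_star.
    Illustrative sketch only, not the paper's GMLVQ-based method."""
    n = len(X)
    iu = np.triu_indices(n, k=1)              # all unordered sample pairs
    # target: squared pairwise distances in the privileged space
    d_star = ((X_star[:, None, :] - X_star[None, :, :]) ** 2).sum(-1)[iu]
    # per-feature squared differences in the input space, one row per pair
    A = ((X[:, None, :] - X[None, :, :]) ** 2)[iu]
    # least-squares fit A @ w ~= d_star, clipped to keep weights non-negative
    w, *_ = np.linalg.lstsq(A, d_star, rcond=None)
    w = np.clip(w, 0, None)
    return w / (w.sum() + 1e-12)              # normalised feature weights
```

A downstream classifier would then measure distances as `sqrt(sum_j w[j] * (x[j] - y[j])**2)`, so features that disagree with the privileged distance relations are down-weighted.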
We present an approach for automatic threshold segmentation of greyscale images. The procedure is inspired by a reinterpretation of the strategy observed in human operators when adjusting thresholds manually and interactively by means of 'slider' controls. The approach translates into two methods. The first is suitable for single or multiple thresholds applied globally to images, and consists of searching for a threshold value that generates a phase whose boundary coincides with the largest gradients in the original image. The second method is a variation, implemented to operate independently on the discrete connected components of the thresholded phase (i.e. the binary regions). Consequently, this becomes an adaptive local threshold procedure which operates relative to regions, rather than to local image subsets as is the case in most previously published local thresholding methods. Adding constraints that specify certain classes of expected objects in the images can improve the output of the method over the traditional 'segment first, then classify' approach.
Many pattern analysis problems require classification of examples into naturally ordered classes. In such cases, nominal classification schemes ignore the class order relationships, which can have a detrimental effect on classification accuracy. This paper introduces two novel ordinal Learning Vector Quantization (LVQ) schemes, with metric learning, specifically designed for classifying data items into ordered classes. Unlike in nominal LVQ, in ordinal LVQ the class order information is utilized during training both in selecting the class prototypes to be adapted and in determining the exact manner in which the prototypes get updated. Prototype-based models are in general more amenable to interpretation and can often be constructed at a smaller computational cost than alternative non-linear classification models. Experiments demonstrate that the proposed ordinal LVQ formulations compare favourably with their nominal counterparts. Moreover, our methods achieve competitive performance against existing benchmark ordinal regression models.
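How class order can steer a prototype update may be sketched as follows. In this illustrative, hypothetical rule (the paper's actual ordinal LVQ cost functions are more involved), the winning prototype is attracted when its class matches the true label and repelled with a strength proportional to the rank distance between its class and the true class, so far-in-the-order mistakes are corrected more aggressively than near ones.

```python
import numpy as np

def ordinal_lvq_step(prototypes, labels, x, y, lr=0.1):
    """One illustrative ordinal-LVQ-style update (hypothetical rule):
    attract the winner on a correct class, repel it with strength
    proportional to its distance from y in the class order otherwise."""
    d = ((prototypes - x) ** 2).sum(axis=1)
    k = int(np.argmin(d))                       # winning prototype
    rank_err = abs(int(labels[k]) - int(y))     # distance in the class order
    max_err = max(int(labels.max() - labels.min()), 1)
    if rank_err == 0:
        prototypes[k] += lr * (x - prototypes[k])                        # attract
    else:
        prototypes[k] -= lr * (rank_err / max_err) * (x - prototypes[k]) # repel
    return prototypes
```

A nominal LVQ scheme would use only the correct/incorrect distinction; the `rank_err / max_err` factor is where the order information enters.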
Algorithmic segmentation of histologically relevant regions of tissues in digitized histopathological images is a critical step towards computer-assisted diagnosis and analysis. For example, automatic identification of epithelial and stromal tissues in images is important for spatial localisation and guidance in the analysis and characterisation of the tumour micro-environment. Current segmentation approaches are based on supervised methods, which require extensive training data from high-quality, manually annotated images. This is often difficult and costly to obtain. This paper presents an alternative data-independent framework based on unsupervised segmentation of oropharyngeal cancer tissue micro-arrays (TMAs). An automated segmentation algorithm based on mathematical morphology is first applied to light microscopy images stained with haematoxylin and eosin. This partitions the image into multiple binary ‘virtual-cells’, each enclosing a potential ‘nucleus’ (dark basins in the haematoxylin absorbance image). Colour and morphology measurements obtained from these virtual-cells, as well as their enclosed nuclei, are input into an advanced unsupervised learning model for the identification of epithelium and stromal tissues. Here we exploit two Consensus Clustering (CC) algorithms for the unsupervised recognition of tissue compartments, which consider the consensus opinion of a group of individual clustering algorithms. Unlike most unsupervised segmentation analyses, which depend on a single clustering method, the CC learning models allow for more robust and stable detection of tissue regions. The performance of the proposed framework has been evaluated on fifty-five hand-annotated images of oropharyngeal tissue. Qualitative and quantitative results of the proposed segmentation algorithm compare favourably with eight popular tissue segmentation strategies. Furthermore, the unsupervised results obtained here outperform those obtained with individual clustering algorithms.
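The Evidence Accumulation step of consensus clustering can be sketched as follows: a co-association matrix records how often each pair of elements (here, virtual-cells) is grouped together across the base clusterings, and a simple threshold cut on that matrix yields the consensus partition. This is a generic EAC sketch under assumed simplifications (a single-link-style connected-components cut), not the paper's exact pipeline.

```python
import numpy as np

def evidence_accumulation(partitions):
    """Co-association matrix: entry (i, j) is the fraction of base
    partitions that place elements i and j in the same cluster."""
    partitions = np.asarray(partitions)       # shape (n_partitions, n_samples)
    m, n = partitions.shape
    co = np.zeros((n, n))
    for p in partitions:
        co += (p[:, None] == p[None, :])      # 1 where i, j share a cluster
    return co / m

def consensus_labels(co, threshold=0.5):
    """Cut the co-association matrix: connected components of the graph
    whose edges have co-association above `threshold`."""
    n = len(co)
    labels = -np.ones(n, dtype=int)
    cur = 0
    for i in range(n):
        if labels[i] >= 0:
            continue
        stack, labels[i] = [i], cur           # start a new component
        while stack:
            u = stack.pop()
            for v in np.where((co[u] > threshold) & (labels < 0))[0]:
                labels[v] = cur
                stack.append(v)
        cur += 1
    return labels
```

Pairs that the base clusterings disagree on receive intermediate co-association values, which is what makes the consensus more stable than any single base clustering.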
We present a framework for adapting consensus clustering methods with superpixels to segment oropharyngeal cancer images into tissue types (epithelium, stroma and background). The simple linear iterative clustering (SLIC) algorithm is first used to split the image into binary superpixels, which are then used as clustering elements. Colour features of the superpixels are extracted and fed into several base clustering approaches with various parameter initializations. Two consensus clustering formulations are then used: Evidence Accumulation Clustering (EAC) and a voting-based consensus function. Both combine the base clustering outcomes to obtain a single, more robust consensus result. Unlike most unsupervised tissue image segmentation approaches, which depend on individual clustering methods, the proposed approach allows for robust detection of tissue compartments. For the voting-based consensus function, we introduce a technique based on image processing to generate a consistent labelling scheme among the base clustering outcomes. Experiments conducted on forty-five hand-annotated images of oropharyngeal cancer tissue microarray cores show that the ensemble algorithm generates more accurate and stable results than individual clustering algorithms. The clustering performance of the voting-based consensus function with our re-labelling technique also outperforms the existing EAC.
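The re-labelling problem that the voting-based consensus function must solve arises because base clusterings assign arbitrary cluster ids. It can be sketched with a greedy confusion-matrix alignment followed by a per-element majority vote. The helper names and the greedy matching below are illustrative assumptions; the paper introduces an image-processing-based re-labelling technique instead.

```python
import numpy as np

def align_labels(reference, partition, k):
    """Re-label `partition` so its cluster ids best match `reference`,
    via a greedy match on the confusion matrix (sketch of the
    label-alignment step required before voting)."""
    conf = np.zeros((k, k))
    for r, p in zip(reference, partition):
        conf[p, r] += 1                        # co-occurrence counts
    mapping = -np.ones(k, dtype=int)
    used = set()
    for p in np.argsort(-conf.max(axis=1)):    # most confident rows first
        for r in np.argsort(-conf[p]):
            if r not in used:
                mapping[p] = r
                used.add(r)
                break
    return mapping[np.asarray(partition)]

def voting_consensus(partitions, k):
    """Majority vote after aligning every base partition to the first."""
    parts = [np.asarray(partitions[0])]
    parts += [align_labels(parts[0], p, k) for p in partitions[1:]]
    stacked = np.stack(parts)                  # (n_partitions, n_samples)
    return np.array([np.bincount(col, minlength=k).argmax()
                     for col in stacked.T])    # per-sample majority label
```

With a consistent labelling in place, the vote itself is trivial; the alignment step is what makes or breaks a voting-based consensus function.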