Small, supra-threshold color differences are typically described with Euclidean distance metrics, or dimension-weighted Euclidean metrics, in color appearance spaces such as CIELAB. This research examines the perception and modeling of very large color differences, on the order of 10 CIELAB units or more, with the aim of describing the salience of color differences between distinct objects in real-world scenes and images. A psychophysical experiment was conducted to directly compare large color-difference pairs designed to probe various Euclidean and non-Euclidean distance metrics. The results indicate that very large color differences are best described by HyAB, which combines a Euclidean metric in hue and chroma with a city-block metric for lightness differences.

KEYWORDS: color difference formula, distance metric, perceived color difference, very large color difference
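As a reference for the metric named above, the following is a minimal sketch of HyAB as described in the abstract: an absolute (city-block) lightness term plus a Euclidean term in the a*-b* (chroma/hue) plane. The function name and the sample CIELAB values are illustrative, not taken from the paper.

```python
import numpy as np

def hyab(lab1, lab2):
    """HyAB distance between two CIELAB colours (L*, a*, b*):
    city-block difference in lightness plus the Euclidean
    difference in the a*-b* plane."""
    dL = abs(lab1[0] - lab2[0])
    dab = np.hypot(lab1[1] - lab2[1], lab1[2] - lab2[2])
    return dL + dab

# Illustrative pair of clearly different object colours
print(hyab((52.0, 42.5, 20.1), (70.3, -10.4, 35.2)))
```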
Image edge detection based on low-level features is usually performed on gray-scale images. Some methods have been developed for edge detection on colour images using low-level features, but they are not consistent with human colour perception. This research provides a new algorithm for edge detection based on the HyAB large-colour-difference formula. The algorithm uses Sobel operators for gradient-magnitude calculation and Canny methods for localizing edge points. Its performance is qualitatively compared with the Sobel and Canny methods using several challenging colour images. The results indicate that gradient magnitudes are best calculated using the HyAB colour-difference formula, and that CIELAB and CIEDE2000 differences are not suitable for this purpose. Defining gradient magnitudes in accordance with human perception is essential in applications such as quality control of fabric printing and the calculation of disruptive colouration. The new algorithm achieves accurate, fine edge detection in comparison with the Sobel and Canny methods. It is also quantitatively compared with state-of-the-art methods on three datasets: BSDS500, MBDD, and BIPED. The correctness and accuracy of the image annotations in these datasets have an important effect on the results. The new method does not score better than deep-learning-based methods, but it is simple and requires no training; its results could probably be improved with better noise suppression.
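The abstract above outlines the pipeline: Sobel-style gradient magnitudes computed from HyAB colour differences, followed by Canny-style edge localization. The sketch below shows one plausible way to build a HyAB-based gradient magnitude; it is not the authors' implementation, the wrap-around border handling from np.roll is a simplification, and the Canny non-maximum-suppression and hysteresis stages are omitted.

```python
import numpy as np

def hyab_map(lab_a, lab_b):
    """Pixel-wise HyAB distance between two CIELAB images of shape (H, W, 3)."""
    dL = np.abs(lab_a[..., 0] - lab_b[..., 0])
    dab = np.hypot(lab_a[..., 1] - lab_b[..., 1],
                   lab_a[..., 2] - lab_b[..., 2])
    return dL + dab

def hyab_gradient_magnitude(lab):
    """Sobel-flavoured gradient magnitude of a CIELAB image, with the
    directional differences measured by HyAB rather than per-channel
    subtraction (an assumption about how the paper combines the two)."""
    # HyAB difference between opposite horizontal / vertical neighbours
    dx = hyab_map(np.roll(lab, -1, axis=1), np.roll(lab, 1, axis=1))
    dy = hyab_map(np.roll(lab, -1, axis=0), np.roll(lab, 1, axis=0))
    # 1-2-1 smoothing across the orthogonal axis (the Sobel weighting)
    gx = np.roll(dx, 1, axis=0) + 2 * dx + np.roll(dx, -1, axis=0)
    gy = np.roll(dy, 1, axis=1) + 2 * dy + np.roll(dy, -1, axis=1)
    return np.hypot(gx, gy)
```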