In recent years, users have increasingly wanted to watch high-dynamic-range (HDR) imagery, which yields high visual satisfaction, on smartphones, while also demanding longer battery life. However, because most current smartphone displays have a low dynamic range, the dynamic range must be compressed with tone-mapping operators, and this compression causes a loss of local contrast and detail. In this paper, we therefore propose a novel dynamic voltage scaling scheme tightly coupled with a modified tone-mapping operator that achieves high power savings as well as good perceptual quality on smartphones with AMOLED displays. To perform perceptually aware voltage control, we lower the display panel voltage to reduce power consumption, use a carefully adjusted global tone-mapping operator to convert image brightness, and apply unsharp masking to enhance local contrast and detail. We implement the proposed scheme on an Android smartphone with an AMOLED display and evaluate it on various HDR image databases. Experimental results show that, compared with conventional techniques, not only tone-mapped images but also general images are improved in terms of visual satisfaction and power savings.
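The abstract does not specify the exact tone-mapping operator or sharpening filter used, so the following is only a minimal sketch of the general pipeline it describes: a Reinhard-style global tone-mapping operator to compress HDR luminance, followed by unsharp masking to restore local contrast lost in the compression. The function names, the key value `a=0.18`, and the box-blur stand-in for a Gaussian are all illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def reinhard_global(lum, a=0.18):
    """Reinhard-style global tone mapping: L' = L_s / (1 + L_s),
    where L_s is luminance scaled by key value `a` over the
    log-average luminance. Maps HDR luminance into [0, 1)."""
    eps = 1e-6
    log_avg = np.exp(np.mean(np.log(lum + eps)))  # log-average luminance
    scaled = (a / log_avg) * lum
    return scaled / (1.0 + scaled)

def unsharp_mask(img, radius=2, amount=0.6):
    """Restore local contrast/detail lost during dynamic-range
    compression: img + amount * (img - blurred)."""
    # Simple box blur as a stand-in for a Gaussian low-pass filter.
    k = 2 * radius + 1
    pad = np.pad(img, radius, mode='edge')
    blurred = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            blurred += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    blurred /= k * k
    return np.clip(img + amount * (img - blurred), 0.0, 1.0)

# Synthetic HDR luminance spanning several orders of magnitude.
hdr = np.exp(np.random.uniform(-4, 4, size=(64, 64)))
ldr = unsharp_mask(reinhard_global(hdr))
print(ldr.min() >= 0.0, ldr.max() <= 1.0)  # True True
```

A panel-voltage controller could then lower the AMOLED driving voltage in proportion to the brightness headroom freed by the tone-mapping step; that coupling is the core of the proposed scheme but depends on panel-specific characteristics not given in the abstract.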
Saliency, the region on which human visual attention concentrates, is useful in many applications, such as enemy detection in soldier goggles and person detection in autonomous vehicles. In recent years, saliency has been computed automatically by models, rather than measured from human eyes, in head-mounted displays (HMDs), smartphones, and virtual reality (VR) devices built on mobile displays; however, such devices consume too much power to keep rendering salient content on a mobile display, so low-power saliency methods have become important. CURA reduces power according to the saliency level while preserving visual satisfaction, but it still produces artifacts caused by brightness differences at the boundaries of the regions partitioned by saliency. In this paper, we propose a new segmentation-based, saliency-aware low-power approach that minimizes these artifacts. Unlike CURA, our method manages perceptual quality and power both at the saliency level and at the level of the segmented regions within each saliency region. Experiments show that our method saves power in each saliency region and in its segmented sub-regions while maintaining visual satisfaction for salient content, and that it preserves image quality while efficiently removing boundary artifacts.
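The abstract's central idea, dimming low-saliency regions to save OLED power while smoothing the transitions so no brightness seam appears at region boundaries, can be sketched as follows. This is not the paper's algorithm: the segmentation stage, the `min_scale` floor, and the box-blur feathering of the saliency map are illustrative assumptions; the only grounded facts are that OLED power grows roughly with emitted luminance and that hard per-region scaling (as in CURA) causes boundary artifacts.

```python
import numpy as np

def smooth(mask, radius=4):
    """Box-blur the saliency map so the brightness scale changes
    gradually across region boundaries (avoids visible seams)."""
    k = 2 * radius + 1
    pad = np.pad(mask, radius, mode='edge')
    out = np.zeros_like(mask)
    for dy in range(k):
        for dx in range(k):
            out += pad[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out / (k * k)

def saliency_aware_dim(img, saliency, min_scale=0.6):
    """Scale pixel luminance between min_scale (non-salient) and 1.0
    (fully salient). Since OLED power is roughly proportional to
    emitted luminance, mean(1 - scale) approximates the saving."""
    s = smooth(np.clip(saliency, 0.0, 1.0))
    scale = min_scale + (1.0 - min_scale) * s
    return img * scale[..., None], 1.0 - scale.mean()

img = np.random.rand(32, 32, 3)          # synthetic RGB frame
sal = np.zeros((32, 32))
sal[8:24, 8:24] = 1.0                    # one salient center block
dimmed, saving = saliency_aware_dim(img, sal)
print(dimmed.shape, 0.0 < saving < 1.0)
```

The paper's contribution goes further by also segmenting each saliency region and managing power per segment; this sketch only shows the pixel-level dimming and boundary feathering that the per-segment stage would build on.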