Perceptual image quality assessment (IQA) uses a computational model to assess image quality in a manner consistent with human opinion. A good IQA model should consider both effectiveness and efficiency. To meet this need, a new model called multiscale contrast similarity deviation (MCSD) is developed in this paper. Contrast is a distinctive visual attribute closely related to image quality. To further exploit contrast features, we resort to a multiscale representation. Although contrast and multiscale representations have been used by other IQA indices, few achieve effectiveness and efficiency simultaneously. We compared our method with other state-of-the-art methods on six well-known databases. The experimental results show that the proposed method yields the best performance in terms of correlation with human judgments, while remaining efficient compared with competing IQA models.
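For a concrete picture of the idea, a minimal sketch follows, assuming local RMS contrast, an SSIM-style similarity ratio, standard-deviation pooling, and a geometric mean across scales; the window size, stabilizing constant c, and cross-scale combination are illustrative guesses, not the published parameters.

```python
import numpy as np
from scipy.ndimage import uniform_filter, zoom

def local_contrast(img, size=3):
    """Local RMS contrast: the standard deviation inside a sliding window."""
    m = uniform_filter(img, size)
    return np.sqrt(np.maximum(uniform_filter(img * img, size) - m * m, 0.0))

def mcsd(ref, dist, scales=3, c=0.002):
    """Multiscale contrast similarity deviation, sketched.

    At each scale, compare the local contrast maps of the reference and
    distorted images with a similarity ratio, pool the similarity map with
    its standard deviation (the "deviation" in MCSD), then downsample and
    repeat. Lower scores mean less perceived distortion in this sketch.
    """
    score = 1.0
    for _ in range(scales):
        cr, cd = local_contrast(ref), local_contrast(dist)
        cs = (2 * cr * cd + c) / (cr ** 2 + cd ** 2 + c)  # similarity in [0, 1]
        score *= cs.std()                                  # deviation pooling
        ref, dist = zoom(ref, 0.5), zoom(dist, 0.5)        # next coarser scale
    return score ** (1.0 / scales)
```

Deviation pooling is the distinctive design choice here: instead of averaging the similarity map, the score measures how unevenly distortion is spread across the image, which is cheap to compute and, per the abstract, correlates well with human judgments.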
Reinforcement learning encounters major challenges in multi-agent settings, such as scalability and non-stationarity. Recently, value function factorization learning has emerged as a promising way to address these challenges in collaborative multi-agent systems. However, existing methods focus on learning fully decentralized value functions, which are inefficient for tasks requiring communication. To address this limitation, this paper presents a novel framework for learning nearly decomposable value functions with communication, in which agents act on their own most of the time but occasionally send messages to other agents for effective coordination. The framework hybridizes value function factorization learning and communication learning by introducing two information-theoretic regularizers: one maximizes the mutual information between decentralized Q functions and communication messages, while the other minimizes the entropy of messages between agents. We show how to optimize these regularizers in a way that integrates easily with existing value function factorization methods such as QMIX. Finally, we demonstrate on the StarCraft unit micromanagement benchmark that our framework significantly outperforms baseline methods and allows more than 80% of communication to be cut off without sacrificing performance. Video of our experiments is available at https://sites.google.com/view/ndvf.
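A minimal sketch of the two regularizers is given below, assuming diagonal-Gaussian messages and discrete actions; the function names and the simplified mutual-information surrogate are our own, not the framework's actual code.

```python
import math
import torch

def message_entropy(log_sigma):
    """Differential entropy of a diagonal-Gaussian message, summed over
    message dimensions. Minimizing it is the message-minimization
    regularizer: uninformative messages collapse and can be dropped."""
    return (0.5 * math.log(2 * math.pi * math.e) + log_sigma).sum(-1)

def mi_surrogate(logits_with_msg, logits_without_msg, actions):
    """Pointwise surrogate for the mutual information between a message and
    the receiver's action selection: a message is informative if it makes
    the receiver's chosen action more likely than it is without the
    message. A simplified estimator, not the paper's exact variational bound."""
    lp_with = torch.log_softmax(logits_with_msg, -1)
    lp_without = torch.log_softmax(logits_without_msg, -1)
    a = actions.unsqueeze(-1)
    return (lp_with.gather(-1, a) - lp_without.gather(-1, a)).squeeze(-1)

# Hypothetical combination with a factorized TD loss (beta_* are tuning knobs):
# loss = td_loss - beta_mi * mi_surrogate(...).mean() + beta_h * message_entropy(...).mean()
```

The two terms pull in opposite directions by design: the mutual-information term keeps messages useful to receivers, while the entropy term keeps them cheap, which is what makes the learned value function "nearly" rather than fully decomposable.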
Recently, multi-agent policy gradient (MAPG) methods have witnessed vigorous progress. However, a performance gap remains between MAPG methods and state-of-the-art multi-agent value-based approaches. In this paper, we investigate the causes that hinder the performance of MAPG algorithms and present a multi-agent decomposed policy gradient method (DOP), which introduces the idea of value function decomposition into the multi-agent actor-critic framework. Based on this idea, DOP supports efficient off-policy learning and addresses the issues of centralized-decentralized mismatch and credit assignment in both discrete and continuous action spaces. We formally show that DOP critics have sufficient representational capability to guarantee convergence. In addition, empirical evaluations on the StarCraft II micromanagement benchmark and multi-agent particle environments demonstrate that our method significantly outperforms state-of-the-art value-based and policy-based multi-agent reinforcement learning algorithms. Demonstration videos are available at https://sites.google.com/view/dop-mapg/.
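The sketch below illustrates what a linearly decomposed critic of this kind might look like, assuming PyTorch and feed-forward networks over raw observations rather than action-observation histories; all names and layer sizes are illustrative.

```python
import torch
import torch.nn as nn

class DecomposedCritic(nn.Module):
    """Linearly decomposed critic in the spirit of DOP:
    Q_tot(s, a) = sum_i k_i(s) * Q_i(o_i, a_i) + b(s), with k_i >= 0 so
    each agent's local gradient agrees in sign with the joint gradient,
    which is what lets per-agent policy gradients assign credit."""

    def __init__(self, n_agents, state_dim, obs_dim, n_actions, hidden=64):
        super().__init__()
        self.agent_qs = nn.ModuleList([
            nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU(),
                          nn.Linear(hidden, n_actions))
            for _ in range(n_agents)])
        self.k = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU(),
                               nn.Linear(hidden, n_agents))
        self.b = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU(),
                               nn.Linear(hidden, 1))

    def forward(self, state, obs, actions):
        # per-agent utilities for the chosen actions: [batch, n_agents]
        qs = torch.stack(
            [net(obs[:, i]).gather(-1, actions[:, i:i + 1]).squeeze(-1)
             for i, net in enumerate(self.agent_qs)], dim=-1)
        weights = torch.abs(self.k(state))  # non-negative mixing weights
        q_tot = (weights * qs).sum(-1) + self.b(state).squeeze(-1)
        return q_tot, qs                    # qs drive per-agent policy gradients
```

Linearity of the mixing is the key simplification relative to monotonic mixers like QMIX: it makes the per-agent contribution to Q_tot explicit, which is what enables efficient off-policy evaluation of decentralized policies.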
Intrinsically motivated reinforcement learning aims to address the exploration challenge in sparse-reward tasks. However, the study of exploration methods in transition-dependent multi-agent settings is largely absent from the literature, and we aim to take a step towards solving this problem. We present two exploration methods, exploration via information-theoretic influence (EITI) and exploration via decision-theoretic influence (EDTI), which exploit the role of interaction in the coordinated behavior of agents. EITI uses mutual information to capture the influence of one agent's behavior on the transition dynamics of other agents. EDTI uses a novel intrinsic reward, called the Value of Interaction (VoI), to characterize and quantify the influence of one agent's behavior on the expected returns of other agents. By optimizing the EITI or EDTI objective as a regularizer, agents are encouraged to coordinate their exploration and learn policies that optimize team performance. We show how to optimize these regularizers so that they can be easily integrated with policy gradient reinforcement learning; the resulting update rule draws a connection between coordinated exploration and intrinsic reward distribution. Finally, we empirically demonstrate the significant strength of our method in a variety of multi-agent scenarios.
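For intuition, the following sketch estimates the EITI-style mutual information from counts in a small tabular two-agent setting; the function name and the count-based estimator are assumptions made for illustration, not the paper's implementation.

```python
import numpy as np
from collections import defaultdict

def eiti_mi(transitions):
    """Empirical estimate of the EITI quantity on tabular data:
    I(s2'; s1, a1 | s2, a2), the mutual information between agent 1's
    state-action pair and agent 2's next state. `transitions` is a list
    of (s1, a1, s2, a2, s2_next) tuples."""
    joint = defaultdict(float)
    for t in transitions:
        joint[t] += 1.0
    n = sum(joint.values())
    cond = defaultdict(float)   # counts for p(s2' | s2, a2)
    base = defaultdict(float)   # counts for (s2, a2)
    pair = defaultdict(float)   # counts for (s1, a1, s2, a2)
    for (s1, a1, s2, a2, s2n), c in joint.items():
        cond[(s2, a2, s2n)] += c
        base[(s2, a2)] += c
        pair[(s1, a1, s2, a2)] += c
    mi = 0.0
    for (s1, a1, s2, a2, s2n), c in joint.items():
        p_with = c / pair[(s1, a1, s2, a2)]               # p(s2' | s1, a1, s2, a2)
        p_without = cond[(s2, a2, s2n)] / base[(s2, a2)]  # p(s2' | s2, a2)
        mi += (c / n) * np.log(p_with / p_without)
    return mi
```

The quantity is zero when agent 2's transitions are independent of agent 1's behavior, so using it as an exploration regularizer specifically rewards visiting the interaction points where one agent's actions change what happens to another.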
Perceptual image quality assessment (IQA) uses a computational model to assess image quality in a manner consistent with human opinion. A good model should take both effectiveness and efficiency into consideration, but most previously proposed IQA models do not consider these factors simultaneously. This study therefore attempts to develop an effective and efficient IQA metric. Contrast is an inherent visual attribute that indicates image quality, and visual saliency (VS) captures the regions of an image that attract human attention. The proposed model uses these two features to characterize local image quality. After obtaining the local contrast quality map and the global VS quality map, we combine the weighted standard deviations of the two quality maps to yield the final quality score. Experimental results on three benchmark databases (LIVE, TID2008, and CSIQ) demonstrate that our model performs best in terms of correlation with human judgments of visual quality. Furthermore, the proposed model is more efficient than competing IQA models.
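A rough sketch of such a two-feature metric follows, substituting spectral-residual saliency for whatever VS model the paper actually uses; the constants, the 0.5 mixing weight, and the pooling details are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter

def _local_std(img, size=3):
    """Local contrast as the standard deviation inside a sliding window."""
    m = uniform_filter(img, size)
    return np.sqrt(np.maximum(uniform_filter(img * img, size) - m * m, 0.0))

def _saliency(img):
    """Spectral-residual saliency, a stand-in for the paper's VS model."""
    f = np.fft.fft2(img)
    log_amp = np.log1p(np.abs(f))
    sr = log_amp - uniform_filter(log_amp, 3)
    sal = np.abs(np.fft.ifft2(np.exp(sr + 1j * np.angle(f)))) ** 2
    return gaussian_filter(sal, 3)

def contrast_vs_score(ref, dist, c=0.002, w=0.5):
    """Combine a local contrast quality map and a global VS quality map,
    pool each with its standard deviation, and mix with weight w.
    Larger deviation of a similarity map from uniformity = worse quality."""
    cr, cd = _local_std(ref), _local_std(dist)
    sr, sd = _saliency(ref), _saliency(dist)
    contrast_sim = (2 * cr * cd + c) / (cr ** 2 + cd ** 2 + c)
    vs_sim = (2 * sr * sd + c) / (sr ** 2 + sd ** 2 + c)
    return w * contrast_sim.std() + (1 - w) * vs_sim.std()
```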