Abstract. Visual Sensor Networks (VSNs) consist of several camera nodes with wireless communication capabilities that can perform visual analysis tasks such as object identification, recognition, and tracking. VSN deployments often result in many camera nodes with overlapping fields of view. In the past, such redundancy has been exploited in two different ways: (i) to improve the accuracy/quality of the visual analysis task by exploiting multi-view information, or (ii) to reduce the energy consumed for performing the visual task by applying temporal scheduling techniques among the cameras. In this work, we propose a game-theoretic framework based on the Nash Bargaining Solution to bridge the gap between these two approaches. The key tenet of the proposed framework is for cameras to reduce the energy consumed in the analysis process by exploiting the redundancy in their overlapping fields of view. Experimental results in both simulated and real-life scenarios confirm that the proposed scheme increases the network lifetime with only a negligible loss in visual analysis accuracy.
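For readers unfamiliar with the Nash Bargaining Solution, the following is a generic statement of the two-player case; the symbols here are illustrative and do not correspond to the specific utility definitions introduced later in the paper. Given a feasible utility set $\mathcal{S}$ and a disagreement point $d = (d_1, d_2)$ (the utilities each player obtains if no agreement is reached), the NBS selects the utility pair that maximizes the product of gains over the disagreement point:
\[
(u_1^*, u_2^*) \;=\; \arg\max_{(u_1, u_2) \in \mathcal{S},\; u_i \ge d_i} \; (u_1 - d_1)\,(u_2 - d_2).
\]
In a VSN setting, one may think of each camera's utility as capturing a trade-off between analysis accuracy and energy expenditure, with the disagreement point corresponding to each camera operating independently; the exact formulation adopted in this work is detailed in the sequel.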