In this paper we propose fusing information from facial features and the colour intensity of a person to match targets across cameras with non-overlapping fields of view in a camera sensor network. A Cumulative Brightness Transfer Function (CBTF) is used to model and learn the differences between the radiometric responses of the cameras and the illumination conditions in their fields of view. The non-parametric CBTF is computed for each camera pair in the network from labelled training data. The same training data is used to learn an ensemble of eigenfaces for face matching. After face detection and localization, eigenface components are extracted from each face in the frames of the image sequence. A match measure is defined on these feature vectors, comparing a candidate face against the known reference faces observed in the camera network. The colour-intensity and facial-feature match measures are then fused at the score level to determine the identity of the person across non-overlapping cameras. Successful matching results are demonstrated on a real-life data set collected from multiple cameras.
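To make the score-level fusion concrete, the following is a minimal sketch of the matching step, assuming the CBTF is available as a per-channel lookup table, colour similarity is measured by histogram intersection, face similarity is derived from the Euclidean distance between eigenface coefficient vectors, and fusion is a weighted sum; these specific choices (function names, the weight `w`, the distance-to-similarity squashing) are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def apply_cbtf(hist_src, cbtf):
    """Map a colour histogram from the source camera into the target
    camera's colour space using a pre-learned CBTF, given here as a
    256-entry brightness lookup table (illustrative representation)."""
    mapped = np.zeros_like(hist_src)
    for level, target_level in enumerate(cbtf):
        mapped[int(target_level)] += hist_src[level]
    return mapped

def colour_match_score(hist_a, hist_b):
    """Histogram intersection as an illustrative colour similarity measure."""
    a = hist_a / (hist_a.sum() + 1e-12)
    b = hist_b / (hist_b.sum() + 1e-12)
    return np.minimum(a, b).sum()

def face_match_score(face_vec, ref_vec):
    """Similarity from the Euclidean distance between eigenface coefficient
    vectors, squashed into (0, 1]; the squashing is an assumption."""
    return 1.0 / (1.0 + np.linalg.norm(face_vec - ref_vec))

def fused_score(hist_src, cbtf, hist_ref, face_vec, face_ref, w=0.5):
    """Score-level fusion: weighted sum of the two normalised match scores.
    The weight w and the sum rule are illustrative choices."""
    s_colour = colour_match_score(apply_cbtf(hist_src, cbtf), hist_ref)
    s_face = face_match_score(face_vec, face_ref)
    return w * s_colour + (1.0 - w) * s_face
```

In use, `fused_score` would be evaluated against every reference identity observed in the network and the candidate assigned to the identity with the highest fused score.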