Summary

Image salient region detection methods, which detect and extract interesting regions in pictures, have been a hot research direction in recent years. Most current salient region detection algorithms and their corresponding training datasets originate from general visual attention processes and primarily reflect attention to object shape in pictures. Because color vision provides additional useful information in visual systems, the color component of visual attention must also be considered. First, we collected gaze cue data from several observers using eye-tracking recording technology while the observers were asked to attend to the color information of various paintings. Second, we constructed a color attention dataset, the color saliency dataset (CSD), from the cue data and the pictures. Third, we designed a V-fused color saliency net (VCSNet) model, which includes three modules (a color information fusion module, a prediction module, and an optimization module), and trained the model on the CSD. Finally, we compared our method with previous algorithms on the CSD, and the results showed that our method outperforms them in color saliency detection, with an MAE of 0.057 and an Fmax of 0.265. We have open-sourced part of the self-created dataset: https://github.com/InfiniteEM/ColorSaliencyDataSet.
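For reference, MAE and maximum F-measure (Fmax) are the standard evaluation metrics in salient object detection. The following is a minimal sketch of how these metrics are typically computed; the function names, the beta^2 = 0.3 weighting, and the 256-level threshold sweep are common conventions and assumptions here, not the paper's verified evaluation protocol.

```python
# Sketch of standard saliency evaluation metrics (MAE and maximum F-measure).
# Assumes predicted saliency maps and ground-truth maps are arrays in [0, 1].
import numpy as np


def mae(pred: np.ndarray, gt: np.ndarray) -> float:
    """Mean absolute error between a predicted saliency map and ground truth."""
    return float(np.mean(np.abs(pred.astype(np.float64) - gt.astype(np.float64))))


def f_max(pred: np.ndarray, gt: np.ndarray, beta2: float = 0.3,
          num_thresholds: int = 256) -> float:
    """Maximum F-measure over a sweep of binarization thresholds."""
    gt_bin = gt > 0.5
    best = 0.0
    for t in np.linspace(0.0, 1.0, num_thresholds):
        pred_bin = pred >= t
        tp = np.logical_and(pred_bin, gt_bin).sum()
        precision = tp / (pred_bin.sum() + 1e-8)
        recall = tp / (gt_bin.sum() + 1e-8)
        f = (1 + beta2) * precision * recall / (beta2 * precision + recall + 1e-8)
        best = max(best, float(f))
    return best
```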