Image saliency detection is an important research topic in the field of
computer vision. Traditional saliency detection models rely mostly on
hand-crafted features and prior information; as a result, texture details are
not well preserved, edge contours are incomplete, and both the precision and
recall of object detection are low. With the rise of deep convolutional
neural networks, saliency detection has developed rapidly. However, existing
saliency methods still share a common shortcoming: it is difficult to
uniformly highlight both the clear boundary and the internal region of a
whole object in complex images, mainly because the extracted features are not
sufficiently rich. In this paper, a new frog leaping algorithm-oriented
fully convolutional neural network is proposed for dance motion object
saliency detection. The VGG (Visual Geometry Group) model is improved: the
final fully connected layer is removed, and skip (jump) connection layers are
used for saliency prediction, which effectively combine multi-scale
information from different convolutional layers of the network. Meanwhile, an
improved frog leaping algorithm
is used to optimize the selection of initial weights during network
initialization. During network iteration, the forward-propagation loss of the
convolutional neural network is calculated, and anomalous weights are
corrected with the improved frog leaping algorithm. When the network
satisfies the termination conditions, one final frog leaping step is applied
to further optimize the network weights. In addition, the
new network can combine high-level semantic information and low-level detail
information in a data-driven framework. To effectively preserve the unity of
the object boundary and inner region, a fully connected conditional random
field (CRF) model is used to refine the obtained saliency feature map. In
this paper, the precision-recall (PR) curve, F-measure, maximum F-measure,
weighted F-measure and mean absolute error (MAE) are
tested on six widely used public data sets. Compared with other
state-of-the-art and representative methods, the proposed method achieves
better performance. It also shows strong robustness for image saliency
detection across various scenes, making the boundary and inner region of the
salient object more uniform and the detection results more accurate.
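As a rough illustration of the skip-connection fusion described above (not
the paper's actual network), the following NumPy sketch combines
single-channel side outputs taken from several depths of a VGG-like backbone
into one saliency map. The shapes, the nearest-neighbor upsampling, and the
unweighted summation are all illustrative assumptions:

```python
import numpy as np

def upsample_nearest(fmap, factor):
    """Nearest-neighbor upsampling of a 2-D feature map by an integer factor."""
    return np.repeat(np.repeat(fmap, factor, axis=0), factor, axis=1)

def fuse_side_outputs(side_outputs, full_size):
    """Fuse single-channel side outputs from different depths of a VGG-like
    backbone (hypothetical shapes) into one saliency map via skip connections."""
    fused = np.zeros((full_size, full_size))
    for fmap in side_outputs:
        factor = full_size // fmap.shape[0]
        fused += upsample_nearest(fmap, factor)  # skip connection: add upsampled map
    # squash to (0, 1) to obtain a saliency probability map
    return 1.0 / (1.0 + np.exp(-fused))

# toy side outputs at three scales (strides 1, 2 and 4 relative to full size)
rng = np.random.default_rng(0)
outputs = [rng.standard_normal((s, s)) for s in (64, 32, 16)]
saliency = fuse_side_outputs(outputs, 64)
print(saliency.shape)  # (64, 64)
```

In a real network the fusion would use learned 1x1 convolutions and bilinear
upsampling rather than a plain sum, but the multi-scale combination of coarse
semantic maps with fine detail maps is the same idea.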
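The weight-optimization step can be sketched with a simplified shuffled frog
leaping loop. This is a generic sketch, not the paper's improved variant:
frogs are candidate weight vectors, and the worst frog in each memeplex leaps
toward the memeplex best; the population size, leap rule, and toy objective
are assumptions for illustration.

```python
import numpy as np

def sfla_minimize(loss, dim, n_frogs=20, n_memeplexes=4, iters=60, seed=0):
    """Simplified shuffled frog leaping: the worst frog in each memeplex
    leaps toward the memeplex best; only improving leaps are accepted.
    (Full SFLA also retries toward the global best and does random resets.)"""
    rng = np.random.default_rng(seed)
    frogs = rng.uniform(-1.0, 1.0, size=(n_frogs, dim))
    for _ in range(iters):
        # sort by fitness (lower loss first), then deal frogs into memeplexes
        frogs = frogs[np.argsort([loss(f) for f in frogs])]
        for m in range(n_memeplexes):
            plex = frogs[m::n_memeplexes]          # slice view into the population
            best, worst = plex[0], plex[-1]
            candidate = worst + rng.uniform() * (best - worst)  # leap toward best
            if loss(candidate) < loss(worst):      # accept only improving leaps
                plex[-1] = candidate               # writes through to `frogs`
    return frogs[np.argmin([loss(f) for f in frogs])]

# toy convex objective standing in for the network's forward-propagation loss
best_w = sfla_minimize(lambda w: float(np.sum(w ** 2)), dim=3)
```

In the paper's setting the loss would be the network's forward-propagation
loss and the frogs would be (subsets of) network weights, applied both at
initialization and to correct anomalous weights during training.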
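Two of the reported metrics are straightforward to state precisely. The
sketch below computes MAE and a single-threshold F-measure with the
beta^2 = 0.3 weighting that is conventional in the saliency literature; the
fixed threshold of 0.5 is an illustrative choice (the PR curve and maximum
F-measure sweep over all thresholds instead).

```python
import numpy as np

def mae(saliency, gt):
    """Mean absolute error between a saliency map in [0, 1] and a binary
    ground-truth mask."""
    return float(np.mean(np.abs(saliency - gt)))

def f_measure(saliency, gt, threshold=0.5, beta2=0.3):
    """F-measure at one threshold; beta^2 = 0.3 weights precision over
    recall, as is common in saliency evaluation."""
    pred = saliency >= threshold
    tp = np.logical_and(pred, gt == 1).sum()
    precision = tp / max(pred.sum(), 1)
    recall = tp / max((gt == 1).sum(), 1)
    if precision + recall == 0:
        return 0.0
    return float((1 + beta2) * precision * recall / (beta2 * precision + recall))

gt = np.array([[1, 0], [0, 1]])
perfect = gt.astype(float)
print(mae(perfect, gt), f_measure(perfect, gt))  # 0.0 1.0
```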