2019
DOI: 10.3390/s19173738
Deep Learning and Machine Vision Approaches for Posture Detection of Individual Pigs

Abstract: Posture detection targeted towards providing assessments for the monitoring of health and welfare of pigs has been of great interest to researchers from different disciplines. Existing studies applying machine vision techniques are mostly based on methods using three-dimensional imaging systems, or two-dimensional systems with the limitation of monitoring under controlled conditions. Thus, the main goal of this study was to determine whether a two-dimensional imaging system, along with deep learning approaches…

Cited by 113 publications (91 citation statements) | References 33 publications
“…Zheng et al [2] and Yang et al [22] used Faster-RCNN to recognize pig postures and feeding behaviors. Nasirahmadi et al [16] proposed three detector methods including Faster R-CNN, SSD, and R-FCN to recognize postures of pigs. Real-time sow drinking, urination, and mounting behavior recognition has been achieved by using an optimized target detection method based on the SSD and the MobileNet [24].…”
Section: Feeding, Scratching, Mounting, Lying, Walking
confidence: 99%
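The detectors named in this statement (Faster R-CNN, SSD, R-FCN) are off-the-shelf object-detection architectures. The sketch below illustrates how such a detector could be adapted to pig-posture classes; it is a minimal illustration assuming a torchvision Faster R-CNN backbone and made-up class names, not the cited authors' code.

```python
# Minimal sketch: adapting a Faster R-CNN detector to pig-posture classes
# (standing, lying lateral, lying sternal). Class names, image size and the
# dummy input are assumptions for illustration only; this does not reproduce
# the cited implementations.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

POSTURE_CLASSES = ["background", "standing", "lying_lateral", "lying_sternal"]

def build_posture_detector(num_classes: int = len(POSTURE_CLASSES)):
    # Start from a COCO-pretrained Faster R-CNN and replace the box head
    # so it predicts posture classes instead of the COCO categories.
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    return model

if __name__ == "__main__":
    model = build_posture_detector()
    model.eval()
    # Dummy top-view pen frame (3 x H x W, values in [0, 1]) to show the
    # inference interface; a real pipeline would feed camera frames here.
    dummy_frame = [torch.rand(3, 480, 640)]
    with torch.no_grad():
        detections = model(dummy_frame)[0]
    # Each detection is a bounding box plus a posture label and a score.
    for box, label, score in zip(detections["boxes"],
                                 detections["labels"],
                                 detections["scores"]):
        if score > 0.5:
            print(POSTURE_CLASSES[label], box.tolist(), float(score))
```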
“…Staff also need to record sensor readings frequently, which is more troublesome. So, contactless, low-cost, easy, and effective computer vision techniques [1] have been widely used in animal monitoring processes and play an essential role in assessment of animal behavior [16]. Viazzi et al [17] extracted the mean intensity of motion and the occupation index; then, they used the Linear Discriminant Analysis (LDA) to classify two features to identify aggressive behavior with an accuracy of 89%.…”
Section: Introduction
confidence: 99%
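The approach summarized in this statement reduces each video episode to two scalar features (mean intensity of motion and occupation index) and separates aggressive from non-aggressive episodes with Linear Discriminant Analysis. The sketch below shows a two-feature LDA classifier of that kind using scikit-learn; the feature values are synthetic placeholders, not data from the cited study.

```python
# Sketch of a two-feature LDA classifier in the spirit of the approach above:
# [mean motion intensity, occupation index] -> aggressive vs. non-aggressive.
# All values are synthetic and purely illustrative.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Columns: [mean_motion_intensity, occupation_index]
non_aggressive = rng.normal(loc=[0.2, 0.3], scale=0.05, size=(100, 2))
aggressive = rng.normal(loc=[0.6, 0.5], scale=0.08, size=(100, 2))

X = np.vstack([non_aggressive, aggressive])
y = np.array([0] * 100 + [1] * 100)  # 0 = non-aggressive, 1 = aggressive

lda = LinearDiscriminantAnalysis()
scores = cross_val_score(lda, X, y, cv=5)
print(f"5-fold accuracy: {scores.mean():.2f}")

# Fit on all episodes and classify one new feature pair.
lda.fit(X, y)
print(lda.predict([[0.55, 0.45]]))  # -> [1] (aggressive) for these synthetic data
```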
“…Prior to calibrating the IPCA using the previously selected machine-vision camera, as shown in Figure 10 , camera calibration should be performed. Here, error factors are identified and calibrated, such as the distortion of the lens (including the radial distortion and tangential distortion that arise when a point in a three-dimensional space is mapped onto a two-dimensional image plane) and installation uncertainties [ 25 , 26 , 27 ].…”
Section: Experiments and Analysis
confidence: 99%
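The radial and tangential lens distortion described in this statement is routinely estimated from checkerboard views. The sketch below shows the standard OpenCV calibration workflow (cv2.calibrateCamera followed by cv2.undistort); the image folder, board size and square size are assumptions and do not reflect the citing paper's IPCA setup.

```python
# Sketch of standard camera calibration with OpenCV: estimate the camera
# matrix and the radial/tangential distortion coefficients from checkerboard
# images, then undistort a frame. Paths and board dimensions are assumptions.
import glob
import cv2
import numpy as np

BOARD_SIZE = (9, 6)       # inner corners of the checkerboard (assumed)
SQUARE_SIZE_MM = 25.0     # physical square size (assumed)

# 3-D reference points of the board corners in the board plane (Z = 0).
objp = np.zeros((BOARD_SIZE[0] * BOARD_SIZE[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:BOARD_SIZE[0], 0:BOARD_SIZE[1]].T.reshape(-1, 2)
objp *= SQUARE_SIZE_MM

obj_points, img_points = [], []
image_size = None
for path in glob.glob("calibration_images/*.png"):   # hypothetical folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    image_size = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, BOARD_SIZE)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

# Camera matrix K and distortion coefficients (k1, k2, p1, p2, k3):
# the k terms model radial distortion, the p terms tangential distortion.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, image_size, None, None)
print("RMS reprojection error:", rms)

# Undistort one frame with the estimated parameters.
frame = cv2.imread("calibration_images/sample.png")  # hypothetical image
undistorted = cv2.undistort(frame, K, dist)
cv2.imwrite("undistorted_sample.png", undistorted)
```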
“…Lying, feeding and drinking behaviors were detected by posture classification in depth images [7]. Faster regions-convolutional neural network (Faster R-CNN), single shot multibox detector (SSD) and region-based fully convolutional network (R-FCN) were applied for classification of lying and other kinds of standing behaviors [8]. Moreover, the behaviors could also be classified by spatiotemporal features.…”
Section: Introduction
confidence: 99%