2020 | DOI: 10.3390/ani10020190
A Machine Vision-Based Method for Monitoring Scene-Interactive Behaviors of Dairy Calf

Abstract: Requirements for animal and dairy products are increasing gradually in emerging economies. However, it is critical and challenging to maintain the health and welfare of the increasing population of dairy cattle, especially the dairy calf (up to 20% mortality in China). Animal behaviors reflect considerable information and are used to estimate animal health and welfare. In recent years, machine vision-based methods have been applied to monitor animal behaviors worldwide. Collected image or video informati…

Cited by 36 publications (22 citation statements)
References 20 publications
“…Noncontact and nondestructive monitoring methods such as the machine vision-based technology (MVT) have been suggested and tested to monitor poultry and livestock behavior and for individual identification [4][5][6][7][8][9]. MVT has also been used to evaluate welfare status (e.g., lameness, estrus, pecking, etc.)…”
Section: Introduction
Citation type: mentioning (confidence: 99%)
“…For poultry housing, different versions of MVT have been tested for the identification of specific behaviors under given scenarios (e.g., feeding and drinking as affected by environmental factors or enrichments) and general group behavior (e.g., activity index and locomotion) with or without assistance from other sensors (e.g., Radio-frequency identification and accelerometers) [16][17][18][19][20][21]. However, most existing procedures have limitations or high levels of uncertainty in monitoring group chicken behavior and distribution in the different feeding, drinking, and resting zones due to higher animal density (>10,000 broiler chickens in a commercial facility) compared to other animal (e.g., cattle and swine) facilities [7,8,22]. The "optical flow" method for measuring broiler welfare based on optical flow statistics of flock movements recorded on video [19,22,23], and the "eYeNamic" system for gait score monitoring in broiler houses [17,24], are the most common vision-based methods.…”
Section: Introduction
Citation type: mentioning (confidence: 99%)
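The activity-index idea referenced in the quote above (measuring overall flock movement from video rather than tracking individuals) can be sketched with plain frame differencing. This is a minimal illustration, not the "optical flow" or "eYeNamic" pipelines themselves; the threshold and the change-fraction metric are assumptions.

```python
import numpy as np

def activity_index(prev_frame: np.ndarray, frame: np.ndarray,
                   threshold: int = 25) -> float:
    """Fraction of pixels that changed noticeably between two
    consecutive grayscale frames -- a crude group-activity measure
    in the spirit of the activity-index methods cited above."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    return float(np.mean(diff > threshold))

# Toy example: a 4x4 "scene" where one quarter of the pixels move.
prev = np.zeros((4, 4), dtype=np.uint8)
curr = prev.copy()
curr[:2, :2] = 200                 # 4 of 16 pixels change strongly
print(activity_index(prev, curr))  # -> 0.25
```

A real system would aggregate this statistic over time and per feeding, drinking, or resting zone, which is where the uncertainty at commercial stocking densities (>10,000 birds) arises.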
“…Guo et al [ 89 ] recently developed a machine vision model for the recognition of calf behavior by combining background subtraction and inter-frame difference models. They managed to distinguish behaviors of calves housed in igloos with detection rates of over 90% (pen entering: 94.38%, pen leaving: 92.86%, standing or laying in a static position: 96.85%, and turning: 93.51%), as well as feeding and drinking behaviors, at near 80% (79.69% and 81.73%, respectively) [ 89 ]. Transferring this study to a loose housing dairy barn would remain challenging, as it requires the installation and combined evaluation of multiple cameras within the barn.…”
Section: Results
Citation type: mentioning (confidence: 99%)
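The combination described above, background subtraction plus an inter-frame difference model, can be sketched as follows. This is a rough illustration of the general two-model fusion, not the authors' exact pipeline: the fixed threshold, grayscale NumPy frames, and the logical-AND fusion are all assumptions.

```python
import numpy as np

def motion_mask(frame: np.ndarray, prev_frame: np.ndarray,
                background: np.ndarray, thresh: int = 30) -> np.ndarray:
    """Background subtraction separates the animal from the static
    pen; inter-frame differencing keeps only pixels that are moving
    right now.  Their intersection flags 'foreground AND moving'."""
    bg_diff = np.abs(frame.astype(np.int16) - background.astype(np.int16)) > thresh
    fr_diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16)) > thresh
    return bg_diff & fr_diff

# Toy example: a 3x3 pen where the "calf" moves into the centre pixel.
background = np.zeros((3, 3), dtype=np.uint8)
prev = background.copy()
curr = background.copy()
curr[1, 1] = 255
mask = motion_mask(curr, prev, background)
print(int(mask.sum()))  # -> 1 changed pixel, at the centre
```

Behavior classification (pen entering/leaving, feeding, turning) would then be built on top of such masks, e.g. from the trajectory and shape of the detected blob over time.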
“…Computer vision sensing technologies, such as visible light and structured light, have drawn more and more attention in the fields of anti-terrorism, stability maintenance, emergency rescue, and scene monitoring [1][2][3][4][5]. Human pose reconstruction is a hot topic of computer vision that has been explored by researchers in recent years, which can be applied in the healthcare industry by automating patient monitoring [6], and used in automatic teaching for fitness, sports and dance, motion capture, augmented reality in film production, and training a robot to follow a human pose doing specific actions [7][8][9].…”
Section: Introduction
Citation type: mentioning (confidence: 99%)