2021
DOI: 10.1016/j.oceaneng.2021.108785
Feature recognition and strength estimation of chain links by 3D inspections

Cited by 4 publications (4 citation statements)
References 8 publications
“…To obtain the sample set for the knee-bending-motion visual sensor from ball-motion frame images, the knee-bending trigger data must first be extracted from the massive frame images of the continuous ball-motion video stream; the continuous stream is discretized according to its time features to obtain the individual frame images. The frame images are then grouped by cluster; within each cluster, dynamic differencing and logical judgment are completed to obtain the dynamic active-region images, and these regions are converted into time address/frame data [11,12].…”
Section: Wireless Communications and Mobile Computing
confidence: 99%
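The pipeline this excerpt describes — discretize the stream into frames, difference consecutive frames, keep the dynamic active region, and record it against its frame index — can be sketched minimally as below. The function name, the fixed difference threshold, and the bounding-box representation are illustrative assumptions, not the cited paper's implementation.

```python
import numpy as np

def dynamic_regions(frames, threshold=30):
    """Frame-differencing sketch: consecutive frames are differenced,
    the active (moving) region of each frame is found, and each region
    is mapped back to its frame index (the "time address/frame data").
    The threshold and bounding-box format are illustrative assumptions."""
    records = []
    for t in range(1, len(frames)):
        diff = np.abs(frames[t].astype(int) - frames[t - 1].astype(int))
        active = diff > threshold            # dynamic-difference mask
        if active.any():
            ys, xs = np.nonzero(active)
            bbox = (ys.min(), xs.min(), ys.max(), xs.max())
            records.append((t, bbox))        # frame index + active region
    return records

frames = [np.zeros((4, 4), dtype=np.uint8) for _ in range(3)]
frames[1][1:3, 1:3] = 200                    # motion appears in frame 1
print(dynamic_regions(frames))               # → [(1, (1, 1, 2, 2)), (2, (1, 1, 2, 2))]
```

The region appears twice because the moving patch both enters (frame 1) and leaves (frame 2), each producing a nonzero difference.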
“…Taking a as the central pixel, its gray value is set as the threshold and compared with the gray values of the adjacent pixels to obtain an n-bit binary number. The value is calculated by Equation (11).…”
Section: Feature Extraction Based On CNN
confidence: 99%
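The thresholding scheme in this excerpt is the classic local binary pattern (LBP). A minimal sketch for a 3×3 neighborhood follows; the `lbp_value` helper, the clockwise bit order, and the >= comparison are illustrative assumptions rather than the cited paper's exact Equation (11).

```python
import numpy as np

def lbp_value(patch):
    """Local binary pattern of a 3x3 gray-level patch: the center pixel's
    gray value is the threshold, each of the 8 neighbors contributes one
    bit (1 if neighbor >= center), and the bits are weighted by powers
    of two to form an 8-bit code."""
    center = patch[1, 1]
    # Neighbors in clockwise order starting from the top-left corner.
    neighbors = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                 patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    return sum((1 if n >= center else 0) << i for i, n in enumerate(neighbors))

patch = np.array([[ 90, 120,  60],
                  [110, 100,  95],
                  [130, 105,  80]])
print(lbp_value(patch))  # → 226
```

Different papers choose different starting neighbors and bit orders; the code value changes, but the descriptive power of the pattern does not.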
“…where x and y are the values of the joint coordinates, and a′ and b′ are the values of the previous joint coordinates, from which the angles are estimated. In addition, several contour features with good discrimination are extracted from each frame: contour area s, contour centroid ordinate R, maximum contour width f, contour height f p n ,(N−1), and contour aspect ratio Z. The values extracted for the same feature across different frames can be expressed as a time series [14]. Finally, we obtain 10 gait time series, which represent the features contained in the gait video well.…”
Section: Athlete Gait Feature Acquisition Based On Multisource
confidence: 99%
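The per-frame contour features this excerpt lists (area, centroid ordinate, width, height, aspect ratio) can be sketched on a binary silhouette mask as below; stacking one tuple per frame gives the gait time series described. The function name and the pixel-count definition of area are illustrative assumptions.

```python
import numpy as np

def contour_features(mask):
    """Per-frame silhouette features of the kind the excerpt lists:
    contour area, centroid ordinate, maximum width, height, and aspect
    ratio. `mask` is a binary (0/1) silhouette image."""
    ys, xs = np.nonzero(mask)
    area = len(ys)                       # number of silhouette pixels
    centroid_y = ys.mean()               # centroid ordinate
    width = xs.max() - xs.min() + 1      # maximum horizontal extent
    height = ys.max() - ys.min() + 1     # vertical extent
    return area, centroid_y, width, height, width / height

# One tuple per frame; collecting them over frames yields the time series.
frame = np.zeros((6, 6), dtype=int)
frame[1:5, 2:4] = 1                      # a 4x2 rectangular "silhouette"
print(contour_features(frame))           # → (8, 2.5, 2, 4, 0.5)
```

Each feature varies periodically with the gait cycle, which is why the sequences work well as time-series descriptors.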
“…A grey system is not the same as fuzzy mathematics: it attaches importance to objects with a clear extension but an unclear intension, and it has broader coverage and stronger penetration across fields [13]. The deep belief neural network classification algorithm and Big Data analysis modeling first determine the mathematical relationships among the many factors in the grey system, then extract and classify the differentiated English features of the text, and then match them against the information stored in a known database [14].…”
Section: Related Work
confidence: 99%