“…The study first identifies and cleanses the original "dirty data." Common data cleaning methods include the statistical 3σ criterion, box plots, and machine-learning-based approaches such as clustering, the local outlier factor, isolation forests, and deep learning methods [5][6][7][8]. Because line loss anomalies are diverse, these methods suffer from false alarms, missed detections, and other problems during detection, and their results remain heavily influenced by human judgment.…”
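As a rough illustration of two of the cleaning methods named above, the sketch below applies the 3σ criterion and an isolation forest to a synthetic series of line-loss rates. This is not the paper's implementation; the data, column names, and contamination threshold are assumptions chosen for demonstration only.

```python
# Minimal sketch (assumed setup, not the cited study's code): flagging anomalous
# line-loss records with the 3σ criterion and an isolation forest.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
line_loss_rate = rng.normal(loc=5.0, scale=0.8, size=1000)  # synthetic daily line-loss rates (%)
line_loss_rate[::100] += 6.0                                # inject a few artificial outliers

# 3σ criterion: flag points farther than three standard deviations from the mean.
mu, sigma = line_loss_rate.mean(), line_loss_rate.std()
sigma_flags = np.abs(line_loss_rate - mu) > 3 * sigma

# Isolation forest: flag points that are easy to isolate by random partitioning.
# The contamination rate of 1% is an illustrative assumption.
iso = IsolationForest(contamination=0.01, random_state=0)
iso_flags = iso.fit_predict(line_loss_rate.reshape(-1, 1)) == -1

print(f"3σ criterion flagged {sigma_flags.sum()} points; "
      f"isolation forest flagged {iso_flags.sum()} points")
```

Note that the two detectors generally disagree on borderline points, which is one source of the false alarms and missed detections the passage describes.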