2022
DOI: 10.3390/diagnostics12102351
Enhancing Annotation Efficiency with Machine Learning: Automated Partitioning of a Lung Ultrasound Dataset by View

Abstract: Background: Annotating large medical imaging datasets is an arduous and expensive task, especially when the datasets in question are not organized according to deep learning goals. Here, we propose a method that exploits the hierarchical organization of annotating tasks to optimize efficiency. Methods: We trained a machine learning model to accurately distinguish between two classes of lung ultrasound (LUS) views using 2908 clips from a larger dataset. Partitioning the remaining dataset by view would re…

Cited by 5 publications (4 citation statements)
References 35 publications
“…Before experimentation, we split the labeled portion of ParenchymalLUS by anonymous patient identifier into training, validation, and test sets. The unlabeled portion of ParenchymalLUS was assembled by gathering 20,000 videos from the unlabeled pool of videos in the database that were predicted to contain a parenchymal view of the lungs by a previously trained lung US view classifier (VanBerlo et al., 2022a). All videos from the same patient were in either the labeled or the unlabeled subset.…”
Section: A-line vs. B-line Classification (AB), mentioning
confidence: 99%
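The statement above outlines a data-curation protocol: clips are grouped by patient so that no patient spans the labeled and unlabeled subsets, and the unlabeled pool is filled with videos that a previously trained view classifier predicts to be parenchymal. The sketch below illustrates that selection step under stated assumptions; the DataFrame columns (patient_id, parenchymal_prob), the probability threshold, and the helper name build_unlabeled_pool are illustrative and not taken from the cited work.

```python
# Illustrative sketch only: column names, threshold, and pool size are assumptions.
import pandas as pd

def build_unlabeled_pool(unlabeled_meta: pd.DataFrame,
                         labeled_meta: pd.DataFrame,
                         n_videos: int = 20000,
                         threshold: float = 0.5) -> pd.DataFrame:
    """Select unlabeled videos predicted to show a parenchymal view,
    excluding every patient that already appears in the labeled subset."""
    # Enforce patient exclusivity between labeled and unlabeled data.
    labeled_patients = set(labeled_meta["patient_id"])
    pool = unlabeled_meta[~unlabeled_meta["patient_id"].isin(labeled_patients)]

    # Keep videos the pretrained view classifier calls parenchymal,
    # most confident predictions first, up to the requested pool size.
    pool = pool[pool["parenchymal_prob"] >= threshold]
    return pool.sort_values("parenchymal_prob", ascending=False).head(n_videos)
```

Ranking by predicted probability before truncating to the pool size is one reasonable way to favor the classifier's most confident parenchymal predictions; a hard threshold alone would also work.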
“…Access to the data was permitted via ethics approval granted by Western University (REB 116838). The dataset had been previously labeled for the View, AB, and PE tasks by competent LUS interpreters during prior work [11,36]. The labeled portion of the local dataset was split by patient identifier into a training set (70%), validation set (15%), and test set (15%), and the external dataset was reserved for testing only.…”
Section: B. Data, mentioning
confidence: 99%
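The 70/15/15 split by patient identifier described above can be reproduced with a grouped splitter so that all clips from a given patient fall into exactly one subset. The following is a minimal sketch using scikit-learn's GroupShuffleSplit; the column name patient_id and the choice of splitting utility are assumptions rather than details reported in the cited work.

```python
# Minimal sketch of a patient-grouped 70/15/15 split; names are illustrative.
import pandas as pd
from sklearn.model_selection import GroupShuffleSplit

def split_by_patient(meta: pd.DataFrame, seed: int = 0):
    """Split clip-level metadata into train/val/test without patient overlap."""
    # Stage 1: 70% of patients for training, 30% held out.
    gss = GroupShuffleSplit(n_splits=1, train_size=0.70, random_state=seed)
    train_idx, rest_idx = next(gss.split(meta, groups=meta["patient_id"]))
    train, rest = meta.iloc[train_idx], meta.iloc[rest_idx]

    # Stage 2: split the held-out patients evenly into validation and test (15% each).
    gss2 = GroupShuffleSplit(n_splits=1, train_size=0.50, random_state=seed)
    val_idx, test_idx = next(gss2.split(rest, groups=rest["patient_id"]))
    return train, rest.iloc[val_idx], rest.iloc[test_idx]
```

Grouping on patients rather than on individual clips prevents frames from the same patient from leaking across the training, validation, and test sets.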
“…In the past few years, the analysis of medical images based on ML has gained significant importance in scientific research. In particular, with the progress of computer vision, researchers are encouraged to develop various systems for the analysis, correlation, and interpretation of medical images [15,16,17], such as convolutional neural networks for brain image segmentation [18,19,20,21] and for brain tumor detection and classification [22,23,24]; medical image registration, fusion, and annotation [25,26,27,28,29]; computer-aided diagnosis (CAD) systems [30,31,32,33,34,35]; and the automatic detection of micro-bleeds in medical images [36,37,38].…”
Section: Introduction, mentioning
confidence: 99%