Despite the impressive performance of current deep learning models in
the field of medical imaging, translating lung segmentation in chest
X-ray images into clinical practice remains an open challenge. In this
study, we evaluated a fully automatic framework for lung fields
segmentation in chest X-ray images that combines the prompt-driven
Segment Anything Model (SAM) with the You Only Look Once (YOLO) model,
which supplies effective prompts. Transfer learning, loss functions,
and several validation strategies were assessed extensively, providing
a comprehensive benchmark that enables future studies to compare new
segmentation strategies fairly.
The results demonstrate significant robustness and generalization
capability across the variability in sensors, populations, disease
manifestations, device processing, and imaging conditions. The proposed
framework is computationally efficient, can mitigate bias when training
over multiple datasets, and has the potential to be applied to other
domains and modalities.
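As a concrete illustration of the pipeline described above, the following is a minimal sketch of how detector boxes from YOLO can serve as prompts for SAM. It assumes the publicly available `ultralytics` and `segment-anything` packages; the weight files (`yolo_lungs.pt`, `sam_vit_b.pth`) and the input path (`cxr.png`) are hypothetical placeholders, not artifacts released with this study.

```python
# Sketch: YOLO localizes the lung fields; its bounding boxes prompt SAM.
# Assumes the `ultralytics` and `segment-anything` packages are installed.
import cv2
import numpy as np
from ultralytics import YOLO
from segment_anything import sam_model_registry, SamPredictor

detector = YOLO("yolo_lungs.pt")  # hypothetical lung-field detector weights
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b.pth")  # hypothetical SAM checkpoint
predictor = SamPredictor(sam)

# SAM expects an RGB uint8 image.
image = cv2.cvtColor(cv2.imread("cxr.png"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

masks = []
for box in detector(image)[0].boxes.xyxy.cpu().numpy():  # one box per detected lung field
    mask, _, _ = predictor.predict(box=box, multimask_output=False)
    masks.append(mask[0])

# The union of the per-lung masks yields the final lung-fields segmentation.
lung_mask = np.any(masks, axis=0) if masks else np.zeros(image.shape[:2], dtype=bool)
```

One design note implicit in this scheme: because SAM only needs a coarse box per lung field, the detector can remain lightweight, which is consistent with the computational efficiency claimed for the framework.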