Light Echoes (LEs) are the reflections of astrophysical transients off interstellar dust. They are fascinating astronomical phenomena that enable studies of the scattering dust as well as of the original transients. LEs, however, are rare and extremely difficult to detect, as they appear as faint, diffuse, time-evolving features. The detection of LEs still largely relies on human inspection of images, a method infeasible in the era of large synoptic surveys. The Vera C. Rubin Observatory Legacy Survey of Space and Time (LSST) will generate an unprecedented amount of astronomical imaging data at high spatial resolution and exquisite image quality over tens of thousands of square degrees of sky: an ideal survey for LEs. However, the Rubin data processing pipelines are optimized for the detection of point sources and will entirely miss LEs. Over the past several years, Artificial Intelligence (AI) object detection frameworks have achieved and surpassed real-time, human-level performance. In this work, we prepare a dataset from the ATLAS telescope and test a popular AI object detection framework, You Only Look Once (YOLO), developed in the computer vision community, to demonstrate the potential of AI in the detection of LEs in astronomical images. We find that an AI framework can reach human-level performance even with a size- and quality-limited dataset. We explore and highlight challenges, including class imbalance and label incompleteness, and map out the work required to build an end-to-end pipeline for the automated detection and study of LEs in high-throughput astronomical surveys.