This study presents the first demonstration of the transferability of a convolutional neural network (CNN) trained to detect microseismic events in one fiber-optic distributed acoustic sensing (DAS) data set to other data sets. DAS is increasingly being used for microseismic monitoring in industrial settings, and the dense spatial and temporal sampling provided by these systems produces large data volumes (approximately 650 GB/day for a 2 km-long cable sampled at 2000 Hz with a spatial sampling of 1 m), requiring new processing techniques for near-real-time microseismic analysis. We have trained YOLOv3, an object-detection CNN, to detect microseismic events using synthetically generated waveforms with real noise superimposed. The performance of the CNN is compared with the number of events detected using filtering and amplitude-threshold (short-term average/long-term average, STA/LTA) detection techniques. In the data set from which the real noise is taken, the network detects >80% of the events identified by manual inspection and 14% more events than standard frequency-wavenumber filtering techniques. The false detection rate is approximately 2%, or roughly one false detection every 20 s. In other data sets, with monitoring geometries and conditions previously unseen by the network, the CNN detects >50% of the events identified by manual inspection.
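As a rough check of the quoted data volume, the sketch below reproduces the figure from the acquisition parameters given above; the 16-bit sample size is an assumption, not stated in the text.

```python
# Back-of-envelope check of the quoted DAS data volume.
# Cable length, channel spacing, and sample rate are taken from the abstract;
# the 2-byte (16-bit) sample size is an assumption.
cable_length_m = 2000       # 2 km cable
channel_spacing_m = 1       # 1 m spatial sampling
sample_rate_hz = 2000       # 2000 Hz temporal sampling
bytes_per_sample = 2        # assumed 16-bit samples

n_channels = cable_length_m // channel_spacing_m
bytes_per_day = n_channels * sample_rate_hz * bytes_per_sample * 86_400
print(f"{bytes_per_day / 1024**3:.0f} GiB/day")  # ~644 GiB/day, consistent with ~650 GB/day
```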
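The training data are described only at a high level (synthetic waveforms with real DAS noise superimposed); a minimal sketch of that kind of superposition, with an illustrative SNR-based scaling that is not taken from the study, might look like the following.

```python
import numpy as np

def make_training_example(synthetic_event, real_noise, target_snr):
    """Superimpose a recorded DAS noise window onto a synthetic event.

    Both inputs are (channels x samples) arrays of the same shape; the
    SNR-based scaling is illustrative, not the scheme used in the study.
    """
    signal_rms = np.sqrt(np.mean(synthetic_event ** 2))
    noise_rms = np.sqrt(np.mean(real_noise ** 2))
    scale = signal_rms / (noise_rms * target_snr)  # scale noise to the desired SNR
    return synthetic_event + scale * real_noise
```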
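For reference, the amplitude-threshold baseline mentioned above is the classic STA/LTA trigger; a simplified single-channel version (window lengths and threshold are illustrative) is sketched below.

```python
import numpy as np

def sta_lta_triggers(trace, sta_len, lta_len, threshold):
    """Simplified STA/LTA detector on one DAS channel: compare short- and
    long-window average energies and return sample indices above threshold.
    A centred moving average is used here for brevity; operational detectors
    typically use trailing (causal) windows."""
    energy = trace.astype(float) ** 2
    sta = np.convolve(energy, np.ones(sta_len) / sta_len, mode="same")
    lta = np.convolve(energy, np.ones(lta_len) / lta_len, mode="same")
    ratio = sta / np.maximum(lta, 1e-12)  # guard against division by zero
    return np.flatnonzero(ratio > threshold)

# Example for a 2000 Hz channel: 0.05 s STA, 1 s LTA, trigger at STA/LTA > 4
# picks = sta_lta_triggers(channel_data, sta_len=100, lta_len=2000, threshold=4.0)
```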