Improving our ability to monitor flooding events is crucial for protecting populations and infrastructure and for planning mitigation and adaptation strategies. Despite recent advances, hydrological models and remote sensing tools cannot always map flooding at the required spatial and temporal resolutions because of intrinsic limitations of both the models and the remote sensing data. In this regard, images collected by web cameras can be used to estimate water levels during flooding or to detect the presence/absence of water within a scene. Here, we report the results of an assessment of an algorithm that uses web camera images to estimate water levels and detect the presence of water during flooding events. The core of the algorithm is a combination of deep convolutional neural networks (D-CNNs) and image segmentation. We assessed the outputs of the algorithm in two ways: first, we compared time series of water levels estimated by the algorithm with those measured by collocated tide gauges; second, we performed a qualitative assessment of the algorithm's ability to detect flooding in images obtained from the web under different illumination and weather conditions and with low spatial or spectral resolution. The comparison between measured and camera-estimated water levels yielded a coefficient of determination R2 of 0.84–0.87, a maximum absolute bias of 2.44–3.04 cm and a slope ranging between 1.089 and 1.103 in the two cases considered here. Our analysis of the histogram of the differences between gauge-measured and camera-estimated water levels indicated mean differences of −1.18 cm and 5.35 cm for the two gauges, respectively, with standard deviations ranging between 4.94 and 12.03 cm.
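The agreement statistics quoted above (R2, bias, standard deviation of the differences, and the slope of the fit) can be reproduced for any pair of time series. The sketch below, which is an illustration and not the paper's own code, computes them with NumPy, assuming gauge and camera water levels are given as equal-length arrays in centimetres; the variable names and the synthetic data are hypothetical.

```python
import numpy as np

def agreement_metrics(gauge, camera):
    """Compare gauge-measured and camera-estimated water levels (cm).

    Returns R^2 of a least-squares linear fit, the mean of the
    gauge-minus-camera differences (bias), their standard deviation,
    and the slope of the fit camera ~ slope * gauge + intercept.
    """
    gauge = np.asarray(gauge, dtype=float)
    camera = np.asarray(camera, dtype=float)

    # Least-squares line: camera ~ slope * gauge + intercept
    slope, intercept = np.polyfit(gauge, camera, 1)

    # Coefficient of determination of the fit
    pred = slope * gauge + intercept
    ss_res = np.sum((camera - pred) ** 2)
    ss_tot = np.sum((camera - camera.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot

    diff = gauge - camera  # gauge-measured minus camera-estimated
    return r2, diff.mean(), diff.std(ddof=1), slope

# Hypothetical illustration: a camera series that tracks the gauge closely
rng = np.random.default_rng(0)
gauge = np.linspace(0.0, 100.0, 200)
camera = 1.1 * gauge + rng.normal(0.0, 5.0, gauge.size)
r2, bias, sd, slope = agreement_metrics(gauge, camera)
```

With the synthetic series above, R2 is close to 1 and the slope is close to the imposed 1.1, mirroring the kind of agreement reported for the two gauges in the study.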
Our analysis of the algorithm's performance in detecting water in web images of areas before and after a flooding event shows that its accuracy exceeded ~90%, with the Intersection over Union (IoU) exceeding ~80% and the boundary F1 score (BF1) exceeding ~70%; both metrics are standard measures of segmentation quality.
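For readers unfamiliar with the two segmentation scores, the following NumPy sketch shows how IoU and a boundary F1 can be computed for binary water masks. It is an illustration only, not the paper's implementation: the boundary F1 used in segmentation benchmarks normally allows a small pixel-distance tolerance when matching boundaries, whereas this simplified version matches boundary pixels exactly.

```python
import numpy as np

def iou(pred, truth):
    """Intersection over Union of two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union if union else 1.0

def boundary(mask):
    """Mask pixels with at least one background 4-neighbour."""
    m = np.asarray(mask, dtype=bool)
    padded = np.pad(m, 1, constant_values=False)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1]
                & padded[1:-1, :-2] & padded[1:-1, 2:])
    return m & ~interior

def boundary_f1(pred, truth):
    """Strict (zero-tolerance) boundary F1: F1 over exact boundary pixels."""
    bp, bt = boundary(pred), boundary(truth)
    tp = (bp & bt).sum()
    precision = tp / bp.sum() if bp.sum() else 1.0
    recall = tp / bt.sum() if bt.sum() else 1.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

A mask shifted by one pixel against itself, for example, lowers both scores, which is why the boundary score is typically the stricter of the two for thin or ragged water edges.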