Emerging deep learning (DL) approaches combined with edge computing have enabled the automated extraction of rich information, such as complex events, from camera feeds. However, due to limitations in the speed and accuracy of object detection, some objects are missed. Because objects constitute simple events, missed objects result in missed simple events and thus reduce the number of detected complex events. The main objective of this paper is an integrated cloud and edge computing architecture, designed and developed to reduce the number of missed simple events. To achieve this goal, we deployed multiple smart cameras (i.e., cameras that connect to the Internet and are integrated with computerised systems such as a DL unit) to detect complex events from multiple views. Capturing simple events from multiple cameras reduces the number of missed simple events and increases the number of detected complex events. To evaluate the accuracy of complex event detection, the F-score for detecting COVID-19 spread risk behaviour events in video streams was used. The experimental results demonstrate that the proposed architecture delivered 1.73 times higher event detection accuracy than an edge-based architecture using a single camera. The average event detection latency of the integrated cloud and edge architecture was 1.85 times higher than that of the single-camera architecture; however, this increase was insignificant for the current case study. Moreover, for complex events involving more spatial and temporal relationships, the matching accuracy of the proposed architecture improved significantly compared with the edge computing scenario. Finally, complex event detection accuracy depended considerably on object detection accuracy: regression-based models, such as You Only Look Once (YOLO), provided better accuracy than region-based models.