Summary
The seismic performance of a building must be evaluated after it has been subjected to earthquake loading. In the evaluation process, building codes and standards require that the drift of the structure be determined to assess structural performance. This study presents an innovative method that helps engineers measure the deflection of reinforced concrete (RC) beams. A deep learning image-classification model, the residual network (ResNet), is used to classify the deflection from images obtained by computer vision. However, determining the optimal values of this model's hyperparameters is a challenge. Therefore, a hybrid model is developed that integrates a bio‐inspired optimizer, the jellyfish search (JS) algorithm, with ResNet. The model is trained on images collected in an RC structural experiment involving 29 cantilever beams with various RC designs. These specimen RC beams were tested under simulated seismic loads with lateral displacement control. After each load had been applied to a beam, four single‐lens digital cameras captured images from the east, west, north, and south. The performance of the computer vision‐based JS–ResNet was then evaluated by comparing its accuracy with that of the original ResNet using default hyperparameters. The results of the analysis show that the proposed JS–ResNet model achieves higher accuracy than the conventional ResNet. The hybrid model can therefore provide insights into similar visual surveillance tasks.
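To illustrate the kind of hybrid search the summary describes, the sketch below runs a simplified jellyfish search loop over two hypothetical ResNet hyperparameters (log learning rate and log batch size). The surrogate objective `validation_error` merely stands in for the validation error of a trained network, and all names, bounds, and parameter choices are illustrative assumptions, not the authors' implementation:

```python
import random

# Hypothetical surrogate for the validation error of a ResNet trained with a
# given (log10 learning rate, log2 batch size) pair; the study's real
# objective would train the network on the beam images and measure accuracy.
def validation_error(x):
    log_lr, batch_exp = x
    return (log_lr + 3.0) ** 2 + 0.1 * (batch_exp - 5.0) ** 2

LB = [-5.0, 2.0]   # lower bounds: log10(lr), log2(batch size)
UB = [-1.0, 8.0]   # upper bounds

def clip(x):
    # keep a candidate inside the search bounds
    return [min(max(v, lo), hi) for v, lo, hi in zip(x, LB, UB)]

def jellyfish_search(n_pop=15, n_iter=60, beta=3.0, gamma=0.1, seed=1):
    rng = random.Random(seed)
    dim = len(LB)
    pop = [[rng.uniform(lo, hi) for lo, hi in zip(LB, UB)] for _ in range(n_pop)]
    fit = [validation_error(x) for x in pop]
    for t in range(1, n_iter + 1):
        best = pop[fit.index(min(fit))]
        # time-control function: decides ocean-current vs. swarm motion
        c = abs((1 - t / n_iter) * (2 * rng.random() - 1))
        for i in range(n_pop):
            r = rng.random()
            if c >= 0.5:
                # motion with the ocean current toward the best jellyfish
                mu = [sum(x[d] for x in pop) / n_pop for d in range(dim)]
                trend = [best[d] - beta * r * mu[d] for d in range(dim)]
                cand = [pop[i][d] + rng.random() * trend[d] for d in range(dim)]
            elif rng.random() > 1 - c:
                # passive motion: small random step scaled by the bounds
                cand = [pop[i][d] + gamma * rng.random() * (UB[d] - LB[d])
                        for d in range(dim)]
            else:
                # active motion: move toward a better neighbour, away from a worse one
                j = rng.randrange(n_pop)
                sign = 1.0 if fit[j] < fit[i] else -1.0
                cand = [pop[i][d] + rng.random() * sign * (pop[j][d] - pop[i][d])
                        for d in range(dim)]
            cand = clip(cand)
            f = validation_error(cand)
            if f < fit[i]:  # greedy replacement: keep the better position
                pop[i], fit[i] = cand, f
    b = fit.index(min(fit))
    return pop[b], fit[b]

best_x, best_f = jellyfish_search()
```

In a full pipeline, each call to the objective would train a ResNet with the candidate hyperparameters and return its validation error on the beam-image dataset; the loop structure and greedy replacement would remain the same.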