Using deep learning (DL) to recognize building and infrastructure damage from images is becoming popular in vision‐based structural health monitoring (SHM). However, many previous studies consider only the presence of damage in an image, treating the problem as single‐attribute classification, or focus separately on finding the location or extent of damage as a localization or segmentation problem. The abundant information contained in images from multiple sources, as well as the relationships among tasks, is therefore not fully exploited. In this study, the vision‐based SHM problem is first reformulated in a multiattribute, multitask setting in which each image carries multiple labels describing its characteristics. A general multiattribute multitask detection framework, ϕ‐NeXt, is then proposed, introducing 10 benchmark tasks spanning classification, localization, and segmentation. Accordingly, a large‐scale data set containing 37,000 pairs of multilabeled images is established. To pursue better performance across all tasks, a novel hierarchical framework, the multiattribute multitask transformer (MAMT2), is proposed, which integrates multitask transfer‐learning mechanisms and adopts a transformer‐based network as its backbone. Finally, for benchmarking purposes, extensive experiments are conducted on all tasks, and the performance of MAMT2 is compared with that of several classical DL models. The results demonstrate the superiority of MAMT2 on all tasks, revealing great potential for practical applications and future studies in both structural engineering and computer vision.