A piping and instrumentation diagram (P&ID) is a key drawing widely used in the energy industry. In a digital P&ID, all included objects are classified and made amenable to computerized data management. However, a large number of P&IDs are still used in image format throughout the process (plant design, procurement, construction, and commissioning) owing to difficulties associated with contractual relationships and software systems. In this study, we propose a method that uses deep learning techniques to recognize and extract important information from the objects in image-format P&IDs. We define the training data structure required for developing a deep learning model for P&ID recognition. The proposed method consists of a preprocessing stage and a recognition stage. In the preprocessing stage, diagram alignment, outer border removal, and title box removal are performed. In the recognition stage, the symbols, characters, lines, and tables in the P&ID drawing are detected. A new deep learning model for symbol detection is defined based on AlexNet. We also employ the connectionist text proposal network (CTPN) for character detection, and traditional image processing techniques for line and table detection. In experiments in which two test P&IDs were recognized using the proposed method, the recognition accuracies for symbols, characters, and lines were 91.6%, 83.1%, and 90.6% on average, respectively.
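The abstract mentions traditional image processing for line detection. One common approach on a binarized drawing is run-length scanning: collect horizontal and vertical runs of set pixels longer than a threshold and report them as line segments. The sketch below illustrates that idea in pure Python; `find_lines` and its parameters are hypothetical helpers for illustration, not the paper's implementation.

```python
# Minimal sketch of run-length line detection on a binary P&ID raster.
# find_lines is an illustrative helper, not the paper's method: it scans
# each row (and, via transpose, each column) for runs of set pixels of
# at least min_len and reports them as axis-aligned line segments.

def _runs(row, min_len):
    """Yield (start, end) index pairs of 1-runs of length >= min_len."""
    start = None
    for i, px in enumerate(row):
        if px and start is None:
            start = i
        elif not px and start is not None:
            if i - start >= min_len:
                yield (start, i - 1)
            start = None
    if start is not None and len(row) - start >= min_len:
        yield (start, len(row) - 1)

def find_lines(grid, min_len=4):
    """Return horizontal and vertical segments as ((x1, y1), (x2, y2))."""
    segments = []
    for y, row in enumerate(grid):                      # horizontal scan
        segments += [((x1, y), (x2, y)) for x1, x2 in _runs(row, min_len)]
    for x, col in enumerate(zip(*grid)):                # vertical scan
        segments += [((x, y1), (x, y2)) for y1, y2 in _runs(col, min_len)]
    return segments

# A tiny 6x6 raster containing one horizontal and one vertical line:
grid = [
    [0, 0, 0, 1, 0, 0],
    [1, 1, 1, 1, 1, 1],
    [0, 0, 0, 1, 0, 0],
    [0, 0, 0, 1, 0, 0],
    [0, 0, 0, 1, 0, 0],
    [0, 0, 0, 0, 0, 0],
]
print(find_lines(grid))  # → [((0, 1), (5, 1)), ((3, 0), (3, 4))]
```

Real diagrams would first require binarization and would also need diagonal-line handling (e.g. a Hough transform), but the run-length idea above captures the orthogonal piping lines that dominate P&IDs.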
An overview of the Michigan Positron Microscope Program is presented, with particular emphasis on the second-generation microscope, which is presently near completion. The design and intended applications of this microscope are summarized.
Container yard congestion can become a bottleneck in port logistics and result in accidents. Therefore, transfer cranes, which were previously operated manually, are being automated to increase their work efficiency. Moreover, LiDAR is used for recognizing obstacles. However, LiDAR cannot distinguish obstacle types; thus, cranes must move slowly in the risk area regardless of the obstacle, which reduces their work efficiency. In this study, we propose a novel method for recognizing the position and class of trained and untrained obstacles around a crane using cameras installed on the crane. First, a semantic segmentation model, trained on images of obstacles and the ground, recognizes the obstacles in the camera images. Then, an image filter extracts the obstacle boundaries from the segmented image. Finally, a coordinate mapping table converts the obstacle boundaries from the image coordinate system to the real-world coordinate system. Estimating the distance of a truck with our method yielded an error of 32 cm at a distance of 5 m and 125 cm at a distance of 30 m. The error of the proposed method is large compared with that of LiDAR; however, it is acceptable because vehicles in ports move at low speeds, and the error decreases as obstacles move closer.
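The abstract's coordinate mapping table converts image-plane points to real-world ground coordinates. For points on a flat ground plane, one standard way to realize such a mapping is a planar homography. The sketch below shows the mechanics with an illustrative placeholder matrix `H`; it is not a calibration from the paper, and in practice `H` would be estimated from known image/ground point correspondences.

```python
# Sketch of mapping an image-plane pixel to ground-plane coordinates
# via a 3x3 planar homography. H below is an illustrative placeholder,
# not a real camera calibration: it simply scales pixels to metres.

def image_to_ground(H, u, v):
    """Apply homography H to pixel (u, v); return ground-plane (X, Y)."""
    x = H[0][0] * u + H[0][1] * v + H[0][2]
    y = H[1][0] * u + H[1][1] * v + H[1][2]
    w = H[2][0] * u + H[2][1] * v + H[2][2]
    return x / w, y / w  # divide out the homogeneous scale factor

# Placeholder homography: 0.01 m per pixel, no perspective terms.
H = [
    [0.01, 0.0, 0.0],
    [0.0, 0.01, 0.0],
    [0.0, 0.0, 1.0],
]
print(image_to_ground(H, 500, 250))  # → (5.0, 2.5)
```

A precomputed lookup table, as the abstract describes, can be built by evaluating such a mapping once per pixel, which avoids the per-frame arithmetic at run time.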