Computer vision applications have become one of the most rapidly developing areas in automation and robotics, as well as in related areas of science and technology, e.g., mechatronics, intelligent transport and logistics, biomedical engineering, and even the food industry. Nevertheless, automation and robotics remains one of the leading areas of practical application for recently developed artificial intelligence solutions, particularly computer and machine vision algorithms. One of the most relevant issues is the safety of human-computer and human-machine interactions in robotics, which requires the "explainability" of algorithms and often excludes the potential application of some deep learning solutions, regardless of their performance in pattern recognition tasks.

Considering the limited amount of training data typical for robotics, important challenges are related to unsupervised learning, as well as to no-reference image and video quality assessment methods, which may prevent distorted video frames from being used in the image analysis that drives further control of, e.g., robot motion. The use of image descriptors and features calculated for natural images captured by cameras in robotics, in both "out-hand" and "in-hand" solutions, may cause more problems than the artificial images typically used for the verification of general-purpose computer vision algorithms, leading to a so-called "reality gap". This Special Issue on "Applications of Computer Vision in Automation and Robotics" brings together research communities interested in computer and machine vision from various departments and universities, focusing on automation and robotics as well as computer science.

The paper [1] addresses the problem of image registration in printing defect inspection systems and the choice of appropriate feature regions. The proposed automatic feature region searching algorithm for printed image registration utilizes contour point distribution information and the edge gradient direction, and it may also be applied to online printing defect detection.

The next contribution [2] presents a camera-based calibration method for optical see-through headsets used in augmented reality applications, including consumer-level systems. The proposed fast automatic offline calibration method builds on standard camera calibration and computer vision techniques to estimate the projection parameters of the display model for a generic camera position. These parameters are then refined using planar homography, and the proposed method has been validated with a dedicated MATLAB application.

The analysis of infrared images for pedestrian detection at night is considered in the paper [3], where a method based on an attention-guided encoder-decoder convolutional neural network is proposed to extract discriminative multi-scale features from low-resolution and noisy infrared images. The authors have validated their method using two pedestrian video datasets: Keimyung University (KMU) and Computer ...