“…The sensors learn the geometric features, appearance, and visual context of landmark scenes in unknown environments. Each observation is a $360^{\circ}$ set of points measured during one sensor rotation, and the SLAM algorithm must predict the transformation $T = [\Delta s, \Delta\theta]$, where $\Delta s$ is the distance traversed by the UAV as its coordinates change from $s_1 = [s_{1x}, s_{1y}, s_{1z}]$ to $s_2 = [s_{2x}, s_{2y}, s_{2z}]$, and $\Delta\theta$ is the orientation difference between two consecutive sensor scans of the UAV states [75]. The displacement of the UAV relies on mapping the states to $T = [\Delta s, \Delta\theta]$ at time $n$ together with the UAV pose $(q_x, q_y, \theta_n)$ [74,129].…”
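To make the relative transformation concrete, the following is a minimal sketch of computing $T = [\Delta s, \Delta\theta]$ from two consecutive UAV states; the function name, inputs, and heading-based angle difference are illustrative assumptions, not the method of the cited works.

```python
import numpy as np

def relative_transform(s1, s2, theta1, theta2):
    """Sketch: estimate T = [delta_s, delta_theta] between two consecutive
    UAV states (illustrative, not the algorithm from [74,75,129]).

    s1, s2         : 3D positions [x, y, z] at consecutive scans
    theta1, theta2 : headings in radians at consecutive scans
    """
    # Delta s: Euclidean distance traversed between the two states
    delta_s = float(np.linalg.norm(np.asarray(s2) - np.asarray(s1)))
    # Delta theta: heading change wrapped to [-pi, pi)
    delta_theta = (theta2 - theta1 + np.pi) % (2.0 * np.pi) - np.pi
    return delta_s, delta_theta

# Example: the UAV moves 5 m in the xy-plane and turns 90 degrees
ds, dth = relative_transform([0.0, 0.0, 10.0], [3.0, 4.0, 10.0], 0.0, np.pi / 2)
print(ds, dth)  # 5.0, ~1.5708
```

In a full SLAM pipeline, $T$ would instead be estimated by registering the two $360^{\circ}$ point sets (e.g., by scan matching) rather than from ground-truth states; the sketch only illustrates what the predicted quantities represent.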