<p>The use of robots in the fabrication of complex architectural structures is increasing in popularity. However, architectural robotic workflows still require convoluted and time-consuming programming to execute complex fabrication tasks, and robots’ inability to adapt to different environments raises further concerns about the robotic manipulator as a primary construction tool. Four key issues currently limit robotic fabrication for architectural applications: first, an inability to adapt to unknown environments; second, a lack of autonomous decision making; third, an inability to locate, recognise, and then manipulate objects in the operating environment; and fourth, a lack of error detection when a motion instruction conflicts with environmental constraints. This research begins to resolve these critical issues by integrating a feedback loop into a robotic system to improve perception, interaction, and manipulation of objects in the robotic working environment. Attempts to achieve intelligence and autonomy in static robotic systems have seen limited success, and research into these issues has largely originated from the need to adapt existing robotic processes to architectural applications. The work of Gramazio and Kohler Research, specifically ‘on-site mobile fabrication’ and ‘autonomous robotic stone stacking’, represents the current state of the art in intelligent architectural robotic systems and begins to address the issues outlined above. However, the limitations of this work, particularly the lack of perception-controlled grasping, offer an opportunity for this research to develop relevant solutions. This research proposes a system in which blocks of consistent dimensions are randomly distributed within the robotic working environment.
The robot establishes the location and pose (position and orientation) of the blocks through an adaptive inclusion test. The test subsamples a point cloud into a consistent grid, filters points by their height above the ground plane to isolate candidate block surfaces, and matches these surfaces to a CAD model for improved accuracy. Each matched surface yields four points that define the object’s rotation plane and centre point. The robot then uses the centre point and a quaternion describing the rotation to execute motion and grasping instructions, and repeats the perception process until all blocks within the camera frame have been collected and a preprogrammed wall is built. The implementation of a robotic feedback loop in this way demonstrates both the feasibility of the approach and its potential for further development. The research also begins to develop pathways for integrating technologies such as machine learning and deep learning to improve the accuracy, speed, and reliability of perception-controlled robotic systems through learned behaviours.</p>
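The perception pipeline described above can be sketched in a few steps. This is an illustrative reconstruction, not the authors' implementation: the function names (`grid_subsample`, `above_ground`, `block_pose`), the grid cell size, and the height threshold are all hypothetical, and the sketch assumes the four detected corner points of a block's top surface are ordered around the face and that blocks rest flat on the ground plane, so only the rotation about the vertical axis (yaw) varies.

```python
import math


def grid_subsample(points, cell=0.01):
    """Reduce a dense point cloud to a consistent grid by keeping one
    representative point per (x, y) cell. `cell` is a hypothetical size."""
    cells = {}
    for x, y, z in points:
        key = (round(x / cell), round(y / cell))
        cells.setdefault(key, (x, y, z))
    return list(cells.values())


def above_ground(points, ground_z=0.0, min_height=0.02):
    """Discard points at or near the ground plane, leaving only points
    that could belong to block surfaces. The threshold is an assumption."""
    return [p for p in points if p[2] - ground_z >= min_height]


def block_pose(corners):
    """Estimate a grasp pose from the four corner points of a matched
    top surface. Returns (centre, quaternion), with the quaternion in
    (w, x, y, z) order encoding a rotation about the vertical axis."""
    # Centre point: mean of the four surface corners.
    cx = sum(p[0] for p in corners) / 4.0
    cy = sum(p[1] for p in corners) / 4.0
    cz = sum(p[2] for p in corners) / 4.0

    # Yaw: direction of one edge of the rectangular face.
    ex = corners[1][0] - corners[0][0]
    ey = corners[1][1] - corners[0][1]
    yaw = math.atan2(ey, ex)

    # Quaternion for a rotation of `yaw` about the z (vertical) axis.
    q = (math.cos(yaw / 2.0), 0.0, 0.0, math.sin(yaw / 2.0))
    return (cx, cy, cz), q
```

In a full system the subsampled, height-filtered points would be matched against the CAD model before `block_pose` is applied; here that matching step is elided, and the corners are taken as given.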