We present a real-time method for computing the mechanical interaction between real and virtual objects in an augmented reality environment. Using model order reduction methods, we estimate the physical behavior of deformable objects in real time, with the precision of a high-fidelity solver but at the frame rate of a video sequence. We merge tools from machine learning, computer vision, and computer graphics in a single application to describe the behavior of deformable virtual objects, allowing the user to interact with them in a natural way. Three examples are provided to test the performance of the method.

KEYWORDS
model order reduction, nonlinear materials, real-time interaction, solids contact

1 INTRODUCTION

New technologies are bringing better tools to improve the augmented and mixed reality experience. These tools come in the form of both hardware and software. On the hardware side, we see the development of new devices to display information to the user, such as smartphones or high-resolution virtual reality glasses (e.g., the StereoLabs ZED Mini on Oculus Rift, see https://www.stereolabs.com/zed-mini/setup/rift/). We also notice great efforts in the development of devices to capture information around us, such as RGB-D systems (Microsoft Kinect for Windows, for instance, see https://developer.microsoft.com/en-us/windows/kinect) or stereo cameras, and even systems that combine both the capture and visualization of information, such as Microsoft HoloLens 2 (https://www.microsoft.com/en-us/hololens) or Magic Leap One (see https://www.magicleap.com/magic-leap-one), which allow the capture of the environment, the visualization of virtual objects, and interaction thanks to built-in controls.
From the software point of view, great work has been done to create new environments that reduce the complexity of capturing data, such as the development kits Apple ARKit 3 (https://developer.apple.com/augmentedreality/) or Google ARCore (https://developers.google.com/ar/); other software development kits are more focused on, and integrated into, headsets such as HoloLens. There are also great advances in visualization libraries such as OpenGL (https://www.opengl.org/) and in proprietary libraries like Nvidia CUDA (https://developer.nvidia.com/cuda-zone) or Apple Metal (https://developer.apple.com/metal/), as well as substantial development of new techniques to take robust measurements from a scene, such as ORB-SLAM1 or LSD-SLAM,2 among others.

FIGURE 1 Mixed Reality (MR) as the interaction of three sciences: machine learning, computer graphics, and computer vision [Color figure can be viewed at wileyonlinelibrary.com]

This paper leverages some of these technologies and develops real-time computational mechanics techniques so as to provide mixed reality systems with the ability to seamlessly integrate virtual and physical objects and make them interact according to physical laws. The interest of adding physical realism to the interaction between real and virtual elements in a scene is ...
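The key enabler of real-time performance named in the abstract is model order reduction: high-fidelity simulations are precomputed offline, a low-dimensional basis is extracted from them, and the online solve then runs in that reduced space at interactive rates. The following is a minimal sketch of the classical snapshot-based variant, proper orthogonal decomposition (POD); the snapshot data here is synthetic and all names and tolerances are illustrative, not the paper's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Offline stage: snapshot matrix S (n_dof x n_snapshots) holding
# displacement fields from a high-fidelity deformable-solid solver.
# Here the snapshots are synthetic: 5 dominant modes plus small noise.
n_dof, n_snap = 3000, 60
true_modes = rng.standard_normal((n_dof, 5))
S = true_modes @ rng.standard_normal((5, n_snap))
S += 1e-3 * rng.standard_normal((n_dof, n_snap))

# Truncated SVD of the snapshots yields the POD basis; keep enough
# modes to capture a prescribed fraction of the snapshot "energy".
U, sigma, _ = np.linalg.svd(S, full_matrices=False)
energy = np.cumsum(sigma**2) / np.sum(sigma**2)
k = int(np.searchsorted(energy, 0.999)) + 1
Phi = U[:, :k]  # reduced basis, n_dof x k, with k << n_dof

# Online stage: a new full-order state u is represented by only k
# reduced coordinates q = Phi^T u, so per-frame work is k-dimensional.
u_new = true_modes @ rng.standard_normal(5)
q = Phi.T @ u_new        # project onto the reduced space
u_approx = Phi @ q       # reconstruct the full field for rendering

rel_err = np.linalg.norm(u_new - u_approx) / np.linalg.norm(u_new)
print(f"kept {k} modes of {n_snap}, relative error = {rel_err:.2e}")
```

Because the online stage manipulates only a handful of reduced coordinates instead of thousands of degrees of freedom, the deformation can be updated at video frame rates while retaining the accuracy of the offline high-fidelity solver.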