To complete a composite shot, the actual actor's motion must be captured and combined with a virtual environment. Because of high production costs and limited post-processing technology, however, this is still largely done by manual labor. The actor performs in a chroma-key studio relying on his or her own imagination, moving while anticipating collisions with, or reactions to, objects that do not yet exist. Later, during CG compositing, if the actor's motion does not match the virtual environment, the original footage may have to be discarded and the scene re-shot. This study proposes and implements a depth-based real-time 3D virtual image composition system to reduce the re-shooting rate, shorten production time, and lower production cost. Because the virtual background, 3D models, and the live actor are composited in real time on set, mutual collisions and reactions can be checked during filming, so an incorrect position or performance by the actor can be corrected immediately.
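The core of such depth-based composition can be understood as a per-pixel depth test: for each pixel, the layer (live actor or virtual scene) whose depth is closer to the camera wins. The following is a minimal NumPy sketch of that idea under the assumption that both layers provide depth maps in the same metric units; the function and array names are illustrative, not taken from the paper.

```python
import numpy as np

def depth_composite(actor_rgb, actor_depth, virtual_rgb, virtual_depth):
    """Per-pixel depth test: keep whichever layer is closer to the camera.

    actor_rgb, virtual_rgb : (H, W, 3) color images
    actor_depth, virtual_depth : (H, W) depth maps in the same units,
    where a smaller value means closer to the camera.
    (All names here are illustrative assumptions.)
    """
    actor_in_front = actor_depth < virtual_depth          # (H, W) boolean mask
    # Broadcast the mask over the color channels and select per pixel.
    return np.where(actor_in_front[..., None], actor_rgb, virtual_rgb)

# Tiny 1x2 example: the actor occludes the left pixel,
# while a virtual object occludes the actor at the right pixel.
actor_rgb     = np.array([[[255, 0, 0], [255, 0, 0]]], dtype=np.uint8)
virtual_rgb   = np.array([[[0, 0, 255], [0, 0, 255]]], dtype=np.uint8)
actor_depth   = np.array([[1.0, 3.0]])   # distance from camera
virtual_depth = np.array([[2.0, 2.0]])

out = depth_composite(actor_rgb, actor_depth, virtual_rgb, virtual_depth)
# left pixel comes from the actor (closer), right pixel from the virtual scene
```

Running this test per frame is what lets occlusions and apparent collisions between the actor and virtual objects be judged live on set, rather than discovered later in post-production.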