We present techniques and a system for synthesizing views for video teleconferencing between small groups. In place of replicating one-to-one systems for each pair of users, we create a single unified display of the remote group. Instead of performing dense 3D scene computation, we use more cameras and trade off storage and hardware for computation. While it is expensive to directly capture a scene from all possible viewpoints, we have observed that the participants' viewpoints usually remain at a constant height (eye level) during video teleconferencing. Therefore, we can restrict the possible viewpoints to lie within a virtual plane without sacrificing much of the realism, and in doing so we significantly reduce the number of required cameras. Based on this observation, we have developed a technique that uses light-field style rendering to guarantee the quality of the synthesized views, using a linear array of cameras with a life-sized, projected display. Our full-duplex prototype system between Sandia National Laboratories, California and the University of North Carolina at Chapel Hill has been able to synthesize photo-realistic views at interactive rates, and has been used to video conference during regular meetings between the sites.
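The following is a minimal sketch, not the authors' implementation, of the core idea behind light-field-style view synthesis from a one-dimensional camera array: because the virtual viewpoint is constrained to the eye-level plane along the camera baseline, a novel view can be approximated by blending the two physical cameras nearest to the desired horizontal position. The camera spacing, image resolution, and simple per-image cross-fade used here are assumptions for illustration; a full light-field renderer would blend per ray rather than per image.

```python
# Sketch of light-field-style view interpolation along a linear camera array.
# All constants below are assumed values for illustration only.

import numpy as np

CAMERA_SPACING = 0.15          # assumed distance between adjacent cameras (meters)
NUM_CAMERAS = 8                # assumed number of cameras in the linear array
IMAGE_SHAPE = (480, 640, 3)    # assumed per-camera image resolution


def synthesize_view(images, viewpoint_x):
    """Blend the two cameras nearest to a virtual viewpoint on the array axis.

    images      : list of NUM_CAMERAS uint8 arrays of shape IMAGE_SHAPE
    viewpoint_x : desired horizontal viewpoint position in meters, measured
                  along the camera baseline from camera 0
    """
    # Continuous camera index corresponding to the virtual viewpoint.
    t = np.clip(viewpoint_x / CAMERA_SPACING, 0.0, NUM_CAMERAS - 1.0)
    left = int(np.floor(t))
    right = min(left + 1, NUM_CAMERAS - 1)
    alpha = t - left                      # 0 -> left camera, 1 -> right camera

    # Cross-fade between the two nearest cameras; the weighting mirrors the
    # distance-based blending used in light-field rendering.
    blended = (1.0 - alpha) * images[left].astype(np.float32) \
              + alpha * images[right].astype(np.float32)
    return blended.astype(np.uint8)


if __name__ == "__main__":
    # Synthetic test frames stand in for live captures from the array.
    frames = [np.full(IMAGE_SHAPE, 32 * i, dtype=np.uint8) for i in range(NUM_CAMERAS)]
    view = synthesize_view(frames, viewpoint_x=0.40)
    print(view.shape, view[0, 0])
```

Because the viewpoint is confined to the eye-level plane, the interpolation reduces to this one-dimensional camera selection, which is what allows a modest linear array to stand in for a dense set of capture positions.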