In this work, a new approach to gesture recognition using the properties of the Spherical Self-Organizing Map (SSOM) is investigated. Bounded mapping of data onto an SSOM creates a powerful tool not only for visualization but also for modeling the spatiotemporal information of gesture data. The SSOM allows for the automated decomposition of a variety of gestures into a set of distinct postures. This decomposition naturally organizes the set into a spatial map that preserves associations between postures, upon which we formalize the notion of a gesture as a trajectory through the learned posture space. Trajectories from different gestures may share postures; however, the path traversed through posture space is relatively unique to each gesture. The distinct patterns of posture transitions occurring within a gesture trajectory are used to classify new, unknown gestures. Four mechanisms for detecting the occurrence of the trajectory of an unknown gesture are proposed and evaluated on two data sets, involving both hand gestures (a public sign language database) and full-body gestures (a Microsoft Kinect database collected in-house), demonstrating the effectiveness of the proposed approach.

Acknowledgement

I would like to express my sincere gratitude to Prof. Matthew Kyan, who has guided me through my master's degree and has provided a great deal of support and help during my studies. To all my family and friends for their support and understanding. To all my colleagues and friends from the Ryerson Multimedia Lab, who have helped me by sharing their knowledge with me; especially to Adrian Bulzacki, for collecting and providing the Microsoft Kinect data set used in this thesis. To Naimul Mefraz Khan, for his support on the topic of Spherical Self-Organizing Maps. Finally, to Chun-Hao Wang, for providing network access to the Microsoft Kinect data set.