Group activity recognition in video is a complex task: a model must recognise the actions of every individual in the scene as well as the complex interactions between them. Recent studies suggest that the best performance is achieved by tracking each person individually and feeding the resulting sequence of poses, cropped images, or optical flow into a model, which recognises each person's action before the individual results are merged into a group activity class. However, all previous models rely heavily on high-quality tracking and have only been evaluated with ground-truth tracking information. In practice, obtaining highly reliable tracking for every individual in a group activity video is almost impossible. We introduce a deep learning-based group activity recognition approach, the Rendered Pose-based Group Activity Recognition System (RePGARS), designed to tolerate unreliable tracking and pose information. Experimental results confirm that RePGARS outperforms all group activity recognition algorithms we tested that do not use ground-truth detection and tracking information.