In our society, it is very difficult for hearing-impaired and speech-impaired people to communicate with ordinary people. They communicate using sign languages, which use visually transmitted sign patterns, generally including hand gestures. Because sign languages are difficult to learn and are not universal, there is a communication barrier between the hearing impaired and ordinary people. To break this barrier, a system is required that can convert sign language to voice and vice versa in real time. Here, we propose a real-time two-way system for communication between hearing-impaired and ordinary people, which converts Indian Sign Language (ISL) letters into the equivalent alphabet letters and vice versa. In the proposed system, images of ISL hand gestures are captured using a camera. Image pre-processing is then performed to prepare these images for feature extraction, for which we take a novel approach based on the Canny Edge Detection algorithm. Once the necessary features are extracted from an image, they are matched against the data set, which is classified using a Convolutional Neural Network, and the corresponding text is generated. This text is then converted into voice. Similarly, the voice input of an ordinary person is captured using a microphone and converted into text; this text is matched with the data set and the corresponding sign is generated. This system reduces the communication gap between hearing-impaired and ordinary people. Our method achieves 98% accuracy on the 35 alphanumeric gestures of ISL.