In this paper, we consider approaches to the recognition of the sign languages used by deaf people in Russia and India. We propose a structure for a recognition system for individual gestures based on the identification of their five components: configuration, orientation, localization, movement, and non-manual markers. We review methods applied to the recognition of both individual gestures and continuous Indian and Russian sign language, and in particular consider the problem of building sign language corpora as well as training datasets. We note the similarity of certain individual gestures in Russian and Indian sign languages and specify the structure of a local dataset of static gestures of Russian sign language. For this dataset, 927 video files with static one-handed gestures were collected and converted to JSON using the OpenPose library. After analyzing the 21 points of the skeletal model of the right hand, the obtained reliability of the detected points was 0.61, which we found insufficient. The recognition of individual gestures, and of sign speech in general, is complicated by the need to accurately track the various components of gestures, which are performed quite quickly and are subject to occlusions of the hands and face. To address this problem, we further propose an approach based on the development of a biosimilar neural network that processes visual information similarly to the human cerebral cortex: identification of lines, construction of edges, detection of movement, identification of geometric shapes, and determination of the direction and speed of object movement. We are currently testing the biosimilar neural network proposed by A.V. Kugaevskikh on video files from the Russian sign language dataset.
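As an illustration of the reliability estimate mentioned above, the per-keypoint confidence scores that OpenPose writes to its JSON output can be averaged over the 21 right-hand points. The sketch below assumes the standard OpenPose output layout, in which `hand_right_keypoints_2d` is a flat list of 21 (x, y, confidence) triples; the sample record is synthetic, not taken from the actual dataset.

```python
def right_hand_confidences(record: dict) -> list:
    """Extract the 21 per-keypoint confidence scores for the right hand
    from one OpenPose JSON record (flat [x, y, c, x, y, c, ...] layout)."""
    flat = record["people"][0]["hand_right_keypoints_2d"]
    return flat[2::3]  # every third value is a confidence score

def mean_confidence(record: dict) -> float:
    """Average keypoint confidence, a simple proxy for detection reliability."""
    conf = right_hand_confidences(record)
    return sum(conf) / len(conf)

# Synthetic example record: 21 triples with illustrative coordinates
# and a uniform confidence of 0.61 (not real dataset values).
sample = {
    "people": [{
        "hand_right_keypoints_2d": [
            v for i in range(21) for v in (100.0 + i, 200.0 + i, 0.61)
        ]
    }]
}

print(round(mean_confidence(sample), 2))  # 0.61
```

Averaging this value over all 927 files would yield the kind of aggregate reliability figure reported for the dataset.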