The textual content of a document is supplemented by its graphical information: tables, charts, and images are included to make communication easier. However, this graphical content excludes a section of our population, the visually impaired. With technological advancements, blind users can access documents through text-to-speech software, and even images can be conveyed by reading out their figure captions. Charts and other statistical comparisons, which often carry critical information, are difficult to "read" out this way. The aim of this paper is to analyse the methods available to address this vexatious issue. We survey the state-of-the-art works that perform the exact opposite of graphing tools: understanding charts and extracting the visual encodings from them. We classify these approaches into modality-based, conventional, and deep-learning-based methods, and we also compare and analyse the datasets relevant to this line of study. From this survey, we observe that: (i) existing works in every category still fall short of decoding the full variety of graph types; (ii) among the approaches, deep learning performs remarkably well in localisation and classification, but needs further improvement in reasoning over chart images; and (iii) research on extracting data from vector images is still in progress, while recreating data from raster images has unresolved issues. Based on this study, the various applications of decoding graphs, the open challenges, and future possibilities are also discussed. By exploring current work on chart data extraction, this paper seeks to help researchers in Human-Computer Interaction achieve human-level machine perception of visual data. In this era of visual summarisation of data, AI approaches can automate the extraction of the underlying data and hence provide natural language descriptions that support visually impaired users.
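To make finding (ii) concrete, the sketch below illustrates the localisation stage of a typical deep-learning pipeline: a pretrained object detector proposes bounding boxes for candidate chart elements (bars, axes, legends) in a raster chart image. This is a minimal illustrative sketch, not the method of any specific surveyed work; the function name, the input path, the score threshold, and the choice of a COCO-pretrained Faster R-CNN from torchvision are all assumptions, and real chart-decoding systems fine-tune such detectors on chart-annotated datasets rather than using generic weights.

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

def localise_elements(image_path: str, score_threshold: float = 0.5):
    """Return bounding boxes of detected regions in a chart image."""
    # Generic COCO-pretrained detector used as a stand-in here; surveyed
    # systems would fine-tune on chart-specific annotations instead.
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    model.eval()
    image = to_tensor(Image.open(image_path).convert("RGB"))
    with torch.no_grad():
        # The detector returns one dict per image: "boxes", "labels", "scores".
        output = model([image])[0]
    keep = output["scores"] >= score_threshold
    return output["boxes"][keep].tolist()

# Hypothetical usage: boxes = localise_elements("bar_chart.png")
```

Downstream stages of the surveyed pipelines would then classify each detected region and map its pixel coordinates back to data values via the recovered axis encodings.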