MindMapping (Buzan and Harrison, 2010) is a well-known note-taking technique that supports learning and studying. MindMaps have traditionally been created manually to present knowledge and concepts in visual form, and there is currently no reliable automated approach for generating MindMaps from natural language text. This work first introduces the MindMap Multi-level Visualization concept, which jointly visualizes and summarizes textual information. The visualization is achieved pictorially across multiple levels using semantic information (i.e., an ontology), while the summarization is achieved by the highest levels, which represent the most abstract information in the text. This work also presents the first automated approach that takes a text input and generates a MindMap visualization from it. The approach can visualize text documents as multi-level MindMaps, in which a high-level MindMap node can be expanded into child MindMaps. The proposed method involves understanding the input text and converting it into an intermediate Detailed Meaning Representation (DMR). The DMR is then visualized in one of two modes, single level or multiple levels, the latter being convenient for larger texts. The generated MindMaps from both modes were evaluated through human-subject experiments performed on Amazon Mechanical Turk with various parameter settings.
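To make the described pipeline concrete, the minimal Python sketch below mimics the flow of text understanding into a DMR followed by single-level or multi-level rendering. All class and function names, the toy sentence-splitting "understanding" step, and the indentation-based rendering are illustrative assumptions for exposition, not the authors' implementation or API.

```python
# Hypothetical sketch of the described pipeline; names and logic are illustrative only.
from dataclasses import dataclass, field
from typing import List


@dataclass
class DMRNode:
    """One concept in an assumed Detailed Meaning Representation (DMR) tree."""
    label: str
    children: List["DMRNode"] = field(default_factory=list)


def text_to_dmr(text: str) -> DMRNode:
    """Stand-in for the text-understanding step. The paper uses semantic
    information (an ontology); here we simply split the text into sentences."""
    root = DMRNode(label="document")
    for sentence in filter(None, (s.strip() for s in text.split("."))):
        root.children.append(DMRNode(label=sentence))
    return root


def render_mindmap(node: DMRNode, max_depth: int = 1, depth: int = 0) -> None:
    """Single-level mode renders only the root's children (max_depth=1);
    multi-level mode expands high-level nodes into child MindMaps."""
    print("  " * depth + "- " + node.label)
    if depth < max_depth:
        for child in node.children:
            render_mindmap(child, max_depth, depth + 1)


if __name__ == "__main__":
    dmr = text_to_dmr("MindMaps visualize text. High-level nodes expand into child maps.")
    render_mindmap(dmr, max_depth=2)  # multi-level mode
```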