Most of the information we consume as a society is obtained over the Web. News, often from questionable sources, spreads online, as do election campaigns; calls for (collective) action propagate with unforeseen speed and intensity. All of these activities have argumentation at their core, and the conveyed content is often strategically selected or rhetorically framed. The responsibility for critically analyzing arguments is thus tacitly transferred to the content consumer, who is often neither prepared for the task nor aware of the responsibility. The ExpLAIN project aims at making the structure and reasoning of arguments explicit, not only for humans, but also for Robust Argumentation Machines endowed with language understanding capacity. Our vision is a system that can deeply analyze argumentative text: one that identifies arguments and counter-arguments and reveals their internal structure, conveyed content, and reasoning. A particular challenge for such a system is to uncover the implicit knowledge that many arguments rely on; explicating the complete reasoning of an argument requires human background knowledge and reasoning capacity. This article presents ongoing research in the ExpLAIN project that aims to turn the vision of such a system into a tangible goal. We introduce the problems and challenges we need to address, and present the progress we have achieved so far by applying advanced natural language and knowledge processing methods. Our approach puts particular focus on leveraging available sources of structured and unstructured background knowledge, automatically extending such knowledge, uncovering implicit content, and applying reasoning techniques suitable for informal, everyday argumentation.