Keywords: explainable artificial intelligence (XAI), human/computer interaction (HCI), tasking and adapting agents, visual question answering (VQA)
This letter provides a retrospective analysis of our team’s research
performed under the DARPA Explainable Artificial Intelligence (XAI)
project. We began by exploring salience maps, English sentences, and
lists of feature names for explaining the behavior of
deep-learning-based discriminative systems, especially visual question
answering systems. We found limited positive effects from statically
presenting explanations along with system answers, for example, when
teaching people to identify bird species. Many XAI performers were
achieving better results when users interacted with explanations. This
motivated us to evolve the notion of explanation as an interactive
medium, usually between humans and AI systems but sometimes within the
software system itself. We realized that interacting via
explanations could enable people to task and adapt ML agents. We added
affordances for editing explanations and modified the ML system to act
in accordance with the edits to produce an interpretable interface to
the agent. Through this interface, editing an explanation can adapt a
system’s performance to new or modified purposes. This deep tasking,
wherein the agent knows both its objective and the explanation for
that objective, will be critical to enabling higher levels of autonomy.
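A minimal toy sketch of this idea, written in Python and not drawn from the actual system, might expose the agent's objective as named feature weights that a person can read and edit; the agent then acts in accordance with whatever the edited explanation currently says, so an edit re-tasks the agent without retraining. The class names, feature names, and weighting scheme below are illustrative assumptions only.

```python
# Toy sketch: an editable explanation serving as an interpretable interface
# to an agent. Hypothetical construction for illustration, not the authors' system.
from dataclasses import dataclass, field
from typing import Dict


@dataclass
class EditableExplanation:
    """The agent's objective, stated as named feature weights a person can read and edit."""
    feature_weights: Dict[str, float] = field(default_factory=dict)

    def edit(self, feature: str, weight: float) -> None:
        # A human adjusts the stated objective, e.g., to de-emphasize a feature.
        self.feature_weights[feature] = weight


class Agent:
    """Acts in accordance with whatever the current (possibly edited) explanation says."""

    def __init__(self, explanation: EditableExplanation):
        self.explanation = explanation

    def score(self, observation: Dict[str, float]) -> float:
        # Behavior is driven directly by the interpretable objective, so an
        # edit to the explanation immediately changes what the agent does.
        return sum(weight * observation.get(feature, 0.0)
                   for feature, weight in self.explanation.feature_weights.items())


# Usage: the original tasking emphasizes "beak_shape"; editing the explanation
# re-tasks the agent toward "wing_pattern" without any retraining.
explanation = EditableExplanation({"beak_shape": 1.0, "wing_pattern": 0.2})
agent = Agent(explanation)
print(agent.score({"beak_shape": 0.9, "wing_pattern": 0.5}))  # under the original tasking

explanation.edit("beak_shape", 0.1)
explanation.edit("wing_pattern", 1.0)
print(agent.score({"beak_shape": 0.9, "wing_pattern": 0.5}))  # after the edit
```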