A number of techniques for interpretability have been presented for deep learning in computer vision, typically with the goal of understanding what the networks have actually learned underneath a given classification decision. However, interpretability for deep video architectures is still in its infancy and we do not yet have a clear concept of how to decode spatiotemporal features. In this paper, we present a study comparing how 3D convolutional networks and convolutional LSTM networks learn features across temporally dependent frames. This is the first comparison of two video models that both convolve to learn spatial features but have principally different methods of modeling time. Additionally, we extend the concept of meaningful perturbation introduced by Fong & Vedaldi (2017) to the temporal dimension, to search for the most meaningful part of a sequence for a classification decision.
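To make the temporal extension concrete, below is a minimal sketch of meaningful perturbation applied along the time axis, in the spirit of Fong & Vedaldi (2017). The classifier `model`, the blur-based reference clip, and all hyperparameters are our assumptions for illustration, not the paper's exact setup: a per-frame mask is optimized so that perturbing as few frames as possible most reduces the class score, and high mask values then point to the most meaningful part of the sequence.

```python
import torch

# Illustrative sketch (assumed setup): `model` is a pretrained video
# classifier taking a (1, T, C, H, W) clip; `blurred_clip` is a heavily
# blurred copy used as the perturbation reference. mask[t] near 1 means
# frame t is replaced by its blurred version.
def temporal_mask(model, clip, blurred_clip, target_class,
                  steps=300, lr=0.05, lam_l1=0.05, lam_tv=0.1):
    T = clip.shape[1]
    mask = torch.zeros(T, requires_grad=True)  # sigmoid(0) = 0.5 start
    opt = torch.optim.Adam([mask], lr=lr)
    for _ in range(steps):
        m = mask.sigmoid()
        perturbed = (1 - m.view(1, T, 1, 1, 1)) * clip \
                    + m.view(1, T, 1, 1, 1) * blurred_clip
        score = torch.softmax(model(perturbed), dim=-1)[0, target_class]
        loss = (score                                      # drop the class score
                + lam_l1 * m.mean()                        # perturb few frames
                + lam_tv * (m[1:] - m[:-1]).abs().mean())  # smooth in time
        opt.zero_grad(); loss.backward(); opt.step()
    return mask.sigmoid().detach()  # high values mark the decisive frames
```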
Deep neural networks (NNs) have been widely utilized in contact-rich manipulation tasks to model complicated contact dynamics. However, NN-based models are often difficult to decipher, which can lead to seemingly inexplicable behaviors and unidentifiable failure cases. In this work, we address the interpretability of NN-based models by introducing kinodynamic images. We propose a methodology that creates images from the kinematic and dynamic data of a contact-rich manipulation task. Our formulation visually reflects the task's state by encoding its kinodynamic variations and temporal evolution. By using images as the state representation, we enable the application of interpretability modules that were previously limited to vision-based tasks. We use this representation to train convolution-based networks, and we extract interpretations of the model's decisions with Grad-CAM, a technique that produces visual explanations. Our method is versatile and can be applied to any classification problem in manipulation that uses synchronous features, visually interpreting which parts of the input drive the model's decisions and distinguishing its failure modes. We evaluate this approach on two examples of real-world contact-rich manipulation: pushing and cutting, with known and unknown objects. Finally, we demonstrate that our method enables both detailed visual inspection of sequences in a task and high-level evaluation of a model's behavior and tendencies. Data and code for this work are available at [1].
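As a rough illustration of the kinodynamic-image idea, the sketch below stacks synchronized kinematic and dynamic signals into a 2D array with time along the columns, so that image-based interpretability tools such as Grad-CAM can be applied downstream. The row layout, per-signal normalization, and band thickening are our assumptions, not necessarily the paper's exact encoding.

```python
import numpy as np

# Hypothetical encoding: each synchronized signal (e.g. end-effector
# position, velocity, contact force) becomes a horizontal band; time runs
# along the columns, so the CNN sees the task's temporal evolution.
def kinodynamic_image(signals, band_height=4):
    bands = []
    for s in signals:                       # each s: 1-D array of length T
        s = np.asarray(s, dtype=np.float32)
        s = (s - s.min()) / (s.max() - s.min() + 1e-8)  # per-signal scaling
        bands.append(np.tile(s, (band_height, 1)))      # thicken for CNNs
    return np.vstack(bands)                 # (n_signals * band_height, T)
```

A CNN classifier trained on such images can then be passed to any off-the-shelf Grad-CAM implementation; the highlighted bands and time spans indicate which signals, and when, drove a given decision.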
Highly automated driving systems are required to make robust decisions in many complex driving environments, such as urban intersections with heavy traffic. In order to make decisions that are as informed and safe as possible, the system must be able to predict the future maneuvers and positions of other traffic agents and provide the decision-making module with information about the uncertainty of those predictions. While Bayesian approaches are a natural way of modeling uncertainty, deep learning-based methods have recently emerged to address this need as well. However, balancing computational and system complexity while also accounting for agent interactions and uncertainties remains difficult. The work presented in this paper proposes a method of predicting the trajectories of other traffic agents in intersections with a single deep learning module, while incorporating uncertainty and the interactions between traffic participants. The accuracy of the generated predictions is tested on a simulated intersection with a high level of interaction between agents, and different methods of incorporating uncertainty are compared. Preliminary results show that the CVAE-based method produces qualitatively and quantitatively better measurements of uncertainty and manages to more accurately assign probability to the space traffic agents will occupy in the future.
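For intuition, here is a minimal CVAE-style trajectory sampler of the kind the abstract alludes to; the layer sizes, names, and decoding scheme are our illustrative assumptions rather than the paper's architecture. Drawing several latent codes per agent yields multiple future trajectories, and their spread serves as a measure of uncertainty over the space the agent may occupy.

```python
import torch
import torch.nn as nn

# Illustrative sketch (assumed sizes/names): `cond` is an encoding of an
# agent's past track and scene context; at test time we sample latents
# z ~ N(0, I) and decode K trajectory hypotheses.
class TrajectoryCVAE(nn.Module):
    def __init__(self, cond_dim=32, z_dim=8, horizon=12):
        super().__init__()
        self.z_dim, self.horizon = z_dim, horizon
        self.decoder = nn.Sequential(
            nn.Linear(cond_dim + z_dim, 64), nn.ReLU(),
            nn.Linear(64, horizon * 2))      # (x, y) offset per future step

    def sample(self, cond, k=20):
        B = cond.shape[0]
        z = torch.randn(B, k, self.z_dim)                 # K latent draws
        c = cond.unsqueeze(1).expand(B, k, -1)
        out = self.decoder(torch.cat([c, z], dim=-1))     # (B, K, horizon*2)
        return out.view(B, k, self.horizon, 2)            # K trajectories
```

Rasterizing the K samples onto an occupancy grid and normalizing is one simple way to turn them into probabilities over the future occupied space, in the sense discussed above.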