Abstract: The two fields of machine learning and graphical causality arose and have developed separately. However, there is now cross-pollination and increasing interest in both fields to benefit from the advances of the other. In this article, we review fundamental concepts of causal inference and relate them to crucial open problems of machine learning, including transfer and generalization, thereby assaying how causality can contribute to modern machine learning research. This also applies in the opposite direction: …
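The abstract above contrasts statistical association with causal reasoning. As a concrete illustration of that distinction (a toy sketch, not taken from the article; the model, numbers, and variable names are all invented), the following Python snippet simulates a small structural causal model and compares observational conditioning, E[Y | X = 1], with an intervention, E[Y | do(X = 1)]:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Toy structural causal model with a confounder Z -> X and Z -> Y:
#   Z ~ N(0, 1),  X = Z + noise,  Y = 2*X + 3*Z + noise
z = rng.normal(size=n)
x = z + 0.1 * rng.normal(size=n)
y = 2 * x + 3 * z + 0.1 * rng.normal(size=n)

# Observational conditioning: E[Y | X ~= 1] picks up the confounding path via Z.
mask = np.abs(x - 1.0) < 0.05
print("E[Y | X ~= 1]    =", y[mask].mean())   # ~5, biased by the confounder

# Intervention do(X = 1): cut the Z -> X edge and set X by hand.
x_do = np.ones(n)
y_do = 2 * x_do + 3 * z + 0.1 * rng.normal(size=n)
print("E[Y | do(X = 1)] =", y_do.mean())      # ~2, the causal effect of X on Y
```

Because Z influences both X and Y, the two quantities disagree; intervening removes the back-door path. This is the kind of distinction the article relates to transfer and generalization under distribution shift.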
“…[69][70][71][72][73] However, understanding and disentangling their latent spaces remains challenging. [74,75] viAE wins: Among the tested DNNs and across the multiple tests, the viAE provided the best face-shape representations to predict human behavior. With the notable exception of the generalization testing, the simple nonlinear pixelPCA model came close to this performance.…”
Section: Hypothesis-driven Research Using Generative Models (mentioning)
Highlights
• DNNs modeled how humans rate the similarity of familiar faces to random face stimuli
• A generative model controlled the shape and texture features of the face stimuli
• The best DNN predicted human behavior because it used similar face-shape features
• Explaining human behavior from causal features is difficult with naturalistic images
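The highlights describe predicting human similarity ratings from the latent features of a generative face model. As a rough, hypothetical sketch of that analysis pattern (the data, dimensions, and the ridge-regression choice are illustrative assumptions, not the paper's actual pipeline):

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical data: latent shape features of 200 face stimuli taken from a
# generative face model, plus a mean human similarity rating per stimulus.
n_stimuli, n_latents = 200, 50
latents = rng.normal(size=(n_stimuli, n_latents))   # stand-in for model latents
ratings = latents[:, :5].sum(axis=1) + 0.3 * rng.normal(size=n_stimuli)  # synthetic

# Ridge regression from latents to ratings; cross-validated R^2 measures how
# well a given representation predicts human behavior, as in the comparison above.
scores = cross_val_score(Ridge(alpha=1.0), latents, ratings, cv=5, scoring="r2")
print(f"cross-validated R^2: {scores.mean():.3f} +/- {scores.std():.3f}")
```

Running the same probe on each candidate representation (viAE latents, pixelPCA components, and so on) and comparing the cross-validated scores is one simple way to operationalize "best representation to predict human behavior."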
“…However, this teaching has been criticised as detrimental to the understanding that can be gained from techniques such as counterfactual explanations, a specific class of explanation that describes what could have happened had the input to a model been changed in a particular way [129]. Causal representation learning is a by-product of this research activity, and its applications have reached explainable change detection [130, 131].…”
Section: Challenges, Comparisons, and Future Directions for Change Representation Techniques (mentioning)
Fine-grained change detection in sensor data is very challenging for artificial intelligence, though it is critically important in practice. It is the process of identifying differences in the state of an object or phenomenon where the differences are class-specific and difficult to generalise. As a result, many recent technologies that leverage big data and deep learning struggle with this task. This review covers the state-of-the-art methods, applications, and challenges of representation learning for fine-grained change detection. We focus on methods for harnessing the latent metric space of representation learning techniques as an interim output for hybrid human-machine intelligence, and we review methods for transforming and projecting the embedding space so that significant changes can be communicated more effectively and a more comprehensive interpretation of the underlying relationships in sensor data is facilitated. We conduct this research as part of our work towards a method for aligning the axes of the latent embedding space with meaningful real-world metrics, so that the reasoning behind a detected change can be revealed and adjusted in relation to past observations. This is an important topic for the many fields concerned with producing more meaningful and explainable outputs from deep learning, and for providing means of knowledge injection and model calibration in order to maintain user confidence.
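The quote opening this entry appeals to counterfactual explanations: the smallest change to a model's input that would have changed its output. Below is a minimal, self-contained sketch of that idea for a logistic-regression classifier; the objective, step size, and toy data are assumptions made for illustration, not a method from references [129-131]:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy binary classifier on 2-D inputs, labeled by the sign of x0 + x1.
X = rng.normal(size=(500, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
clf = LogisticRegression().fit(X, y)

def counterfactual(x, target=1, lr=0.1, steps=200, lam=0.5):
    """Search for a nearby input that the model classifies as `target`.

    Minimizes -log p(target | x') + lam * ||x' - x||^2 by gradient descent;
    the gradients are closed-form because the model is logistic.
    """
    w, b = clf.coef_[0], clf.intercept_[0]
    x_cf = x.copy()
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(x_cf @ w + b)))   # p(class 1 | x_cf)
        grad = (p - target) * w + 2 * lam * (x_cf - x)
        x_cf -= lr * grad
    return x_cf

x = np.array([-1.0, -0.5])                           # classified as 0
x_cf = counterfactual(x)
print("original:", x, "->", clf.predict([x])[0])
print("counterfactual:", x_cf.round(2), "->", clf.predict([x_cf])[0])
```

The proximity term (weighted by lam) is what makes the answer an explanation: it reports the least the input would have had to differ for the decision to flip.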
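The abstract's central proposal is aligning the axes of a latent embedding space with meaningful real-world metrics. One simple way to realize such an alignment, sketched here under the assumption that paired embeddings and metric measurements are available (all data below are synthetic), is a linear probe whose outputs become the new, metric-valued axes:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Hypothetical setup: 1000 sensor observations embedded in a 32-D latent space
# by some representation learner, plus two measured real-world metrics per
# observation (e.g., biomass and moisture; the names are placeholders).
n, d = 1000, 32
Z = rng.normal(size=(n, d))                       # latent embeddings
W_true = rng.normal(size=(d, 2))
metrics = Z @ W_true + 0.1 * rng.normal(size=(n, 2))

# Fit a linear probe from latents to metrics, then project each embedding onto
# the probe directions so the new axes read directly in metric units.
probe = LinearRegression().fit(Z, metrics)
aligned = probe.predict(Z)                        # axis 0 ~ metric 0, axis 1 ~ metric 1

# A change between two observations can now be reported in real-world terms.
delta = aligned[1] - aligned[0]
print("change along metric axes:", delta.round(3))
```

A linear probe is only the simplest instance of the idea; the point is that once the axes carry units, a detected change can be communicated, inspected, and calibrated by a human, as the abstract argues.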
“…This is the essential problem for artificial systems in emulating cognition in animals. However, there is recent work that employs artificial models of transfer learning [21,22].…”
Section: Abstract Encoding of Sensory Input (mentioning)
Cognition is often defined as a dual process of physical and non-physical mechanisms. This duality originated from past theory on the constituent parts of the natural world. Even though material causation is not an explanation for all natural processes, phenomena at the cellular level of life are modeled by physical causes. These phenomena include explanations for the function of organ systems, including the nervous system and information processing in the cerebrum. This review restricts the definition of cognition to a mechanistic process and surveys studies that support an abstract set of proximate mechanisms. Specifically, the process is approached from a large-scale perspective: the flow of information in a neural system. Study at this scale further constrains the possible explanations for cognition, since the information flow is amenable to theory, unlike a lower-level approach where the problem becomes intractable. These hypotheses include stochastic processes for explaining cognition, along with principles that support an abstract format for the encoded information.
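The quote in this entry mentions artificial models of transfer learning [21, 22]. For readers unfamiliar with the mechanics, here is a minimal, hypothetical sketch: features learned on a source task are reused on a target task that has few labels. The dataset split and the PCA feature extractor are illustrative choices, not the cited approach:

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# "Source" task: digits 0-4. "Target" task: digits 5-9 with only 50 labels.
X, y = load_digits(return_X_y=True)
src = y < 5
Xs, Xt, yt = X[src], X[~src], y[~src]

# Pretrain an unsupervised feature extractor on the source domain only.
pca = PCA(n_components=16).fit(Xs)

# Small labeled target set.
Xt_tr, Xt_te, yt_tr, yt_te = train_test_split(
    Xt, yt, train_size=50, random_state=0, stratify=yt)

# Compare a classifier on transferred features against one trained from raw
# pixels with the same 50 labels.
transfer = LogisticRegression(max_iter=1000).fit(pca.transform(Xt_tr), yt_tr)
scratch = LogisticRegression(max_iter=1000).fit(Xt_tr, yt_tr)

print("with transferred features:", transfer.score(pca.transform(Xt_te), yt_te))
print("raw pixels, same labels:  ", scratch.score(Xt_te, yt_te))
```

The parallel drawn in the quote is that animals likewise reuse previously acquired structure when encoding novel sensory input, rather than learning each task from scratch.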