Understanding intent is an important aspect of communication among people and an essential component of the human cognitive system. This capability is particularly relevant in situations that involve collaboration among agents or the detection of potentially threatening situations. In this paper, we propose an approach that allows a robot to detect the intentions of others based on experience acquired through its own sensory-motor capabilities, and then to use this experience while taking the perspective of the agent whose intent is to be recognized. Our method uses a novel formulation of Hidden Markov Models designed to model a robot's experience and interaction with the world. The robot's capability to observe and analyze the current scene employs a novel vision-based technique for target detection and tracking, based on a non-parametric recursive modeling approach. We validate this architecture with a physically embedded robot, detecting the intent of several people performing various activities.
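The intent-recognition scheme the abstract describes can be illustrated with a minimal sketch: one Hidden Markov Model per candidate intent, with the observed behavior scored under each model via the forward algorithm and the highest-likelihood intent selected. The two-state models, the "approach"/"avoid" intents, and the distance-change observation symbols below are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def forward_log_likelihood(pi, A, B, obs):
    """Scaled forward algorithm: log P(obs | HMM with prior pi,
    transitions A, emissions B)."""
    alpha = pi * B[:, obs[0]]           # joint prob. of state and first symbol
    c = alpha.sum()
    log_lik = np.log(c)
    alpha /= c                          # rescale to avoid numerical underflow
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # propagate one step, then emit
        c = alpha.sum()
        log_lik += np.log(c)
        alpha /= c
    return log_lik

# Hypothetical two-state models for two intents, observing coarse
# distance-change symbols: 0 = closer, 1 = unchanged, 2 = farther.
pi = np.array([0.8, 0.2])
A = np.array([[0.9, 0.1],
              [0.2, 0.8]])
B_approach = np.array([[0.7, 0.2, 0.1],   # "approach" mostly emits "closer"
                       [0.4, 0.4, 0.2]])
B_avoid = np.array([[0.1, 0.2, 0.7],      # "avoid" mostly emits "farther"
                    [0.2, 0.4, 0.4]])

obs = [0, 0, 1, 0]  # tracked target mostly moving closer
score_approach = forward_log_likelihood(pi, A, B_approach, obs)
score_avoid = forward_log_likelihood(pi, A, B_avoid, obs)
# The intent whose model assigns the higher likelihood is selected.
```

In practice each intent model would be trained on the robot's own sensory-motor experience, and the observation symbols would come from the tracker rather than being hand-coded.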
Fluorescein angiography (FA) is a procedure used to image the vascular structure of the retina and requires the injection of an exogenous dye with potential adverse side effects. Currently, there is only one alternative non-invasive system capable of visualizing retinal vasculature, based on optical coherence tomography (OCT) technology and called OCT angiography (OCTA); however, due to its cost and limited field of view, OCTA is not widely used. Retinal fundus photography is a safe imaging technique used for capturing the overall structure of the retina. To visualize retinal vasculature without the need for FA, in a cost-effective, non-invasive, and accurate manner, we propose a deep learning conditional generative adversarial network (GAN) capable of producing FA images from fundus photographs. The proposed GAN produces anatomically accurate angiograms, with fidelity similar to FA images, and significantly outperforms two other state-of-the-art generative algorithms ($$p<.001$$ and $$p<.0001$$). Furthermore, evaluations by experts show that our proposed model produces FA images of such high quality that they are indistinguishable from real angiograms. As the first application of artificial intelligence and deep learning to this medical image translation task, our model, by employing a theoretical framework capable of establishing a shared feature space between the two domains (i.e., funduscopy and fluorescein angiography), provides an unrivaled way to translate images from one domain to the other.
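A conditional GAN for this kind of image-to-image translation is typically trained with an adversarial term plus a pixel-wise reconstruction term, in the style of pix2pix. The sketch below shows only that combined objective on dummy arrays; the function names, the patch-probability inputs, and the L1 weight (the common pix2pix default of 100) are assumptions for illustration, not details from the paper.

```python
import numpy as np

def bce(pred, target):
    """Binary cross-entropy on discriminator probabilities in (0, 1)."""
    eps = 1e-7
    pred = np.clip(pred, eps, 1 - eps)
    return -np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred))

def discriminator_loss(d_real, d_fake):
    """D should label real (fundus, FA) pairs 1 and generated pairs 0."""
    return 0.5 * (bce(d_real, np.ones_like(d_real))
                  + bce(d_fake, np.zeros_like(d_fake)))

def generator_loss(d_fake, fake_fa, real_fa, lam=100.0):
    """Adversarial term (fool D into predicting 1) plus a
    lambda-weighted L1 term pulling the generated angiogram
    toward the ground-truth FA image."""
    adv = bce(d_fake, np.ones_like(d_fake))
    l1 = np.mean(np.abs(fake_fa - real_fa))
    return adv + lam * l1

# Toy check: with identical reconstruction error, a generator whose
# outputs the discriminator rates more "real" gets a lower loss.
real_fa = np.random.rand(8, 8)
g_fooling = generator_loss(np.array([0.9]), real_fa, real_fa)
g_caught = generator_loss(np.array([0.1]), real_fa, real_fa)
```

The L1 term is what pushes the generated angiograms toward anatomical accuracy; the adversarial term pushes them toward the sharpness and texture statistics of real FA images.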