Contemporary face hallucination (FH) models exhibit considerable ability to reconstruct high-resolution (HR) details from low-resolution (LR) face images. This ability is commonly learned from examples of corresponding HR-LR image pairs, created by artificially down-sampling the HR ground-truth data. This down-sampling (or degradation) procedure not only defines the characteristics of the LR training data, but also determines the type of image degradations the learned FH models are eventually able to handle. If the image characteristics encountered with real-world LR images differ from those seen during training, FH models are still expected to perform well, but in practice may not produce the desired results. In this paper we study this problem and explore the bias introduced into FH models by the characteristics of the training data. We systematically analyze the generalization capabilities of several FH models in various scenarios in which the degradation function does not match the training setup, and conduct experiments with synthetically degraded as well as real-life low-quality images. We make several interesting findings that provide insight into existing problems with FH models and point to future research directions.
Artificial perturbation of local neural activity in the high-level visual cortex alters visual perception. Quantitative characterization of these perceptual alterations holds the key to the development of a mechanistic theory of visual perception [1]. Historically, though, the complexity of these perceptual alterations, as well as their subjective nature, has rendered them difficult to quantify. Here, we trained macaque monkeys to detect and report brief optogenetic impulses delivered to their inferior temporal cortex, the high-level visual area associated with object recognition, via an implanted LED array [2]. We assumed that the animals perform this task by detecting the stimulation-induced alterations of the contents of their vision. We required the animals to fixate on a set of images during the task and used a machine-learning framework to physically perturb the viewed images in order to trick the animals into thinking they were being stimulated. Through a high-throughput iterative process of behavioral data collection, we developed highly specific perturbed images, perceptograms, looking at which would trick the animals into feeling cortically stimulated. Perceptograms provide parametric and pictorial evidence of the visual hallucinations induced by cortical stimulation. Objective characterization of stimulation-induced perceptual events, beyond its theoretical value, opens the door to better visual prosthetic devices as well as to a deeper understanding of visual hallucinations in mental disorders.