The cerebellum is necessary and sufficient for the acquisition and execution of adaptively timed conditioned motor responses following repeated paired presentations of a conditioned stimulus and an unconditioned stimulus. The underlying plasticity depends on the convergence of conditioned- and unconditioned-stimulus signals relayed to the cerebellum by the pontine nuclei and the inferior olive (IO), respectively. Adaptive timing of conditioned responses relies on correctly predicting the onset of the unconditioned stimulus, usually a noxious somatosensory stimulus. We addressed two questions. First, does the IO relay information about the duration of somatosensory stimuli to the cerebellum? Multiple-unit recordings from the IO of anesthetized rats that received periorbital airpuffs of various durations revealed that sustained somatosensory stimuli are invariably transformed into phasic IO outputs. The phasic response was followed by a post-peak depression in IO activity relative to baseline, providing the cerebellum with a highly synchronous signal time-locked to stimulus onset. Second, we examined the involvement of olivocerebellar interactions in this signal transformation. Interrupting cerebello-olivary inhibition by temporary pharmacological inactivation of the cerebellar output nuclei resulted in more sustained (i.e., less synchronous) IO responses to sustained somatosensory stimuli, in which the post-peak depression was replaced by activity elevated above baseline. We discuss the possible roles of olivocerebellar negative-feedback loops and of baseline cerebello-olivary inhibition in shaping the temporal dynamics of the IO's response to somatosensory stimuli, and the consequences of this shaping for cerebellar plasticity and its ability to adapt to varying contexts.
Text-driven image generation methods have recently shown impressive results, allowing casual users to generate high-quality images from textual descriptions. However, similar capabilities for editing existing images remain out of reach. Text-driven image editing methods usually require edit masks, struggle with edits that demand significant visual changes, and cannot easily preserve specific details of the edited region. In this paper we make the observation that image-generation models can be converted into image-editing models simply by fine-tuning them on a single image. We also show that initializing the stochastic sampler with a noised version of the base image before sampling, and interpolating relevant details from the base image after sampling, further improve the quality of the edit. Combining these observations, we propose UniTune, a novel image editing method. UniTune takes as input an arbitrary image and a textual edit description, and carries out the edit while maintaining high fidelity to the input image. UniTune does not require additional inputs, such as masks or sketches, and can perform multiple edits on the same image without retraining. We test our method using the Imagen model in a range of use cases. We demonstrate that it is broadly applicable and can perform a surprisingly wide range of expressive editing operations, including those requiring significant visual changes that were previously impossible.
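The two sampling-time ideas in the abstract, starting from a noised copy of the base image and blending base-image detail back in afterwards, can be sketched generically. This is a minimal illustrative sketch, not the paper's implementation: the function names, the noise level `alpha_bar`, the blending `weight`, and the toy 8x8 image are all assumptions, and the diffusion sampler itself is stubbed out.

```python
import numpy as np

def noised_init(base, alpha_bar, rng):
    # Initialize the sampler from a noised version of the base image
    # (rather than pure noise), so sampling stays close to the original.
    eps = rng.standard_normal(base.shape)
    return np.sqrt(alpha_bar) * base + np.sqrt(1.0 - alpha_bar) * eps

def interpolate_details(edited, base, weight=0.3):
    # After sampling, interpolate detail from the base image back into
    # the edited result to improve fidelity to the input.
    return (1.0 - weight) * edited + weight * base

rng = np.random.default_rng(0)
base = rng.uniform(-1.0, 1.0, size=(8, 8))     # toy "image"
x_t = noised_init(base, alpha_bar=0.5, rng=rng)
# ... here a (single-image fine-tuned) diffusion sampler would denoise x_t,
# conditioned on the textual edit description ...
edited = x_t                                    # placeholder for sampler output
out = interpolate_details(edited, base)
print(out.shape)  # (8, 8)
```

Lower `alpha_bar` (more noise) gives the sampler more freedom to change the image; a higher blending `weight` pulls the result back toward the original, which is the fidelity/expressiveness trade-off the method navigates.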
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations: citations that display the context of the citation and indicate whether the citing article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.