Sensory deprivation has long been known to cause hallucinations or “phantom” sensations, the most common of which is tinnitus induced by hearing loss, which affects 10–20% of the population. An observable hearing loss, causing auditory sensory deprivation over a band of frequencies, is present in over 90% of people with tinnitus. Existing plasticity-based computational models of tinnitus are usually driven by homeostatic mechanisms and are tuned to fit phenomenological findings. Here, we use an objective-driven learning algorithm to model an early auditory processing neuronal network, e.g., in the dorsal cochlear nucleus. The learning algorithm maximizes the network’s output entropy by learning the feed-forward and recurrent interactions in the model. We show that the connectivity patterns and responses learned by the model display several hallmarks of early auditory neuronal networks. We further demonstrate that attenuation of peripheral inputs drives the recurrent network towards its critical point and into a tinnitus-like state. In this state, the network activity resembles responses to genuine inputs even in the absence of external stimulation; namely, it “hallucinates” auditory responses. These findings demonstrate how objective-driven plasticity mechanisms that normally act to optimize the network’s input representation can also elicit pathologies such as tinnitus as a result of sensory deprivation.

Author summary

Tinnitus or “ringing in the ears” is a common pathology. It may result from mechanical damage in the inner ear, as well as from certain drugs such as salicylate (aspirin). A common approach toward a computational model of tinnitus is to use a neural network model with inherent plasticity applied to early auditory processing, where the input layer models the auditory nerve and the output layer models a nucleus in the brain stem. However, most of the existing computational models are phenomenological in nature, driven by a homeostatic principle. Here, we use an objective-driven learning algorithm based on information theory to learn the feed-forward interactions between the layers, as well as the recurrent interactions within the output layer. Through numerical simulations of the learning process, we show that attenuation of peripheral inputs drives the network into a tinnitus-like state, in which the network activity resembles responses to genuine inputs even in the absence of external stimulation; namely, it “hallucinates” auditory responses. These findings demonstrate how plasticity mechanisms that normally act to optimize network performance can also lead to undesired outcomes, such as tinnitus, as a result of reduced peripheral hearing.
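To make the entropy-maximization idea concrete, the following is a minimal sketch of the general principle, in the spirit of the Bell–Sejnowski "infomax" rule: feed-forward weights of a sigmoidal output layer are adapted by gradient ascent on the output entropy. This is an illustration under simplifying assumptions, not the algorithm used in this work; in particular, the recurrent interactions within the output layer that the model also learns are omitted here, and all variable names and parameter values are hypothetical.

```python
# Illustrative sketch only: infomax-style entropy maximization of a
# feed-forward sigmoidal layer (Bell & Sejnowski, 1995).  The paper's model
# additionally learns recurrent interactions, which are not shown here.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 8, 8          # square weight matrix so the infomax gradient applies
eta, n_steps = 0.01, 5000   # learning rate and number of input presentations

W = rng.normal(scale=0.1, size=(n_out, n_in))  # feed-forward weights

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

for _ in range(n_steps):
    x = rng.normal(size=n_in)   # surrogate "auditory nerve" input sample
    y = sigmoid(W @ x)          # output-layer activity
    # Gradient of the output entropy for y = sigmoid(W x) with invertible W:
    #   dH/dW = (W^T)^{-1} + (1 - 2y) x^T
    grad = np.linalg.inv(W.T) + np.outer(1.0 - 2.0 * y, x)
    W += eta * grad
```

In this toy setting the rule drives the outputs toward a high-entropy (maximally informative) representation of the inputs; the question studied in the paper is how such an objective behaves when the peripheral inputs are attenuated.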