One of the brain's most basic functions is integrating sensory data from diverse sources. This ability raises the question of whether the neural system is computationally capable of intelligently integrating data, not only when sources have known, fixed relative reliabilities but also when it must determine those relative weightings under dynamic conditions and then use the learned weightings to accurately infer information about the world. We suggest that the brain is, in fact, fully capable of performing these tasks in parallel in a single network, and we describe a neurally inspired circuit with this property. Our implementation suggests that evidence learning requires a more complex organization of the network than was previously assumed, in which neurons take on different specialties whose emergence yields the adaptivity seen in human online inference. © 2010 American Institute of Physics. [doi:10.1063/1.3491237]

Our senses work in parallel, passing multimodal data about the same fact or object to the brain. A fundamental question in computational neuroscience is how the brain accommodates sensory data from different sources to form one holistic picture. Cue-integration experiments, in which subjects experience apparently synchronized cross-modal stimuli but one source is displaced from its counterpart, can reveal how the brain handles parallel inputs. The results suggest that the human brain computes a weighted average over the different cues, fitting neatly with Bayesian probability theory: each piece of information is weighted by the degree to which the brain relies on the sense conveying it. Studies now suggest that this relative reliability must also account for changing conditions, such as varying lighting for vision or environmental noise levels for audition. Using fixed reliabilities, as was previously modeled, skews decision making and contradicts both optimality and human studies.
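The weighted-average rule mentioned above can be sketched numerically. Assuming independent Gaussian-noise cues, the Bayes-optimal fused estimate is the inverse-variance weighted average of the individual cue estimates; the function name and the example values below are purely illustrative, not taken from the paper:

```python
import numpy as np

def integrate_cues(estimates, variances):
    """Fuse independent Gaussian cue estimates by inverse-variance weighting.

    Returns the Bayes-optimal combined estimate and its (reduced) variance.
    """
    estimates = np.asarray(estimates, dtype=float)
    variances = np.asarray(variances, dtype=float)
    weights = 1.0 / variances          # reliability = inverse variance
    weights /= weights.sum()           # normalize weights to sum to 1
    fused = float(np.dot(weights, estimates))
    fused_var = float(1.0 / (1.0 / variances).sum())
    return fused, fused_var

# Illustrative cue conflict: vision reports 10.0 deg (variance 1.0),
# audition reports 14.0 deg (variance 4.0).
pos, var = integrate_cues([10.0, 14.0], [1.0, 4.0])
# weights are 0.8 and 0.2, so pos = 10.8 and var = 0.8: the fused
# estimate is pulled toward the more reliable cue, and its variance
# is lower than that of either cue alone.
```

Note that the fused variance is always smaller than the smallest input variance, which is the formal sense in which integration improves on any single cue.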
A question arises as to whether the earlier Bayes-based theory of cue integration still holds given the need to adapt the reliability levels. We propose that it does and introduce a neural network architecture that can both continually learn the reliabilities of sensory data and use them in the integration of cues. While the brain does not necessarily use the method we propose, and there is currently no way to test exactly how the brain performs these computations, our work provides a proof of concept of the brain's ability to handle learning and inference in parallel within one network.
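The idea of learning reliabilities while integrating can be illustrated with a simple sketch, which is our own assumption and not the architecture proposed in the paper: each channel tracks a running estimate of its own noise variance (an exponential moving average of squared deviations), and every fused estimate uses the current inverse-variance weights, so a channel that becomes noisier is automatically down-weighted:

```python
import numpy as np

class AdaptiveIntegrator:
    """Illustrative online cue integrator with learned reliabilities."""

    def __init__(self, n_channels, alpha=0.1):
        self.alpha = alpha                    # learning rate for the running stats
        self.means = np.zeros(n_channels)     # running mean per channel
        self.vars = np.ones(n_channels)       # running noise-variance per channel

    def step(self, samples):
        """Update reliability estimates and return the fused estimate."""
        samples = np.asarray(samples, dtype=float)
        err = samples - self.means
        self.means += self.alpha * err                 # track each channel's mean
        self.vars += self.alpha * (err**2 - self.vars) # track each channel's variance
        w = 1.0 / self.vars                            # inverse-variance weights
        return float(np.dot(w, samples) / w.sum())

# Channel 0 is steady; channel 1 fluctuates strongly, so after a few
# steps its learned variance grows and its weight shrinks.
integ = AdaptiveIntegrator(n_channels=2)
for t in range(50):
    integ.step([0.0, 2.0 * (-1) ** t])
```

This separates the two computations the paper argues can coexist in one network: the variance updates implement learning, while the weighted sum implements inference, and both run at every time step.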