Many researchers have argued that combining multiple models for forecasting gives better estimates than single time series models. For example, a hybrid architecture comprising an autoregressive integrated moving average (ARIMA) model and a neural network is a well-known technique that has recently been shown to give better forecasts by taking advantage of each model's strengths. However, this assumption carries the danger of underestimating the relationship between the series' linear and non-linear components, particularly by assuming that an individual forecasting technique is appropriate, say, for modeling the residuals. In this paper, we show that such combinations do not necessarily outperform the individual forecasts. On the contrary, we show that the combined forecast can underperform significantly compared with its constituents. We demonstrate this using nine data sets and autoregressive linear and time-delay neural network models.
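Hybrids of this kind are typically built by letting ARIMA capture the linear structure and a neural network model the residuals. The sketch below illustrates that scheme under stated assumptions: it uses statsmodels and scikit-learn, the ARIMA order, lag count, and network size are illustrative, and an MLP on lagged residuals stands in for a time-delay neural network; it is not the paper's exact configuration.

```python
# Minimal sketch of an ARIMA + neural network hybrid forecast.
# Assumptions: statsmodels/scikit-learn, illustrative orders and sizes,
# MLP on lagged residuals as a stand-in for a time-delay neural network.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.neural_network import MLPRegressor

def hybrid_forecast(series, order=(1, 0, 0), n_lags=3):
    # 1) Linear component: fit ARIMA and keep its in-sample residuals.
    arima = ARIMA(series, order=order).fit()
    residuals = arima.resid

    # 2) Non-linear component: regress each residual on its n_lags predecessors.
    X = np.column_stack([residuals[i:len(residuals) - n_lags + i]
                         for i in range(n_lags)])
    y = residuals[n_lags:]
    nn = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000,
                      random_state=0).fit(X, y)

    # 3) Combined one-step-ahead forecast = linear forecast + predicted residual.
    linear_next = arima.forecast(steps=1)[0]
    resid_next = nn.predict(residuals[-n_lags:].reshape(1, -1))[0]
    return linear_next + resid_next

rng = np.random.default_rng(0)
toy = np.sin(np.linspace(0, 20, 200)) + rng.normal(scale=0.2, size=200)  # synthetic series
print(hybrid_forecast(toy))
```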
The superior colliculus (SC) is a neural structure found in mammalian brains that acts as a sensory hub through which visual, auditory and somatosensory inputs are integrated. This integration is used to orient the eye's fovea towards a salient stimulus, independently of the sensory modality in which it was detected. A recently observed aspect of this integration is that it is moderated by cortical feedback. Because the SC implements a key sensorimotor function, integrating low-level sensory information under cortical moderation, studying it may enable us to understand how natural systems prioritize sensory computation in real time, possibly as a result of task-dependent feedback. From a computational perspective, capturing this combination of bottom-up processing with top-down moderation in a model is therefore appealing, and it is on such a biological model that this paper focuses. We present, for the first time, a behavioral model of the SC that combines the development of unisensory and multisensory representations with simulated cortical feedback. Our model demonstrates how unisensory maps can be aligned and integrated automatically into a multisensory representation. The results show that the model captures the basic properties of the SC; in particular, they show the influence of the simulated cortical feedback on multisensory responses, reproducing the multisensory enhancement and suppression phenomena observed in biological studies. This suggests that our unified competitive learning approach can successfully represent spatial processing that is moderated by task, and hence could be applied more widely to other task-dependent processing.
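The enhancement and suppression effects referred to above are conventionally quantified in the SC literature as the percentage change of the combined (multisensory) response relative to the strongest unisensory response. The small helper below illustrates that standard index; the paper's own measure may differ in detail.

```python
# Standard multisensory enhancement index (percentage change of the combined
# response relative to the best unisensory response); illustrative helper only.
def multisensory_enhancement(combined_response, visual_response, auditory_response):
    """Return % enhancement (>0) or suppression (<0) of the combined response."""
    best_unisensory = max(visual_response, auditory_response)
    return 100.0 * (combined_response - best_unisensory) / best_unisensory

print(multisensory_enhancement(0.9, 0.5, 0.4))   # enhancement: +80%
print(multisensory_enhancement(0.3, 0.5, 0.4))   # suppression: -40%
```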
The ability to combine sensory information is an important attribute of the brain, and multisensory integration in natural systems suggests that a similar approach may be valuable in artificial systems. Multisensory integration is exemplified in mammals by the superior colliculus (SC), which combines visual, auditory and somatosensory stimuli to shift gaze. However, although we have a good understanding of the overall architecture of the SC, we do not yet fully understand the process of integration. While a number of computational models of the SC have been developed, there has been no larger-scale implementation that can help determine how the senses are aligned and integrated across the superficial and deep layers of the SC. In this paper we describe a prototype implementation of the mammalian SC consisting of self-organizing maps linked by Hebbian connections, modeling visual and auditory processing in the superficial and deep layers. The model is trained on artificial auditory and visual stimuli, with testing demonstrating the formation of appropriate spatial representations, which compare well with biological data. Subsequently, we train the model on multisensory stimuli to test whether the unisensory maps can be combined. The results show the successful alignment of the sensory maps to form a multisensory representation. We conclude that, while simple, the model lends itself to further exploration of integration, which may give insight into whether such modeling is of computational benefit.
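As a rough illustration of the architecture described in this abstract, the toy sketch below trains two one-dimensional self-organizing maps on visual-like and auditory-like stimuli and links them with a Hebbian weight matrix. The map sizes, learning rule details, and Gaussian stimulus encoding are assumptions made for illustration, not the paper's implementation.

```python
# Toy sketch: two unisensory self-organizing maps linked by Hebbian connections.
# All parameters and the stimulus encoding are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
N_IN, N_MAP, ITER = 20, 15, 2000

def encode(azimuth):
    # Gaussian population code over N_IN input units for an azimuth in [0, 1].
    centres = np.linspace(0.0, 1.0, N_IN)
    return np.exp(-((centres - azimuth) ** 2) / (2 * 0.05 ** 2))

def train_som(stimuli, lr=0.3, sigma=2.0):
    # Plain 1-D Kohonen map: move the winner and its neighbours towards each input.
    w = rng.random((N_MAP, N_IN))
    for x in stimuli:
        winner = np.argmin(np.linalg.norm(w - x, axis=1))
        dist = np.abs(np.arange(N_MAP) - winner)
        h = np.exp(-dist ** 2 / (2 * sigma ** 2))
        w += lr * h[:, None] * (x - w)
    return w

def activation(w, x):
    # Soft activation: closer prototypes respond more strongly.
    a = np.exp(-np.linalg.norm(w - x, axis=1))
    return a / a.sum()

# Train the two unisensory maps on stimuli drawn from the same azimuths.
azimuths = rng.random(ITER)
vis_som = train_som([encode(a) for a in azimuths])
aud_som = train_som([encode(a) for a in azimuths])

# Hebbian linkage: strengthen connections between units that fire together
# when the visual and auditory stimuli share a location.
hebb = np.zeros((N_MAP, N_MAP))
for a in rng.random(500):
    v, u = activation(vis_som, encode(a)), activation(aud_som, encode(a))
    hebb += np.outer(v, u)          # simple Hebbian update
hebb /= hebb.max()

# A visual stimulus now predicts where the aligned auditory map should respond.
v = activation(vis_som, encode(0.7))
print("predicted auditory unit:", np.argmax(hebb.T @ v))
```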