MELIBEA is a directory of institutional open-access policies for research output that uses a composite formula with eight weighted conditions to estimate the "strength" of Open Access mandates (registered in ROARMAP). We analyzed total Web of Science (WoS)-indexed publication output in the years 2011-2013 for 67 institutions where OA was mandated in order to estimate the mandates' effectiveness: How well did the MELIBEA score and its individual conditions predict what percentage of the WoS-indexed articles are actually deposited in each institution's OA repository, and when? We found a small but significant positive correlation (0.18) between the MELIBEA "strength" score and deposit percentage. For three of the eight MELIBEA conditions (deposit timing, internal use, and opt-outs), one value of each was strongly associated with deposit percentage or latency (1: immediate deposit required; 2: deposit required for performance evaluation; 3: unconditional opt-out allowed for the OA requirement but no opt-out for the deposit requirement). When we updated the initial values and weights of the MELIBEA formula to reflect the empirical associations we had found, the score's predictive power for mandate effectiveness doubled (0.36). There are not yet enough OA mandates to test further mandate conditions that might contribute to mandate effectiveness, but the present findings already suggest that it would be productive for existing and future mandates to adopt the three identified conditions so as to maximize their effectiveness, and thereby the growth of OA.
Current empirical research on mapping as a learning strategy presents methodological shortcomings that limit its internal and external validity. The results of our analysis indicate that mapping strategies that make use of feedback and scaffolding have beneficial effects on learning. Accordingly, we see a need for deeper reflection on the representational guidance provided by mapping techniques and tools, taking into account the field of knowledge, the instructional objectives, and the characteristics of learners in health professions education.
A large body of experimental and theoretical work on neural coding suggests that the information stored in brain circuits is represented by time-varying patterns of neural activity. Reservoir computing, where the activity of a recurrently connected pool of neurons is read by one or more units that provide an output response, successfully exploits this type of neural activity. However, the question of system robustness to small structural perturbations, such as failing neurons and synapses, has been largely overlooked. This contrasts with well-studied dynamical perturbations that lead to divergent network activity in the presence of chaos, as is the case for many reservoir networks. Here, we distinguish between two types of structural network perturbations, namely local (e.g., individual synaptic or neuronal failure) and global (e.g., network-wide fluctuations). Surprisingly, we show that while global perturbations have a limited impact on the ability of reservoir models to perform various tasks, local perturbations can produce drastic effects. To address this limitation, we introduce a new architecture where the reservoir is driven by a layer of oscillators that generate stable and repeatable trajectories. This model outperforms previous implementations while being resistant to relatively large local and global perturbations. This finding has implications for the design of reservoir models that capture the capacity of brain circuits to perform cognitively and behaviorally relevant tasks while remaining robust to various forms of perturbations. Further, our work proposes a novel role for neuronal oscillations found in cortical circuits, where they may serve as a collection of inputs from which a network can robustly generate complex dynamics and implement rich computations.
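The reservoir-computing setup described above can be sketched with a minimal echo state network in NumPy. This is an illustrative sketch only: the task (reading out a phase-delayed sine), the network size, and all parameter values are invented for the example, and the paper's oscillator-driven architecture is not reproduced here. The final lines show the kind of local structural perturbation the abstract discusses: silencing a few neurons while reusing the readout trained on the intact reservoir.

```python
import numpy as np

rng = np.random.default_rng(0)

# Minimal echo state network -- an illustrative sketch of reservoir
# computing, not the authors' oscillator-driven model.
N = 200                                          # reservoir neurons
W = rng.normal(0.0, 1.0, (N, N)) / np.sqrt(N)    # recurrent weights
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1
w_in = rng.normal(0.0, 1.0, N)                   # input weights

def run(u, W):
    """Drive the reservoir with a scalar input sequence u; return states."""
    x = np.zeros(N)
    states = np.empty((len(u), N))
    for t, u_t in enumerate(u):
        x = np.tanh(W @ x + w_in * u_t)
        states[t] = x
    return states

# Toy task: read out a phase-delayed sine from a sine-driven reservoir.
T, washout = 600, 100
u = np.sin(0.1 * np.arange(T))
y = np.sin(0.1 * np.arange(T) - 0.5)

X = run(u, W)
# Linear readout fit by least squares, discarding the initial transient.
w_out = np.linalg.lstsq(X[washout:], y[washout:], rcond=None)[0]
base_mse = np.mean((X[washout:] @ w_out - y[washout:]) ** 2)

# Local structural perturbation: a few neurons fail (their outgoing
# synapses are silenced) while the same trained readout is reused.
W_pert = W.copy()
W_pert[:, :5] = 0.0
X_pert = run(u, W_pert)
pert_mse = np.mean((X_pert[washout:] @ w_out - y[washout:]) ** 2)
```

Comparing `base_mse` with `pert_mse` gives a rough sense of how sensitive a fixed readout is to local structural failure, which is the kind of robustness question the abstract raises.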
How many words—and which ones—are sufficient to define all other words? When dictionaries are analyzed as directed graphs with links from defining words to defined words, they reveal a latent structure. Recursively removing all words that are reachable by definition but that do not define any further words reduces the dictionary to a Kernel of about 10% of its size. This is still not the smallest number of words that can define all the rest. About 75% of the Kernel turns out to be its Core, a “Strongly Connected Subset” of words with a definitional path to and from any pair of its words and no word's definition depending on a word outside the set. But the Core cannot define all the rest of the dictionary. The 25% of the Kernel surrounding the Core consists of small strongly connected subsets of words: the Satellites. The size of the smallest set of words that can define all the rest—the graph's “minimum feedback vertex set” or MinSet—is about 1% of the dictionary, about 15% of the Kernel, and part‐Core/part‐Satellite. But every dictionary has a huge number of MinSets. The Core words are learned earlier, more frequent, and less concrete than the Satellites, which are in turn learned earlier, more frequent, but more concrete than the rest of the Dictionary. In principle, only one MinSet's words would need to be grounded through the sensorimotor capacity to recognize and categorize their referents. In a dual‐code sensorimotor/symbolic model of the mental lexicon, the symbolic code could do all the rest through recombinatory definition.
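The pruning step that yields the Kernel can be sketched on a toy dictionary. The six-word dictionary and its format (each defined word mapped to the list of words used in its definition) are invented for illustration; extracting the Core and MinSet would additionally require strongly-connected-component analysis and a minimum-feedback-vertex-set search, which are not shown.

```python
def kernel(definitions):
    """Recursively prune words that are defined but define nothing else.

    `definitions` maps each defined word to the list of words used in
    its definition (toy format assumed for illustration).  Returns the
    set of surviving words: the dictionary's Kernel.
    """
    defs = dict(definitions)
    while True:
        used = {w for body in defs.values() for w in body}
        # Words that are defined (reachable) but appear in no definition:
        removable = [w for w in defs if w not in used]
        if not removable:
            break
        for w in removable:
            del defs[w]
    return set(defs) | {w for body in defs.values() for w in body}

# Toy six-word dictionary; "move" and "go" define each other.
toy = {
    "move":   ["go"],
    "go":     ["move"],
    "fast":   ["move"],
    "run":    ["move", "fast"],
    "sprint": ["run", "fast"],
    "dash":   ["sprint"],
}
```

Running `kernel(toy)` strips away "dash", then "sprint", "run", and "fast" in turn, leaving only the mutually defining pair {"move", "go"}: words that are reachable by definition but define nothing further are removed until the graph stabilizes.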