Fingerprint examiners maintain decision thresholds that represent the amount of evidence required for an identification or exclusion conclusion. As measured by error rate studies (Proc Natl Acad Sci USA. 2011;108(19):7733-8), these thresholds currently favor preventing erroneous identifications at the expense of allowing more erroneous exclusions. The goal of this study is to measure the decision thresholds of both fingerprint examiners and members of the general public, to determine whether examiners are more risk averse than potential jury members. To measure these thresholds externally, subjects manipulated decision criteria in a web-based visualization that reflects the trade-offs between erroneous identification decisions and erroneous exclusion decisions. Data from fingerprint examiners and the general public were compared to determine whether both groups express similar values through the placement of their decision criteria. The results of this study show that fingerprint examiners are more risk averse than members of the general public, although their criterion placements align with published error rate studies of fingerprint examiners. Demographic data highlight factors that may contribute to differences in decision criterion placement, both between the two groups and between individuals within a group. The experimental methods provide a rich framework for measuring, interpreting, and responding to the values of society as applied to forensic decision-making.
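The trade-off that subjects manipulated can be made concrete with a simple signal detection sketch: as a single decision criterion slides along an evidence axis, erroneous identifications fall while erroneous exclusions rise. The snippet below is a minimal illustration under assumed Gaussian evidence distributions and a single-criterion simplification; the distributions, criterion values, and rates are hypothetical and are not the study's visualization or data.

```python
# Minimal signal-detection sketch of the identification/exclusion trade-off.
# Hypothetical Gaussian evidence distributions; not the study's visualization or data.
from statistics import NormalDist

mated = NormalDist(mu=2.0, sigma=1.0)      # evidence when prints share a source
non_mated = NormalDist(mu=0.0, sigma=1.0)  # evidence when prints come from different sources

def error_rates(criterion):
    """Erroneous identification and erroneous exclusion rates at one criterion."""
    erroneous_id = 1.0 - non_mated.cdf(criterion)   # non-mated pair called an identification
    erroneous_excl = mated.cdf(criterion)           # mated pair called an exclusion
    return erroneous_id, erroneous_excl

# Sliding the criterion upward (more risk averse) buys fewer erroneous identifications
# at the cost of more erroneous exclusions.
for c in (0.5, 1.0, 1.5, 2.0, 2.5):
    fp, fn = error_rates(c)
    print(f"criterion={c:.1f}  erroneous ID={fp:.3f}  erroneous exclusion={fn:.3f}")
```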
Objective Theory of mind – the ability to infer others’ mental states – declines over the lifespan, potentially due to cognitive decline. However, it is unclear whether deficits emerge because older adults use the same strategies as young adults, albeit less effectively, or use different or no strategies. The current study compared the similarity of older adults’ theory of mind errors to young adults’ errors and to a random model. Methods 120 older adults (MAge = 74.68 years; 64 female) and 111 young adults (MAge = 19.1 years; 61 female) completed a novel theory of mind task (clips from an episode of the sitcom The Office®) and a standard measure of cognitive function (Logical Memory II). Monte Carlo resampling estimated the likelihood that older adults’ error patterns were more similar to young adults’ patterns or to a random distribution. Results Age deficits emerged on the theory of mind task. Poorer performance was associated with less similarity to young adults’ response patterns. Overall, older adults’ response patterns were ~2.7 million times more likely to match young adults’ than a random model. Critically, one-fourth of older adults’ errors were more similar to the random distribution. Poorer memory ability contributed to this relationship. Discussion Age deficits in theory of mind performance may be driven by a subset of older adults and may be related to disparities in strategy use. A certain amount of cognitive ability may be necessary for older adults to engage strategies similar to young adults’ during theory of mind reasoning.
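The Monte Carlo comparison can be sketched as follows. This is a minimal illustration under assumed details: each item is taken to have a fixed set of error options, similarity is scored as the proportion of matching error choices, and the young-adult error distribution, the uniform random model, and the older adult's responses are all hypothetical stand-ins rather than the study's data or metric.

```python
# Minimal sketch of a Monte Carlo resampling comparison of error patterns.
# All data and the similarity metric are hypothetical, not the study's materials.
import numpy as np

rng = np.random.default_rng(0)
n_items, n_error_options, n_draws = 20, 3, 2_000

# Hypothetical per-item distribution of young adults' error choices,
# and a uniform "random responder" model over the same error options.
young_dist = rng.dirichlet(np.ones(n_error_options), size=n_items)
random_dist = np.full((n_items, n_error_options), 1.0 / n_error_options)

# Hypothetical older adult: one error choice per item.
older = rng.integers(0, n_error_options, size=n_items)

def simulate(dist):
    """Draw one simulated error pattern (one choice per item) from a per-item distribution."""
    return np.array([rng.choice(n_error_options, p=row) for row in dist])

# Resampling: how often is the observed pattern closer (more matching choices)
# to a pattern drawn from the young-adult model than from the random model?
closer_to_young = 0
for _ in range(n_draws):
    match_young = np.mean(simulate(young_dist) == older)
    match_random = np.mean(simulate(random_dist) == older)
    closer_to_young += match_young > match_random

print("P(closer to young-adult model):", closer_to_young / n_draws)
```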
The semantic memory literature has recently seen the emergence of predictive neural network models that use principles of reinforcement learning to create a “neural embedding” of word meaning when trained on a language corpus. These models have taken the field by storm, partly due to the resurgence of connectionist architectures, but also due to their remarkable success at fitting human data. However, predictive embedding models also inherit the weaknesses of their ancestors. In this paper, we explore the effect of catastrophic interference (CI), a long-known flaw of neural network models, on a modern neural embedding model of semantic representation (word2vec). We use homonyms as an index of bias as a function of the order in which a corpus is learned. If the corpus is learned in random order, the final representation tends towards the dominant sense of the word (bank → money) rather than the subordinate sense (bank → river). However, if the subordinate sense is presented to the network after the dominant sense has been learned, CI almost completely erases the dominant sense, and the final representation tends strongly towards the more recent subordinate sense. We demonstrate the impact of CI and the sequence of learning on the final neural embeddings learned by word2vec, in both an artificial language and an English corpus, and evaluate the effectiveness of elastic weight consolidation, a solution to CI recently proposed in neuroscience, at mitigating its effects.
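The order-of-learning manipulation can be sketched with an off-the-shelf word2vec implementation. The snippet below is a minimal illustration assuming gensim's Word2Vec API and a toy corpus in which the dominant (money) sense of "bank" is more frequent than the subordinate (river) sense; the corpus, frequencies, and hyperparameters are illustrative assumptions, not the authors' materials, and the EWC evaluation is not reproduced here.

```python
# Minimal sketch of order-of-learning effects on a homonym in word2vec (assumes gensim).
# The toy corpus and hyperparameters are illustrative, not the study's materials.
import random
from gensim.models import Word2Vec

dominant = [["bank", "money", "loan", "deposit"]] * 300      # dominant sense: bank ~ money
subordinate = [["bank", "river", "water", "shore"]] * 100    # subordinate sense: bank ~ river

# Condition 1: random (interleaved) order -> the more frequent dominant sense wins.
mixed = dominant + subordinate
random.shuffle(mixed)
m_random = Word2Vec(mixed, vector_size=50, window=2, min_count=1, epochs=20, seed=1)

# Condition 2: sequential order -> the subordinate sense is trained last, so
# catastrophic interference pushes "bank" towards "river".
m_seq = Word2Vec(dominant, vector_size=50, window=2, min_count=1, epochs=20, seed=1)
m_seq.build_vocab(subordinate, update=True)
m_seq.train(subordinate, total_examples=len(subordinate), epochs=20)

for name, m in [("random order", m_random), ("sequential order", m_seq)]:
    print(name,
          "sim(bank, money) =", round(m.wv.similarity("bank", "money"), 3),
          "sim(bank, river) =", round(m.wv.similarity("bank", "river"), 3))
```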
This study was conducted in response to concerns about the consequences of latent fingerprint examinations. The goal is to determine whether society’s moral values align with the current bias towards erroneous exclusion decisions over erroneous identification decisions found in latent print examinations. Subjects were asked to manipulate a web-based visualization that reflects the trade-offs between putting guilty people in jail and keeping innocent people out of jail. The results were analyzed to determine the similarities and differences between the opinions of fingerprint examiners and those of students and members of the general public. In practice, examiners adopt more conservative decision criteria, because they could lose their jobs if they put an innocent person in jail. According to the results of this study, examiners seem to endorse a much more liberal exclusion criterion than they actually apply in casework, and, based on their average criterion placement in the visualization, the public seems willing to tolerate a higher rate of erroneous identifications in exchange for a lower erroneous exclusion rate. The results will help examiners align their decisions with those of society, and help all citizens understand the trade-offs that come with shifting decision criteria. If the results indicate the need to shift the decision criteria to put more criminals in jail, additional safeguards may be necessary to guard against innocent people going to jail. Thus, this dataset represents a rich framework for measuring, interpreting, and responding to the values and beliefs of what constitutes a just and moral society.