We present a computational model of multiple-object tracking that makes trial-level predictions about the allocation of visual attention and the effect of this allocation on observers' ability to track multiple objects simultaneously. This model follows the intuition that increased attention to a location increases the spatial resolution of its internal representation. Using a combination of empirical and computational experiments, we demonstrate the existence of a tight coupling between cognitive and perceptual resources in this task: Low-level tracking of objects generates bottom-up predictions of error likelihood, and high-level attention allocation selectively reduces error probabilities at attended locations while increasing them at unattended locations. Whereas earlier models of multiple-object tracking have predicted the big-picture relationship between stimulus complexity and response accuracy, our approach accurately predicts both the macro-scale effect of target number and velocity on tracking difficulty and micro-scale variations in difficulty across individual trials and targets arising from the idiosyncratic within-trial interactions of targets and distractors.
We have developed a method for learning relative preferences from histories of choices made, without requiring an intermediate utility computation. Our method infers preferences that are rational in a psychological sense, where agent choices result from Bayesian inference of what to do from observable inputs. We further characterize the conditions on choice histories under which it is appropriate for modelers to describe relative preferences using ordinal utilities, and illustrate the influence of choice history by using it to explain all major categories of context effects. Our proposal clarifies the relationship between economic and psychological definitions of rationality and rationalizes several behaviors heretofore judged irrational by behavioral economists.
DNA replication has a finite measurable error rate, net of repair, in all cells. Clonal proliferation of cancer cells therefore leads to the accumulation of random mutations. A proportion of these mutational events can create new immunogenic epitopes that, if processed and presented by an MHC allele, may be recognized by the adaptive immune system. Here, we use probability theory to analyze the mutational and epitope composition of a tumor mass in successive division cycles and create a double Pólya model for calculating the number of truly tumor-specific MHC I epitopes in a human tumor. We deduce that, depending upon tumor size, the degree of genomic instability, and the degree of cell death within a tumor, human tumors have several tens to low hundreds of new, truly tumor-specific epitopes. Parenthetically, cancer stem cells, due to the asymmetry in their proliferative properties, shall harbor significantly fewer mutations, and therefore significantly fewer immunogenic epitopes. As the overwhelming majority of the mutations in cancer cells are unrelated to malignancy, the mutation-generated epitopes shall be specific for each individual tumor and constitute the antigenic fingerprint of each tumor. These calculations highlight the benefits of personalizing immunotherapy of human cancer and, in view of the substantial pre-existing antigenic repertoire of tumors, emphasize the enormous potential of therapies that modulate the anti-cancer immune response by liberating it from inhibitory influences.
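The core arithmetic of this abstract — mutations accrue with division cycles, and a fraction of them yield presentable epitopes — can be sketched as follows. This is a minimal illustrative model, not the paper's double Pólya model; the function name and every parameter value here are assumptions chosen only to show how the expected epitope count scales.

```python
# Hedged sketch of mutation-driven epitope accumulation.
# All parameter values below are illustrative assumptions,
# not figures taken from the paper.

def expected_epitopes(divisions, mutations_per_division, p_epitope):
    """Expected number of new tumor-specific epitopes, assuming
    mutations accrue linearly over division cycles and each mutation
    independently yields an MHC I-presentable epitope with
    probability p_epitope."""
    total_mutations = divisions * mutations_per_division
    return total_mutations * p_epitope

# Illustrative (hypothetical) values: 500 cumulative divisions,
# ~3 coding mutations per division, ~4% of mutations producing a
# presentable epitope.
print(expected_epitopes(500, 3.0, 0.04))  # → 60.0
```

With these assumed inputs the expectation lands in the "several tens" range the abstract reports; the paper's model additionally accounts for tumor size, genomic instability, and cell death, which this linear sketch ignores.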
We propose using side information to further inform anomaly detection algorithms of the semantic context of the text data they are analyzing, thereby considering both divergence from the statistical pattern seen in particular datasets and divergence from more general semantic expectations. Computational experiments show that our algorithm performs as expected on data that reflect real-world events with contextual ambiguity, while replicating conventional clustering on data that are either too specialized or too generic for contextual information to be actionable. These results suggest that our algorithm could reduce false positive rates in existing anomaly detection systems.