Category learning is an incredibly broad topic. Researchers with heterogeneous goals and methods, drawn from various traditions, are working to understand the nature of human categorization. Diversity is desirable if findings can eventually be placed in a common theoretical framework. However, one danger is that the study of categorization could fracture into isolated communities that do not benefit from insights outside their own cadre.

One current tension is between researchers engaged in mathematical modeling of laboratory studies and those working in domains that explore humans' real-world knowledge, such as researchers examining the influence of culture on category formation. Both communities have good reasons to be wary of each other. Researchers working in more naturalistic and meaningful domains fault laboratory researchers and modelers for focusing on tasks that do not approximate the richness of real-world categorization. Conversely, some modelers might be hard-pressed to see how research in less well-controlled settings could eventually lead to a mechanistic understanding of categorization.

For example, Lynch, Coley, and Medin (2000) explore how different types of tree experts have different conceptualizations of the same trees (as evidenced by their typicality ratings). Although their results are illuminating and rich, the groups they consider differ in a number of respects, making it hard to develop a causal story that ends in a mechanistic account of becoming an expert. At the other end of the naturalistic-to-tightly-controlled spectrum, Nosofsky, Gluck, Palmeri, McKinley, and Glauthier (1994) revisit the classic learning problems of Shepard, Hovland, and Jenkins (1961) and fit models to the results. Although data collected in well-controlled laboratory studies that use random assignment and counterbalanced stimuli promote model development, how such models could speak to Lynch et al.'s results remains unclear.

In this chapter, we attempt to ease this tension. We apply a model of category learning developed through consideration of data from laboratory tasks
Humans continuously categorise inputs, but only rarely receive explicit feedback as to whether or not they are correct. This implies that they may be integrating unsupervised information together with their sparse supervised data -- a form of semi-supervised learning. However, experiments testing semi-supervised learning are rare, and are bedevilled with conflicting results about whether the unsupervised information affords any benefit. Here, we suggest that one important factor that has been paid insufficient attention is the alignment between subjects' internal representations of the stimulus material and the experimenter-defined representations that determine success in the tasks. Subjects' representations are shaped by prior biases and experience, and unsupervised learning can only be successful if the alignment suffices. Otherwise, unsupervised learning might harmfully strengthen incorrect assumptions. To test this hypothesis, we conducted an experiment in which subjects initially categorise items along a salient, but task-irrelevant, dimension, and only recover the correct categories when sufficient feedback draws their attention to the subtle, task-relevant, stimulus dimensions. By withdrawing feedback at different stages along this learning curve, we tested whether unsupervised learning improves or worsens performance when internal stimulus representations and task are sufficiently or insufficiently aligned, respectively. Our results demonstrate that unsupervised learning can indeed have opposing effects on subjects' learning. We also discuss factors limiting the degree to which such effects can be predicted from momentary performance. Our work implies that predicting and understanding human category learning in particular tasks requires assessment and consideration of the representational spaces that subjects entertain for the materials involved in those tasks. These considerations not only apply to studies in the lab, but could also help improve the design of tutoring systems and instruction.
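To make the notion of combining sparse feedback with unlabelled exposure concrete, here is a minimal, hypothetical sketch (not the authors' model): a nearest-prototype classifier is first fit to a handful of labelled items and then "self-trains" on unlabelled items by treating its own guesses as labels. All names and parameter values below are illustrative assumptions.

```python
# Hypothetical sketch of semi-supervised self-training with prototypes.
# A few labelled items stand in for sparse feedback; a pool of unlabelled
# items stands in for unsupervised exposure to the stimulus material.
import numpy as np

rng = np.random.default_rng(0)

def fit_prototypes(X, y):
    # One prototype per category: the mean of the items assigned to it.
    return np.array([X[y == k].mean(axis=0) for k in np.unique(y)])

def predict(prototypes, X):
    # Assign each item to the nearest prototype (Euclidean distance).
    d = np.linalg.norm(X[:, None, :] - prototypes[None, :, :], axis=2)
    return d.argmin(axis=1)

# Two categories that differ only on a subtle dimension (axis 1);
# axis 0 is a salient but task-irrelevant dimension with large variance.
def sample(n, mean_y):
    return rng.normal([0.0, mean_y], [3.0, 0.5], size=(n, 2))

X_lab = np.vstack([sample(5, -1.0), sample(5, +1.0)])      # supervised trials
y_lab = np.array([0] * 5 + [1] * 5)
X_unl = np.vstack([sample(200, -1.0), sample(200, +1.0)])  # unsupervised trials

prototypes = fit_prototypes(X_lab, y_lab)   # learned from sparse feedback
for _ in range(10):                         # unsupervised phase: self-labelling
    y_self = predict(prototypes, X_unl)
    prototypes = fit_prototypes(np.vstack([X_lab, X_unl]),
                                np.concatenate([y_lab, y_self]))
```

Whether the unsupervised loop sharpens or distorts the prototypes depends on whether the feedback-based starting point already separates the categories along the task-relevant dimension, a rough analogue of the representational alignment the abstract emphasizes.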
Objectives: An earlier fracture is a strong risk factor for subsequent fracture. Vertebral fractures cause loss of body height, and height loss also reflects cumulative spinal degeneration. We hypothesize that height loss predicts subsequent fractures of all types. Material and methods: All males born 1925-59 and all females born 1930-59 living in the city of Tromsø were invited to surveys in 1979/80 and 1986/87 (Tromsø Heart Study). Of 18,707 invited, 12,270 attended both surveys (66%). Height was measured without shoes with a stadiometer. All types of fractures in the period 1988-1995 (n=826), except vertebral fractures, were recorded and verified by X-ray at the only hospital in the area. Fractures were coded as low-traumatic or not from information in the referrals. Results: Those with a height loss suffered more fractures of all types than those with no height loss (RR=1.23, CI 1.05-1.44, p=0.009). The relative risk was higher for women (RR=1.54, CI 1.22-1.95, p=0.00013), especially with respect to wrist fractures (RR=1.64, CI 1.14-2.38, p=0.007), and the estimate was similar for ankle fractures (RR=1.64, CI 0.85, p=0.08). Excluding high-traumatic fractures did not change the point estimates significantly. When adjusting for age among women, the odds ratio for fractures decreased (RR=1.26, CI 1.00-1.51, p=0.07). Among men there was an interaction with age (p=0.014): height loss was predictive of fractures among older men but not among younger men. When an interaction term was included in the regression model, height loss was predictive of fractures among men (p=0.02). Conclusion: Height loss is a predictor of fractures, even in such a young population. Further studies are needed in older populations, but this result suggests that proper measurement of height loss could be part of follow-up in studies of osteoporosis.