Languages differ in how they organize events, particularly in the types of semantic elements they express and the arrangement of those elements within a sentence. Here we ask whether these cross-linguistic differences have an impact on how events are represented nonverbally; more specifically, on how events are represented in gestures produced without speech (silent gesture), compared to gestures produced with speech (co-speech gesture). We observed speech and gesture in 40 adult native speakers of English and Turkish (n = 20/language) asked to describe physical motion events (e.g., running down a path)—a domain known to elicit distinct patterns of speech and co-speech gesture in English- and Turkish-speakers. Replicating previous work (Kita & Özyürek, 2003), we found an effect of language on gesture when it was produced with speech—co-speech gestures produced by English-speakers differed from co-speech gestures produced by Turkish-speakers. However, we found no effect of language on gesture when it was produced on its own—silent gestures produced by English-speakers were identical to silent gestures produced by Turkish-speakers in how motion elements were packaged and ordered. The findings provide evidence for a natural semantic organization that humans impose on motion events when they convey those events without language.
Speakers of all languages gesture, but there are differences in the gestures that they produce. Do speakers learn language-specific gestures by watching others gesture or by learning to speak a particular language? We examined this question by studying the speech and gestures produced by 40 congenitally blind adult native speakers of English and Turkish (n = 20/language), and comparing them with the speech and gestures of 40 sighted adult speakers in each language (20 wearing blindfolds, 20 not wearing blindfolds). We focused on speakers' descriptions of physical motion, which display strong cross-linguistic differences in patterns of speech and gesture use. Congenitally blind speakers of English and Turkish produced speech that resembled the speech produced by sighted speakers of their native language. More important, blind speakers of each language used gestures that resembled the gestures of sighted speakers of that language. Our results suggest that hearing a particular language is sufficient to gesture like a native speaker of that language.
Sighted speakers of different languages vary systematically in how they package and order components of a motion event in speech. These differences influence how semantic elements are organized in gesture, but only when those gestures are produced with speech (co-speech gesture), not without speech (silent gesture). We ask whether the cross-linguistic similarity in silent gesture is driven by the visuospatial structure of the event. We compared 40 congenitally blind adult native speakers of English or Turkish (20/language) to 80 sighted adult speakers (40/language; half with, half without blindfolds) as they described three-dimensional motion scenes. We found an effect of language on co-speech gesture, not on silent gesture: blind speakers of both languages organized their silent gestures as sighted speakers do. Humans may have a natural semantic organization that they impose on events when conveying them in gesture without language: an organization that relies on neither visuospatial cues nor language structure.
How do humans compute approximate number? According to one influential theory, approximate number representations arise in the intraparietal sulcus and are amodal, meaning that they arise independently of any sensory modality. Alternatively, approximate number may be computed initially within sensory systems. Here we tested for sensitivity to approximate number in the visual system using steady-state visual evoked potentials. We recorded electroencephalography from humans while they viewed dotclouds presented at 30 Hz, which alternated in numerosity (ranging from 10 to 20 dots) at 15 Hz. At this rate, each dotcloud backward masked the previous dotcloud, disrupting top-down feedback to visual cortex and preventing conscious awareness of the dotclouds' numerosities. Spectral amplitude at 15 Hz measured over the occipital lobe (Oz) correlated positively with the numerical ratio of the stimuli, even when nonnumerical stimulus attributes were controlled, indicating that subjects' visual systems were differentiating dotclouds on the basis of their numerical ratios. Crucially, subjects were unable to discriminate the numerosities of the dotclouds consciously, indicating that the backward masking of the stimuli disrupted reentrant feedback to visual cortex. Approximate number appears to be computed within the visual system, independently of higher-order areas, such as the intraparietal sulcus.
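As a rough illustration of this frequency-tagging analysis, the sketch below (in Python) extracts the spectral amplitude at the 15 Hz alternation frequency from a single occipital epoch via an FFT and correlates it with the numerical ratios of the alternating dot clouds. The epoch data, sampling rate, and ratio values are placeholders for illustration only, not the study's recordings or pipeline.

```python
import numpy as np
from scipy.stats import pearsonr

def spectral_amplitude(signal, fs, target_hz):
    """Single-sided FFT amplitude at the frequency bin closest to target_hz.

    signal : 1-D array of EEG samples from one epoch.
    fs     : sampling rate in Hz.
    """
    n = len(signal)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    amps = np.abs(np.fft.rfft(signal)) * 2.0 / n
    return amps[np.argmin(np.abs(freqs - target_hz))]

# Hypothetical inputs: one Oz epoch per condition and the numerical ratio
# (larger/smaller numerosity) of the two alternating dot clouds.
fs = 512                                                      # assumed sampling rate (Hz)
oz_epochs = [np.random.randn(fs * 10) for _ in range(6)]      # placeholder EEG epochs
numerical_ratios = np.array([1.1, 1.3, 1.5, 1.7, 1.9, 2.0])   # placeholder ratios

amps_15hz = np.array([spectral_amplitude(ep, fs, 15.0) for ep in oz_epochs])
r, p = pearsonr(numerical_ratios, amps_15hz)
print(f"correlation between 15 Hz amplitude and numerical ratio: r = {r:.2f}, p = {p:.3f}")
```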
Background: Alzheimer's disease (AD) lacks a fast, easy, reliable, and inexpensive method of diagnosis. Currently, diagnosis is based on time-consuming behavioral tests and the exclusion of other potential causes of impairment. Several biomarkers show good or promising diagnostic performance (e.g., CSF, tau PET, MRI, blood), but are either expensive, invasive, or still in development, and while some can detect preclinical disease stages, all appear slow to change relative to the rate of cognitive decline. Here we develop a prototype diagnostic classifier based on novel metrics of brain activity in resting-state electroencephalography (EEG) that correlate well with mental status.

Method: Archival resting-state EEG recordings of older adults (N = 248) came from a memory clinic and a university-based clinic and covered a range of clinical diagnoses, including subjective cognitive impairment (SCI), mild cognitive impairment (MCI), and dementia, representing AD, vascular dementia, Lewy body dementia, TBI, and depression. We developed XGBoost classifiers to detect AD using EEG, age, and sex under increasingly challenging conditions. We computed metrics of periodic and aperiodic brain activity using the FOOOF algorithm. Furthermore, we developed a novel technique called [Banded Fractal Variability], which yields a set of features based on fluctuations in the fractal dimension within canonical frequency bands. We trained classifiers using cross-validation to avoid overfitting during hyperparameter selection.

Result: Along with ROC AUC, we report the optimal sensitivity, specificity, and accuracy for the point on the ROC curve that maximizes Youden's J. The classification tasks were healthy vs. probable AD (ROC AUC = 98%, sensitivity = 90%, specificity = 99%, accuracy = 96%), SCI vs. mild AD (ROC AUC = 89%, sensitivity = 76%, specificity = 87%, accuracy = 84%), and AD vs. other pathologies (no AD diagnosis) in MCI and dementia patients (ROC AUC = 82%, sensitivity = 72%, specificity = 87%, accuracy = 80%). All ROC AUC values were stronger than would be expected by chance (ps < 0.001).

Conclusion: These preliminary results suggest that AD could be diagnosed in the clinic on the basis of machine-learning classifiers and resting-state EEG. Furthermore, they demonstrate that [Banded Fractal Variability] carries clinically relevant information about AD.
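As a rough illustration of the evaluation described in the Result section, the sketch below trains an XGBoost classifier with cross-validated (out-of-fold) probabilities, computes ROC AUC, and reports the sensitivity, specificity, and accuracy at the ROC point that maximizes Youden's J. The feature matrix, labels, and hyperparameters are placeholders, not the study's actual EEG metrics or settings.

```python
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import cross_val_predict, StratifiedKFold
from sklearn.metrics import roc_auc_score, roc_curve

# Hypothetical feature matrix: spectral/aperiodic EEG metrics plus age and sex,
# one row per participant; y = 1 for probable AD, 0 for healthy.
rng = np.random.default_rng(0)
X = rng.normal(size=(248, 20))        # placeholder features
y = rng.integers(0, 2, size=248)      # placeholder labels

clf = XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.1)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

# Out-of-fold predicted probabilities, so the ROC is not computed on training data.
proba = cross_val_predict(clf, X, y, cv=cv, method="predict_proba")[:, 1]

auc = roc_auc_score(y, proba)
fpr, tpr, thresholds = roc_curve(y, proba)

# Youden's J = sensitivity + specificity - 1; pick the threshold that maximizes it.
j = tpr - fpr
best = np.argmax(j)
sensitivity = tpr[best]
specificity = 1 - fpr[best]
pred = (proba >= thresholds[best]).astype(int)
accuracy = (pred == y).mean()

print(f"ROC AUC = {auc:.2f}  sensitivity = {sensitivity:.2f}  "
      f"specificity = {specificity:.2f}  accuracy = {accuracy:.2f}")
```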