This study examined the effects of one night of sleep curtailment on hunger, food cravings, food reward, and portion size selection. Women who reported habitually sleeping 7–9 h per night, were aged 18–55, were not obese, and had no sleep disorders were recruited. Sleep conditions in this randomized crossover study consisted of a normal night (NN) and a curtailed night (CN) in which time in bed was reduced by 33%. Hunger, tiredness, sleep quality, sleepiness, and food cravings were measured. A progressive ratio task using chocolates assessed food reward. Participants selected portions of various foods reflecting how much they wanted to eat at that time. Sleep duration was measured with a single-channel electroencephalograph. Twenty-four participants completed the study. Total sleep time was shorter during the CN (p < 0.001). Participants reported increased hunger (p = 0.013), tiredness (p < 0.001), sleepiness (p < 0.001), and food cravings (p = 0.002) after the CN. More chocolate was consumed after the CN (p = 0.004). Participants also selected larger portions after the CN, increasing the energy plated for lunch (p = 0.034). In conclusion, the present study observed increased hunger, food cravings, food reward, and food portion sizes after a night of modest sleep curtailment. These maladaptive responses could lead to higher energy intake and, ultimately, weight gain.
Generating sentences from a library of signs, implemented through a sparse set of key frames derived from the segmental structure of a phonetic model of ASL, has the advantages of flexibility and efficiency but lacks the lifelike detail of motion capture. This shortcoming is compounded by the demands of real-time generation and display. This paper describes a technique for automatically adding realism without the expense of manually animating the requisite detail. The new technique layers transparently over, and modifies, the primary motions dictated by the segmental model, and does so at very little computational cost, enabling real-time production and display. The paper also discusses avatar optimizations that can lower the rendering overhead of real-time displays.
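As a rough illustration of the layering idea (not the paper's implementation; the function names, the sinusoidal perturbation, and all parameters below are assumptions), the following sketch adds a cheap procedural layer on top of key-frame interpolation, which is why the per-frame cost can stay compatible with real-time display:

```python
import numpy as np

def primary_motion(k0, k1, t):
    # Primary motion: interpolate between two sparse key poses
    # (joint rotations) produced by the segmental model.
    return (1.0 - t) * k0 + t * k1

def realism_layer(t, amplitude=0.02, frequency=1.3):
    # Hypothetical secondary layer: a small periodic perturbation
    # added on top of the primary motion. One sine evaluation per
    # frame keeps the per-frame cost negligible for real-time use.
    return amplitude * np.sin(2.0 * np.pi * frequency * t)

def pose_at(k0, k1, t):
    # The layer modifies, but never replaces, the primary motion.
    return primary_motion(k0, k1, t) + realism_layer(t)

# Two key poses of a sign, as joint rotations in radians.
key_a = np.array([0.10, -0.25, 0.40])
key_b = np.array([0.35, -0.05, 0.10])
print(pose_at(key_a, key_b, 0.5))  # mid-transition pose with added detail
```

Because the secondary layer is purely additive, it can be enabled, tuned, or dropped without touching the key frames generated from the phonetic model.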
Translating from English to American Sign Language (ASL) requires an avatar to display synthesized ASL. Essential to the language are nonmanual signals that appear on the face. Previous avatars were hampered by an inability to portray emotion and facial nonmanual signals that occur at the same time. A new animation system addresses this challenge. Animations produced by the new system were tested with 40 members of the Deaf community in the United States. For each animation, participants were able to identify both nonmanual signals and emotional states. Co-occurring question nonmanuals and affect information were distinguishable, which is particularly striking because the two processes can move an avatar's brows in opposing directions.