Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI 2018)
DOI: 10.1145/3173574.3173932

Capturing, Representing, and Interacting with Laughter

Abstract: We investigate a speculative future in which we celebrate happiness by capturing laughter and representing it in tangible forms. We explored technologies for capturing naturally occurring laughter as well as various physical representations of it. For several weeks, our participants collected audio samples of everyday conversations with their loved ones. We processed those samples through a machine learning algorithm and shared the resulting tangible representations (e.g., physical containers and edible displa…

Cited by 32 publications (18 citation statements) | References 32 publications

Citation statements (ordered by relevance):
“…We obtain videos from seasons 1-8 and segment episodes using laughter cues from its audience. Specifically, we use open-source software for laughter detection (Ryokai et al., 2018) to obtain initial segmentation boundaries and fine-tune them using the subtitles' timestamps.…”
Section: Data Collection
confidence: 99%
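The recipe quoted above (detector output refined against subtitle timestamps) can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the cited pipeline: the boundary list would come from a laughter detector such as the code released with Ryokai et al. (2018), and the snapping rule below is an assumed alignment strategy.

```python
# Hypothetical sketch: refine laughter-based segment boundaries by
# snapping them to nearby subtitle timestamps. The tolerance-based
# snapping rule is an assumption, not the cited implementation.

def snap_to_subtitles(laughter_onsets, subtitle_times, tolerance=1.0):
    """Move each detected boundary (seconds) to the nearest subtitle
    timestamp, provided one exists within `tolerance` seconds."""
    refined = []
    for t in laughter_onsets:
        nearest = min(subtitle_times, key=lambda s: abs(s - t))
        refined.append(nearest if abs(nearest - t) <= tolerance else t)
    return refined

# Example: detector boundaries snapped to subtitle cue times.
laughter_onsets = [12.3, 47.9, 81.4]
subtitle_times = [12.0, 48.1, 80.0]
print(snap_to_subtitles(laughter_onsets, subtitle_times))
# -> [12.0, 48.1, 81.4]  (81.4 kept: no subtitle within 1 s)
```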
“…Subsequently, interaction chronograms (including turn-taking events) were created using TextGridTools, a Python toolkit for working with Praat TextGrid files (Buschmeier and Włodarczak, 2013). Intervals of laughter were identified using the method by Ryokai et al. (2018), with the code and models accompanying the paper. The authors report a per-frame accuracy of 88% on a held-out Switchboard test set, which is comparable to state-of-the-art performance of audio-only automatic laughter recognizers (Cosentino et al., 2016).…”
Section: Methods
confidence: 99%
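TextGridTools (`tgt`) is the Python toolkit named in this excerpt; a minimal sketch of pulling laughter intervals out of a Praat TextGrid might look like the following. The file path and the tier name "laughter" are assumptions for illustration; the cited study's actual tier layout is not specified here.

```python
# Minimal sketch using TextGridTools (pip install tgt) to collect
# laughter intervals from a Praat TextGrid. The tier name "laughter"
# and the labeling convention are assumptions for illustration.
import tgt

tg = tgt.read_textgrid("session01.TextGrid")  # hypothetical file
tier = tg.get_tier_by_name("laughter")        # hypothetical tier name

# Keep only non-empty intervals, i.e., spans annotated as laughter.
laughs = [(iv.start_time, iv.end_time)
          for iv in tier.intervals if iv.text.strip()]

for start, end in laughs:
    print(f"laughter from {start:.2f}s to {end:.2f}s "
          f"({end - start:.2f}s long)")
```

From such intervals, turn-taking and laughter events can be merged on a common timeline to form the interaction chronograms the excerpt describes.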
“…Ryokai et al. [168] explored ways to assign laughter to tangible objects. Yuan et al. [169] trained a deep learning model to predict the scannability of webpage content.…”
Section: Classifying HCML Research
confidence: 99%