2018
DOI: 10.1002/cav.1835

Example‐based synthesis for sound of ocean waves caused by bubble dynamics

Abstract: We present an automatic approach for the semantic modeling of indoor scenes based on a single photograph, instead of relying on depth sensors. Without using handcrafted features, we guide indoor scene modeling with feature maps extracted by fully convolutional networks. Three parallel fully convolutional networks are adopted to generate object instance masks, a depth map, and an edge map of the room layout. Based on these high-level features, support relationships between indoor objects can be efficiently infe…

Cited by 9 publications (6 citation statements)
References 31 publications
“…In the field of graphics, researchers have developed methods that use nonphysical approaches to automatically synthesize sounds synchronized with animation, with the parameters obtained from physics-based animation [17-19]. An et al. [17] proposed a sound synthesis method for fabric animation, in which the sound grains matching the 3D animation were selected based on the Mel-Frequency Cepstrum Coefficient (MFCC) of the sound signal, and the selected sound grains were subsequently spliced together to synthesize the fabric sound. The principles of the sound-grain algorithm are referenced in this paper.…”
Section: Related Work (mentioning)
confidence: 99%
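The MFCC-driven grain selection and splicing described in the statement above can be summarized, very roughly, by the following sketch. It is not the authors' implementation: librosa is assumed only for MFCC extraction, `target_feats` stands in for per-frame feature vectors that would be derived from the physics-based animation (how they are computed is method-specific), and `grains` is a hypothetical bank of recorded audio snippets.

```python
import numpy as np
import librosa  # assumed available; used only for MFCC extraction


def grain_mfcc(grain, sr, n_mfcc=13):
    """Mean MFCC vector describing one recorded sound grain."""
    return librosa.feature.mfcc(y=grain, sr=sr, n_mfcc=n_mfcc).mean(axis=1)


def splice_grains(target_feats, grains, sr, fade_ms=5.0):
    """For each per-frame target feature vector, pick the grain whose MFCC
    is closest, then splice the picks with a short linear crossfade."""
    bank = np.stack([grain_mfcc(g, sr) for g in grains])
    out = np.zeros(0)
    fade = int(sr * fade_ms / 1000.0)
    for feat in target_feats:
        g = grains[int(np.argmin(np.linalg.norm(bank - feat, axis=1)))]
        if out.size >= fade and g.size >= fade:
            # overlap the tail of the output with the head of the new grain
            ramp = np.linspace(0.0, 1.0, fade)
            out[-fade:] = out[-fade:] * (1.0 - ramp) + g[:fade] * ramp
            g = g[fade:]
        out = np.concatenate([out, g])
    return out
```

In the cited approach the per-frame features come from the cloth simulation itself; here they are left as an abstract input to keep the sketch self-contained.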
“…Wang and Liu [237] proposed a hybrid method to generate the sound of wave animation based on available wave sound samples and bubble extraction. Based on the theory of bubbles as the main sound source, they generated bubble particles to ensure the synchronization between visual and sound effects.…”
Section: Liquid Sound (mentioning)
confidence: 99%
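As a purely illustrative sketch of the synchronization idea in the statement above (not the authors' hybrid pipeline), the per-frame count of newly generated bubble particles can be turned into an audio-rate envelope that modulates a pre-recorded wave sample, so louder audio coincides with visible bubble activity. The frame structure and all names here are hypothetical; the actual method selects and combines wave sound samples based on bubble extraction rather than applying a single gain envelope.

```python
import numpy as np


def bubble_activity(frames, fps, sr):
    """Audio-rate envelope built from the number of newly spawned bubble
    particles in each animation frame (hypothetical frame layout)."""
    counts = np.array([float(len(f["new_bubbles"])) for f in frames])
    if counts.max() > 0:
        counts /= counts.max()                       # normalize to [0, 1]
    t_frames = np.arange(len(frames)) / fps
    t_audio = np.arange(int(len(frames) * sr / fps)) / sr
    return np.interp(t_audio, t_frames, counts)


def synchronized_wave_sound(frames, sample, fps, sr):
    """Gate a looped recorded wave-sound sample with the bubble-activity
    envelope so sound events track the animated splashes."""
    envelope = bubble_activity(frames, fps, sr)
    loop = np.resize(sample, envelope.size)          # tile the recording
    return loop * envelope
```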
“…In the computer graphics community, researchers have developed nonphysical methods to automatically synthesize sounds synchronized with animation [12-14]. The data used for synchronization are derived from physics-based animation. A data-driven method for automatically synthesizing plausible sound for cloth animations was proposed by An et al.…”
Section: Sound Synthesis (mentioning)
confidence: 99%
“…In the computer graphics community, researchers have developed nonphysical methods to automatically synthesize sounds synchronized with animation [12-14]. The data used for synchronization are derived from physics-based animation.…”
Section: Related Work (mentioning)
confidence: 99%