2020 IEEE/SICE International Symposium on System Integration (SII)
DOI: 10.1109/sii46433.2020.9025860

Real-time 3-D Mapping with Estimating Acoustic Materials

Abstract: This paper proposes a real-time system that integrates acoustic material estimation from visual appearance with on-the-fly 3-D mapping. The proposed method estimates the acoustic materials of the surroundings in indoor scenes and incorporates them into a 3-D occupancy map as a robot moves around the environment. To estimate acoustic materials from visual cues, we apply a state-of-the-art semantic segmentation CNN, based on the assumption that the visual appearance and the acoustic mat…
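
The abstract describes assigning acoustic material properties to surfaces from per-pixel semantic labels and fusing them into a 3-D occupancy map as the robot moves. The Python sketch below is a minimal illustration of that general idea, not the authors' implementation: the class-to-absorption lookup table, the AcousticVoxelMap class, and all function names are assumptions introduced here for illustration only.

import numpy as np

# Hypothetical lookup: semantic class name -> frequency-averaged absorption
# coefficient. The material table used in the paper is not reproduced here.
CLASS_TO_ABSORPTION = {
    "wall_painted": 0.05,
    "carpet": 0.30,
    "curtain": 0.45,
    "glass": 0.03,
    "wood_furniture": 0.10,
}

def labels_to_absorption(label_image, class_names):
    # Convert a per-pixel semantic label image (H x W of integer class ids)
    # into a per-pixel absorption map using the lookup table above.
    absorption = np.zeros(label_image.shape, dtype=np.float32)
    for class_id, name in enumerate(class_names):
        absorption[label_image == class_id] = CLASS_TO_ABSORPTION.get(name, 0.1)
    return absorption

class AcousticVoxelMap:
    # Minimal voxel grid keeping a running average of the absorption
    # coefficient observed for each occupied cell.
    def __init__(self, resolution=0.1):
        self.resolution = resolution
        self.sum_alpha = {}   # voxel index -> accumulated absorption
        self.count = {}       # voxel index -> number of observations

    def integrate(self, points_world, absorption_values):
        # Fuse one frame: each 3-D point carries the absorption of its pixel.
        for p, a in zip(points_world, absorption_values):
            key = tuple(np.floor(p / self.resolution).astype(int))
            self.sum_alpha[key] = self.sum_alpha.get(key, 0.0) + a
            self.count[key] = self.count.get(key, 0) + 1

    def absorption_at(self, key):
        return self.sum_alpha[key] / self.count[key]

if __name__ == "__main__":
    class_names = list(CLASS_TO_ABSORPTION.keys())
    labels = np.random.randint(0, len(class_names), size=(4, 4))
    alpha = labels_to_absorption(labels, class_names)
    # Fake back-projected 3-D points per pixel (normally from depth + pose).
    points = np.random.rand(16, 3) * 2.0
    vmap = AcousticVoxelMap(resolution=0.25)
    vmap.integrate(points, alpha.ravel())

In a real pipeline the per-pixel absorption would come from the segmentation network's output and the 3-D points from depth back-projection with the estimated camera pose, before being written into the occupancy map.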

Cited by 2 publications (1 citation statement)
References 19 publications
“…The available information can be acoustic impulse responses, measured with a single (omni-directional) microphone (Pörschmann et al, 2017), a head-and-torso-simulator (Sloma et al, 2019; Garcia-Gomez & Lopez, 2018) or microphone array solutions (Garí et al, 2019; Stade, 2018; Zaunschirm et al, 2020; Müller & Zotter, 2020; McCormack et al, 2020; Engel & Picinali, 2022). Besides, for example, semantic and visual information can be used to estimate acoustic properties (Kim et al, 2017, 2020). BRIR synthesis can be realized either by pure simulation, for example, based on ray-tracing (Savioja & Svensson, 2015; Brinkmann et al, 2019), wave-based simulation approaches or delay networks (Alary et al, 2019; Välimäki et al, 2012) or by manipulation of measured impulse responses, like interpolation (Bruschi et al, 2020; Brinkmann et al, 2020), extrapolation (Neidhardt et al, 2018; Sloma et al, 2019; Coleman et al, 2017; Pörschmann et al, 2017) or shaping of the late reverberation tail (Jot & Lee, 2016; Pörschmann & Zebisch, 2012; Arend et al, 2021).…”
Section: Basic Technical System for Auditory Augmented Reality
Confidence: 99%