Understanding and Improving Deep Neural Network for Activity Recognition

2018 · Preprint
DOI: 10.48550/arxiv.1805.07020

Cited by 1 publication (2 citation statements) · References 21 publications
“…However, this is largely insufficient, since deep models may still encode noise such as irrelevant modalities [175]. Some researchers [19,163] visualized the features extracted by neural networks. After identifying from the visualization how these features relate to the activities, the authors pass the salient features on to subsequent models [163].…”
Section: Interpretability of Deep Learning Models in Sensory Data
confidence: 99%
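The feature-visualization approach this statement describes can be sketched with a forward hook on an intermediate layer of a sensor model. The following is a minimal, hypothetical illustration rather than the cited authors' actual pipeline: the 1D-CNN architecture, its layer sizes, and the random input window are all assumptions made for the example.

```python
import torch
import torch.nn as nn
import matplotlib.pyplot as plt

# Hypothetical 1D-CNN over windows of tri-axial accelerometer data
# (3 channels x 128 samples); architecture and sizes are illustrative only.
model = nn.Sequential(
    nn.Conv1d(3, 16, kernel_size=5, padding=2), nn.ReLU(),
    nn.MaxPool1d(2),
    nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(32, 6),  # e.g., 6 activity classes
)

activations = {}

def save_activation(name):
    # Forward hook that stores the layer's output for later inspection.
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

# Capture the output of the second convolution.
model[3].register_forward_hook(save_activation("conv2"))

window = torch.randn(1, 3, 128)  # stand-in for one real sensor window
model(window)

# Plot each learned feature map over time; channels that look salient
# can then be related to activities and fed to downstream models.
feats = activations["conv2"][0]  # shape: (32, 64)
plt.imshow(feats.numpy(), aspect="auto", cmap="viridis")
plt.xlabel("time step")
plt.ylabel("feature channel")
plt.title("Intermediate CNN features for one sensor window")
plt.show()
```

A hook-based approach like this leaves the trained model untouched, so the same inspection can be repeated on any layer simply by registering another hook.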
“…Some researchers [19,163] visualized the features extracted by neural networks. After identifying from the visualization how these features relate to the activities, the authors pass the salient features on to subsequent models [163]. Nutter et al. [103] transformed sensory data into images so that image-based visualization tools could be applied to the sensory data for more direct interpretability.…”
Section: Interpretability of Deep Learning Models in Sensory Data
confidence: 99%
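The sensor-to-image transformation this statement attributes to Nutter et al. [103] can take several forms, and the exact method is not given here. The sketch below therefore assumes a time-frequency spectrogram as one plausible signal-to-image mapping, with a synthetic signal standing in for real accelerometer data.

```python
import numpy as np
from scipy.signal import spectrogram
import matplotlib.pyplot as plt

# Stand-in for one axis of accelerometer data sampled at 50 Hz;
# a real pipeline would read windows from a HAR dataset instead.
fs = 50
t = np.arange(0, 10, 1 / fs)
signal = np.sin(2 * np.pi * 2.0 * t) + 0.3 * np.random.randn(t.size)

# Convert the 1-D signal into a 2-D time-frequency image. Image-based
# visualization (and image-model) tooling can then be applied directly.
f, seg_t, Sxx = spectrogram(signal, fs=fs, nperseg=64, noverlap=32)

plt.pcolormesh(seg_t, f, 10 * np.log10(Sxx + 1e-12), shading="auto")
plt.xlabel("time (s)")
plt.ylabel("frequency (Hz)")
plt.title("Sensor window rendered as an image")
plt.colorbar(label="power (dB)")
plt.show()
```

Rendering windows this way makes periodic activities (walking, cycling) appear as distinct frequency bands, which is what lets ordinary image-visualization tools expose structure that is hard to see in the raw time series.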