2020
DOI: 10.1007/978-3-030-49161-1_21
Task-Projected Hyperdimensional Computing for Multi-task Learning

Cited by 13 publications (9 citation statements)
References 11 publications
“…It has been shown that orthogonal, high-dimensional context vectors can be bound to neural network weights to access task-specific models stored in superposition with other models [6]. A similar result has also been demonstrated for HD classification, where orthogonal HVs representing different tasks were used as keys to access task-specific prototypes [4]. These strategies could be applicable to EMG-based gesture recognition, with gesture classification in each limb position treated as a separate task.…”
mentioning
confidence: 86%
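The key-binding scheme described in this excerpt can be sketched in a few lines of NumPy. Everything below (the dimensionality, the two-task setup, the names) is an illustrative assumption rather than code from the cited papers; it only shows why binding quasi-orthogonal task keys to prototypes lets one superposed vector serve several tasks.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # hypervector dimensionality; random HVs are quasi-orthogonal in high D

def rand_hv():
    # Random bipolar hypervector
    return rng.choice([-1, 1], size=D)

# Hypothetical two-task setup: one class prototype per task
task_keys = {"taskA": rand_hv(), "taskB": rand_hv()}
prototypes = {"taskA": rand_hv(), "taskB": rand_hv()}

# Bind (elementwise multiply) each prototype to its task key, then superpose
memory = sum(task_keys[t] * prototypes[t] for t in task_keys)

# Unbinding with a task key recovers a noisy copy of that task's prototype;
# the other task's term becomes quasi-orthogonal noise
recovered = task_keys["taskA"] * memory

sim_own = recovered @ prototypes["taskA"] / D    # close to 1
sim_other = recovered @ prototypes["taskB"] / D  # close to 0
print(f"similarity to own prototype:   {sim_own:.2f}")
print(f"similarity to other prototype: {sim_other:.2f}")
```

Because binding with a bipolar key is its own inverse, the same multiply serves both storage and retrieval; the cross-task term survives only as zero-mean noise of order 1/sqrt(D).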
“…This is in contrast to the strategy proposed in this work, where we approximate the use of different sensor modalities for each stage of dual-stage classification across multiple contexts. Our context-based orthogonalization method builds on prior work that uses orthogonal context vectors to minimize interference between different model parameters stored in superposition [4,6]. Here, we further analyze the benefits of this method when applied to HD computing, and we demonstrate ways to incorporate sensor fusion for creating context-specific HVs using both context classification and direct encoding of accelerometer signals.…”
Section: Related Work
mentioning
confidence: 99%
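The interference trade-off behind this superposition strategy can also be illustrated directly: with k models stored in one parameter vector under random bipolar context bindings, unbinding recovers the intended model with an expected cosine similarity of roughly 1/sqrt(k) to its original weights, while similarity to the other models stays near zero. The sketch below is a hypothetical illustration under those assumptions, not the cited authors' code.

```python
import numpy as np

rng = np.random.default_rng(1)
D = 10_000  # dimensionality of weight and context vectors
k = 4       # number of task-specific models stored together

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# k task-specific weight vectors, each bound to a random bipolar
# context vector before summing into a single superposed vector
weights = rng.normal(size=(k, D))
contexts = rng.choice([-1.0, 1.0], size=(k, D))
w_super = (contexts * weights).sum(axis=0)

# Unbinding with context i recovers weights[i] plus zero-mean
# interference from the other k - 1 models
recovered = contexts[0] * w_super
print(cos(recovered, weights[0]))  # roughly 1/sqrt(k) = 0.5 here
print(cos(recovered, weights[1]))  # near 0: the wrong task's weights
```

The 1/sqrt(k) scaling makes the capacity trade-off explicit: each additional model stored in superposition dilutes the recoverable signal, which is why high dimensionality and orthogonal contexts matter.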
“…VSA finds applications in, for example, cognitive architectures [21], natural language processing [22]-[24], biomedical signal processing [1], [25], approximation of conventional data structures [26], [27], and classification tasks such as gesture recognition [1], [28], cyber threat detection [29], physical activity recognition [30], and fault isolation [31], [32]. Examples of applying VSA to learning tasks other than classification include using data HVs for clustering [33]-[35], semi-supervised learning [36], collaborative privacy-preserving learning [37], [38], multi-task learning [39], [40], and distributed learning [41], [42].…”
Section: Related Work
mentioning
confidence: 99%
“…The use of VSA in the multi-task learning context has recently gained increased attention. Recent works [2], [4], [39], [40], [64] proposed using context hypervectors for multi-task learning in the supervised setting. The use of context hypervectors with Hyperseed is natural.…”
Section: Hyperseed in the Multitask Learning Context
mentioning
confidence: 99%