2018
DOI: 10.1007/978-3-319-97676-1_21

A Computational Theory for Life-Long Learning of Semantics

Abstract: Semantic vectors are learned from data to express semantic relationships between elements of information, for the purpose of solving and informing downstream tasks. Other models exist that learn to map and classify supervised data. However, the two worlds of learning rarely interact to inform one another dynamically, whether across types of data or levels of semantics, in order to form a unified model. We explore the research problem of learning these vectors and propose a framework for learning the semantics …

Cited by 8 publications (13 citation statements)
References 9 publications
“…This may become expensive when the hyperdimensional space contains many concepts. In order to maintain that data of a particular modality is closer to other examples of that modality, it may be necessary to adopt an approach that facilitates this, such as in Sutor et al. (2018).…”
Section: Discussion
confidence: 99%
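The structure this statement asks for — items of one modality staying closer to other items of that modality — can be illustrated with a minimal sketch. This is a hypothetical construction, not the specific method of Sutor et al. (2018): each item hypervector is bundled with a shared per-modality hypervector, which pulls same-modality items together while leaving cross-modality pairs quasi-orthogonal.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # hypervector dimensionality

def random_hv():
    # random bipolar hypervector in {-1, +1}^D
    return rng.choice([-1, 1], size=D)

def bundle(*hvs):
    # majority-rule bundling of bipolar hypervectors; ties broken at random
    s = np.sum(hvs, axis=0)
    tie = rng.choice([-1, 1], size=D)
    return np.where(s != 0, np.sign(s), tie)

def sim(a, b):
    # normalized dot product (cosine for bipolar vectors)
    return float(a @ b) / D

# one shared "modality" hypervector per data modality
M_image = random_hv()
M_audio = random_hv()

# each item hypervector is bundled with its modality vector,
# pulling same-modality items closer together in the space
img1 = bundle(random_hv(), M_image)
img2 = bundle(random_hv(), M_image)
aud1 = bundle(random_hv(), M_audio)

print(sim(img1, img2))  # noticeably positive: shared modality component
print(sim(img1, aud1))  # near zero: different modalities stay quasi-orthogonal
```

The trade-off the quote mentions is visible here: every concept must carry a modality component, which costs representational capacity as the number of concepts grows.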
“…This work is an extension of a series of prior works. First, [4]–[8] describe methods of encoding arbitrary data into functional hypervectors. Namely, ideas presented in [7] and [8] were used to facilitate the process of converting output signals from networks into hypervectors.…”
Section: A. Prior Work
confidence: 99%
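One common family of schemes for encoding arbitrary data into hypervectors — a generic role–filler sketch, not necessarily the specific encodings of [4]–[8] — binds a fixed random "role" hypervector per feature to a quantization-"level" hypervector for that feature's value, then bundles the bound pairs into one code:

```python
import numpy as np

rng = np.random.default_rng(1)
D = 10_000  # hypervector dimensionality

def random_hv():
    return rng.choice([-1, 1], size=D)

def encode(features, roles, levels):
    # bind (elementwise multiply) each feature's role vector to its
    # quantized level vector, then bundle (sum + sign) into one hypervector
    bound = [roles[i] * levels[q] for i, q in enumerate(features)]
    s = np.sum(bound, axis=0)
    tie = rng.choice([-1, 1], size=D)
    return np.where(s != 0, np.sign(s), tie)

n_features, n_levels = 4, 8
roles = [random_hv() for _ in range(n_features)]    # one role per feature slot
levels = [random_hv() for _ in range(n_levels)]     # one vector per value bin

x = encode([0, 3, 7, 2], roles, levels)
y = encode([0, 3, 7, 2], roles, levels)  # same signal -> highly similar code
z = encode([5, 1, 2, 6], roles, levels)  # different signal -> quasi-orthogonal

print(x @ y / D, x @ z / D)
```

A network's quantized output vector can be pushed through the same pipeline, which is the general idea behind converting network outputs into hypervectors; similarity-preserving variants typically use correlated rather than independent level vectors.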
“…On the other hand, if there is data associated with a problem, an alternative to such mappings would be to obtain atomic HD vectors from the available data via, e.g., an optimization process. For example, in [1] the optimization-based mapping [58] was used. Fig.…”
Section: Data Representation in VSAs, A. Atomic Representations
confidence: 99%
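An optimization-based mapping of this kind can be sketched generically — a plain least-squares factorization, not the specific method of [58]: learn real-valued atomic vectors whose pairwise dot products reproduce a target similarity matrix derived from the data.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 5, 32  # number of items; vector dimensionality

# hypothetical target similarity matrix the atomic vectors should reproduce
# (items 0/1 strongly related, items 2/3 weakly related, item 4 unrelated)
S = np.eye(n)
S[0, 1] = S[1, 0] = 0.8
S[2, 3] = S[3, 2] = 0.5

# gradient descent on ||V V^T - S||_F^2 (constant factors folded into lr)
V = rng.normal(scale=0.1, size=(n, d))
lr = 0.05
for _ in range(2000):
    E = V @ V.T - S   # residual between current and target similarities
    V -= lr * E @ V   # (scaled) gradient step

print(np.round(V @ V.T, 2))  # close to S
```

The rows of `V` then serve as similarity-preserving atomic vectors; in an HD setting they would typically be much higher-dimensional and possibly quantized afterwards.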
“…It rather shows that similar distance structures can be obtained in different ways. Lastly, it is worth mentioning that the optimization-based method [58] for obtaining similarity-preserving HD vectors can be contrasted with the Random Indexing method [35]. Random Indexing also implicitly (i.e., without constructing the co-occurrence matrix) uses co-occurrence statistics in order to form similarity-preserving HD vectors based on available data.…”
Section: Data Representation in VSAs, A. Atomic Representations
confidence: 99%
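Random Indexing's implicit use of co-occurrence statistics can be sketched as follows (a minimal illustration with a toy corpus; the window size and sparsity are arbitrary choices): each word gets a fixed sparse ternary index vector, and a word's context vector accumulates the index vectors of its neighbours, so the full co-occurrence matrix is never materialized.

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(3)
D, K = 1000, 10  # dimensionality; nonzero entries per index vector

def index_vector():
    # sparse ternary index vector: K/2 entries +1, K/2 entries -1
    v = np.zeros(D)
    pos = rng.choice(D, size=K, replace=False)
    v[pos[: K // 2]] = 1.0
    v[pos[K // 2 :]] = -1.0
    return v

corpus = [
    "cats chase mice".split(),
    "dogs chase cats".split(),
    "mice fear cats".split(),
    "dogs fear thunder".split(),
]

index = defaultdict(index_vector)            # fixed random index vector per word
context = defaultdict(lambda: np.zeros(D))   # accumulated context vector per word

# accumulate co-occurrence statistics implicitly: for every word occurrence,
# add the index vectors of its neighbours within a +-1 window
for sent in corpus:
    for i, w in enumerate(sent):
        for j in (i - 1, i + 1):
            if 0 <= j < len(sent):
                context[w] += index[sent[j]]

def cos(a, b):
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

# "cats" and "dogs" occur in shared contexts (chase, fear), so their
# context vectors end up more similar than those of unrelated pairs
print(cos(context["cats"], context["dogs"]))
```

The contrast the quote draws is that this method reaches similarity-preserving vectors incrementally from streaming data, whereas the optimization-based route fits them to an explicit similarity objective.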