2007 IEEE 6th International Conference on Development and Learning
DOI: 10.1109/devlrn.2007.4354032

Basis iteration for reward based dimensionality reduction

Cited by 5 publications (4 citation statements)
References 6 publications
“…In our case, observations can contain continuous features, which would require an infinite number of states. State vector transformation approaches project a high-dimensional space onto a low-dimensional space, such as basis iteration for reward based dimensionality reduction (Sprague, 2007), locally linear embedding (LLE) (Roweis and Saul, 2000) and dimensionality reduction by learning an invariant mapping (DrLIM) (Hadsell et al., 2006). Apart from DrLIM, these methods require recomputation of the embedding for each unknown data point, and rely on predetermined computable distance metrics.…”
Section: Related Work (mentioning)
confidence: 99%
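The statement above characterizes state vector transformation as projecting a high-dimensional observation space onto a low-dimensional one. The following is a minimal sketch of that idea, assuming hypothetical dimensions and a randomly initialized projection matrix standing in for a learned embedding; none of the names or values come from the cited papers.

```python
import numpy as np

# Hypothetical illustration (not code from the cited papers): a state vector
# transformation that maps a high-dimensional continuous observation onto a
# low-dimensional state via a linear projection A. In practice A would be
# learned (e.g., by a reward-driven or manifold-learning method); here it is
# random just to show the shapes involved.
rng = np.random.default_rng(0)

obs_dim, latent_dim = 100, 3                      # illustrative dimensions
A = rng.standard_normal((latent_dim, obs_dim))    # stand-in for a learned projection

observation = rng.standard_normal(obs_dim)        # one continuous observation
embedded_state = A @ observation                  # low-dimensional state for the learner
print(embedded_state.shape)                       # -> (3,)
```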
“…A large state space increases computational costs and can lead to overfitting (Sỳkora, 2008). It is possible to reduce the state space through various methods, such as state clustering or segmentation [e.g., Q-learning with adaptive state segmentation (Murao and Kitamura, 1997)], state vector transformation [e.g., basis iteration for reward based dimensionality reduction (Sprague, 2007)] and state space reconstruction [e.g., action respecting embedding (Bowling et al., 2005; Sỳkora, 2008)]. State space reduction can improve generalization, as well as reduce computational complexity and learning time.…”
Section: Introduction (mentioning)
confidence: 99%
“…Dimension reduction is one method to deal with the freedom, and can be done via a developmental approach [94]. Other methods guide exploration by emotion- or motivation-based heuristics, such as curiosity or disappointment [95]-[98].…”
Section: Sensorimotor Control (mentioning)
confidence: 99%
“…However, it can lead to considerable information loss (Wang et al., 2017). Sprague (2007) proposed an iterative dimension reduction method using neighborhood components analysis. Their method uses a linear basis function to model the Q-function and cannot allow more general nonlinear function approximation.…”
Section: Introduction (mentioning)
confidence: 99%
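As the statement above notes, the cited method models the Q-function with a linear basis over a reward-driven low-dimensional projection. Below is a minimal sketch of a Q-function that is linear in a projected state, of the general form Q(s, a) = W[a]·(A s); the matrices, dimensions, and random initialization are hypothetical placeholders, not Sprague's (2007) implementation.

```python
import numpy as np

# Minimal sketch, not the paper's implementation: a Q-function that is linear
# in a low-dimensional projection of the state, Q(s, a) = W[a] @ (A @ s).
# A, W, and all dimensions are hypothetical; in the paper the projection is
# learned from reward, whereas here it is random for illustration only.
rng = np.random.default_rng(1)

obs_dim, latent_dim, n_actions = 20, 2, 4
A = rng.standard_normal((latent_dim, obs_dim))    # low-dimensional projection
W = rng.standard_normal((n_actions, latent_dim))  # per-action linear weights

def q_values(state):
    """Return one Q-value per action; linear in the projected state A @ state."""
    return W @ (A @ state)

state = rng.standard_normal(obs_dim)
print(q_values(state))                  # Q-values for each action
print(int(np.argmax(q_values(state))))  # greedy action index
```

Because the value estimate is linear in the projected state, value structure that is nonlinear in that projection cannot be represented, which is the limitation the citing authors point out.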