2021
DOI: 10.4208/csiam-am.2020-0220
Solving the $k$-Sparse Eigenvalue Problem with Reinforcement Learning

Abstract: We examine the possibility of using a reinforcement learning (RL) algorithm to solve large-scale eigenvalue problems in which the desired eigenvector can be approximated by a sparse vector with at most k nonzero elements, where k is relatively small compared to the dimension of the matrix to be partially diagonalized. This type of problem arises in applications in which the desired eigenvector exhibits localization properties and in large-scale eigenvalue computations in which the amount of computational re…
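
As a small illustration of the problem setup described above: for a fixed support set of k indices, the best attainable k-sparse Rayleigh quotient is the smallest eigenvalue of the corresponding k × k principal submatrix. The sketch below is an assumption-laden illustration, not the paper's algorithm; the names `ksparse_eigenvalue`, `H`, and `support` are ours.

```python
import numpy as np

def ksparse_eigenvalue(H, support):
    """Smallest eigenvalue of symmetric H restricted to a candidate support.

    If the eigenvector's nonzeros are confined to `support` (|support| = k),
    the optimal Rayleigh quotient over that support is the smallest
    eigenvalue of the k x k principal submatrix H[support, support].
    """
    idx = np.asarray(sorted(support))
    sub = H[np.ix_(idx, idx)]           # k x k principal submatrix
    vals, vecs = np.linalg.eigh(sub)    # dense solve is cheap for small k
    v = np.zeros(H.shape[0])
    v[idx] = vecs[:, 0]                 # embed the sparse eigenvector back
    return vals[0], v

# Toy usage: a 100 x 100 symmetric matrix and a 5-element candidate support.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 100))
H = (A + A.T) / 2
lam, v = ksparse_eigenvalue(H, support={3, 17, 42, 58, 90})
```

Searching for the support set that minimizes this restricted eigenvalue is the combinatorial part of the problem that the RL agent addresses.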

Cited by 1 publication (3 citation statements)
References: 29 publications
“…In reinforcement learning, an agent is trained to take a series of actions in order to maximize a reward. Herein, we follow much of the notation and framework laid out for the k-sparse eigenproblem in ref and apply it to the sCI case. The reinforcement learning process proceeds in a series of episodes, where the agent explores the environment and obtains rewards or penalties for its actions.…”
Section: Methods
mentioning confidence: 99%
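
The episode structure described in this statement can be sketched generically. The toy environment and epsilon-greedy value updates below are placeholders of our own, not the eigenproblem environment of the paper; they only illustrate "episodes, exploration, rewards or penalties".

```python
import random

class ToyEnv:
    """Stand-in environment: guess a hidden index; reward +1, penalty -1."""
    def __init__(self, n=5):
        self.n = n
    def reset(self):
        self.target = random.randrange(self.n)
        return 0                              # trivial start state
    def step(self, action):
        reward = 1.0 if action == self.target else -1.0
        return 0, reward, True                # one action per episode here

def train(env, n_episodes=500, eps=0.1, lr=0.1):
    values = [0.0] * env.n                    # simple action-value table
    for _ in range(n_episodes):               # the series of episodes
        env.reset()
        done = False
        while not done:
            if random.random() < eps:         # explore occasionally
                action = random.randrange(env.n)
            else:                             # otherwise exploit
                action = max(range(env.n), key=values.__getitem__)
            _, reward, done = env.step(action)
            values[action] += lr * (reward - values[action])
    return values

values = train(ToyEnv())
```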
“…Note that if an action is selected, the loop over S1 is immediately terminated. Action pairs a = (p, q), p ∈ s, q ∉ s from the two lists are iterated over, and an action a is taken if it satisfies a Metropolis-like criterion as detailed in ref. If no action is selected, the learning procedure terminates. Once the action a is selected, the search policy terminates, the local reward r is computed according to eq, and the weights w and v are updated according to eqs and, respectively.…”
Section: Methods
mentioning confidence: 99%
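
The swap-move structure described here admits a compact sketch: candidate actions a = (p, q) exchange one index inside the support for one outside it, accepted by a Metropolis-like test. The acceptance probability, the `beta` parameter, and the function names below are our assumptions; the paper's local reward r and the updates of the weights w and v are not reproduced.

```python
import math
import random

def metropolis_swap(H, support, score, beta=1.0):
    """One Metropolis-like pass over swap actions a = (p, q), p in s, q not in s.

    `score(H, support)` is a user-supplied objective, e.g. the restricted
    smallest eigenvalue of the k x k principal submatrix. Returns the first
    accepted action and the new support, or (None, support) if no action is
    accepted, in which case the caller can terminate the search.
    """
    outside = set(range(len(H))) - support
    old = score(H, support)
    for p in support:
        for q in outside:
            trial = (support - {p}) | {q}
            delta = score(H, trial) - old
            # Standard Metropolis acceptance: always accept improvements,
            # accept worse moves with probability exp(-beta * delta).
            if delta <= 0 or random.random() < math.exp(-beta * delta):
                return (p, q), trial          # accept and stop scanning
    return None, support                      # no action accepted: terminate
```

A natural choice for `score` is the restricted-eigenvalue evaluation sketched after the abstract above.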