2022
DOI: 10.48550/arxiv.2202.05008
Preprint

EvoJAX: Hardware-Accelerated Neuroevolution

Yujin Tang,
Yingtao Tian,
David Ha

Abstract: Evolutionary computation has been shown to be a highly effective method for training neural networks, particularly when employed at scale on CPU clusters. Recent work has also showcased its effectiveness on hardware accelerators, such as GPUs, but so far such demonstrations are tailored for very specific tasks, limiting applicability to other domains. We present EvoJAX, a scalable, general purpose, hardware-accelerated neuroevolution toolkit. Building on top of the JAX library, our toolkit enables neuroevol…
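A minimal sketch of the core pattern such a toolkit builds on (illustrative plain JAX, not the EvoJAX API; the tiny MLP, fitness function, and task data below are hypothetical stand-ins): an entire population of network parameters is scored in one jit-compiled, vmapped call on the accelerator.

# Sketch only: parallel fitness evaluation of a population with jax.vmap + jax.jit.
import jax
import jax.numpy as jnp

def forward(params, x):
    # A tiny two-layer MLP; params is a (w1, b1, w2, b2) tuple.
    w1, b1, w2, b2 = params
    h = jnp.tanh(x @ w1 + b1)
    return h @ w2 + b2

def fitness(params, inputs, targets):
    # Hypothetical fitness: negative regression error on a fixed batch.
    preds = forward(params, inputs)
    return -jnp.mean((preds - targets) ** 2)

def init_params(key, in_dim=4, hidden=16, out_dim=1):
    k1, k2 = jax.random.split(key)
    return (jax.random.normal(k1, (in_dim, hidden)) * 0.1,
            jnp.zeros(hidden),
            jax.random.normal(k2, (hidden, out_dim)) * 0.1,
            jnp.zeros(out_dim))

pop_size = 256
keys = jax.random.split(jax.random.PRNGKey(0), pop_size)
population = jax.vmap(init_params)(keys)          # stacked parameter pytrees

inputs = jnp.ones((32, 4))                        # placeholder task data
targets = jnp.zeros((32, 1))

# One compiled call evaluates the whole population in parallel on the device.
eval_pop = jax.jit(jax.vmap(fitness, in_axes=(0, None, None)))
scores = eval_pop(population, inputs, targets)    # shape: (pop_size,)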


Cited by 4 publications (5 citation statements)
References 18 publications
“…Over the last few years, JAX (Bradbury et al., 2018) has seen increasing adoption by the research community (Heek et al., 2020; Ro et al., 2021; Tang et al., 2022). The key difference between JAX and other popular deep learning frameworks like PyTorch and TensorFlow is the clear separation between functions and state.…”
Section: Why a New Sparsity Library In Jax? (mentioning)
confidence: 99%
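A small illustration of the "separation between functions and state" the statement above refers to (my own sketch, not taken from the cited papers): parameters are plain data passed explicitly into pure functions, which is what lets transformations like grad, jit, and vmap compose freely.

# Sketch only: model parameters live outside the function as an explicit pytree.
import jax
import jax.numpy as jnp

def predict(params, x):
    # Pure function: the output depends only on its inputs, no hidden state.
    return jnp.dot(x, params["w"]) + params["b"]

def loss(params, x, y):
    return jnp.mean((predict(params, x) - y) ** 2)

params = {"w": jnp.zeros((3,)), "b": 0.0}
x = jnp.ones((8, 3))
y = jnp.ones((8,))

grads = jax.grad(loss)(params, x, y)   # gradients mirror the structure of params
params = jax.tree_util.tree_map(lambda p, g: p - 0.1 * g, params, grads)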
“…Recent advances in hardware acceleration have led to new QD libraries such as QDax [20] or EvoJax [21]. These tools rely on highly-parallelised simulators like Brax [22] that can run on accelerators (e.g., GPUs and TPUs) and thus target simulated domains, for example, robotics control, where they drastically reduce the evaluation time.…”
Section: B. Hardware-accelerated Quality-diversity (mentioning)
confidence: 99%
“…Nonetheless, recent advances in computer systems enable the high-parallelisation of evaluations. Recent libraries such as QDax [20], or EvoJax [21] based on the Brax simulator [22] allowed to speedup computation by a large order of magnitude thanks to the high-parallelisation of evaluations. With such tools, we now have access to 10 or 100 times more evaluations per generation within the same amount of time.…”
Section: Introduction (mentioning)
confidence: 99%
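A sketch of the "many more evaluations per generation" pattern these statements describe, with a toy analytic objective standing in for a Brax or robotics simulator (the objective, dimensionality, and batch size are illustrative assumptions):

# Sketch only: one jit-compiled, vmapped call scores thousands of candidates at once.
import jax
import jax.numpy as jnp

def rastrigin(x):
    # Standard benchmark objective; lower is better.
    return 10.0 * x.shape[0] + jnp.sum(x ** 2 - 10.0 * jnp.cos(2.0 * jnp.pi * x))

batched_eval = jax.jit(jax.vmap(rastrigin))

key = jax.random.PRNGKey(42)
candidates = jax.random.uniform(key, (10_000, 20), minval=-5.12, maxval=5.12)
scores = batched_eval(candidates)       # 10,000 evaluations in a single device call
best = candidates[jnp.argmin(scores)]   # e.g. the elite an ES or QD archive would keep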
“…So unlike traditionally training a neural network to perform one task, where the weight parameters of neural networks are traditionally optimized with a gradient descent algorithm, or with evolution strategies (Tang et al 2022), the goal of meta-learning is to train a meta-learner (which can be another neural network-based system) to learn a learning algorithm. This is a particularly challenging task, with a long history (see Schmidhuber (Schmidhuber, 2020) for a review).…”
Section: Meta-learning (mentioning)
confidence: 99%