TextNAS: A Neural Architecture Search Space Tailored for Text Representation
2020
DOI: 10.1609/aaai.v34i05.6462

Abstract: Learning text representation is crucial for text classification and other language-related tasks. There is a diverse set of text representation networks in the literature, and how to find the optimal one is a non-trivial problem. Recently, the emerging Neural Architecture Search (NAS) techniques have demonstrated good potential to solve the problem. Nevertheless, most of the existing works of NAS focus on the search algorithms and pay little attention to the search space. In this paper, we argue that the search space…


Citations: Cited by 43 publications (32 citation statements)
References: 27 publications

Citation statements:
“…Once the supernet is trained, each sampled structure can be directly and quickly evaluated without training from scratch. In our work, we also leverage the weight-sharing strategy and build a one-shot model [8] with the TextNAS [22] search space, while random search is adopted to sample architectures. In addition, we incorporate knowledge distillation and efficiency constraints as search hints to obtain effective and lightweight student models.…”
Section: Related Work (mentioning)
confidence: 99%
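
The weight-sharing scheme this excerpt describes is concrete enough to sketch. Below is a minimal, illustrative one-shot setup in PyTorch: a supernet whose layers hold several candidate operations with shared weights, plus random search over sub-architectures that are scored without retraining. All names (`SuperNet`, `MixedLayer`, `random_arch`), the candidate operations, dimensions, and the random validation batch are assumptions for illustration, not details taken from TextNAS or the citing paper.

```python
import random
import torch
import torch.nn as nn

# Illustrative candidate operations; TextNAS's actual search space differs.
CANDIDATE_OPS = ["conv3", "conv5", "max_pool", "self_attn"]

class MixedLayer(nn.Module):
    """One searchable layer: every candidate op exists in the supernet,
    but only the sampled op runs for a given sub-architecture."""
    def __init__(self, dim):
        super().__init__()
        self.ops = nn.ModuleDict({
            "conv3": nn.Conv1d(dim, dim, kernel_size=3, padding=1),
            "conv5": nn.Conv1d(dim, dim, kernel_size=5, padding=2),
            "max_pool": nn.MaxPool1d(kernel_size=3, stride=1, padding=1),
            "self_attn": nn.MultiheadAttention(dim, num_heads=4, batch_first=True),
        })

    def forward(self, x, op_name):          # x: (batch, seq_len, dim)
        if op_name == "self_attn":
            out, _ = self.ops[op_name](x, x, x)
            return out
        # Conv/pool ops expect (batch, dim, seq_len)
        return self.ops[op_name](x.transpose(1, 2)).transpose(1, 2)

class SuperNet(nn.Module):
    """Weight-sharing supernet: weights are trained once and reused by
    every sampled sub-architecture."""
    def __init__(self, dim=64, depth=4, num_classes=5):
        super().__init__()
        self.layers = nn.ModuleList(MixedLayer(dim) for _ in range(depth))
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x, arch):
        for layer, op_name in zip(self.layers, arch):
            x = layer(x, op_name)
        return self.head(x.mean(dim=1))     # mean-pool over the sequence

def random_arch(depth=4):
    """Random search: pick one candidate op per layer, uniformly."""
    return [random.choice(CANDIDATE_OPS) for _ in range(depth)]

# After (hypothetical) supernet training, sampled structures are scored
# directly with the shared weights -- no per-candidate training.
supernet = SuperNet().eval()
x_val = torch.randn(8, 32, 64)              # random stand-in validation batch
y_val = torch.randint(0, 5, (8,))

def val_accuracy(arch):
    with torch.no_grad():
        return (supernet(x_val, arch).argmax(-1) == y_val).float().mean().item()

best_arch = max((random_arch() for _ in range(20)), key=val_accuracy)
```

Because every sampled sub-network reuses the same shared parameters, ranking the 20 candidates above costs 20 forward passes rather than 20 training runs, which is the speedup the excerpt attributes to the one-shot approach.
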
“…Neural Architecture Search: The task is to find an effective and efficient architecture for the AutoADR sub-model through a teacher-student framework. We adopt the search space of TextNAS [22] and build a corresponding network subsuming all possible architectures, which is called the supernet [8], for the one-shot search algorithm. To apply knowledge distillation, we train the supernet with uniform sampling under the guidance of the soft predictions from the teacher model.…”
Section: AutoADR 4.1 Overview (mentioning)
confidence: 99%
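
A hedged sketch of the training step this excerpt describes, reusing the illustrative `SuperNet` and `random_arch` from the previous block: each update uniformly samples one sub-architecture from the supernet and distills the teacher's softened predictions into it. The temperature, optimizer, and KL-based distillation loss are standard choices assumed here, not taken from the AutoADR paper, and its efficiency constraints are omitted.

```python
import torch
import torch.nn.functional as F

def distill_step(supernet, optimizer, x, teacher_logits, temperature=2.0):
    """One supernet update: uniformly sample a sub-architecture, then match
    the teacher's softened predictions (knowledge-distillation loss)."""
    arch = random_arch(depth=len(supernet.layers))    # uniform sampling
    student_logits = supernet(x, arch)
    loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2                              # standard KD scaling
    # (The cited work also adds efficiency constraints as search hints;
    # that term is omitted from this sketch.)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Random tensors stand in for a real batch and for the teacher model's
# precomputed soft predictions.
supernet.train()
opt = torch.optim.Adam(supernet.parameters(), lr=1e-3)
loss = distill_step(supernet, opt, torch.randn(8, 32, 64), torch.randn(8, 5))
```

Sampling architectures uniformly during training keeps the shared weights useful for any sub-network that may later be drawn, which is what makes the one-shot evaluation in the previous sketch meaningful.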