Proceedings of the Canadian Conference on Artificial Intelligence 2021
DOI: 10.21428/594757db.a4880a62
A BERT-based transfer learning approach to text classification on software requirements specifications

Abstract: In a software development life cycle, software requirements specifications (SRS) written in incomprehensible language might hinder the success of the project in later stages. In such cases, the subjective and ambiguous nature of natural language can be considered a cause of the final product's failure. Redundant and/or contradictory information in SRS documents might also result in additional costs and time loss, reducing the overall efficiency of the project. With the recent advances in …

Cited by 27 publications (18 citation statements) | References 24 publications
“…Hey et al. [25] proposed NoRBERT, based on the pre-trained model BERT, which achieved excellent results. In addition, Kici et al. [26] proposed a DistilBERT-based approach and showed through experimental comparison that it outperforms LSTM and BiLSTM on requirements classification tasks.…”
Section: B. Deep Learning Based Requirements Classification Methods
confidence: 99%
“…BERT is a pre-trained masked language model. Previous studies have demonstrated that BERT can significantly improve classification performance [25,26], far exceeding traditional feature-extraction methods.…”
Section: Introduction
confidence: 95%
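The masked-language-model pre-training objective mentioned in the excerpt above can be illustrated with a toy, stdlib-only sketch. In BERT's pre-training, roughly 15% of input tokens are selected; of those, 80% are replaced with a [MASK] token, 10% with a random token, and 10% are left unchanged, and the model learns to predict the originals. The function name and the word-level (rather than WordPiece subword) granularity here are illustrative simplifications, not the paper's implementation:

```python
import random

MASK_TOKEN = "[MASK]"

def mask_tokens(tokens, mask_prob=0.15, seed=0):
    """Toy version of BERT's masked-LM input corruption.

    Each position is selected with probability `mask_prob`. A selected
    token becomes [MASK] (80%), a random token from the sequence (10%),
    or stays unchanged (10%). Returns the corrupted token list and the
    indices the model would be trained to predict.
    """
    rng = random.Random(seed)
    out = list(tokens)
    targets = []
    for i in range(len(tokens)):
        if rng.random() < mask_prob:
            targets.append(i)
            r = rng.random()
            if r < 0.8:
                out[i] = MASK_TOKEN          # 80%: replace with [MASK]
            elif r < 0.9:
                out[i] = rng.choice(tokens)  # 10%: replace with a random token
            # else: 10% keep the original token
    return out, targets

corrupted, targets = mask_tokens("the system shall log all user actions".split())
```

Positions outside `targets` are always left intact, which is why only the selected indices contribute to the pre-training loss.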
“…Word Embedding | Related Papers
word2vec | [43], [50], [58], [59], [67], [76], [101], [104], [106], [139]
BERT | [11], [47], [56], [64], [84], [91], [98]
Embedding Layer | [39], [51], [112]…”
Section: Advanced Embedding Representations
confidence: 99%
“…TABLE 3. The statement embedding techniques used, with their related papers:
Aggregation-based | [11], [47], [56], [58], [59], [64], [66], [76], [84], [91], [98], [101], [104], [128]
RNN-based | [43], [51], [54], [67], [112], [137]
CNN-based | [39], [50]…”
Section: Related Papers
confidence: 99%
“…We recognized 21 papers proposing solutions to various requirements classification tasks. Most of them focused on functional/non-functional classification [39,40,41,10,11,42,43,44,45,46,47,48,49], while the remaining focused on other classification tasks: security/non-security [50,51,52], topic-based classification [53], and classification based on requirements importance level [54].…”
Section: Requirements Analysis
confidence: 99%