2022 IEEE International Conference on Software Analysis, Evolution and Reengineering (SANER)
DOI: 10.1109/saner53432.2022.00054
Aspect-Based API Review Classification: How Far Can Pre-Trained Transformer Model Go?

Cited by 28 publications (7 citation statements) · References 30 publications
“…The results in Table 1 are close to the results reported by Wang [28], which evaluates the three models.…”
Section: Settings Of Victim Models
confidence: 99%
“…With the emergence of Open-Source Software (OSS) data and advances in Deep Neural Networks (DNN), recent years have witnessed a dramatic rise in applying DNN-based models to critical software engineering tasks [1], including function name prediction [2], code search [3], clone detection [4], API classification [5], Stack Overflow post tagging [6], etc. Meanwhile, the security issues of these models have also become a growing concern.…”
Section: Introduction
confidence: 99%
“…We have described two representatives of encoder-only models, CodeBERT [10] and GraphCodeBERT [15], in Section 2.1. The two models have demonstrated good performance across multiple software engineering tasks, including API review [46], Stack Overflow post analysis [16], etc. There are some other encoder-only pretrained models of code.…”
Section: Pre-trained Models Of Code
confidence: 99%
“…Machine learning (ML) projects are becoming increasingly popular and play essential roles in various domains, e.g., code processing [7], [8], self-driving cars, speech recognition [9], etc. Despite widespread usage and popularity, only a few research works try to examine AI and ML projects to identify unique properties, development patterns, and trends.…”
Section: Introduction
confidence: 99%
“…the code is split into different modules and no ad-hoc scripts; (3) check whether good documentation is provided; (4) check if the project uses issues to track new features and bugs; (5) check if the project uses a CI service, e.g., Travis, CircleCI, etc.; (6) check if the project was updated within the last month; (7) check how many active contributors the project has; (8) check whether the project provides a license. For every point in the guideline, we consider the following dimensions for the project assessment: unit testing for point (1), architecture for point (2), documentation for point (3), issues for point (4), CI for point (5), history for point (6), community for point (7), and license for point (8). Aside from providing a label on whether a project is engineered or not, the labellers also provide descriptive information for every dimension.…”
confidence: 99%