2021
DOI: 10.3390/s21217401
Software Fault Localization through Aggregation-Based Neural Ranking for Static and Dynamic Features Selection

Abstract: The automatic localization of software faults plays a critical role in assisting software professionals in fixing problems quickly. Despite various existing models for fault tolerance based on static features, localization is still challenging. By considering the dynamic features, the capabilities of the fault recognition models will be significantly enhanced. The current study proposes a model that effectively ranks static and dynamic parameters through Aggregation-Based Neural Ranking (ABNR). The proposed mo…

Citations: Cited by 5 publications (2 citation statements)
References: 49 publications
“…When independent factors have varying effects on the class label, weighting the attributes can boost performance and makes in-class labeling feasible. The normalized mutual information (NMI) [62] is used as the feature weight between each feature and the class label. This dataset contains two parameters, α and β, over the samples i, j with respect to the class label c. Here, the class label of the instances is required to determine the normalized mutual information.…”
Section: Explainable Feature Weight Initialization and Normalization
confidence: 99%
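The quoted passage weights each feature by its normalized mutual information with the class label. The sketch below illustrates that idea with scikit-learn's normalized_mutual_info_score; the helper name nmi_feature_weights, the discrete toy data, and the sum-to-one normalization are illustrative assumptions, not the cited paper's exact formulation in terms of α, β, i, j, and c.

```python
# Minimal sketch of NMI-based feature weighting (illustrative, not the
# cited paper's exact procedure). Assumes discrete feature values in X
# (n_samples x n_features) and integer class labels y.
import numpy as np
from sklearn.metrics import normalized_mutual_info_score

def nmi_feature_weights(X, y):
    """Weight each feature by its NMI with the class label."""
    weights = np.array([
        normalized_mutual_info_score(X[:, j], y)  # NMI between feature j and y
        for j in range(X.shape[1])
    ])
    return weights / weights.sum()  # normalize weights to sum to 1 (assumed convention)

# Toy usage: feature 0 copies the label, feature 1 is noisy, feature 2 is noise.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=200)
X = np.column_stack([
    y,
    y ^ rng.integers(0, 2, size=200),
    rng.integers(0, 4, size=200),
])
print(nmi_feature_weights(X, y))  # feature 0 receives by far the largest weight
```

As the toy run suggests, features that carry more information about the class label receive proportionally larger weights, which is the behavior the citing paper relies on for its weight initialization.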
“…Diverse classification models can be constructed using various machine intelligence techniques to conduct testing more efficiently. Different machine intelligence techniques have been explored to predict erroneous code statements in software modules to improve software quality and minimize software testing expenses [4]. The quantity and length of assessments administered significantly influence the efficacy of tests.…”
Section: Introduction
confidence: 99%