Proceedings of the 16th ACM SIGPLAN International Conference on Generative Programming: Concepts and Experiences 2017
DOI: 10.1145/3136040.3136052
Analyzing the impact of natural language processing over feature location in models

Cited by 5 publications (6 citation statements)
References 26 publications
“…4) The Domain Term Extraction and Stopword Removal techniques are applied to automatically filter terms in or out. We selected these NLP techniques to homogenize the text of the inputs of the evolutionary algorithm since they obtained the best results in a previous work [94].…”
Section: Natural Language Processing
confidence: 99%
“…For this reason, Natural Language Processing (NLP) techniques are used to process both the requirements and the model fragments before applying the encoding. Specifically, the requirements and the model fragments are processed by a combination of NLP techniques defined in [23], which consists of tokenizing, lowercasing, removal of duplicate keywords, syntactical analysis, lemmatization, and stopword removal.…”
Section: Model Fragment
confidence: 99%
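The preprocessing pipeline the statements above attribute to [23] could be sketched roughly as follows. This is an illustrative sketch only, not the authors' implementation: the `preprocess` function and the tiny stopword list are assumptions, and the syntactical-analysis and lemmatization steps are omitted because they require a linguistic resource (e.g. a POS tagger and WordNet).

```python
import re

# Illustrative sketch of the pipeline described in the citing papers:
# tokenizing, lowercasing, stopword removal, and duplicate-keyword
# removal. The stopword list is a tiny illustrative sample.
STOPWORDS = {"the", "a", "an", "of", "and", "to", "is", "are"}

def preprocess(text: str) -> list[str]:
    tokens = re.findall(r"[A-Za-z]+", text)              # tokenizing
    tokens = [t.lower() for t in tokens]                 # lowercasing
    tokens = [t for t in tokens if t not in STOPWORDS]   # stopword removal
    seen, keywords = set(), []
    for t in tokens:                                     # duplicate-keyword removal
        if t not in seen:
            seen.add(t)
            keywords.append(t)
    return keywords

print(preprocess("The valve opens the safety valve"))
# → ['valve', 'opens', 'safety']
```

The order of the steps matters: lowercasing before deduplication ensures that "Valve" and "valve" collapse into a single keyword.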
“…Since both queries and documents are based on natural language, Natural Language Processing (NLP) techniques are used to process them. In fact, NLP has a direct and beneficial impact on the results, so before applying LSI, the queries and the documents are processed by a combination of NLP techniques defined in [23], which consists of tokenizing, lowercasing, removal of duplicate keywords, syntactical analysis, lemmatization, and stopword removal. Then, LSI constructs vector representations of both a user query and a corpus of text documents by encoding them as a term-by-document co-occurrence matrix and analyzes the relationships between those vectors to get a similarity ranking between the query and the documents (see Figure 10).…”
Section: TLR-IR: Information Retrieval Baseline
confidence: 99%
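The LSI step quoted above (a term-by-document co-occurrence matrix, reduced and compared against a query vector) could be sketched as below. The corpus, the query, and the choice of k = 2 latent dimensions are illustrative assumptions, not data from the paper; the query fold-in uses the standard LSI projection q_k = S_k^{-1} U_k^T q.

```python
import numpy as np

# Hypothetical sketch of LSI: encode documents as a term-by-document
# co-occurrence matrix, reduce it with a truncated SVD, fold the query
# into the latent space, and rank documents by cosine similarity.
docs = [["valve", "open", "pressure"],   # illustrative corpus
        ["valve", "close"],
        ["battery", "charge", "power"]]
query = ["valve", "pressure"]            # illustrative query

terms = sorted({t for d in docs for t in d})
A = np.array([[d.count(t) for d in docs] for t in terms], dtype=float)  # terms x docs

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2                                    # latent dimensions (assumption)
Uk, sk, Vtk = U[:, :k], s[:k], Vt[:k, :]

q = np.array([query.count(t) for t in terms], dtype=float)
q_lat = (Uk.T @ q) / sk                  # query folded into the latent space
doc_lat = Vtk                            # document coordinates, shape (k, n_docs)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

sims = [cosine(q_lat, doc_lat[:, j]) for j in range(len(docs))]
ranking = sorted(range(len(docs)), key=lambda j: -sims[j])
print(ranking)  # document indices ordered by similarity to the query
```

Here the unrelated third document ends up last in the ranking, while the two valve documents score near 1 because they share the query's latent direction.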
“…For this reason, Natural Language Processing (NLP) techniques are used to process both the model fragments and the queries before applying the encoding. Specifically, the model fragments and the requirements are processed by a combination of NLP techniques defined in [104], which consists of tokenizing, lowercasing, removal of duplicate keywords, syntactical analysis, lemmatization, and stopword removal.…”
Section: Ontological Encoding
confidence: 99%
“…the TLR-IR approach). Specifically, the queries and the models are processed by a combination of NLP techniques defined in [104], which consists of tokenizing, lowercasing, removal of duplicate keywords, syntactical analysis, lemmatization, and stopword removal.…”
Section: Approaches Under Evaluation
confidence: 99%