2022
DOI: 10.1007/s41870-022-00884-2
MUCE: a multilingual use case model extractor using GPT-3

Cited by 15 publications (10 citation statements)
References 14 publications
“…This key feature enables GPT-3 to predict the next word in a sentence sequence adeptly. It consists of two main components: an encoder and a decoder [23]. The encoder accepts the preceding word in a sentence and transfigures it into a vector format, thereby capturing its contextual essence.…”
Section: Generative Pre-trained Transformer-3 (Gpt-3) Modelmentioning
confidence: 99%
“…Conversely, the remaining engines offer powerful yet economical alternatives. The Curie engine manages complex tasks requiring advanced knowledge; Babbage stands out in performing specific operations, while Ada is ideally suited for simpler, more straightforward tasks [23]. The Davinci engine is the most fitting choice for our tool, ClassDiagGen, which diligently analyses complex text to identify classes, attributes, methods, and relationships.…”
Section: Generative Pre-trained Transformer-3 (Gpt-3) Modelmentioning
confidence: 99%
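The engine trade-off described in the statement above can be sketched as a small selection helper. The function names, the coarse task tiers, and the request fields are my own illustration; only the legacy engine identifiers ("davinci", "curie", "babbage", "ada") come from the quoted text.

```python
# Hypothetical sketch: choosing a GPT-3 engine by task complexity,
# mirroring the Davinci > Curie > Babbage > Ada capability ordering
# described above. Tier names and helpers are illustrative assumptions.

def pick_engine(task: str) -> str:
    """Map a coarse task description to a legacy GPT-3 engine name."""
    tiers = {
        "complex-analysis": "davinci",    # deep text analysis (e.g. class extraction)
        "advanced-knowledge": "curie",    # complex tasks, cheaper than Davinci
        "targeted-operation": "babbage",  # specific, narrower operations
        "simple": "ada",                  # straightforward, low-cost tasks
    }
    return tiers.get(task, "davinci")     # default to the most capable engine


def build_completion_request(task: str, prompt: str) -> dict:
    """Assemble a request payload for a legacy-style completions call."""
    return {
        "engine": pick_engine(task),
        "prompt": prompt,
        "max_tokens": 256,
        "temperature": 0.0,  # deterministic output suits extraction tasks
    }


req = build_completion_request(
    "complex-analysis",
    "List the classes in the following requirements: ...",
)
print(req["engine"])  # davinci
```

The deterministic temperature and the fallback to the most capable engine reflect the extraction-oriented use described in the quote, where missing an element is costlier than extra API spend.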
“…Its critical advantage is its size: it contains around 175 billion parameters and is 100 times bigger than GPT-2. It is trained on a 500-billion-word data set, i.e., Common Crawl, collected from the web and available repositories [24]. The contrast between the three GPT models is their size.…”
Section: Gpt-3mentioning
confidence: 99%
“…GPT-1 established the baseline, and with GPT-2 the number of parameters was increased to 1.5 billion. With GPT-3, the number of parameters was boosted to 175 billion, making it the biggest artificial neural network [24,25]. (Shown in Table 1; see Tables 2 and 3.)…”
Section: Gpt-3mentioning
confidence: 99%
“…We assume that use case models are fundamental artifacts of any object-oriented system and are readily available to system designers. The use case set, UC_A, can be gathered manually from the requirement documents, or its collection can be automated by parsing the given software requirements using NLP-based language models as discussed in [44].…”
Section: B Proposed Approachmentioning
confidence: 99%
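A minimal sketch of the automated path described above: gathering the use case set UC_A from a language model's answer, assuming the model returns one use case per line. The parsing function, the numbered-list format, and the example response are illustrative assumptions, not taken from [44].

```python
# Illustrative sketch: building a use case set (UC_A) from a model
# response. The response text is hard-coded here to stand in for the
# output of an NLP language model run over requirement documents.

def parse_use_cases(model_output: str) -> set[str]:
    """Parse a one-use-case-per-line response into a use case set."""
    uc_a = set()
    for line in model_output.splitlines():
        # Strip list markers such as "1. ", "- ", "* " from each line.
        name = line.strip().lstrip("-*0123456789. ").strip()
        if name:
            uc_a.add(name)
    return uc_a


# Stand-in for a model's answer to a prompt like:
# "List the use cases in the following requirements: ..."
response = """
1. Register Account
2. Place Order
3. Track Shipment
"""
print(sorted(parse_use_cases(response)))  # ['Place Order', 'Register Account', 'Track Shipment']
```

Returning a set rather than a list matches the "use case set" framing in the quote: duplicate mentions of the same use case across requirement documents collapse to a single element.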