2020
DOI: 10.1016/j.knosys.2019.105422
A deep-learning approach to mining conditions

Cited by 3 publications (3 citation statements)
References 35 publications

“…To improve the understanding of sentences, recently developed pattern-based methods can be used [77]. Alternative embedding-based schemes, such as GloVe, fastText, Sentence-BERT, Universal Sentence Encoder and Word Mover's Embedding, can serve to generate word-emotion association. The proposed model should also be used in multi-class sentiment analysis, and new powerful supervised machine learning methods should be employed to automate the design of neural network models, such as neural dynamic classification [78] and dynamic ensembles of neural networks.…”
Section: Discussion
Mentioning, confidence: 99%
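The embedding-based word-emotion association this statement suggests can be illustrated with a short sketch. The Python snippet below is a minimal, hypothetical example: the emotion_association helper, the toy embedding table and the seed lists are invented for illustration and are not taken from the cited papers. It scores a word against each emotion by averaging cosine similarities between its vector and a few seed words, which is a common way GloVe- or fastText-style embeddings are used for this purpose.

import numpy as np

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def emotion_association(word: str,
                        embeddings: dict[str, np.ndarray],
                        emotion_seeds: dict[str, list[str]]) -> dict[str, float]:
    """Score a word against each emotion by averaging its similarity
    to a small set of seed words for that emotion."""
    scores = {}
    word_vec = embeddings[word]
    for emotion, seeds in emotion_seeds.items():
        sims = [cosine(word_vec, embeddings[s]) for s in seeds if s in embeddings]
        scores[emotion] = float(np.mean(sims)) if sims else 0.0
    return scores

if __name__ == "__main__":
    # Tiny random embedding table so the sketch runs without downloads;
    # in practice these vectors would come from GloVe, fastText, etc.
    rng = np.random.default_rng(0)
    vocab = ["happy", "delighted", "furious", "angry", "party"]
    toy_embeddings = {w: rng.normal(size=50) for w in vocab}
    seeds = {"joy": ["happy", "delighted"], "anger": ["furious", "angry"]}
    print(emotion_association("party", toy_embeddings, seeds))

With real pre-trained vectors, the same function yields graded emotion scores per word, which can then feed a multi-class sentiment model as the quoted authors propose.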
“…Unfortunately, the need to introduce machine-learnt components to perform POS tagging or dependency analysis results in a proposal that cannot attain perfect parsing accuracy, which is a strong requirement. The proposals by Aiello et al. [16], Hatano et al. [17], Chittimalli et al. [21], Gallego and Corchuelo [22,23], and Haj et al. [24] used similar approaches and have the same problem. Only the proposals by Zámečníková and Kreslíková [18] and Hnatkowska and Gaweda [19] can achieve perfect parsing accuracy because they rely on grammars that either parse a piece of text perfectly or report an error so that the user can correct it; unfortunately, neither of them provides full support for SBVR-SE, and the authors did not explore languages other than English; furthermore, it is not clear whether Zámečníková and Kreslíková's [18] proposal is open-domain, and it produces non-executable decision tables.…”
Section: Decisionrules Digifi Hyperon Ibm Inrule Microsoft Oracle Peg...
Mentioning, confidence: 99%
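The "parse perfectly or report an error" behaviour that this statement attributes to grammar-based proposals can be sketched with a tiny hand-written recursive-descent parser. The toy controlled rule language below (if <condition> then <action>.), the parse_rule function and its error messages are invented for illustration; they are unrelated to SBVR-SE or to the actual grammars used in the cited works.

KEYWORDS = {"if", "then", "and", "or"}

class ParseError(Exception):
    pass

def tokenize(text: str) -> list[str]:
    # Split the sentence into lowercase word tokens, keeping the final period.
    return text.replace(".", " .").lower().split()

def parse_rule(text: str) -> dict:
    tokens = tokenize(text)
    pos = 0

    def expect(token: str) -> None:
        nonlocal pos
        if pos >= len(tokens) or tokens[pos] != token:
            found = tokens[pos] if pos < len(tokens) else "<end of input>"
            raise ParseError(f"expected '{token}' but found '{found}' at position {pos}")
        pos += 1

    def phrase() -> str:
        # One or more words that are neither keywords nor the final period.
        nonlocal pos
        words = []
        while pos < len(tokens) and tokens[pos] not in KEYWORDS and tokens[pos] != ".":
            words.append(tokens[pos])
            pos += 1
        if not words:
            raise ParseError(f"expected a phrase at position {pos}")
        return " ".join(words)

    expect("if")
    conditions = [phrase()]
    connectives = []
    while pos < len(tokens) and tokens[pos] in {"and", "or"}:
        connectives.append(tokens[pos])
        pos += 1
        conditions.append(phrase())
    expect("then")
    action = phrase()
    expect(".")
    return {"conditions": conditions, "connectives": connectives, "action": action}

if __name__ == "__main__":
    print(parse_rule("If the order total exceeds 100 euros then a discount applies."))
    try:
        parse_rule("The discount applies whenever the total is high.")
    except ParseError as err:
        print("rejected:", err)

A sentence outside the grammar raises ParseError rather than producing a partial or guessed analysis, which is exactly the property the reviewers highlight for grammar-based approaches.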
“…Chittimalli et al. [21] presented a recent follow-up that uses a pipeline with three stages: first, two three-gram statistical models are used to compute the probability that each sentence in the input text contains a rule or noise; second, the sentences that contain rules are POS tagged and transformed into dependency trees, to which several heuristics are applied in order to extract the business domain (entities and relationships); third, several additional heuristics are used to extract the rules; note that the result consists of SBVR-compliant rules in natural language. Gallego and Corchuelo [22,23] followed up on Hatano et al.'s [17] or Chittimalli et al.'s [21] proposals, but they used a neural deep-learning approach instead of dependency parsing; their results proved promising since they were far more resilient than the competitors, but their focus was not on generating rules but on parsing unusual forms of conditionals that might introduce business rules. Haj et al. [24] presented the most recent approach of which we are aware; they use a pipeline in which the input document is first lemmatized and POS tagged, named entities are identified, and dependencies are extracted; next, the business model is identified using a number of pre-defined patterns; finally, the system outputs SBVR-compliant rules.…”
Section: Related Work
Mentioning, confidence: 99%
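The pipelines summarised in this statement (lemmatisation, POS tagging, named-entity recognition, dependency parsing, then pattern- or heuristic-based rule extraction) can be approximated with off-the-shelf NLP tooling. The Python sketch below is a loose illustration, not a reimplementation of the systems by Chittimalli et al., Gallego and Corchuelo, or Haj et al.; the extract_candidate_rules helper, the choice of spaCy's en_core_web_sm model and the trivial if/when/unless marker pattern are assumptions made for the example.

import spacy

# Assumes the small English spaCy model is installed:
#   python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

def extract_candidate_rules(text: str) -> list[dict]:
    """Flag sentences that look like conditional business rules using a
    trivial pattern: an 'if'/'when'/'unless' subordinating marker (dep == 'mark')."""
    doc = nlp(text)
    candidates = []
    for sent in doc.sents:
        markers = [t for t in sent
                   if t.dep_ == "mark" and t.lemma_ in {"if", "when", "unless"}]
        if not markers:
            continue
        candidates.append({
            "sentence": sent.text,
            "lemmas": [t.lemma_ for t in sent],              # lemmatisation
            "pos_tags": [(t.text, t.pos_) for t in sent],    # POS tagging
            "entities": [(e.text, e.label_) for e in sent.ents],  # NER
            # The clause headed by the marker's parent is a rough condition span.
            "condition": " ".join(t.text for t in markers[0].head.subtree),
        })
    return candidates

if __name__ == "__main__":
    sample = ("If a customer orders more than ten items, the system applies a discount. "
              "The warehouse is located in Seville.")
    for rule in extract_candidate_rules(sample):
        print(rule["sentence"], "->", rule["condition"])

A real system would replace the single marker pattern with the statistical noise filter, heuristics or neural classifier described above, and would map the extracted spans onto SBVR-compliant rule templates.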