“…Chittimalli et al.21 presented a recent follow-up that uses a three-stage pipeline: first, two trigram statistical models compute the probability that each sentence in the input text contains a rule or is noise; second, the sentences that contain rules are POS-tagged and transformed into dependency trees, to which several heuristics are applied in order to extract the business domain (entities and relationships); third, several additional heuristics are used to extract the rules; note that the output consists of SBVR-compliant rules expressed in natural language. Gallego and Corchuelo22,23 followed up on Hatano et al.'s17 and Chittimalli et al.'s21 proposals, but used a neural deep-learning approach instead of dependency parsing; their results were promising, proving far more resilient than those of their competitors, although their focus was not on generating rules but on parsing the unusual forms of conditionals that may introduce business rules. Haj et al.24 presented the most recent approach of which we are aware: they use a pipeline in which the input document is first lemmatized and POS-tagged and its named entities and dependencies are identified; next, the business model is extracted by means of a number of pre-defined patterns; finally, the system outputs SBVR-compliant rules.…”
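The first stage of the pipeline above scores each sentence with two competing trigram models. The details of the original models are not given here, so the following is only a minimal sketch of the general technique: two add-one-smoothed trigram models, one trained on rule-like sentences and one on noise, with a sentence classified by whichever model assigns it the higher log-probability. The toy training sentences and the names `TrigramModel` and `is_rule` are illustrative, not taken from the cited work.

```python
import math
from collections import Counter

def trigrams(tokens):
    # Pad with sentence-boundary markers so every token appears in a trigram.
    padded = ["<s>", "<s>"] + tokens + ["</s>"]
    return list(zip(padded, padded[1:], padded[2:]))

class TrigramModel:
    """Add-one (Laplace) smoothed trigram model built from raw sentences."""
    def __init__(self, sentences):
        self.counts = Counter()    # (w1, w2, w3) -> frequency
        self.context = Counter()   # (w1, w2) -> frequency
        self.vocab = set()
        for s in sentences:
            toks = s.lower().split()
            self.vocab.update(toks)
            for tri in trigrams(toks):
                self.counts[tri] += 1
                self.context[tri[:2]] += 1

    def log_prob(self, sentence):
        toks = sentence.lower().split()
        v = len(self.vocab) + 1    # +1 for unseen words
        lp = 0.0
        for tri in trigrams(toks):
            lp += math.log((self.counts[tri] + 1) / (self.context[tri[:2]] + v))
        return lp

# Hypothetical toy corpora standing in for real training data.
rule_model = TrigramModel([
    "a customer must provide a valid email address",
    "an order must be approved by a manager",
])
noise_model = TrigramModel([
    "this chapter describes the history of the company",
    "see the appendix for further details",
])

def is_rule(sentence):
    # Classify by comparing the two models' log-probabilities.
    return rule_model.log_prob(sentence) > noise_model.log_prob(sentence)
```

In practice such models would be trained on a large annotated corpus; the comparison of two class-conditional language models is the core idea.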