2009
DOI: 10.1007/978-3-642-02737-6_22

Tight Bounds on the Descriptional Complexity of Regular Expressions

Abstract: We improve on some recent results on lower bounds for conversion problems for regular expressions. In particular, we consider the conversion of planar deterministic finite automata to regular expressions, study the effect of the complementation operation on the descriptional complexity of regular expressions, and examine the conversion of regular expressions extended with intersection or interleaving into ordinary regular expressions. Almost all obtained lower bounds are optimal, and the presented examples…

Cited by 22 publications (24 citation statements)
References 21 publications
“…At present these algorithms do not take into account the interleaving operator, but for Relax NG this would be wise, as it would allow learning significantly smaller expressions. It should be noted here that Gruber and Holzer independently obtained a similar result [18]. They show that any regular expression defining the language $(a_1 b_1)^* \,\&\, (a_2 b_2)^* \,\&\, \cdots \,\&\, (a_n b_n)^*$ must be of size at least double exponential in $n$. Compared to the result in this paper, this gives a tighter bound ($2^{2^{\Omega(n)}}$ instead of $2^{2^{\Omega(\sqrt{n})}}$), and shows that the double-exponential size increase already occurs for very simple expressions.…”
Section: RE(#) (supporting)
confidence: 69%
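The language family quoted above can be made concrete with a small membership check. The Python sketch below is purely illustrative and not taken from the cited papers; the function and encoding (letters as pairs `("a", i)` / `("b", i)`) are our own. It relies on the fact that the $n$ component expressions use pairwise disjoint letters, so a word lies in their interleaving exactly when its projection onto each pair $\{a_i, b_i\}$ matches $(a_i b_i)^*$.

```python
import re

def in_shuffle_language(word, n):
    """Check membership in the interleaving of (a_1 b_1)*, ..., (a_n b_n)*.

    Letters are encoded as pairs ('a', i) / ('b', i); `word` is a list of such
    pairs. Because the n components use pairwise disjoint letters, a word is in
    the interleaving iff its projection onto {a_i, b_i} matches (a_i b_i)* for
    every i.
    """
    for i in range(1, n + 1):
        # Project the word onto the sub-alphabet {a_i, b_i}.
        projection = "".join(sym for (sym, idx) in word if idx == i)
        if re.fullmatch(r"(ab)*", projection) is None:
            return False
    # Reject letters outside a_1, b_1, ..., a_n, b_n.
    return all(1 <= idx <= n and sym in ("a", "b") for (sym, idx) in word)

# a_1 a_2 b_2 b_1 is a valid interleaving of (a_1 b_1)* and (a_2 b_2)*.
print(in_shuffle_language([("a", 1), ("a", 2), ("b", 2), ("b", 1)], n=2))  # True
print(in_shuffle_language([("a", 1), ("a", 1)], n=2))                      # False
```

Membership is thus easy to decide, while the quoted result states that any ordinary regular expression for this language must be of double-exponential size in $n$.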
“…$2^{\Omega(n)}$ [22], $2^{2^{\Omega(n)}}$ (Proposition 7), $2^{\Theta(n)}$ [22]; RE(∩): $2^{\Omega(n)}$ (Proposition 5), $2^{2^{\Omega(n)}}$ (Theorem 8), $2^{2^{\Omega(n)}}$ [18]; RE(&): $2^{\Omega(n)}$ [ from Relax NG (which allows interleaving) to XML Schema Definitions (which does not). However, as XML Schema is the widespread W3C standard, and Relax NG is a more flexible alternative, such a translation would be more than desirable.…”
Section: RE(#) (mentioning)
confidence: 99%
“…Preliminary results were reported in [17], where a stunningly large gap between the lower and upper bound on the effect of complementation remained. Finally, the problem was solved in [22], and the above-mentioned upper bound turned out to be tight already for binary alphabets [21,30]. The complementation problem for regular expressions over unary alphabets was already settled in [17].…”
Section: Operation Problem for Regular Expressions (mentioning)
confidence: 99%
“…Nevertheless, together with clever encodings by star-height-preserving homomorphisms, as demonstrated in [30], the above theorem becomes a general-purpose tool for proving lower bounds on the alphabetic width of regular expressions. Let us briefly illustrate its application on a simple example: we aim to prove a lower bound for the intersection of the languages $K_m = \{ w \in \{a, b\}^* \mid |w|_a \equiv 0 \pmod{m} \}$ and $L_n = \{ w \in \{a, b\}^* \mid |w|_b \equiv 0 \pmod{n} \}$.…”
Section: Operation Problem for Regular Expressions (mentioning)
confidence: 99%
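To pin down which words the two languages in the last quotation contain, here is a small membership sketch in Python; it is our own illustration under the definitions quoted above, not code from the cited work. A word is in $K_m \cap L_n$ exactly when its number of $a$'s is $0 \bmod m$ and its number of $b$'s is $0 \bmod n$, which corresponds to the product DFA with $m \cdot n$ states.

```python
def in_K_intersect_L(word, m, n):
    """Membership test for K_m ∩ L_n over the alphabet {a, b}:
    the number of a's must be 0 mod m and the number of b's 0 mod n.
    Tracking (count_a mod m, count_b mod n) is exactly the product DFA
    with m * n states.
    """
    count_a = count_b = 0
    for ch in word:
        if ch == "a":
            count_a = (count_a + 1) % m
        elif ch == "b":
            count_b = (count_b + 1) % n
        else:
            return False  # letters outside {a, b} are rejected
    return count_a == 0 and count_b == 0

# "aab" has two a's and one b: in K_2 ∩ L_1 but not in K_3 ∩ L_2.
print(in_K_intersect_L("aab", m=2, n=1))  # True
print(in_K_intersect_L("aab", m=3, n=2))  # False
```

The point of the quoted passage is not this (small) automaton, but a lower bound on the alphabetic width of regular expressions for the intersection; the sketch only fixes the semantics of $K_m$ and $L_n$.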