Proceedings Eighth Working Conference on Reverse Engineering
DOI: 10.1109/wcre.2001.957829
Reverse engineering to achieve maintainable WWW sites

Abstract: The growth of the World Wide Web and the accelerated development of web sites and associated web technologies have resulted in a variety of maintenance problems. The maintenance problems associated with web sites and the WWW are examined. It is argued that web sites and the WWW currently lack both the data abstractions and structures that could facilitate maintenance. A system to analyse existing web sites and extract duplicated content and style is described here. In designing the system, existing Reverse Engineer…

Cited by 49 publications (36 citation statements)
References 11 publications
“…Web applications are prone to the threat of code clones due to the uninstructed development and maintenance processes as well as the inherent complexity [4,25]. Levenshtein is a common technique used in [25,26,30] to detect cloned web pages.…”
Section: Related Workmentioning
confidence: 99%
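The citation above names Levenshtein distance as a common page-similarity measure for clone detection. As an illustration only (this is not the cited tools' code, and the HTML strings are invented for the example), a minimal sketch of edit distance used to score how close two pages' markup is:

```python
# Illustrative sketch: Levenshtein edit distance as a similarity measure
# for detecting cloned web pages. The sample pages are made up.

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    if len(a) < len(b):
        a, b = b, a
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

def similarity(p1: str, p2: str) -> float:
    """Normalised similarity in [0, 1]; 1.0 means identical markup."""
    longest = max(len(p1), len(p2)) or 1
    return 1.0 - levenshtein(p1, p2) / longest

page_a = "<html><body><h1>News</h1><p>Hello</p></body></html>"
page_b = "<html><body><h1>News</h1><p>Howdy</p></body></html>"
print(similarity(page_a, page_b))  # high score: the pages are near-clones
```

Pages whose similarity exceeds some threshold (e.g. 0.9) would be flagged as clone candidates; the threshold itself is an assumption, not a value from the cited work.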
“…Web sites and web applications contain a higher proportion of cloned code than other software; on average, duplicated code amounts to 5-15% of the total amount of code in an application [9,10], whereas cloning ratios of 30% and higher are not uncommon for web sites [3]. The main reason for high cloning ratios in web documents is HTML's lack of code reuse tools.…”
Section: Removing Clones From Web Pagesmentioning
confidence: 99%
“…The main reason for high cloning ratios in web documents is HTML's lack of code reuse tools. Boldyreff and Kewish point out [3] that HTML lacks even an "include" directive, which is available in many programming languages. Therefore developers are forced to reuse code by cloning simply because no other alternative is available: if a piece of content must appear on several pages, the only way to do it is to place a copy in every page.…”
Section: Removing Clones From Web Pagesmentioning
confidence: 99%
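The quotation above attributes cloning to HTML's missing "include" mechanism. As a sketch of the kind of pre-processing that removes such clones (this is an illustration, not the paper's system; the fragment store and directive handling are assumptions modelled loosely on server-side includes), a shared fragment can live in one place and be expanded into each page:

```python
# Sketch of the reuse mechanism HTML itself lacks: a tiny pre-processor
# that expands an SSI-style <!--#include file="..."--> directive, so a
# shared fragment is stored once instead of being cloned into every page.
import re

# Assumed in-memory fragment store; a real site would read fragment files.
FRAGMENTS = {
    "nav.html": '<ul><li><a href="/">Home</a></li></ul>',
}

def expand_includes(page: str) -> str:
    """Replace each include directive with the named fragment's content."""
    pattern = re.compile(r'<!--#include file="([^"]+)"-->')
    return pattern.sub(lambda m: FRAGMENTS[m.group(1)], page)

page = '<body><!--#include file="nav.html"--><p>Content</p></body>'
print(expand_includes(page))
```

With such a mechanism, editing `nav.html` once updates every page that includes it, which is exactly the maintainability the quoted passage says cloning forfeits.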
“…Boldyreff and Kewish proposed a methodology for the reverse engineering of WAs, with the purpose of identifying duplications and improving maintainability [2]. Vanderdonckt et al proposed a tool, named Vaquista [18], for reverse engineering of WA presentation model.…”
Section: Related Workmentioning
confidence: 99%