A bit-string longest-common-subsequence algorithm (Allison and Dix, 1986)
DOI: 10.1016/0020-0190(86)90091-8

Cited by 103 publications (81 citation statements). References 10 publications.
“…The details of these datasets will be described later. Sorted neighborhood (Hernández and Stolfo, 1995) is used as a blocking algorithm and Longest Common Subsequence (LCS) (Allison and Dix, 1986) is utilized as a similarity function. In the next step, SVM, C4.5, Naïve Bayes and Bayesian network classifiers are applied to the selected training records in order to train and build the models.…”
Section: Methods
Mentioning confidence: 99%
“…Longest Common Subsequence (LCS) is an algorithm proposed in (Allison and Dix, 1986) and is used to find the longest subsequence that is common to two strings. It has been applied successfully in several contexts, such as record linkage.…”
Section: Methods
Mentioning confidence: 99%
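
To make the similarity measure concrete, the following is a minimal Python sketch of the standard dynamic-programming computation of LCS length, the quantity these citing works use as a string-similarity signal. It illustrates the general recurrence only, not the bit-string formulation of the 1986 paper (sketched further below); the function name lcs_length_dp is mine.

```python
def lcs_length_dp(a, b):
    """Length of the longest common subsequence of sequences a and b,
    via the classic O(len(a) * len(b)) dynamic-programming recurrence."""
    # dp[j] holds, for the row currently being filled, LCS(a[:i], b[:j]).
    dp = [0] * (len(b) + 1)
    for ca in a:
        prev_diag = 0  # dp value from the previous row, previous column
        for j, cb in enumerate(b, start=1):
            prev_diag, dp[j] = dp[j], (prev_diag + 1 if ca == cb else max(dp[j], dp[j - 1]))
    return dp[-1]


# Example: 2 * LCS / (|a| + |b|) gives a similarity score in [0, 1],
# one common way to turn the LCS length into a similarity function.
a, b = "jonathan smith", "johnathan smyth"
score = 2.0 * lcs_length_dp(a, b) / (len(a) + len(b))
```
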
“…It is important to observe that in all cases even ED_2(x : y) is significantly better than either H(x : y), ED(x : y), or ID(x : y) currently in use. […] generalizations of the edit and insertion-deletion distances, and can be implemented in a computationally efficient way by means of the bit-vector approach to computing a dynamic programming matrix (Allison and Dix, 1986; Myers, 1999; Crochemore et al., 2001; Hyyrö et al., 2005).…”
Section: Hamming, Edit, and Insertion-Deletion Similarity
Mentioning confidence: 99%
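
The bit-vector idea the passage refers to can be sketched as follows: the current row of the dynamic-programming matrix is packed into a single integer, so one arithmetic/logical update advances a whole row at once. This is a hedged illustration only; the update V = ((V + U) | (V - U)) follows later reformulations of the bit-parallel LCS recurrence (in the line of Crochemore et al., 2001 and Hyyrö et al., 2005) rather than being a transcription of Allison and Dix's original 1986 presentation, and all identifiers are mine.

```python
def lcs_length_bitvector(a, b):
    """LCS length of strings a and b with a bit-parallel row update."""
    m = len(a)
    if m == 0 or len(b) == 0:
        return 0
    mask = (1 << m) - 1              # keep only the m row bits
    # Match masks: bit i of match[c] is set iff a[i] == c.
    match = {}
    for i, c in enumerate(a):
        match[c] = match.get(c, 0) | (1 << i)
    v = mask                         # all ones: no LCS contribution recorded yet
    for c in b:
        u = v & match.get(c, 0)      # still-available positions of a matching c
        v = ((v + u) | (v - u)) & mask
    # Every bit cleared in v accounts for one unit of LCS length.
    return m - bin(v).count("1")
```

Python integers are arbitrary precision, so the whole row fits in one integer here; a C implementation would split the row across ceil(m/w) machine words of width w, which is what makes the approach computationally efficient in the sense the quoted passage describes.
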
“…, 4]), after stopword removal, using greedy string tiling (Wise, 1996), longest common subsequences (Allison and Dix, 1986), the Jaccard coefficient (Jaccard, 1901), word containment (Lyon et al., 2001), and cosine similarity. We also apply partial tree kernels (Moschitti, 2006) on shallow syntactic trees.…”
Section: Baseline Features
Mentioning confidence: 99%
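
As a rough, hypothetical illustration of how such word-overlap features might be computed (whitespace tokenization and a toy stopword list are assumptions of mine, not details of the cited systems), reusing the lcs_length_dp sketch shown earlier:

```python
STOPWORDS = {"the", "a", "an", "of", "and"}  # toy list, purely illustrative

def word_overlap_features(s1, s2):
    """Toy similarity features over word sequences: normalized LCS length
    and the Jaccard coefficient of the word sets, after stopword removal."""
    t1 = [w for w in s1.lower().split() if w not in STOPWORDS]
    t2 = [w for w in s2.lower().split() if w not in STOPWORDS]
    # lcs_length_dp compares items generically, so it works on word lists too.
    lcs_sim = 2.0 * lcs_length_dp(t1, t2) / (len(t1) + len(t2)) if (t1 or t2) else 0.0
    jac = len(set(t1) & set(t2)) / len(set(t1) | set(t2)) if (t1 or t2) else 0.0
    return {"lcs_similarity": lcs_sim, "jaccard": jac}
```
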