1999
DOI: 10.1002/(sici)1097-4571(1999)50:1<65::aid-asi8>3.0.co;2-g
Exploiting parallelism in a structural scientific discovery system to improve scalability

Abstract: The large amount of data collected today is quickly overwhelming researchers' abilities to interpret the data and discover interesting patterns. Knowledge discovery and data mining approaches hold the potential to automate the interpretation process, but these approaches frequently utilize computationally expensive algorithms. In particular, scientific discovery systems focus on the utilization of richer data representation, sometimes without regard for scalability. This research investigates approaches for sc…

Cited by 7 publications (4 citation statements)
References 15 publications
“…To make algorithms scalable also requires using more computational power -i.e., faster CPUs. In some cases this may be enhanced by multiple CPUs; however, the algorithms then need to be redesigned to take advantage of the explicit parallelism in hardware (Galal, Cook, and Holder, 1999). Consideration of which algorithm to be used might also depend on the specific variables that are employed.…”
Section: Discussion
confidence: 99%
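The redesign for explicit hardware parallelism that this citation statement describes can be sketched with Python's standard multiprocessing module. This is a hypothetical illustration of distributing independent evaluations across CPUs, not code from the cited paper; the `score` function stands in for any compute-heavy per-candidate evaluation.

```python
from multiprocessing import Pool

def score(candidate):
    # Hypothetical compute-heavy evaluation of one candidate.
    return sum(i * i for i in range(candidate)) % 97

def score_serial(candidates):
    # Baseline: one CPU processes every candidate in turn.
    return [score(c) for c in candidates]

def score_parallel(candidates, workers=4):
    # The same work redesigned for multiple CPUs: candidates are
    # distributed across a pool of worker processes.
    with Pool(processes=workers) as pool:
        return pool.map(score, candidates)

if __name__ == "__main__":
    cands = list(range(1000, 1010))
    # Both strategies produce identical results; only wall-clock
    # time differs once score() is expensive enough.
    assert score_serial(cands) == score_parallel(cands)
```

Note the redesign cost the statement alludes to: the work must first be restructured into independent units (`candidates`) before a pool of processes can be applied at all.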
“…Before describing CRA in detail, we review existing automated text analysis approaches in the next section. [Table of approaches: general content analysis — Galal et al. (1999), Humphrey (1999); keyword frequency — Rice & Danowski (1993), WORDij (Danowski, 1992), TACT (Siemens, 1993).] APPROACHES TO AUTOMATED TEXT ANALYSIS. A time-tested method of dealing with data floods is computer processing, which will be the approach advocated here. Specifically, we believe that a computerized analysis can serve as a substitute (though not necessarily a replacement) for a complete reading of a text.…”
Section: "Be Careful What You Wish For"mentioning
confidence: 99%
“…Some important programs are the inexact graph matcher, Minimum Description Length, distributed MPI Subdue [4], and subgraph isomorphism.…”
Section: Subdue
confidence: 99%
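The subgraph isomorphism test named in this statement can be illustrated with a minimal brute-force matcher, shown here as a didactic sketch in plain Python. It is unrelated to Subdue's actual inexact matcher; real systems use far more efficient pruning (e.g., VF2-style algorithms).

```python
from itertools import permutations

def is_subgraph_isomorphic(pattern_edges, graph_edges):
    """Brute-force check: does some injective mapping of the
    pattern's nodes into the graph preserve every pattern edge?
    Graphs are undirected, given as lists of node pairs."""
    p_nodes = sorted({v for e in pattern_edges for v in e})
    g_nodes = sorted({v for e in graph_edges for v in e})
    g_set = {frozenset(e) for e in graph_edges}
    # Try every injective assignment of pattern nodes to graph nodes.
    for assignment in permutations(g_nodes, len(p_nodes)):
        mapping = dict(zip(p_nodes, assignment))
        if all(frozenset((mapping[a], mapping[b])) in g_set
               for a, b in pattern_edges):
            return True
    return False

# A triangle embeds in the complete graph K4 but not in a 4-cycle.
triangle = [(0, 1), (1, 2), (2, 0)]
k4 = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
c4 = [(0, 1), (1, 2), (2, 3), (3, 0)]
```

The factorial cost of this enumeration is exactly why the literature above treats scalability (and parallel implementations such as MPI Subdue) as a central concern.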