2015
DOI: 10.2172/1244630

Developing and Implementing the Data Mining Algorithms in RAVEN

Abstract: The RAVEN code is becoming a comprehensive tool to perform probabilistic risk assessment, uncertainty quantification, and verification and validation. RAVEN is being developed to support many programs and to provide a set of methodologies and algorithms for the analysis of data sets, the simulation of physical phenomena, etc. Scientific computer codes can generate enormous amounts of data, and post-processing and analyzing such data might, in some cases, take longer than the initial software runtime. Data mini…

Cited by 3 publications (3 citation statements)
References 5 publications
“…Automated techniques systematically partition the domain, typically in a greedy fashion, by splitting the data along either an original axis-aligned dimension or a reduced axis after performing dimensionality reduction. The criterion for splitting the data varies from minimizing numerical error [2,10,13,17,27] to more geometry and topology-based criteria [29,38].…”
Section: Partition-based Regression Methods (mentioning)
confidence: 99%
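To make the greedy, error-driven splitting described in this statement concrete, here is a minimal sketch under stated assumptions: it scans every axis-aligned dimension of a synthetic data set for the single split that minimizes the summed squared error of piecewise-constant fits. Recursing on each resulting half with the same routine yields the kind of partition-based regressor the statement describes. The function name, the synthetic data, and all parameter choices are hypothetical and are not taken from any of the cited implementations.

```python
# Illustrative sketch only (not the cited implementations): one step of a
# greedy, axis-aligned partition of a regression data set. The split that
# minimizes the summed squared error of piecewise-constant fits is selected.
import numpy as np

def best_axis_aligned_split(X, y):
    """Return (dimension, threshold, error) of the error-minimizing split."""
    n, d = X.shape
    best = (None, None, np.inf)
    for dim in range(d):
        order = np.argsort(X[:, dim])
        xs, ys = X[order, dim], y[order]
        for i in range(1, n):
            # only consider thresholds between distinct coordinate values
            if xs[i] == xs[i - 1]:
                continue
            left, right = ys[:i], ys[i:]
            err = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
            if err < best[2]:
                best = (dim, 0.5 * (xs[i] + xs[i - 1]), err)
    return best

# toy usage with synthetic data: the response jumps when the second input exceeds 0.6
rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 3))
y = np.where(X[:, 1] > 0.6, 2.0, -1.0) + 0.1 * rng.standard_normal(200)
print(best_axis_aligned_split(X, y))  # expected: dimension 1, threshold near 0.6
```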
“…hybrid dynamic event tree and adaptive hybrid dynamic event tree) [1], advanced static data mining capabilities (e.g. clustering, principal component analysis, manifold learning) [2], and ways to connect multiple reduced order models (able to reproduce scalar figures of merit) in order to create ensembles of models.…”
Section: Introduction (mentioning)
confidence: 99%
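As a rough illustration of the static data-mining post-processing named in this statement (dimensionality reduction followed by clustering over a set of sampled outputs), here is a minimal scikit-learn sketch. The synthetic sample matrix, parameter values, and workflow are assumptions for illustration only and do not reflect RAVEN's actual API.

```python
# Illustrative sketch only: PCA plus clustering applied to a matrix of sampled
# simulation outputs, as one might do when post-processing a large data set.
# Not RAVEN's API; the data and settings below are made up.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# rows = sampled runs, columns = scalar figures of merit (synthetic)
samples = np.vstack([
    rng.normal(loc=0.0, scale=0.3, size=(100, 5)),
    rng.normal(loc=2.0, scale=0.3, size=(100, 5)),
])

# reduce to two principal components, then group the runs in the reduced space
reduced = PCA(n_components=2).fit_transform(samples)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(reduced)

print(reduced.shape, np.bincount(labels))  # (200, 2) and two roughly equal clusters
```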
“…The answer, as seen in Fig. 2, is that these areas, being associated with KDD, focus on all the processes of exploring verbose information: how to store and retrieve the data, how algorithms can be scaled to huge data sets and still work effectively [23,24], how the results can be interpreted and visualized, and how the whole human-machine interaction can be usefully modeled and supported.…”
Section: Fig. 1 Knowledge Discovery In Databases (mentioning)
confidence: 99%