Volume 3: Thermal-Hydraulics; Turbines, Generators, and Auxiliaries 2012
DOI: 10.1115/icone20-power2012-54566
Experiment of Adiabatic Two-Phase Flow in an Annulus Under Low-Frequency Vibration

Abstract: In order to investigate the possible effect of seismic vibration on two-phase flow dynamics and thermal-hydraulics of a nuclear reactor, experimental tests of adiabatic air-water two-phase flow under low-frequency vibration were carried out in this study. An eccentric cam vibration module operated at low motor speed (up to 390 rpm) was attached to an annulus test section which was scaled down from a prototypic BWR fuel assembly sub-channel. The inner and outer diameters of the annulus are 19.1 mm and 38.1 mm, res…

Cited by 1 publication (2 citation statements); references 0 publications.
“…x is an independent variable, y is a dependent variable, ω is a weight vector, b is an offset, and φ(x) : R^d → H is a nonlinear function that maps the data set S to a high-dimensional linear eigenspace and seeks optimality in the eigenvector. For the regression function, the optimization goals and constraints of the SVM are Equations (13) and (14), respectively. The ε-insensitive loss function is used for a given training data set.…”
Section: SVM (Support Vector Machine) Algorithm
Confidence: 99%
“…K is the penalty coefficient; a larger value imposes a stricter requirement on the error. By introducing a Lagrangian function, the optimization problem of Equations (13) and (14) is transformed into its dual problem, and solving the dual problem yields the solution of Equation (11) [38]:…”
Section: SVM (Support Vector Machine) Algorithm
Confidence: 99%
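The two citation statements above describe ε-support-vector regression: a primal objective that trades off the flatness term ½‖ω‖² against a penalty coefficient K times the ε-insensitive losses. Since the cited paper's Equations (11), (13), and (14) are not reproduced here, the following is only a minimal NumPy sketch of those two standard ingredients (the loss and the primal objective for a linear model ωᵀx + b); the function names and the toy data are illustrative, not from the source.

```python
import numpy as np

def eps_insensitive_loss(y_true, y_pred, eps=0.1):
    """epsilon-insensitive loss: residuals within +/- eps cost nothing,
    beyond that the cost grows linearly with the excess error."""
    return np.maximum(np.abs(y_true - y_pred) - eps, 0.0)

def svr_primal_objective(w, b, X, y, K=1.0, eps=0.1):
    """SVR primal objective for a linear model:
    0.5 * ||w||^2  +  K * sum of epsilon-insensitive losses.
    A larger K penalizes training errors more heavily."""
    losses = eps_insensitive_loss(y, X @ w + b, eps)
    return 0.5 * float(np.dot(w, w)) + K * float(losses.sum())

# Toy illustration: one sample predicted at 2.0 against a target of 2.5
# with eps = 0.1 incurs a loss of 0.4; with ||w|| = 1 the objective is 0.9.
w = np.array([1.0, 0.0])
X = np.array([[2.0, 0.0]])
y = np.array([2.5])
obj = svr_primal_objective(w, 0.0, X, y, K=1.0, eps=0.1)
```

In the full method, this constrained primal is not minimized directly: as the second statement notes, introducing Lagrange multipliers turns it into a dual quadratic program, which also allows the kernel trick via φ(x).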