2016
DOI: 10.1016/j.cmpb.2016.07.006
Empirical search for factors affecting mean particle size of PLGA microspheres containing macromolecular drugs

Cited by 21 publications (9 citation statements)
References 22 publications
“…After training and LOOCV, all models were used to predict polymer REs (16 molecules in Table 2), and the performance of all models was analyzed [28] using root-mean-square error (RMSE), mean absolute error (MAE), and the average relative error (ARE), where y_i and ŷ_i are the reference and predicted values, respectively. In addition, the coefficient of determination (R²) was used to describe the proportion of variability in a dataset that can be explained by the model [29].…”
Section: Methods
confidence: 99%
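The error-metric formulas referenced in the statement above were not captured in this extract. Assuming the usual definitions (the sample count n is introduced here for notation; the cited paper may express ARE without the percentage factor), the metrics take the standard forms:

\[
\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(y_i-\hat{y}_i\right)^2},\qquad
\mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n}\left|y_i-\hat{y}_i\right|,\qquad
\mathrm{ARE} = \frac{1}{n}\sum_{i=1}^{n}\frac{\left|y_i-\hat{y}_i\right|}{y_i}\times 100\%
\]

where y_i and ŷ_i are the reference and predicted values, as stated in the quoted passages.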
“…where y_i and ŷ_i are the reference and predicted values, respectively. In addition, the coefficient of determination (R²) was used to describe the proportion of variability in a dataset that can be explained by the model [29].…”
Section: Machine Learning Models
confidence: 99%
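The coefficient of determination mentioned in both statements is not written out in this extract. Assuming the conventional definition (ȳ denoting the mean of the reference values), it reads:

\[
R^{2} = 1 - \frac{\sum_{i=1}^{n}\left(y_i-\hat{y}_i\right)^2}{\sum_{i=1}^{n}\left(y_i-\bar{y}\right)^2}
\]

which expresses the proportion of the variability in the data that the model accounts for, as described in the quotes.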
“…Feature ranking obtained with the fscaret package of the R environment was employed to reduce the number of variables in the data set. The main advantages of the package are the vast number of available models for feature ranking creation, automation, and model verification based on results obtained in earlier research, where the number of input variables was successfully reduced to 2% or 5% of the original vector [28, 29]. The fscaret work cycle involves training models, scaling each one according to its global performance, namely mean squared error (MSE) or root mean squared error (RMSE), and summarizing the results into the feature ranking.…”
Section: Methods
confidence: 99%
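The work cycle described in this statement (train many models, scale each model's variable importances by its global error, and combine them into a single ranking) can be sketched as follows. This is a minimal Python/scikit-learn illustration of the idea, not the fscaret R API; the function name fscaret_style_ranking, the two-model set, and the use of RMSE with permutation importance are assumptions made for the example.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.metrics import mean_squared_error


def fscaret_style_ranking(X, y, models=None, random_state=0):
    """Train several regressors, weight each model's variable importances
    by its global error, and sum them into one feature ranking."""
    if models is None:
        # Assumed model set for the example; fscaret draws on the much
        # larger caret model library.
        models = [
            RandomForestRegressor(random_state=random_state),
            GradientBoostingRegressor(random_state=random_state),
        ]

    errors, importances = [], []
    for model in models:
        model.fit(X, y)
        # Global performance of this model (RMSE); a held-out set or
        # cross-validation would normally be used instead of training data.
        rmse = mean_squared_error(y, model.predict(X)) ** 0.5
        imp = permutation_importance(model, X, y, random_state=random_state)
        errors.append(rmse)
        importances.append(imp.importances_mean)

    # Scale each model's raw importances by (minimal error / model error),
    # so better-performing models contribute more, then sum across models.
    best_error = min(errors)
    ranking = np.zeros(X.shape[1])
    for rmse, imp in zip(errors, importances):
        ranking += imp * (best_error / rmse)
    return ranking  # higher value = more influential input variable
```

Variables whose summed score falls below a chosen cutoff would then be dropped, mirroring the reduction to 2% or 5% of the original input vector mentioned in the quote.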
“…Briefly, variable ranking is performed in three steps: model training, variable ranking extraction, and variable ranking scaling according to the generalization error. The final variable ranking is obtained by multiplying the raw variable importance by the ratio of the minimal error obtained across the models to the model's actual error, according to the equation below [39]:…”
Section: Feature Ranking
confidence: 99%
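The equation referred to in this statement is not reproduced in the extract. From the wording (raw importance multiplied by the fraction of the minimal error over the model's actual error), it presumably takes the form below; the symbol names are illustrative, not taken from the cited paper:

\[
\mathrm{rank}_{i,m} = \mathrm{VarImp}_{i,m}\times\frac{\mathrm{Error}_{\min}}{\mathrm{Error}_{m}}
\]

where VarImp_{i,m} is the raw importance of variable i in model m, Error_m is that model's generalization error (e.g. MSE or RMSE), and Error_min is the smallest error obtained among all trained models.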