2015
DOI: 10.2147/ijn.s75758
Empirical modeling of the fine particle fraction for carrier-based pulmonary delivery formulations

Abstract: In vitro study of the deposition of drug particles is commonly used during development of formulations for pulmonary delivery. The assay is demanding, complex, and depends on: properties of the drug and carrier particles, including size, surface characteristics, and shape; interactions between the drug and carrier particles; and assay conditions, including flow rate, type of inhaler, and impactor. The aerodynamic properties of an aerosol are measured in vitro using impactors and in most cases are presented as t…

Cited by 10 publications (7 citation statements) · References 20 publications
“…The ensemble approach used the following methods for binary predictions: General Linear Model (GLM), Gradient Boosting Machines (GBM), Treebag, Linear Discriminant Analysis (LDA), and K-Nearest Neighbors (KNN), while ensemble regression tasks included GLM, GBM, KNN, ridge regression, and the least absolute shrinkage and selection operator (LASSO) method. For each model, a weighted-average mean squared error across methods was used to estimate prediction error on unseen test data, with the reported associations based on results from 10-times repeated 10-fold cross-validation (Pacławski et al., 2015). For all tasks, variable importance rankings for feature selection were carried out for each data type separately (CorEx plasma, CorEx brain, plasma, brain).…”
Section: Methods (mentioning)
confidence: 99%
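To make the weighting scheme in the statement above concrete, here is a minimal Python sketch: several regressors are scored with 10-times repeated 10-fold cross-validation, and their predictions are combined with weights derived from each model's cross-validated mean squared error. The scikit-learn model set, the synthetic data, and the inverse-MSE weighting are illustrative assumptions, not the cited authors' implementation.

```python
# Sketch: ensemble regression with weights from repeated 10-fold CV error.
# Models and weighting scheme are assumptions for illustration only.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression, Ridge, Lasso
from sklearn.model_selection import RepeatedKFold, cross_val_score, train_test_split
from sklearn.neighbors import KNeighborsRegressor

X, y = make_regression(n_samples=300, n_features=10, noise=5.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    "glm": LinearRegression(),
    "gbm": GradientBoostingRegressor(random_state=0),
    "knn": KNeighborsRegressor(n_neighbors=5),
    "ridge": Ridge(alpha=1.0),
    "lasso": Lasso(alpha=0.1),
}

cv = RepeatedKFold(n_splits=10, n_repeats=10, random_state=0)
cv_mse = {}
for name, model in models.items():
    scores = cross_val_score(model, X_train, y_train,
                             scoring="neg_mean_squared_error", cv=cv)
    cv_mse[name] = -scores.mean()          # mean MSE across the 10x10 folds

# Weight each model inversely to its cross-validated MSE (one plausible choice).
weights = {name: 1.0 / mse for name, mse in cv_mse.items()}
total = sum(weights.values())
weights = {name: w / total for name, w in weights.items()}

# Weighted-average prediction on the held-out test set.
ensemble_pred = sum(w * models[name].fit(X_train, y_train).predict(X_test)
                    for name, w in weights.items())
ensemble_mse = np.mean((ensemble_pred - y_test) ** 2)
print("per-model CV MSE:", cv_mse)
print(f"ensemble test MSE: {ensemble_mse:.2f}")
```

The binary-prediction variant described in the quote would follow the same pattern with classifiers and a classification loss in place of the regression models and MSE.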
“…The package fscaret allows semiautomatic feature selection, working as a wrapper for the caret package in R. fscaret is specialized for in silico feature selection experiments, whereby approximately 120 different packages are used to fit models [37]. Input feature ranking is extracted from trained models by using weighted averages.…”
Section: Methods (mentioning)
confidence: 99%
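fscaret itself is an R package; as a rough Python analogue of the weighted-average feature-ranking idea described above, the sketch below scales each model's normalized feature importances by that model's cross-validated error and averages the results into a single ranking. The model set, the error-based scaling, and the data are assumptions for illustration only, not fscaret's actual procedure.

```python
# Sketch: rank input features by a weighted average of per-model importances,
# where lower-error models contribute more. Not fscaret itself, only an analogue.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.linear_model import Lasso
from sklearn.model_selection import cross_val_score

X, y = make_regression(n_samples=200, n_features=8, n_informative=3,
                       noise=2.0, random_state=1)

models = {
    "gbm": GradientBoostingRegressor(random_state=1),
    "rf": RandomForestRegressor(random_state=1),
    "lasso": Lasso(alpha=0.1),
}

rankings = []
for name, model in models.items():
    mse = -cross_val_score(model, X, y,
                           scoring="neg_mean_squared_error", cv=5).mean()
    model.fit(X, y)
    # Tree ensembles expose feature_importances_; for linear models use |coefficients|.
    imp = getattr(model, "feature_importances_", None)
    if imp is None:
        imp = np.abs(model.coef_)
    imp = imp / imp.sum()                  # normalize importances to sum to 1
    rankings.append(imp / mse)             # better (lower-MSE) models weigh more

weighted_rank = np.mean(rankings, axis=0)
order = np.argsort(weighted_rank)[::-1]
print("features ranked by weighted-average importance:", order)
```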
“…It is a simple method that stores all available cases and predicts new cases based on a similarity measure. In simple terms, k-NN works by selecting the k closest samples to the considered point and predicting the output as the weighted mean of the outputs of those k nearest neighbors [28]. Generally, the neighbors can be selected using different similarity metrics, such as the Euclidean, Manhattan, and Canberra distances.…”
Section: Methods (mentioning)
confidence: 99%
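A minimal sketch of the distance-weighted k-NN regression described in the statement above, using scikit-learn; the choice of k and the Manhattan metric are illustrative assumptions, not values taken from the cited work.

```python
# Sketch: k-NN regression predicting each point as a distance-weighted mean of the
# outputs of its k nearest neighbors. Metric and k are illustrative choices.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsRegressor

X, y = make_regression(n_samples=150, n_features=5, noise=3.0, random_state=2)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=2)

# weights="distance" gives closer neighbors more influence on the prediction;
# metric can be "euclidean", "manhattan", or another supported distance.
knn = KNeighborsRegressor(n_neighbors=5, weights="distance", metric="manhattan")
knn.fit(X_train, y_train)
pred = knn.predict(X_test)
print("test MSE:", np.mean((pred - y_test) ** 2))
```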