In Vitro and In Silico Investigations on Drug Delivery in the Mouth-Throat Models with Handihaler® (2022)
DOI: 10.1007/s11095-022-03386-9

Cited by 14 publications (6 citation statements: 1 supporting, 5 mentioning, 0 contrasting)
References 56 publications
“…The simulated data on the deposition fraction of drug particles in the mouth and throat areas under different peak inspiratory rates are summarized in Table 3. Table 3 indicates that the oropharyngeal deposition rate of drug particles at 30 L/min is approximately 56.47%, which is similar to the oropharyngeal deposition rates obtained in previous in vitro experiments [31, 32]. Thus, the results of the numerical simulation obtained in this study are reliable.…”
Section: Results (supporting)
confidence: 86%
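For concreteness, a minimal Python sketch of the deposition-fraction bookkeeping the excerpt above describes; the particle counts, region labels, and helper name are hypothetical stand-ins for CFD particle-tracking output, not data from the cited simulation:

```python
# Minimal sketch: deposition fraction = particles deposited in a region / particles tracked.
# All fates below are hypothetical; the 30 L/min case mirrors the figure quoted above.
from collections import Counter

def deposition_fraction(fates, region="mouth-throat"):
    """Fraction of tracked particles whose recorded fate is the given region."""
    return Counter(fates)[region] / len(fates)

# Hypothetical fates for 10,000 particles tracked at a 30 L/min peak inspiratory rate.
fates = ["mouth-throat"] * 5647 + ["lower airways"] * 3853 + ["escaped"] * 500
print(f"oropharyngeal deposition fraction: {deposition_fraction(fates):.2%}")  # 56.47%
```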
“…Because an MMAD below 5 µm is typically assumed to be necessary for deposition in the lower airways [83], it is concluded that all three rod formulations will reach the pertinent targets. Moreover, the efficacy of these particles is comparable to commercial DPI formulations using the HandiHaler® [84, 85] while being able to deliver higher amounts of API due to the larger volume of the delivery system.…”
Section: Results (mentioning)
confidence: 97%
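The 5 µm criterion in the excerpt above refers to the mass median aerodynamic diameter (MMAD), the diameter below which half the aerosol mass lies. A minimal sketch of that calculation, assuming hypothetical impactor stage data and log-scale interpolation (a common approximation, not the cited study's method):

```python
# Minimal sketch: MMAD as the 50% point of the cumulative mass distribution,
# interpolated on a log-diameter scale. Stage data below are hypothetical.
import numpy as np

def mmad(cutoff_diameters_um, stage_masses_mg):
    """Aerodynamic diameter at the 50% point of the cumulative mass distribution."""
    order = np.argsort(cutoff_diameters_um)
    d = np.asarray(cutoff_diameters_um, float)[order]
    m = np.asarray(stage_masses_mg, float)[order]
    cum_mass_fraction = np.cumsum(m) / m.sum()  # increasing, so np.interp is valid
    return float(np.exp(np.interp(0.5, cum_mass_fraction, np.log(d))))

d_cut = [0.5, 1.1, 2.1, 3.3, 4.7, 5.8, 9.0]  # stage cut-off diameters, µm (hypothetical)
mass = [0.2, 0.6, 1.4, 1.8, 1.2, 0.6, 0.2]   # drug mass per stage, mg (hypothetical)
value = mmad(d_cut, mass)
print(f"MMAD = {value:.2f} µm; below the 5 µm lower-airway criterion: {value < 5.0}")
```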
“…Large language models (LLMs) pretrained on large text corpora have shown impressive performances on various tasks (Brown et al., 2020; Scao et al., 2022). Some recent works (Wei et al., 2022; Sanh et al., 2022) have explored multitask training of language models on labeled datasets paired with natural language instructions to enhance zero-shot generalization, a procedure called instruction finetuning. In contrast, MaNtLE generates explanations when prompted with feature-prediction pairs.…”
Section: Multi-task Training of Language Models (mentioning)
confidence: 99%
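A minimal sketch of the instruction-finetuning step the excerpt above describes: a labeled example is rendered as a natural-language instruction and a seq2seq model is trained on the (instruction, answer) pair. The model size, task template, and data are illustrative stand-ins, not the cited works' setup:

```python
# Minimal sketch of instruction finetuning with a small T5 model (hypothetical task).
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

# A labeled dataset rendered as natural-language instructions (hypothetical example).
examples = [
    ("Classify the sentiment of this review as positive or negative: "
     "'The inhaler was easy to use.'", "positive"),
]

for prompt, answer in examples:
    enc = tok(prompt, return_tensors="pt")
    labels = tok(answer, return_tensors="pt").input_ids
    loss = model(input_ids=enc.input_ids, labels=labels).loss
    loss.backward()  # gradient signal for one example; optimizer step omitted
```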
“…The goal of MaNtLE is to explain the rationale of classifiers on real-world tasks. To develop MaNtLE, we fine-tune a T5-Large model on thousands of synthetic classification tasks, each paired with natural language explanations, in a multi-task learning setup following recent research (Wei et al., 2022; Sanh et al., 2022). In §3.5, we discuss inference procedures to improve explanation quality and adapt the model trained on synthetic data for real-world tasks.…”
Section: Introduction (mentioning)
confidence: 99%
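The excerpt above says the model is prompted with feature-prediction pairs. A minimal sketch of one plausible serialization; the feature names, template wording, and helper are hypothetical, not MaNtLE's actual format:

```python
# Minimal sketch: render feature-prediction pairs as a plain-text prompt
# for an explanation-generating seq2seq model. Format is hypothetical.
def serialize(rows, prediction_key="label"):
    """Render feature-prediction pairs as a prompt asking for a rationale."""
    lines = []
    for row in rows:
        feats = ", ".join(f"{k} = {v}" for k, v in row.items() if k != prediction_key)
        lines.append(f"If {feats}, then prediction = {row[prediction_key]}.")
    lines.append("Explain the classifier's rationale:")
    return "\n".join(lines)

rows = [
    {"age": 34, "smoker": "yes", "label": "high risk"},
    {"age": 29, "smoker": "no", "label": "low risk"},
]
print(serialize(rows))  # prompt that would be fed to the fine-tuned model
```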