2022
DOI: 10.21203/rs.3.rs-2400821/v1
Preprint

Endovascular treatment of large hemoptysis for pulmonary pseudoaneurysm: Report of 23 cases

Abstract: Purpose: To evaluate the safety and effectiveness of endovascular treatment for massive hemoptysis caused by pulmonary artery pseudoaneurysm (PAP). Methods: The clinical data, imaging data, and endovascular treatment of 23 consecutive patients with massive hemoptysis caused by PAP were retrospectively analyzed. The success rate, complication rate, postoperative recurrence rate, and the effect of treatment on pulmonary artery pressure were also evaluated. Results: Nineteen patients with bronchial artery (BA) or non-bronchial systemic artery (NBSA)-to-pulmonary artery (PA) fistula…

Cited by 2 publications (3 citation statements)
References 25 publications
“…Therefore, it is essential to carefully evaluate the model's prediction performance and explanation quality. Beyond exploring more advanced ML algorithms, LLMs [8,47,84,88] may offer a new way to generate appropriate and convincing explanation content. Moreover, when an intervention is personalized, users, especially younger ones, could be biased towards being more receptive to adopting it [24].…”
Section: Ethical Concerns and Risk of AI-Based Intervention System
confidence: 99%
“…Since LLMs do not seem to share the same understanding of prompts as humans [49,69], a series of prompt-engineering techniques has been proposed [58,59,67,72]. Among them, one particular prompt strategy, Chain-of-Thought (CoT), has been shown to elicit strong reasoning abilities in LLMs by asking the model to produce intermediate reasoning steps (rationales) while solving a problem [38,40,66,72]. Wang et al. [67] sample multiple reasoning paths and take a majority vote over the resulting answers.…”
Section: In-Context Learning and Chain-of-Thought
confidence: 99%
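The sampling-and-voting scheme attributed to Wang et al. above (often called self-consistency) can be sketched in a few lines. This is a minimal illustration, not their implementation: `sample_reasoning_path` is a hypothetical stand-in for one sampled CoT completion from an LLM, simulated here by a mostly-correct random answerer.

```python
import random
from collections import Counter


def sample_reasoning_path(question, rng):
    """Stand-in for one sampled chain-of-thought completion from an LLM.

    A real implementation would call a language model with a CoT prompt
    at a non-zero temperature and parse the final answer out of the
    generated rationale. Here we simulate a model that answers "4" most
    of the time but occasionally errs.
    """
    rationale = "Two plus two gives four."
    answer = "4" if rng.random() < 0.8 else "5"
    return rationale, answer


def self_consistency(question, n_samples=15, seed=0):
    """Sample n reasoning paths and return the majority-vote answer."""
    rng = random.Random(seed)
    answers = [sample_reasoning_path(question, rng)[1] for _ in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]


print(self_consistency("What is 2 + 2?"))  # prints "4" (majority answer)
```

The point of the vote is that independent samples rarely agree on the same wrong answer, so marginalizing over diverse reasoning paths filters out individual faulty rationales.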
“…Such reasoning ability is not unique to LLMs, and several works have explored incorporating it into smaller models. Li et al. [38] used CoT-like rationales from LLMs to train smaller models on the joint task of generating a solution and explaining it. With multi-step reasoning, Fu et al. [26] specialize small models for a specific task, sacrificing generalizability in exchange for higher performance.…”
Section: In-Context Learning and Chain-of-Thought
confidence: 99%
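The joint solve-and-explain distillation idea described above amounts to putting the teacher LLM's rationale into the student's training target instead of discarding it. A minimal sketch of constructing one such fine-tuning pair follows; the prompt and target formats are illustrative assumptions, not those of Li et al.

```python
def make_distillation_example(question, rationale, answer):
    """Format one fine-tuning pair for a small student model.

    The student is trained to emit both the explanation and the final
    answer, so the teacher's chain-of-thought rationale becomes part of
    the supervision signal rather than being thrown away.
    """
    prompt = f"Question: {question}\nAnswer with reasoning:"
    target = f"{rationale} So the answer is {answer}."
    return {"prompt": prompt, "target": target}


example = make_distillation_example(
    "What is 2 + 2?",
    "Two plus two equals four.",
    "4",
)
```

Training the student on `target` given `prompt` (with any standard sequence-to-sequence or causal-LM objective) is what distinguishes this setup from plain answer-only distillation.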