2023
DOI: 10.21203/rs.3.rs-2476989/v1
Preprint

Tracking tumor alteration in glioma through serum fibroblast activation protein combined with image

Abstract: Purpose: Detecting tumor progression remains difficult in patients with glioma. Fibroblast activation protein (FAP) in gliomas has been shown to promote tumor progression. Glioma-circulating biomarkers have not yet been used in clinical practice. This study seeks to evaluate the feasibility of glioma detection using a serum FAP marker. Methods: We used an enzyme-linked immunosorbent assay (ELISA) to determine serum FAP levels in 87 gliomas. The relationship between preoperative serum FAP levels and postoperati…

Cited by 2 publications (3 citation statements)
References 31 publications
“…In Section 6.4, we show extensive visual results comparison between our HQ-SAM and SAM on COCO [31], DIS-test [34], HR-SOD [50], NDD20 [40], DAVIS [33], and YTVIS [46]. Comparison with Adapter Tuning Strategy In Table 10, we also compare our efficient token adaptation strategy to the recent Adapter Tuning [47]. We introduce lightweight adapters to ViT layers of SAM's encoder for encoder tuning and identify that this strategy leads to overfitting and its zero-shot performance on COCO decreases from 33.3 to 29.6.…”
Section: Appendix
confidence: 99%
“…Comparison to Adapter Tuning[47] in SAM's encoder using ViT-L based SAM. For the COCO dataset, we use the SOTA detector FocalNet-DINO[51] trained on the COCO dataset as our box prompt generator.…”
confidence: 99%
“…Therefore, recent video-based approaches typically freeze the encoder and adopt the CLIP representations along with additional learnable components. These components include a transformer-based temporal module (Ju et al 2022), new cross-frame communication attention for video temporal modeling and a video-specific prompting technique (Ni et al 2022), textual or visual prompts (Wang et al 2021), a spatial adaptation, temporal adaptation, and joint adaptation module (Yang et al 2023), that are learned while keeping the CLIP backbone frozen or adapting the CLIP encoders as well. These are designed to adapt CLIP while learning them quickly.…”
Section: Introduction
confidence: 99%