2023
DOI: 10.21203/rs.3.rs-2575309/v1
Preprint

Prenatal Genetic Analysis of Kidney Abnormalities

Abstract: Objective: To systematically analyze the genetic features of fetal renal abnormalities and the prenatal characteristics of 17q12 microdeletion syndrome. Methods: We retrospectively analyzed fetuses diagnosed with renal abnormalities from January 2016 to August 2022. Chromosomal testing, fetal renal abnormalities, and pregnancy outcomes were summarized in a descriptive analysis. Results: 141 fetuses (4.5%) showed abnormal renal development, of which 26 (26/141) had hyperechogenic kidneys (HCK); 14 (14/26) cases s…

Cited by 1 publication (1 citation statement)
References 25 publications
“…The ability to understand and generate natural language from visual information is a critical component of many real-world applications, including visual question answering (VQA), visual reasoning, and multimodal information retrieval. In recent years, the success of deep learning in natural language processing (NLP) has led to the development of large-scale vision-language models (VLMs) (Tan and Bansal, 2019; Li et al., 2021b; Kim et al., 2021a; Alayrac et al., 2022; Wang et al., 2022c; Shen et al., 2022b; Li et al., 2021a; Shen et al., 2022a; Jia et al., 2021) that leverage powerful neural network architectures to encode and decode multimodal information. However, state-of-the-art vision-language models like Flamingo-80B (Alayrac et al., 2022), BEIT-3-1.9B, and PaLI-17B can be computationally expensive and difficult to train, which has motivated researchers to explore ways of improving their efficiency and effectiveness.…”
Section: Introduction
Citation type: mentioning (confidence: 99%)