2020
DOI: 10.21203/rs.3.rs-16069/v4
Preprint

Identification of ATP2C1 mutations in the patients of Hailey-Hailey disease

Abstract: Background: Familial benign chronic pemphigus, also known as Hailey-Hailey disease (HHD), is a clinically rare bullous dermatosis. However, its mechanism has not been clarified. The study aims to detect novel mutations in exons of the ATP2C1 gene in HHD patients, and to explore the possible mechanism of HHD pathogenesis by examining the expression profiles of the hSPCA1, miR-203, p63, Notch1 and HKⅡ proteins in the skin lesions of HHD patients. Methods: Genomic DNA was extracted from peripheral blood of HHD patients. All exon…


Cited by 5 publications (8 citation statements; citing years 2022–2024) | References 1 publication
“…In light of emergence [50], larger models are however often preferable. We are already able to extract 73% and 47% of the text and code samples; even larger models like Codex [14] or StarCoder [34] might memorise even more data.…”
Section: Discussion (mentioning)
confidence: 99%
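The quoted study measures how much training text a code model reproduces verbatim. As a rough illustration only (not the authors' pipeline), a minimal sketch using the HuggingFace transformers API: prompt a model with a prefix of a candidate training sample and check whether greedy decoding reproduces the known suffix. The model choice (GPT-2 stands in for the code LMs studied), the sample text, and the prefix/suffix split are all assumptions.

```python
# Hedged sketch of a verbatim-memorisation probe (not the cited paper's exact
# pipeline): prompt the model with a training-sample prefix and test whether
# greedy decoding reproduces the known suffix.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")  # assumed stand-in model
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def is_memorised(prefix: str, suffix: str) -> bool:
    """True if greedy decoding of `prefix` reproduces `suffix` verbatim."""
    inputs = tokenizer(prefix, return_tensors="pt")
    target_len = len(tokenizer(prefix + suffix)["input_ids"])
    with torch.no_grad():
        out = model.generate(
            **inputs,
            max_length=target_len,            # stop once the suffix length is reached
            do_sample=False,                  # greedy: the model's most likely continuation
            pad_token_id=tokenizer.eos_token_id,
        )
    return tokenizer.decode(out[0]).startswith(prefix + suffix)

# Hypothetical sample split into a known prefix and suffix:
print(is_memorised("The quick brown fox jumps over", " the lazy dog"))
```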
“…This risk extends to pre-trained models, as some pre-training corpora, including the Pile [22], also contain code licenced under non-permissive licences [2]. The risk can be avoided by training models with code licenced under permissive licences (such as BSD-3 or MIT) or providing provenance information to trace the code back to its source so that the user of the output can abide by the original licence [23,34].…”
Section: Discussion (mentioning)
confidence: 99%
“…To address IRQ1, we evaluate GPT-2 on a simplified task, next token prediction, a widely used task for pre-training LCMs [18,22,37]. In this task, our objective is to find the minimum number of layers in the GPT-2 model necessary to achieve accurate predictions for the next token in code completion.…”
Section: IRQ 1: Number of Indispensable Layers (mentioning)
confidence: 99%
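The quoted IRQ asks how many transformer layers a model genuinely needs for next-token prediction. As a hedged illustration of both halves of that setup (the model name comes from the quote; the prompt, the choice of k, and the crude block-truncation approach are assumptions, not the study's method), one can score greedy next-token prediction on a prompt and then retry with GPT-2 cut down to its first k blocks:

```python
# Hedged sketch: next-token prediction with GPT-2, then the same check after
# truncating the model to its first k transformer blocks. The prompt and k are
# illustrative assumptions, not the cited study's setup.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def next_token(m, text: str) -> str:
    """Greedy next-token prediction: argmax over the final position's logits."""
    ids = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        logits = m(**ids).logits              # shape: (1, seq_len, vocab_size)
    return tokenizer.decode([int(logits[0, -1].argmax())])

prompt = "def add(a, b):\n    return a +"     # hypothetical code-completion prompt
print("12 layers:", repr(next_token(model, prompt)))

# Crude truncation: keep only the first k of GPT-2's 12 transformer blocks.
k = 6
model.transformer.h = model.transformer.h[:k]
model.config.n_layer = k
print(f"{k} layers:", repr(next_token(model, prompt)))
```

Comparing the two outputs over a benchmark of prompts, rather than a single one, is what would locate the minimum layer count the quote refers to.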
“…Limited by our devices, we only conduct experiments on two LCMs: GPT-2 and CodeGen, both containing hundreds of millions of parameters. Nevertheless, their architectures serve as prevalent templates for current LCMs, including newer models like StarCoder [18], which achieved state-of-the-art performance by directly adopting the GPT-2 architecture with increased parameters and training samples. Remarkably, while the model size is expanding, the fundamental issue of computation waste and unhelpful completions caused by the fixed inference process remains unaddressed.…”
Section: Threats to Validity (mentioning)
confidence: 99%
“…Many studies on these two diseases have revealed their close relationship; for example, hyperbilirubinemia is observed in erythropoietic porphyrias [32]. Regarding coronary artery disease and familial benign chronic pemphigus, mutations in exons of the ATP2C1 gene have been detected in patients with familial benign chronic pemphigus [33], and Nassa et al. found that ATP2C1 may induce coronary artery disease [34].…”
Section: Effectiveness (mentioning)
confidence: 99%