2023
DOI: 10.1101/2023.01.03.521284
Preprint

The genetic architecture of the human skeletal form

Abstract: The human skeletal form underlies our ability to walk on two legs, but unlike standing height, the genetic basis of limb lengths and skeletal proportions is less well understood. Here we applied a deep learning model to 31,221 whole body dual-energy X-ray absorptiometry (DXA) images from the UK Biobank (UKB) to extract 23 different image-derived phenotypes (IDPs) that include all long bone lengths as well as hip and shoulder width, which we analyzed while controlling for height. All skeletal proportions are hi…

Cited by 10 publications (30 citation statements)
References: 135 publications
“…We first restricted the dataset to individuals of white British ancestry, applied standard variant and sample QC, and analyzed 12.1 million common bi-allelic SNPs with minor allele frequency > 0.1% 1 (Methods: Genetic QC). Next, as the bulk imaging data from the UKB comprised DXA images reflecting scans of different body parts, we used a deep learning approach 15 to subset the imaging dataset to only AP view knee scans. We then removed individuals with outlier image resolutions or poor-quality DXA scans, and padded images to a standard size for processing (see Methods: Image segmentation, phenotype measurement and quality control).…”
Section: Results
confidence: 99%
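
The padding step described in the statement above is straightforward to reproduce. Below is a minimal sketch, assuming grayscale DXA images loaded as NumPy arrays and a hypothetical target size of 1024×512 pixels; the actual standard size used by the authors is not stated in the excerpt.

```python
import numpy as np

def pad_to_standard(img: np.ndarray, target_h: int = 1024, target_w: int = 512) -> np.ndarray:
    """Zero-pad a 2D grayscale image so it matches a fixed (target_h, target_w) size.

    Assumes the image is no larger than the target in either dimension; images
    failing that check would be flagged as resolution outliers upstream.
    """
    h, w = img.shape
    if h > target_h or w > target_w:
        raise ValueError(f"Image {img.shape} exceeds target ({target_h}, {target_w})")
    pad_top = (target_h - h) // 2
    pad_bottom = target_h - h - pad_top
    pad_left = (target_w - w) // 2
    pad_right = target_w - w - pad_left
    return np.pad(img, ((pad_top, pad_bottom), (pad_left, pad_right)), mode="constant")
```

Centered padding (rather than padding only one edge) keeps the anatomy roughly aligned across images, which is an illustrative design choice here rather than a documented detail of the pipeline.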
“…DXA images in DICOM format were first organized by anatomy following the manifest files located in each directory output by the imaging machine. DXA scans were subjected to further quality control following the methods described in Kun et al., 2022 15 . Following initial data cleaning, AP view knee DXA scans were converted from DICOM to JPG format using the pydicom library 35 .…”
Section: Methods
confidence: 99%
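
The DICOM-to-JPG conversion step can be reproduced with pydicom plus Pillow. The snippet below is a sketch rather than the authors' exact pipeline; the min-max rescaling to 8 bits is an assumption, since DXA pixel data are typically stored at higher bit depth and the original normalization is not described in the excerpt.

```python
from pathlib import Path

import numpy as np
import pydicom
from PIL import Image

def dicom_to_jpg(dicom_path: Path, jpg_path: Path) -> None:
    """Read a DXA scan in DICOM format and write it out as an 8-bit JPG."""
    ds = pydicom.dcmread(dicom_path)
    pixels = ds.pixel_array.astype(np.float32)
    # Rescale intensities to 0-255 (assumed normalization; bit depth varies by scanner).
    pixels -= pixels.min()
    if pixels.max() > 0:
        pixels *= 255.0 / pixels.max()
    Image.fromarray(pixels.astype(np.uint8)).save(jpg_path, format="JPEG")

# Example usage with hypothetical file names:
# dicom_to_jpg(Path("knee_ap.dcm"), Path("knee_ap.jpg"))
```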
“…Each individual in the UK Biobank had a DXA image folder containing images of up to 8 different body parts. To verify the body-part labels, which were defined from the file names, we built a convolutional neural network (CNN) to sort the images by body part using a multiclass classification model, following a previously published protocol (36). After sorting and removal of images, we were left with 42,228 full-skeleton X-rays (Table S3).…”
Section: Methods
confidence: 99%
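
The body-part sorting step is a standard multiclass image-classification task. A minimal PyTorch sketch is shown below; the architecture, the eight-class output, and the input size are illustrative assumptions and do not reflect the previously published protocol cited as (36).

```python
import torch
import torch.nn as nn

class BodyPartCNN(nn.Module):
    """Small CNN that assigns a DXA image to one of n_classes body-part labels."""

    def __init__(self, n_classes: int = 8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, H, W) grayscale DXA images
        h = self.features(x).flatten(1)
        return self.classifier(h)  # raw logits; pair with nn.CrossEntropyLoss

# Example: classify a batch of 4 images resized to a hypothetical 256x256.
# model = BodyPartCNN()
# predicted_part = model(torch.randn(4, 1, 256, 256)).argmax(dim=1)
```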
“…S8), suggesting that an additional round of training was useful to reduce the variation in manual annotation to a minimum. The left panel displays the scatter plot of the left-to-right arm ratio from two imaging visits using HRNet, sourced from (36). The right panel shows the scatter plot of the same ratio from two imaging visits but using our optimized HRNet model.…”
Section: A Deep Learning Model To Identify Pelvic Landmarks On DXA Scans
confidence: 99%
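
The visit-to-visit comparison of the left-to-right arm ratio described above amounts to a simple reproducibility check. The sketch below assumes arm lengths have already been derived from the predicted landmark coordinates; the Pearson correlation is an illustrative summary statistic, not necessarily the measure plotted in the cited figure.

```python
import numpy as np

def arm_ratio_reproducibility(left_v1, right_v1, left_v2, right_v2) -> float:
    """Pearson correlation of left/right arm-length ratios across two imaging visits."""
    ratio_v1 = np.asarray(left_v1) / np.asarray(right_v1)
    ratio_v2 = np.asarray(left_v2) / np.asarray(right_v2)
    return float(np.corrcoef(ratio_v1, ratio_v2)[0, 1])
```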