2019
DOI: 10.1186/s12874-019-0848-z
A comparison of machine learning techniques for classification of HIV patients with antiretroviral therapy-induced mitochondrial toxicity from those without mitochondrial toxicity

Abstract: Background: Antiretroviral therapy (ART) has significantly reduced HIV-related morbidity and mortality. However, the therapeutic benefit of ART is often limited by delayed drug-associated toxicity. Nucleoside reverse transcriptase inhibitors (NRTIs) are the backbone of ART regimens. NRTIs compete with endogenous deoxyribonucleotide triphosphates (dNTPs) for incorporation into the elongating DNA chain, which produces their antiviral (and cytotoxic) effect. Thus, the efficacy of NRTIs could be affected by direct competition with…

Cited by 18 publications (15 citation statements)
References: 33 publications
“…The above workflow was run in a 1000-fold cross-validation scenario, as advised for example in [74], using Monte-Carlo [75] resampling to split the data set class-proportionally into two disjoint subsets, of which 2/3 of the original data served as the training data subset and the remaining 1/3 served as the test data subset. The size of the final set of variables (features) selected corresponded to the most frequent size of subsets “A” in the 1000 runs, and its members were the variables most frequently placed in the ABC subset “A” in the 1000 runs, in descending order of their occurrence in the retained sets.…”
Section: Methods (citation type: mentioning; confidence: 99%)
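The quoted workflow lends itself to a compact sketch. The following is a minimal Python illustration, not the authors' code: the cited study used computed ABC analysis to form subset “A” in each run, for which select_subset_A below is a hypothetical stand-in (a crude above-mean-importance cut), and all function and variable names are assumptions.

```python
# Minimal sketch of the quoted workflow: 1000 Monte-Carlo, class-proportional
# 2/3 vs 1/3 splits; the final feature set is sized by the most frequent size
# of ABC subset "A" across runs and filled with the most frequently selected
# features. select_subset_A is a hypothetical stand-in for the study's
# computed ABC analysis.
from collections import Counter

import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import train_test_split


def select_subset_A(X_tr, y_tr, feature_names):
    # Crude stand-in for ABC subset "A": keep features whose univariate
    # importance exceeds the mean importance in this training split.
    imp = mutual_info_classif(X_tr, y_tr, random_state=0)
    return [f for f, v in zip(feature_names, imp) if v > imp.mean()]


def monte_carlo_feature_selection(X, y, feature_names, n_runs=1000, seed=0):
    rng = np.random.RandomState(seed)
    size_counts = Counter()    # observed sizes of subset "A" per run
    member_counts = Counter()  # per-feature selection frequency
    for _ in range(n_runs):
        # Class-proportional (stratified) 2/3 vs 1/3 Monte-Carlo split.
        X_tr, _, y_tr, _ = train_test_split(
            X, y, train_size=2 / 3, stratify=y,
            random_state=rng.randint(2**31 - 1))
        subset_a = select_subset_A(X_tr, y_tr, feature_names)
        size_counts[len(subset_a)] += 1
        member_counts.update(subset_a)
    # Final size = most frequent subset-"A" size; members = the features
    # most often placed in subset "A", in descending order of occurrence.
    k = size_counts.most_common(1)[0][0]
    return [f for f, _ in member_counts.most_common(k)]
```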
“…KNN makes predictions for new instances by searching the entire training set for the K most similar instances (the neighbors) and summarizing the output variable across those K instances; in our study, the predicted class was the mode (most common) class value among the neighbors. To determine which K instances in the training dataset were most similar to a new input, we used the Euclidean distance measure (24,25).…”
Section: Minimum Redundancy and Maximum Relevance (mRMR) (citation type: mentioning; confidence: 99%)
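As a concrete reading of the quoted description, here is a minimal, self-contained Python sketch (an illustration, not the study's implementation): it ranks training instances by Euclidean distance to a new input and predicts the mode of the K nearest classes.

```python
# KNN classification rule as described above: find the K training instances
# nearest to a new input under Euclidean distance and predict the mode
# (most common) class among them.
from collections import Counter

import numpy as np


def knn_predict(X_train, y_train, x_new, k=5):
    # Euclidean distance from the new instance to every training instance.
    dists = np.sqrt(((X_train - x_new) ** 2).sum(axis=1))
    neighbors = np.argsort(dists)[:k]           # indices of the K nearest
    votes = Counter(y_train[i] for i in neighbors)
    return votes.most_common(1)[0][0]           # mode of the neighbors' classes

# Usage: X_train has shape (n, d), y_train has length n, x_new has shape (d,).
```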
“…Ridge regression creates a regression model penalized with the L2-norm, which shrinks the coefficient values, allowing coefficients with a minor contribution to the target variable to approach zero. On the other hand, LASSO creates a regression model penalized with the L1-norm, which shrinks coefficient values so that some with a minor impact on the target variable become exactly zero (23,24). Elastic net penalization creates a regression model with both the L1-norm and the L2-norm.…”
Section: Minimum Redundancy and Maximum Relevance (mRMR) (citation type: mentioning; confidence: 99%)
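The quoted distinction between the L2, L1, and combined penalties is easy to see on synthetic data. The sketch below uses scikit-learn as an assumed implementation (the quoted study's software is not stated in this snippet); only feature 0 drives the response, so LASSO and elastic net push the remaining coefficients to or near zero, while ridge merely shrinks them.

```python
# Contrast of the three penalties described above on synthetic data where
# only feature 0 matters.
import numpy as np
from sklearn.linear_model import ElasticNet, Lasso, Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = 3.0 * X[:, 0] + rng.normal(scale=0.5, size=200)

models = {
    "ridge (L2)": Ridge(alpha=1.0),
    "lasso (L1)": Lasso(alpha=0.1),
    "elastic net (L1+L2)": ElasticNet(alpha=0.1, l1_ratio=0.5),
}
for name, model in models.items():
    model.fit(X, y)
    # LASSO/elastic net zero out most coefficients; ridge only shrinks them.
    print(name, np.round(model.coef_, 3))
```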
“…Training of the machine-learned classifiers was performed in 1000-fold repeated random subsampling cross-validation experiments as advised, for example, in [80], using Monte-Carlo [81] resampling to split the data set class-proportionally into two disjoint subsets, of which two-thirds of the original data served as the training data subset, and the remaining third served as the test data subset. This was done using the R library "sampling" (https://cran.r-project.org/package=sampling [82]).…”
Section: Evaluation of the Utility of Olfactory and Metabolomic Information in Predicting Diabetes Risk (citation type: mentioning; confidence: 99%)
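The quoted work performed the class-proportional split with the R "sampling" package; as an illustrative stand-in only, here is one such stratified 2/3 vs 1/3 split in Python. Repeating it 1000 times with fresh random draws yields the repeated random subsampling (Monte-Carlo) design described above.

```python
# One class-proportional (stratified) 2/3 vs 1/3 split: sample the training
# fraction independently within each class so class proportions are preserved.
import numpy as np


def stratified_split(y, train_frac=2 / 3, seed=None):
    y = np.asarray(y)
    rng = np.random.default_rng(seed)
    train_idx = []
    for cls in np.unique(y):
        idx = np.flatnonzero(y == cls)  # all instances of this class
        rng.shuffle(idx)
        n_train = int(round(train_frac * idx.size))
        train_idx.extend(idx[:n_train])
    train_idx = np.sort(np.asarray(train_idx))
    test_idx = np.setdiff1d(np.arange(y.size), train_idx)
    return train_idx, test_idx
```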