2022
DOI: 10.1039/d1cp05172e
Multitask deep learning with dynamic task balancing for quantum mechanical properties prediction

Abstract: Predicting quantum mechanical properties (QMPs) is of great importance for innovation in materials and chemistry science. Multitask deep learning models have been widely used for QMPs prediction. However, existing multitask...
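The abstract excerpt does not spell out how the dynamic task balancing works, so the snippet below is only a generic sketch of one common dynamic weighting scheme (loss-ratio weighting in the spirit of Dynamic Weight Averaging), not the authors' algorithm; the function name and the temperature parameter are illustrative assumptions.

```python
import numpy as np

def dynamic_task_weights(prev_losses, curr_losses, temperature=2.0):
    """Loss-ratio based task weights, in the spirit of Dynamic Weight Averaging.

    prev_losses / curr_losses: per-task training losses from the previous
    and current epoch. Tasks whose loss is falling more slowly get larger
    weights. This is a generic illustration, not the cited paper's rule.
    """
    prev = np.asarray(prev_losses, dtype=float)
    curr = np.asarray(curr_losses, dtype=float)
    n_tasks = curr.shape[0]
    # Relative descent rate: a ratio near 1 means the task is lagging
    ratios = curr / prev
    # Softmax over the ratios, rescaled so the weights sum to n_tasks
    exp = np.exp(ratios / temperature)
    return n_tasks * exp / exp.sum()

# Example: three tasks; the third task's loss barely moved, so it gets more weight
w = dynamic_task_weights([1.0, 0.5, 1.1], [0.9, 0.4, 1.08])
```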

Citations: Cited by 6 publications (4 citation statements)
References: 35 publications
“…GNNs cannot be fully trusted without understanding and verifying their working mechanisms, which limits their application in drug discovery scenarios. In this section, we conduct two visual explanation experiments to rationalize GIGN.…”
Section: Visual Explanations for GIGN (mentioning)
confidence: 99%
“…In the main task, the loss function is based on the extraction of affective dimensions from the FA, and tasks of the same category are learned at the same speed. Hence, the tasks of each category have the same degree of importance and receive the same attention, balancing the loss and avoiding seesaw phenomena [50][51][52]. MAE is used for the affective regression task and categorical cross-entropy loss for the classification task.…”
Section: Joint Loss Function (mentioning)
confidence: 99%
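A minimal sketch of the kind of joint loss described in this excerpt, assuming a PyTorch model with one regression head and one classification head; the fixed, equal task weights are an illustrative assumption rather than the cited paper's exact balancing scheme.

```python
import torch
import torch.nn.functional as F

def joint_loss(reg_pred, reg_target, cls_logits, cls_target,
               w_reg=1.0, w_cls=1.0):
    """MAE for the regression head plus cross-entropy for the classification head.

    w_reg and w_cls are task weights; fixing them to 1.0 is an illustrative
    simplification of the per-category balancing described in the excerpt.
    """
    loss_reg = F.l1_loss(reg_pred, reg_target)          # mean absolute error
    loss_cls = F.cross_entropy(cls_logits, cls_target)  # categorical cross-entropy
    return w_reg * loss_reg + w_cls * loss_cls

# Example with dummy tensors: 8 samples, 2 regression targets, 4 classes
loss = joint_loss(torch.randn(8, 2), torch.randn(8, 2),
                  torch.randn(8, 4), torch.randint(0, 4, (8,)))
```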
“…GNNs cannot be fully trusted without understanding and verifying their inner working mechanisms, which limits their application in drug discovery scenarios. 48,49 In this study, we conducted two visual explanation-related experiments to rationalize the SA-DDI. First, to investigate how the atom hidden vectors evolved during the learning process, we obtained the similarity coefficient between atom pairs by measuring the Pearson correlation coefficient for those hidden vectors.…”
Section: Visual Explanations for SA-DDI (mentioning)
confidence: 99%
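As a rough illustration of the similarity analysis described above, the sketch below computes pairwise Pearson correlation coefficients over a matrix of atom hidden vectors; the function name and input shape are assumptions, since the excerpt gives no implementation details.

```python
import numpy as np

def atom_pair_similarity(hidden_vectors):
    """Pairwise Pearson correlation between atom hidden vectors.

    hidden_vectors: (n_atoms, hidden_dim) array of learned atom
    representations taken from a trained GNN (hypothetical input shape;
    the cited work's exact extraction point is not shown in the excerpt).
    Returns an (n_atoms, n_atoms) matrix of correlation coefficients.
    """
    # np.corrcoef treats each row as one variable, so rows must be atoms
    return np.corrcoef(np.asarray(hidden_vectors, dtype=float))

# Example: 5 atoms with 16-dimensional hidden vectors
sim = atom_pair_similarity(np.random.randn(5, 16))
```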