2023
DOI: 10.1101/2023.11.11.566725
Preprint
GexMolGen: Cross-modal Generation of Hit-like Molecules via Large Language Model Encoding of Gene Expression Signatures

Jiabei Cheng,
Xiaoyong Pan,
Yi Fang
et al.

Abstract: Designing hit-like molecules from gene expression signatures takes into account multiple targets and complex biological effects, enabling the discovery of multi-target drugs for complex diseases. Traditional methods relying on similarity searching against a database are limited by the quality and size of the databases. Instead, multimodal deep learning offers the potential to overcome this bottleneck. Additionally, the recent development of foundation models provides a new perspective to understand gene expres…

Cited by 1 publication (1 citation statement)
References 75 publications (199 reference statements)
“…These models, collectively referred to as single-cell large language models (scLLMs), have attracted significant attention and subsequent research [17][18][19][20][21][22][23][24], which has investigated their reusability, extendibility, and applicability. For example, Kasia Z. Kedzierska et al. [17] benchmarked scGPT and Geneformer in zero-shot settings and found that these models did not perform well in such scenarios.…”
Section: Introduction
confidence: 99%