2022
DOI: 10.48550/arxiv.2207.01848
Preprint
TabPFN: A Transformer That Solves Small Tabular Classification Problems in a Second

Cited by 21 publications (19 citation statements) | References 0 publications
“…For interactome signature discovery, GO terms were used to retain functionally coherent proteins per fragment, followed by NMF for fragment-protein matrix decomposition. The ML methodology included pretraining of the FFF descriptor with a blend of topological and physicochemical descriptors (41,69,70), binary classification with TabPFN models (40,71), and interpretability of promiscuity predictions using Shapley value analysis (42,72,73). A fully automated ML modeler is provided as part of the ligand discovery web resource.…”
Section: Methods Summary
confidence: 99%
“…In brief, we first labeled screened fragments as promiscuous (1) or nonpromiscuous (0), according to thresholds in protein-interaction counts. Then, we used a transformer-based ML model (TabPFN) to map a compound's FFF descriptor to a classification score (0 or 1) (40). TabPFN is a fully learned model that approximates Bayesian inference and requires no hyperparameter tuning, making it straightforward to obtain performant ML classifiers based on our chemoproteomics profiling data.…”
Section: Fragment Promiscuity Prediction
confidence: 99%
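The labeling step in the excerpt above (promiscuous vs. nonpromiscuous by a threshold on protein-interaction counts) can be sketched as below. The function name and threshold value are my own illustrative assumptions, not taken from the cited work; in the published pipeline the resulting labels are fed to a TabPFN classifier via its scikit-learn-style `fit`/`predict` interface.

```python
def label_promiscuity(interaction_counts, threshold=50):
    """Label each fragment as promiscuous (1) if its protein-interaction
    count meets the (hypothetical) threshold, else nonpromiscuous (0)."""
    return [1 if count >= threshold else 0 for count in interaction_counts]


# Toy interaction counts for four screened fragments (made-up data).
labels = label_promiscuity([12, 87, 50, 3])
# -> [0, 1, 1, 0]
```

These binary labels, paired with the FFF descriptor vectors, would then serve as the training set for the transformer-based classifier.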
“…Our work is inspired by [17], which studies ICL in synthetic settings and demonstrates that transformers can serve as complex classifiers through ICL. In parallel, [19] uses ICL as an AutoML (i.e. model-selection, hyperparameter-tuning) framework, plugging a dataset into the transformer and using it as a classifier for new test points.…”
Section: Related Work
confidence: 99%
“…In this section, we will discuss how ICL can be interpreted as an implicit model selection procedure, building on the formalism that the transformer is a learning algorithm. Following Figure 2 and prior works [17,24,19], a plausible assumption is that the transformer can implement ERM algorithms up to a certain accuracy. Model selection can then be formalized as choosing the right hypothesis class, so that running ERM on that class strikes a good bias-variance tradeoff during ICL.…”
Section: Interpreting In-Context Learning As a Model Selection Procedure
confidence: 99%
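The ERM-based model-selection view quoted above can be written out as follows; the notation here is a generic sketch of standard ERM, not the cited paper's exact formalism.

```latex
% ERM over a fixed hypothesis class \mathcal{H} on n in-context examples:
\hat{f}_{\mathcal{H}} \;=\; \arg\min_{f \in \mathcal{H}} \;
    \frac{1}{n} \sum_{i=1}^{n} \ell\bigl(f(x_i),\, y_i\bigr)

% Implicit model selection: pick the class \mathcal{H}_k from a family
% \{\mathcal{H}_1, \dots, \mathcal{H}_K\} whose ERM solution balances
% approximation error (bias) against estimation error (variance):
\hat{k} \;=\; \arg\min_{k} \;
    \mathbb{E}\Bigl[\ell\bigl(\hat{f}_{\mathcal{H}_k}(x),\, y\bigr)\Bigr]
```

Under this reading, a single ICL forward pass behaves as if it both runs ERM within each candidate class and selects among the classes.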