2020
DOI: 10.48550/arxiv.2005.13930
Preprint

Variational Autoencoder with Embedded Student-$t$ Mixture Model for Authorship Attribution

Abstract: Traditional computational authorship attribution describes a classification task in a closed-set scenario. Given a finite set of candidate authors and corresponding labeled texts, the objective is to determine which of the authors has written another set of anonymous or disputed texts. In this work, we propose a probabilistic autoencoding framework to deal with this supervised classification task. More precisely, we are extending a variational autoencoder (VAE) with embedded Gaussian mixture model to a Student-$t$ mixture model.
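
Read together with the title, the modelling change described in the abstract can be sketched as follows; the notation below is an assumption for illustration and is not copied from the paper. A VAE whose latent prior is an embedded Gaussian mixture replaces each Gaussian component with a multivariate Student-$t$ component, adding a per-component degrees-of-freedom parameter $\nu_k$ that controls tail heaviness, while the usual evidence lower bound is kept:

    p(z) = \sum_{k=1}^{K} \pi_k \, \mathrm{St}\!\left(z \mid \mu_k, \Sigma_k, \nu_k\right)
    \mathcal{L}(\theta, \phi) = \mathbb{E}_{q_\phi(z \mid x)}\!\left[\log p_\theta(x \mid z)\right] - \mathrm{KL}\!\left(q_\phi(z \mid x) \,\|\, p(z)\right)

Here $q_\phi(z \mid x)$ is the encoder, $p_\theta(x \mid z)$ the decoder, and $\pi_k$ the mixture weights; as $\nu_k \to \infty$ each Student-$t$ component recovers the corresponding Gaussian. How the author labels enter the objective in the supervised setting is not visible from the abstract alone.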

Cited by 1 publication (3 citation statements)
References 11 publications
“…To position our work in context, note that the encoder q(z | x) may be viewed as a variational approximation to the posterior p(z | x) defined by the decoder model p(x | z) and the prior p(z). Our work differs from (Mathieu et al., 2019; Chen et al., 2020; Boenninghoff et al., 2020) in that we consider fat-tailed variational approximations q(z | x) rather than priors p(z). Although (Abiri & Ohlsson, 2020) also considers a Student-t approximate posterior, our work involves a more general variational family which uses normalizing flows.…”
Section: Related Work (mentioning)
confidence: 99%
“…Fat-tails in variational inference. Recent work in variational autoencoders (VAEs) has considered relaxing Gaussian assumptions to heavier-tailed distributions (Mathieu et al., 2019; Chen et al., 2019; Boenninghoff et al., 2020; Abiri & Ohlsson, 2020). In (Mathieu et al., 2019), a Student-t prior distribution p(z) is considered over the latent code z in a VAE with Gaussian encoder q(z | x).…”
Section: Related Work (mentioning)
confidence: 99%
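
To make the distinction drawn in these citation snippets concrete, here is a minimal PyTorch sketch (illustrative values only; none of the code below comes from the cited papers) of the two places a Student-t can replace a Gaussian in a one-dimensional VAE latent space, with the KL term estimated by Monte Carlo because torch.distributions provides no closed-form KL involving the Student-t:

    import torch
    from torch.distributions import Normal, StudentT

    # Hypothetical encoder outputs for a single data point (illustration only).
    mu, sigma = torch.tensor(0.3), torch.tensor(0.8)
    df = torch.tensor(4.0)  # small degrees of freedom -> fat tails

    # (a) Fat-tailed prior with a Gaussian approximate posterior,
    #     the arrangement attributed to Mathieu et al. (2019) in the snippet above.
    q_a, p_a = Normal(mu, sigma), StudentT(df, 0.0, 1.0)

    # (b) Fat-tailed approximate posterior with a Gaussian prior,
    #     the arrangement the citing paper says it studies.
    q_b, p_b = StudentT(df, mu, sigma), Normal(0.0, 1.0)

    def mc_kl(q, p, n=10_000):
        # Reparameterized samples keep the estimate differentiable,
        # which a VAE training loop needs for the KL term.
        z = q.rsample((n,))
        return (q.log_prob(z) - p.log_prob(z)).mean()

    print("KL, fat-tailed prior:    ", mc_kl(q_a, p_a).item())
    print("KL, fat-tailed posterior:", mc_kl(q_b, p_b).item())

Because both Normal and StudentT in torch.distributions support reparameterized sampling, the same Monte Carlo estimator works for either placement; only the roles of q(z | x) and p(z) swap.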