2016
DOI: 10.48550/arxiv.1609.09353
Preprint
Deep Multi-Species Embedding

Cited by 5 publications (6 citation statements) | References 15 publications
“…Interest in joint species distribution modeling with neural networks has only grown as deep learning has come to maturity [89]. Convolutional neural networks in particular have created a new opportunity: the ability to extract features from spatial arrays of environmental features [43,51] instead of using hand-selected environmental feature vectors.…”
Section: Machine Learning Methods
confidence: 99%
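The excerpt above contrasts hand-selected environmental feature vectors with convolutional feature extraction over spatial arrays of environmental data. A minimal, hypothetical sketch of that idea in plain NumPy (a single random convolution kernel over one raster layer — an illustration, not the model from the cited work):

```python
import numpy as np

def conv2d_valid(patch, kernel):
    """Single-channel 2D 'valid' cross-correlation, as used in CNN layers."""
    ph, pw = patch.shape
    kh, kw = kernel.shape
    out = np.zeros((ph - kh + 1, pw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(patch[i:i + kh, j:j + kw] * kernel)
    return out

# A 9x9 raster patch of one environmental layer (e.g. elevation) around a site.
rng = np.random.default_rng(0)
patch = rng.normal(size=(9, 9))
kernel = rng.normal(size=(3, 3))       # in a real CNN this kernel is learned
features = np.maximum(conv2d_valid(patch, kernel), 0.0)  # ReLU activation
print(features.shape)  # (7, 7): a learned spatial feature map, not a hand-picked vector
```

A trained species distribution model would stack several such learned layers and pool the resulting feature maps before prediction.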
“…There is a considerable amount of domain knowledge and ecological theory that would ideally be incorporated into SDMs [85]. This might include knowledge about species dispersal [25,52,72,126], spatial patterns of community composition [44,49,103], and constraints on species ranges (e.g. cliffs, water) [47,65,69,126].…”
Section: Incorporating Ecological Theory and Expert Knowledge
confidence: 99%
“…Species coexistence in nature follows complex, unknown patterns that machine learning should be able to capture thanks to its high degree of expressivity (i.e. the capacity of a model to express complex relations) (Balamurugan et al., 2019; Chen et al., 2017; Harris, 2015; Raghu et al., 2017; Tang et al., 2018). Here, we ex[…] G maximizes the likelihood that the discriminative model makes a mistake (Goodfellow et al., 2014).…”
Section: Introduction
confidence: 99%
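The excerpt references the adversarial objective of Goodfellow et al. (2014), in which the generator G is trained so that the discriminator mislabels G's samples as real. A toy numeric sketch of that generator loss (the discriminator outputs below are made-up values for illustration only):

```python
import numpy as np

def bce(p, y):
    """Binary cross-entropy between a scalar probability p and label y."""
    eps = 1e-12  # guard against log(0)
    return -(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

# D(x) is the discriminator's probability that x is real. The generator is
# trained against the label y = 1 on its own samples, i.e. G "maximizes the
# likelihood that the discriminative model makes a mistake".
d_on_fake = 0.1                      # D is confident the sample is fake
gen_loss = bce(d_on_fake, 1.0)       # large loss -> strong gradient for G
d_on_fake_later = 0.9                # later in training, G fools D
gen_loss_later = bce(d_on_fake_later, 1.0)  # small loss once D is fooled
print(gen_loss > gen_loss_later)  # True
```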
“…Multi-label learning (MLL) learns from examples each associated with multiple labels simultaneously, and aims to derive a predictive model that can assign a set of relevant labels to an unseen instance [30,44]. Over the past decade, multi-label learning has been widely employed to learn from data with rich semantics, such as multimedia content annotation [38,33], text categorization [29,27], music emotion analysis [21,35], and bioinformatics analysis [3]. However, in practice, obtaining ground-truth labels for training datasets is costly due to expensive and time-consuming manual annotation.…”
Section: Introduction
confidence: 99%
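The multi-label setting described above — assigning a set of relevant labels to one unseen instance — can be sketched with a binary-relevance model: one independent sigmoid scorer per label. The weights here are random placeholders standing in for a trained model; the shapes and threshold are assumptions for the sake of the example:

```python
import numpy as np

# Binary-relevance sketch: one independent linear scorer per label.
rng = np.random.default_rng(1)
n_features, n_labels = 4, 3
W = rng.normal(size=(n_labels, n_features))  # placeholder for learned weights
b = np.zeros(n_labels)

def predict_labels(x, threshold=0.5):
    """Return the set of relevant labels for one unseen instance x."""
    probs = 1.0 / (1.0 + np.exp(-(W @ x + b)))  # per-label sigmoid probability
    return set(np.flatnonzero(probs >= threshold))

x = rng.normal(size=n_features)
labels = predict_labels(x)  # a subset of {0, 1, 2}, possibly empty
```

Binary relevance ignores label correlations; richer MLL methods model the joint label structure, which is exactly what makes partially labeled training data a problem.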