2019
DOI: 10.1016/j.ins.2018.12.041
An anatomy for neural search engines

Cited by 22 publications (5 citation statements)
References 13 publications
“…In fact, attempts to do away with retrieval based on sparse vector representations date back to latent semantic analysis from the 1990s [Deerwester et al, 1990]. A more recent example is the work of Nakamura et al [2019]: in a standard design where BM25-based first-stage retrieval feeds DRMM for reranking, the authors experimented with replacing first-stage retrieval with approximate nearest-neighbor search based on representations from a deep averaging network [Iyyer et al, 2015]. Unfortunately, the end-to-end effectiveness was much worse, but this was "pre-BERT", prior to the advent of the latest transformer techniques.…”
Section: Open Research Questions
confidence: 99%
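The pipeline this excerpt describes, a cheap dense first stage feeding candidates to a neural reranker, can be sketched roughly. Everything below is an illustrative assumption rather than the paper's actual models: the tiny hand-built vocabulary, the 3-dimensional vectors, and the brute-force cosine search standing in for approximate nearest-neighbor retrieval.

```python
import numpy as np

# Illustrative sketch only: a hand-built "embedding" vocabulary stands
# in for pretrained word vectors, and brute-force cosine search stands
# in for approximate nearest-neighbor (ANN) retrieval.
vocab = {
    "neural":  np.array([1.0, 0.0, 0.0]),
    "search":  np.array([0.0, 1.0, 0.0]),
    "engine":  np.array([0.0, 1.0, 1.0]),
    "ranking": np.array([1.0, 1.0, 0.0]),
    "bm25":    np.array([0.0, 0.0, 1.0]),
    "cat":     np.array([-1.0, 0.0, 0.0]),
}

def dan_embed(text):
    """Input layer of a deep averaging network: the mean of the word
    embeddings (a full DAN adds feed-forward layers on top)."""
    vecs = [vocab[w] for w in text.lower().split() if w in vocab]
    return np.mean(vecs, axis=0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def first_stage(query, docs, k=2):
    """Dense first-stage retrieval: rank documents by cosine similarity
    of averaged embeddings and keep the top k as reranker candidates."""
    q = dan_embed(query)
    return sorted(docs, key=lambda d: cosine(q, dan_embed(d)),
                  reverse=True)[:k]

docs = ["neural search engine", "bm25 ranking", "cat"]
candidates = first_stage("neural ranking", docs)
print(candidates)  # the off-topic "cat" document is filtered out
```

In the standard design the excerpt contrasts this with, the first stage would instead be BM25 over an inverted index, and the surviving candidates would then be rescored by a stronger model such as DRMM.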
“…• Neural models: use shallow or deep neural networks to rank search results in response to a query. They learn language representations from raw text, bridging the gap between the query and document vocabulary [9].…”
Section: Fundamentals Of Information Retrieval
confidence: 99%
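The "vocabulary gap" this excerpt mentions can be made concrete with a toy sketch: exact term matching fails on synonyms, while a learned shared representation can still relate them. The two-dimensional vectors below are hand-picked illustrative assumptions, not any real model's embeddings.

```python
# Hedged sketch of the query/document vocabulary gap. Exact term
# matching misses synonyms; a shared (toy, hand-built) embedding
# space can still match them.
embeddings = {
    "car": (1.0, 0.1),
    "automobile": (0.9, 0.2),
    "banana": (0.0, 1.0),
}

def term_match(query, doc):
    """Lexical overlap: fails when wording differs."""
    return len(set(query.split()) & set(doc.split())) > 0

def embed_sim(a, b):
    """Cosine similarity in the toy embedding space."""
    va, vb = embeddings[a], embeddings[b]
    dot = sum(x * y for x, y in zip(va, vb))
    na = sum(x * x for x in va) ** 0.5
    nb = sum(x * x for x in vb) ** 0.5
    return dot / (na * nb)

print(term_match("car", "automobile"))  # → False: no shared terms
print(embed_sim("car", "automobile") > embed_sim("car", "banana"))  # → True
```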
“…This can be due to poor-quality content being added and the daily refresh interval of what users view as up-to-date [18]. The relevance scoring function for documents relevant to a query cannot always be run for all documents in a large-scale search engine, because the size of the corpus makes the computational cost prohibitive [19]. Both arguments have a huge impact on news relevance marking.…”
Section: Relevance Marking
confidence: 99%
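The cost argument in this excerpt is the usual motivation for a cheap candidate-generation stage: the expensive relevance function is only evaluated on documents surfaced by a fast filter. A minimal sketch, assuming a toy three-document corpus and a Jaccard overlap standing in for a costly neural scorer:

```python
from collections import defaultdict

# Toy corpus; in a web-scale engine the expensive relevance function
# cannot be evaluated for every document, so a cheap first stage
# (here, an inverted index over terms) narrows the candidate set.
docs = {
    0: "breaking news about markets",
    1: "sports news today",
    2: "recipe for pasta",
}

index = defaultdict(set)
for doc_id, text in docs.items():
    for term in text.split():
        index[term].add(doc_id)

def candidates(query):
    """Cheap first stage: any document containing a query term."""
    ids = set()
    for term in query.split():
        ids |= index[term]
    return ids

def expensive_score(query, text):
    """Stand-in for a costly relevance function (e.g. a neural model),
    run only on the candidate set."""
    q = set(query.split())
    d = set(text.split())
    return len(q & d) / len(q | d)

query = "markets news"
cand = candidates(query)  # only 2 of the 3 docs reach the scorer
ranked = sorted(cand, key=lambda i: expensive_score(query, docs[i]),
                reverse=True)
print(ranked)
```

The inverted-index lookup touches only the posting lists for the query terms, so its cost grows with the number of matching documents rather than with the full corpus size.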