2015
DOI: 10.1093/database/bav081

Deep Question Answering for protein annotation

Abstract: Biomedical professionals have access to a huge amount of literature, but when they use a search engine, they often have to deal with too many documents to efficiently find the appropriate information in a reasonable time. In this perspective, question-answering (QA) engines are designed to display answers, which were automatically extracted from the retrieved documents. Standard QA engines in literature process a user question, then retrieve relevant documents and finally extract some possible answers out of t…
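The abstract describes the standard three-step QA pipeline: process the user question, retrieve relevant documents, and extract candidate answers. The sketch below is only a rough illustration of that generic flow, not the EAGLi system described in the paper; the corpus, function names, and scoring heuristics are all invented for the example.

```python
# Minimal sketch of a generic three-step QA pipeline (question processing,
# document retrieval, answer extraction). Illustrative only; the corpus and
# heuristics are toy stand-ins, not the paper's method.

import re
from collections import Counter

# Toy corpus standing in for retrieved literature (invented examples).
CORPUS = {
    "doc1": "TP53 is a tumor suppressor gene involved in DNA repair and apoptosis.",
    "doc2": "BRCA1 participates in homologous recombination and DNA damage response.",
    "doc3": "Insulin regulates glucose uptake in muscle and adipose tissue.",
}

def process_question(question):
    """Step 1: normalize the question into lowercase keyword tokens."""
    tokens = re.findall(r"[a-z0-9]+", question.lower())
    stopwords = {"what", "is", "the", "of", "a", "in", "which", "does", "do"}
    return [t for t in tokens if t not in stopwords]

def retrieve_documents(keywords, top_k=2):
    """Step 2: rank documents by a simple keyword-overlap score."""
    scores = Counter()
    for doc_id, text in CORPUS.items():
        doc_tokens = set(re.findall(r"[a-z0-9]+", text.lower()))
        scores[doc_id] = sum(1 for kw in keywords if kw in doc_tokens)
    return [doc_id for doc_id, score in scores.most_common(top_k) if score > 0]

def extract_answers(keywords, doc_ids):
    """Step 3: return sentences from retrieved documents that mention a keyword."""
    answers = []
    for doc_id in doc_ids:
        for sentence in re.split(r"(?<=\.)\s+", CORPUS[doc_id]):
            if any(kw in sentence.lower() for kw in keywords):
                answers.append(f"{doc_id}: {sentence}")
    return answers

if __name__ == "__main__":
    question = "Which gene is involved in DNA repair?"
    keywords = process_question(question)
    docs = retrieve_documents(keywords)
    for answer in extract_answers(keywords, docs):
        print(answer)
```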

Citations: cited by 22 publications (23 citation statements). References: 22 publications.
“…In the meantime, prediction methods can help close the sequence-annotation gap [87,91], but with respect to deep annotations of function, in silico methods remain as limited as their experimental ‘teachers’ [39,92,93]. Machine learning plays critical roles in capturing protein function from the vast biomedical data resources [39,89].…”
Section: Precision Medicine: Proteins and Disease Mechanisms (mentioning)
confidence: 99%
“…We are only aware of three other biomedical QA services, as surveyed in (Bauer and Berleant, 2012), namely askHERMES (Cao et al, 2011), EAGLi (Gobeill et al, 2015) and HONQA (Cruchet et al, 2009). However, none of these systems performs robustly across most question types.…”
Section: Related Work (mentioning)
confidence: 99%
“…EAGLi (Gobeill et al, 2015) provides answers based on concepts from the Gene Ontology (GO). Even when no answers are found for a question, EAGLi always outputs a list of relevant publications.…”
Section: Related Work (mentioning)
confidence: 99%