Proceedings of the Eighteenth Conference on Computational Natural Language Learning 2014
DOI: 10.3115/v1/w14-1605
Learning to Rank Answer Candidates for Automatic Resolution of Crossword Puzzles

Abstract: In this paper, we study the impact of relational and syntactic representations for an interesting and challenging task: the automatic resolution of crossword puzzles. Automatic solvers are typically based on two answer retrieval modules: (i) a web search engine, e.g., Google, Bing, etc. and (ii) a database (DB) system for accessing previously resolved crossword puzzles. We show that learning to rank models based on relational syntactic structures defined between the clues and the answer can improve both module…
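The abstract describes learning-to-rank models that reorder retrieved answer candidates. The paper itself trains kernel-based SVM rerankers over relational syntactic structures; as a much simpler illustration of the pairwise ranking idea, the sketch below learns a linear scorer from (correct, incorrect) candidate pairs with a ranking perceptron. All function names and the toy features are illustrative assumptions, not the authors' method.

```python
# Minimal pairwise ranking perceptron (illustrative sketch; the paper uses
# SVM rerankers with tree kernels over clue/answer syntactic structures).

def pairwise_perceptron(pairs, n_features, epochs=10, lr=0.1):
    """Learn weights w such that w.x_pos > w.x_neg for every (pos, neg) pair.

    pairs: list of (pos_features, neg_features) tuples, where pos is the
    feature vector of a correct candidate and neg of an incorrect one.
    """
    w = [0.0] * n_features
    for _ in range(epochs):
        for pos, neg in pairs:
            # Margin of the correct candidate over the incorrect one.
            margin = sum(wi * (p - n) for wi, p, n in zip(w, pos, neg))
            if margin <= 0:  # mis-ranked (or tied): push w toward the right order
                w = [wi + lr * (p - n) for wi, p, n in zip(w, pos, neg)]
    return w

def score(w, x):
    """Rank candidates by this dot-product score, highest first."""
    return sum(wi * xi for wi, xi in zip(w, x))
```

At test time, candidates from both retrieval modules would be scored with `score` and sorted descending; a real reranker would use richer similarity features between the clue and each candidate.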

Cited by 13 publications (17 citation statements)
References 23 publications
“…Since the beginning of the LiMoSINe project, the platform has been used for providing robust preprocessing for a variety of high-level tasks. Thus, we have recently shown how structural representations, extracted with our pipeline, improve multilingual opinion mining on YouTube (Severyn et al, 2015) or crossword puzzle resolution (Barlacchi et al, 2014).…”
Section: Conclusion and Future/Ongoing Work
confidence: 99%
“…The rule-based module and the dictionary module are also mentioned in his work. The tree kernel is used to rerank the candidates, as proposed by Barlacchi et al. (2014). For Chinese language culture tasks, such as couplet generation and poem generation, a statistical machine translation (SMT) framework has been proposed to generate Chinese couplets and classic Chinese poetry (He et al., 2012; Zhou et al., 2009; Jiang and Zhou, 2008).…”
Section: Related Work
confidence: 99%
“…The rule-based module and the dictionary module are also mentioned in his work. The tree kernel is used to rerank the candidates, as proposed by Barlacchi et al. (2014), for the automatic resolution of crossword puzzles.…”
Section: Related Work
confidence: 99%
“…Then, both query and candidates are represented by shallow syntactic structures (generated by running a set of NLP parsers) and traditional similarity features, which are fed to a kernel-based reranker. Hereafter, we give a brief description of our models for clue reranking; the reader can refer to our previous work (Barlacchi et al., 2014a; Barlacchi et al., 2014b) for more specific details.…”
Section: Reranking with Kernels
confidence: 99%
“…In (Barlacchi et al., 2014a), we proposed the BM25 retrieval model to generate clue lists, which were further refined by applying our reranking models. The latter promote to the top the most similar clues, which are likely associated with the same answer as the query clue.…”
Section: Clue Retrieval and Reranking
confidence: 99%
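The citation statement above mentions BM25 retrieval over a database of previously solved clues. As a minimal sketch of that retrieval step, the function below scores tokenized clues against a query clue with the standard Okapi BM25 formula; the tokenization, parameter values, and toy data are illustrative assumptions, not the authors' exact configuration.

```python
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Okapi BM25 score of each tokenized document (clue) against a query.

    query: list of query tokens; docs: list of token lists.
    Returns one score per document; rank clues by descending score.
    """
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    # Document frequency of each distinct query term.
    df = {t: sum(1 for d in docs if t in d) for t in set(query)}
    scores = []
    for d in docs:
        tf = Counter(d)
        s = 0.0
        for t in query:
            if t not in tf:
                continue
            idf = math.log(1 + (N - df[t] + 0.5) / (df[t] + 0.5))
            norm = tf[t] + k1 * (1 - b + b * len(d) / avgdl)
            s += idf * tf[t] * (k1 + 1) / norm
        scores.append(s)
    return scores
```

In the pipeline described above, the top-scoring retrieved clues would then be passed to the kernel-based reranker rather than used directly.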