According to screenwriting theory, turning points (e.g., change of plans, major setback, climax) are crucial narrative moments within a screenplay: they define the plot structure and determine its progression and thematic units (e.g., setup, complications, aftermath). We propose the task of turning point identification in movies as a means of analyzing their narrative structure. We argue that turning points and the segmentation they provide can facilitate processing long, complex narratives, such as screenplays, for summarization and question answering. We introduce a dataset consisting of screenplays and plot synopses annotated with turning points and present an end-to-end neural network model that identifies turning points in plot synopses and projects them onto scenes in screenplays. Our model outperforms strong baselines based on state-of-the-art sentence representations and the expected position of turning points.

Recently divorced Meg Altman and her 11-year-old daughter Sarah have just purchased a four-story brownstone in New York City. The house's previous owner installed an isolated room used to protect the house's occupants from intruders. On the night the two move into the home, it is broken into by Junior, the previous owner's grandson; Burnham, an employee of the residence's security company; and Raoul, a ski-mask-wearing gunman.
In this paper we present two deep-learning systems that competed at SemEval-2018 Task 3 "Irony detection in English tweets". We design and ensemble two independent models, based on recurrent neural networks (Bi-LSTM), which operate at the word and character level, in order to capture both the semantic and syntactic information in tweets. Our models are augmented with a self-attention mechanism, in order to identify the most informative words. The embedding layer of our word-level model is initialized with word2vec word embeddings, pretrained on a collection of 550 million English tweets. We did not utilize any handcrafted features, lexicons or external datasets as prior information and our models are trained end-to-end using backpropagation on constrained data. Furthermore, we provide visualizations of tweets with annotations for the salient tokens of the attention layer that can help to interpret the inner workings of the proposed models. We ranked 2nd out of 42 teams in Subtask A and 2nd out of 31 teams in Subtask B. However, post-task-completion enhancements of our models achieve state-of-the-art results ranking 1st for both subtasks.
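The self-attention pooling described above can be sketched numerically: score each timestep of the (Bi-)LSTM hidden states, softmax the scores into weights, and take the weighted sum as the tweet representation. The shapes and the `tanh` scoring function below are illustrative assumptions, not the authors' exact layer.

```python
import numpy as np

def self_attention(H: np.ndarray, w: np.ndarray):
    """H: (T, d) hidden states, one row per token; w: (d,) learned scoring
    vector. Returns the pooled representation and the attention weights."""
    scores = np.tanh(H @ w)                  # (T,) one relevance score per token
    weights = np.exp(scores - scores.max())  # numerically stable softmax
    weights /= weights.sum()
    rep = weights @ H                        # (d,) weighted sum of hidden states
    return rep, weights

rng = np.random.default_rng(0)
H = rng.normal(size=(5, 8))   # e.g., 5 tokens, 8-dim hidden states
w = rng.normal(size=8)
rep, weights = self_attention(H, w)
```

The weights are what the paper visualizes: high-weight tokens are the ones the model considered most informative for irony.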
In this work, we present a model that incorporates Dialogue Act (DA) semantics in the framework of Recurrent Neural Networks (RNNs) for DA classification. Specifically, we propose a novel scheme for automatically encoding DA semantics via the extraction of salient keywords that are representative of the DA tags. The proposed model is applied to the Switchboard corpus and achieves a 1.7% (absolute) improvement in classification accuracy with respect to the baseline model. We demonstrate that the addition of discourse-level features enhances DA classification and makes the algorithm more robust: the proposed model does not require the preprocessing of dialogue transcriptions.
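One simple way to extract keywords that are "representative of the DA tags" is to score each word by its relative frequency within a tag versus its frequency overall, and keep the top-k per tag. The scoring rule below is an illustrative assumption; the paper's exact extraction scheme may differ.

```python
from collections import Counter

def salient_keywords(utterances, k=2):
    """utterances: list of (da_tag, token_list) pairs.
    Returns {tag: top-k words whose in-tag frequency most exceeds
    their corpus-wide frequency}."""
    per_tag, overall = {}, Counter()
    for tag, tokens in utterances:
        per_tag.setdefault(tag, Counter()).update(tokens)
        overall.update(tokens)
    grand_total = sum(overall.values())
    result = {}
    for tag, counts in per_tag.items():
        tag_total = sum(counts.values())
        scored = {w: (c / tag_total) / (overall[w] / grand_total)
                  for w, c in counts.items()}
        result[tag] = sorted(scored, key=scored.get, reverse=True)[:k]
    return result

data = [("question", ["what", "is", "that"]),
        ("question", ["what", "time", "is", "it"]),
        ("statement", ["that", "is", "fine"])]
kw = salient_keywords(data)
```

Keywords selected this way can then be fed to the classifier as discourse-level features alongside the RNN's learned representations.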
Most general-purpose extractive summarization models are trained on news articles, which are short and present all important information upfront. As a result, such models are biased by position and often perform a smart selection of sentences from the beginning of the document. When summarizing long narratives, which have complex structure and present information piecemeal, simple position heuristics are not sufficient. In this paper, we propose to explicitly incorporate the underlying structure of narratives into general unsupervised and supervised extractive summarization models. We formalize narrative structure in terms of key narrative events (turning points) and treat it as latent in order to summarize screenplays (i.e., extract an optimal sequence of scenes). Experimental results on the CSI corpus of TV screenplays, which we augment with scene-level summarization labels, show that latent turning points correlate with important aspects of a CSI episode and improve summarization performance over general extractive algorithms, leading to more complete and diverse summaries.

Victim: Mike Kimble, found at a body farm. Died 6 hours ago, unknown cause of death. CSI discover cow tissue in Mike's body. Cross-contamination is suggested. Probable cause of death: Mike's house has been set on fire. CSI finds blood: Mike was murdered; the fire was a cover-up. First suspects: Mike's fiancée, Jane, and her ex-husband, Russ. CSI finds photos in Mike's house of Jane's daughter, Jodie, posing naked. Mike is now a suspect of abusing Jodie. Russ allows CSI to examine his gun. CSI discovers that the bullet that killed Mike was made of frozen beef that melted inside him. They also find beef in Russ's gun. Russ confesses that he knew that Mike was abusing Jodie, so he confronted and killed him. CSI discovers that the naked photos were taken on a boat, which belongs to Russ. CSI discovers that it was Russ who was abusing his daughter, based on fluids found in his sleeping bag, and that he later killed Mike, who tried to help Jodie. Russ is given bail, since no jury would convict a protective father.
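At a high level, the abstract above combines a content score with narrative structure when extracting scenes. A minimal sketch of that idea, assuming (illustratively) fixed expected turning-point fractions and a simple linear mixing weight rather than the paper's latent-variable treatment:

```python
# Assumed turning-point fractions and mixing weight -- illustrative only.
TP_FRACTIONS = [0.10, 0.25, 0.50, 0.75, 0.90]

def select_scenes(content_scores, k=3, weight=0.5):
    """content_scores: per-scene relevance in [0, 1].
    Returns the indices of the k scenes with the best combined
    content + structure score, in screenplay order."""
    n = len(content_scores)
    scored = []
    for i, c in enumerate(content_scores):
        pos = i / (n - 1)
        # Structural score: closeness to the nearest expected turning point.
        struct = 1.0 - min(abs(pos - f) for f in TP_FRACTIONS)
        scored.append((weight * c + (1 - weight) * struct, i))
    return sorted(i for _, i in sorted(scored, reverse=True)[:k])

picked = select_scenes([0.2, 0.9, 0.1, 0.4, 0.8, 0.1, 0.3, 0.9, 0.2, 0.6], k=3)
```

The structural term counteracts the lead bias of news-trained extractors: scenes near turning points stay competitive even when they appear late in the screenplay.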