In the Indian legal system, different courts publish their proceedings every month for future reference by legal experts and the general public. Analyzing and processing the information stored in these lengthy, complex legal documents requires extensive manual labor and time. Automatic legal document processing overcomes the drawbacks of manual processing and can help the common person better understand the legal domain. In this paper, we explore recent advances in legal text processing and provide a comparative analysis of the approaches used. We divide the approaches into three classes: NLP-based, deep learning-based, and KBP-based. We place special emphasis on the KBP approach, as we believe it can handle the complexities of the legal domain well. We conclude by discussing possible future research directions for legal document analysis and processing.
Search engines are widely used to extract desired information from the World Wide Web. Their efficiency depends on how quickly search results can be retrieved and whether those results reflect the desired information. For a given query, relevant information is scattered across multiple web pages, and search engines return many web links as output, leaving users with the puzzle of identifying and selecting the relevant ones. To address this issue, we propose a query recommendation approach that uses facet mining techniques to obtain relevant search results from the web. Facets are semantically related words that define the multiple aspects of a query. We extract these aspects from Wikipedia pages, which are considered a trustworthy resource on the web, and our proposed system refines the results using various text processing techniques and the lexical resource WordNet. In this paper we discuss our approach, its implementation, and the results obtained, and conclude with a discussion of future research directions.
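The abstract does not spell out the refinement algorithm, so the following is only a minimal illustrative sketch of the WordNet refinement step, assuming NLTK with the WordNet corpus installed (`nltk.download('wordnet')`). The function name `facet_candidates` and the `max_facets` parameter are hypothetical, and a real system would additionally mine facets from Wikipedia page structure as the paper describes.

```python
# Hypothetical sketch: collecting WordNet-related words as candidate
# facets for a query term. Not the paper's actual pipeline.
from nltk.corpus import wordnet as wn

def facet_candidates(term: str, max_facets: int = 10) -> list[str]:
    """Collect semantically related words (synonyms, hypernyms, hyponyms)
    that can serve as candidate facets for the query term."""
    facets = set()
    for synset in wn.synsets(term):
        # Synonyms from the same synset.
        facets.update(lemma.name().replace("_", " ") for lemma in synset.lemmas())
        # Broader (hypernym) and narrower (hyponym) concepts as extra aspects.
        for related in synset.hypernyms() + synset.hyponyms():
            facets.update(lemma.name().replace("_", " ") for lemma in related.lemmas())
    facets.discard(term)
    return sorted(facets)[:max_facets]

if __name__ == "__main__":
    print(facet_candidates("court"))
```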
Calculating the similarity between two legal documents in order to find similar legal judgments is an important challenge in legal information retrieval. Efficiently computing this similarity by extending widely used information retrieval and search engine techniques has practical applications in tasks such as locating pertinent prior cases for a specific case document. Automated document retrieval systems are core components of today's decision support systems and search engines for reducing information overload, and investigating methods to improve their performance is an active area of research. This paper surveys methods for searching the common law system for cases with similar outcomes. A legal decision support system is intended to increase efficiency by helping stakeholders, including judges and attorneys, find related rulings promptly. To prepare arguments, a lawyer typically has to review earlier decisions that are comparable to (or pertinent to) the current case, examining the judgment database to discover similar judgments. Legal rulings are complex in nature and refer to other judgments, so proper techniques are needed for quality analysis of judgments and correct deductions from them. Several types of similarity measures are analyzed, including all-term-based similarity, legal-term similarity, co-citation, and bibliographic coupling. Experimental findings show that the legal-term similarity approach outperforms all-term cosine similarity, and that the bibliographic coupling method improves performance over the co-citation approach. With a proper analysis of these methods, documents can be compared, similar legal documents can be retrieved based on their similarity patterns, and meaningful deductions can be drawn from them.
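Two of the measures compared above are straightforward to sketch. The following is an illustrative Python fragment, assuming scikit-learn is available: all-term cosine similarity over TF-IDF vectors, and bibliographic coupling as the overlap between the citation sets of two judgments. The Jaccard normalization in `bibliographic_coupling` is a choice made for this sketch; the paper may normalize differently.

```python
# Illustrative sketch of two judgment-similarity measures:
# all-term cosine similarity and bibliographic coupling.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def term_similarity(doc_a: str, doc_b: str) -> float:
    """Cosine similarity between TF-IDF vectors of two judgment texts."""
    matrix = TfidfVectorizer(stop_words="english").fit_transform([doc_a, doc_b])
    return float(cosine_similarity(matrix[0], matrix[1])[0, 0])

def bibliographic_coupling(cites_a: set[str], cites_b: set[str]) -> float:
    """Jaccard overlap of two judgments' citation sets: documents that
    cite many of the same precedents are likely to be similar."""
    if not cites_a or not cites_b:
        return 0.0
    return len(cites_a & cites_b) / len(cites_a | cites_b)
```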
Due to the digital revolution, most books and newspaper articles are now available online. Particularly for children and students, prolonged screen time can harm eyesight and attention span, so summarization algorithms are needed to present long web content in an easily digestible form. The proposed methodology uses a term frequency-inverse document frequency (TF-IDF) driven model, in which the document summary is generated based on each word in a corpus. Each sentence is scored by its TF-IDF weight, and the summary is produced at a fixed ratio to the length of the original text. Expert summaries from a data set are used to measure the precision and recall of the proposed approach with the ROUGE metric.
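As a concrete illustration of the described pipeline, here is a minimal sketch assuming scikit-learn. Sentence splitting on "." is deliberately naive, and `summarize` and `ratio` are illustrative names rather than the paper's; the resulting summaries would then be scored against expert summaries using a ROUGE implementation.

```python
# Minimal sketch of a TF-IDF sentence-scoring summarizer: score each
# sentence by its total TF-IDF weight and keep a fixed fraction.
from sklearn.feature_extraction.text import TfidfVectorizer

def summarize(text: str, ratio: float = 0.3) -> str:
    """Keep the top `ratio` fraction of sentences by summed TF-IDF
    weight, returned in their original order."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    if len(sentences) < 2:
        return text
    tfidf = TfidfVectorizer().fit_transform(sentences)
    scores = tfidf.sum(axis=1).A1  # total TF-IDF weight per sentence
    keep = max(1, int(len(sentences) * ratio))
    top = sorted(range(len(sentences)), key=lambda i: scores[i], reverse=True)[:keep]
    return ". ".join(sentences[i] for i in sorted(top)) + "."
```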