In this paper, the authors propose a new approach to automatic extractive text summarization based on a saving-energy function. The first step uses two extraction techniques, sentence scoring and similarity, to eliminate redundant sentences without losing the theme of the text. The second step optimizes the results of the previous layer with a metaheuristic based on the Bee Algorithm: the objective function maximizes the sum of similarities between the sentences of the candidate summary (to preserve the theme of the text) while minimizing the sum of their scores (to increase the summarization rate). This optimization also produces candidate summaries in which the order of the sentences differs from the original text. The third and final layer selects the best summary from the candidates generated by the bee optimization; for this the authors opted for voting by simple majority.
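The objective described above, maximizing inter-sentence similarity while minimizing total sentence score, can be sketched as a single fitness function. This is a minimal illustration under assumed data structures (a similarity matrix and a score list); the function names and the unweighted difference are assumptions, not the authors' exact formula.

```python
import itertools

def objective(candidate, similarity, score):
    """Fitness of a candidate summary for the bee optimization.

    candidate  -- list of sentence indices in the candidate summary
    similarity -- 2-D matrix, similarity[i][j] between sentences i and j
    score      -- list, score[i] is the extraction score of sentence i

    Higher similarity keeps the theme; lower total score favors a
    shorter summary (higher summarization rate).
    """
    # Sum similarity over every unordered pair of sentences in the summary.
    sim_sum = sum(similarity[i][j]
                  for i, j in itertools.combinations(candidate, 2))
    # Sum the individual sentence scores.
    score_sum = sum(score[i] for i in candidate)
    # Maximize similarity, minimize scores (illustrative weighting).
    return sim_sum - score_sum
```

A bee (candidate summary) with higher fitness would be retained across iterations of the algorithm.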
This article is a comparative study of two bio-inspired, swarm-intelligence approaches to automatic text summarization: Social Spiders and Social Bees. The authors apply two extraction techniques in sequence, sentence scoring and similarity, to eliminate redundant sentences without losing the theme of the text. The optimization step then uses one of the bio-inspired approaches to improve the results of the previous step. Its objective function maximizes the sum of similarities between the sentences of the candidate summary (to preserve the theme of the text) and minimizes the sum of their scores (to increase the summarization rate); this optimization also produces candidate summaries in which the order of the sentences differs from the original text. The third and final step selects the best summary from all candidate summaries generated by the optimization layer; for this the authors opted for voting by simple majority.
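The final simple-majority vote over candidate summaries can be sketched as follows. This is an illustrative implementation, not the authors' code: it assumes each candidate summary can be represented as a hashable value (e.g. a tuple of sentence indices) and picks the one produced most often across optimization runs.

```python
from collections import Counter

def vote_best_summary(candidate_summaries):
    """Select the summary chosen most often (simple-majority voting).

    candidate_summaries -- iterable of hashable candidates, e.g. tuples
    of sentence indices, one per optimization run.
    """
    counts = Counter(candidate_summaries)
    # most_common(1) returns [(candidate, count)]; ties fall back to
    # first-seen order, a detail this sketch does not refine.
    best, _ = counts.most_common(1)[0]
    return best
```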
Hadoop MapReduce is one solution for processing large volumes of data: it lets the authors analyze and process data by distributing the computation across a large set of machines. Process mining provides an important bridge between data mining and business process analysis; its techniques mine information from event logs. First, the work mines small patterns from log traces; these patterns are the workflows of the execution traces of a business process. The authors' work improves on existing techniques, which mine only a single general workflow; here the workflow represents the general traces of two web applications. The patterns are represented by finite state automata, and the final model is the combination of only two types of patterns, represented by regular expressions. Second, the authors compute these patterns in parallel and then combine them using MapReduce: in the Map step they mine patterns from execution traces, and in the Reduce step they combine these small patterns. The results are promising: they show that the approach is scalable, general, and precise, and that it reduces execution time through the use of the Hadoop MapReduce framework.
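A mined pattern represented as a finite state automaton can be encoded as a small transition table. The sketch below shows one such automaton for the pattern (ab)*; the transition table and function name are illustrative assumptions, not taken from the paper.

```python
def accepts_ab_star(trace):
    """Finite state automaton accepting the pattern (ab)*.

    Two states: 0 (start, accepting) and 1; reading 'a' moves 0 -> 1,
    reading 'b' moves 1 -> 0. Any other transition rejects the trace.
    """
    transitions = {(0, 'a'): 1, (1, 'b'): 0}
    state = 0
    for symbol in trace:
        key = (state, symbol)
        if key not in transitions:
            return False  # no transition defined: trace rejected
        state = transitions[key]
    return state == 0  # accept only if we end in the accepting state
```

An execution trace such as "abab" is accepted, while an incomplete trace such as "aba" is rejected.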
Hadoop MapReduce was introduced to address the processing of big data in parallel: with this framework the authors analyze and process large volumes of data. It distributes the work over two main steps, Map and Reduce, across a cluster (a large set of machines). They apply the MapReduce framework to problems in process mining, which provides a bridge between data mining and business process analysis; this technique mines information from process traces. Process mining itself has two steps: correlation definition and process inference. The work first mines patterns, the workflows of the process, from execution traces; each pattern represents the work or history of one part of the process. These small patterns are represented by finite state automata or their regular expressions; to simplify the process, the authors use only two patterns, represented by the regular expressions (ab)* and (ab*c)*, and the general representation of the process is the combination of the small mined patterns. Second, they compute and combine the patterns using the Hadoop MapReduce framework in two general steps: in the Map step they mine small patterns (small models) from the business process, and in the Reduce step they combine those models. The authors use the business processes of two web applications, Skype and Viber. The overall results show that the parallel distributed process using the Hadoop MapReduce framework is scalable and minimizes execution time.
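The Map/Reduce split described above can be sketched locally in plain Python: the Map step emits, for each execution trace, the patterns it matches, and the Reduce step combines those per-trace results into one model. This is a minimal single-machine illustration of the idea, assuming traces are strings over the alphabet {a, b, c}; a real deployment would run the same two steps on a Hadoop cluster.

```python
import re
from collections import defaultdict

# The two mined patterns named in the paper, as compiled regular expressions.
PATTERNS = {
    "(ab)*": re.compile(r"(ab)*"),
    "(ab*c)*": re.compile(r"(ab*c)*"),
}

def map_step(trace):
    """Map: emit a (pattern, 1) pair for each pattern the trace matches."""
    return [(name, 1) for name, rx in PATTERNS.items()
            if rx.fullmatch(trace)]

def reduce_step(mapped_pairs):
    """Reduce: combine the small per-trace results into one model,
    here a count of matching traces per pattern."""
    combined = defaultdict(int)
    for name, count in mapped_pairs:
        combined[name] += count
    return dict(combined)

# Hypothetical execution traces, for illustration only.
traces = ["abab", "ac", "abbc", ""]
pairs = [pair for t in traces for pair in map_step(t)]
model = reduce_step(pairs)
```

On these example traces, "abab" matches (ab)*, "ac" and "abbc" match (ab*c)*, and the empty trace matches both, so the reduced model counts two traces for the first pattern and three for the second.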