Online judges are systems designed for the reliable evaluation of algorithm source code submitted by users, which is then compiled and tested in a homogeneous environment. Online judges are becoming popular in various applications, so we review the state of the art for these systems. We classify them according to their principal objectives into systems supporting the organization of competitive programming contests, enhancing education and recruitment processes, facilitating the solving of data mining challenges, online compilers, and development platforms integrated as components of other custom systems. Moreover, we introduce a formal definition of an online judge system and summarize the common evaluation methodology supported by such systems. Finally, we briefly discuss the Optil.io platform as an example of an online judge system proposed for solving complex optimization problems, and we analyze the results of a competition conducted using this platform. The competition proved that online judge systems, strengthened by crowdsourcing concepts, can be successfully applied to accurately and efficiently solve complex industrial- and science-driven challenges.
Comment: Authors' pre-print of the article accepted for publication in ACM Computing Surveys (accepted on 19-Sep-2017).
Updated: January 2, 2016. The main objective of the presented research is to design a platform for the continuous evaluation of optimization algorithms using crowdsourcing techniques. The resulting platform, called Optil.io, runs in the cloud using the platform-as-a-service model and allows researchers from all over the world to collaboratively solve computational problems. This approach has already proven very successful for data mining problems through web services such as Kaggle. During our project we adapted this concept to solving computational problems that require the implementation of software. To achieve this, we designed an online judge system that receives algorithmic solutions in the form of source code from a crowd of programmers, compiles them, executes them in a homogeneous run-time environment, and objectively evaluates them using a set of test cases. The system was verified in internal experiments at the Poznan University of Technology and is now ready to be presented to a wider audience.
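To make the evaluation pipeline described above more concrete, the following is a minimal sketch of a judge loop: a submission (assumed to be already compiled to a hypothetical ./solution binary) is run on a set of placeholder test cases under a per-test time limit and receives a verdict. The test data, limits, and verdict names are illustrative assumptions, not the actual Optil.io configuration.

import subprocess

TIME_LIMIT_S = 2          # per-test time limit (assumed value)
TEST_CASES = [            # (input text, expected output) pairs -- placeholders
    ("3\n1 2 3\n", "6\n"),
    ("2\n10 -4\n", "6\n"),
]

def judge(binary_path="./solution"):
    """Run the compiled submission on every test case and return a verdict."""
    for stdin_data, expected in TEST_CASES:
        try:
            result = subprocess.run(
                [binary_path], input=stdin_data, capture_output=True,
                text=True, timeout=TIME_LIMIT_S,
            )
        except subprocess.TimeoutExpired:
            return "Time Limit Exceeded"
        if result.returncode != 0:
            return "Runtime Error"
        if result.stdout != expected:
            return "Wrong Answer"
    return "Accepted"

if __name__ == "__main__":
    print(judge())

A real judge additionally sandboxes the process and enforces memory limits; this sketch only shows the compile-run-compare skeleton common to such systems.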
Motivation: There are very few methods for de novo genome assembly based on the overlap graph approach. This approach is considered to give more exact results than the so-called de Bruijn graph approach, but at the cost of much greater running time and memory usage. It is not uncommon that assembly methods built on the overlap graph model fail to process larger data sets, mainly due to the memory limitations of a computer. For this reason, mainly de Bruijn-based assembly methods, which are fast and fairly accurate, have been developed in recent decades. However, the latter methods can fail for longer or more repetitive genomes, as they decompose reads into shorter fragments and lose part of the information. An efficient assembler that can process big data sets using the overlap graph model is still sought.
Results: We propose a new genome-scale de novo assembler based on the overlap graph approach, designed for short-read sequencing data. The method, ALGA, incorporates several new ideas resulting in more exact contigs produced in a short time. These ideas include the construction of a sparse but quite informative graph (see the sketch below), a graph reduction that includes a procedure referring to the problem of the minimum spanning tree of a local subgraph, and a graph traversal coupled with simultaneous analysis of the contigs stored so far. Unusually for genome assembly, the algorithm is almost parameter-free, with only one optional parameter to be set by the user. ALGA was compared with nine state-of-the-art assemblers in tests on genome-scale sequencing data obtained from real experiments on six organisms differing in size, coverage, GC content, and repetition rate. ALGA produced the best results in terms of overall quality of genome reconstruction, understood as a good balance between genome coverage, accuracy, and length of the resulting sequences. The algorithm is one of the tools used for data processing in the ongoing national project Genomic Map of Poland.
Availability: ALGA is available at http://alga.put.poznan.pl.
Supplementary information: Supplementary material is available at Bioinformatics online.
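For readers unfamiliar with the overlap graph model that ALGA builds on, the following is a highly simplified illustration: reads become vertices, and a directed edge is added when a sufficiently long suffix of one read matches a prefix of another. The exact-match criterion and the toy read set are assumptions made for brevity; ALGA's actual graph construction, sparsification, and MST-based reduction are far more involved.

def overlap(a, b, min_len=3):
    """Length of the longest suffix of a equal to a prefix of b (>= min_len), else 0."""
    for length in range(min(len(a), len(b)), min_len - 1, -1):
        if a.endswith(b[:length]):
            return length
    return 0

def build_overlap_graph(reads, min_len=3):
    """Directed graph: reads are vertices, suffix-prefix overlaps of >= min_len are edges."""
    graph = {r: [] for r in reads}
    for a in reads:
        for b in reads:
            if a != b:
                olen = overlap(a, b, min_len)
                if olen:
                    graph[a].append((b, olen))
    return graph

reads = ["ATGGCC", "GGCCTA", "CCTAAC"]     # toy short reads
for src, edges in build_overlap_graph(reads).items():
    print(src, "->", edges)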
Chessboard and chess piece recognition is a computer vision problem that has not yet been efficiently solved. Digitization of a chess game state from a picture of a chessboard is a task typically performed by humans or with the aid of specialized chessboards and pieces. However, those solutions are neither easy nor convenient. To solve this problem, we propose a novel algorithm for digitizing chessboard configurations. We designed a method of chessboard recognition and piece detection that is resistant to lighting conditions and the angle at which images are captured, and that works correctly with numerous chessboard styles. Detecting the board and recognizing chess pieces are crucial steps of board state digitization. The algorithm achieves 95% accuracy (compared to 60% for the best alternative) for positioning the chessboard in an image, and almost 95% for chess piece recognition. Furthermore, the sub-process of detecting straight lines and finding lattice points performs extraordinarily well, achieving over 99.5% accuracy (compared to 74% for the best alternative).
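The line-detection and lattice-point step can be illustrated with a standard Hough-transform pipeline, sketched below. This is not the authors' algorithm; it only shows the general idea of extracting board lines from an edge map and estimating lattice points as intersections of roughly perpendicular lines. OpenCV and NumPy are assumed to be installed, and "board.jpg" is a placeholder path.

import cv2
import numpy as np

img = cv2.imread("board.jpg", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(img, 50, 150)                      # edge map for the Hough transform
lines = cv2.HoughLines(edges, 1, np.pi / 180, 150)   # (rho, theta) line parameters

def intersection(l1, l2):
    """Intersection point of two lines given in (rho, theta) form, or None if parallel."""
    (r1, t1), (r2, t2) = l1, l2
    a = np.array([[np.cos(t1), np.sin(t1)],
                  [np.cos(t2), np.sin(t2)]])
    b = np.array([r1, r2])
    if abs(np.linalg.det(a)) < 1e-6:                 # nearly parallel lines
        return None
    x, y = np.linalg.solve(a, b)
    return int(round(x)), int(round(y))

# Collect candidate lattice points from pairs of roughly perpendicular lines.
points = []
if lines is not None:
    params = [tuple(l[0]) for l in lines]
    for i in range(len(params)):
        for j in range(i + 1, len(params)):
            if abs(abs(params[i][1] - params[j][1]) - np.pi / 2) < np.pi / 18:
                p = intersection(params[i], params[j])
                if p is not None:
                    points.append(p)

print(f"{len(points)} candidate lattice points found")

A practical system would additionally cluster nearby intersections and fit the 9x9 lattice of a chessboard to them; that refinement is omitted here.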
Next-generation sequencers produce billions of short DNA sequences in a massively parallel manner, which poses a great computational challenge for accurately reconstructing a genome sequence de novo from these short sequences. Here, we propose the GRASShopPER assembler, which follows the overlap-layout-consensus approach. It uses an efficient GPU implementation for sequence alignment during the graph construction stage and a greedy hyper-heuristic algorithm at the fork detection stage. A two-part fork detection method allows us to identify repeated fragments of a genome and to reconstruct them without misassemblies. The assemblies of data sets from the bacterium Candidatus Microthrix, the nematode Caenorhabditis elegans, and human chromosome 14 were evaluated with the gold-standard tool QUAST. In comparison with other assemblers, GRASShopPER provided contigs that covered the largest part of the genomes and, at the same time, maintained good values of other metrics, e.g., NG50 and misassembly rate.
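The underlying idea of fork detection can be sketched as follows: a vertex of the overlap graph with more than one strong outgoing overlap is flagged as a fork, which typically signals a repeated genome fragment where a greedy extension must stop or branch. GRASShopPER's actual two-part method and its hyper-heuristic are considerably more sophisticated; the adjacency list, overlap threshold, and read identifiers below are illustrative assumptions only.

from typing import Dict, List, Tuple

def find_forks(graph: Dict[str, List[Tuple[str, int]]], min_overlap: int = 30):
    """Return vertices with more than one sufficiently long outgoing overlap."""
    forks = []
    for vertex, edges in graph.items():
        strong = [v for v, olen in edges if olen >= min_overlap]
        if len(strong) > 1:
            forks.append((vertex, strong))
    return forks

# Example adjacency list: read id -> list of (successor read id, overlap length)
toy_graph = {
    "r1": [("r2", 45), ("r3", 40)],   # fork: two strong continuations
    "r2": [("r4", 50)],
    "r3": [("r4", 35)],
    "r4": [],
}
print(find_forks(toy_graph))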