This paper compares two approaches to teaching introductory programming by quantitatively analysing student assessments in a real classroom. The first approach emphasises the principles of object-oriented programming and design using Java from the very beginning. The second approach first teaches basic programming concepts (loops, branching, and use of libraries) using Python and then moves on to object-oriented programming using Java. Each approach was adopted for one academic year (2008-09 and 2009-10) with first-year undergraduate students. Quantitative analysis of the student assessments from the first semester of each year was then carried out, and the results are presented in this paper. They suggest that the latter approach leads to enhanced learning of introductory programming concepts by students.
In software engineering, optimal feature selection for software product lines (SPLs) is an important and complicated task, involving simultaneous optimization of multiple competing objectives in large but highly constrained search spaces. A feature model is the standard representation of the features of all possible products of an SPL, as well as the relationships among them. Recently, various multi-objective evolutionary algorithms have been used to search for valid product configurations. However, balancing the correctness and the diversity of the solutions obtained in a reasonable time has proved very challenging for these algorithms. To tackle this problem, this paper proposes a novel aggregation-based dominance (ADO) relation for Pareto-based evolutionary algorithms to direct the search towards high-quality solutions. Our method was tested on two widely used Pareto-based evolutionary algorithms, NSGA-II and SPEA2+SDE, and validated on nine different SPLs with up to 10,000 features and two real-world SPLs with up to 7 objectives. Our experiments show the effectiveness and efficiency of both ADO-based NSGA-II and SPEA2+SDE: (1) both algorithms generated 100% valid solutions for all feature models; (2) the performance of both algorithms, measured by the hypervolume metric, improved on 7/9 and 8/9 feature models respectively; and (3) even for the largest tested feature model, with 10,000 features, a single run of either algorithm required under 40 seconds on a standard desktop to find 100% valid solutions.
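For readers unfamiliar with the dominance relation that Pareto-based algorithms such as NSGA-II rely on, the sketch below shows plain Pareto dominance plus one plausible way an aggregated objective could break ties between mutually non-dominated solutions. This is an illustrative sketch only; the `weights` parameter and the weighted-sum tie-break are assumptions for the example and are not the paper's exact ADO operator.

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimisation):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def ado_dominates(a, b, weights):
    """Hypothetical aggregation-based comparison: fall back to a weighted
    sum of objectives when neither vector Pareto-dominates the other.
    Sketch only -- the actual ADO relation is defined in the paper."""
    if dominates(a, b):
        return True
    if dominates(b, a):
        return False
    agg = lambda v: sum(w * x for w, x in zip(weights, v))
    return agg(a) < agg(b)
```

For example, `(1, 3)` and `(2, 2)` are mutually non-dominated, so `ado_dominates` decides between them via the aggregated value.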
Converting natural language into structured query language can be unreliable when handling the nuances that are prevalent in natural language. Relational databases are not designed to understand linguistic nuance, which raises the question of whether such nuance must be handled at all. This paper examines an alternative approach to converting a natural language query into a Structured Query Language (SQL) statement suitable for searching a relational database. The process uses part-of-speech (POS) tagging to identify words that can indicate database tables and table columns. OpenNLP-based grammar files, together with additional configuration files, assist in the translation from natural language to query language. Once the relevant tables and columns have been identified, the final step is to construct the SQL statement.
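The pipeline described above (tag words by part of speech, map nouns to tables and columns, then emit SQL) can be sketched minimally as follows. A real system would use an OpenNLP POS tagger driven by grammar and configuration files; here a tiny hand-written lexicon stands in for the tagger, and the schema (`customers` table, `city` column) and the rule that a proper noun filters on `city` are invented purely for this demo.

```python
# Hand-written stand-ins for the POS tagger and schema metadata.
LEXICON = {"show": "VB", "me": "PRP", "customers": "NNS",
           "from": "IN", "london": "NNP"}
SCHEMA = {"customers": ["name", "city"]}  # hypothetical table/column metadata

def to_sql(query):
    """Sketch: POS-tag the query, pick a noun matching a table name,
    and treat any proper noun as a filter value on the 'city' column."""
    tokens = query.lower().rstrip("?.").split()
    tags = [(tok, LEXICON.get(tok, "NN")) for tok in tokens]
    # Plural nouns that match schema metadata are candidate tables.
    table = next(tok for tok, tag in tags if tag == "NNS" and tok in SCHEMA)
    # Proper nouns become filter values (column choice is hard-coded here).
    value = next((tok for tok, tag in tags if tag == "NNP"), None)
    if value:
        return f"SELECT * FROM {table} WHERE city = '{value.title()}'"
    return f"SELECT * FROM {table}"
```

For instance, `to_sql("Show me customers from London")` yields `SELECT * FROM customers WHERE city = 'London'`; choosing the filter column from configuration rather than hard-coding it is where the additional configuration files mentioned above would come in.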
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations: citations that display the context of the citation and describe whether the article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.