Shake Them All is a popular "Wallpaper" application with millions of downloads on Google Play. At installation, this application is granted permission to (1) access the Internet (to update wallpapers) and (2) use the device microphone (to change the background in response to noise changes). With these permissions, the application could silently record user conversations and upload them to a remote server. To gain confidence in how Shake Them All actually processes what it records, it is necessary to build a precise analysis tool that tracks the flow of any sensitive data from its source point to any sink, especially when source and sink lie in different components. Since Android applications may leak private data carelessly or maliciously, we propose IccTA, a static taint analyzer that detects privacy leaks among components in Android applications. IccTA goes beyond state-of-the-art approaches by supporting inter-component detection. By propagating context information among components, IccTA improves the precision of the analysis. IccTA outperforms existing tools on two benchmarks for ICC-leak detectors, DroidBench and ICC-Bench. Moreover, our approach detects 534 ICC leaks in 108 apps from MalGenome and 2,395 ICC leaks in 337 apps from a set of 15,000 Google Play apps.
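To make the source-to-sink idea concrete, below is a minimal sketch of forward taint propagation over a toy statement model. The statement representation and the source/sink signatures (MediaRecorder.start, HttpURLConnection.write) are illustrative assumptions for this sketch; IccTA itself operates on Jimple and handles ICC, aliasing, and component lifecycles with far greater precision.

```java
import java.util.*;

// A minimal sketch of source-to-sink taint tracking. The statement model
// and the source/sink lists are simplified illustrations, not IccTA's code.
public class TaintSketch {

    // A statement assigns the result of a call to a variable (def),
    // optionally passing another variable as argument (arg); either may be null.
    record Stmt(String def, String call, String arg) {}

    static final Set<String> SOURCES = Set.of("MediaRecorder.start");      // hypothetical source signature
    static final Set<String> SINKS = Set.of("HttpURLConnection.write");    // hypothetical sink signature

    // Forward propagation: a variable becomes tainted when defined by a
    // source call or assigned from an already-tainted variable; a leak is
    // reported when a tainted variable reaches a sink call.
    static List<Stmt> findLeaks(List<Stmt> stmts) {
        Set<String> tainted = new HashSet<>();
        List<Stmt> leaks = new ArrayList<>();
        for (Stmt s : stmts) {
            if (SOURCES.contains(s.call())) {
                tainted.add(s.def());
            } else if (s.arg() != null && tainted.contains(s.arg())) {
                if (SINKS.contains(s.call())) leaks.add(s);
                else if (s.def() != null) tainted.add(s.def()); // taint flows through the call
            }
        }
        return leaks;
    }

    public static void main(String[] args) {
        List<Stmt> program = List.of(
                new Stmt("audio", "MediaRecorder.start", null),    // "audio" becomes tainted
                new Stmt("buf", "encode", "audio"),                // taint propagates to "buf"
                new Stmt(null, "HttpURLConnection.write", "buf")); // tainted data reaches a sink
        findLeaks(program).forEach(s -> System.out.println("leak at sink call: " + s.call()));
    }
}
```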
Large Software Product Lines (SPLs) are common in industry, creating a need for practical solutions to test them. To this end, t-wise testing can drastically reduce the number of product configurations to test. Current t-wise approaches for SPLs are restricted to small values of t, and they fail to provide means to finely control the configuration process. Means for automatically generating and prioritizing product configurations for large SPLs are therefore required. This paper proposes (a) a search-based approach capable of generating product configurations for large SPLs, forming a scalable and flexible alternative to current techniques, and (b) prioritization algorithms for any set of product configurations. Both techniques employ a similarity heuristic, as sketched below. Their ability is assessed in an empirical study through a comparison with state-of-the-art tools, focusing on both product configuration generation and prioritization. The results demonstrate that existing t-wise tools and prioritization techniques fail to handle large SPLs, whereas the proposed techniques are both effective and scalable. Additionally, the experiments show that the similarity heuristic is a viable alternative to t-wise testing.
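The following sketch illustrates one common instantiation of a similarity heuristic for prioritization, assuming each product configuration is represented as the set of features it selects. The greedy "pick the configuration farthest from everything chosen so far" rule and the Jaccard distance are illustrative choices, not necessarily the paper's exact algorithm.

```java
import java.util.*;

// A minimal sketch of similarity-driven prioritization of product configurations.
public class SimilarityPrioritization {

    // Jaccard distance: 1 - |A ∩ B| / |A ∪ B|; 0 for identical feature sets.
    static double distance(Set<String> a, Set<String> b) {
        Set<String> inter = new HashSet<>(a); inter.retainAll(b);
        Set<String> union = new HashSet<>(a); union.addAll(b);
        return union.isEmpty() ? 0.0 : 1.0 - (double) inter.size() / union.size();
    }

    // Greedily order configurations so each next pick maximizes its minimum
    // distance to the already-picked ones, covering dissimilar (and thus
    // interaction-rich) products first.
    static List<Set<String>> prioritize(List<Set<String>> configs) {
        List<Set<String>> remaining = new ArrayList<>(configs);
        List<Set<String>> ordered = new ArrayList<>();
        ordered.add(remaining.remove(0)); // arbitrary seed configuration
        while (!remaining.isEmpty()) {
            Set<String> best = Collections.max(remaining,
                    Comparator.comparingDouble((Set<String> c) ->
                            ordered.stream().mapToDouble(o -> distance(o, c)).min().orElse(0)));
            remaining.remove(best);
            ordered.add(best);
        }
        return ordered;
    }

    public static void main(String[] args) {
        List<Set<String>> configs = List.of(
                Set.of("A", "B"), Set.of("A", "B", "C"), Set.of("D", "E"));
        prioritize(configs).forEach(System.out::println); // {A,B} first, then {D,E}
    }
}
```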
Software Product Lines (SPLs) are difficult to validate due to the combinatorics induced by variability across their features, which leads to a combinatorial explosion in the number of derivable products. Exhaustive testing in such a large space of products is infeasible. One option is to test SPLs by generating test cases that cover all possible T-feature interactions (T-wise coverage). T-wise testing dramatically reduces the number of test products while ensuring reasonable SPL coverage. However, automatic generation of test cases satisfying T-wise coverage with SAT solvers raises two issues: encoding the SPL model and the T-wise criteria into a set of formulas acceptable to the solver, and satisfying those formulas, which fails when they are processed "all-at-once". We propose a scalable toolset using Alloy to automatically generate test cases satisfying T-wise coverage from SPL models. We define strategies to split T-wise combinations into solvable subsets, and we design and compute metrics to evaluate these strategies on AspectOPTIMA, a concrete transactional SPL.
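Below is a minimal sketch of the divide-and-solve idea: enumerate all 2-wise (pairwise) feature combinations, then split them into fixed-size chunks that can each be handed to a solver separately instead of "all-at-once". The feature names and chunk size are illustrative placeholders; the paper's strategies over Alloy are more elaborate (e.g., binary splitting, incremental growth).

```java
import java.util.*;

// A minimal sketch of splitting T-wise combinations into solvable subsets (T = 2).
public class TwiseSplit {

    // All valued pairs (f1=v1, f2=v2) over boolean features: each pair of
    // distinct features yields four combinations to cover.
    static List<String> pairwiseCombinations(List<String> features) {
        List<String> combos = new ArrayList<>();
        for (int i = 0; i < features.size(); i++)
            for (int j = i + 1; j < features.size(); j++)
                for (boolean v1 : new boolean[]{true, false})
                    for (boolean v2 : new boolean[]{true, false})
                        combos.add(features.get(i) + "=" + v1 + " & " + features.get(j) + "=" + v2);
        return combos;
    }

    // Split the combinations into subsets small enough for one solver call each.
    static List<List<String>> split(List<String> combos, int chunkSize) {
        List<List<String>> chunks = new ArrayList<>();
        for (int i = 0; i < combos.size(); i += chunkSize)
            chunks.add(combos.subList(i, Math.min(i + chunkSize, combos.size())));
        return chunks;
    }

    public static void main(String[] args) {
        // Hypothetical feature names; 3 features yield 3 pairs x 4 = 12 combinations.
        List<String> combos = pairwiseCombinations(List.of("Logging", "Recovery", "Nested"));
        for (List<String> chunk : split(combos, 4))
            System.out.println("solver run over: " + chunk);
    }
}
```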
Context: Static analysis approaches have been proposed to assess the security of Android apps by searching for known vulnerabilities or actual malicious code. The literature thus offers a large body of works, each of which attempts to tackle one or more of the several challenges that program analyzers face when dealing with Android apps. Objective: We aim to provide a clear view of the state-of-the-art works that statically analyze Android apps, from which we highlight the trends of static analysis approaches, pinpoint where the focus has been put, and enumerate the key aspects where future research is still needed. Method: We have performed a systematic literature review of around 90 research papers published in software engineering, programming languages, and security venues. This review covers five dimensions: the problems targeted by each approach, the fundamental techniques used by the authors, the static analysis sensitivities considered, the Android characteristics taken into account, and the scale of the evaluation performed. Results: Our in-depth examination has led to several key findings: 1) static analysis is largely performed to uncover security and privacy issues; 2) the Soot framework and the Jimple intermediate representation are the most adopted support tool and format, respectively; 3) taint analysis remains the most applied technique in research approaches; 4) most approaches support several analysis sensitivities, but very few consider path-sensitivity; 5) no single work tackles all the challenges of static analysis related to Android programming; and 6) only a small portion of state-of-the-art works have made their artifacts publicly available. Conclusion: The research community still faces a number of challenges in building approaches that are simultaneously aware of implicit flows, dynamic code loading, reflective calls, native code, and multi-threading, in order to implement sound and highly precise static analyzers.
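Since the review identifies Soot and Jimple as the most adopted support tool and format, here is a sketch of the typical Soot configuration many of the surveyed tools use to lift an APK into Jimple. The paths are placeholders, and exact options vary across Soot versions and tools.

```java
import java.util.Collections;
import soot.PackManager;
import soot.Scene;
import soot.options.Options;

// A sketch of a typical Soot setup for translating an APK into Jimple.
public class SootSetup {
    public static void main(String[] args) {
        soot.G.reset();
        Options.v().set_src_prec(Options.src_prec_apk);        // read Dalvik bytecode from an APK
        Options.v().set_process_dir(Collections.singletonList("/path/to/app.apk"));
        Options.v().set_android_jars("/path/to/android-platforms"); // android.jar per API level
        Options.v().set_allow_phantom_refs(true);              // tolerate unresolved library classes
        Options.v().set_output_format(Options.output_format_jimple);
        Scene.v().loadNecessaryClasses();
        PackManager.v().runPacks();    // run Soot's transformation packs
        PackManager.v().writeOutput(); // dump one .jimple file per class
    }
}
```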
To address malware detection across large sets of applications, researchers have recently started to investigate the capabilities of machine-learning techniques. So far, several promising results have been recorded in the literature, with many approaches assessed under what we call "in the lab" validation scenarios. This paper revisits the purpose of malware detection to discuss whether such in-the-lab validation scenarios provide reliable indications of the performance of malware detectors in real-world settings, i.e., "in the wild". To this end, we have devised several machine-learning classifiers that rely on a set of features built from applications' control-flow graphs (CFGs). We use a sizeable dataset of over 50,000 Android applications collected from the sources where state-of-the-art approaches have selected their data. We show that, in the lab, our approach outperforms existing machine-learning-based approaches. However, this high performance does not translate into high performance in the wild. The performance gap we observed, with F-measures dropping from over 0.9 in the lab to below 0.1 in the wild, raises one important question: how do state-of-the-art approaches perform in the wild?
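The sketch below shows the F-measure computation behind the reported gap, with made-up counts standing in for a classifier's predictions: on a balanced lab dataset the score looks excellent, while in the wild, where malware is rare and false positives dominate, the same metric collapses.

```java
// A minimal sketch of the F-measure computation; the counts are illustrative
// placeholders, not the paper's actual experimental numbers.
public class FMeasure {

    // Precision = TP / (TP + FP); Recall = TP / (TP + FN);
    // F-measure is their harmonic mean.
    static double fMeasure(int tp, int fp, int fn) {
        double precision = tp / (double) (tp + fp);
        double recall = tp / (double) (tp + fn);
        return 2 * precision * recall / (precision + recall);
    }

    public static void main(String[] args) {
        // In the lab: balanced dataset, few errors.
        System.out.printf("lab:  F = %.2f%n", fMeasure(95, 5, 5));    // ~0.95
        // In the wild: malware is rare, false positives dominate.
        System.out.printf("wild: F = %.2f%n", fMeasure(10, 300, 90)); // ~0.05
    }
}
```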