The theft of sensitive information from apps has long been considered one of the most critical threats to Android security. Recent studies show that this can happen even to apps without explicit implementation flaws, through the exploitation of design weaknesses of the operating system, e.g., shared communication channels such as Bluetooth, and side channels such as memory and network-data usage. In all these attacks, a malicious app needs to run side-by-side with the target app (the victim) to collect its runtime information. Examples include recording phone conversations from the phone app, gathering WebMD's data usage to infer which disease condition the user is looking at, etc. This runtime-information-gathering (RIG) threat is realistic and serious, as demonstrated by prior research and our new findings, which reveal that malware monitoring popular Android-based home security systems can figure out when the house is empty and the user is not watching the surveillance cameras, and can even turn off the alarm delivered to her phone.

To defend against this new category of attacks, we propose a novel technique that changes neither the operating system nor the target apps, and provides immediate protection as soon as an ordinary app (with only normal and dangerous permissions) is installed. This new approach, called App Guardian, thwarts a malicious app's runtime monitoring attempt by pausing all suspicious background processes when the target app (called the principal) is running in the foreground, and resuming them after the principal stops and its runtime environment has been cleaned up. Our technique leverages a unique feature of Android: third-party apps running in the background are often considered disposable and can be stopped at any time with only minor performance and utility implications. We further limit this impact by focusing only on a small set of suspicious background apps, identified by behaviors inferred from their side channels (e.g., thread names, CPU scheduling and kernel time). App Guardian is also carefully designed to choose the right moments to start and end the protection procedure, and to effectively protect itself against malicious apps. Our experimental studies show that this new technique defeated all known RIG attacks, with a small impact on the utility of legitimate apps and the performance of the OS. Most importantly, the ideas underlying our approach, including app-level protection, side-channel based defense and lightweight response, not only significantly raise the bar for RIG attacks and research on this subject, but can also inspire follow-up efforts on new detection systems that are practically deployable in the fragmented Android ecosystem.
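To make the screening idea concrete, below is a minimal Java sketch of how an ordinary app could inspect side channels of background processes and pause the suspicious ones, in the spirit of App Guardian. The thread-name list and helper structure are illustrative assumptions, not the paper's actual implementation, and the reliance on per-process /proc entries reflects 2015-era Android; newer releases restrict cross-app /proc access and background-process enumeration.

```java
// A minimal sketch of App Guardian-style side-channel screening, assuming
// 2015-era Android semantics. SUSPICIOUS_THREADS and the suspicion test are
// illustrative placeholders; the paper's behavior inference is more involved.
import android.app.ActivityManager;
import android.content.Context;

import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;
import java.io.IOException;

public class GuardianSketch {
    // Thread names assumed (for illustration) to indicate RIG-style monitoring.
    private static final String[] SUSPICIOUS_THREADS = {"AudioRecord", "TrafficMon"};

    private final ActivityManager am;

    public GuardianSketch(Context ctx) {
        am = (ActivityManager) ctx.getSystemService(Context.ACTIVITY_SERVICE);
    }

    /** Pause (kill) background packages whose side channels look suspicious. */
    public void protectPrincipal() {
        for (ActivityManager.RunningAppProcessInfo p : am.getRunningAppProcesses()) {
            if (p.importance == ActivityManager.RunningAppProcessInfo.IMPORTANCE_FOREGROUND)
                continue; // never touch the foreground (principal) app
            if (looksSuspicious(p.pid)) {
                // Requires only KILL_BACKGROUND_PROCESSES, a normal permission,
                // matching the abstract's "ordinary app" claim.
                for (String pkg : p.pkgList) am.killBackgroundProcesses(pkg);
            }
        }
    }

    /** Inspect a process's thread names via /proc/<pid>/task/<tid>/comm. */
    private boolean looksSuspicious(int pid) {
        File[] tasks = new File("/proc/" + pid + "/task").listFiles();
        if (tasks == null) return false;
        for (File t : tasks) {
            try (BufferedReader r = new BufferedReader(new FileReader(new File(t, "comm")))) {
                String name = r.readLine();
                if (name == null) continue;
                for (String s : SUSPICIOUS_THREADS)
                    if (name.contains(s)) return true;
            } catch (IOException ignored) {
                // Process may have exited or the entry may be unreadable.
            }
        }
        return false;
    }
}
```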
Android is a fast-evolving system, with new updates coming out one after another. These updates often completely overhaul a running system, replacing and adding tens of thousands of files across Android's complex architecture, in the presence of critical user data and applications (apps for short). To avoid accidental damage to such data and existing apps, the upgrade process involves complicated program logic, whose security implications, however, are little known. In this paper, we report the first systematic study on the Android updating mechanism, focusing on its Package Management Service (PMS). Our research brought to light a new type of security-critical vulnerability, called Pileup flaws, through which a malicious app can strategically declare a set of privileges and attributes on a low-version operating system (OS) and wait until it is upgraded to escalate its privileges on the new system. Specifically, we found that by exploiting Pileup vulnerabilities, the app can not only acquire a set of newly added system and signature permissions but also determine their settings (e.g., protection levels); it can further substitute for new system apps, contaminate their data (e.g., the cache and cookies of the Android default browser) to steal sensitive user information or change security configurations, and prevent the installation of critical system services. We systematically analyzed the source code of PMS using a program verification tool and confirmed the presence of these security flaws on all official Android versions and over 3,000 customized versions. Our research also identified hundreds of exploit opportunities the adversary can leverage over thousands of devices across different device manufacturers, carriers and countries. To mitigate this threat without endangering user data and apps during an upgrade, we also developed a new detection service, called SecUP, which deploys a scanner on the user's device to capture the malicious apps designed to exploit Pileup vulnerabilities, based upon vulnerability-related information automatically collected from newly released Android OS images.
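To illustrate the core precondition of a Pileup exploit and the kind of check a SecUP-style scanner could perform, here is a minimal Java sketch: it flags installed apps that declare a permission name the upcoming OS image will define. The permission set, class name, and method are assumptions for illustration; the real SecUP service derives its vulnerability information automatically from new OS images.

```java
// A minimal sketch of a SecUP-style Pileup scan, assuming the information
// extracted from a new OS image is simply the set of permission names that
// image will define. Everything below is illustrative, not SecUP's interface.
import android.content.Context;
import android.content.pm.PackageInfo;
import android.content.pm.PackageManager;
import android.content.pm.PermissionInfo;

import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class PileupScanSketch {

    /** Permissions the upcoming OS image will define (illustrative sample). */
    private static final Set<String> NEW_SYSTEM_PERMISSIONS = new HashSet<>(
            Arrays.asList("android.permission.BIND_NOTIFICATION_LISTENER_SERVICE"));

    /** Flag installed apps that pre-declare a permission the update will add. */
    public static List<String> findPileupCandidates(Context ctx) {
        List<String> flagged = new ArrayList<>();
        PackageManager pm = ctx.getPackageManager();
        for (PackageInfo pkg : pm.getInstalledPackages(PackageManager.GET_PERMISSIONS)) {
            if (pkg.permissions == null) continue; // app defines no permissions
            for (PermissionInfo perm : pkg.permissions) {
                // A third-party app defining a name the new OS reserves is
                // exactly the precondition of a permission-harvesting Pileup.
                if (NEW_SYSTEM_PERMISSIONS.contains(perm.name)) {
                    flagged.add(pkg.packageName + " pre-declares " + perm.name);
                }
            }
        }
        return flagged;
    }
}
```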
Recent years have witnessed rapid progress in deep learning (DL), which has also brought its potential weaknesses into the spotlight of security and machine-learning research. Despite the important discoveries made by adversarial-learning research, surprisingly little attention has been paid to the real-world adversarial techniques cybercriminals deploy to evade image-based detection. Unlike adversarial examples that induce misclassification using nearly imperceptible perturbations, real-world adversarial images tend to be less optimal yet equally effective. As a first step toward understanding the threat, we report in this paper a study on adversarial promotional porn images (APPIs) that are extensively used in underground advertising. We show that today's adversaries strategically construct APPIs to evade explicit-content detection while still preserving their sexual appeal, even though the distortions and noise introduced are clearly observable to humans.

To understand such real-world adversarial images and the underground business behind them, we develop a novel DL-based methodology called Malèna, which focuses on the regions of an image where sexual content is least obfuscated and therefore visible to the target audience of a promotion. Using this technique, we have discovered over 4,000 APPIs among 4,042,690 images crawled from popular social media, and further brought to light the unique techniques they use to evade popular explicit-content detectors (e.g., the Google Cloud Vision API and the Yahoo Open NSFW model), and the reasons these techniques work. We also study the ecosystem of such illicit promotions, including the obfuscated contacts advertised through those images, the compromised accounts used to disseminate them, and large APPI campaigns involving thousands of images. Another interesting finding is the apparent attempt by cybercriminals to steal others' images for their advertising. The study highlights the importance of research on real-world adversarial learning and takes a first step toward mitigating the threats it poses.
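The region-oriented intuition behind Malèna can be sketched as follows: score fixed-size regions of an image rather than the whole image, so that the least obfuscated region, the one the promotion's audience actually sees, drives the decision. The RegionScorer interface, window geometry, and threshold below are assumptions for illustration, not the paper's model or region-selection logic.

```java
// A minimal sketch of region-based explicit-content scoring. A whole-image
// score is diluted by obfuscation; the per-region maximum is not.
import java.awt.image.BufferedImage;

public class RegionScanSketch {

    /** Stand-in for any explicit-content scorer (e.g., an NSFW model). */
    public interface RegionScorer {
        double score(BufferedImage region); // higher = more explicit
    }

    /** Slide a window over the image and keep the most explicit region. */
    public static boolean looksExplicit(BufferedImage img, RegionScorer scorer) {
        final int win = 128, stride = 64; // window geometry (assumed)
        double max = 0;
        for (int y = 0; y + win <= img.getHeight(); y += stride) {
            for (int x = 0; x + win <= img.getWidth(); x += stride) {
                max = Math.max(max, scorer.score(img.getSubimage(x, y, win, win)));
            }
        }
        // Obfuscation lowers the whole-image score, but the least obfuscated
        // region still scores high; 0.8 is an illustrative threshold.
        return max > 0.8;
    }
}
```

The whole-image baseline is just the degenerate case where the window equals the image; the sliding window is what makes localized obfuscation ineffective.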
Mobile apps include privacy settings that allow their users to configure how their data should be shared. These settings, however, are often hard for users to locate and understand, even in popular apps such as Facebook. More seriously, they are often set to share user data by default, exposing the user's privacy without proper consent. In this paper, we report the first systematic study of the problem, made possible through an in-depth analysis of user perception of privacy settings. More specifically, we first conduct two user studies (involving nearly one thousand users) to understand privacy settings from the user's perspective and to identify these hard-to-find settings. We then select 14 features that uniquely characterize such hidden privacy settings and utilize a novel technique called semantics-based UI tracing to extract them from a given app. On top of these features, a classifier is trained to automatically discover hidden privacy settings, which, together with other innovations, has been implemented into a tool called Hound. Over our labeled data set, the tool achieves an accuracy of 93.54%. Further running it on the 100,000 latest apps from both Google Play and third-party markets, we find that over a third (36.29%) of the privacy settings identified in these apps are "hidden". Looking into these settings, we observe that they become hard to discover and hard to understand primarily due to problematic categorization on the apps' user interfaces and/or confusing descriptions. More importantly, although more privacy options have been offered to users over time, we also discover that their usability problems persist and even worsen; e.g., some originally easy-to-find settings have become harder to locate. Among all such hidden privacy settings, 82.16% are set to leak user privacy by default. We provide suggestions for improving the usability of these privacy settings at the end of our study.
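As a rough illustration of Hound's classification step, consider the Java sketch below. The abstract does not enumerate the 14 features, so the features, weights, and threshold here are purely hypothetical placeholders; they only show how signals extracted by UI tracing could feed a simple classifier, whereas the real tool trains its model on labeled settings.

```java
// A minimal, hypothetical sketch of classifying a privacy setting as
// "hidden" from UI-derived features. None of these features or weights
// come from the paper; they are placeholders for illustration.
public class HiddenSettingSketch {

    /** Features a semantics-based UI trace might yield for one setting. */
    public static class SettingFeatures {
        int menuDepth;             // taps needed to reach the setting
        boolean underPrivacyMenu;  // parent screen is privacy-related
        double descriptionClarity; // 0..1 score from text analysis
    }

    /** Illustrative linear model; Hound's real classifier is trained. */
    public static boolean isHidden(SettingFeatures f) {
        double score = 0.4 * f.menuDepth
                     + (f.underPrivacyMenu ? -1.0 : 1.0)
                     - 1.5 * f.descriptionClarity;
        return score > 0.5; // threshold chosen for illustration only
    }
}
```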