Session Keynote-2

Keynote Session 2

Conference
9:00 AM — 10:00 AM HKT
Local
Jun 8 Tue, 6:00 PM — 7:00 PM PDT

Streamlet: An Absurdly Simple, Textbook Blockchain Protocol

Elaine Shi (Associate Professor, Computer Science Department, Electrical and Computer Engineering, CyLab Security and Privacy Institute, Carnegie Mellon University, Pittsburgh, PA, USA)

Numerous works in the past have focused on constructing simple and understandable distributed consensus protocols. In this talk, I will present an absurdly simple consensus protocol called Streamlet. The entire protocol is: every epoch, a leader proposes a block extending the longest chain it has seen so far. Everyone votes for (i.e., signs) the first block proposed by the leader if it extends from one of the longest notarized chains they have seen so far. When a block collects votes from 2/3 of the nodes, it becomes notarized. Notarized does not mean final. Finality is decided with the following rule: for any chain in which all blocks are notarized and, moreover, the last three blocks have consecutive epoch numbers, the entire chain except the last block is final. Streamlet is inspired by the community's past five years of work on consensus motivated by decentralized blockchains. To the best of our knowledge, it is the simplest embodiment known thus far, and it subsumes classical landmark protocols such as PBFT/Paxos and their numerous variants. It is a great fit for pedagogy. Streamlet has been incorporated into courses at universities such as Stanford and CMU. Streamlet is also part of my new distributed consensus textbook available at http://distributedconsensus.net/. This is joint work with Benjamin Chan.
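The finality rule above is mechanical enough to express directly in code. Below is a minimal sketch of it, assuming blocks are stored as (epoch, parent) records and that `notarized` is the set of blocks that have collected votes from at least 2/3 of the nodes; the data layout is illustrative and not taken from the talk.

```python
# A minimal sketch of Streamlet's finality rule. `blocks` maps a block id to
# {"epoch": int, "parent": block id or None}; `notarized` is the set of block
# ids with >= 2/3 votes. Names and data layout are illustrative only.

def chain_to(block, blocks):
    """Walk parent pointers back to the genesis block."""
    chain = [block]
    while blocks[chain[-1]]["parent"] is not None:
        chain.append(blocks[chain[-1]]["parent"])
    return list(reversed(chain))  # genesis ... block

def finalized_prefix(tip, blocks, notarized):
    """If every block in the chain ending at `tip` is notarized and the last
    three blocks have consecutive epochs, the chain except the last block is
    final; otherwise nothing new is finalized here."""
    chain = chain_to(tip, blocks)
    if any(b not in notarized for b in chain) or len(chain) < 3:
        return []
    e1, e2, e3 = (blocks[b]["epoch"] for b in chain[-3:])
    if e2 == e1 + 1 and e3 == e2 + 1:
        return chain[:-1]          # finalize everything except the tip
    return []
```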

Session Chair

Moti Yung

Session 4A

ML and security (III)

Conference
10:30 AM — 11:50 AM HKT
Local
Jun 8 Tue, 7:30 PM — 8:50 PM PDT

REFIT: A Unified Watermark Removal Framework for Deep Learning Systems with Limited Data

Xinyun Chen (University of California, Berkeley, USA), Wenxiao Wang (Tsinghua University, China), Chris Bender (University of California, Berkeley, USA), Yiming Ding (University of California, Berkeley, USA), Ruoxi Jia (Virginia Tech, USA), Bo Li (University of Illinois at Urbana-Champaign, USA), Dawn Song (University of California, Berkeley, USA)

Training deep neural networks from scratch can be computationally expensive and requires a lot of training data. Recent work has explored different watermarking techniques to protect pre-trained deep neural networks from potential copyright infringements. However, these techniques can be vulnerable to watermark removal attacks. In this work, we propose REFIT, a unified watermark removal framework based on fine-tuning, which does not rely on knowledge of the watermarks and is effective against a wide range of watermarking schemes. In particular, we conduct a comprehensive study of a realistic attack scenario in which the adversary has limited training data, a setting that has not been emphasized in prior work on attacks against watermarking schemes. To effectively remove the watermarks without compromising the model functionality under this weak threat model, we propose two techniques that are incorporated into our fine-tuning framework: (1) an adaptation of the elastic weight consolidation (EWC) algorithm, originally proposed to mitigate the catastrophic forgetting phenomenon; and (2) unlabeled data augmentation (AU), where we leverage auxiliary unlabeled data from other sources. Our extensive evaluation shows the effectiveness of REFIT against diverse watermark embedding schemes. The experimental results demonstrate that our fine-tuning-based watermark removal attacks could pose real threats to the copyright of pre-trained models, and thus highlight the importance of further investigating the watermarking problem and proposing more robust watermark embedding schemes against such attacks.
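For context, the EWC penalty that REFIT adapts adds a quadratic regularizer during fine-tuning that keeps the parameters close to the pre-trained ones, weighted by (diagonal) Fisher information. The sketch below is the textbook form of EWC (Kirkpatrick et al.), shown only to anchor the term; REFIT's specific adaptation for watermark removal is described in the paper.

```latex
\mathcal{L}(\theta) \;=\; \mathcal{L}_{\mathrm{task}}(\theta)
  \;+\; \frac{\lambda}{2} \sum_{i} F_i \,\bigl(\theta_i - \theta_i^{*}\bigr)^{2}
```

Here θ* denotes the pre-trained (watermarked) model's parameters, F_i the estimated Fisher information of parameter i, and λ trades plasticity against retention of the original task performance.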

Recompose Event Sequences vs. Predict Next Events: A Novel Anomaly Detection Approach for Discrete Event Logs

Lun-Pin Yuan (Penn State University, USA), Peng Liu (Information Sciences and Technology, Pennsylvania State University, USA), Sencun Zhu (The Pennsylvania State University, USA)

One of the most challenging problems in the field of intrusion detection is anomaly detection for discrete event logs. While most earlier work focused on applying unsupervised learning to engineered features, recent work has started to address this challenge by applying deep learning to abstractions of discrete event entries. Inspired by natural language processing, LSTM-based anomaly detection models were proposed: they try to predict upcoming events and raise an anomaly alert when a prediction fails to meet a certain criterion. However, such a predict-next-event methodology has a fundamental limitation: event predictions may not be able to fully exploit the distinctive characteristics of sequences, which leads to high false positives (FPs). It is also critical to examine the structure of sequences and the bi-directional causality among individual events. To this end, we propose a new methodology: recomposing event sequences as anomaly detection. We propose DabLog, an LSTM-based Deep Autoencoder-Based anomaly detection method for discrete event Logs. The fundamental difference is that, rather than predicting upcoming events, our approach determines whether a sequence is normal or abnormal by analyzing (encoding) and reconstructing (decoding) the given sequence. Our evaluation results show that our new methodology significantly reduces the number of FPs, hence achieving a higher F1 score.
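To make the recompose-rather-than-predict idea concrete, here is a minimal sketch of reconstruction-based anomaly scoring, the general pattern behind autoencoder detectors of this kind. It is not the authors' code: `encoder` and `decoder` are assumed trained models mapping an event-index sequence to a latent vector and back to per-position probability distributions over the event vocabulary.

```python
import numpy as np

def reconstruction_score(sequence, encoder, decoder):
    """Score a sequence by how poorly the autoencoder can recompose it."""
    latent = encoder(sequence)          # analyze (encode) the whole sequence
    probs = decoder(latent)             # reconstruct (decode) it
    # negative log-likelihood of the observed events under the reconstruction
    nll = -np.log([probs[t][e] + 1e-12 for t, e in enumerate(sequence)])
    return float(np.mean(nll))

def is_anomalous(sequence, encoder, decoder, threshold):
    # sequences the model cannot recompose well are flagged as anomalies
    return reconstruction_score(sequence, encoder, decoder) > threshold
```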

Robust Roadside Physical Adversarial Attack Against Deep Learning in Lidar Perception Modules

Kaichen Yang (University of Florida, USA), Tzungyu Tsai (National Tsing Hua University, Taiwan), Honggang Yu (University of Florida, USA), Max Panoff (University of Florida, USA), Tsung-Yi Ho (National Tsing Hua University, Taiwan), Yier Jin (University of Florida, USA)

As Autonomous Vehicles (AVs) mature into viable transportation solutions, mitigating potential vehicle control security risks becomes increasingly important. Perception modules in AVs combine multiple sensors to perceive the surrounding environment. As such, they have been the focus of efforts to exploit the aforementioned risks due to their critical role in controlling autonomous driving technology. Despite extensive and thorough research into the vulnerability of camera-based sensors, vulnerabilities originating from Lidar sensors and their corresponding deep learning models in AVs remain comparatively untouched. Observing that small roadside objects can occasionally be incorrectly identified as vehicles by on-board deep learning models, we propose a novel adversarial attack inspired by this phenomenon in both white-box and black-box scenarios. The adversarial attacks proposed in this paper are launched against deep learning models that perform object detection on raw 3D points collected by a Lidar sensor in an autonomous driving scenario. In comparison to existing works, our attack creates not only adversarial point clouds in simulated environments, but also robust adversarial objects that can cause behavioral reactions in state-of-the-art autonomous driving systems. Defense methods are then proposed and evaluated against this type of adversarial object.

DeepSweep: An Evaluation Framework for Mitigating DNN Backdoor Attacks using Data Augmentation

Yi Zeng (University of California San Diego, USA), Han Qiu (Telecom Paris, France), Tianwei Zhang (Nanyang Technological University, Singapore), Shangwei Guo (Nanyang Technological University, Singapore), Meikang Qiu (Texas A&M University Commerce, USA), Bhavani Thuraisingham (University of Texas Dallas, USA)

Public resources and services (e.g., datasets, training platforms, pre-trained models) have been widely adopted to ease the development of Deep Learning-based applications. However, if the third-party providers are untrusted, they can inject poisoned samples into the datasets or embed backdoors in those models. Such an integrity breach can cause severe consequences, especially in safety- and security-critical applications. Various backdoor attack techniques have been proposed for higher effectiveness and stealthiness. Unfortunately, existing defense solutions are not practical for thwarting those attacks in a comprehensive way. In this paper, we investigate the effectiveness of data augmentation techniques in mitigating backdoor attacks and enhancing DL models' robustness. An evaluation framework is introduced to achieve this goal. Specifically, we consider a unified defense solution, which (1) adopts a data augmentation policy to fine-tune the infected model and eliminate the effects of the embedded backdoor, and (2) uses another augmentation policy to preprocess input samples and invalidate the triggers during inference. We propose a systematic approach to discover the optimal policies for defending against different backdoor attacks by comprehensively evaluating 71 state-of-the-art data augmentation functions. Extensive experiments show that our identified policy can effectively mitigate eight different kinds of backdoor attacks and outperform five existing defense methods. We envision this framework as a benchmark tool to advance future DNN backdoor studies.
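The two-stage defense described above can be summarized with a short sketch. This is illustrative only: the actual policies are searched over 71 augmentation functions in the paper, and `finetune_policy`, `inference_policy`, `model`, and `clean_data` are assumed to be supplied by the user.

```python
# Stage 1: fine-tune the possibly backdoored model on augmented clean data.
def fine_tune(model, clean_data, finetune_policy, epochs=5):
    for _ in range(epochs):
        for x, y in clean_data:
            model.train_step(finetune_policy(x), y)
    return model

# Stage 2: transform every incoming sample before classification so that
# trigger patterns are distorted and no longer activate the backdoor.
def predict(model, x, inference_policy):
    return model(inference_policy(x))
```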

Session Chair

Neil Gong

Session 4B

Measurement/Empirical Security

Conference
10:30 AM — 11:50 AM HKT
Local
Jun 8 Tue, 7:30 PM — 8:50 PM PDT

Morshed: Guiding Behavioral Decision-Makers towards Better Security Investment in Interdependent Systems

Mustafa Abdallah (Purdue University, USA), Daniel Woods (Purdue University, USA), Parinaz Naghizadeh (Ohio State University, USA), Issa Khalil (Qatar Computing Research Institute (QCRI), HBKU, Qatar), Timothy Cason (Purdue University, USA), Shreyas Sundaram (Purdue University, USA), Saurabh Bagchi (Purdue University, USA)

We model the behavioral biases of human decision-making in securing interdependent systems and show that such behavioral decision-making leads to a suboptimal pattern of resource allocation compared to non-behavioral (rational) decision-making. We provide empirical evidence for the existence of such behavioral bias through a controlled subject study with 145 participants. We then propose three learning techniques for enhancing decision-making in multi-round setups. We illustrate the benefits of our decision-making model on multiple interdependent real-world systems and quantify the gain compared to the case in which the defenders are behavioral. We also show the benefit of our learning techniques against different attack models. We identify the effects of different system parameters (e.g., the defenders' security budget availability and distribution, the degree of interdependency among defenders, and collaborative defense strategies) on the degree of suboptimality of security outcomes due to behavioral decision-making.

Analyzing the Overhead of Protection on File Accessing Using Linux Security Modules

Wenhui Zhang (Penn State University, USA), Peng Liu (Penn State University, USA), Trent Jaeger (Penn State University, USA)

Over the years, the complexity of the Linux Security Module (LSM) framework has kept increasing (e.g., 10,684 LOC in Linux v2.6.0 vs. 64,018 LOC in v5.3), and the number of authorization hooks has nearly doubled (e.g., 122 hooks in v2.6.0 vs. 224 hooks in v5.3). In addition, the computer industry has seen tremendous advancement in hardware (e.g., memory and processor frequency) in the past decade. These changes make the previous evaluation of LSM, done 18 years ago, less relevant today. It is important to provide up-to-date measurement results of LSM so that system practitioners can make prudent trade-offs between security and performance. This work evaluates the overhead of LSM for file accesses on Linux v5.3.0. We build a performance evaluation framework for LSM with two parts: an extension of LMBench2.5 to evaluate the overhead of file operations under different security modules, and a security module with tunable policy-enforcement latency to study the impact of that latency on the end-to-end latency of file operations. In our evaluation, we find that opening a file incurs about an 87% performance drop (Linux v5.3) when the kernel is built with SELinux hooks (policy enforcement disabled) compared to a kernel without them, whereas the corresponding figure was 27% on Linux v2.4.2. This degradation is driven by two factors: policy enforcement and hook placement. To investigate the impact of each, we build a Policy Testing Module that reuses the hook placements of LSM while varying the latency of policy enforcement. With this module, we quantitatively estimate the impact of the latency of policy enforcement on the end-to-end latency of file operations using a multiple linear regression model, and we count policy authorization frequencies for each syscall. We then discuss and justify the evaluation results with static analysis on the syscalls' call graphs.
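One plausible reading of the regression mentioned above (an illustrative formulation under our own assumptions, not necessarily the paper's exact model): for each file operation, the end-to-end latency is modeled as an affine function of the tunable policy-enforcement latency, and the fitted slope estimates how many authorizations that operation triggers.

```latex
T_{\mathrm{op}}(\ell) \;\approx\; \beta_{0,\mathrm{op}} + \beta_{1,\mathrm{op}}\,\ell,
\qquad
\beta_{1,\mathrm{op}} \;\approx\; \#\{\text{policy authorizations triggered per operation}\}
```

Here ℓ is the tunable policy-enforcement latency, while β₀ captures the hook-placement and base syscall cost that remains even when enforcement is free (ℓ = 0).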

Security Analysis on Practices of Certificate Authorities in the HTTPS Phishing Ecosystem

Doowon Kim (University of Tennessee, Knoxville, USA), Haehyun Cho (Arizona State University, USA), Yonghwi Kwon (University of Virginia, USA), Adam Oest (PayPal, Inc., USA), Adam Doupe (Arizona State University, USA), Sooel Son (KAIST, South Korea), Gail-Joon Ahn (Arizona State University, USA), Tudor Dumitras (Univ. Maryland, USA)

Phishing attacks are causing substantial damage despite extensive efforts in academia and industry. Recently, a large volume of phishing attacks has transitioned toward HTTPS, leveraging TLS certificates issued by Certificate Authorities (CAs) to make the attacks more effective. In this paper, we present a comprehensive study of the security practices of CAs in the HTTPS phishing ecosystem. We focus on the CAs, critical actors under-studied in previous literature, to better understand the importance of their security practices and to thwart the proliferation of HTTPS phishing. In particular, we first present the current landscape and effectiveness of HTTPS phishing attacks compared to traditional HTTP ones. Then, we conduct an empirical experiment on the CAs' security practices in terms of the issuance and revocation of certificates. Our findings highlight serious conflicts between the expected security practices of CAs and reality, raising significant security concerns. We further validate our findings using a longitudinal dataset of abusive certificates used for real phishing attacks in the wild. We confirm that these security concerns prevail in the wild and can be one of the main contributors to the recent surge of HTTPS phishing attacks.

ARGUS: Assessing Unpatched Vulnerable Devices on the Internet via Efficient Firmware Recognition

Wei Xie (College of Computer, National University of Defense Technology, China), Chao Zhang (Institute for Network Science and Cyberspace of Tsinghua University, China), Pengfei Wang (College of Computer, National University of Defense Technology, China), Zhenhua Wang (College of Computer, National University of Defense Technology, China), Qiang Yang (College of Computer, National University of Defense Technology, China)

Assessing unpatched devices affected by a specified vulnerability is a vital but unsolved issue. Using a proof-of-concept tool on the Internet is illegal, while identifying vulnerable device models and firmware versions via fingerprints is a safer method. However, device search engines such as Shodan do not claim to accurately identify device models or versions, and existing works on online firmware recognition neglect the efficiency challenge of scanning redundant fingerprints. Consequently, this fingerprint-checking method has seen few real-world verifications on the Internet. We propose ARGUS, a simple but practical framework to identify device models and firmware versions. At its core is a heuristic fingerprint crush saga (FCS) scheme inspired by the phone game "Candy Crush Saga". It improves efficiency by an average of 156 times compared to scanning the fingerprints of all web files by default. This efficiency improvement enables us to widely assess the proportion of unpatched devices affected by 176 CVE vulnerabilities, which is 64.3% on average on the Internet. This result quantitatively proves that the majority of users do not periodically update device firmware.

Session Chair

Yuan Zhang

Session Poster-2

Social Event + Poster Session 2

Conference
11:50 AM — 2:00 PM HKT
Local
Jun 8 Tue, 8:50 PM — 11:00 PM PDT

Please follow the link to join the virtual social event

This talk does not have an abstract.

Session Chair

Poster Chair

Session 5A

Network and Web Security (II)

Conference
2:00 PM — 3:20 PM HKT
Local
Jun 8 Tue, 11:00 PM — 12:20 AM PDT

Filtering DDoS Attacks from Unlabeled Network Traffic Data Using Online Deep Learning

Wesley Joon-Wie Tann (National University of Singapore, Singapore), Jackie Tan Jin Wei (National University of Singapore, Singapore), Joanna Purba (National University of Singapore, Singapore), Ee-Chien Chang (National University of Singapore, Singapore)

DDoS attacks are simple, effective, and still pose a significant threat even after more than two decades. Given the recent success of machine learning, it is interesting to investigate how we can leverage deep learning to filter out application-layer attack requests. There are challenges in adopting deep learning solutions due to the ever-changing traffic profiles, the lack of labeled data, and the constraints of the online setting. Offline unsupervised learning methods can sidestep these hurdles by learning an anomaly detector from the normal-day traffic N. However, anomaly detection does not exploit the information acquired during attacks, and its performance is typically not satisfactory. In this paper, we propose two approaches that utilize both the historic traffic N and the mixture traffic M obtained during attacks, which consists of unlabeled requests. First, inspired by statistical methods, our approach extends an unsupervised anomaly detector to solve the problem using estimated conditional probability distributions. We adopt transfer learning to apply the detector on N and M separately and efficiently, combining the results to obtain an online learner. Second, we formulate a specific loss function better suited for deep learning and use iterative training to solve it in the online setting. On publicly available datasets such as CICIDS2017, our online learners achieve an average accuracy of 90.6%, compared to around 60.0% for the baseline detection method. In the offline setting, our approaches trained on unlabeled data achieve accuracy competitive with classifiers trained on labeled data.
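One way to read the "estimated conditional probability distributions" above (our illustrative formulation, not necessarily the paper's estimator): treat the attack-time traffic as a mixture of normal and attack requests, estimate densities from N and M, and score each request by how over-represented it is relative to normal traffic.

```latex
p_M(x) \;=\; (1-\pi)\, p_N(x) + \pi\, p_A(x)
\quad\Longrightarrow\quad
\Pr[\mathrm{attack} \mid x] \;=\; \frac{\pi\, p_A(x)}{p_M(x)} \;=\; 1 - \frac{(1-\pi)\, p_N(x)}{p_M(x)}
```

Here p_N is learned from the normal-day traffic N, p_M from the mixture M, and π is the (unknown) fraction of attack requests; requests with a high estimated posterior are filtered.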

Bypassing Push-based Second Factor and Passwordless Authentication with Human-Indistinguishable Notifications

Mohammed Jubur (The University of Alabama at Birmingham, USA), Prakash Shrestha (University of Florida, USA), Nitesh Saxena (The University of Alabama at Birmingham, USA), Jay Prakash (SUTD, Singapore)

Second-factor (2FA) or passwordless authentication based on notifications pushed to a user's personal device (e.g., a phone), which the user can simply approve (or deny), has become widely popular due to its convenience. In this paper, we show that the effortlessness of this approach gives rise to a fundamental design vulnerability. The vulnerability stems from the fact that the notification, as shown to the user, is not uniquely bound to the user's login session running through the browser; thus, if two notifications are sent around the same time (one for the user's session and one for an attacker's session), the user may not be able to distinguish between the two and will likely end up accepting the notification of the attacker's session. Exploiting this vulnerability, we present HIENA, a simple yet devastating attack against such "one-push" 2FA or passwordless schemes, which allows a malicious actor to log in soon after the victim user attempts to log in, triggering multiple near-concurrent notifications that seem indistinguishable to the user. To further deceive the user into accepting the attacker-triggered notification, HIENA can optionally spoof/mimic the victim's client machine information (e.g., the city from which the victim logs in, by being in the same city) and even issue other third-party notifications (e.g., email or social media) for obfuscation purposes. In the case of 2FA schemes, we assume that the attacker knows the victim's password (e.g., obtained via breached password databases), a standard methodology to evaluate the security of any 2FA scheme. To evaluate the effectiveness of HIENA, we carefully designed and ran a human-factors lab study in which we tested benign and adversarial settings mimicking the user-interface designs of well-known one-push 2FA and passwordless schemes. Our results show that users are prone to accepting the attacker's notification in HIENA at high rates, about 83% overall and about 99% when spoofed information is used, which is almost the same as the acceptance rates of benign login sessions. Even for the non-spoofed sessions (our primary attack), the attack success rates are about 68%, rising to about 90-97% if the attack attempt is repeated 2-3 times. While we did not see a statistically significant effect of using third-party notifications on attack success rate, in real life the use of such obfuscation can be quite effective, as users may only see one single 2FA notification (corresponding to the attacker's session) at the top of the notification list, which is most likely to be accepted. We have verified that many widely deployed one-push 2FA schemes (e.g., Duo Push, Authy OneTouch, LastPass, Facebook's, and OpenOTP) seem directly vulnerable to our attack.

Click This, Not That: Extending Web Authentication with Deception

Timothy Barron (Yale University, USA), Johnny So (Stony Brook University, USA), Nick Nikiforakis (Stony Brook University, USA)

With phishing attacks, password breaches, and brute-force login attacks presenting constant threats, it is clear that passwords alone are inadequate for protecting the web applications entrusted with our personal data. Instead, web applications should practice defense in depth and give users multiple ways to secure their accounts. In this paper we propose login rituals, which define actions that a user must take to authenticate, and web tripwires, which define actions that a user must not take to remain authenticated. These actions outline the expected behavior of users familiar with their individual setups on applications they use often. We show how we can detect and prevent intrusions from web attackers lacking this familiarity with their victim's behavior. We design a modular and application-agnostic system that incorporates these two mechanisms, allowing us to add an additional layer of deception-based security to existing web applications without modifying the applications themselves. In addition to testing our system and evaluating its performance when applied to five popular open-source web applications, we demonstrate the promising nature of these mechanisms through a user study. Specifically, we evaluate the detection rate of tripwires against simulated attackers, 88% of whom clicked on at least one tripwire. We also observe web users' creation of personalized login rituals and evaluate the practicality and memorability of these rituals over time. Out of 39 user-created rituals, all are unique, and 79% of users were able to reproduce their rituals even a week after creation.
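To make the tripwire idea concrete, here is a toy sketch of a decoy endpoint in a web application (Python/Flask). The route name and response are invented for illustration; the authors' actual system is application-agnostic and does not modify the application like this.

```python
from flask import Flask, session, abort

app = Flask(__name__)
app.secret_key = "change-me"

# A decoy action that a user familiar with their own account would never
# trigger. Any request to it is treated as evidence of an intruder.
@app.route("/settings/export-all")
def tripwire():
    session.clear()   # terminate the session immediately
    # in a real deployment: alert the account owner and require re-authentication
    abort(403)
```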

Analyzing Spatial Differences in the TLS Security of Delegated Web Services

Joonhee Lee (Seoul National University, South Korea), Hyunwoo Lee (Purdue University, USA), Jongheon Jeong (Seoul National University, South Korea), Doowon Kim (University of Tennessee, Knoxville, USA), Taekyoung "Ted" Kwon (Seoul National University, South Korea)

To provide secure content delivery, Transport Layer Security (TLS) has become a de facto standard over the past couple of decades. However, TLS has a long history of security weaknesses and drawbacks, and its security has therefore been enhanced by addressing these problems through continuous version upgrades. Meanwhile, to provide fast content delivery globally, websites (or origin web servers) need to deploy and administer many machines in globally distributed environments. They often delegate the management of these machines to web hosting services or content delivery networks (CDNs), where the security configurations of the distributed servers may vary spatially depending on the managing entities or locations. Based on these spatial differences in TLS security, we find that the security level of TLS connections (and their web services) can be lowered. After collecting information on (web) domains that exhibit different TLS versions and cryptographic options depending on clients' locations, we show that it is possible to redirect TLS handshake messages to weak TLS servers, of which both the origin server and the client may be unaware. We investigate 7M domains with these spatial differences in security levels in the wild and conduct analyses to better understand the root causes of this phenomenon. We also measure redirection delays at various locations around the world to see whether there are noticeable delays in redirections.

Session Chair

Ding Wang

Session 5B

Applied Cryptography (II)

Conference
2:00 PM — 3:20 PM HKT
Local
Jun 8 Tue, 11:00 PM — 12:20 AM PDT

Hash-Enabled Garbling and the Insecurity of Free-Hashing Garbled Circuits

Ruiyu Zhu (Indiana University, USA), Yan Huang (Indiana University, USA)

Hashing garbled circuits is an important, albeit expensive, bandwidth-saving technique: computing a GC-hash can be an order of magnitude slower than generating the garbled circuit itself. In a recent work, Fan et al. (EUROCRYPT 2017) proposed a method to produce GC-hashes without any calls to expensive collision-resistant hash functions and showed experimentally that the overhead of hashing GCs can be eliminated almost entirely. In this paper, we identify several security flaws in their approach. (1) We show a fundamental weakness in the notion of hash-security, which makes it impossible to support any existing malicious GC-hash-based cut-and-choose protocols. (2) Although the concept of hash-security could be useful in certain scenarios, we show (with concrete attacks) that the Free-Hash construction given in their paper is not actually hash-secure. As a positive result of this work, we propose and formalize the concept of hash-enabled garbling and show how an actively secure 2PC protocol can be constructed with black-box use of any hash-enabled garbling scheme. This is the first time the use of GC-hashes in cut-and-choose protocols has been formally examined and rigorously proven secure. Our protocol leverages GC-hashes to save bandwidth while minimizing the cut-and-choose duplication factor, i.e., using s (instead of 3s) copies of GCs to achieve 2^-s statistical security.

Look Before You Leap: Secure Connection Bootstrapping for 5G Networks to Defend Against Fake Base-Stations

Ankush Singla (Purdue University, USA), Rouzbeh Behnia (University of South Florida, USA), Syed Rafiul Hussain (Pennsylvania State University, USA), Attila Yavuz (University of South Florida, USA), Elisa Bertino (Purdue University, USA)

The lack of authentication protection for bootstrapping messages broadcast by base-stations makes it impossible for devices to differentiate between a legitimate and a fake base-station. This vulnerability has been widely acknowledged but not yet fixed, and it thus enables law-enforcement agencies, motivated adversaries, and nation-states to carry out attacks against targeted users. Although 5G cellular protocols have been enhanced to prevent some of these attacks, the root vulnerability enabling fake base-stations still exists. In this paper, we propose an efficient broadcast authentication protocol based on a hierarchical identity-based signature scheme, Schnorr-HIBS, which addresses the root cause of the fake base-station problem with minimal computation and communication overhead. We implement and evaluate our proposed protocol using off-the-shelf software-defined radios and open-source libraries. We also provide a comprehensive quantitative and qualitative comparison between our scheme and other candidate solutions for 5G base-station authentication proposed by 3GPP. Our proposed protocol achieves at least a 6x speedup in terms of end-to-end cryptographic delay and a communication cost reduction of 31% over the other 3GPP proposals.
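The overall flow of authenticated broadcast bootstrapping can be sketched as follows. This is a minimal illustration only: a plain ECDSA key pair stands in for the paper's hierarchical identity-based Schnorr-HIBS scheme, so it shows the sign-then-verify flow, not the proposed cryptography.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.exceptions import InvalidSignature

operator_key = ec.generate_private_key(ec.SECP256R1())  # held by the network side
operator_pub = operator_key.public_key()                 # provisioned on devices

def broadcast(system_info: bytes):
    # the base-station signs its broadcast system information
    sig = operator_key.sign(system_info, ec.ECDSA(hashes.SHA256()))
    return system_info, sig

def device_accepts(system_info: bytes, sig: bytes) -> bool:
    # the device verifies the signature before attaching to the base-station
    try:
        operator_pub.verify(sig, system_info, ec.ECDSA(hashes.SHA256()))
        return True               # legitimate base-station
    except InvalidSignature:
        return False              # fake base-station: refuse to attach
```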

Efficient Graph Encryption Scheme for Shortest Path Queries

Esha Ghosh (Microsoft Research, Redmond, USA), Seny Kamara (Brown University, USA), Roberto Tamassia (Brown University, USA)

Graph encryption schemes (introduced by [Chase and Kamara, 2010]) have been receiving growing interest across various disciplines due to their attractive tradeoff between functionality, efficiency and privacy. In this paper, we advance the state of the art on encrypted graph search by providing an efficient graph encryption scheme for shortest path queries. The preprocessing time and space and the query time are proportional to those for building and querying the search structure for the unencrypted graph. Hence, the overhead of providing structured encryption is asymptotically optimal. We implement our scheme and experimentally validate its performance on real world networks. Furthermore, we extend our scheme to support verifiability. Our scheme is the first structured encryption scheme that supports a recursive algorithm, where the number of recursion steps is not known at setup time (unlike the chaining technique from [Chase and Kamara, 2010]). Recursion is an important algorithmic design paradigm. Hence, our technique may help develop other practical encrypted structures for recursive algorithms.
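For intuition, the recursion in question is the classic next-hop reconstruction of a shortest path. The plaintext sketch below assumes a precomputed all-pairs next-hop table; in the encrypted setting, each lookup becomes a token-based query to an encrypted dictionary, and the recursion depth (the path length) is not known at setup time, which is exactly the difficulty noted above. This is background illustration, not the paper's construction.

```python
def shortest_path(u, v, next_hop):
    """Recover a shortest u->v path from next_hop[(u, v)] = first node after u
    on a shortest path to v, by repeated lookups."""
    if u == v:
        return [u]
    if (u, v) not in next_hop:
        return None                # v is unreachable from u
    path = [u]
    while u != v:
        u = next_hop[(u, v)]       # one lookup per hop; depth unknown in advance
        path.append(u)
    return path
```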

Session Chair

Xingliang Yuan

Session 6A

Software Security and Vulnerability Analysis (I)

Conference
3:40 PM — 5:00 PM HKT
Local
Jun 9 Wed, 12:40 AM — 2:00 AM PDT

How to Take Over Drones

Sebastian Plotz (University of Applied Sciences Stralsund, Germany), Frederik Armknecht (University of Mannheim, Germany), Christian Bunse (University of Applied Sciences Stralsund, Germany)

The number of unmanned aerial vehicles (UAVs, hereinafter referred to as drones) is rising in both private and commercial applications. This makes it necessary for a drone to remain under the full control of its owner at all times. Most drones are controlled wirelessly by protocols in the 2.4 GHz band. The most commonly used protocols are DSMX (Spektrum), ACCST D16 EU-LBT (FrSky), DEVO (Walkera), and S-FHSS (Futaba). While it has been known that the DSMX protocol is vulnerable to attacks, the security of the other protocols was an open question. In this paper, we give a negative answer: all of these protocols are insecure as well. More precisely, we show that it is practically possible to seize control over the drone in all cases. All presented attacks were implemented and validated under real conditions.

Localizing Vulnerabilities Statistically From One Exploit

Shiqi Shen (National University of Singapore, Singapore), Aashish Kolluri (National University of Singapore, Singapore), Zhen Dong (National University of Singapore, Singapore), Prateek Saxena (National University of Singapore, Singapore), Abhik Roychoudhury (National University of Singapore, Singapore)

Automatic vulnerability diagnosis can help security analysts identify and, therefore, quickly patch disclosed vulnerabilities. The vulnerability localization problem is to automatically find a program point at which the “root cause” of the bug can be fixed. This paper employs a statistical localization approach to analyze a given exploit. Our main technical contribution is a novel procedure to systematically construct a test-suite which enables high-fidelity localization. We build our techniques in a tool called VulnLoc which automatically pinpoints vulnerability locations, given just one exploit, with high accuracy. VulnLoc does not make any assumptions about the availability of source code, test suites, or specialized knowledge of the type of vulnerability. It identifies actionable locations in its Top-5 outputs, where a correct patch can be applied, for about 88% of 43 CVEs arising in large real-world applications we study. These include 6 different classes of security flaws. Our results highlight the under-explored power of statistical analyses, when combined with suitable test-generation techniques.
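To illustrate the style of analysis, here is a generic statistical fault-localization score (Ochiai): program locations exercised mostly by exploiting (failing) inputs and rarely by benign ones are ranked as likely root-cause candidates. VulnLoc's concrete statistic and its directed test generation are described in the paper; this is not its code.

```python
import math

def ochiai(exec_fail, exec_pass, total_fail):
    """exec_fail/exec_pass: how many failing/passing tests cover a location."""
    if exec_fail == 0:
        return 0.0
    return exec_fail / math.sqrt(total_fail * (exec_fail + exec_pass))

def rank_locations(coverage, verdicts):
    """coverage: {test: set(locations)}, verdicts: {test: 'fail' | 'pass'}."""
    total_fail = sum(1 for v in verdicts.values() if v == "fail")
    scores = {}
    for loc in set().union(*coverage.values()):
        ef = sum(1 for t, c in coverage.items() if loc in c and verdicts[t] == "fail")
        ep = sum(1 for t, c in coverage.items() if loc in c and verdicts[t] == "pass")
        scores[loc] = ochiai(ef, ep, total_fail)
    # highest-scoring locations are reported first (cf. the Top-5 outputs above)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```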

Cali: Compiler-Assisted Library Isolation

Markus Bauer (CISPA – Helmholtz Center for Information Security, Germany), Christian Rossow (CISPA – Helmholtz Center for Information Security, Germany)

Software libraries can freely access the program's entire address space and also inherit its system-level privileges. This lack of separation regularly leads to security-critical incidents once libraries contain vulnerabilities or turn rogue. We present Cali, a compiler-assisted library isolation system that fully automatically shields a program from a given library. Cali is fully compatible with mainline Linux and does not require supervisor privileges to execute. We compartmentalize libraries into their own process with well-defined security policies. To preserve the functionality of the interactions between program and library, Cali uses a Program Dependence Graph to track data flow between the program and the library during link time. We evaluate our open-source prototype against three popular libraries: Ghostscript, OpenSSL, and SQLite. Cali successfully reduced the amount of memory that is shared between the program and library to 0.08% (ImageMagick) – 0.4% (Socat), while retaining acceptable program performance.

Privilege-Escalation Vulnerability Discovery for Large-scale RPC Services: Principle, Design, and Deployment

Zhuotao Liu (Tsinghua University, China), Hao Zhao (Ant Group, China), Sainan Li (Tsinghua University, China), Qi Li (Tsinghua University, China), Tao Wei (Ant Group, China), Yu Wang (Ant Group, China)

RPCs are fundamental to our large-scale distributed system. From a security perspective, the blast radius of RPCs is worryingly big, since each RPC often interacts with tens of internal system components. Thus, discovering RPC vulnerabilities is often a top priority in the software quality assurance process for production systems. In this paper, we present the design, implementation, and deployment experiences of PAIR, a fully automated system for privilege-escalation vulnerability discovery in Ant Group's large-scale RPC system. The design of PAIR centers around the live-replay design principle, where vulnerability discovery is driven by live RPC requests collected from production rather than relying on any engineered testing requests. This ensures that PAIR is able to provide complete coverage of our production RPC requests in a privacy-preserving manner, despite the scale (billions of daily requests), complexity (hundreds of system services involved), and heterogeneity (highly customized RPC protocols) of the system. However, the live-replay design principle is not a panacea; we made a couple of critical design decisions (and addressed their corresponding challenges) along the way to realize the principle in production. First, to avoid inspecting the responses of user-facing RPCs (due to privacy concerns), PAIR designs a universal and privacy-preserving mechanism, via profiling the end-to-end system invocation, to represent the RPC handling logic. Second, to ensure that PAIR provides proactive defense (rather than reactive defense that is often limited to known vulnerabilities), PAIR designs an empirical vulnerability labeling mechanism to effectively identify a group of potentially insecure RPCs while safely excluding other RPCs. Over the course of three years of production deployment, PAIR has helped locate 133 truly insecure RPCs from billions of requests, while maintaining a zero false negative rate per our production observations.

Session Chair

Kehuan Zhang

Session 6B

Privacy (I)

Conference
3:40 PM — 5:00 PM HKT
Local
Jun 9 Wed, 12:40 AM — 2:00 AM PDT

Measuring User Perception for Detecting Unexpected Access to Sensitive Resource in Mobile Apps

Trung Tin Nguyen (CISPA Helmholtz Center for Information Security, Germany), Duc Cuong Nguyen (CISPA Helmholtz Center for Information Security, Germany), Michael Schilling (CISPA Helmholtz Center for Information Security, Germany), Gang Wang (University of Illinois at Urbana-Champaign, USA), Michael Backes (CISPA Helmholtz Center for Information Security, Germany)

Understanding users' perception of app behaviors is an important step toward detecting data access that violates user expectations. While existing works have used various proxies to infer user expectations (e.g., by analyzing app descriptions), how real-world users perceive an app's data access when they interact with graphical user interfaces (UIs) has not been fully explored. In this paper, we aim to fill this gap by directly measuring how end-users perceive app behaviors based on graphical UI elements via extensive user studies. The results are used to build an automated tool, GUIBAT (Graphical User Interface Behavioral Analysis Tool), that detects sensitive resource accesses that violate user expectations. We conducted three user studies in total (N=904). The first two user studies were used to build a semantic mapping between user expectations of sensitive resource accesses and common graphical UI elements (N=459). The third user study (N=445) was used to validate the performance of GUIBAT in predicting user expectations. By comparing user expectations with the actual app behavior (inferred by static program analysis) for 47,909 Android apps, we found that 75.38% of the apps have at least one unexpected sensitive resource access, of which third-party libraries account for 46.13%. Our analysis lays a concrete foundation for modeling user expectations based on UI elements. We show the urgent need for more transparent UI designs to better inform users of data access, and call for new tools to support app developers in this endeavor.
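The comparison at the heart of this approach can be sketched as follows: expectations inferred from visible UI elements versus the sensitive accesses found by static analysis. The mapping below is a made-up example for illustration, not the semantic mapping learned from the user studies.

```python
# Hypothetical UI-element -> expected-resource mapping (illustrative only).
UI_EXPECTATION = {
    "camera_button": {"CAMERA"},
    "map_view": {"FINE_LOCATION"},
    "contact_picker": {"CONTACTS"},
}

def unexpected_accesses(ui_elements, actual_accesses):
    """Return sensitive accesses not justified by any visible UI element."""
    expected = set()
    for element in ui_elements:
        expected |= UI_EXPECTATION.get(element, set())
    return actual_accesses - expected

# e.g. an app showing only a map view that also reads contacts:
# unexpected_accesses({"map_view"}, {"FINE_LOCATION", "CONTACTS"}) -> {"CONTACTS"}
```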

Low-Cost Hiding of the Query Pattern

Maryam Sepehri (Milan University, Italy), Florian Kerschbaum (University of Waterloo, Canada)

Several attacks have shown that the leakage from the access pattern in searchable encryption is dangerous. Attacks exploiting leakage from the query pattern are as dangerous, but less explored. While there are known, lightweight countermeasures to hide information in the access pattern by padding the ciphertexts, the same is not true for the query pattern. Oblivious RAM hides the query patterns, but requires a logarithmic overhead in the size of the database and hence will become even slower as data grows. In this paper we present a query smoothing algorithm to hide the frequency information in the query pattern of searchable encryption schemes by introducing fake queries. Our method only introduces a constant overhead of 7 to 13 fake queries per real query in our experiments. Furthermore, we show that our query smoothing algorithm can also be applied to range-searchable encryption schemes and then prevents all recent plaintext recovery attacks.
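A naive illustration of frequency smoothing with fake queries: after each real query, issue fake queries for the currently least-queried keywords so that the observable frequency histogram stays close to uniform. This is only meant to convey the idea; the paper's smoothing algorithm and its constant-overhead guarantee are different and more careful.

```python
from collections import Counter

class Smoother:
    def __init__(self, keywords, fakes_per_query=3):
        self.counts = Counter({w: 0 for w in keywords})
        self.k = fakes_per_query

    def issue(self, real_keyword, send):
        send(real_keyword)                  # the server cannot tell real from fake
        self.counts[real_keyword] += 1
        # pad with fake queries for the least-frequent keywords so far
        for w, _ in sorted(self.counts.items(), key=lambda kv: kv[1])[: self.k]:
            send(w)                         # fake query; its results are discarded
            self.counts[w] += 1
```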

Horizontal Privacy-Preserving Linear Regression Which is Highly Efficient for Dataset of Low Dimension

Linpeng Lu (Shanghai JiaoTong University, China), Ning Ding (Shanghai JiaoTong University, China)

Linear regression is a widely used machine learning model for applications such as personalized health-care prediction, recommendation systems, and policy making. Nowadays, one important trend in applying this model (and others) is privacy-preserving linear regression, in which multiple parties, each possessing a part of the dataset, jointly perform the learning process while paying specific attention to preserving the privacy of their data. Consequently, a number of works on achieving this goal with various properties have appeared in recent years. In this paper, we present a new privacy-preserving linear regression protocol for the scenario where the dataset is distributed horizontally, which works highly efficiently in particular when the training dataset is of low dimension. Our protocol uses two non-colluding servers: multiple data providers share their private data, i.e., a d × d matrix where d denotes the dimension of the dataset, into two shares and send the shares to the two servers, which then jointly perform the training process securely. Our technical novelties are as follows. We remark that in the method of solving linear systems (SLS), one mainstream method for linear regression (the other being stochastic gradient descent (SGD)), the time complexity depends only on the dimension, regardless of the number of samples. Note that known works using SLS employ garbled circuits for the entire linear system and thus use heavy computation. We implement SLS via the share-computation method, which is known for securely implementing SGD but, to our knowledge, has not been applied to SLS, and thus inherit the advantages of both: SLS admits fast computation when the dataset is of low dimension, and the share-computation method is faster than garbled circuits for the entire linear system. In the share-computation for SLS, we propose a hybrid method, combining garbled circuits and secret sharing, to realize a secure and round-efficient division for fixed-point numbers (8 + 2θ rounds, where θ is a small number of iterations, set to 5 in our experiments). We then use the division protocol to implement our new protocol for linear regression. As a consequence, our protocol is highly efficient for datasets of low dimension. We implement our protocol in C++, and the experiments show that it is much more efficient than state-of-the-art implementations of privacy-preserving linear regression for datasets of low dimension.
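The "solving linear systems" route mentioned above is, in its standard form, the normal equations of least squares; each party's horizontal slice aggregates into d × d and d × 1 statistics whose sizes are independent of the number of samples. This is standard background shown for context; the secure share-computation of the solve is the paper's contribution.

```latex
\Bigl(\textstyle\sum_k X_k^{\top} X_k\Bigr)\, w \;=\; \sum_k X_k^{\top} y_k
```

Here (X_k, y_k) is party k's horizontal slice of the data; the left-hand matrix is d × d and the right-hand vector is d × 1, so the cost of solving for w depends on the dimension d but not on the total number of samples.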

Accelerating Secure (2+1)-Party Computation by Insecure but Efficient Building Blocks

Keitaro Hiwatashi (The University of Tokyo / AIST, Japan), Ken Ogura (The University of Tokyo, Japan), Satsuya Ohata (Digital Garage, Inc., Japan), Koji Nuida (The University of Tokyo / AIST, Japan)

Secure multi-party computation (MPC) is a cryptographic tool that enables a set of parties to jointly compute a function while keeping each input secret. Since MPC based on secret sharing (SS) achieves high throughput and runs fast, many applications have been developed. However, SS-based MPC generally requires many communication rounds, and this becomes a performance bottleneck in real-world applications over high-latency networks. In this paper, we propose SS-based secure three-party computation with almost no preprocessing, based on our new (small-)constant-round fundamental gates, by revisiting a framework from a few previous works in which a number of parties are assisted by another party who may partially learn secret information. Instead of ordinary logical gates, our fundamental gate is an efficient Equality, whose result leaks to the third party, and from this insecure Equality we develop novel two-round constructions of secure building-block protocols (LessThan comparison, RightShift, Table LookUp, etc.). To show the practicality of our protocols, we implement a secure exact edit-distance protocol for two genome strings. Our experiments show that in some network settings our protocol is about 2 times faster (14 times faster when preprocessing is taken into consideration) than the state-of-the-art SS-based protocol (Ohata and Nuida, FC 2020).
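As background for readers unfamiliar with SS-based MPC, the sketch below shows 2-out-of-2 additive secret sharing over a 64-bit ring, the kind of representation such protocols compute on. This is generic textbook material, not the paper's Equality-based building blocks.

```python
import secrets

MOD = 2 ** 64

def share(x):
    # split x into two random-looking shares; each server holds one share
    r = secrets.randbelow(MOD)
    return r, (x - r) % MOD

def reconstruct(s0, s1):
    return (s0 + s1) % MOD

def add_shares(a, b):
    # addition is local: each server adds its own shares, with no interaction
    return (a[0] + b[0]) % MOD, (a[1] + b[1]) % MOD

x, y = 42, 100
sx, sy = share(x), share(y)
assert reconstruct(*add_shares(sx, sy)) == (x + y) % MOD
```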

Session Chair

Miroslaw Kutylowski
