Tuesday, 9:00 a.m.–9:15 a.m.
LEET '12 Program Chair: Engin Kirda, Northeastern University
Tuesday, 9:15 a.m.–10:30 a.m.
Session Chair: Fabian Monrose, University of North Carolina, Chapel Hill
Alok Tongaonkar, Ram Keralapura, and Antonio Nucci, Narus, Inc.
The evolution of the Internet in the last few years has been characterized by dramatic changes in the way users behave, interact, and utilize the network, posing new challenges to network operators. To deal with the increasing number of threats to enterprise networks, operators need greater visibility into and understanding of the applications running in their networks. In years gone by, the biggest challenge in network application identification was providing real-time classification at increasing wire speeds. Operators now face another challenge: keeping pace with the tremendous rate at which new applications are developed. This problem can be attributed largely to the explosive growth in the number of web and mobile applications, which, combined with application-hiding techniques like encryption, port abuse, and tunneling, has rendered traditional approaches to application identification ineffective. In this paper, we discuss the challenges facing network operators and the limitations of current state-of-the-art approaches, in both the commercial and the research world, in solving these problems.
Ari Juels and Ting-Fang Yen, RSA Laboratories
An Advanced Persistent Threat (APT) is a targeted attack against a high-value asset or a physical system. Drawing from analogies in the Sherlock Holmes stories of Sir Arthur Conan Doyle, we illustrate potential strategies of deception and evasion available in this setting, and caution against overly narrow characterization of APTs.
Mike Samuel and Ulfar Erlingsson, Google
Software that processes rich content suffers from endemic security vulnerabilities. Frequently, these bugs are due to data confusion: discrepancies in how content data is parsed, composed, and otherwise processed by different applications, library frameworks, and language runtimes. Data confusion often enables code injection attacks, such as cross-site scripting or SQL injection, by leading to incorrect assumptions about the encodings and checks applied to rich content of uncertain provenance. However, even for well-structured, value-only content, data confusion can critically impact security, e.g., as shown by XML signature vulnerabilities.
This paper advocates the position that data confusion can be effectively prevented through the use of simple mechanisms, such as parsing, that eliminate ambiguity by fully converting content data to canonical, clearly understood formats.
Using code injection on the Web as our motivation, we make the case that automatic defense mechanisms should be integrated with programming languages, application frameworks, and runtime libraries, and applied with little, or no, developer intervention. We outline a scalable, sustainable approach for developing and maintaining those types of mechanisms. The resulting tools can offer comprehensive protection against data confusion, even when multiple types of rich content data are processed and composed in complex ways.
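The canonicalization idea above can be sketched in a few lines. This is a toy illustration, not the paper's mechanism: every untrusted field is converted exactly once to a canonical, HTML-escaped form at the point of composition, so no later stage has to guess whether content was already encoded. The `SafeHtml` wrapper and `compose` helper are assumptions introduced here for illustration.

```python
import html


class SafeHtml:
    """Marks a string as already in canonical, escaped HTML form."""
    def __init__(self, text):
        self.text = text


def to_safe_html(value):
    # Idempotent canonicalization: values already marked safe pass through
    # unchanged; anything else is escaped exactly once.
    if isinstance(value, SafeHtml):
        return value
    return SafeHtml(html.escape(str(value), quote=True))


def compose(template, **fields):
    """Compose a page from a template; every field is canonicalized first,
    so the template never embeds content of uncertain encoding."""
    return template.format(**{k: to_safe_html(v).text for k, v in fields.items()})
```

Because canonicalization happens once, at a single well-defined point, different processing stages cannot disagree about the data's encoding, which is the root cause of the data-confusion bugs described above.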
Tuesday, 10:30 a.m.–11:00 a.m.
Tuesday, 11:00 a.m.–11:50 a.m.
Session Chair: Vern Paxson, University of California, Berkeley, and International Computer Science Institute
Paul Ferguson, Trend Micro, Inc.
Trend Micro's Threat Research group is specially tasked with looking forward on the threat landscape and working with technology and product development groups inside the company to ensure that, as a company, we deliver the appropriate security solutions to address emerging threats to our customers. Accomplishing this requires our threat research group to understand, explore, and deconstruct the malicious technologies, campaigns, vulnerabilities, and exploits currently being perpetrated on victims. With this in mind, I have briefly outlined below what we are currently witnessing as "emerging threats" that pose serious potential risks to our customers, and others, in their daily use of the Internet and beyond.
Eric Chien and Liam OMurchu, Symantec; Nicolas Falliere
On October 14, 2011, we were alerted to a sample by the Laboratory of Cryptography and System Security (CrySyS) at Budapest University of Technology and Economics. The threat appeared very similar to the Stuxnet worm from June of 2010 [1]. CrySyS named the threat Duqu [dyü-kyü] because it creates files with the file name prefix “~DQ” [2]. We confirmed Duqu is a threat nearly identical to Stuxnet, but with a completely different purpose of espionage rather than sabotage.
Tuesday, 11:50 a.m.–1:30 p.m.
Tuesday, 1:30 p.m.–2:45 p.m.
Session Chair: Manuel Egele, University of California, Santa Barbara
David Dittrich, University of Washington
Computer criminals regularly construct large distributed attack networks composed of many thousands of compromised computers around the globe. Once constituted, these attack networks are used to perform computer crimes, creating yet more victims of secondary computer crimes such as denial-of-service attacks, spam delivery, theft of personal and financial information for fraud, and exfiltration of proprietary information for competitive advantage (industrial espionage).
The arms race between criminal actors who create and operate botnets and the computer security industry and research community who are actively trying to take these botnets down is escalating in aggressiveness. As the sophistication level of botnet engineering and operations increases, so does the demand on reverse engineering, understanding weaknesses in design that can be exploited on the defensive (or counter-offensive) side, and the possibility that actions to take down or eradicate the botnet may cause unintended consequences.
Alexandru G. Bardas, Loai Zomlot, Sathya Chandran Sundaramurthy, and Xinming Ou, Kansas State University; S. Raj Rajagopalan, HP Labs; Marc R. Eisenbarth, HP TippingPoint
UDP traffic has recently been used extensively in flooding-based distributed denial of service (DDoS) attacks, most notably by those launched by the Anonymous group. Despite extensive past research in the general area of DDoS detection/prevention, the industry still lacks effective tools to deal with DDoS attacks leveraging UDP traffic. This paper presents our investigation into the proportional-packet rate assumption and the use of this criterion to classify UDP traffic, with the goal of detecting malicious addresses that launch flooding-based UDP DDoS attacks. We conducted our experiments on data from a large number of production networks, including large corporations (edge and core), ISPs, universities, and financial institutions. In addition, we conducted experiments on the DETER testbed as well as a testbed of our own. All the experiments indicate that the proportional-packet rate assumption generally holds for benign UDP traffic and can be used as a reasonable criterion to differentiate DDoS from non-DDoS traffic. We designed and implemented a prototype classifier based on this criterion and discuss how it can be used to effectively thwart UDP-based flooding attacks.
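A minimal sketch of the classification criterion described above, under the assumption that it compares the packet rate a source sends against the rate it receives back (a simplification of the paper's classifier; the threshold and window handling here are illustrative, not the authors' values):

```python
from collections import defaultdict


def classify_udp_sources(packets, ratio_threshold=10.0, min_packets=100):
    """packets: iterable of (src, dst) address pairs observed in one
    time window of UDP traffic. Returns the set of suspicious sources."""
    sent = defaultdict(int)
    received = defaultdict(int)
    for src, dst in packets:
        sent[src] += 1
        received[dst] += 1

    suspicious = set()
    for addr, n_sent in sent.items():
        if n_sent < min_packets:
            continue  # too little traffic to judge either way
        n_recv = received.get(addr, 0)
        # Benign UDP exchanges tend to stay roughly proportional in
        # both directions; a flooder sends far more than it gets back.
        if n_sent > ratio_threshold * max(n_recv, 1):
            suspicious.add(addr)
    return suspicious
```

A host exchanging request/response traffic stays below the ratio threshold, while a one-way flooder trips it even with a generous threshold, which is the intuition behind the proportional-packet rate assumption.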
Armin Büscher, Websense Security Labs; Thorsten Holz, Ruhr University Bochum
Known for a long time, Distributed Denial-of-Service (DDoS) attacks are still prevalent today and cause harm on the Internet on a daily basis. The main mechanism behind this kind of attack is the use of so-called botnets, i.e., networks of compromised machines under the control of an attacker. Several different botnet families focus on DDoS attacks and are even used to sell such attacks as a service on underground markets.
In this paper, we present an empirical study of modern DDoS botnets and analyze one particular family of botnets in detail. We identified 35 Command and Control (C&C) servers related to DirtJumper (also called Ruskill), one of the most popular DDoS botnets in operation at this point in time. We monitored these C&C servers for a period of several months, during which we observed almost two thousand different DDoS attacks carried out by the botmasters behind the botnets. Based on this empirical data, we performed an analysis of the characteristics of DDoS attacks. To complement this C&C-centric point of view, we briefly analyzed the information logged at two different victims of DirtJumper DDoS attacks to study how such attacks are perceived at an end host. Our results provide insights into modern DDoS attacks and help us understand how such attacks are carried out nowadays.
Tuesday, 2:45 p.m.–3:10 p.m.
Session Chair: Manuel Egele, University of California, Santa Barbara
Yeongung Park, Dankook University; ChoongHyun Lee, Massachusetts Institute of Technology; Chanhee Lee and JiHyeog Lim, Dankook University; Sangchul Han and Minkyu Park, Konkuk University; Seong-Je Cho, Dankook University
Recent malware often collects sensitive information from third-party applications with an illegally escalated privilege to the system level (the highest level) on the Android platform. An attack to obtain root-level privilege in an Android environment can pose a serious threat to users because it breaks down the whole security system. RGBDroid (Rooting Good-Bye on Droid) is an extension to the Android smartphone platform that effectively detects and responds to the attacks associated with escalation or abuse of privileges. Considering the Android security model, which dictates that users are not allowed to get root-level privilege and that root-level privilege should be restrictively used, RGBDroid can find out whether an application illegally acquires root-level privilege, and does not permit an illegal root-level process to access protected resources according to the principle of least privilege. RGBDroid protects the Android system against malicious applications even when malware obtains root-level privilege by exploiting vulnerabilities of the Android platform.
This paper shows that i) a system can still be safely protected even after the system security is breached by privilege escalation attacks, and ii) our proposed response technique has a comparative advantage over conventional prevention techniques in terms of operational overhead, which can otherwise lead to significant deterioration of overall system performance. RGBDroid has been implemented on an embedded board and verified experimentally.
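The least-privilege check described above can be illustrated as follows. This is a hypothetical sketch, not RGBDroid's actual implementation: a root-privileged process is denied access to protected resources unless its binary appears on an allow-list of programs legitimately entitled to root. The allow-list and resource paths here are made-up examples.

```python
ROOT_UID = 0

# Hypothetical allow-list of system binaries permitted to act as root.
ROOT_ALLOWLIST = {"/system/bin/init", "/system/bin/vold"}

# Hypothetical set of resources that only allow-listed root code may touch.
PROTECTED_RESOURCES = {"/data/system/accounts.db", "/data/misc/keystore"}


def may_access(uid, exe_path, resource):
    """Decide whether a process (uid, executable path) may touch a resource."""
    if resource not in PROTECTED_RESOURCES:
        return True   # unprotected resources follow the normal policy
    if uid != ROOT_UID:
        return True   # non-root access is handled by standard permissions
    # A root process touching a protected resource must be allow-listed;
    # otherwise treat it as an illegally escalated process and deny access,
    # per the principle of least privilege.
    return exe_path in ROOT_ALLOWLIST
```

The point of such a response-side check is that it only runs on accesses to a small set of protected resources, which is where the overhead advantage over always-on prevention techniques comes from.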
Tuesday, 3:10 p.m.–3:40 p.m.
Tuesday, 3:40 p.m.–4:30 p.m.
Session Chair: Ulfar Erlingsson, Google Inc.
Jason Britt, Brad Wardman, Dr. Alan Sprague, and Gary Warner, University of Alabama at Birmingham
Phishing websites attempt to deceive people into exposing their passwords, user IDs, and other sensitive information by mimicking legitimate websites such as banks, product vendors, and service providers. Phishing websites are a pervasive and ongoing problem. Examining and analyzing a phishing website is a good first step in an investigation.
Examining and analyzing phishing websites is a manually intensive job, and manually analyzing a large, continuous feed of phishing websites would be almost insurmountable because of the time and labor required. Automated methods are needed that group large volumes of phishing website data and allow investigators to focus their efforts on the largest groupings, which represent the most prevalent phishing groups or individuals.
This paper describes an attempt to create such an automated method. The method is based on the assumption that phishing websites attacking a particular brand are often reused many times by a particular group or individual, and that when the targeted brand changes, a new phishing website is not created from scratch; rather, incremental upgrades are made to the original website. The method employs a SLINK-style clustering algorithm using local domain file commonality between websites as a distance metric. It produces clusters of phishing websites targeting the same brand that evidence suggests were created by the same phishing group or individual.
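The clustering idea above can be sketched as single-linkage clustering where the distance between two site snapshots is how few local files they share. This sketch assumes Jaccard distance over file-name sets and a fixed distance cutoff, both simplifications introduced here for illustration rather than the paper's exact metric:

```python
def jaccard_distance(files_a, files_b):
    """Distance 0.0 = identical file sets, 1.0 = nothing in common."""
    union = len(files_a | files_b)
    return 1.0 - (len(files_a & files_b) / union if union else 0.0)


def single_linkage_clusters(sites, max_distance=0.5):
    """sites: dict mapping site id -> set of local file names.
    Merges any two sites closer than max_distance (single linkage),
    using a small union-find structure to track cluster membership."""
    parent = {s: s for s in sites}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    ids = list(sites)
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            if jaccard_distance(sites[a], sites[b]) <= max_distance:
                parent[find(a)] = find(b)

    clusters = {}
    for s in ids:
        clusters.setdefault(find(s), set()).add(s)
    return list(clusters.values())
```

Two phishing kits that share most of their local files (the incremental-upgrade pattern described above) fall into one cluster, while unrelated sites stay apart.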
Tudor Dumitraş and Petros Efstathopoulos, Symantec Research Labs
The Internet can be a dangerous place: 800,000 new malware variants are detected each day, and this number is growing at an exponential rate—driven by the quest for economic gains. However, over the past ten years operating-system vendors have introduced a number of security technologies that aim to make exploits harder and to reduce the attack surface of the platform. Faced with these two conflicting trends, it is difficult for end-users to determine what techniques make them safer from Internet attacks. In this position paper, we argue that to answer this question conclusively we must analyze field data collected on real hosts that are targeted by attacks—e.g., the approximately 50 million records of anti-virus telemetry available through Symantec’s WINE platform. Such studies can characterize the factors that drive the production of malware, can help us understand the impact of security technologies in the real world and can suggest new security metrics, derived from field observations rather than small lab experiments, indicating how susceptible to attacks a computing platform may be.
Tuesday, 4:30 p.m.–5:00 p.m.
Tuesday, 5:00 p.m.–5:50 p.m.
Session Chair: Engin Kirda, Northeastern University
Yazan Boshmaf, Ildar Muslukhov, Konstantin Beznosov, and Matei Ripeanu, University of British Columbia
The ease with which we adopt online personas and relationships has created a soft spot that cyber criminals are willing to exploit. Advances in artificial intelligence make it feasible to design bots that sense, think and act cooperatively in social settings just like human beings. In the wrong hands, these bots can be used to infiltrate online communities, build up trust over time and then send personalized messages to elicit information, sway opinions and call to action. In this position paper, we observe that defending against such malicious bots raises a set of unique challenges that relate to web automation, online-offline identity binding and usable security.
Kurt Thomas and Chris Grier, University of California, Berkeley; Vern Paxson, University of California, Berkeley, and International Computer Science Institute
As social networks emerge as an important tool for political engagement and dissent, services including Twitter and Facebook have become regular targets of censorship. In the past, nation states have exerted their control over Internet access to outright block connections to social media during times of political upheaval. Parties without such capabilities may however still desire to control political expression. A striking example of such manipulation recently occurred on Twitter when an unknown attacker leveraged 25,860 fraudulent accounts to send 440,793 tweets in an attempt to disrupt political conversations following the announcement of Russia’s parliamentary election results.
In this paper, we undertake an in-depth analysis of the infrastructure and accounts that facilitated the attack. We find that miscreants leveraged the spam-as-a-service market to acquire thousands of fraudulent accounts which they used in conjunction with compromised hosts located around the globe to flood out political messages. Our findings demonstrate how malicious parties can adapt the services and techniques traditionally used by spammers to other forms of attack, including censorship. Despite the complexity of the attack, we show how Twitter’s relevance-based search helped mitigate the attack’s impact on users searching for information regarding the Russian election.