Proceedings

The full Proceedings published by USENIX for this symposium are available below. To download individual papers, click on the paper title to go to the download page. Copyright to the individual works is retained by the author(s).

 SOUPS '15 Full Proceedings (PDF)

Thursday, July 23

Privacy Attitudes and Comprehension

A Design Space for Effective Privacy Notices

Florian Schaub, Carnegie Mellon University; Rebecca Balebako, RAND Corporation; Adam L. Durity, Google; Lorrie Faith Cranor, Carnegie Mellon University

Notifying users about a system's data practices is supposed to enable users to make informed privacy decisions. Yet, current notice and choice mechanisms, such as privacy policies, are often ineffective because they are neither usable nor useful, and are therefore ignored by users. Constrained interfaces on mobile devices, wearables, and smart home devices connected in an Internet of Things exacerbate the issue. Much research has studied usability issues of privacy notices and many proposals for more usable privacy notices exist. Yet, there is little guidance for designers and developers on the design aspects that can impact the effectiveness of privacy notices. In this paper, we make multiple contributions to remedy this issue. We survey the existing literature on privacy notices and identify challenges, requirements, and best practices for privacy notice design. Further, we map out the design space for privacy notices by identifying relevant dimensions. This provides a taxonomy and consistent terminology of notice approaches to foster understanding and reasoning about notice options available in the context of specific systems. Our systemization of knowledge and the developed design space can help designers, developers, and researchers identify notice and choice requirements and develop a comprehensive notice concept for their system that addresses the needs of different audiences and considers the system's limitations and opportunities for providing notice.

“WTH..!?!” Experiences, Reactions, and Expectations Related to Online Privacy Panic Situations

Julio Angulo, Karlstad University; Martin Ortlieb, Google

There are moments in which users might find themselves experiencing feelings of panic with the realization that their privacy or personal information on the Internet might be at risk. We present an exploratory study on common experiences of online privacy-related panic and on users’ reactions to frequently occurring privacy incidents. By using the metaphor of a privacy panic button, we also gather users’ expectations on the type of help that they would like to obtain in such situations. Through user interviews (n = 16) and a survey (n = 549), we identify 18 scenarios of privacy panic situations. We ranked these scenarios according to their frequency of occurrence and to users’ concern about becoming victims of these incidents. We explore users’ underlying worries about falling prey to these incidents and other contextual factors common to privacy panic experiences. Based on our findings we present implications for the design of a help system for users experiencing privacy panic situations.

“My Data Just Goes Everywhere:” User Mental Models of the Internet and Implications for Privacy and Security

Ruogu Kang, Laura Dabbish, Nathaniel Fruchter, and Sara Kiesler, Carnegie Mellon University

Many people use the Internet every day yet know little about how it really works. Prior literature diverges on how people’s Internet knowledge affects their privacy and security decisions. We undertook a qualitative study to understand what people do and do not know about the Internet and how that knowledge affects their responses to privacy and security risks. Lay people, as compared to those with computer science or related backgrounds, had simpler mental models that omitted Internet levels, organizations, and entities. People with more articulated technical models perceived more privacy threats, possibly driven by their more accurate understanding of where specific risks could occur in the network. Despite these differences, we did not find a direct relationship between people’s technical background and the actions they took to control their privacy or increase their security online. Consistent with other work on user knowledge and experience, our study suggests a greater emphasis on policies and systems that protect privacy and security without relying too much on users’ security practices.

User Perceptions of Sharing, Advertising, and Tracking

Farah Chanchary and Sonia Chiasson, Carleton University

Extending earlier work, we conducted an online user study to investigate users' understanding of online behavioral advertising (OBA) and tracking prevention tools (TPT), and whether users' willingness to share data with advertising companies varied depending on the type of first-party website. We present results from 368 participant responses across four types of websites: an online banking site, an online shopping site, a search engine, and a social networking site.

In general, we identified that participants had positive responses for OBA and that they demonstrated clear preferences for which classes of information they would like to disclose online. Our results generalize over a variety of website categories containing data with different levels of sensitivity, as opposed to only the medical context as was shown in previous work by Leon et al. In our study, participants' privacy attitudes significantly dominated their sharing willingness. Interestingly, participants appreciated the idea of user-customized targeted ads and some would be more willing to share data if given prior control mechanisms for tracking protection tools.

Design and Compliance

Leading Johnny to Water: Designing for Usability and Trust

Erinn Atwater, Cecylia Bocovich, Urs Hengartner, Ed Lank, and Ian Goldberg, University of Waterloo

Although the means and the motivation for securing private messages and emails with strong end-to-end encryption exist, we have yet to see the widespread adoption of existing implementations. Previous studies have suggested that this is due to the lack of usability and understanding of existing systems such as PGP. A recent study by Ruoti et al. suggested that transparent, standalone encryption software that shows ciphertext and allows users to manually participate in the encryption process is more trustworthy than integrated, opaque software and just as usable.

In this work, we critically examine this suggestion by revisiting their study, deliberately investigating the effect of integration and transparency on users' trust. We also implement systems that adhere to the OpenPGP standard and use end-to-end encryption without reliance on third-party key escrow servers.

We find that while approximately a third of users do in fact trust standalone encryption applications more than browser extensions that integrate into their webmail client, it is not due to being able to see and interact with ciphertext. Rather, we find that users hold a belief that desktop applications are less likely to transmit their personal messages back to the developer of the software. We also find that despite this trust difference, users still overwhelmingly prefer integrated encryption software, due to the enhanced user experience it provides. Finally, we provide a set of design principles to guide the development of future consumer-friendly end-to-end encryption tools.
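
As a concrete illustration of the kind of system the authors target, the sketch below shows OpenPGP-style end-to-end encryption with no key escrow server, using the python-gnupg wrapper around a locally installed GnuPG binary. This is not the authors' implementation; the keyring path and key file are hypothetical, and passphrase and pinentry details are omitted.

    # Minimal OpenPGP encryption sketch using python-gnupg (assumes GnuPG is installed).
    # Illustrative only; not the tooling evaluated in the paper.
    import gnupg

    gpg = gnupg.GPG(gnupghome="/tmp/demo-keyring")     # hypothetical keyring location

    # Import the recipient's public key (ASCII-armored, obtained out of band).
    with open("alice_public.asc") as f:                # hypothetical key file
        import_result = gpg.import_keys(f.read())
    recipient_fpr = import_result.fingerprints[0]

    # Encrypt end to end: only the recipient's private key, which never leaves
    # her device, can decrypt this message. No key escrow server is involved.
    encrypted = gpg.encrypt("Meet at noon.", recipients=[recipient_fpr],
                            always_trust=True)         # trust decision made locally
    print(str(encrypted))                              # ASCII-armored OpenPGP message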

Usability of Augmented Reality for Revealing Secret Messages to Users but Not Their Devices

Sarah J. Andrabi, Michael K. Reiter, and Cynthia Sturton, The University of North Carolina at Chapel Hill

We evaluate the possibility of a human receiving a secret message while trusting no device with the contents of that message, by using visual cryptography (VC) implemented with augmented-reality displays (ARDs). In a pilot user study using Google Glass and an improved study using the Epson Moverio, users were successfully able to decode VC messages using ARDs. In particular, 26 out of 30 participants in the Epson Moverio study decoded numbers and letters with 100% accuracy. Our studies also tested assumptions made in previous VC research about users' abilities to detect active modification of a ciphertext. While a majority of the participants could identify that the images were modified, fewer participants could detect all of the modifications in the ciphertext or the decoded plaintext.
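
For readers unfamiliar with visual cryptography, the following toy sketch (ours, not the paper's implementation) shows the classic 2-out-of-2 scheme for a black-and-white image: each secret pixel expands into a pair of subpixels on two shares, and stacking the shares reveals the secret without any device performing decryption.

    # Toy 2-out-of-2 visual cryptography for a 1-bit image (illustrative only).
    # A white pixel gets identical subpixel pairs on both shares; a black pixel
    # gets complementary pairs. Overlaying the shares (OR of black subpixels)
    # reconstructs the secret with 50% contrast loss.
    import random

    PATTERNS = [(0, 1), (1, 0)]          # 1 = black subpixel, 0 = transparent

    def make_shares(secret_row):
        share1, share2 = [], []
        for pixel in secret_row:                     # pixel: 0 = white, 1 = black
            p = random.choice(PATTERNS)
            share1.extend(p)
            share2.extend(p if pixel == 0 else (1 - p[0], 1 - p[1]))
        return share1, share2

    def overlay(s1, s2):
        return [a | b for a, b in zip(s1, s2)]       # stacking = OR of black subpixels

    secret = [1, 0, 1, 1, 0]                         # one row of the secret image
    s1, s2 = make_shares(secret)
    print(overlay(s1, s2))   # black pixels -> both subpixels black; white -> one black, one clear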

Unpacking Security Policy Compliance: The Motivators and Barriers of Employees’ Security Behaviors

John M. Blythe, Lynne Coventry, and Linda Little, Northumbria University

The body of research that focuses on employees’ Information Security Policy compliance is problematic as it treats compliance as a single behavior. This study explored the underlying behavioral context of information security in the workplace, exploring how individual and organizational factors influence the interplay of the motivations and barriers of security behaviors. Investigating factors that had previously been explored in security research, 20 employees from two organizations were interviewed and the data was analyzed using framework analysis. The analysis indicated that there were seven themes pertinent to information security: Response Evaluation, Threat Evaluation, Knowledge, Experience, Security Responsibility, Personal and Work Boundaries, and Security Behavior. The findings suggest that these differ by security behavior and by the nature of the behavior (e.g. on- and offline). Conclusions are discussed highlighting barriers to security actions and implications for future research and workplace practice. 

Authentication Experience

"I Added '!' at the End to Make It Secure": Observing Password Creation in the Lab

Blase Ur, Fumiko Noma, Jonathan Bees, Sean M. Segreti, Richard Shay, Lujo Bauer, Nicolas Christin, and Lorrie Faith Cranor, Carnegie Mellon University

Users often make passwords that are easy for attackers to guess. Prior studies have documented features that lead to easily guessed passwords, but have not probed why users craft weak passwords. To understand the genesis of common password patterns and uncover average users’ misconceptions about password strength, we conducted a qualitative interview study. In our lab, 49 participants each created passwords for fictitious banking, email, and news website accounts while thinking aloud. We then interviewed them about their general strategies and inspirations. Most participants had a well-defined process for creating passwords. In some cases, participants consciously made weak passwords. In other cases, however, weak passwords resulted from misconceptions, such as the belief that adding “!” to the end of a password instantly makes it secure or that words that are difficult to spell are more secure than easy-to-spell words. Participants commonly anticipated only very targeted attacks, believing that using a birthday or name is secure if those data are not on Facebook. In contrast, some participants made secure passwords using unpredictable phrases or non-standard capitalization. Based on our data, we identify aspects of password creation ripe for improved guidance or automated intervention.
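
The misconceptions above are easy to probe with an off-the-shelf strength estimator. The snippet below is not part of the study; it uses the zxcvbn library (assumed installed) to compare estimated guess counts for a common word, the same word with "!" appended, and an unpredictable phrase.

    # Illustrative strength check with the zxcvbn estimator (pip install zxcvbn).
    # Appending "!" to a common word raises the guess estimate only modestly,
    # while an unpredictable phrase raises it by many orders of magnitude.
    from zxcvbn import zxcvbn

    for candidate in ["monkey", "monkey!", "oddly cheerful walrus sandwich"]:
        result = zxcvbn(candidate)
        print(f"{candidate!r}: score {result['score']}/4, ~{result['guesses']:.0f} guesses")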

Social Media As a Resource for Understanding Security Experiences: A Qualitative Analysis of #Password Tweets

Paul Dunphy, Vasilis Vlachokyriakos, and Anja Thieme, Newcastle University; James Nicholson, Northumbria University; John McCarthy, University College Cork; Patrick Olivier, Newcastle University

As security technologies become more embedded into people's everyday lives, it becomes more challenging for researchers to understand the contexts in which those technologies are situated. The need to develop research methods that provide a lens on personal experiences has driven much recent work in human-computer interaction, but has so far received little focus in usable security. In this paper we explore the potential of the microblogging site Twitter to provide experience-centered insights into security practices. Taking the topic of passwords as an example, we collected tweets with the goal of capturing personal narratives of password use situated in context. We performed a qualitative content analysis on the tweets and uncovered: how tweets contained critique and frustration about existing password practices and workarounds; how people socially shared and revoked their passwords as a deliberate act in exploring and defining their relationships with others; practices of playfully bypassing password mechanisms; and how passwords are appropriated in portrayals of self. These findings begin to evidence the extent to which passwords increasingly serve social functions that are more complex than those documented in previous research.

“I’m Stuck!”: A Contextual Inquiry of People with Visual Impairments in Authentication

Bryan Dosono, Jordan Hayes, and Yang Wang, Syracuse University

Current authentication mechanisms pose significant challenges for people with visual impairments. This paper presents results from a contextual inquiry study that investigated the experiences this population encounters when logging into their computers, smart phones, and websites that they use. By triangulating results from observation, contextual inquiry interviews and a hierarchical task analysis of participants’ authentication tasks, we found that these users experience various difficulties associated with the limitations of assistive technologies, suffer noticeable delays in authentication and fall prey to confusing login challenges. The hierarchical task analysis uncovered challenging and time-consuming steps in the authentication process that participants performed. Our study raises awareness of these difficulties and reveals the limitations of current authentication experiences to the security community. We discuss implications for designing accessible authentication experiences for people with visual impairments.

Friday, July 24

Authentication Methods

Where Have You Been? Using Location-Based Security Questions for Fallback Authentication

Alina Hang and Alexander De Luca, University of Munich; Matthew Smith, University of Bonn; Michael Richter and Heinrich Hussmann, University of Munich

In this paper, we propose and evaluate the combination of location-based authentication with security questions as a more usable and secure fallback authentication scheme. A four-week user study with an additional evaluation after six months was conducted to test the feasibility of the concept in the context of long-term fallback authentication. The results show that most users are able to recall the locations asked about in their security questions to within a distance of 30 meters, while potential adversaries are poor at guessing the answers even after performing Internet research. After four weeks, our approach yields an accuracy of 95% and reaches, after six months, a value of 92%. In both cases, none of the adversaries were able to attack users successfully.
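
Whether a recalled location counts as correct can be decided by a simple distance check against the stored answer, using the 30-meter tolerance reported above. The snippet below is only an illustration (not the authors' system) based on the standard haversine formula; the coordinates are made up.

    # Sketch: accept a recalled location if it lies within 30 m of the stored answer.
    # Uses the haversine great-circle distance; coordinates are illustrative.
    from math import radians, sin, cos, asin, sqrt

    EARTH_RADIUS_M = 6371000

    def haversine_m(lat1, lon1, lat2, lon2):
        phi1, phi2 = radians(lat1), radians(lat2)
        dphi, dlmb = radians(lat2 - lat1), radians(lon2 - lon1)
        a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlmb / 2) ** 2
        return 2 * EARTH_RADIUS_M * asin(sqrt(a))

    stored = (48.1374, 11.5755)     # location tied to the security question
    recalled = (48.1376, 11.5757)   # location the user points to during fallback

    print(haversine_m(*stored, *recalled) <= 30)   # True -> answer accepted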

The Impact of Cues and User Interaction on the Memorability of System-Assigned Recognition-Based Graphical Passwords

Mahdi Nasrullah Al-Ameen, Kanis Fatema, Matthew Wright, and Shannon Scielzo, The University of Texas at Arlington

User-chosen passwords reflecting common strategies and patterns ease memorization, but offer uncertain and often weak security. System-assigned passwords provide higher security, and thus in commercially deployed graphical-password systems (e.g., Passfaces), images are randomly assigned by the system. It is difficult, however, for many users to remember system-assigned passwords. We argue that this is because existing password schemes do not fully leverage humans' cognitive strengths, and we thus examine techniques to enhance password memorability that incorporate scientific understanding of long-term memory. In our study, we examine the efficacy of spatial cues (fixed position of images), verbal cues (phrases/facts related to the images), and employing user interaction (learning images through writing a short description at registration) to improve the memorability of passwords based on face images and object images. We conducted a multi-session in-lab user study with 56 participants, where each participant was assigned seven different graphical passwords, each representing one study condition. One week after registration, participants had a 98% login success rate for a scheme offering spatial and verbal cues, while the scheme based on user interaction had a 95% login success rate for face images and a 93% login success rate for object images. All of these were significantly higher than the control conditions representing existing graphical password schemes. These findings contribute to our understanding of the impact of cues and user interaction on graphical passwords, and they show a promising direction for future research to gain high memorability for system-assigned random passwords.

On the Memorability of System-generated PINs: Can Chunking Help?

Jun Ho Huh, Honeywell ACS Labs; Hyoungshick Kim, Sungkyunkwan University; Rakesh B. Bobba, Oregon State University; Masooda N. Bashir, University of Illinois at Urbana-Champaign; Konstantin Beznosov, University of British Columbia

To ensure that users do not choose weak personal identification numbers (PINs), many banks give out system-generated random PINs. Four digits is the most commonly used PIN length, but 6-digit system-generated PINs are also becoming popular. The increased security we get from using system-generated PINs, however, comes at the cost of memorability. And while banks are increasingly adopting system-generated PINs, the impact of such PINs on memorability has not been studied.

We conducted a large-scale online user study with 9,114 participants to investigate the impact of increased PIN length on the memorability of PINs, and whether number chunking techniques (breaking a single number into multiple smaller numbers) can be applied to improve memorability for longer PINs. As one would expect, our study shows that system-generated 4-digit PINs outperform 6-, 7-, and 8-digit PINs in long-term memorability. Interestingly, however, we find that there is no statistically significant difference in memorability between 6-, 7-, and 8-digit PINs, indicating that 7- and 8-digit PINs should also be considered when looking to increase PIN length beyond the currently common 4 digits for improved security.

By grouping all 6-, 7-, and 8-digit chunked PINs together, and comparing them against a group of all non-chunked PINs, we find that chunking, overall, improves memorability of system-generated PINs. To our surprise, however, none of the individual chunking policies (e.g., 0000-00-00) showed statistically significant improvement over their peer non-chunked policies (e.g., 00000000), indicating that chunking may only have a limited impact. Interestingly, the top performing 8-digit chunking policy did show noticeable and statistically significant improvement in memorability over shorter 7-digit PINs, indicating that while chunking has the potential to improve memorability, more studies are needed to understand the contexts in which that potential can be realized.
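
To make the chunking idea concrete, the sketch below (not the study's software) assigns a random 8-digit PIN and displays it both unchunked and under the 0000-00-00 policy mentioned above.

    # Sketch: system-assigned 8-digit PIN shown unchunked vs. under a 0000-00-00 policy.
    # secrets.randbelow gives uniformly random digits; leading zeros are preserved.
    import secrets

    def assign_pin(length=8):
        return "".join(str(secrets.randbelow(10)) for _ in range(length))

    def chunk(pin, sizes):                 # sizes describes the chunking policy
        out, i = [], 0
        for size in sizes:
            out.append(pin[i:i + size])
            i += size
        return "-".join(out)

    pin = assign_pin()
    print("unchunked:", pin)                     # e.g. 40917286
    print("chunked:  ", chunk(pin, (4, 2, 2)))   # e.g. 4091-72-86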

Evaluating the Effectiveness of Using Hints for Autobiographical Authentication: A Field Study

Yusuf Albayram and Mohammad Maifi Hasan Khan, University of Connecticut

To address the limitations of static challenge-question-based authentication mechanisms, smartphone-based autobiographical authentication mechanisms have recently been explored, in which challenge questions are generated dynamically from users' day-to-day activities captured by their smartphones. However, users' poor recall rates in such systems remain a significant problem that negatively affects their usability. To address this challenge, this paper investigates the possibility of using hints that may help users recall recent day-to-day events more easily, and explores various design alternatives for generating hints. Specifically, in this paper, we generate challenge questions and hints for three different kinds of autobiographical data (call logs, SMS logs, and location logs), and evaluate the effect of different question types and hint types on user performance by conducting a real-life study with 24 users over a 30-day period. To test whether hints are useful or harmful for adversaries' response accuracy, we simulate various kinds of adversaries (e.g., naive and knowledgeable) by recruiting volunteers in pairs (e.g., close friends, significant others). In our study, we observed that, for legitimate users, hints were effective for all question types. Interestingly, we found that hints had a negative effect on strong adversarial users and no significant effect on the performance of naive adversarial users.
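
As an illustration of how a challenge question and a partial hint might be derived from a single day-to-day log entry, consider the hypothetical example below; the field names and hint format are our assumptions, not the authors' design.

    # Hypothetical example: build a challenge question and a partial-information hint
    # from one call-log entry captured on the user's phone.
    call = {"contact": "Dana", "direction": "outgoing", "day": "Tuesday", "hour": 14}

    question = f"Whom did you call on {call['day']} afternoon?"
    hint = f"The contact's name starts with '{call['contact'][0]}'."

    print(question)
    print(hint)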

Mobile Privacy and Security

Usability and Security Perceptions of Implicit Authentication: Convenient, Secure, Sometimes Annoying

Hassan Khan, Urs Hengartner, and Daniel Vogel, University of Waterloo

Implicit authentication (IA) uses behavioural biometrics to provide continuous authentication on smartphones. IA has been advocated as more usable than traditional explicit authentication schemes, albeit with some security limitations. Consequently, researchers have proposed IA as a middle ground for people who do not use traditional authentication due to its usability limitations, or as a second line of defence for users who already use authentication. However, there is a lack of empirical evidence establishing the usability superiority of IA and how its security is perceived. We report on the first extensive two-part study (n = 37), consisting of a controlled lab experiment and a field study, to gain insights into usability and security perceptions of IA. Our findings indicate that 91% of participants found IA to be convenient (26% more than the explicit authentication schemes tested) and 81% perceived the provided level of protection to be satisfactory. While this is encouraging, false rejects with IA were a source of annoyance for 35% of the participants, and false accepts and detection delay were prime security concerns for 27% and 22% of the participants, respectively. We point out these and other barriers to the adoption of IA and suggest directions to overcome them.
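
Conceptually, an IA scheme maintains a running confidence score from behavioural signals and escalates to explicit authentication only when that score drops too low; the false rejects and detection delay discussed above correspond to the threshold and window choices in a loop like the hypothetical sketch below.

    # Hypothetical IA loop: average recent behavioural similarity scores and
    # fall back to explicit authentication when confidence drops below a threshold.
    from collections import deque

    WINDOW, THRESHOLD = 10, 0.6            # illustrative tuning knobs

    recent = deque(maxlen=WINDOW)

    def on_behaviour_sample(similarity_to_owner_profile):
        recent.append(similarity_to_owner_profile)
        confidence = sum(recent) / len(recent)
        if confidence < THRESHOLD:
            return "lock: require PIN/password"   # false rejects happen here
        return "stay unlocked"

    print(on_behaviour_sample(0.9))   # stay unlocked
    print(on_behaviour_sample(0.2))   # lock: require PIN/password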

Understanding the Inconsistencies between Text Descriptions and the Use of Privacy-sensitive Resources of Mobile Apps

Takuya Watanabe, Waseda University; Mitsuaki Akiyama, NTT Secure Platform Labs; Tetsuya Sakai, Hironori Washizaki, and Tatsuya Mori, Waseda University

Permission warnings and privacy policy enforcement are widely used to inform mobile app users of privacy threats. These mechanisms disclose information about an app's use of privacy-sensitive resources such as user location or the contact list. However, it has been reported that very few users pay attention to these mechanisms during installation. Instead, a user may focus on a more user-friendly source of information: the text description, which is written by a developer who has an incentive to attract user attention. When a user searches for an app in a marketplace, his/her query keywords are generally matched against the text descriptions of mobile apps. Then, users review the search results, often by reading the text descriptions; i.e., text descriptions are associated with user expectations. Given these observations, this paper aims to address the following research question: what are the primary reasons that text descriptions of mobile apps fail to refer to the use of privacy-sensitive resources? To answer the research question, we performed an empirical large-scale study of a huge volume of apps with our ACODE (Analyzing COde and DEscription) framework, which combines static code analysis and text analysis. We developed lightweight techniques so that we can handle hundreds of thousands of distinct text descriptions. We note that our text analysis technique does not require manually labeled descriptions; hence, it enables us to conduct a large-scale measurement study without requiring expensive labeling tasks. Our analysis of 200,000 apps and multilingual text descriptions collected from official and third-party Android marketplaces revealed four primary factors associated with the inconsistencies between text descriptions and the use of privacy-sensitive resources: (1) the existence of app-building services/frameworks that tend to add API permissions and code unnecessarily, (2) the existence of prolific developers who publish many applications that unnecessarily request permissions and include code, (3) the existence of secondary functions that tend to go unmentioned, and (4) the existence of third-party libraries that access privacy-sensitive resources. We believe that these findings will be useful for improving users’ awareness of privacy on mobile software distribution platforms.
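
The underlying consistency check can be pictured as comparing an app's declared permissions against the vocabulary of its description. The toy sketch below is far simpler than the ACODE framework (which combines static code analysis with text analysis) and is ours, not the authors'; the keyword lists are illustrative assumptions.

    # Toy description-vs-permission consistency check (much simpler than ACODE).
    # Flags declared privacy-sensitive permissions that the description never hints at.
    RESOURCE_KEYWORDS = {                       # illustrative keyword lists
        "ACCESS_FINE_LOCATION": {"location", "map", "gps", "nearby"},
        "READ_CONTACTS": {"contact", "friend", "address book", "invite"},
    }

    def undisclosed_resources(description, declared_permissions):
        text = description.lower()
        return [perm for perm in declared_permissions
                if perm in RESOURCE_KEYWORDS
                and not any(kw in text for kw in RESOURCE_KEYWORDS[perm])]

    desc = "A simple flashlight with adjustable brightness."
    perms = ["ACCESS_FINE_LOCATION", "READ_CONTACTS"]
    print(undisclosed_resources(desc, perms))   # both permissions go unmentioned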

On the Impact of Touch ID on iPhone Passcodes

Ivan Cherapau, Ildar Muslukhov, Nalin Asanka, and Konstantin Beznosov, University of British Columbia

Smartphones today store large amounts of data that can be confidential, private, or sensitive. To protect such data, all mobile OSs have a phone lock mechanism, which requires user authentication before granting access to applications and data on the phone. The iPhone’s unlocking secret (a.k.a. passcode in Apple’s terminology) is also used to derive a key for encrypting data on the device. Recently, Apple introduced Touch ID, which allows fingerprint-based authentication to be used for unlocking an iPhone. The intuition behind the technology was that its convenience would allow users to adopt stronger passcodes for locking their iOS devices without substantially sacrificing usability. To date, however, it has been unclear whether users take advantage of Touch ID and whether they indeed employ stronger passcodes. The main objective and contribution of this paper is to fill this knowledge gap. To answer this question, we conducted three user studies: (a) an in-person survey with 90 participants, (b) interviews with 21 participants, and (c) an online survey with 374 Amazon Mechanical Turk workers. Overall, we found that users do not take advantage of Touch ID and use weak unlocking secrets, mainly 4-digit PINs, similar to users who do not use Touch ID. To our surprise, we found that more than 30% of the participants in each group did not know that they could use passwords instead of 4-digit PINs. Some other participants indicated that they adopted PINs due to better usability in comparison to passwords. Most of the participants agreed that Touch ID does offer usability benefits, such as convenience, speed, and ease of use. Finally, we found that there is a disconnect between the security users want their passcodes to provide and the reality. In particular, only 12% of participants correctly estimated the security their passcodes provide.
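
The gap between perceived and actual passcode security is easy to quantify: a 4-digit PIN offers only 10,000 possibilities, while even a short alphanumeric passcode offers vastly more. The back-of-the-envelope comparison below is illustrative and not taken from the paper.

    # Back-of-the-envelope keyspace comparison for iPhone unlocking secrets.
    # Assumes uniformly random secrets; real user choices are far more skewed.
    import math

    spaces = {
        "4-digit PIN": 10 ** 4,
        "6-digit PIN": 10 ** 6,
        "8-char alphanumeric (a-z, 0-9)": 36 ** 8,
    }

    for name, size in spaces.items():
        print(f"{name}: {size:,} possibilities (~{math.log2(size):.1f} bits)")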

Learning Assigned Secrets for Unlocking Mobile Devices

Stuart Schechter, Microsoft; Joseph Bonneau, Stanford University and Electronic Frontier Foundation

Nearly all smartphones and tablets support unlocking with a short user-chosen secret: e.g., a numeric PIN or a pattern. To address users’ tendency to choose guessable PINs and patterns, we compare two approaches for helping users learn assigned random secrets. In one approach, built on our prior work, we assign users a second numeric PIN and, during each login, we require them to enter it after their chosen PIN. In a new approach, we re-arrange the digits on the keypad so that the user’s chosen PIN appears on an assigned random sequence of key positions. We performed experiments with over a thousand participants to compare these two repetition-learning approaches to simple user-chosen PINs and assigned PINs that users are required to learn immediately at account set-up time. Almost all of the participants using either repetition-learning approach learned their assigned secrets quickly and could recall them three days after the study. Those using the new mapping approach were less likely to write down their secret. Surprisingly, the learning process was less time consuming for those required to enter an extra PIN.
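
One way to picture the keypad re-arrangement (our sketch, not the authors' exact implementation): each distinct digit of the chosen PIN is pinned to a randomly assigned key position, the remaining digits fill the leftover positions, and the user learns the resulting sequence of positions.

    # Illustrative keypad remapping: place the chosen PIN's digits at randomly
    # assigned key positions (0-9), so the user can learn the position sequence.
    import random

    def remap_keypad(chosen_pin):
        digits = list("0123456789")
        positions = list(range(10))
        random.shuffle(positions)

        layout = [None] * 10                     # layout[position] = digit shown there
        assigned = {}                            # digit -> its pinned position
        for d in dict.fromkeys(chosen_pin):      # distinct PIN digits, in order
            assigned[d] = positions.pop()
            layout[assigned[d]] = d
            digits.remove(d)

        random.shuffle(digits)                   # remaining digits fill leftover slots
        for pos, d in zip(positions, digits):
            layout[pos] = d

        position_sequence = [assigned[d] for d in chosen_pin]
        return layout, position_sequence

    layout, seq = remap_keypad("2468")
    print("keypad layout:", layout)              # digit shown at each key position
    print("positions to learn:", seq)            # the assigned spatial secret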

Security Experience

Security Practices for Households Bank Customers in the Kingdom of Saudi Arabia

Deena Alghamdi, Ivan Flechais, and Marina Jirotka, University of Oxford

Banking security is an instance of a socio-technical system, where technology and customers’ practices need to work in harmony for the overall system to achieve its intended aims. While the technology of banking security is of interest, our study focuses on exploring the specific practices of household bank customers in the Kingdom of Saudi Arabia (KSA). The findings describe some practices of household customers and reveal some of the reasons behind them. Contrary to banking policy, sharing bank authentication credentials appears to be a common practice for our participants, and a number of different reasons are presented: trust, driving restrictions, the esteem placed in parents, and the ‘need to know’ this information. On the other hand, some participants consider credentials to be private information and do not share them, although other participants view this as a sign of distrust. Implications of such practices for the Saudi banking system are outlined and discussed.

Too Much Knowledge? Security Beliefs and Protective Behaviors Among United States Internet Users

Rick Wash and Emilee Rader, Michigan State University

Home computers are frequently the target of malicious attackers because they are usually administered by non-experts. Prior work has found that users who make security decisions about their home computers often possess different mental models of information security threats, and use those mental models to make decisions about security. Using a survey, we asked a large representative sample of United States Internet users about different causal beliefs related to computer security, and about the actions they regularly undertake to protect their computers. We found demographic differences in both beliefs about security and security behaviors that pose challenges for helping users become more informed about security. Many participants reported weakly held beliefs about viruses and hackers, and these participants were the least likely to say they take protective actions. These results suggest that not all security knowledge is the same, that educating users about security is not simply a more-is-better issue, and that not all users should receive the same messages.

“...No one Can Hack My Mind”: Comparing Expert and Non-Expert Security Practices

Iulia Ion, Rob Reeder, and Sunny Consolvo, Google

The state of advice given to people today on how to stay safe online has plenty of room for improvement. Too many things are asked of them, some of which may be unrealistic, time consuming, or not really worth the effort. To improve security advice, our community must find out what practices people use and what recommendations, if messaged well, are likely to bring the highest benefit while being realistic to ask of people. In this paper, we present the results of a study that aims to identify which practices people consider most important for protecting their security online. We compare self-reported security practices of non-experts to those of security experts (i.e., participants who reported having five or more years of experience working in computer security). We report on the results of two online surveys—one with 231 security experts and one with 294 MTurk participants—on the practices and attitudes of each group. Our findings show a discrepancy between the security practices that experts and non-experts report taking. For instance, while experts most frequently report installing software updates, using two-factor authentication, and using a password manager to stay safe online, non-experts report using antivirus software, visiting only known websites, and changing passwords frequently.

A Human Capital Model for Mitigating Security Analyst Burnout

Sathya Chandran Sundaramurthy, Alexandru G. Bardas, Jacob Case, Xinming Ou, and Michael Wesch, Kansas State University; John McHugh, RedJack, LLC.; S. Raj Rajagopalan, Honeywell ACS Labs

Security Operation Centers (SOCs) are operated by universities, government agencies, and corporations to defend their enterprise networks in general and, in particular, to identify malicious behaviors in both networks and hosts. The success of a SOC depends on having the right tools, processes, and, most importantly, efficient and effective analysts. One of the worrying issues in recent times has been the consistently high burnout rate of security analysts in SOCs. Burnout results in analysts making poor judgments when analyzing security events, as well as frequent personnel turnover. In spite of high awareness of this problem, little has been known so far about the factors leading to burnout. Various coping strategies employed by SOC management, such as career progression, do not seem to address the problem but rather deal only with the symptoms. In short, burnout is a manifestation of one or more underlying issues in SOCs that are as yet unknown. In this work we performed an anthropological study of a corporate SOC over a period of six months and identified concrete factors contributing to the burnout phenomenon. We use Grounded Theory to analyze our fieldwork data and propose a model that explains the burnout phenomenon. Our model indicates that burnout is a human capital management problem resulting from the cyclic interaction of a number of human, technical, and managerial factors. Specifically, we identified multiple vicious cycles connecting the factors affecting the morale of the analysts. In this paper we provide detailed descriptions of the various vicious cycles and suggest ways to turn these cycles into virtuous ones. We further validated our results against the field notes from a SOC at a higher-education institution. The proposed model is able to successfully capture and explain the burnout symptoms in this other SOC as well.
