Business Collaboration Platforms (BCPs), like Microsoft Teams and Slack, have become indispensable collaboration and productivity tools in remote work environments. Their widespread use means that a great deal of sensitive information passes through them. At the same time, these platforms are often integrated with many third-party apps. We examine the security of the access control system that BCPs enforce on these apps and find a number of design flaws that allow malicious apps to violate the confidentiality and integrity of various resources in BCPs.
The Problem
Beyond basic multi-user chat capabilities, modern BCPs feature a vibrant third-party app ecosystem, where users can install apps that connect to their data in other services and provide additional productivity-enhancing functionality. For example, users can start video calls with Zoom, upload files to Dropbox, or manage code repositories, all from within their BCP workspace (a virtual space that hosts communications for a group of users).
Therefore, a BCP's workspace not only hosts private communications among users but also serves as a hub for sensitive resources from third-party services. Natural questions follow: can a BCP correctly govern which permissions its apps hold and which resources they may access? And does this emerging class of multi-user systems have distinctive characteristics that affect its security and privacy properties compared to traditional app platforms?
We contribute to the understanding of these questions by performing an experimental analysis of the third-party app model in BCPs. In particular, we focus on Microsoft Teams and Slack, the two most popular BCPs among businesses according to a recent survey [1], both of which offer extensive directories of officially approved third-party apps.
Analyzing BCPs is challenging because these systems, including their apps, are closed-source. Specifically, apps themselves are remotely-hosted web services whose endpoints are only known to the BCP. This precludes classical analysis techniques such as source code analysis or API endpoint testing. As an external party, we can only interact with apps the way a human user would -- through the BCP itself. Therefore, we anchored our study on the interactions between apps and users, by examining what the BCP's access control model is for each type of interaction and how an attacker may potentially violate it.
We have found that both Slack and Teams apply a two-level permission system to control the resources a BCP app can access. However, their systems exhibit design-level vulnerabilities that violate security principles like least privilege and complete mediation, and hence cannot adequately confine the behavior of a malicious app. This often leads to privilege escalation or to violations of the confidentiality and integrity guarantees of private chat messages and of third-party resources connected to BCPs. To demonstrate the concrete harms posed to end users, we describe the vulnerability for each type of interaction in BCPs, along with proof-of-concept attacks and prevalence measurements.
The process of installing a BCP app into your workspace is similar to installing apps on other OAuth-based or mobile app platforms. When the user clicks on the installation link, the app requests a number of permission scopes from the platform, each representing a certain set of capabilities, and the user is greeted with an authorization prompt detailing these capabilities (as shown in the picture below; you have likely encountered similar prompts before).
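To make the scope-request step concrete, the snippet below sketches how a Slack app's install link is typically constructed against Slack's OAuth v2 authorize endpoint. The client ID, redirect URI, and the particular scopes are placeholders for illustration, not taken from any real app.

```python
# Illustrative only: constructing a Slack OAuth v2 install link that requests
# a bot scope and a user scope. The client_id and redirect_uri are placeholders.
from urllib.parse import urlencode

params = {
    "client_id": "1234567890.000000000000",   # placeholder app client ID
    "scope": "commands,chat:write",           # bot token scopes the app asks for
    "user_scope": "chat:write",               # user token scope (act as the user)
    "redirect_uri": "https://example.com/slack/oauth/callback",
}
install_url = "https://slack.com/oauth/v2/authorize?" + urlencode(params)
print(install_url)
```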
Once installed, a BCP app may act in three roles: 1) as a workspace feature provider that, for example, offers user-invokable actions through customized slash commands, such as /zoom to initiate a new Zoom meeting; 2) as an interactive bot that can chat with other users or be invited to channels (i.e., multi-user chat groups); 3) as a user delegate that performs actions, such as sending messages or reacting with emoji, on behalf of users.
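As a rough illustration of these three roles on Slack, the sketch below uses the official slack_bolt and slack_sdk Python libraries; the tokens, channel ID, and handler logic are placeholders rather than code from any studied app.

```python
# A minimal sketch (not from the paper) of the three app roles on Slack.
# Tokens are read from the environment; the channel ID is a placeholder.
import os
from slack_bolt import App
from slack_sdk import WebClient

app = App(token=os.environ["SLACK_BOT_TOKEN"],
          signing_secret=os.environ["SLACK_SIGNING_SECRET"])

# Role 1: workspace feature provider -- a custom slash command.
@app.command("/zoom")
def start_meeting(ack, respond):
    ack()
    respond("Starting a Zoom meeting...")   # app-defined behavior

# Role 2: interactive bot -- replies when a user messages it.
@app.message("help")
def reply(message, say):
    say(f"Hi <@{message['user']}>, how can I help?")

# Role 3: user delegate -- posts a message as the authorizing user,
# using a user token obtained with the chat:write user scope.
user_client = WebClient(token=os.environ["SLACK_USER_TOKEN"])
user_client.chat_postMessage(channel="C0123456789", text="Posted on my behalf")
```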
However, unlike other app platforms that have been well studied [2, 3, 5], little work has examined the BCP access control model, despite it having characteristics that lead to more severe security and privacy concerns. First, there is no selective toggling of permissions, so users must accept all requested permissions if they want to use an app. Second, the installation of a new app is almost imperceptible to other users (unless they manually check the app list on a regular basis), even though it can have workspace-wide consequences. Third, the app is entirely hosted on a remote server managed by its developer, preventing the BCP or users from inspecting the app's source code and allowing the app to change its behavior at will. Finally, BCPs have lenient default app policies. Although workspace administrators can limit who is allowed to install apps and which apps can be installed, the default in both Slack and Teams is that any user can install any app from any source.
We study the access control models of Slack and Teams to identify potential security design issues. At a high level, both platforms base their access control on a two-level permission system. An example of how this permission system works is shown in the figure below.
Level 1: static permission scopes. An app must first declare a set of permission scopes it requires, with each scope representing the permission to perform a type of action. For example, Slack's groups:history bot token scope is associated with the ability to read private channel messages, while its chat:write user token scope allows sending and modifying messages as the user. These scopes are static, in the sense that they are predefined based on the BCP's perceived categorization of workspace resources, and may not align with the user's desired security policies, which can vary across workspaces and evolve over time.
Level 2: runtime policies. To compensate for the limitations of permission scopes, both BCPs implement a number of runtime policies that determine which instances of a resource type the app can access, based on various conditions. Users can usually control these conditions to express their desired security policies. For example, users can control which private channels' messages an app (holding the prerequisite permission scope) can view by inviting the app only to specific channels.
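The sketch below illustrates, using placeholder tokens and channel IDs, how the two levels interact on Slack: the bot token carries the level-1 groups:history scope, but the call still fails unless the level-2 runtime policy (channel membership) is satisfied. The exact error code may vary.

```python
# A sketch of the two permission levels on Slack: holding the groups:history
# bot scope (level 1) is not enough on its own -- the bot must also have been
# invited to the specific private channel (level 2, a runtime policy).
import os
from slack_sdk import WebClient
from slack_sdk.errors import SlackApiError

bot = WebClient(token=os.environ["SLACK_BOT_TOKEN"])  # token has groups:history

try:
    history = bot.conversations_history(channel="G0123456789", limit=10)
    for msg in history["messages"]:
        print(msg.get("text"))
except SlackApiError as e:
    # Without channel membership the call fails even though the scope is granted.
    print("Runtime policy blocked the read:", e.response["error"])
```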
In spite of this two-level design, we uncover two issues in this system that violate basic security principles. First, the runtime policies are ad hoc and incomplete, so not all user security policies can be correctly expressed. Not only do the policies differ between the two BCPs, but even within the same BCP there are often inconsistencies between the runtime policies of similar types of resources. The incompleteness of runtime policies leads to coarse-grained access control, violating the principle of least privilege. Second, the ownership or provenance of some resources is not properly tracked or enforced. This frequently happens when a user delegates an app to create resources. For example, Teams does not differentiate between messages sent by a real user and those sent by a delegated app. We find that the absence of such tracking results in privilege escalation.
We perform an experimental security analysis to study how a malicious app can exploit these two design issues in Slack's and Teams' permission systems. Specifically, we systematically examine the three types of interactions that a malicious app may have with other entities in the workspace and check whether each interaction involves resources that have incomplete runtime policies or suffer from improper provenance tracking.
Our threat model assumes that the attacker targets a BCP workspace with a number of users and already-installed apps, and that the attacker has tricked one of the users (the victim) into installing an app under the attacker's control. We believe this is a realistic scenario, because 1) an app can easily masquerade as another legitimate app during installation by copying its publicly available manifest; 2) the attacker can be a curious insider who wants to gain information they cannot otherwise access; and 3) a previously installed benign app may be compromised and turn malicious. In addition, since we focus on the design of the permission systems, we assume that the BCPs themselves are free of implementation-level exploits.
1. App-to-App Delegation Attacks
One of the core functionalities provided by BCP apps is chatting with users: apps present themselves as interactive bots so that human users can send them direct text messages and instruct them to perform certain tasks. Meanwhile, BCPs also allow apps to perform delegated actions for human users. For example, Dropbox's Slack app uses the chat:write user token scope to share files in channels on behalf of users.
These two functionalities, when combined, enable app-to-app interactions: an app that has the delegated permission to send messages as a user can interact with another app's bot. While such interaction can be deemed beneficial and essential to productivity [4], it also has severe security implications. When the former app turns malicious, it can potentially invoke actions from the latter app without direct user approval, and such actions might affect data in the user's connected third-party account. We refer to attacks exploiting this vulnerability as delegation attacks.
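The following is a hedged sketch (assuming Slack, placeholder IDs, and a user token that also carries the scope needed to open a DM, e.g. im:write) of how a malicious app could message another app's bot while appearing to be the victim user.

```python
# A hedged proof-of-concept sketch of a delegation attack: a malicious app
# holding the victim's user token messages another app's bot as if the victim
# had typed the command themselves. Bot user ID and command text are placeholders.
import os
from slack_sdk import WebClient

user_client = WebClient(token=os.environ["SLACK_USER_TOKEN"])  # victim's user token

# Open (or fetch) the victim user's DM channel with the target app's bot.
dm = user_client.conversations_open(users="U0VICTIMBOT")
channel_id = dm["channel"]["id"]

# The message arrives attributed to the human user, so a bot that does not
# check provenance may act on it, e.g. touching the user's connected account.
user_client.chat_postMessage(channel=channel_id, text="share report.pdf with #general")
```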
Example of vulnerable apps
Not all apps are susceptible to delegation attacks. Some apps directly reject text messages sent by another app, even when the latter acts as a user delegate. However, we observe that Teams apps are more vulnerable to this type of attack, since Teams does not tell the receiving app whether a message comes from a real user or a delegated app -- a case of improper provenance tracking. One particularly security-critical example we find is Bitbucket's Teams app: a malicious app can instruct it to merge pull requests in any connected repository, without requiring any input from users. If the repository is public, the attacker can even submit its own malicious pull requests and have them merged, leading to code poisoning or backdoor injection.
Potential prevalence
We find that 427 (33%) of the Teams apps in the official directory are vulnerable to this attack, meaning that they execute actions based on delegated messages. For Slack, we cannot pinpoint the exact number of vulnerable apps, as many require third-party accounts to function. We do find that 1,493 Slack apps (61%) in the official directory request at least one read-related scope, implying that they subscribe to events in the workspace and thus might be affected by the attack.
2. User-to-App Interaction Hijacking
BCP apps can also customize how users interact with them and with workspace features. For example, an app can introduce new "slash commands" into a workspace; thanks to this feature, one can start a Zoom video call by simply entering /zoom. Another example is the "link unfurl" feature, which allows an app to append a preview to certain website URLs in chat messages. However, we discover that a malicious app can interfere when a user attempts to interact with a benign app, a problem similar to DNS domain squatting.
Slash Command Hijacking
In Slack's user-to-app interactions, all apps' slash commands share a single namespace, creating the potential for name collisions. A malicious app can hijack another app's commands, responding in the victim app's stead to any user that tries to invoke the hijacked command. We demonstrate this command hijacking attack on Zoom's Slack app. We create a malicious app that masquerades as the official Zoom app. At installation time, our malicious app requests the commands scope to implement a benign command called /foo. Once installed, we rename this command to /zoom, hijacking the official /zoom command. After that, the malicious app uses the attacker's Zoom account to start meetings every time a user invokes /zoom. The renaming of commands is imperceptible to users, as it does not require any reinstallation or reauthorization of the app.
Link Unfurling Hijacking
Teams allows an app to provide customized link unfurling for an authorized user. The app registers a domain in its manifest; after that, whenever the user posts a URL under this domain, the app can attach a rich message card containing text, images, or even interactive buttons. Such unfurled content can be hijacked much like Slack's slash commands: a malicious app can register the same domain as the victim app and, if it is installed after the victim app, its unfurled content will be displayed instead of the victim app's. Moreover, the malicious app can masquerade as the victim app to further deceive the user, as its name and icon also appear as part of the unfurled content.
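For illustration, the fragment below (rendered as a Python dict; the actual Teams app manifest is JSON) shows the messageHandlers capability an app uses to claim link unfurling for a domain. The bot ID and domain are placeholders.

```python
# One entry of a Teams manifest's composeExtensions array, shown as a Python
# dict for illustration. A second, malicious app can register the same domain
# and, if installed later, its unfurl card is shown instead of the victim's.
compose_extension = {
    "botId": "00000000-0000-0000-0000-000000000000",  # placeholder bot/app ID
    "messageHandlers": [
        {
            "type": "link",
            "value": {
                "domains": ["www.example.com"]  # placeholder domain to unfurl
            },
        }
    ],
}
```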
Potential prevalence
In Slack, this slash command attack exploits only the commands scope, which is requested by 1,266 apps (51.5%). These apps can immediately overwrite each other's commands to hijack their standard workflows. We also find that many apps in the Slack App Directory already have conflicting commands: 270 apps register commands used by other apps. In Teams, the link unfurl attack relies on the messageHandlers capability, which is requested by 77 apps (5.9%).
3. App-to-User Confidentiality Violations
We analyze the different ways in which BCP apps interact with user messages. Our main discovery is that an attacker can leak messages from private channels without having the permission to read those channels. We note that this vulnerability exists only in Slack, as the features the attack relies on do not exist in Teams -- a classic trade-off between functionality and security.
Unauthorized message extraction
We present a powerful attack that achieves such privilege escalation. Slack provides a public URL for every message in a workspace. This URL, when accessed directly, only shows the message if a proper login credential is provided. However, when the URL is posted to the user's personal channel, Slack automatically unfurls (or expands) it to show the message content, provided that the user has access to it. While the user does indeed have access to the message, an app that can access the user's personal channel may not (for example, reading this personal channel requires im:history, whereas reading multi-user private channels requires groups:history). The app can abuse this discrepancy to read messages that it otherwise could not. By repeatedly obtaining the URLs of unauthorized messages (see our full paper for details) and posting them, the app can effectively eavesdrop on all private channels that the user has joined. The flow of this attack is illustrated below. We also find other attacks that achieve the same level of privilege escalation using a different set of permissions, such as pinning or emoji-reacting to messages.
Potential prevalence
Fortunately, there are not many existing apps today capable of launching this attack. Out of all 1,640 apps that do not request explicit scopes to read private channels (namely, groups:history), we counted only 11 apps with the necessary permissions to extract unauthorized messages.
The attack classes we identified exist because the BCP permission model violates classic security principles. During our disclosure to Microsoft and Slack, Slack acknowledged the issue of unauthorized message extraction, but the vendors did not consider the other attacks to meet their definitions of a security vulnerability, due to their view of the workspace as a trusted environment. We therefore discuss potential countermeasures and to what extent they would mitigate the attacks.
- Finer-grained Scopes. Slack and Teams define several coarse-grained scopes that each govern multiple resources of different types. For example, both BCPs have scopes allowing an app to send messages to any target under the identity of the authorizing user. Such scopes should be broken down, e.g., into one scope that allows sending messages to non-app targets and another that allows sending messages to app targets.
- Stricter Runtime Policy Checks. Stricter checks can help address the message extraction attacks found in Slack. Specifically, Slack first needs to fix its coarse-grained modeling of the message resources by decoupling the unfurled content from the message and treating it as a separate type of resource. Slack also needs to track the origin of the unfurled content, for example, whether it is a message from another channel or a file shared with the user.
- Indicate Identity of Action Issuer. To counter delegation attacks, the victim app should be able to determine whether a received event comes from a human or from an app impersonating a user, and decide whether to respond. BCPs should therefore indicate the identity of the action issuer (i.e., whether a real or delegated user performed the action), enabling identity checks on the victim app's side (a sketch of such a check follows this list).
- Explicit User Confirmation. From the perspective of victim users, all attacks stem from the fact that either victim apps or the BCPs automatically react to malicious events in unwanted ways. Therefore, before accessing sensitive data, both apps and BCPs should prompt the user for confirmation, for example through a consent popup that requires clicking a button (a minimal example also follows this list).
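The check below sketches what the proposed issuer indication could enable on the victim app's side. The issuer field is hypothetical: today neither Slack nor Teams exposes such provenance metadata, which is exactly the gap this countermeasure targets.

```python
# A sketch of the proposed issuer check on the receiving (victim) app's side.
# The "issuer" field is hypothetical and does not exist in current BCP events.
def should_handle(event: dict) -> bool:
    issuer = event.get("issuer", {})          # hypothetical provenance metadata
    if issuer.get("type") == "delegated_app":
        # Ignore (or require confirmation for) actions issued by an app acting
        # on a user's behalf rather than by the human user directly.
        return False
    return True
```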
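And here is a minimal sketch of an explicit-confirmation step for a Slack app, using Block Kit buttons via slack_bolt; the trigger text, block contents, and action_id are illustrative choices of ours.

```python
# Before acting on a sensitive request, post a button the human user must click.
import os
from slack_bolt import App

app = App(token=os.environ["SLACK_BOT_TOKEN"],
          signing_secret=os.environ["SLACK_SIGNING_SECRET"])

@app.message("merge")
def confirm_first(message, say):
    say(
        text="Please confirm this action",
        blocks=[
            {"type": "section",
             "text": {"type": "mrkdwn", "text": "Merge this pull request?"}},
            {"type": "actions",
             "elements": [{"type": "button",
                           "text": {"type": "plain_text", "text": "Confirm"},
                           "action_id": "confirm_merge"}]},
        ],
    )

@app.action("confirm_merge")
def do_merge(ack, respond):
    ack()
    respond("Merging now.")  # only runs after an explicit human click
```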
We performed an experimental security analysis of the app model of two popular BCPs: Slack and Microsoft Teams. Our methodology was to study each BCP-facilitated interaction method between apps and users. We found that these BCPs violate standard security principles, and we created proof-of-concept attacks to:
- impersonate users and trick victim apps into performing unwanted actions;
- hijack commands and link unfurls;
- steal messages from private channels without appropriate permissions.
Our discussion of countermeasures indicates that while point fixes for these attacks can be deployed at the cost of BCP usability, preventing further issues may require a redesign of the BCP app access control model.