The Advanced Risk of Basic Roles in GCP IAM
Basic roles in GCP allow data-level actions, even though at first glance it might seem like they don’t. Avoid using basic roles, and if you must use them, make a special effort to protect any sensitive data you store in your GCP projects.
Most GCP users know that granting basic roles is bad practice. But you may be surprised to learn that the risk is much more serious than it seems, because basic roles actually grant far more than what appears in their permissions list (which is already excessive, of course).
For the Owner role, we can assume that most project administrators are aware it includes the resourcemanager.projects.setIamPolicy permission, which allows straightforward privilege escalation, and that they manage the risk accordingly. However, for Viewer or Editor, you could make the very reasonable assumption that even though the roles provide a wide set of permissions - spanning every resource type in a GCP project - AT LEAST they won’t grant anything beyond that list.
Well, it appears this assumption is WRONG.
The Problem with “Principals with project-level basic roles” Groups
In case you didn’t know, when you grant a principal a basic role on a project or above (folder / organization), they are automatically placed in a group corresponding to that role, named “<ROLE_NAME>s of project: <PROJECT_NAME>”.
So, for example, if you grant a principal the Viewer role on the project Ermetic-Production, it becomes a member of the group Viewers of project: Ermetic-Production.
We found out that bindings for these groups are created automatically on key resources - giving principals with basic roles more permissions (and even different kinds of permissions) than you bargained for.
Specifically, current and future members of these groups are automatically granted roles that provide them with permissions to data-level actions, and not just control plane actions - which is really counter-intuitive.
Since the bindings are created at the resource level (e.g. a storage bucket) and not at the project level, you may not notice them: per-resource GCP IAM policies are rarely reviewed, and reviewing them would require inspecting the IAM policy of every single resource, which is usually not feasible.
As an example - let’s see how this plays out with storage buckets.
Viewers of the project receive, by default, the Storage Legacy Object Reader role, which includes storage.objects.get, and the Storage Legacy Bucket Reader role, which includes storage.objects.list. Combined, these roles grant Viewers the ability to access the data itself in the storage bucket. The Storage Legacy Object Owner and Storage Legacy Bucket Owner roles granted to Editors and Owners carry similar permissions (and more).
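You can observe these bindings yourself by fetching the IAM policy of a single bucket (the bucket and project names below are illustrative):

```shell
# Fetch one bucket's IAM policy; alongside explicit bindings, the output
# typically shows "convenience" members bound to legacy roles, e.g.:
#   projectViewer:my-project -> roles/storage.legacyBucketReader
#   projectEditor:my-project -> roles/storage.legacyBucketOwner
gsutil iam get gs://my-bucket
```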
If you look at the permission set of the Viewer role, you may mistakenly conclude it has no access to storage.objects.get and storage.objects.list on buckets, as these permissions are not included (as can be seen in figure 2). This, in a nutshell, is what should keep you on your toes.
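If you do export a bucket’s policy for review, a quick filter can flag any convenience-group members. The following is a minimal sketch; the policy file contents and project name are illustrative, not real output:

```shell
# Write a hypothetical bucket IAM policy to a file (illustrative content,
# shaped like the JSON returned by `gsutil iam get gs://my-bucket`)
cat > /tmp/bucket-policy.json <<'EOF'
{
  "bindings": [
    {"role": "roles/storage.legacyBucketReader",
     "members": ["projectViewer:ermetic-production"]},
    {"role": "roles/storage.legacyBucketOwner",
     "members": ["projectEditor:ermetic-production",
                 "projectOwner:ermetic-production"]},
    {"role": "roles/storage.objectViewer",
     "members": ["user:auditor@example.com"]}
  ]
}
EOF

# Flag convenience-group members (projectViewer / projectEditor / projectOwner)
grep -Eo 'project(Viewer|Editor|Owner):[a-z0-9-]+' /tmp/bucket-policy.json | sort -u
```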
Why Should You Care?
The obvious issue is that certain individuals who are responsible for performing tasks that require control plane permissions (e.g. system administrators or auditors) will also get out-of-the-box permission to read information you store, unless it’s otherwise protected (e.g. encrypted using a customer-managed KMS key). However, there are other scenarios where the impact could be much worse.
Some 3rd party vendors ask for a Basic role binding to enable their products to work with your GCP project; for example, following is a screenshot from the documentation of Palo Alto’s Prisma™:
It’s unnecessary to describe the risk of providing a 3rd party with access to data - but this is exactly what happens when you provide this role. To add insult to injury, since few people know about this configuration, it’s very possible that the 3rd party itself is not aware of the risk and will neglect to use the proper technical and legal controls to mitigate it.
Another thing to look out for is that the default service accounts for App Engine and Compute Engine are automatically granted the Editor role. So unless this is changed, if Compute instances have the Compute Engine default service account attached, or App Engine is using the App Engine default service account, the workloads they run are actually granted data-level access. (Note that this is configurable for Compute instances, yet from the documentation it seems NOT to be configurable for App Engine.) To understand what this actually means, imagine that a workload running on App Engine or a Compute instance is compromised and a malicious actor can remotely execute code on it. If this happens, data-level permissions could lead to compromised confidential information and serious legal, reputational and financial effects.
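For a Compute instance running as the default service account, one mitigation is to swap in a dedicated, least-privileged service account. A rough sketch (instance, zone and account names are illustrative; the instance must be stopped before its service account can be changed):

```shell
# Stop the instance - the attached service account can only be changed
# while the instance is not running
gcloud compute instances stop my-instance --zone=us-central1-a

# Replace the Compute Engine default service account with a dedicated one
gcloud compute instances set-service-account my-instance \
  --zone=us-central1-a \
  --service-account=workload-sa@my-project.iam.gserviceaccount.com \
  --scopes=cloud-platform

gcloud compute instances start my-instance --zone=us-central1-a
```

Note that with the broad cloud-platform scope, the workload’s effective access is governed by the IAM roles granted to workload-sa, so keep that account’s roles minimal.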
What Can You Do?
First of all, never (NEVER!) electively use basic roles for any principal, except in very specific use cases such as system administrators - and even then, do so with extreme caution. In particular, be very careful about providing a Basic role (even “just” Viewer) to a 3rd party and do whatever you can to avoid it.
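Before remediating, it helps to know where basic roles are currently granted. One way is to filter a project’s IAM policy from the command line (the project ID is illustrative):

```shell
# List every binding of the three basic roles in a project's IAM policy,
# one member per row
gcloud projects get-iam-policy my-project \
  --flatten="bindings[].members" \
  --format="table(bindings.role, bindings.members)" \
  --filter="bindings.role:(roles/owner roles/editor roles/viewer)"
```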
In addition, you can use an organizational policy to disable the automatic grant of the Editor role to the default service accounts of App Engine and Compute resources. Note that doing this won’t remove the Editor role from a default service account that already has it, but it will prevent the grant for default service accounts created in the future. This is very useful if you set an organizational policy that applies to new projects as they are created. For existing projects with these service accounts already in place, you should avoid using the default service account (unlike for most services, in Compute instances you can actually change the attached service account after the resource was created). You can also reduce the permissions the default service account has - after making sure doing so won’t affect its ability to support the business function of the resources using it. It goes without saying that, if possible, you should find or create the least-privileged role that will still allow your workloads to serve their purpose.
Finally, this issue is yet another important reminder to protect sensitive data with customer-managed KMS keys. This prevents any principal that lacks the decryption permissions of the KMS keys used to encrypt the bucket from actually accessing the data - even if it holds the permission for the action that reads it (e.g. storage.objects.get). An interesting scenario to look out for is a 3rd party (either a software vendor or outside personnel) that requests access to both a basic role (such as Viewer) and a role that allows it to decrypt information encrypted with customer-managed KMS keys in a project (such as Cloud KMS CryptoKey Decrypter).
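Setting a customer-managed key as a bucket’s default encryption key can be done with gsutil (the bucket, project and key names are illustrative; the bucket’s service agent must also be authorized to use the key):

```shell
# Set a customer-managed Cloud KMS key as the bucket's default encryption
# key; from then on, reading objects also requires decrypt permission on
# the key, not just storage.objects.get
gsutil kms encryption \
  -k projects/my-project/locations/us/keyRings/my-ring/cryptoKeys/my-key \
  gs://my-bucket
```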
We found out about this issue because we make it our business to unravel the complexities of cloud environments - yet we are still surprised when we find counter-intuitive configurations that can potentially cause unsuspecting administrators to expose sensitive information in their accounts (we have reported similar examples in AWS and Azure in the past).
We hope this post helped raise awareness and provide some best practices for managing the risk. If you wish to find out more about what really goes down deep in your cloud environments and what other threats are lurking RIGHT NOW, just waiting to be exploited by the wrong people - you are welcome to contact us.