Hidden Risk in the Default Roles of Google-Managed Service Accounts
Some Google-managed service accounts are bound by default to a role granting the storage.objects.get permission. This hidden risk is (yet another) great reason to use customer-managed KMS keys to encrypt the sensitive data you store in buckets.
A few weeks ago, cloud-sec-twitter got all riled up because the s3:GetObject permission was temporarily granted to AWS support staff. The blunder was due to an update of the AWSSupportServiceRolePolicy, which is attached to a mandatory Service-Linked Role that exists in all AWS accounts. The updated policy enabled AWS support staff to access objects stored in S3 buckets. This was possible without the account's administrator ever attaching the policy to the role (it was already attached by default), and the administrator couldn't remove the permission either (policies attached to such roles can't really be edited). To its credit, AWS reverted the update quickly. The response by AWS's Colm MacCarthaigh shows that AWS took the event seriously (we highly recommend this thread by Victor Grenu to learn more).
While the case got the attention it deserved, we wonder if consumers of Google Cloud Platform (GCP) know that a similar exposure scenario exists by default in GCP. The GCP exposure is particularly interesting because, unlike the parallel AWS situation, it’s there almost by design and should be of concern to anyone holding sensitive information in storage buckets.
Let’s get down to the details and see what you can do about it.
For those of you not familiar with how Google-managed service accounts operate, here’s a brief description: When a service in GCP needs access to resources in your GCP environment to act “behind the scenes” and perform actions required to operate properly, Google creates and manages a service account, which you can’t control, for this purpose. As described in the GCP service accounts documentation:
Some Google Cloud services need access to your resources so that they can act on your behalf.
To meet this need, Google creates and manages service accounts for many Google Cloud services. These service accounts are known as Google-managed service accounts.
For example, when you enable Cloud Functions in your project, Google creates a service account called service-PROJECT-NUMBER@gcf-admin-robot.iam.gserviceaccount.com.
You won’t see this service account in the “Service Accounts” blade of the IAM service; however, you can see it in the IAM policy if you enable “Include Google-provided role grants” (see Figure 1). Looking at the policy, you see that the service account is bound by default to a role called “Cloud Functions Service Agent”. This predefined role holds the permissions the service account requires to perform its job.
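To make this concrete, here is a minimal sketch of how you might spot such Google-provided bindings by scanning a project's IAM policy, e.g. the JSON returned by `gcloud projects get-iam-policy PROJECT_ID --format=json`. The domain suffixes and the sample policy below are illustrative assumptions, not an exhaustive list of Google service agents:

```python
# Sketch: flag IAM policy bindings whose members are Google-managed
# service agents. The policy dict mirrors the JSON shape produced by
# `gcloud projects get-iam-policy`. The suffix list is illustrative,
# not exhaustive.

GOOGLE_AGENT_SUFFIXES = (
    "@gcf-admin-robot.iam.gserviceaccount.com",    # Cloud Functions
    "@containerregistry.iam.gserviceaccount.com",  # Container Registry
)

def google_managed_bindings(policy: dict) -> list[tuple[str, str]]:
    """Return (member, role) pairs for Google-managed service agents."""
    hits = []
    for binding in policy.get("bindings", []):
        for member in binding.get("members", []):
            if member.endswith(GOOGLE_AGENT_SUFFIXES):
                hits.append((member, binding["role"]))
    return hits

# Example policy, trimmed to the bindings of interest.
policy = {
    "bindings": [
        {
            "role": "roles/cloudfunctions.serviceAgent",
            "members": [
                "serviceAccount:service-123456@gcf-admin-robot.iam.gserviceaccount.com"
            ],
        },
        {"role": "roles/viewer", "members": ["user:alice@example.com"]},
    ]
}
print(google_managed_bindings(policy))
```

Running this against a real project's policy would surface every Google-provided grant, which is handy when auditing what Google's identities can touch.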
The necessity for such a service account is pretty clear. What you may not realize unless you look a bit below the surface is what exactly such a service account entails. For example, when you create a Cloud Function, the environment has to perform tasks such as storing and managing the code. For Cloud Functions, the environment creates a bucket for storing, writing and reading the code. This process happens seamlessly, in a way that doesn’t usually have to concern you. If you’re really interested, you can check out the buckets created, such as below:
If you browse the bucket you may even see a note that Google placed there kindly asking that you not delete the bucket so as to not compromise its ability to manage the Cloud Functions:
To access the contents of the bucket, the Google-managed service account that the Cloud Functions service uses needs the storage.objects.get permission. For this reason, the role bound to it by default (“Cloud Functions Service Agent”) includes that permission.
However, it’s important to note that this binding is created on the project scope.
The meaning of this binding is that, unless explicitly restricted, the Google-managed service account has access to all the data stored in the project’s Cloud Storage buckets — not just the bucket Google created for its own use.
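Here is a minimal model of why that is. This is our own simplification of IAM inheritance, not the real evaluator, and the account name and bucket names are hypothetical — but it captures the key point: a project-level grant reaches every bucket in the project.

```python
# Simplified model of GCP IAM inheritance (not the real evaluator):
# a permission granted at the project level applies to every bucket
# inside that project.

ROLE_PERMISSIONS = {
    # Trimmed: the real service-agent role carries many more permissions.
    "roles/cloudfunctions.serviceAgent": {"storage.objects.get"},
}

# Project-level bindings: (member, role) pairs granted on the project.
project_bindings = [
    ("serviceAccount:service-123456@gcf-admin-robot.iam.gserviceaccount.com",
     "roles/cloudfunctions.serviceAgent"),
]

def can_read_object(member: str, bucket: str) -> bool:
    """The bucket argument is irrelevant on purpose: a project-level
    grant of storage.objects.get covers every bucket in the project."""
    return any(
        m == member and "storage.objects.get" in ROLE_PERMISSIONS.get(role, set())
        for m, role in project_bindings
    )

agent = "serviceAccount:service-123456@gcf-admin-robot.iam.gserviceaccount.com"
print(can_read_object(agent, "gcf-sources-bucket"))  # the bucket Google created
print(can_read_object(agent, "my-sensitive-data"))   # ...and your own bucket too
```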
Do not take this service account access lightly. Without proper precautions, you are effectively allowing an identity that Google controls to access your data; if that control is mishandled, your data could fall into the wrong hands. In addition, by mistake or oversight, the service agent roles may be granted to other identities, giving them unneeded access to storage buckets.
It’s worth noting that we found a similar by-default configuration in yet another Google-managed service account: “Google Container Registry Service Agent”, with the “Container Registry Service Agent” role bound to it by default.
So what can you do about this?
First of all, even though you’re technically able to, never grant these GCP predefined service agent roles to any other identity. The GCP IAM documentation states this clearly.
Still, mistakes happen and, as mentioned, these roles are granted to Google-managed service accounts by default. So what can you do to address the problem?
Unlike the mandatory AWS Service-Linked Role we discussed earlier, with Google-managed service accounts you have the ability to remove the roles that Google assigns by default. And yet, we do not recommend doing so; it is best to avoid touching default permissions so as not to risk disrupting the normal operation of services you rely on.
An easier, probably more effective and less error-prone solution is to use KMS encryption with customer-managed keys on Cloud Storage buckets. With this approach, any identity that lacks the cloudkms.cryptoKeyVersions.useToDecrypt permission on the key used to encrypt the data is unable to read the bucket’s contents.
Fortunately (and not by accident, of course), this permission is not included in the service agent roles mentioned above. So even though the service accounts (or any other identity the roles are bound to) have storage.objects.get, as long as the buckets holding the objects are encrypted with specific KMS keys on which the service accounts lack cloudkms.cryptoKeyVersions.useToDecrypt, the service accounts won’t be able to access the actual data.
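Extending our earlier simplified model, the CMEK gate looks like this (again a sketch of the logic, not the real IAM evaluator): reading an object in a CMEK-protected bucket requires both the object permission and the decrypt permission on the key.

```python
# Sketch of the CMEK gate (a simplified model, not the real evaluator):
# reading an object in a CMEK-protected bucket requires BOTH
# storage.objects.get AND cloudkms.cryptoKeyVersions.useToDecrypt
# on the encrypting key.

def can_read_data(member_perms: set[str], bucket_cmek: bool) -> bool:
    if "storage.objects.get" not in member_perms:
        return False
    if bucket_cmek and "cloudkms.cryptoKeyVersions.useToDecrypt" not in member_perms:
        return False  # holds the object permission but cannot decrypt
    return True

# The service agent: has storage.objects.get but no decrypt permission.
agent_perms = {"storage.objects.get"}

print(can_read_data(agent_perms, bucket_cmek=False))  # True: plain bucket is exposed
print(can_read_data(agent_perms, bucket_cmek=True))   # False: CMEK blocks the read
```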
Using this approach to protect your sensitive data is not just the simplest solution for this particular issue, it is good practice in general.
To sum up:
- Keep close track of the buckets in which you place your sensitive data
- Confirm that the buckets are encrypted using KMS encryption with customer-managed keys
- Keep close tabs on the identities that have permissions to use cloudkms.cryptoKeyVersions.useToDecrypt with the relevant KMS keys
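The three points above can be tied together in a small audit sketch. Everything here is hypothetical — the bucket names, the `cmek_key` field, and the key-to-decrypters map are placeholders you would populate from your own inventory:

```python
# Audit sketch (all names and fields are hypothetical placeholders):
# flag sensitive buckets without CMEK, and report who can use the
# relevant key to decrypt.

buckets = [
    {"name": "my-sensitive-data", "sensitive": True, "cmek_key": None},
    {"name": "payroll-exports", "sensitive": True,
     "cmek_key": "projects/p/locations/l/keyRings/r/cryptoKeys/k"},
    {"name": "public-assets", "sensitive": False, "cmek_key": None},
]

# Identities holding cloudkms.cryptoKeyVersions.useToDecrypt per key.
key_decrypters = {
    "projects/p/locations/l/keyRings/r/cryptoKeys/k": ["group:data-admins@example.com"],
}

def audit(buckets, key_decrypters):
    findings = []
    for b in buckets:
        if b["sensitive"] and b["cmek_key"] is None:
            findings.append(f"{b['name']}: sensitive but not CMEK-encrypted")
        elif b["cmek_key"]:
            who = ", ".join(key_decrypters.get(b["cmek_key"], []))
            findings.append(f"{b['name']}: decryptable by [{who}]")
    return findings

for line in audit(buckets, key_decrypters):
    print(line)
```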
We hope you found this useful; please be in touch with any questions or comments.