
The Default Toxic Combination of GCP Compute Engine Instances

By default, compute instances in GCP are prone to a toxic combination that you should be aware of, and that you can avoid and fix.

By Lior Zatlavi

By default, compute instances in GCP are attached to a service account that, also by default, is granted the powerful Editor role. Default access scopes narrow the instance's permissions somewhat but still provide, among other things, read access to all storage in the project. Watch out for this!

A few weeks ago my colleague Liv Matan and I had the pleasure of speaking at fwd:cloudsec. We gave a session on implementations of the metadata service (also sometimes known as IMDS) in the computing services of AWS, Azure and GCP. Feel free to attend our detailed webinar, which we based on the talk.

One item we received much feedback on is not directly about metadata services but about a related configuration that amplifies the consequences of a metadata service being abused. Specifically, GCP compute instances have a default configuration that can have dire implications if the metadata service is manipulated and the instance's credentials are exfiltrated for use by an attacker.

This blog post addresses this toxic combination configuration, why you should take it seriously and what you can do about it. 

What’s Metadata Got to Do With It?  

Before we dig into the actual configuration of the permissions let’s discuss, for context, the risk of credential exfiltration from compute instances. 

When you’re using compute instances (such as compute engine instances in GCP, VMs in Azure or EC2 instances in AWS) that have access to resources in your cloud environment, you (or, rather, your application) don’t have to manage the session credentials they use on your own as the metadata service of the instance manages them for you. 

This works as follows: the compute instance service is allowed to impersonate a proxy identity and get temporary credentials for a session that grants the permissions assigned to that identity. The credentials are stored in a metadata service (used to store and serve metadata to applications running on the instance), from which applications can retrieve them.

Figure 1: Comparison of metadata services in cloud provider computing services

As seen in the figure above, in GCP, the proxy identity is a service account for which an OAuth 2.0 access token is created as the session credentials. 

This is interesting from a security perspective because the ability to manipulate the metadata service and retrieve the credentials can allow an attacker to use the credentials – unless otherwise limited – for their own purposes, effectively compromising the identity that the instance is allowed to use.

In the case of GCP, this access is achieved by performing the following HTTP GET call (in this example, using curl):

curl "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token" -H "Metadata-Flavor: Google"

One popular way of manipulating the metadata service to extract the credentials is by leveraging software – deployed on the machine – that is vulnerable to SSRF. The SSRF vulnerability may allow an attacker to make calls on behalf of the machine. If an attacker can craft such a request on behalf of, for example, a web server running on the compute instance, they may be able to extract and use the access token that may allow them to access the permissions of the service account attached to the instance. 
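To make the attack concrete, here is a sketch of what such an SSRF abuse might look like against a hypothetical vulnerable endpoint that forwards both an attacker-supplied URL and an attacker-supplied header. The application, its path and its parameters are invented for illustration; only the metadata endpoint and header are real:

```shell
# Hypothetical SSRF-vulnerable endpoint; the "url" and "header" parameters
# are illustrative, not a real API. The attacker asks the web server to
# fetch the metadata token endpoint on its own behalf:
curl -G "https://app.example.com/fetch" \
  --data-urlencode "url=http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token" \
  --data-urlencode "header=Metadata-Flavor: Google"
```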

Two things to note. First, as evident from the GET call above, the “Metadata-Flavor” header (with the value “Google”) must be present on the call. To take action, an attacker would therefore have to be able not only to make the HTTP GET request but also to add headers to it. This is an important layer of protection from such attacks, as it requires the attacker to control more of the request than just its URL.

In addition, the metadata service will reject any calls with the “X-Forwarded-For” header, as this is a popular header on requests made from reverse proxy servers. Since such servers are often a target for harvesting credentials from their metadata service (as almost by definition their goal is to relay information from other servers), this is another important protection mechanism. 

However, even with these important protection mechanisms in place, having web servers vulnerable to SSRF that allow making a GET request with custom headers and retrieving the response is still very much a reality, as shown in our recent webinar on the AWS implementation of IMDS on EC2 instances.

So why is this vulnerability especially important to be aware of when it comes to GCP? 

Toxic IAM Combination (by Default!) 

While any metadata exploitation is of course a significant security risk, in GCP the default IAM configuration of compute instances makes such exploitation potentially far more damaging.

This is because, by default, GCP attaches to each compute instance (rather than attaching no service account at all) a default service account called the “Compute Engine default service account,” with the identifier:

<PROJECT_NUMBER>-compute@developer.gserviceaccount.com

(It’s worth noting that this attachment can be changed after the machine is created.) 

On its own, this default configuration might be reasonable (although its benefit is not self-evident) were it not for another default configuration, in which the default service account is granted the Editor role on the entire project where the compute instance resides:

Figure 2: Default granting of the Editor role with the default service account as Principal

For those of you less familiar with GCP, roles are documents that group together permissions, which are then granted to identities (or Principals) on resources or containers of resources, such as projects. For more information, feel free to read our introduction to IAM in GCP.

The Editor role, along with Owner and Viewer, is one of the Basic roles in GCP IAM. As such, it includes a huge number of permissions. When granted on an entire project (as is the case with the default service account for Compute Engine), the Editor role is an extremely powerful assignment of permissions that allows its grantee to create, delete and manipulate almost any kind of resource in the project. The role can also potentially be used for privilege escalation because it includes the iam.serviceAccounts.actAs permission, which allows impersonation of other service accounts and, since the assignment is on the entire project, applies to all service accounts in the project. Like we said - ultra powerful.
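If you want to check whether this grant exists in your own environment, one way (assuming the gcloud CLI is installed and authenticated, with PROJECT_ID as a placeholder for your project) is to filter the project's IAM policy for Editor bindings held by the default account:

```shell
# List Editor-role bindings held by the default Compute Engine service
# account; PROJECT_ID is a placeholder for your own project.
gcloud projects get-iam-policy PROJECT_ID \
  --flatten="bindings[].members" \
  --filter="bindings.role:roles/editor AND bindings.members:compute@developer.gserviceaccount.com" \
  --format="table(bindings.role,bindings.members)"
```

An empty result means the default account does not hold the Editor role on the project.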

As shown in our post on the advanced risk of basic roles, having the Editor role on a project also awards membership to the “Editors of project: <project name>” special group. This group is automatically granted, for example, read access to storage buckets created in the project. This is counter-intuitive as normally we would expect managerial roles to have only control-plane and not data-plane permissions but in GCP this is not the case. Keep this in mind as it will be extremely significant in the next section.

Figure 3: Special group “Editors of project: <PROJECT_NAME>” awarded legacy roles that allow read access to a bucket

Scoped Down, Can Still Do Damage 

Somewhat fortunately, the permissions a compute instance can use are scoped down by a legacy mechanism called “Access Scopes”. In a very blunt way, access scopes limit access to APIs (along coarse lines such as read-only or full access, unlike the RBAC mechanism, which is permission specific). Think of them as a boundary set on the compute instance, similar to a permissions boundary on an IAM role in AWS. In GCP, the mechanism doesn’t actually grant the compute instance access to resources but can limit the set of RBAC permissions granted to the service account attached to it.
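You can see which service account and access scopes a given instance actually has with a describe call (assuming the gcloud CLI; INSTANCE_NAME and ZONE are placeholders):

```shell
# Show the attached service account(s) and their access scopes for one
# instance; INSTANCE_NAME and ZONE are placeholders.
gcloud compute instances describe INSTANCE_NAME --zone=ZONE \
  --format="yaml(serviceAccounts)"
```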

This boundary setting is extremely significant as it means that by default the powerful Editor role is limited from exercising most of its capacity. 

But hold on! There are two thorns in the rose of this scoping down mechanism. 

First of all, by default, the access scopes applied to compute instances still allow several functions; notably, “read-only” access to storage. As described in the previous section, the default Compute Engine service account is, by default, an Editor of the project and so actually holds that read access to storage buckets.

From my point of view, this default read-only access is another component in the toxic combination of the default configuration of compute instances. Take a minute to think about it: By default, compute instances have read access to all storage buckets created by the customer in the project associated with the compute instance. Such a configuration mix is a formidable security gap. 

Second, it’s more common than you may think (stats below) for administrators to actually remove the access scope limitations by granting access to the cloud-platform access scope and relying completely on the RBAC permissions assigned to the service account. One reason for this tendency may be that GCP documentation recommends doing so as part of a best practice: 

There are many access scopes available to choose from, but a best practice is to set the cloud-platform access scope, which is an OAuth scope for most Google Cloud services, and then control the service account's access by granting it IAM roles. [Source: GCP documentation]

Don’t get me wrong. Fine-graining permissions with the RBAC mechanism is a great approach and one I believe in. I can also see how granting access to the general cloud-platform access scope can make the task easier and with less friction. I believe the GCP recommendation derives from these considerations. 

However, after almost two decades as a cyber security professional, I have a fairly good idea of what the process would look like for most GCP admins in a typical technology organization:

  • Phase 1: “Let’s allow access to the general cloud-platform access scope so it’s easier for us to fine tune permissions with RBAC! It’s even in the documentation.”
  • Phase 2: “OK, now we need to specify the exact permissions the workload needs and select appropriate built-in roles or even create custom roles to grant them.” 
  • Phase 3: “Wow. This is really hard to get right. Even though we haven’t been able to solve this and have least privilege permissions, things are working and that’s what counts. Let’s leave the general access scope as is until next quarter – should be fine.”

As you may guess, refining the access permissions doesn’t take place in the next quarter, or the quarter after – and the permissions assigned to the workload just stay there.

To be honest, an even more likely scenario would be for the organization to simply set the general cloud-platform access scope with no intention of reducing the permissions to least privilege - simply so things will work. 

The default assignment of the Editor role is potentially quite significant as it means the instance has near-admin level permissions. Scary.

So… How Common Is This? 

Interested in how common these configurations are, we sampled real life environments we had access to and surveyed how many instances were configured in this way. 

We first looked at how many compute instances had the default service account attached and still had the Editor role assigned and the default access scopes applied (a combination which, as you remember, allows read access to storage buckets). We found that 18.6% (!) of compute instances in our real life data matched these criteria.

Next, we looked for this configuration with the general cloud-platform access scope applied to the instance (meaning the access was barely limited) rather than the default access scopes - and we found a staggering 7.5% of compute instances configured this way. Imagine: roughly one out of every 13 compute instances was found to have, for most purposes, administrator-like privileges on the entire GCP project it belonged to.

If these stats reflect most companies’ environments (and we assume they do), this toxic scenario is a real issue plaguing GCP infrastructure. 
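The core of the check we ran can be sketched as a small shell function that flags the combination of the default service account's email address with either the default storage read scope or the broad cloud-platform scope. The function itself is pure string matching; the sample values below are illustrative, and in practice you would feed it fields from `gcloud compute instances list` output:

```shell
# Flag the toxic combination: default Compute Engine service account plus
# either the storage read-only scope or the cloud-platform scope.
is_toxic() {
  local email="$1" scopes="$2"
  [[ "$email" == *-compute@developer.gserviceaccount.com ]] &&
    { [[ "$scopes" == *cloud-platform* ]] ||
      [[ "$scopes" == *devstorage.read_only* ]]; }
}

# Illustrative usage with sample values:
if is_toxic "123456789-compute@developer.gserviceaccount.com" \
            "https://www.googleapis.com/auth/devstorage.read_only"; then
  echo "toxic combination found"
fi
```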

What You Can Do About It 

The solution to this security gap, at least with regard to new projects or compute instances, is brutally simple - don’t use the default service account on instances. 

When creating an instance, only attach a service account to it if the compute instance actually needs access to resources in the project. When you do attach a service account, make sure it’s one you have thought through carefully and created, and for which you are managing the roles and resources to which it is granted access. 

Figure 4: Selecting “No service account” or choosing your own service account instead of the default when creating a compute instance. We have the technology, people!

Remember: service accounts can be attached and detached after the machine is already created. So you are not obligated to have one attached upon creation. 
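For example, assuming the gcloud CLI, an instance can be created with no service account at all, or with a dedicated one you manage (all names and zones below are placeholders):

```shell
# Option 1: no service account (and therefore no scopes) at all.
gcloud compute instances create my-instance --zone=us-central1-a \
  --no-service-account --no-scopes

# Option 2: a purpose-built service account with narrow scopes.
gcloud compute instances create my-instance --zone=us-central1-a \
  --service-account=my-workload-sa@my-project.iam.gserviceaccount.com \
  --scopes=https://www.googleapis.com/auth/devstorage.read_only
```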

If you want better control at the organization level, you can set an organizational policy that prevents the default Compute Engine service account (and the default App Engine service account) from being granted any role upon its creation.

Figure 5: The organizational policy for not granting any role to the default service account upon its creation, that is, avoiding the default assigning of the Editor role (not enforced by default)
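Assuming the gcloud CLI and sufficient organization-level permissions, the boolean constraint behind this policy can be enforced like so (ORGANIZATION_ID is a placeholder):

```shell
# Enforce the constraint that disables automatic role grants to default
# service accounts in new projects; ORGANIZATION_ID is a placeholder.
gcloud resource-manager org-policies enable-enforce \
  iam.automaticIamGrantsForDefaultServiceAccounts \
  --organization=ORGANIZATION_ID
```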

Of course, this approach will not apply to existing default service accounts in running projects but is an excellent preventative measure for new projects. In fact, this is such an excellent setting to apply that it makes me wonder why it is not the default configuration, rather than a policy the customer needs to actively set.

Now for the more complicated problem of existing compute instances that have the default service account attached and are assigned the default Editor role. 

It would likely be reckless to simply strip the Editor role from the default service account without knowing which compute instances may use it and for what purpose. Similarly, even for specific compute instances, it’s probably not wise to take all these permissions away by detaching the service account without knowing for sure which permissions the service account is actively using.

A better approach is to replace the default service account on each compute instance with a dedicated service account granted custom permissions that fully support the legitimate business need of the compute instance without being over-permissive. To do so responsibly you will need to fine tune the permissions each workload actually needs and grant them via either a built-in or, preferably, a fine-grained custom role in place of the basic Editor role. This task is difficult and sensitive, as performing it the wrong way may deny legitimate business functionality – probably the last thing you want to do as a security professional. For this reason, it’s essential to first apply such a new configuration in a testing environment and make sure it works properly before moving it to production.
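Once a replacement has been validated, the swap itself can be done with the gcloud CLI; changing an instance's service account requires stopping the instance first. All names and zones below are placeholders, and the cloud-platform scope shown assumes you are relying on IAM roles, not scopes, for fine-grained control:

```shell
# Stop the instance, swap in the dedicated service account, then restart.
gcloud compute instances stop my-instance --zone=us-central1-a
gcloud compute instances set-service-account my-instance --zone=us-central1-a \
  --service-account=my-workload-sa@my-project.iam.gserviceaccount.com \
  --scopes=cloud-platform
gcloud compute instances start my-instance --zone=us-central1-a
```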

How Tenable Cloud Security Can Help 

As mentioned, the task of assigning least-privilege permissions to service accounts for already running compute instances using the over-permissive Editor role is extremely difficult and sensitive. Tenable Cloud Security is designed to do most of the heavy lifting for you.

Tenable Cloud Security processes logs of the actual activity of workloads and analyzes it against the permissions granted to the principal. It then calculates the difference and suggests a more appropriate permission set. The platform alerts you on identities (for example, service accounts attached to compute instances) that are over-permissive and shows you how to remedy the situation by telling you exactly which role to change and what changes to make.

Figure 6: Tenable Cloud Security finding for an over-permissive service account

For each role you need to substitute, Tenable Cloud Security will show you the exact difference between the permissions included in the existing role and those in the suggested role.

Figure 7: Tenable Cloud Security suggestion of a role replacement for an over-permissive role the service account is not utilizing

To make things even safer and more convenient, for new workloads, you can apply this analysis in a testing environment before even deploying to production. 


When using compute instances in GCP, make sure the default Compute Engine service account is not attached to them. This default configuration for GCP compute instances is prone to a combined configuration scenario that introduces unnecessary access risk.

As a preventative measure, to stop this from happening in new projects for which you enable the compute engine API, you should apply the organizational policy that will prevent the default service account (and default App Engine service account) from being granted any roles upon their creation. 

For currently running compute instances to which the default service account may still be attached and with the Editor role assigned, you should responsibly and carefully replace the default service account with a service account to which only the required permissions have been granted. 

If you have any questions on this or other topics, feel free to contact me at: [email protected]  
