Part 1 of a two-part series takes a look at sensitive AWS resources - the secret strings and keys used in AWS.
In this two-part series, we take a closer look at sensitive AWS resources - the secret strings and keys used in AWS. In this first part, we explain these resources and the access control mechanisms relevant to them; in the second part, we will delve into the challenges of tracking the permissions granted to them and suggest ways in which automated analysis of environment configuration and logs in AWS can help.
What Are The Keys to The Kingdom?
Nearly all cloud environments store sensitive or important data. Such data requires multiple levels of protection, including encryption as well as strict access control - whether due to regulatory requirements or organizational policies.
In addition to private files and records, some of the most sensitive artifacts stored in the cloud are credentials for both external and in-house services. When an application running on a cloud platform requires a third party software service or database, it often uses credentials in the form of an API token or access key. These are highly sensitive strings, as they are usually all that’s required to perform actions on behalf of the organization. This means a malicious actor compromising them could perform actions while posing as the application, access sensitive information the application uses, tamper with information or even cause a denial of service incident. On top of their sensitivity, these strings also must be very easily accessible and simple to use programmatically.
IaaS and PaaS vendors have various solutions for this issue. AWS offers services to encrypt data and handle secrets effectively and quite simply. These services work well - so well, in fact, that even though they literally hold the keys to the kingdom, we tend to forget that the secrets they hold are in fact AWS resources - not unlike S3 buckets or EC2 instances - and that access control to them is managed in the exact same way.
In one of our past blog posts we discussed the challenges of managing identities and entitlements of AWS IAM Users and Roles. As one part of the series shows, the challenge of merely understanding which entity has access to what resource is massive - virtually impossible for an analyst or even a team of analysts relying only on the tools supplied by AWS. It goes without saying that, cybersecurity-wise, not all AWS resources are created equal. Almost by definition, “secrets” and encryption keys are some of the most sensitive resources in our environment.
By neglecting to control the access to these resources we might be enabling a backdoor to our most treasured assets, making the cryptography-forged, secured front door moot.
In this post, we will explore the ins and outs of access control for secrets in AWS, and how to significantly reduce the risks.
AWS Secret Keeping Services
AWS provides three main mechanisms for managing such sensitive information - Key Management Service (KMS), Secrets Manager and the Parameter Store of AWS Systems Manager (SSM).
We will give a brief overview of how these services are used as background for the later sections - but this is of course just the tip of the iceberg when it comes to understanding them. We highly recommend you go through the AWS documentation to become familiar with how to use them.
Key Management Service
KMS is the foundation of encryption used in AWS; it allows you to create and manage keys utilized for cryptographic functions. Many AWS services leverage KMS in order to perform cryptographic actions.
The primary resources of AWS KMS are Customer Master Keys (CMKs), which can be used to encrypt or decrypt information. Access to CMKs is configured based on a Key Policy, or optionally, on IAM policies.
KMS guarantees that the confidential parts of CMKs required for decryption / signing actions are stored safely and never leave KMS in plaintext form. This means that only an entity with access to the CMKs based on its Key Policy and / or IAM policy can access the secrets encrypted by it. That’s why access to these resources is so sensitive.
It’s important to distinguish between two types of CMKs:
- Customer Managed CMKs - They are (as stated in the AWS docs) the “CMKs in your AWS account that you create, own, and manage.” That is - CMKs you can create for performing cryptographic functions for any kind of proprietary needs you may have (e.g. - encrypting files). You have complete control over them and you, specifically, manage their Key Policy.
- AWS Managed CMKs - According to AWS, these are “CMKs in your account that are created, managed, and used on your behalf by an AWS service that is integrated with AWS KMS.” This means they are keys created and used automatically when you use AWS services that require cryptographic actions for their operation (Secrets Manager described in the next section is a good example). Since these CMKs are managed by AWS, you can’t configure their Key Policy.
Secrets Manager
Previously, we spoke about storing sensitive values which need to be accessed at runtime by applications, such as credentials for various services and assets such as databases. Since this is a very common requirement for applications everywhere, AWS has a service called Secrets Manager which enables you to “store, distribute and rotate credentials securely”.
Using Secrets Manager allows you to create an AWS resource called a “Secret” - a sensitive string, stored in an encrypted format. AWS allows you to configure the secret’s rotation process (changing the value of the Secret and having it still function properly as credentials) using a Lambda function and can even create a Lambda function that does it for you if the Secret in question is credentials for a service it knows (e.g. an RDS database or a Redshift cluster).
Systems Manager’s Parameter Store
The Systems Manager’s Parameter Store is a service older than Secrets Manager that allows managing Resources called “Parameters,” which are essentially Key-Value pairs available for use by applications. It wasn’t designed only to hold secrets such as API keys or DB credentials, but also holds non-sensitive values such as configuration information that require no encryption.
When creating a sensitive Parameter requiring encryption, you can simply make it a “SecureString” which configures AWS to encrypt it with a KMS key of your choice (either a dedicated AWS managed CMK or one of the CMKs managed by you).
It’s not as convenient as Secrets Manager - implementing rotation, for example, requires more work, especially if the credentials are for a service for which Secrets Manager can provide an out-of-the-box rotation Lambda function. However, mostly due to cost considerations, it’s still a popular alternative to Secrets Manager.
It’s All About The Resources
Before we move forward - let’s focus on the fact that CMKs, Secrets managed by Secrets Manager, and Parameters in the SSM Parameter Store are all, in and of themselves, AWS resources. Therefore, access to them is managed in a similar way to how access to any AWS resource is managed.
In the next sections, we’ll take you through the basics of access control for these specific resources to allow you to more easily assess who has access to them.
Reviewing Access Entitlements for Secrets and Keys
In this section, we will explore the specific things to look out for when it comes to managing access to the resources described in the previous section. We will review how it’s specified in IAM policies, Resource-based Policies and Key Policies / Grants for KMS CMKs.
It should be noted that many other aspects of access control which apply to AWS resources, such as Service Control Policies and Permission Boundaries, also apply here; however, as they don’t apply uniquely to the resources this article focuses on, we’ll come back to them later. Keep in mind this is not a complete list of locations in which to look, but it’s a good start.
First, let’s note the sensitive Actions to look for when it comes to each of the Resources we described previously. Out of the Actions available for each of the Resources, the ones we consider as sensitive Actions are the ones that allow you to access secured information. In the case of a KMS CMK, it might be decrypting ciphertext encrypted by it, or performing other sensitive cryptographic functions like signing.
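As a rough aid, the sensitive read / decrypt / delegate actions discussed above can be captured in a small lookup. This is a hedged sketch - the exact action set is our assumption and should be checked against the current API reference of each service:

```python
# Hedged sketch: a minimal map of the "keys to the kingdom" actions per
# service. The exact set is an assumption -- verify it against the current
# AWS API references before relying on it.
SENSITIVE_ACTIONS = {
    "kms": ["kms:Decrypt", "kms:Sign", "kms:CreateGrant"],
    "secretsmanager": ["secretsmanager:GetSecretValue"],
    "ssm": ["ssm:GetParameter", "ssm:GetParameters"],
}

def sensitive_actions_in(statement_actions):
    """Return which of the known sensitive actions appear verbatim in a
    policy statement's Action list (no wildcard expansion here)."""
    flat = {a for actions in SENSITIVE_ACTIONS.values() for a in actions}
    return sorted(flat & set(statement_actions))
```

A literal lookup like this is only a starting point, since IAM action patterns may use wildcards - more on that below.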
A Closer Look at IAM Policies
Like most resources, access to the resources we previously described can be managed using IAM Policies.
To locate Principals who have been granted these Actions, find the Customer Managed Policies you’ve created with these permissions, and check the Policy Usage to see where they are attached. This enables you to find Users, Roles and Groups which have access to resources through the Policy.
In addition, you should similarly review AWS Managed Policies which allow unrestricted access to secrets - for example, AdministratorAccess and SecretsManagerReadWrite.
The latter grants full access to your Secrets Manager secrets and is, ironically, the one to look out for, as it’s more prone to excessive assignment. While most administrators of AWS environments realize AdministratorAccess is a Policy to be assigned with extreme caution, “SecretsManagerReadWrite” is sometimes perceived as more innocent and much less harmful. In many ways it is - yet it still allows access to strings which could be used to read and impact some of the most sensitive assets an enterprise has, as it permits all Secrets Manager Actions on ALL resources (taken from the current version 3 of the policy):
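The core statement looks roughly like the excerpt below. This is an illustrative fragment, not the full policy document - check the policy’s current version in the IAM console for the authoritative text:

```json
{
    "Effect": "Allow",
    "Action": [
        "secretsmanager:*"
    ],
    "Resource": "*"
}
```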
Another AWS managed policy, AmazonConnectFullAccess (in its current version 2), allows kms:CreateGrant to "*".
Other AWS managed policies in their current versions allow access to SecureString values in the Parameter Store via ssm:GetParameter and / or ssm:GetParameters, for example:
- AmazonSSMFullAccess (version 4)
- AmazonSSMReadOnlyAccess (version 1)
- AmazonSSMMaintenanceWindowRole (version 3)
- AmazonSSMManagedInstanceCore (version 2)
- AmazonSSMServiceRolePolicy (version 9)
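Note that a literal string search for these action names isn’t enough: wildcard patterns such as ssm:Get* or a blanket "Action": "*" grant the same access. A hedged first-pass matcher (fnmatch approximates IAM’s * and ? wildcards well enough for a scan; the function names are ours):

```python
from fnmatch import fnmatch

# Hedged sketch: IAM action patterns use "*" and "?" wildcards, which map
# closely enough to fnmatch for a first-pass scan. IAM matching is
# case-insensitive, so normalize both sides.
def pattern_grants(pattern, action):
    """True if a policy Action pattern (e.g. "ssm:Get*") covers an action."""
    return fnmatch(action.lower(), pattern.lower())

def statement_grants_any(statement, watched_actions):
    """Return the watched actions covered by an Allow statement's patterns."""
    if statement.get("Effect") != "Allow":
        return []
    patterns = statement.get("Action", [])
    if isinstance(patterns, str):
        patterns = [patterns]
    return [a for a in watched_actions
            if any(pattern_grants(p, a) for p in patterns)]
```

For example, a statement allowing "ssm:Get*" would be flagged for ssm:GetParameter but not for ssm:PutParameter.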
It should be noted that AWS Managed Policies are frequently changed, and it’s a best practice not to use them as a long-term permission granting mechanism. Please note we’ve only given some examples here. Even if we did list all the AWS managed policies which currently provide access to these resources - AWS can always change other policies to do so as well - so make sure you’re alert!
Next, we will explain how to keep a closer eye on resource-based policies in which you should also look for the sensitive Actions we previously described. Inline Policies are obviously also significant to track; however, as this is a much more difficult task, they are referred to in the next section.
Key takeaway: Review AWS Managed Policies and Customer Managed Policies (and do your best to review Inline Policies) allowing the specified Actions from Table 1.
KMS Access Control
KMS Customer Master Keys (CMKs) have their own Resource-based access control mechanisms.
Key Policy Basics
A KMS CMK has a “Key Policy” which is basically KMS’s version of a resource-based policy. Unlike a regular resource-based policy, it’s mandatory for the CMK (the Resource) to have one, and in order for IAM policies to be effective for the CMK, the Key Policy must include a declaration such as this:
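A statement along these lines - shown in the AWS docs’ default key policy, with a placeholder account ID - is what delegates key access decisions to IAM:

```json
{
    "Sid": "Enable IAM User Permissions",
    "Effect": "Allow",
    "Principal": {
        "AWS": "arn:aws:iam::111122223333:root"
    },
    "Action": "kms:*",
    "Resource": "*"
}
```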
The KMS console allows you to entitle Key Administrators and Key Users. Both gain the ability to grant other Principals access to the key (to perform any function) via kms:CreateGrant, while Key Users can additionally use the key directly - for example, calling kms:Decrypt - without any additional grant.
In addition, Key Policies can allow access to external entities (most commonly - other AWS Accounts), which we will review in more detail later.
For AWS-Managed CMKs, it’s better to focus your effort on the permissions granted to the services utilizing these keys, which is outside the scope of this article. For Customer-Managed CMKs, however, as you control their configuration, it’s up to you to make sure it’s not configured in a way which allows excessive permissions.
Key takeaway: Review the Key Policy for each KMS CMK looking for the Actions from Table 1 and / or a statement enabling IAM Policies on it and / or access to outside Accounts.
Keeping Tabs on Grants
You can also enable access to KMS CMKs by creating a “Grant” for a Principal allowing it to perform any Action on the CMK. A grant is a resource-based access control mechanism (in addition to the Key Policy) which can be easily granted and revoked programmatically to delegate the use of a KMS CMK. As the AWS docs state: “Because grants can be very specific, and are easy to create and revoke, they are often used to provide temporary permissions or more granular permissions.” As we know, however, mistakes tend to happen and what’s given temporarily and not revoked properly (either due to faulty programming or a runtime exception) might still be in effect.
Fortunately, AWS provides a simple CLI function called list-grants which you can use to list all the current grants for a key. Its usage is pretty straightforward, and it allows you to detect all the Principals and the Actions they are currently allowed to perform on the KMS CMK.
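For instance, piping the output of `aws kms list-grants --key-id <key-id>` into a short script gives a quick summary of who can do what. A hedged sketch - the sample JSON mirrors the documented response shape, with made-up values:

```python
import json

def summarize_grants(list_grants_json):
    """Map each grantee principal to the operations its grants allow,
    given the JSON output of `aws kms list-grants`."""
    summary = {}
    for grant in json.loads(list_grants_json)["Grants"]:
        ops = summary.setdefault(grant["GranteePrincipal"], set())
        ops.update(grant.get("Operations", []))
    return {who: sorted(ops) for who, ops in summary.items()}

# Sample output in the documented response shape (values are made up):
sample = json.dumps({"Grants": [
    {"GranteePrincipal": "arn:aws:iam::111122223333:role/app",
     "Operations": ["Decrypt", "DescribeKey"]},
    {"GranteePrincipal": "arn:aws:iam::111122223333:role/app",
     "Operations": ["Encrypt"]},
]})
```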
Key takeaway: Use the list-grants CLI command to keep track of the grants given to a KMS CMK.
Monitor External Access To KMS CMKs
You should also note that the KMS console allows you to configure the Key Policy to enable access for outside accounts, which simply turns the root user of those accounts into a Key User for the key (with the entitlements mentioned above). Once this trust has been established, sensitive permissions can be delegated by the root user of any of these outside accounts to other Principals within the outside account.
With Access Analyzer, AWS allows you to monitor outside access granted to KMS. Access Analyzer lets you know when external access is granted to resources of certain supported types (a list is available in the AWS documentation) and luckily, KMS CMKs are among them. You can review Access Analyzer manually on a periodic basis, or alternatively, if you’re using CloudWatch to monitor various events in your environment, you can configure EventBridge to monitor Access Analyzer and have events about KMS keys go to a specific Log Group in CloudWatch.
Here’s how: after enabling Access Analyzer, create a Log Group in CloudWatch (e.g. aws/events/secretsmonitoring), and then create a Rule in EventBridge using the following pattern:
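An event pattern along these lines matches Access Analyzer findings for KMS keys. This is a hedged sketch - verify the field names against the current Access Analyzer event format in the EventBridge documentation:

```json
{
    "source": ["aws.access-analyzer"],
    "detail-type": ["Access Analyzer Finding"],
    "detail": {
        "resourceType": ["AWS::KMS::Key"]
    }
}
```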
The EventBridge console should look like this when configuring the rule:
Under “Select Targets,” choose a Target to be “CloudWatch log group” and the Log Group to be the new one we created for this purpose:
Once you create and enable this rule, it will track external access to any KMS CMK and will alert about it to the Log Group in CloudWatch.
Key takeaway: Keep a close eye on outside accounts that have access to a KMS CMK. More preferably, configure Access Analyzer, EventBridge and CloudWatch to monitor events of such access being granted.
Secrets Manager Resource Based Policies
Secrets Manager Secrets are Resources for which resource-based policies are enabled. That means access to them can be granted to ANY principal from ANY account using a resource-based policy.
Unlike KMS CMKs, external access to Secrets Manager Secrets is currently not tracked by Access Analyzer so the best you can do is review their resource-based policies, either manually or by the use of a script / analysis technology, to figure out who exactly has access to them.
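For instance, after fetching each secret’s policy with `aws secretsmanager get-resource-policy`, a script can flag principals from outside your own account. A hedged sketch (the function name and the simple ARN parsing are ours):

```python
import json

def external_principals(policy_json, own_account):
    """Return principal ARNs in Allow statements that don't belong to
    own_account, given a secret's resource policy document."""
    doc = json.loads(policy_json)
    statements = doc.get("Statement", [])
    if isinstance(statements, dict):
        statements = [statements]
    outsiders = set()
    for stmt in statements:
        if stmt.get("Effect") != "Allow":
            continue
        principal = stmt.get("Principal", {})
        arns = principal.get("AWS", []) if isinstance(principal, dict) else [principal]
        if isinstance(arns, str):
            arns = [arns]
        for arn in arns:
            # ARN format: arn:aws:iam::<account-id>:... -- the account ID
            # is the fifth colon-separated field.
            if arn == "*" or (":" in arn and arn.split(":")[4] != own_account):
                outsiders.add(arn)
    return sorted(outsiders)
```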
Key takeaway: Review access granted to Secrets Manager secrets using their Resource Based Policy.
Review Roles Trust Relationships
Finally, one significant step that can help map the entities able to gain access to your sensitive Resources is to review the trust relationships of the Roles you found to have access to them - whether via an assigned policy or via a resource-based mechanism.
If the trust relationship of a Role allows the root user of an Account to assume it, then any User or Role in that Account can assume it, provided it is assigned an IAM policy delegating the sts:AssumeRole Action on that Role. This can be limited by changing the trust policy - either by setting a more specific Principal (or Principals) instead of the root, and / or by setting conditions that restrict which Principals can assume it. To fully understand who can assume the Roles with access to your sensitive Resources, review the trust relationships on said Roles.
You should note that the trust relationship of a Role may allow various entities to assume it, including an outside AWS account, an Identity Provider or an AWS Service within the account. This could open a path for entities to be granted access to the Resources through various scenarios - for example, EC2 as an AWS Service is allowed to assume a Role while an IAM User is granted access to EC2 (a very common scenario when it comes to developers).
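A first pass over a Role’s trust policy (its AssumeRolePolicyDocument) can flag the broad or external trust worth a closer manual look. A hedged sketch under the same illustrative assumptions as before:

```python
import json

def broad_trust(assume_role_policy_json, own_account):
    """Flag trust-policy principals that are account roots, external
    accounts, or wildcards -- the cases worth a closer manual review."""
    doc = json.loads(assume_role_policy_json)
    flagged = []
    for stmt in doc.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        principal = stmt.get("Principal", {})
        arns = principal.get("AWS", []) if isinstance(principal, dict) else [principal]
        if isinstance(arns, str):
            arns = [arns]
        for arn in arns:
            if arn == "*" or arn.endswith(":root"):
                # Root trust: any principal in that account can be delegated
                # sts:AssumeRole on this Role via an IAM policy.
                flagged.append(arn)
            elif ":" in arn and arn.split(":")[4] != own_account:
                flagged.append(arn)
    return flagged
```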
Key takeaway: Review the Trust Relationships of the Roles allowed access to the Resources to track which Principals can assume them.
Putting It Together
To clarify, here’s a diagram of the objects we’ve described and the relationships we’ve reviewed between them:
“Identity Provider” and “AWS Account” are marked red since they allow access via assumption of a Role to external entities. “IAM Policy” is a generalization for AWS-managed, Customer-managed and inline policies.
Note that resource-based policies are indicated in the diagram in rectangles. When it comes to assuming a Role, an IAM Policy can only delegate what the Role’s Trust Relationship has allowed.
Any combination of relationships could allow an entity to gain access to the Resource - so if you’re looking to review it manually, follow all possible paths that could eventually lead to a Resource. Finally, as we said before, remember what we’ve presented in this article is not a complete description of how access to these resources is managed. So the situation might even get more complex.
Where Do We Go From Here?
If what you’ve read here has made you think this process is complicated, I’m sad to inform you that the reality is much more grim.
However, there’s no reason to lose hope. In our next post, we’ll look at the difficulties in implementing this process and how you can use state of the art automated analysis technology to mitigate them.