Five Strategies for Mitigating the S3 Ransomware Threat

By Lior Zatlavi

Detailed steps for better ransomware protection of your AWS environment

We recently published original research revealing a very high incidence, among the enterprises surveyed, of exposure to ransomware targeting S3 buckets. If the subject troubles you (and it should), we strongly recommend reading the research summary blog or full report.

The most important takeaway from the study is a set of five strategies that you can employ today for better ransomware protection of your AWS environment by dramatically reducing exposure of your S3 buckets. We review these strategies here in detail.

1. Embrace Least Privilege Access As a Strategy

The most effective way to prevent malicious actors from performing any kind of action in your environment is to simply not grant permission where it’s not necessary. By granting the identities in your environment the bare minimum permissions needed to actually perform their jobs, you also minimize the blast radius of a hack -- which, let’s face it, is probably inevitable. Simply put, a least privilege approach is critical not just for mitigating the risk of ransomware; it is also an effective, holistic way to minimize fallout from cybersecurity breaches to your environment. Since very few identities actually need the permissions that allow them to perform such an attack across a wide range of buckets, a sensible permissions strategy, diligently enforced, is a great solution to this problem.

Below are several actions for restricting privileges in your AWS environment to reduce the chances of it being the target of a successful ransomware campaign.

Deny Sensitive Actions

You can use a resource-based policy on a bucket to easily deny certain actions and allow them only when absolutely necessary. Make sure, of course, that identities with an active need to use these permissions can still do so. That said, bear in mind that the ability to perform such policy actions should never be widely available.

We recommend that you curtail the ability to read or modify the configuration of S3 buckets by limiting actions such as:

  • s3:PutBucketPolicy
  • s3:DeleteBucketPolicy
  • s3:PutLifecycleConfiguration
  • s3:PutBucketAcl
  • s3:GetEncryptionConfiguration
  • s3:PutEncryptionConfiguration (although not covered in our research, we include this permission because it can be used maliciously and so should not be made widely available)

Deleting data from a bucket is usually not something many identities do, so you should grant an action such as s3:DeleteObject very carefully -- and s3:DeleteObjectVersion even more carefully as, when versioning is enabled, even fewer identities need to manage previous versions of an object. Finally, although seemingly counterintuitive, the ability to list the contents of a bucket using permissions such as s3:ListBucket, and to list the buckets within an environment using s3:ListAllMyBuckets, can be invaluable to a bad actor looking to access data. In any case, such permissions should not be widely assigned (after all, how often does a legitimately running service actually need to list all the buckets in an environment?).
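As an illustrative sketch (the bucket name and admin role ARN here are hypothetical placeholders), a bucket policy can deny these sensitive configuration actions to everyone except a single designated admin role:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenySensitiveBucketActions",
      "Effect": "Deny",
      "Principal": "*",
      "Action": [
        "s3:PutBucketPolicy",
        "s3:DeleteBucketPolicy",
        "s3:PutLifecycleConfiguration",
        "s3:PutBucketAcl",
        "s3:PutEncryptionConfiguration"
      ],
      "Resource": "arn:aws:s3:::example-sensitive-bucket",
      "Condition": {
        "ArnNotEquals": {
          "aws:PrincipalArn": "arn:aws:iam::111122223333:role/bucket-admin"
        }
      }
    }
  ]
}
```

Because an explicit Deny overrides any Allow, identities other than the designated role cannot change the bucket’s configuration even if their identity policies would otherwise permit it.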

KMS keys used to encrypt buckets are very sensitive resources for which access management is critical. As with S3, you will want to limit the ability of any identity to place a resource policy on the key using kms:PutKeyPolicy. Also, be sure to severely restrict any scheduling of key deletion (an action seldom actually performed) using kms:ScheduleKeyDeletion. It goes without saying that decrypting the information that the key encrypts, using kms:Decrypt, is very sensitive. Finally, similarly to listing bucket information, the ability to list keys in an environment using kms:ListKeys should also be limited as it seldom needs to actually be used.
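A key policy statement can apply the same pattern on the KMS side. The following sketch (with a hypothetical key-admin role ARN) denies key deletion scheduling and key policy changes to everyone but that role; note that in a KMS key policy the Resource must be "*", referring to the key itself:

```json
{
  "Sid": "DenyKeyDeletionExceptKeyAdmin",
  "Effect": "Deny",
  "Principal": "*",
  "Action": [
    "kms:ScheduleKeyDeletion",
    "kms:PutKeyPolicy"
  ],
  "Resource": "*",
  "Condition": {
    "ArnNotEquals": {
      "aws:PrincipalArn": "arn:aws:iam::111122223333:role/key-admin"
    }
  }
}
```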

Find and “Privatize” All Public Buckets

Making buckets public via ACLs -- that is, giving permission to all AWS users to perform certain actions on buckets -- has long been considered bad practice and, we dare say, is outright irresponsible. Finding and removing ACLs that make buckets public is quite easy. To make things easier, you can also prevent all public access via ACLs by configuring the “Block public access” settings at the account level.
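The account-level configuration can be applied with the `aws s3control put-public-access-block` CLI command, passing a configuration like the following (all four flags enabled blocks public access both via ACLs and via bucket policies):

```json
{
  "BlockPublicAcls": true,
  "IgnorePublicAcls": true,
  "BlockPublicPolicy": true,
  "RestrictPublicBuckets": true
}
```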

It’s not only ACLs that can make buckets public. A bucket policy with a statement granting access, such as this:

"Principal": { "AWS": "*" }

may cause more than just the identities in the account to have access to the bucket as it literally applies to all AWS identities in any account.

There are several ways to automatically detect if a bucket is public (for example, this post by Auth0 shows how you can find public buckets via ACLs). Tools like Ermetic can automatically show you buckets that are public due to ACLs or a bucket policy. Of course, the hardest part is making the necessary business-internal changes to ensure that no buckets need to have their contents public and, if they do, providing an alternative way to share it.

Separation of Duties

As indicated by the research findings, performing a ransomware attack seems to require a bad actor to combine two distinct kinds of actions: for example, reading information and also deleting versions or configuring bucket lifecycle rules. When defining permissions, it’s usually best to have different identities perform different actions so that, should one be compromised, the damage is limited. Scenarios in which a combination of permissions allows a single identity to perform ransomware are powerful examples of why separation of duties is good practice.

Remove Inactive Roles and Inactive Users

Finally, security practitioners often overlook identities that are no longer needed and are therefore no longer in use. The existence of such identities is in fact a violation of the principle of least privilege. Removing them is an easy win on the way toward implementing least privilege. You can use out-of-the-box tools such as AWS’s Access Advisor to find identities that are currently unused -- as shown in this AWS blog. AWS also lets you find unused user credentials which, once disabled, can help improve your security posture.

2. Remove Risk Factors

Improving the security posture of your cloud environment is an ongoing effort. A comprehensive look at all risk factors exceeds the scope of this post. However, certain “easy wins” that are rather simple to implement can dramatically reduce the chance of identities in your environment being exploited by ransomware -- or other malicious attacks.

Probably the most notable framework for preventing security risk factors is the CIS Benchmark which, among other things, includes rotating access keys, enabling MFA for users and disabling unused credentials. Although these core protective measures seem obvious, we often see their absence in real environments.

We also highly recommend taking public exposure to the internet very seriously -- such as EC2 instances or ECS Services running task definitions. Obviously some resources need to be exposed to the internet. However, there’s little reason for those resources to have highly permissive privileges. In our research, we found examples of public resources that can wreak havoc on S3 buckets in accounts that lacked justification for such excessive access.

Finally, we recommend using out-of-the-box Amazon features such as ECR vulnerability scanning when pushing images and as needed on a regular basis to make sure unnecessary vulnerabilities are not left unmitigated.

3. Perform Logging and Monitoring

Another crucial layer of security is getting notified when certain events occur, such as when sensitive actions are triggered on a bucket. This can mean the difference between a malicious campaign being successful or not.

As described in Rhino Security Labs’ review of possible mitigations, having AWS CloudTrail enabled on every bucket with logging of both management and data events can be quite expensive -- but it’s still necessary on the buckets that matter most (that is, those that contain sensitive and/or mission-critical information). Also, as a review by SummitRoute points out, you can use advanced event selectors for more granular control of data event logging.
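For example, an advanced event selector like the following (the bucket name is a hypothetical placeholder) limits data event logging to objects in a single critical bucket, keeping costs down while still covering what matters most:

```json
[
  {
    "Name": "Log S3 data events for one critical bucket only",
    "FieldSelectors": [
      { "Field": "eventCategory", "Equals": ["Data"] },
      { "Field": "resources.type", "Equals": ["AWS::S3::Object"] },
      { "Field": "resources.ARN", "StartsWith": ["arn:aws:s3:::example-critical-bucket/"] }
    ]
  }
]
```

You can pass such a document to CloudTrail via `aws cloudtrail put-event-selectors --advanced-event-selectors`.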

You can stream events to a SIEM or, more simply, create CloudWatch alerts for sensitive events.

The most important events to monitor are those that you can respond to effectively. So, for example, if you get alerted on a deletion job being scheduled for a KMS key (that is, by a notification triggered by ScheduleKeyDeletion), you have at least seven days to respond before the key will actually be deleted. Also, as mentioned, the effect of a lifecycle configuration (applied via the PutBucketLifecycle event) takes at least two days -- usually more than enough time for an effective response.
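One lightweight way to get such alerts is an EventBridge rule matching these API calls as they arrive via CloudTrail. A sketch of the event pattern (the exact lifecycle event name can vary between PutBucketLifecycle and PutBucketLifecycleConfiguration, so both are listed):

```json
{
  "source": ["aws.kms", "aws.s3"],
  "detail-type": ["AWS API Call via CloudTrail"],
  "detail": {
    "eventName": [
      "ScheduleKeyDeletion",
      "PutBucketLifecycle",
      "PutBucketLifecycleConfiguration"
    ]
  }
}
```

A rule with this pattern can target an SNS topic or incident-response Lambda, giving you the head start described above.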

You will also want to monitor events that can be indicators of reconnaissance activity, such as ListBuckets or ListKeys. Also, ListObjects / ListObjectsV2 and ListObjectVersions may indicate that someone is collecting information prior to an attack on a bucket.

We also recommend that you track events such as DeleteObject and DeleteObjectVersion. However, know that if they are used effectively and you find out about them during a ransomware attack, it will probably be too late. The same goes for PutBucketPolicy and PutKeyPolicy.

4. Use Native “Delete Prevention” Mechanisms

AWS has two mechanisms that may help you effectively prevent objects or versions from deletion. These mechanisms may not be appropriate in all scenarios but, if applicable, can be quite useful.

Object locking lets you define a retention period or legal hold. The mechanism lets you set a default retention period for objects in one of two modes: governance or compliance. To make the most of it, use compliance mode, which makes it impossible -- even for the root user -- to delete the object until the retention period ends.
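A default retention rule in compliance mode can be set with a configuration like the following (the 30-day period is purely illustrative; choose a duration that fits your cost and recovery requirements), applied via `aws s3api put-object-lock-configuration`:

```json
{
  "ObjectLockEnabled": "Enabled",
  "Rule": {
    "DefaultRetention": {
      "Mode": "COMPLIANCE",
      "Days": 30
    }
  }
}
```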

The challenge with object locks (and versioning management as a whole) is usually in managing the time duration that versions are locked. While, theoretically, you can retain versions for a very long time, doing so has huge cost implications, especially if objects in the bucket are updated regularly.

Another consideration is that you must enable object locks when a bucket is created as you will not be able to do so once the bucket exists. So if you want to use object locks on an existing bucket that is not already configured for object locks, you need to migrate the bucket to a new one and then configure the object locks. Also, if you enable object locks on a bucket, you should protect your accounts against a denial of wallet attack. If someone with malicious intent locks objects in compliance mode for an exceptionally long retention period you will not be able to delete the object (see SummitRoute’s analysis and example of how to protect against denial of wallet).

Alternatively, you can protect a bucket with MFA Delete; doing so requires using the root user and its MFA token to permanently delete object versions. Using MFA Delete is, of course, extremely effective against malicious deletions -- however, a bit “too” effective. In most scenarios, requiring both the root user and its MFA token to delete an object is an excessive demand. Also, in general, using the root user for such actions is considered bad practice. You can achieve similar protection with a bucket policy condition that requires MFA on specific actions in order to perform them. For example, you can include the following statement in a bucket resource policy:

{
  "Effect": "Deny",
  "Principal": "*",
  "Action": ["s3:DeleteObject", "s3:DeleteObjectVersion"],
  "Resource": "arn:aws:s3:::example-bucket/*",
  "Condition": {
    "BoolIfExists": {
      "aws:MultiFactorAuthPresent": "false"
    }
  }
}

This protective action is much easier to configure and still allows identities other than the root user to perform the delete operation. However, this approach won’t be useful in scenarios in which the identity seeking to perform the deletion is a service rather than a human (e.g. a Lambda function) and/or using MFA is operationally not possible.

Therefore, we were not surprised to discover that, of the thousands of buckets reviewed in our research sampling, none had MFA Delete enabled.

5. Replicate Buckets

AWS offers a built-in mechanism for replicating bucket contents to other S3 buckets for backup purposes. As mentioned, one valuable use of bucket replication is to mitigate malicious delete operations. The mechanism is easy to use and an extremely effective solution should the original bucket get compromised. However, you will want to be aware of a few things when using this mechanism in the context of mitigating ransomware.

First, there’s the cost. Replication requires versioning to be enabled on a bucket. So, unless you need versioning for business purposes, the effective cost of using this solution is not only for another bucket but also for managing version retention on the original and backup buckets. If you choose to replicate the bucket to a different region, bear in mind that data transfer has a significant cost unto itself.

Furthermore, to truly make the most of this mechanism, you need to make sure that the bucket to which you are replicating the data is inherently more secure than the original. Keep in mind that in our research the identities found vulnerable to compromise had exposure to 90% or more of the buckets in certain accounts. Fortunately, when used only as replication targets, buckets are much easier to secure -- because they will be used for very specific actions and by very specific identities. It’s therefore more straightforward to apply and monitor a restrictive resource policy on such buckets. You can even replicate to a bucket in a completely different account -- though keep in mind that placing all replications in one account is not recommended as doing so would create (as also noted by SummitRoute) a single point of failure for security.
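A cross-account replication configuration along these lines can be sketched as follows (the role ARN, destination bucket, and account ID are hypothetical placeholders). Note that delete marker replication is disabled here so that deletions in the source bucket are not mirrored to the backup:

```json
{
  "Role": "arn:aws:iam::111122223333:role/s3-replication-role",
  "Rules": [
    {
      "ID": "ReplicateToBackupAccount",
      "Status": "Enabled",
      "Priority": 1,
      "Filter": {},
      "DeleteMarkerReplication": { "Status": "Disabled" },
      "Destination": {
        "Bucket": "arn:aws:s3:::example-backup-bucket",
        "Account": "444455556666",
        "AccessControlTranslation": { "Owner": "Destination" }
      }
    }
  ]
}
```

The AccessControlTranslation setting transfers object ownership to the destination account, so a compromise of the source account does not carry over control of the replicated copies.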

Conclusion

If you’re a cloud security stakeholder, our research should be a wake-up call to improve the ransomware protection of your environment. If most of the organizations surveyed were found to be vulnerable to ransomware exposure of their S3 buckets via compromised identities -- how about yours? AWS S3 buckets often serve as a replication destination for backing up sensitive data. Yet, if inadequately secured, these same “safety” buckets can, ironically, worsen an organization’s security posture -- by creating that many more buckets for malicious targeting.

The good news is that cloud native mechanisms are readily available that, applied correctly, can go far in helping close the ransomware exposure security gap. They have their downsides and they certainly don’t provide automation for comprehensively detecting and mitigating potential S3 ransomware exposure -- but cloud infrastructure entitlement management solutions exist that can help with that.