What are two security options in Amazon S3 that can be used to prevent accidental deletion of objects?

Amazon S3, or Simple Storage Service, is an object storage service and one of the most widely used services in AWS. With the simple web interface that S3 provides, it is easy to use S3 as a remote storage location for our web applications. S3 can therefore easily become the store for some of the most critical data of an application, which makes the security of your S3 buckets crucial.

When it comes to security in AWS services, AWS follows a shared responsibility model that involves both AWS and the customer. While AWS takes responsibility for “security of the cloud”, the customer is responsible for “security in the cloud”.

Shared responsibility model for AWS cloud security

Security of the cloud

AWS takes responsibility for securing all the infrastructure that runs underneath the AWS cloud: the hardware, software, networking and the facilities that run the cloud services.

Security in the cloud

Customers are responsible for security in the cloud, based on the services they select. For some services, such as EC2 instances, customers have more responsibility and control when setting up and securing the service: they must define the access controls and many other configurations needed to keep the instance secure. Other services, such as Lambda, take care of most of these configurations and leave the user with fewer security concerns about their compute instances.

Security for S3 buckets can be discussed under three major topics.

  1. Access control
  2. Data protection
  3. Monitoring and audit

Access control

By default, S3 buckets are private and access to all objects is restricted for external users. AWS also provides many options that let the owner decide exactly who has access to what.

Follow least privilege access control model

A least privilege access control model grants users only the permissions that are absolutely necessary. It can be implemented by starting with a role that has no permissions at all and then gradually adding permissions for the required actions. Since IAM allows fine-grained access to be defined down to the object level, IAM can be used to implement the least privilege access control model for S3.
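
As a minimal sketch using boto3, a least-privilege inline policy could allow only reading and writing objects under a single prefix. The bucket name my-app-bucket and the role name app-role are hypothetical placeholders, not anything defined in this article.

```python
import json
import boto3

iam = boto3.client("iam")

# Least-privilege policy: only read/write objects under one prefix
# of a single bucket ("my-app-bucket" and "app-role" are placeholder names).
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": "arn:aws:s3:::my-app-bucket/uploads/*",
        }
    ],
}

iam.put_role_policy(
    RoleName="app-role",
    PolicyName="s3-uploads-least-privilege",
    PolicyDocument=json.dumps(policy),
)
```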

S3 Access points

Access points are a feature introduced by AWS to give different consumers access to shared data sets. Each access point provides a unique hostname and can be attached to its own policy to differentiate user access to the data in the bucket. Furthermore, any access point can be restricted to a VPC, ensuring that only the client’s private network can access the data. AWS suggests that access points are a good option for the following scenarios (a small example follows the list).

  • Multiple applications sharing large data sets — different access points can be created for different applications, ensuring each application has access only to its specific share of the data.
  • Restrict access to a VPC — access points can be restricted so that they can only be reached from a specific VPC.
  • Test new access policies — a policy can be attached to a new access point and tested without affecting existing users or applications.
  • Limit access to specific account IDs — access point policies can be defined so that only data owned by specified account IDs can be accessed.
  • Provide a unique name — each access point has a unique hostname within the account and Region, which for example allows having separate ‘test’ endpoints for your S3 bucket.
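
A hedged sketch with boto3: creating an access point on a shared bucket and restricting it to a single VPC. The account ID, bucket name, access point name and VPC ID are all placeholders.

```python
import boto3

s3control = boto3.client("s3control")

# Create an access point on an existing bucket and restrict it to one VPC.
# Account ID, bucket, access point and VPC identifiers are placeholders.
s3control.create_access_point(
    AccountId="123456789012",
    Name="analytics-app-ap",
    Bucket="shared-data-bucket",
    VpcConfiguration={"VpcId": "vpc-0abc1234"},
)
```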

IAM roles for services accessing S3

It is always best to use IAM roles instead of IAM users when other AWS services have to access an S3 bucket. The least privilege access model can be applied here as well, to ensure no service has unnecessary access to the S3 buckets or their objects.
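
For example, a role that EC2 instances can assume (instead of embedding IAM user credentials) could be created as below. This is only a sketch; the role name matches the hypothetical app-role used in the earlier policy example.

```python
import json
import boto3

iam = boto3.client("iam")

# Trust policy so EC2 instances (not an IAM user) can assume the role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "ec2.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}

iam.create_role(
    RoleName="app-role",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)
# A least-privilege S3 policy (as in the earlier sketch) can then be attached
# to this role instead of creating long-lived user credentials.
```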

Pre-signed URLs

Pre-signed URLs can be used to grant temporary access to a specific object, or to allow a user to upload an object to an S3 bucket, without granting permanent access to the bucket.
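
A minimal sketch with boto3, assuming the same placeholder bucket and object keys: one URL grants temporary read access, the other a temporary upload slot, and both expire after 15 minutes.

```python
import boto3

s3 = boto3.client("s3")

# Temporary download link, valid for 15 minutes (bucket and key are placeholders).
download_url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-app-bucket", "Key": "reports/2021-06.pdf"},
    ExpiresIn=900,
)

# Temporary upload link for a single object.
upload_url = s3.generate_presigned_url(
    "put_object",
    Params={"Bucket": "my-app-bucket", "Key": "uploads/new-file.txt"},
    ExpiresIn=900,
)

print(download_url)
print(upload_url)
```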

Block public access

Always block public access to S3 buckets unless it is necessary to allow it.
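
The bucket-level public access block settings can be enabled with a single call; a sketch for the hypothetical bucket used above:

```python
import boto3

s3 = boto3.client("s3")

# Turn on all four public-access block settings for the bucket.
s3.put_public_access_block(
    Bucket="my-app-bucket",
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```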

Data protection

Once data is uploaded to an S3 bucket, further steps need to be taken to protect it against corruption, loss, and malicious or accidental removal or modification. S3 provides several mechanisms for data protection, such as:

  • Encryption
  • Versioning
  • S3 replication
  • Object lock

Encryption

There are two main types of encryption that can be enabled in S3.

  1. Data at rest encryption — encryption of data stored at the storage location while it is not moving through the network
  2. Data in transit encryption — encryption of data before transmission, authentication of the endpoints, and decryption and verification of the data on arrival

Data at rest encryption

Amazon S3 offers three methods for data at rest encryption (a minimal example follows the list).

  1. Server-side encryption with Amazon S3 managed keys (SSE-S3): encryption is done on the server side (S3) using keys managed by Amazon S3 itself with AES-256. Each object is encrypted with a unique key, and this unique key is itself encrypted with a master key that is regularly rotated.
  2. Server-side encryption with AWS Key Management Service keys (SSE-KMS): encryption is done using keys handled and managed by KMS. Since separate permissions are attached to KMS keys, different keys can be used for different sets of objects to protect against unauthorized access. KMS also provides an audit trail, which makes it easier to track which key was used, by whom and on what.
  3. Server-side encryption with customer-provided keys (SSE-C): encryption is done using a key provided by the customer. The customer is therefore responsible for handling, rotating and storing the keys, while S3 is responsible for the encryption itself.
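
A hedged boto3 sketch of the first two options: setting SSE-S3 as the default encryption for a bucket, and uploading a single object encrypted with a KMS key instead. The bucket name and the KMS key alias are placeholders.

```python
import boto3

s3 = boto3.client("s3")

# Default bucket encryption with S3-managed keys (SSE-S3 / AES-256).
s3.put_bucket_encryption(
    Bucket="my-app-bucket",
    ServerSideEncryptionConfiguration={
        "Rules": [
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
        ]
    },
)

# Encrypt a single object with a KMS key instead (SSE-KMS); the key alias is a placeholder.
s3.put_object(
    Bucket="my-app-bucket",
    Key="sensitive/data.json",
    Body=b"{}",
    ServerSideEncryption="aws:kms",
    SSEKMSKeyId="alias/my-app-key",
)
```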

Data in transit encryption

S3 supports encrypting data in transit using SSL/TLS. HTTPS over TLS protects the data from sniffing and man-in-the-middle attacks, and bucket policies can be set so that only encrypted connections over HTTPS/TLS are allowed.
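
One common way to enforce this is a bucket policy that denies any request not made over TLS, using the aws:SecureTransport condition key. A sketch for the placeholder bucket:

```python
import json
import boto3

s3 = boto3.client("s3")

# Deny any request to the bucket or its objects that does not arrive over HTTPS/TLS.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyInsecureTransport",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::my-app-bucket",
                "arn:aws:s3:::my-app-bucket/*",
            ],
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        }
    ],
}

s3.put_bucket_policy(Bucket="my-app-bucket", Policy=json.dumps(policy))
```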

Versioning

S3 versioning lets users maintain separate versions of each object in a bucket. This makes it easy to recover from accidental deletions or data corruption by restoring a previous version.
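
Versioning is enabled per bucket; a minimal boto3 sketch for the placeholder bucket:

```python
import boto3

s3 = boto3.client("s3")

# Enable versioning so deleted or overwritten objects can be restored
# from an earlier version.
s3.put_bucket_versioning(
    Bucket="my-app-bucket",
    VersioningConfiguration={"Status": "Enabled"},
)
```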

Object Lock

Object Lock in Amazon S3 follows the write-once-read-many (WORM) model. It lets users upload an object to an S3 bucket and prevent that object from being deleted or overwritten for a specified time period, or indefinitely. Object Lock provides two methods to manage this protection (see the sketch after the list).

  1. Retention period: the user specifies a time period, and during this period the object cannot be modified or deleted
  2. Legal hold: provides the same protection as a retention period, but a legal hold has no expiration date. To modify or delete an object under legal hold, the legal hold first has to be explicitly removed.
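
A hedged boto3 sketch of both methods, assuming a bucket that already has Object Lock (and therefore versioning) enabled; the bucket name, key and retention date are placeholders.

```python
from datetime import datetime, timezone

import boto3

s3 = boto3.client("s3")

# Retention period: this object version cannot be modified or deleted
# until the given date (GOVERNANCE mode shown; COMPLIANCE is stricter).
s3.put_object_retention(
    Bucket="my-locked-bucket",
    Key="invoices/2021-06.pdf",
    Retention={
        "Mode": "GOVERNANCE",
        "RetainUntilDate": datetime(2022, 1, 1, tzinfo=timezone.utc),
    },
)

# Legal hold: same protection but with no expiration date;
# it must be removed explicitly (Status "OFF") before the object can change.
s3.put_object_legal_hold(
    Bucket="my-locked-bucket",
    Key="invoices/2021-06.pdf",
    LegalHold={"Status": "ON"},
)
```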

Monitor and Audit

Once access control and data protection are set up properly, it is important to enable monitoring so that any issues can be identified. Amazon S3 provides a range of options for monitoring; the following are discussed in this article.

  • Server access logs
  • CloudWatch metrics

Server access logs

S3 server access logs allow all requests made to an S3 bucket to be logged and stored in a separate bucket. Services like Amazon Athena can then be used to analyze this data further. It is important not to use the same bucket as the storage location for the access logs, as that would create an infinite loop of requests to the bucket generating new logs.
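
A minimal sketch of enabling access logging with boto3; the source and target bucket names are placeholders, and the target bucket must already allow log delivery.

```python
import boto3

s3 = boto3.client("s3")

# Deliver access logs for "my-app-bucket" into a separate logging bucket,
# never into the source bucket itself.
s3.put_bucket_logging(
    Bucket="my-app-bucket",
    BucketLoggingStatus={
        "LoggingEnabled": {
            "TargetBucket": "my-app-access-logs",
            "TargetPrefix": "my-app-bucket/",
        }
    },
)
```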

CloudWatch metrics

CloudWatch metrics can be enabled to provide near real-time data about an S3 bucket, which helps in understanding and acting on any issues. Fine-grained analytics can be obtained because a CloudWatch request metrics configuration can be filtered down to a specific prefix or set of object tags rather than covering the whole bucket.
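
A hedged sketch of a request metrics configuration scoped to one prefix of the placeholder bucket:

```python
import boto3

s3 = boto3.client("s3")

# Enable CloudWatch request metrics only for objects under the "uploads/" prefix.
s3.put_bucket_metrics_configuration(
    Bucket="my-app-bucket",
    Id="uploads-metrics",
    MetricsConfiguration={
        "Id": "uploads-metrics",
        "Filter": {"Prefix": "uploads/"},
    },
)
```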

Conclusion

Since Amazon S3 has become one of the most popular solutions for object storage, the security of S3 has also become crucial. AWS ensures the security of the cloud; as users, we need to follow best practices to ensure security in the cloud. Implementing some of these practices can be expensive, but it is worth remembering that a data breach can be far more expensive. It is therefore important to follow best practices and do our best to protect our data, and knowing the available options matters when a trade-off has to be made between security and budget.

