S3 Cost: Amazon Cloud Storage Costs Explained
Amazon’s Simple Storage Service (S3) is the world’s most popular cloud object storage solution due to its durability, availability, and scalability. However, despite the name, S3 cost calculations are far from simple. We explain why this is and what you can do about it.
If you’re looking for a durable, available, and scalable storage solution at the lowest possible cost, Amazon’s Simple Storage Service (S3) is likely your best option. Amazon S3 takes advantage of the world’s largest global cloud infrastructure to deliver 99.999999999% (eleven nines) durability and virtually unlimited storage, while protecting sensitive data with a comprehensive range of security and compliance capabilities.
So, what does S3 cost? On the surface, it appears very straightforward, with Standard S3 typically costing from $0.021 to $0.026 per GB stored per month depending on the Region and the volume of data stored. However, the variety of features and sub-services AWS layers on top of basic storage can quickly make billing confusing. Let’s break down the different types of Amazon object storage and how you’re charged for them.
S3 cost: AWS cloud storage costs explained
When you first start using Amazon S3 as a new customer, you can take advantage of a free usage tier. This gives you 5GB of S3 storage in the Standard Storage class, 2,000 PUT requests, 20,000 GET requests, and 15 GB of data transfer out of your storage “bucket” each month free for one year. If you exceed the limits of the free tier, or when the offer expires, you pay the standard Amazon S3 cost for what you use.
Beyond the free tier, Amazon S3 cost is based on multiple factors. Primarily these include the volume of data you want to store, how often you will want to access it, add to it, or retrieve it, and the speed at which you want to retrieve your data. Then, there’s a choice of how many regions your data is replicated across—with some considerably cheaper options if durability is not a concern.
There are six “storage classes”—Standard, Intelligent Tiering, Infrequent Access, One-Zone Infrequent Access, Glacier, and Glacier Deep Archive. Which S3 storage class is right for your data will likely depend on how often you want to access it.
S3 Standard Storage
S3 Standard Storage is suitable for the general-purpose storage of frequently accessed data. Because you only pay for what you use, S3 Standard Storage is suitable for most cases including data-intensive, user-generated content, such as photos and videos.
S3 Infrequent Access Storage
S3 Infrequent Access Storage is great for storing data you don’t need frequent access to, but may need to access in a hurry—i.e. for disaster recovery. The Amazon S3 cost for Infrequent Access Storage is less than Standard Storage, but you pay more each time you access or retrieve data.
S3 One Zone Infrequent Access Storage
Usually, when data is assigned to a Region, it’s distributed between at least three Availability Zones in order to maximize durability. For data that isn’t accessed often, but still needs quick retrieval times and can tolerate lower availability rates, use One Zone Infrequent Access Storage. One Zone stores data in a single Availability Zone and offers a 20% discount compared to Standard Infrequent Access.
S3 Reduced Redundancy
Similar to S3 One-Zone Infrequent Access Storage, S3 Reduced Redundancy was originally introduced to offer a lower-priced option for storage that was replicated fewer times than standard S3.
However, the pricing for this hasn’t been updated in several years, so it’s effectively become end-of-life as it’s now more expensive than Standard redundancy S3 storage!
Bottom line: if you have data stored on S3 Reduced Redundancy, get it off there ASAP and move to either S3 Standard or S3 One Zone Infrequent Access.
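If you do have objects lingering in Reduced Redundancy, one way to move them is an in-place copy that rewrites each object into a cheaper class. The sketch below uses boto3 (Python); the bucket and key names are placeholders, and at scale you would normally use a lifecycle rule or S3 Batch Operations rather than copying objects one at a time.

```python
import boto3

s3 = boto3.client("s3")
bucket = "my-legacy-bucket"          # hypothetical bucket name
key = "reports/2019-archive.csv"     # hypothetical object key

# Copying an object onto itself with a new StorageClass rewrites it in the
# cheaper class; here we move it from REDUCED_REDUNDANCY to STANDARD.
s3.copy_object(
    Bucket=bucket,
    Key=key,
    CopySource={"Bucket": bucket, "Key": key},
    StorageClass="STANDARD",         # or "ONEZONE_IA" for colder data
    MetadataDirective="COPY",        # keep the existing object metadata
)
```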
S3 Glacier Storage
S3 Glacier storage is for long-term data archiving. Typically this storage class is used when record retention is required for compliance purposes. Retrieval requests can take up to five hours to complete, which is why this is an inappropriate storage class for data you want to access quickly.
S3 Glacier Deep Archive
For even longer-term data archiving, S3 Glacier Deep Archive offers cost-saving opportunities for data that is retrieved one or two times per year. An important consideration for organizations with large volumes of data to archive is that it may take up to 12 hours to resolve data retrieval requests.
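Retrieval from the two Glacier classes is asynchronous: you first issue a restore request, choose how quickly you want the data back, and then download the temporary copy once the job completes. Below is a minimal boto3 sketch with hypothetical bucket and key names; note that the Expedited tier is not available for Deep Archive.

```python
import boto3

s3 = boto3.client("s3")

# Ask S3 to stage a temporary, downloadable copy of an archived object.
# "Standard" retrieval from Glacier typically completes within hours;
# "Bulk" is cheaper but slower, "Expedited" is faster but pricier.
s3.restore_object(
    Bucket="my-archive-bucket",               # hypothetical bucket
    Key="compliance/2015-records.tar.gz",     # hypothetical key
    RestoreRequest={
        "Days": 7,  # how long the restored copy remains available
        "GlacierJobParameters": {"Tier": "Standard"},
    },
)

# A later head_object call reports progress in the "Restore" response field.
```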
S3 Intelligent Tiering
Amazon also provides an Intelligent Tiering service that automatically moves data between the Standard S3 and Standard Infrequent Access tiers depending on access patterns. The service can significantly reduce management overhead, but incurs a monitoring and automation charge of $0.0025 per thousand objects monitored per month.
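Opting an object into Intelligent Tiering is simply a matter of setting its storage class at upload time (or via a lifecycle rule). A minimal boto3 sketch, assuming placeholder bucket, key, and file names:

```python
import boto3

s3 = boto3.client("s3")

# Upload directly into Intelligent-Tiering; S3 then moves the object between
# its frequent and infrequent tiers based on observed access patterns.
with open("user-upload.jpg", "rb") as f:       # hypothetical local file
    s3.put_object(
        Bucket="my-media-bucket",              # hypothetical bucket
        Key="photos/user-upload.jpg",          # hypothetical key
        Body=f,
        StorageClass="INTELLIGENT_TIERING",
    )
```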
Confused about when to use each class of S3? Use this decision tree to clarify when each Amazon cloud storage class is most appropriate:
[Decision tree: choosing an S3 storage class based on how often, and how quickly, the data needs to be accessed]
It’s important to note that a minimum billable object size applies to data stored in the two Infrequent Access classes (currently 128KB), and minimum storage duration charges apply to the two Infrequent Access classes (currently 30 days), the S3 Glacier class (currently 90 days), and the S3 Glacier Deep Archive class (currently 180 days).
In addition to the above considerations, you also have to take the following into account:
The cost of S3 requests
Although you’re allowed a certain number of PUT and GET requests in the free usage tier, other requests have to be paid for, along with any PUT and GET requests above the free tier’s monthly limit. Using the US East (Ohio) Region as an example, PUT, COPY, and POST requests cost $0.005 per 1,000 requests in the S3 Standard, Reduced Redundancy, Glacier, and Glacier Deep Archive classes, and double that amount in the two Infrequent Access classes.
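To see how request charges accumulate, here is a back-of-the-envelope estimate in Python. The request counts are made up for illustration, and the GET rate shown ($0.0004 per 1,000 requests) is the Standard-class figure AWS published alongside the PUT rate quoted above.

```python
# Rough request-cost estimate at US East (Ohio) Standard-class rates.
PUT_RATE = 0.005 / 1_000   # $ per PUT/COPY/POST request
GET_RATE = 0.0004 / 1_000  # $ per GET request (assumed Standard-class rate)

puts_per_month = 2_000_000    # hypothetical workload
gets_per_month = 50_000_000

cost = puts_per_month * PUT_RATE + gets_per_month * GET_RATE
print(f"Estimated request charges: ${cost:,.2f} per month")
# 2M PUTs -> $10.00, 50M GETs -> $20.00, total $30.00
```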
Data retrieval costs
There are also per-GB costs associated with retrieving data from the Infrequent Access and Glacier classes. Retrieval from the Infrequent Access classes is immediate, while in the Glacier classes you pay a premium for faster retrieval tiers, although Amazon’s “S3 Select” feature enables you to retrieve subsets of data from an object or archive in less time.
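S3 Select runs a simple SQL expression against a single object so that only the matching rows come back over the wire, instead of downloading and filtering the whole file yourself. A minimal boto3 sketch against a hypothetical CSV object:

```python
import boto3

s3 = boto3.client("s3")

# Pull only the rows we care about from a large CSV, not the whole object.
response = s3.select_object_content(
    Bucket="my-data-bucket",          # hypothetical bucket
    Key="exports/orders.csv",         # hypothetical CSV object
    ExpressionType="SQL",
    Expression="SELECT s.order_id, s.total FROM S3Object s WHERE s.country = 'DE'",
    InputSerialization={"CSV": {"FileHeaderInfo": "USE"}},
    OutputSerialization={"CSV": {}},
)

# The result arrives as an event stream; Records events carry the data.
for event in response["Payload"]:
    if "Records" in event:
        print(event["Records"]["Payload"].decode("utf-8"), end="")
```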
Data transfer pricing
Standard data transfers into an S3 bucket from the internet are free of charge (for accelerated data transfers, see Transfer Acceleration Pricing below), but transferring data in the opposite direction can add up. Amazon has a tiered data transfer pricing structure depending on the volume of data transferred out from an S3 service to the internet each month (a rough cost calculator follows the tier list below):
Data Transfer Out from S3 to Internet – US East (Ohio) Region – September 2020
- Up to 1GB per month—Free
- Next 9.999TB per month—$0.0900 per GB
- Next 40TB per month—$0.0850 per GB
- Next 100TB per month—$0.0700 per GB
- More than 150TB per month—$0.0500 per GB
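For a rough idea of what a given month of egress would cost under these tiers, a small calculator like the sketch below works. It assumes 1TB = 1,024GB and uses the September 2020 US East (Ohio) rates listed above; the 20TB figure is just an example workload.

```python
# Estimate monthly S3 -> internet data transfer cost from the tiered rates above.
GB_PER_TB = 1024  # assumption: binary TB, matching how storage is measured

# (tier size in GB, $ per GB), in the order the tiers are consumed
TIERS = [
    (1, 0.0),                    # first 1GB free
    (9.999 * GB_PER_TB, 0.09),   # next 9.999TB
    (40 * GB_PER_TB, 0.085),     # next 40TB
    (100 * GB_PER_TB, 0.07),     # next 100TB
    (float("inf"), 0.05),        # everything above 150TB
]

def transfer_out_cost(gb_out: float) -> float:
    """Walk the tiers, charging each slice of the month's egress at its rate."""
    remaining, cost = gb_out, 0.0
    for size, rate in TIERS:
        slice_gb = min(remaining, size)
        cost += slice_gb * rate
        remaining -= slice_gb
        if remaining <= 0:
            break
    return cost

print(f"20TB out: ${transfer_out_cost(20 * GB_PER_TB):,.2f}")
```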
You’ll also incur charges if you transfer data from one Region to another in order to take advantage of cheaper storage costs. For example, if you were to transfer data from the US West (Northern California) Region—where Infrequent Access Storage costs $0.019 per GB—to the US East (Ohio) Region—where Infrequent Access Storage costs $0.0125 per GB—the transfer would cost you $0.02 per GB.
Transfer acceleration pricing
If you need faster data transfers—for example, to support global multiplayer video games—you can pay a premium for accelerated data transfers. Typically, accelerated data transfers between an S3 bucket and the internet cost $0.0400 per GB in both directions ($0.0800 per GB outside the US, Europe, and Japan), but you won’t be charged if the accelerated transfer is no faster than a standard transfer.
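Transfer Acceleration is a per-bucket setting; once enabled, clients opt in by routing requests through the bucket’s accelerate endpoint. A minimal boto3 sketch, assuming a placeholder bucket and file name:

```python
import boto3
from botocore.config import Config

s3 = boto3.client("s3")

# Enable Transfer Acceleration on the bucket (a one-time configuration change).
s3.put_bucket_accelerate_configuration(
    Bucket="my-global-assets",                  # hypothetical bucket
    AccelerateConfiguration={"Status": "Enabled"},
)

# Clients then send uploads/downloads through the accelerate endpoint.
accelerated_s3 = boto3.client(
    "s3", config=Config(s3={"use_accelerate_endpoint": True})
)
accelerated_s3.upload_file("game-patch.bin", "my-global-assets", "patches/game-patch.bin")
```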
Other accelerated transfer options
For one-way data transfers into an S3, Glacier Storage, or Glacier Deep Archive bucket, AWS offers four accelerated transfer options other than the internet. These are useful substitutes if you have a large amount of data to transfer, but they come at a premium that will increase your Amazon S3 costs.
- AWS Direct Connect transfers data through a dedicated port on a private network connection at speeds of up to 10 Gbps, but at a cost of up to $2.25 per hour. This cost can be a factor if redundancy is required and you have to use more than one direct connection.
- Amazon Kinesis Data Firehose allows you to save streamed data to an S3 bucket. Depending on the format of the data, you may have to pay a conversion charge of $0.018 per GB as well as an ingestion charge of up to $0.029 per GB (US East Ohio Region).
- AWS Snowball is a petabyte-scale physical data transport solution that uses secure devices to transfer large amounts of data into and out of S3 storage. You pay a service fee “per job” plus daily charges (after 10 days), shipping costs and data transfer fees when moving data out of S3 storage.
- AWS Snowmobile is like Snowball but bigger, much bigger. The physical form is a 45-foot long shipping container pulled on a truck bed, which can hold up to 100PB per transfer. The $0.005 per GB per month cost is based on the amount of data provisioned, and the charging period runs from the time the Snowmobile container departs an AWS data center to the time data ingestion into AWS is complete.
Lifecycle transition requests
There are times when you’ll want to transfer data out of one storage class into another to take advantage of lower S3 costs—for example, when data stored in a Standard Storage bucket isn’t being accessed very frequently. You can transition data downwards (e.g., Standard to Infrequent Access, or Infrequent Access to Glacier), and this costs $0.01 per thousand requests into the two Infrequent Access classes, or $0.05 per thousand requests to transfer data into the Glacier Storage and Glacier Deep Archive classes.
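Rather than issuing transition requests by hand, these downward moves are usually expressed as a bucket lifecycle rule that S3 applies automatically (the per-request transition charges above still apply as objects move). A boto3 sketch with hypothetical bucket and prefix names:

```python
import boto3

s3 = boto3.client("s3")

# Tier objects under logs/ down as they age: Standard -> Standard-IA at 30 days,
# then on to Glacier at 90 days. Transition requests are charged per object moved.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-app-logs",                      # hypothetical bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tier-down-old-logs",
                "Filter": {"Prefix": "logs/"},
                "Status": "Enabled",
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    {"Days": 90, "StorageClass": "GLACIER"},
                ],
            }
        ]
    },
)
```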
Amazon S3 storage management and alternative storage services
AWS offers a host of paid-for S3 storage management solutions to help you manage, tag, and analyze your inventory of data. You may also wish to take advantage of the Amazon CloudWatch and AWS CloudTrail services. Although they offer free levels, they can incur costs depending on the number of dashboards, metrics, alarms, logs, and custom events you use or create each month.
The offerings of S3 Inventory, S3 Analytics, and S3 Object tagging are also popular ways to help you get a better handle on what’s going on in your S3 environment. However, while the ability to tag S3 Objects has been a long-standing request, many organizations are now faced with the challenge of how to go about tagging thousands, millions, or even trillions of objects.
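Tags are applied per object, which is exactly why tagging an existing estate of millions of objects is non-trivial: new objects can be tagged at upload time, while existing ones have to be tagged individually (or at scale via S3 Batch Operations). A minimal boto3 sketch with placeholder names and tag keys:

```python
import boto3

s3 = boto3.client("s3")

# Attach cost-allocation tags to an existing object so it can be grouped
# in storage analytics and billing reports.
s3.put_object_tagging(
    Bucket="my-data-bucket",                   # hypothetical bucket
    Key="exports/orders.csv",                  # hypothetical key
    Tagging={
        "TagSet": [
            {"Key": "cost-center", "Value": "analytics"},
            {"Key": "retention", "Value": "1y"},
        ]
    },
)
```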
With regards to alternative storage solutions, Amazon recommends using storage services other than S3 for rapidly changing data that requires lower read/write latencies. Depending on the nature of the data, alternative storage services include:
- Amazon Elastic Block Storage (EBS)
- Amazon Elastic File System (EFS)
- Amazon EC2 Instance Store
- Amazon Relational Database Services (RDS)
- Amazon DynamoDB
- AWS Backup
- Amazon FSx for Lustre
- Amazon FSx for Windows File Server
An AWS management solution to consider
Additionally, establishing a Cloud Financial Management practice can also help manage and reduce AWS costs. Cloud Financial Management (CFM), also known as FinOps or Cloud Cost Management, is a function that helps align and develop financial goals, drive a cost-conscious culture, establish guardrails to meet financial targets, and gain greater business efficiencies. Learn more about establishing a Cloud Financial Management practice here.
If your head isn’t already exploding with the variety of Amazon S3 storage options and associated costs, you may want to consider an AWS management solution—CloudHealth. CloudHealth simplifies S3 cost calculations by analyzing how your data is stored, providing reports that identify where inefficiencies exist, and helping you to optimize S3 costs over time.
Once optimized, our platform maintains the optimized state through policy-driven automation. You simply create policies that—for example—alert you when S3 costs have exceeded a certain amount for an account or region, so you can take action and migrate objects to a lower, less expensive tier. You can also get reports and alerts based on granular S3 charges like transfer costs, and analyze where data is being transferred to.
S3 Cost: Amazon Cloud Storage Costs Explained FAQs
How does AWS calculate the size of GBs, TBs, and PBs?
The use of the abbreviations GB, TB, and PB is in itself confusing because, rather than use the decimal system for calculating storage sizes (1 gigabyte = 1,000 megabytes, etc.), AWS uses the binary system. In the binary system, 1 gibibyte (GiB) = 2^30 bytes, 1 tebibyte (TiB) = 2^40 bytes, and 1 pebibyte (PiB) = 2^50 bytes. Most often these calculations are expressed as multiples of 1,024 for ease of reference (i.e. 1 tebibyte = 1,024 gibibytes, etc.).
How does AWS calculate my monthly S3 storage usage?
Because the volume of data stored in S3 buckets is likely to vary over the course of a month, AWS calculates S3 storage costs using a metric called “TimedStorage-ByteHrs”. The storage used each hour is recorded in byte-hours, the byte-hours for the month are added together, and the total is divided by the number of hours in the month and by 2^30 to arrive at a GiB-per-month figure.
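A toy version of that calculation, using made-up hourly figures and assuming a 30-day month, looks like this:

```python
# Toy TimedStorage-ByteHrs calculation with made-up hourly usage figures.
GIB = 2**30
HOURS_IN_MONTH = 720  # assumption: a 30-day month

# Suppose a bucket holds 100GiB for the first half of the month
# and 300GiB for the second half.
hourly_bytes = [100 * GIB] * 360 + [300 * GIB] * 360

byte_hours = sum(hourly_bytes)
gib_months = byte_hours / HOURS_IN_MONTH / GIB
print(f"Billable storage: {gib_months:.1f} GiB-months")  # -> 200.0
```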
What happens if my GiBs-per-month total falls across two or more price tiers?
Let’s say your GiB-per-month total was 600 TiB, charged at tiered S3 Standard rates of $0.023 per GiB for the first 50TiB, $0.022 per GiB for the next 450TiB, and $0.021 per GiB above that. The first 50TiB (51,200GiB) would cost $1,177.60, the next 450TiB (460,800GiB) would cost $10,137.60, and the remaining 100TiB (102,400GiB) would cost $2,150.40. Your S3 storage costs for the month would be $1,177.60 + $10,137.60 + $2,150.40 = $13,465.60.
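The same arithmetic expressed as a short script, reproducing the figures above:

```python
# Reproduce the tiered storage charge for a 600TiB month (prices per GiB).
GIB_PER_TIB = 1024

tier1 = 50 * GIB_PER_TIB * 0.023    # first 50TiB  -> $1,177.60
tier2 = 450 * GIB_PER_TIB * 0.022   # next 450TiB  -> $10,137.60
tier3 = 100 * GIB_PER_TIB * 0.021   # final 100TiB -> $2,150.40

print(f"${tier1 + tier2 + tier3:,.2f}")  # -> $13,465.60
```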
How much difference would using Intelligent Tiering make if half the data isn’t frequently accessed?
Assuming a 50/50 split between frequently accessed data and infrequently accessed data, the cost per month would be reduced to $10,674.60 plus the cost of monitoring and automation ($25 per 10 million objects). Please note that both this calculation and the one above ignore the cost of S3 requests, data retrieval, and data transfers. We’ve also used the “ease of reference” conversion rate of 1 tebibyte = 1,024 gibibytes rather than the 1 tebibyte = 1,099,511,627,776 bytes conversion rate.
What would be the difference in managing S3 costs with CloudHealth?
Compared to Intelligent Tiering, CloudHealth can tell you when infrequently accessed data should be migrated to Glacier Storage or Deep Archive – potentially reducing S3 costs by significantly more. CloudHealth can also be configured to alert you when data retrieval costs from the Infrequent Access and Glacier storage classes rise beyond the savings you are making.
However, most importantly, CloudHealth gives you total visibility into the S3 services your organization is utilizing so you can make informed decisions about the most cost-effective way to organize data. For further information about CloudHealth’s optimization capabilities, don’t hesitate to get in touch and request a demo of our cloud management platform in action.