2026-03-01 · AWS Cost

Why Did My AWS Bill Spike This Month?

You open your AWS bill and something looks off. It's higher than last month, maybe a little, maybe a lot, and you have no idea why. This happens all the time. The good news is there is almost always a clear reason, and once you know where to look, you can figure it out quickly.

This guide breaks down the most common causes of AWS cost spikes, exactly how to check each one, and how to prevent them from repeating.

1. A Reserved Instance or Savings Plan Expired

This is the single most common cause of a sudden, unexplained jump, and the most frustrating, because nothing changed in your infrastructure.

Reserved Instances and Savings Plans give you discounted pricing in exchange for a one- or three-year commitment. When they expire, every covered resource immediately reverts to full on-demand pricing with no warning. A single m5.2xlarge running 24/7 goes from roughly $0.07/hr (RI rate) to $0.384/hr (on-demand). Across a fleet, that delta compounds fast.
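The math is worth making concrete. A minimal sketch of that delta, using the rates quoted above and AWS's 730-hour monthly convention (the function name and structure are illustrative):

```python
# Rough monthly impact of a single expired RI, using the rates above.
HOURS_PER_MONTH = 730  # AWS's standard monthly-hour convention

def monthly_delta(on_demand_rate: float, ri_rate: float, instance_count: int = 1) -> float:
    """Extra monthly cost when instances revert from RI to on-demand pricing."""
    return (on_demand_rate - ri_rate) * HOURS_PER_MONTH * instance_count

# One m5.2xlarge: (0.384 - 0.07) * 730, roughly $229/month extra
single = monthly_delta(0.384, 0.07)
# A fleet of 20 identical instances
fleet = monthly_delta(0.384, 0.07, instance_count=20)
print(f"single: ${single:,.0f}/mo, fleet of 20: ${fleet:,.0f}/mo")
```

That is a ~$4,500/month jump for a modest fleet, with zero change in usage.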

How to check: Open Cost Explorer, set the grouping to Purchase Option, and compare this month against last month. If you see a bar shift from Reserved to On-Demand for EC2 or RDS, an RI expired. You can also go to EC2 > Reserved Instances and sort by expiration date to see what is coming up.

How to fix it: Purchase new Reserved Instances or a Compute Savings Plan to cover the affected workloads. If the workload is temporary, on-demand may be fine until you decide.

2. Usage Genuinely Increased

Sometimes the bill is higher because something actually used more resources. This is either expected growth or a bug, and telling the two apart matters.

EC2 autoscaling may have spun up extra instances during a traffic event and failed to scale back in cleanly. Lambda invocations can spike if a downstream queue backed up or a retry loop fired unexpectedly. S3 storage grows silently if objects are never deleted and lifecycle policies are misconfigured or missing entirely. RDS can scale storage automatically as it approaches the limit, and that storage never scales back down.

How to check: In Cost Explorer, set granularity to daily and group by service. Find the day the cost jumped. Then drill into that service and group by Usage Type. This tells you whether it was more compute hours, more storage GB, more API calls, or something else.
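The same drill-down can be done offline against exported billing data. A sketch that aggregates cost per day and usage type from CSV rows (the column names here are illustrative, not the exact CUR schema):

```python
import csv
import io
from collections import defaultdict

def cost_by_day_and_usage_type(csv_text: str) -> dict:
    """Aggregate cost per (day, usage_type) from billing-export-style rows.
    Column names are illustrative, not the exact CUR schema."""
    totals = defaultdict(float)
    for row in csv.DictReader(io.StringIO(csv_text)):
        day = row["usage_date"][:10]  # keep the date, drop any time component
        totals[(day, row["usage_type"])] += float(row["cost"])
    return dict(totals)

# Made-up sample rows
sample = """usage_date,usage_type,cost
2026-02-14,BoxUsage:m5.2xlarge,9.2
2026-02-14,BoxUsage:m5.2xlarge,9.2
2026-02-14,TimedStorage-ByteHrs,1.1
2026-02-15,BoxUsage:m5.2xlarge,18.4
"""
totals = cost_by_day_and_usage_type(sample)
```

Scanning the output for the first day a (day, usage type) pair jumps tells you both when the spike started and what kind of usage drove it.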

For Lambda specifically, check CloudWatch metrics for the function: invocation count, error count, and duration. A spike in errors combined with high invocations usually means a retry loop.
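That heuristic is easy to encode if you are pulling the metrics programmatically. A sketch with made-up thresholds (tune both for your workload):

```python
def looks_like_retry_loop(invocations: int, errors: int,
                          baseline_invocations: int,
                          spike_factor: float = 5.0,
                          error_rate_threshold: float = 0.5) -> bool:
    """Heuristic: invocations far above baseline AND a high error rate
    usually means failed invocations are being retried.
    Thresholds are illustrative, not an AWS-defined rule."""
    if invocations == 0:
        return False
    spiked = invocations >= spike_factor * max(baseline_invocations, 1)
    failing = errors / invocations >= error_rate_threshold
    return spiked and failing

# Busy but healthy day: high traffic, low errors -> not a retry loop
print(looks_like_retry_loop(50_000, 120, baseline_invocations=40_000))      # False
# 10x the usual invocations with an 80% error rate -> likely retry loop
print(looks_like_retry_loop(400_000, 320_000, baseline_invocations=40_000))  # True
```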

3. Data Transfer Costs Appeared

Data transfer is one of the easiest costs to accidentally create and one of the hardest to notice until the bill arrives.

You pay for outbound traffic from AWS to the internet (around $0.09/GB in us-east-1), cross-region traffic (around $0.02/GB each way), inter-AZ traffic within the same region ($0.01/GB each way), and NAT Gateway data processing ($0.045/GB on top of the transfer cost itself). Inbound traffic is free.
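NAT Gateway traffic is the one that surprises people most, because the processing fee stacks on top of the transfer rate. A sketch using the rates listed above (always check current pricing for your region):

```python
# us-east-1 rates listed above; illustrative, check current pricing.
RATES = {
    "internet_out": 0.09,    # $/GB, AWS -> internet
    "cross_region": 0.02,    # $/GB, each way
    "inter_az": 0.01,        # $/GB, each way
    "nat_processing": 0.045, # $/GB, on top of the transfer cost itself
}

def nat_to_internet_cost(gb: float) -> float:
    """Traffic leaving for the internet through a NAT Gateway pays
    both the NAT processing fee and the internet egress rate."""
    return gb * (RATES["nat_processing"] + RATES["internet_out"])

# 1 TB/month through a NAT Gateway to the internet
print(f"${nat_to_internet_cost(1024):,.2f}")  # roughly $138
```

The same terabyte served through a VPC endpoint or CloudFront would cost a fraction of that, which is why a routing change can move the bill without any change in traffic volume.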

Common triggers: a new service that pulls data from a different region, an application change that started routing traffic through a NAT Gateway instead of a VPC endpoint, an EC2 instance serving large files directly instead of through CloudFront.

How to check: In Cost Explorer, group by Usage Type and filter for rows containing DataTransfer. Look for DataTransfer-Out-Bytes, DataTransfer-Regional-Bytes, and NatGateway-Bytes. These tell you the category. Then group by Resource to find which specific instance or service generated it.

4. A New Service Was Quietly Turned On

Security and observability services in AWS charge based on what they scan or process, and the costs are not obvious until they show up on a bill.

GuardDuty charges per GB of CloudTrail, VPC Flow Log, and DNS log data analyzed. Enabling it across a large multi-account org can run hundreds of dollars per month. CloudTrail data events are charged per 100,000 events; enabling S3 data events on a high-traffic bucket can add up quickly. Security Hub charges per finding ingested per account per region. AWS Config charges per configuration item recorded.

How to check: In Cost Explorer, switch the grouping to Service and look for any service that had zero spend last month and non-zero spend this month. Sort by delta, not by total. New spend is easy to miss on percentage-based views.
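Sorting by delta rather than total or percentage is the key move, and it is trivial to do yourself on exported per-service totals. A sketch with made-up numbers:

```python
def spend_deltas(last_month: dict, this_month: dict) -> list:
    """Per-service change in spend, largest absolute change first.
    Services with zero spend last month surface at the top even though
    their percentage change is undefined."""
    services = set(last_month) | set(this_month)
    deltas = [(svc, this_month.get(svc, 0.0) - last_month.get(svc, 0.0))
              for svc in services]
    return sorted(deltas, key=lambda d: abs(d[1]), reverse=True)

# Illustrative per-service totals
last = {"EC2": 4200.0, "S3": 310.0}
this = {"EC2": 4250.0, "S3": 315.0, "GuardDuty": 480.0}
top = spend_deltas(last, this)[0]  # GuardDuty, the brand-new spend
```

EC2 grew by only 1 percent here but GuardDuty went from nothing to $480, and a delta sort puts it first where a percentage view cannot rank it at all.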

5. Resources Left Running After Testing

This one is common on engineering teams without a cleanup process.

EC2 instances spun up for a load test and never stopped. EBS volumes that survive a terminated instance; AWS does not automatically delete a volume on termination unless the instance was launched with the delete-on-termination option enabled. Application Load Balancers charge an hourly rate regardless of traffic. RDS instances left running over a weekend at an oversized instance class.

How to check: Go to EC2 > Instances and sort by launch date. Any instance running for more than a few weeks with a name like "test", "staging", or "temp" is a candidate. For EBS, go to EC2 > Volumes and filter for the Available state; these are volumes not attached to any instance and still actively billing you.
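If you pull volume data programmatically, the same filter is one line. A sketch over data shaped like a simplified EC2 DescribeVolumes response (the sample volumes are made up):

```python
def unattached_volumes(volumes: list) -> list:
    """Volumes in the 'available' state are attached to nothing but still
    billed. Input mirrors the shape of an EC2 DescribeVolumes response,
    simplified to the fields used here."""
    return [v for v in volumes if v["State"] == "available"]

vols = [
    {"VolumeId": "vol-0abc", "State": "in-use", "Size": 100},
    {"VolumeId": "vol-0def", "State": "available", "Size": 500},
]
orphans = unattached_volumes(vols)  # only vol-0def survives the filter
```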

How to Find the Root Cause Step by Step

Start with Cost Explorer. Set the date range to cover both months and group by Service. Sort by the largest absolute increase, not percentage; a percentage view hides brand-new spend entirely, since growth from zero has no meaningful percentage. That gives you the service responsible.

Once you have the service, group by Usage Type within that service. The usage type is the most specific label AWS provides: it tells you whether the increase was compute hours on a specific instance size, storage in a specific tier, data transfer in a specific direction, or API calls of a specific type.

If you need to go further, down to a specific resource ID or account, you need the Cost and Usage Report (CUR). The CUR has a line_item_resource_id column that maps charges to specific EC2 instance IDs, RDS cluster ARNs, S3 bucket names, or Lambda function names. Combined with line_item_usage_type and line_item_net_amortized_cost, you can pinpoint the exact resource responsible.
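Once the CUR is in hand, the per-resource ranking is a straightforward aggregation. A sketch using the real CUR column names mentioned above, over made-up rows:

```python
import csv
import io
from collections import defaultdict

def cost_by_resource(cur_csv: str) -> list:
    """Sum net amortized cost per resource ID from CUR rows,
    most expensive first. Column names match the CUR schema;
    the sample rows below are made up."""
    totals = defaultdict(float)
    for row in csv.DictReader(io.StringIO(cur_csv)):
        rid = row["line_item_resource_id"] or "(no resource id)"
        totals[rid] += float(row["line_item_net_amortized_cost"])
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

rows = """line_item_resource_id,line_item_usage_type,line_item_net_amortized_cost
i-0aaa,BoxUsage:m5.2xlarge,280.4
i-0aaa,BoxUsage:m5.2xlarge,280.4
my-bucket,TimedStorage-ByteHrs,96.0
"""
ranked = cost_by_resource(rows)  # i-0aaa first, then my-bucket
```

In a real CUR many line items carry no resource ID (taxes, support fees, some discounts), which is why the fallback bucket is worth keeping rather than dropping those rows.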

The CUR also lets you answer the deeper question: did you pay more because you used more (volume), or because the rate went up (price)? That distinction determines whether the fix is an architecture change or a new Reserved Instance purchase.
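One common way to split a cost change into those two effects is to value the volume change at the old price and the price change at the new volume (other attribution conventions exist). A sketch:

```python
def price_volume_split(old_volume: float, old_price: float,
                       new_volume: float, new_price: float) -> dict:
    """One standard price/volume decomposition of a cost change:
    volume effect at the old price, price effect at the new volume.
    The two effects always sum to the total change."""
    return {
        "total_change": new_volume * new_price - old_volume * old_price,
        "volume_effect": (new_volume - old_volume) * old_price,
        "price_effect": (new_price - old_price) * new_volume,
    }

# An RI expired: rate jumped from $0.07/hr to $0.384/hr,
# usage unchanged at 730 hours
split = price_volume_split(730, 0.07, 730, 0.384)
# volume_effect is zero; the entire change is price, so the fix
# is a new commitment, not an architecture change
```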

How to Prevent Future Spikes

Set AWS Budgets alerts at 80 percent and 100 percent of your expected monthly spend. Add a forecasted alert at 100 percent so you get warned mid-month before the damage is done, not after.

Track RI and Savings Plan expirations on your team calendar at 60 and 30 days out. AWS does not warn you by default when commitments expire.

Enable Cost Anomaly Detection. It learns your normal spend pattern and alerts you when something deviates significantly, even if it does not cross a fixed threshold. A service that normally costs $50 suddenly hitting $400 will trigger it even if your total budget alert is set at $5,000.
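The core idea behind anomaly detection is simple to illustrate, even though the AWS service uses machine learning rather than a fixed rule. A crude stand-in that flags spend sitting far above the recent mean (threshold is illustrative):

```python
from statistics import mean, pstdev

def is_anomalous(history: list, today: float, z_threshold: float = 3.0) -> bool:
    """Crude stand-in for Cost Anomaly Detection: flag today's spend if it
    sits more than z_threshold standard deviations above the recent mean.
    The real service uses ML models; this only illustrates the idea."""
    mu = mean(history)
    sigma = pstdev(history) or 1.0  # avoid divide-by-zero on flat history
    return (today - mu) / sigma > z_threshold

# A service that normally hovers around $50/day
history = [48, 52, 50, 49, 51, 50, 50]
print(is_anomalous(history, 400))  # True: far outside the normal band
print(is_anomalous(history, 52))   # False: within normal variation
```

Note how the $400 day triggers even though it would never touch a $5,000 total-budget alert; per-pattern detection catches what fixed thresholds miss.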

Tag everything. Cost allocation tags let you group spend by team, environment, or feature. When a spike happens, tags tell you immediately which team or project to talk to.

Bottom Line

Most AWS bill spikes come down to one of three things: you used more, your price went up, or a discount expired. The hard part is figuring out which one quickly.

Cost Explorer gets you to the service. The CUR gets you to the root cause. If you want to skip the manual work, BillSpike automates the full investigation: upload your CUR and you get a ranked breakdown of every driver, with volume and price attribution, in under a minute.


Analyze your own AWS cost spike at billspike.io