2026-02-22 · AWS Cost
How to Read an AWS Cost and Usage Report (CUR 2.0)
If you've ever opened an AWS Cost and Usage Report and immediately closed it again, you're not alone.
A full CUR export can contain 200+ columns and millions of rows. The first time you see it, it looks less like billing data and more like a spreadsheet nightmare.
The truth is: most of those columns don't matter for cost analysis. Once you know which ones to focus on, the report becomes much easier to work with.
This guide walks through what CUR 2.0 actually contains, which columns matter, and how to use them to find the real drivers behind an AWS cost spike.
What is CUR 2.0?
CUR stands for Cost and Usage Report. It's AWS's most detailed billing export, a giant flat file (CSV or Parquet) containing every charge across every service, region, account, and resource for a given billing period.
CUR 2.0 is the newer schema AWS introduced in late 2023. Compared to CUR 1.0, it's cleaner and easier to process. Column names use snake_case instead of camelCase, the schema is more consistent, and it aligns better with the FinOps FOCUS specification. It's still a big dataset, but it's much more predictable to work with.
Where to export it
You can generate a CUR from the AWS Billing console: Billing → Data Exports → Create export. Choose Cost and Usage Report, then select CUR 2.0.
Choose Parquet if possible, as it's usually 5–10x smaller and far faster to query. Set the export frequency to daily or monthly, and pick an S3 bucket where AWS can drop the files.
Once enabled, AWS writes the report into S3 as a partitioned folder structure. Each month gets its own folder with one or more part files depending on account size. Large environments can easily generate hundreds of megabytes per month.
The columns that actually matter
A fresh CUR 2.0 export has a huge number of columns, but only a handful are useful for most cost investigations.
line_item_usage_start_date, the timestamp when the usage for that row started. This is the column you'll use when grouping costs by day or month.
line_item_product_code, the AWS service responsible for the charge. Examples: AmazonEC2, AmazonRDS, AmazonS3. When you're first trying to understand where money is going, this is usually your first grouping column.
line_item_usage_type, more granular than product_code. For EC2 you might see something like USE1-BoxUsage:m5.xlarge, combining region, charge category, and resource type into a single label. The format looks messy, but this column is incredibly useful: it usually reveals the exact driver behind a spike.
line_item_line_item_type, the type of charge. Common values: Usage (normal consumption), Tax, Credit, RIFee, SavingsPlanRecurringFee. If you're analyzing real consumption, filter to Usage only. That removes a lot of billing noise.
line_item_net_amortized_cost, the most useful cost column. It represents what you actually paid after Reserved Instance or Savings Plan discounts are applied and spread across the usage they cover. For most cost analysis this is the right number. Use line_item_unblended_cost if you want the cost as it appears on the bill, before amortization.
product_region, the AWS region where the charge occurred. Examples: us-east-1, eu-west-1. Cost spikes often happen within a single region even if the overall service total looks flat.
line_item_resource_id, the specific resource responsible for the charge. Depending on the service this might be an EC2 instance ID, an RDS cluster ARN, or an S3 bucket name. You'll typically use this once you've already identified which service and usage type caused the increase.
product_servicename, a human-readable version of the service name, like "Amazon Elastic Compute Cloud". This is what you display in dashboards. For queries and grouping, line_item_product_code is usually cleaner.
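As a toy illustration of the basic filter-and-group step, here is a plain-Python sketch over a few hand-made rows using the columns above. The rows are invented for illustration; a real CUR export is far larger and is usually queried with Athena or a similar engine rather than loaded row by row.

```python
from collections import defaultdict

# Invented sample rows; a real CUR row has 200+ columns.
rows = [
    {"line_item_product_code": "AmazonEC2",
     "line_item_line_item_type": "Usage",
     "line_item_net_amortized_cost": 120.0},
    {"line_item_product_code": "AmazonEC2",
     "line_item_line_item_type": "Tax",
     "line_item_net_amortized_cost": 10.0},
    {"line_item_product_code": "AmazonS3",
     "line_item_line_item_type": "Usage",
     "line_item_net_amortized_cost": 30.0},
]

# Keep only real consumption, then total cost per service.
totals = defaultdict(float)
for row in rows:
    if row["line_item_line_item_type"] == "Usage":
        totals[row["line_item_product_code"]] += row["line_item_net_amortized_cost"]

print(dict(totals))  # the Tax row is excluded from the totals
```

The same shape scales up: swap the hand-made list for rows read from the export, and swap the grouping key for any combination of the columns above.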
How cost spike analysis actually works
At its core, cost analysis is just period comparison.
Take two months of CUR data. For each combination of service, region, and usage type, compute the change in cost. The formula is simple: delta = current_month_cost - prior_month_cost. Sort by the largest positive delta and you have a ranked list of what actually drove the increase.
The math is easy. The hard part is handling the data volume. A busy AWS account can produce tens of millions of rows per month, so the filtering, grouping, and aggregation take real setup work.
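The comparison itself fits in a few lines of Python, assuming each month has already been aggregated down to a dict keyed by (service, region, usage type). The figures below are invented:

```python
def cost_deltas(current, prior):
    """Rank (service, region, usage_type) keys by absolute cost change."""
    keys = set(current) | set(prior)
    # Missing keys default to 0.0, so brand-new spend still surfaces.
    deltas = {k: current.get(k, 0.0) - prior.get(k, 0.0) for k in keys}
    return sorted(deltas.items(), key=lambda kv: kv[1], reverse=True)

prior = {
    ("AmazonEC2", "us-east-1", "USE1-BoxUsage:m5.xlarge"): 4000.0,
    ("AmazonS3", "us-east-1", "USE1-Requests-Tier1"): 500.0,
}
current = {
    ("AmazonEC2", "us-east-1", "USE1-BoxUsage:m5.xlarge"): 4100.0,
    ("AmazonS3", "us-east-1", "USE1-Requests-Tier1"): 2300.0,
}

ranked = cost_deltas(current, prior)
# Largest positive delta first: the S3 request spend grew by 1800.0,
# dwarfing the 100.0 EC2 increase.
```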
Common pitfalls when analyzing CUR manually
New spend gets hidden. If you rely on percentage change, brand-new services disappear. A service that cost $0 last month and $3,000 this month technically shows infinite percent growth, and most tools just skip those rows. Absolute cost delta is much more reliable.
RI and Savings Plan fees add noise. Charges like RIFee and SavingsPlanRecurringFee can appear as large lump sums on the purchasing account. They don't represent actual usage. Filtering to line_item_line_item_type = 'Usage' makes the data far cleaner.
Taxes inflate totals. Most analyses exclude line_item_line_item_type = 'Tax' from comparisons to avoid skewing the numbers.
Some services have no region. Global services like CloudFront and Route 53 often leave product_region empty. Blank values there are completely normal.
Multi-account environments add complexity. If you're using consolidated billing, always include line_item_usage_account_id. Without it you may accidentally combine charges from different teams or environments.
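Putting those filters together, a pre-cleaning step might look like the sketch below. The account IDs and amounts are invented; the point is that the lump-sum RIFee row disappears while per-account Usage rows survive intact:

```python
def clean_rows(rows):
    """Keep only Usage rows so comparisons reflect real consumption."""
    return [r for r in rows if r["line_item_line_item_type"] == "Usage"]

rows = [
    {"line_item_usage_account_id": "111111111111",
     "line_item_line_item_type": "Usage",
     "line_item_net_amortized_cost": 900.0},
    {"line_item_usage_account_id": "111111111111",
     "line_item_line_item_type": "RIFee",
     "line_item_net_amortized_cost": 5000.0},
    {"line_item_usage_account_id": "222222222222",
     "line_item_line_item_type": "Usage",
     "line_item_net_amortized_cost": 300.0},
]

cleaned = clean_rows(rows)
# The $5,000 RIFee lump sum on the purchasing account is dropped;
# one Usage row per account remains, still carrying its account ID.
```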
Volume vs. price spikes
Once you've identified the biggest cost drivers, the next step is understanding why they went up. Almost every spike falls into one of two categories.
Volume increase, you simply used more of the service. More EC2 hours, more S3 API calls, more data transfer. The line_item_usage_amount column increases.
Unit price increase, the usage stays the same but the price per unit goes up. This usually happens when a Reserved Instance or Savings Plan expires and a workload falls back to on-demand rates. To detect it manually, compare current_cost / current_usage against prior_cost / prior_usage. If the ratio jumps while usage stays flat, the spike is coming from price changes rather than workload growth.
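That check can be sketched as a small helper. The 5% tolerance and the dollar figures are arbitrary illustration values, not anything AWS defines:

```python
def classify_spike(cur_cost, cur_usage, prior_cost, prior_usage, tol=0.05):
    """Attribute a cost change to unit price, usage volume, or both."""
    price_change = (cur_cost / cur_usage) / (prior_cost / prior_usage) - 1.0
    usage_change = cur_usage / prior_usage - 1.0
    if abs(price_change) > tol >= abs(usage_change):
        return "price"
    if abs(usage_change) > tol >= abs(price_change):
        return "volume"
    if abs(price_change) > tol or abs(usage_change) > tol:
        return "mixed"
    return "flat"

# Savings Plan expired: same 720 instance-hours, cost jumps 80 -> 140.
print(classify_spike(140.0, 720.0, 80.0, 720.0))   # price-driven
# Fleet doubled at the same effective rate: cost doubles with usage.
print(classify_spike(160.0, 1440.0, 80.0, 720.0))  # volume-driven
```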
The shorter path
Everything above can be done manually with SQL, Python, or Athena. But building the full workflow (grouping, period comparisons, new spend detection, price vs. volume decomposition) usually takes a few hours to set up correctly. And that's assuming the data behaves.
BillSpike automates the entire process. Upload your CUR file (CSV or Parquet, up to 1GB) and you'll get a ranked list of cost drivers, price/volume/mix attribution, waterfall charts, and a plain-English summary ready to paste into Slack or an incident report.
No IAM access. No setup. Just the file.
Analyze your own AWS cost spike at billspike.io