2026-03-23 · GCP Cost

How to Analyze Google Cloud Costs with the BigQuery Billing Export

Google Cloud does not have a CUR-style file export you can download and analyze locally. Instead, GCP streams detailed billing data into BigQuery automatically, and you query it from there. This is actually a better experience for large accounts since you do not have to deal with downloading multi-gigabyte CSV files, but it does mean you need BigQuery enabled and a bit of SQL to get answers.

This post covers how to set up the billing export, what the schema contains, and the queries you actually need for cost spike analysis.

Setting Up the Export

Go to the GCP console, navigate to Billing, and select Billing Export from the left menu. Under BigQuery Export, enable the Standard Usage Cost export. Choose a project and a dataset where GCP will write the billing data.

GCP creates a table named gcp_billing_export_v1_ACCOUNT_ID in that dataset, where ACCOUNT_ID is your billing account ID with hyphens replaced by underscores. It takes about 24 hours to start populating and then updates daily.

The Standard export is sufficient for most cost analysis. There is also a Detailed Usage Cost export that includes resource-level labels and more granular data, which is worth enabling if you have a large environment and need to trace costs to specific resources.
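Once the export is live, a quick sanity check confirms data is flowing. A minimal query along these lines works, assuming the export lands in a dataset called billing in my-project (substitute your own project, dataset, and billing account ID):

```sql
-- Check the date range and row count of the billing export table.
-- `my-project.billing` and the XXXXXX account ID are placeholders.
SELECT
  MIN(usage_start_time) AS earliest_usage,
  MAX(usage_start_time) AS latest_usage,
  COUNT(*) AS row_count
FROM `my-project.billing.gcp_billing_export_v1_XXXXXX`
```

If row_count is zero after 48 hours, revisit the export configuration before going any further.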

The Schema: What Actually Matters

The billing export table has a lot of columns. Here are the ones you will use for cost analysis.

usage_start_time and usage_end_time are timestamps for when the usage occurred. Use DATE_TRUNC(usage_start_time, MONTH) to group by billing period.

service.description is the GCP service that generated the charge. Examples: Compute Engine, Cloud Storage, BigQuery, Cloud SQL.

sku.description is the specific SKU or operation. This is the most important column for drilling into a spike. Examples: N2 Instance Core running in Americas, Standard Storage US, Queries under 1 MiB. The SKU description is the equivalent of the Meter in Azure billing or the usage type in an AWS CUR.

usage.amount is how much was used. The units depend on the SKU: hours for compute, byte-seconds for storage, bytes for data transfer.

usage.unit tells you what unit the amount is in. Always check this before interpreting usage numbers.

cost is the base cost before credits are applied. This is the primary cost column for most analysis.

credits is a repeated field containing any discounts applied to the row. Credits include Sustained Use Discounts, Committed Use Discounts, promotional credits, and negotiated discounts. Each credit has a type and an amount. Summing credits gives you the total discount applied.

The effective cost you actually paid is cost + SUM(credits.amount). Credit amounts are negative, so adding them reduces the total. This is a common point of confusion when you first start working with GCP billing data.
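The cost-plus-credits arithmetic looks like this in practice. A sketch of a per-service breakdown for the current month, with the table name as a placeholder:

```sql
-- Gross cost, total credits (negative), and the effective net cost per service.
SELECT
  service.description AS service,
  SUM(cost) AS gross_cost,
  SUM(IFNULL((SELECT SUM(c.amount) FROM UNNEST(credits) c), 0)) AS total_credits,
  SUM(cost + IFNULL((SELECT SUM(c.amount) FROM UNNEST(credits) c), 0)) AS net_cost
FROM `my-project.billing.gcp_billing_export_v1_XXXXXX`
WHERE DATE(usage_start_time) >= DATE_TRUNC(CURRENT_DATE(), MONTH)
GROUP BY 1
ORDER BY net_cost DESC
```

The correlated subquery over UNNEST(credits) is the standard way to sum a repeated field per row without fanning out the join.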

project.id and project.name tell you which GCP project the charge belongs to. Essential for multi-project environments.

labels is a repeated field of key-value pairs corresponding to resource labels. If your team labels resources with environment or team tags, this is where you can filter or group by them.
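Because labels is a repeated field, you pull a specific key out with a subquery over UNNEST. A sketch grouping net cost by a hypothetical team label (the label key and table name are assumptions):

```sql
-- Current-month net cost by the value of the "team" label.
-- Rows without that label fall into the NULL bucket.
SELECT
  (SELECT l.value FROM UNNEST(labels) l WHERE l.key = 'team') AS team,
  SUM(cost + IFNULL((SELECT SUM(c.amount) FROM UNNEST(credits) c), 0)) AS net_cost
FROM `my-project.billing.gcp_billing_export_v1_XXXXXX`
WHERE DATE(usage_start_time) >= DATE_TRUNC(CURRENT_DATE(), MONTH)
GROUP BY 1
ORDER BY net_cost DESC
```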

location.region is the GCP region where the usage occurred.

The Core Analysis Query

For cost spike analysis, start with this query to compare the current month against the prior month by service and SKU:

```sql
SELECT
  FORMAT_DATE('%Y-%m', DATE(usage_start_time)) AS month,
  service.description AS service,
  sku.description AS sku,
  SUM(cost + IFNULL((SELECT SUM(c.amount) FROM UNNEST(credits) c), 0)) AS net_cost
FROM `project.dataset.gcp_billing_export_v1_*`
WHERE DATE(usage_start_time) >= DATE_TRUNC(DATE_SUB(CURRENT_DATE(), INTERVAL 2 MONTH), MONTH)
GROUP BY 1, 2, 3
ORDER BY month DESC, net_cost DESC
```

This gives you a table of monthly cost by service and SKU for the last two billing periods. Take the rows from each month, join them on service and SKU, compute the delta, and sort by the largest increase. The SKU with the biggest absolute cost jump is your primary suspect.
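The join-and-delta step can also be done in SQL rather than by hand. One way to sketch it, reusing the monthly aggregate as a CTE (table name is a placeholder):

```sql
-- Month-over-month net cost delta per service and SKU.
WITH monthly AS (
  SELECT
    FORMAT_DATE('%Y-%m', DATE(usage_start_time)) AS month,
    service.description AS service,
    sku.description AS sku,
    SUM(cost + IFNULL((SELECT SUM(c.amount) FROM UNNEST(credits) c), 0)) AS net_cost
  FROM `my-project.billing.gcp_billing_export_v1_XXXXXX`
  WHERE DATE(usage_start_time) >= DATE_TRUNC(DATE_SUB(CURRENT_DATE(), INTERVAL 1 MONTH), MONTH)
  GROUP BY 1, 2, 3
)
SELECT
  cur.service,
  cur.sku,
  IFNULL(prev.net_cost, 0) AS prior_month,
  cur.net_cost AS current_month,
  cur.net_cost - IFNULL(prev.net_cost, 0) AS delta
FROM monthly cur
LEFT JOIN monthly prev
  ON cur.service = prev.service
  AND cur.sku = prev.sku
  AND prev.month = FORMAT_DATE('%Y-%m', DATE_SUB(CURRENT_DATE(), INTERVAL 1 MONTH))
WHERE cur.month = FORMAT_DATE('%Y-%m', CURRENT_DATE())
ORDER BY delta DESC
```

The LEFT JOIN keeps SKUs that are brand new this month; their prior-month cost shows as zero, which is often exactly what a spike looks like.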

Separating Volume Changes from Price Changes

Once you have found the SKU that drove the spike, the next step is figuring out whether usage went up, the unit price changed, or both.

Extend the query to include SUM(usage.amount) and compute the effective unit cost as net_cost / usage_amount for each period. If usage amount is stable but the unit cost jumped, a discount expired or pricing changed. If unit cost is stable but usage went up, the workload grew or something is running that should not be.
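A sketch of that decomposition for a single suspect SKU, using SAFE_DIVIDE to avoid division-by-zero (the SKU filter and table name are placeholders):

```sql
-- Usage volume, net cost, and effective unit cost per month for one SKU.
SELECT
  FORMAT_DATE('%Y-%m', DATE(usage_start_time)) AS month,
  ANY_VALUE(usage.unit) AS unit,
  SUM(usage.amount) AS usage_amount,
  SUM(cost + IFNULL((SELECT SUM(c.amount) FROM UNNEST(credits) c), 0)) AS net_cost,
  SAFE_DIVIDE(
    SUM(cost + IFNULL((SELECT SUM(c.amount) FROM UNNEST(credits) c), 0)),
    SUM(usage.amount)) AS effective_unit_cost
FROM `my-project.billing.gcp_billing_export_v1_XXXXXX`
WHERE sku.description = 'N2 Instance Core running in Americas'  -- the SKU under investigation
  AND DATE(usage_start_time) >= DATE_TRUNC(DATE_SUB(CURRENT_DATE(), INTERVAL 2 MONTH), MONTH)
GROUP BY 1
ORDER BY month
```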

The credits column is key here. If the SUM(credits.amount) for a SKU dropped significantly from one month to the next while usage stayed flat, a Committed Use Discount or Sustained Use Discount phase-out is the cause.
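To see which discount moved, break the credits out by type per month. A sketch, assuming the credit struct's type field is populated in your export (table name and SKU are placeholders):

```sql
-- Monthly credit totals by credit type for one SKU.
SELECT
  FORMAT_DATE('%Y-%m', DATE(usage_start_time)) AS month,
  c.type AS credit_type,
  SUM(c.amount) AS credit_amount  -- negative values; closer to zero means less discount
FROM `my-project.billing.gcp_billing_export_v1_XXXXXX`,
  UNNEST(credits) c
WHERE sku.description = 'N2 Instance Core running in Americas'
GROUP BY 1, 2
ORDER BY month, credit_amount
```

A credit type whose monthly total shrinks toward zero while usage holds steady is the expiring discount.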

Useful Shortcuts in the Schema

GCP does not have a direct equivalent to the AWS line_item_line_item_type filter for removing tax and fee rows. Instead, use the cost_type column, whose values are regular, tax, adjustment, and rounding_error. Filtering to cost_type = 'regular' gives you just the actual usage charges; credits live in their own repeated field and are handled separately.
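A quick way to see how much each cost_type contributes before you start filtering (table name is a placeholder):

```sql
-- Current-month cost broken out by cost_type.
SELECT
  cost_type,
  SUM(cost) AS total_cost
FROM `my-project.billing.gcp_billing_export_v1_XXXXXX`
WHERE DATE(usage_start_time) >= DATE_TRUNC(CURRENT_DATE(), MONTH)
GROUP BY cost_type
ORDER BY total_cost DESC
```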

For BigQuery-specific cost spikes, the sku.description will include the word Queries for on-demand query charges. You can join against INFORMATION_SCHEMA.JOBS in the project where queries ran to get per-query cost data and identify which jobs were responsible.
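A sketch of the per-job side of that investigation, using the regional INFORMATION_SCHEMA.JOBS_BY_PROJECT view. The $6.25/TiB on-demand rate is an assumption for illustration; check the current price for your region and edition:

```sql
-- Rank recent query jobs by bytes billed and estimate on-demand cost.
-- `region-us` is a placeholder; use the region your jobs run in.
SELECT
  user_email,
  job_id,
  total_bytes_billed,
  total_bytes_billed / POW(2, 40) * 6.25 AS approx_cost_usd  -- assumed $6.25/TiB
FROM `region-us`.INFORMATION_SCHEMA.JOBS_BY_PROJECT
WHERE creation_time >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 7 DAY)
  AND job_type = 'QUERY'
ORDER BY total_bytes_billed DESC
LIMIT 20
```

The creation_time filter is required because the view is partitioned on it, and it keeps the scan cheap.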

What This Cannot Tell You Alone

The billing export gives you cost by project, service, SKU, and label. It does not automatically tell you which specific resource instance caused an increase. For that, the Detailed Usage Cost export is more useful since it includes resource names.

If you need to trace a Compute Engine cost increase to a specific VM, enable the Detailed export and filter on resource.name after identifying the SKU. For most other services, the project plus SKU plus region is enough to get to the responsible team or workload.
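With the Detailed export enabled, the drill-down looks like the following sketch. The resource-level table name pattern and the SKU filter are assumptions; substitute your own:

```sql
-- Detailed export only: attribute one SKU's cost to individual resources.
SELECT
  resource.name AS resource_name,
  SUM(cost + IFNULL((SELECT SUM(c.amount) FROM UNNEST(credits) c), 0)) AS net_cost
FROM `my-project.billing.gcp_billing_export_resource_v1_XXXXXX`
WHERE sku.description = 'N2 Instance Core running in Americas'
  AND DATE(usage_start_time) >= DATE_TRUNC(CURRENT_DATE(), MONTH)
GROUP BY 1
ORDER BY net_cost DESC
```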

BillSpike supports GCP FOCUS billing exports today. Export your GCP billing data in FOCUS format from the BigQuery export settings, upload the file, and you get the same ranked driver breakdown and volume versus price attribution that it runs on AWS CUR files. The analysis logic is provider-agnostic because it operates on the FOCUS schema columns that GCP, AWS, and Azure all share.


Analyze your own AWS cost spike at billspike.io