The RECORD_TYPE Filter
AWS Cost Explorer returns several record types by default, including Usage, Tax, Credit, and Refund. Without filtering, credits and refunds (negative line items) offset usage and understate gross spend. Always filter to RECORD_TYPE = Usage for gross spend figures.
```python
import boto3
from decimal import Decimal

def sync_costs(start_date: str, end_date: str):
    # The Cost Explorer API is served from us-east-1.
    client = boto3.client("ce", region_name="us-east-1")  # additional client config elided
    response = client.get_cost_and_usage(
        TimePeriod={"Start": start_date, "End": end_date},
        Granularity="DAILY",
        Filter={
            "Dimensions": {
                "Key": "RECORD_TYPE",
                "Values": ["Usage"],
            }
        },
        GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
        Metrics=["UnblendedCost"],
    )
    for result in response["ResultsByTime"]:
        date = result["TimePeriod"]["Start"]
        for group in result["Groups"]:
            service = group["Keys"][0]
            amount = Decimal(group["Metrics"]["UnblendedCost"]["Amount"])
            # DailyCost is the Django model that stores one row per day/service.
            DailyCost.objects.update_or_create(
                date=date, service=service,
                defaults={"amount_usd": amount},
            )
```
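Cost Explorer's TimePeriod treats End as exclusive, so a one-day sync covers [Start, End). A small helper for computing that window might look like this (daily_window is illustrative, not part of the sync code above):

```python
from datetime import date, timedelta

def daily_window(days_back: int = 1) -> tuple[str, str]:
    """Return (start, end) ISO date strings; End is exclusive in Cost Explorer."""
    end = date.today()
    start = end - timedelta(days=days_back)
    return start.isoformat(), end.isoformat()

# sync_costs(*daily_window())  # syncs yesterday's costs
```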
Bedrock Token Logging
The cost tracker also maintains a local log of Bedrock invocations with estimated costs, so you can see which features are expensive without waiting for the Cost Explorer sync (which has a 24–48 hour lag).
```python
# cost_tracker/tracker.py
from decimal import Decimal

PRICE_PER_1K_IN = Decimal("0.00006")   # Nova Lite input tokens
PRICE_PER_1K_OUT = Decimal("0.00024")  # Nova Lite output tokens
PRICE_PER_IMAGE = Decimal("0.04")      # Stable Image Core, per image

def log_bedrock(model_id, operation, caller, tokens_in=0, tokens_out=0, images=0):
    cost = (
        Decimal(tokens_in) / 1000 * PRICE_PER_1K_IN
        + Decimal(tokens_out) / 1000 * PRICE_PER_1K_OUT
        + Decimal(images) * PRICE_PER_IMAGE
    )
    # BedrockLog is the Django model that stores one row per invocation.
    BedrockLog.objects.create(
        model_id=model_id, operation=operation, caller=caller,
        tokens_in=tokens_in, tokens_out=tokens_out, images=images,
        estimated_cost_usd=cost,
    )
```
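The pricing arithmetic can be exercised on its own, without the Django model. This standalone sketch reuses the constants above (estimate_cost is a hypothetical helper, not part of the tracker):

```python
from decimal import Decimal

PRICE_PER_1K_IN = Decimal("0.00006")   # Nova Lite input tokens
PRICE_PER_1K_OUT = Decimal("0.00024")  # Nova Lite output tokens
PRICE_PER_IMAGE = Decimal("0.04")      # Stable Image Core, per image

def estimate_cost(tokens_in=0, tokens_out=0, images=0) -> Decimal:
    """Same formula as log_bedrock, returned instead of persisted."""
    return (
        Decimal(tokens_in) / 1000 * PRICE_PER_1K_IN
        + Decimal(tokens_out) / 1000 * PRICE_PER_1K_OUT
        + Decimal(images) * PRICE_PER_IMAGE
    )

# A typical chat turn of 1200 input / 300 output tokens:
# 1.2 * 0.00006 + 0.3 * 0.00024 = 0.000072 + 0.000072 = 0.000144
```

Using Decimal throughout avoids float rounding drift when these tiny per-call amounts are summed across thousands of invocations.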
Tip: because Cost Explorer lags by 24–48 hours, use both sources: Cost Explorer for billed actuals, the local Bedrock log for same-day budget checks.
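A same-day budget check against the local log might be sketched like this (over_budget and the DAILY_BUDGET_USD threshold are hypothetical; in the real app the costs would be today's estimated_cost_usd values queried from BedrockLog):

```python
from decimal import Decimal

DAILY_BUDGET_USD = Decimal("5.00")  # hypothetical threshold

def over_budget(todays_costs, budget=DAILY_BUDGET_USD) -> bool:
    """Return True when summed estimated costs exceed the daily budget."""
    return sum(todays_costs, Decimal("0")) > budget
```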