Pricing comparison

Claude Haiku 4.5 vs o4-mini

Per-token pricing, full-workload cost ladders, and monthly volume projections. Numbers sourced directly from each provider's rate card.

| | Claude Haiku 4.5 (Anthropic) | o4-mini (OpenAI) |
| --- | --- | --- |
| Input | $1.00 / 1M | $1.10 / 1M |
| Output | $5.00 / 1M | $4.40 / 1M |
| Cached input | - | $0.275 / 1M |
| Context | 200K | 200K |
| Max output | 64K | 100K |

Cost per request

Four common workload shapes, from short chat turns to large prompts. The first two rows use a 1:2 input-to-output ratio (a standard chat/completion pattern); the document and large-prompt rows skew input-heavy. Long-context surcharges apply automatically where the provider charges them.

| Scenario | Tokens (in / out) | Claude Haiku 4.5 | o4-mini | Winner |
| --- | --- | --- | --- | --- |
| Short prompt | 100 / 200 | $0.0011 | $0.0010 | o4-mini |
| Typical request | 1,000 / 2,000 | $0.0110 | $0.0099 | o4-mini |
| Long document | 10,000 / 5,000 | $0.0350 | $0.0330 | o4-mini |
| Large prompt | 100,000 / 10,000 | $0.1500 | $0.1540 | Claude Haiku 4.5 |
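The arithmetic behind each row is a straight rate multiplication. A minimal sketch, using the base per-million-token rates quoted above (no cache discount or long-context tier; model keys are illustrative, not API model IDs):

```python
# USD per 1M tokens: (input rate, output rate), from the rate cards above.
RATES = {
    "claude-haiku-4.5": (1.00, 5.00),
    "o4-mini": (1.10, 4.40),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Base cost in USD for a single request."""
    in_rate, out_rate = RATES[model]
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# The "typical request" row: 1,000 in / 2,000 out.
print(round(request_cost("claude-haiku-4.5", 1_000, 2_000), 4))  # 0.011
print(round(request_cost("o4-mini", 1_000, 2_000), 4))           # 0.0099
```

This reproduces every row in the ladder, including the crossover at the large-prompt shape, where Claude Haiku 4.5's cheaper input rate outweighs its pricier output rate.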

Monthly bill at scale

Projected monthly cost at typical request volume, assuming the "typical request" shape above (1k in, 2k out).

| Traffic | Req / month | Claude Haiku 4.5 | o4-mini | Delta |
| --- | --- | --- | --- | --- |
| Small SaaS | 1,000 | $11.00 | $9.90 | o4-mini -$1.10 |
| Growing product | 10,000 | $110.00 | $99.00 | o4-mini -$11.00 |
| Heavy usage | 100,000 | $1,100.00 | $990.00 | o4-mini -$110.00 |
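The monthly projection is just the per-request cost times volume. A sketch, assuming the base rates above and the typical 1,000-in / 2,000-out request shape:

```python
def monthly_bill(cost_per_request: float, requests_per_month: int) -> float:
    """Projected monthly spend in USD at a fixed request shape."""
    return cost_per_request * requests_per_month

# Per-request cost at the "typical request" shape (rates in USD per 1M tokens).
haiku_typical = (1_000 * 1.00 + 2_000 * 5.00) / 1_000_000   # $0.0110
o4mini_typical = (1_000 * 1.10 + 2_000 * 4.40) / 1_000_000  # $0.0099

# The "heavy usage" row: 100,000 requests per month.
print(round(monthly_bill(haiku_typical, 100_000), 2))   # 1100.0
print(round(monthly_bill(o4mini_typical, 100_000), 2))  # 990.0
```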

Which should you use?

For the typical chat-shape request (~1k input, 2k output), o4-mini comes out about 10% cheaper ($0.0099 vs $0.0110). If you're picking one as the default, that's usually the right choice on cost alone.

Split decision: Claude Haiku 4.5 has the cheaper input rate, while o4-mini has the cheaper output rate. RAG and long-context workloads favour the former; generation-heavy workloads (long responses, agentic loops) favour the latter.

Live cost calculator

Type in any token counts and both prices update instantly. Uses base input/output rates (no cache discount, no long-context tier).

Claude Haiku 4.5

$0.0035

per request

o4-mini

$0.0033

per request

o4-mini is $0.0002 cheaper per request (5.7% less).

Try both in the estimator →

Drop your actual prompt in: tokens are counted with each provider's own tokenizer, so the dollar figure matches what lands on your invoice.

Frequently asked

Which is cheaper, Claude Haiku 4.5 or o4-mini?
On a typical 1,000-input / 2,000-output request, o4-mini costs ~$0.0099 vs ~$0.0110 on Claude Haiku 4.5. Very lopsided workloads can flip the answer, since Claude Haiku 4.5 has the cheaper input rate and o4-mini the cheaper output rate (see the cost ladder above).
What's the difference in per-token pricing?
Claude Haiku 4.5 charges $1.00 per 1M input tokens and $5.00 per 1M output tokens. o4-mini charges $1.10 / $4.40 per 1M.
Which has the bigger context window?
Both accept a 200K-token context window. o4-mini allows the larger maximum output: 100K tokens vs 64K for Claude Haiku 4.5.
Is there a cached-input discount on either?
Claude Haiku 4.5 does not publish a cached-input rate. o4-mini caches at $0.275 per 1M (75% off). Workloads with repeated static prefixes see the biggest savings.
How fresh is this comparison?
Both models were re-verified on 2026-04-06 against each provider's published rate card. Calcis re-checks every row on a rolling schedule and re-deploys when a provider changes pricing.

Claude Haiku 4.5 verified 2026-04-06 · o4-mini verified 2026-04-06. Rate cards at Anthropic and OpenAI.