Pricing comparison
Claude Haiku 4.5 vs GPT-5.5 mini
Per-token pricing, full-workload cost ladders, and monthly volume projections. Numbers sourced directly from each provider's rate card.
Anthropic
Claude Haiku 4.5
- Input: $1.00 / 1M
- Output: $5.00 / 1M
- Cached input: not published
- Context: 200K
- Max output: 64K
OpenAI
GPT-5.5 mini
- Input: $0.90 / 1M
- Output: $5.50 / 1M
- Cached input: $0.090 / 1M
- Context: 400K
- Max output: 128K
Cost per request
Four common workload shapes, ranging from chat-style requests (1:2 input:output) to prompt-heavy long-context jobs. Long-context surcharges apply automatically where the provider charges them.
| Scenario | Tokens (in / out) | Claude Haiku 4.5 | GPT-5.5 mini | Winner |
|---|---|---|---|---|
| Short prompt | 100 / 200 | $0.0011 | $0.0012 | Claude Haiku 4.5 |
| Typical request | 1,000 / 2,000 | $0.0110 | $0.0119 | Claude Haiku 4.5 |
| Long document | 10,000 / 5,000 | $0.0350 | $0.0365 | Claude Haiku 4.5 |
| Large prompt | 100,000 / 10,000 | $0.1500 | $0.1450 | GPT-5.5 mini |
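The per-request figures above are straight per-token arithmetic against the base rates. A minimal sketch (rates hard-coded from the cards above; model keys are illustrative, not API identifiers):

```python
# Base per-1M-token rates from the rate cards above (USD).
RATES = {
    "claude-haiku-4.5": {"input": 1.00, "output": 5.00},
    "gpt-5.5-mini": {"input": 0.90, "output": 5.50},
}

def request_cost(model: str, tokens_in: int, tokens_out: int) -> float:
    """Dollar cost of one request at base (uncached) rates."""
    r = RATES[model]
    return (tokens_in * r["input"] + tokens_out * r["output"]) / 1_000_000

# "Typical request" row: 1,000 in / 2,000 out
print(round(request_cost("claude-haiku-4.5", 1_000, 2_000), 4))  # 0.011
print(round(request_cost("gpt-5.5-mini", 1_000, 2_000), 4))      # 0.0119
```

Swapping in the other table rows reproduces each cell, including the "Large prompt" flip where GPT-5.5 mini's cheaper input rate finally outweighs its pricier output.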
Monthly bill at scale
Projected monthly cost at typical request volume, assuming the "typical request" shape above (1k in, 2k out).
| Traffic | Req / month | Claude Haiku 4.5 | GPT-5.5 mini | Delta |
|---|---|---|---|---|
| Small SaaS | 1,000 | $11.00 | $11.90 | Claude Haiku 4.5 cheaper by $0.90 |
| Growing product | 10,000 | $110.00 | $119.00 | Claude Haiku 4.5 cheaper by $9.00 |
| Heavy usage | 100,000 | $1,100.00 | $1,190.00 | Claude Haiku 4.5 cheaper by $90.00 |
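The monthly rows are volume times the typical-request cost; a minimal sketch:

```python
def monthly_bill(requests_per_month: int, cost_per_request: float) -> float:
    """Projected monthly spend at a fixed request shape."""
    return requests_per_month * cost_per_request

# "Growing product" row: 10,000 typical requests (1k in / 2k out)
claude = monthly_bill(10_000, 0.0110)  # ~ $110.00
gpt = monthly_bill(10_000, 0.0119)     # ~ $119.00
print(f"delta: ${gpt - claude:.2f}")   # Claude Haiku 4.5 cheaper by $9.00
```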
Which should you use?
For the typical chat-shape request (~1k input, 2k output), Claude Haiku 4.5 comes out 8% cheaper. If you're picking one as the default, that's usually the right choice on cost alone.
Split decision: GPT-5.5 mini has the cheaper input rate, while Claude Haiku 4.5 has the cheaper output rate. RAG and long-context workloads favour the former; generation-heavy workloads (long responses, agentic loops) favour the latter.
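That split has a clean breakeven: setting the two per-request formulas equal (1.00·i + 5.00·o = 0.90·i + 5.50·o) gives i = 5·o, so requests with more than five input tokens per output token favour GPT-5.5 mini. A sketch, holding the rates as integer tenths of a dollar per 1M tokens so the tie case compares exactly:

```python
def cheaper_model(tokens_in: int, tokens_out: int) -> str:
    """Pick the cheaper model at base rates.

    Rates are integer tenths of a dollar per 1M tokens
    (Claude: 10 in / 50 out; GPT: 9 in / 55 out), so equality is exact.
    """
    claude = 10 * tokens_in + 50 * tokens_out
    gpt = 9 * tokens_in + 55 * tokens_out
    if claude == gpt:
        return "tie"
    return "claude-haiku-4.5" if claude < gpt else "gpt-5.5-mini"

print(cheaper_model(10_000, 5_000))    # 2:1 in:out  -> claude-haiku-4.5
print(cheaper_model(100_000, 10_000))  # 10:1 in:out -> gpt-5.5-mini
print(cheaper_model(5_000, 1_000))     # exactly 5:1 -> tie
```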
Context window differs: GPT-5.5 mini accepts 400K tokens of input vs 200K for Claude Haiku 4.5. If you regularly push past the smaller ceiling, the comparison ends there.
Live cost calculator
Type in any token counts - both prices update instantly. Uses base input/output rates (no cache discount, no long-context tier).
Default example: Claude Haiku 4.5 at $0.0035 per request vs GPT-5.5 mini at $0.0037 per request.
Try both in the estimator →
Drop your actual prompt in; tokens are counted with each provider's own tokenizer, so the dollar figure matches what lands on your invoice.
Frequently asked
- Which is cheaper, Claude Haiku 4.5 or GPT-5.5 mini?
- On a typical 1,000-input / 2,000-output request, Claude Haiku 4.5 costs ~$0.0110 vs ~$0.0119 on GPT-5.5 mini. Input or output rates can flip the answer for very lopsided workloads - see the cost ladder above.
- What's the difference in per-token pricing?
- Claude Haiku 4.5 charges $1.00 per 1M input tokens and $5.00 per 1M output tokens. GPT-5.5 mini charges $0.90 / $5.50 per 1M.
- Which has the bigger context window?
- GPT-5.5 mini's is larger: 400K vs Claude Haiku 4.5's 200K.
- Is there a cached-input discount on either?
- Claude Haiku 4.5 does not publish a cached-input rate. GPT-5.5 mini caches at $0.090 per 1M (90% off). Workloads with repeated static prefixes see the biggest savings.
- How fresh is this comparison?
- Claude Haiku 4.5 was re-verified on 2026-04-06 and GPT-5.5 mini on 2026-04-29 against each provider's published rate card. Calcis re-checks every row on a rolling schedule and re-deploys when a provider changes pricing.
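The cached-input discount above blends simply with cache hit rate: the effective input rate is a weighted average of the cached and base prices. A sketch (the 80% hit rate is an illustrative assumption, not a published figure):

```python
def blended_input_rate(cache_hit_rate: float,
                       base: float = 0.90,
                       cached: float = 0.090) -> float:
    """Effective $ per 1M input tokens for GPT-5.5 mini, given the
    fraction of input tokens served from the prompt cache (0.0-1.0)."""
    return cache_hit_rate * cached + (1.0 - cache_hit_rate) * base

# Illustrative: 80% of input tokens hit the cache
print(round(blended_input_rate(0.80), 3))  # 0.252, i.e. $0.252 / 1M
```

At an 80% hit rate the effective input price drops well below Claude Haiku 4.5's $1.00, which is why prefix-heavy workloads tilt the split decision further toward GPT-5.5 mini.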
Claude Haiku 4.5 verified 2026-04-06 · GPT-5.5 mini verified 2026-04-29. Rate cards at Anthropic and OpenAI.