Pricing comparison

Claude Sonnet 4.6 vs GPT-5.5

Per-token pricing, full-workload cost ladders, and monthly volume projections. Numbers sourced directly from each provider's rate card.

Anthropic

Claude Sonnet 4.6

Input: $3.00 / 1M
Output: $15.00 / 1M
Cached input: -
Context: 1M tokens
Max output: 64K tokens

OpenAI

GPT-5.5

Input: $3.00 / 1M
Output: $20.00 / 1M
Cached input: $0.30 / 1M
Context: 1.1M tokens
Max output: 128K tokens

Cost per request

Four common workload shapes, from a short chat turn to a large-context prompt. The first two follow the standard 1:2 input-to-output chat pattern; the longer shapes are input-heavy. Long-context surcharges apply automatically where the provider charges them.

| Scenario | Tokens (in / out) | Claude Sonnet 4.6 | GPT-5.5 | Winner |
|---|---|---|---|---|
| Short prompt | 100 / 200 | $0.0033 | $0.0043 | Claude Sonnet 4.6 |
| Typical request | 1,000 / 2,000 | $0.0330 | $0.0430 | Claude Sonnet 4.6 |
| Long document | 10,000 / 5,000 | $0.1050 | $0.1300 | Claude Sonnet 4.6 |
| Large prompt | 100,000 / 10,000 | $0.4500 | $0.5000 | Claude Sonnet 4.6 |
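The ladder above follows directly from the per-token rates. A minimal sketch of the arithmetic (the helper name is mine, not either provider's API):

```python
# Per-request cost from per-1M-token rates, as used in the ladder above.
def request_cost(tokens_in, tokens_out, in_rate, out_rate):
    """Rates are dollars per 1M tokens; returns dollars per request."""
    return (tokens_in * in_rate + tokens_out * out_rate) / 1_000_000

# Claude Sonnet 4.6: $3.00 in / $15.00 out. GPT-5.5: $3.00 in / $20.00 out.
claude = request_cost(1_000, 2_000, 3.00, 15.00)  # "typical request" row: $0.0330
gpt = request_cost(1_000, 2_000, 3.00, 20.00)     # "typical request" row: $0.0430
```

Because the input rates match, the gap at every shape comes entirely from the $15 vs $20 output rate.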

Monthly bill at scale

Projected monthly cost at typical request volume, assuming the "typical request" shape above (1k in, 2k out).

| Traffic | Req / month | Claude Sonnet 4.6 | GPT-5.5 | Delta |
|---|---|---|---|---|
| Small SaaS | 1,000 | $33.00 | $43.00 | Claude Sonnet 4.6 -$10.00 |
| Growing product | 10,000 | $330.00 | $430.00 | Claude Sonnet 4.6 -$100.00 |
| Heavy usage | 100,000 | $3,300 | $4,300 | Claude Sonnet 4.6 -$1,000 |
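The projection is a straight multiplication of per-request cost by volume. A sketch reproducing the table (per-request figures taken from the "typical request" row above; the helper is illustrative, not the site's calculator):

```python
# Monthly projection: per-request cost (dollars) times request volume.
def monthly_cost(requests_per_month, per_request_cost):
    return requests_per_month * per_request_cost

PER_REQUEST = {"Claude Sonnet 4.6": 0.033, "GPT-5.5": 0.043}  # 1k in / 2k out

for reqs in (1_000, 10_000, 100_000):
    claude = monthly_cost(reqs, PER_REQUEST["Claude Sonnet 4.6"])
    gpt = monthly_cost(reqs, PER_REQUEST["GPT-5.5"])
    print(f"{reqs:>7,} req/mo: ${claude:,.2f} vs ${gpt:,.2f}")
```

The delta scales linearly: every 10x in traffic multiplies the monthly saving by 10.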

Which should you use?

For the typical chat-shape request (~1k input, 2k output), Claude Sonnet 4.6 comes out roughly 23% cheaper ($0.033 vs $0.043; equivalently, GPT-5.5 costs about 30% more). If you're picking one as the default, that's usually the right choice on cost alone.

Context window differs: GPT-5.5 accepts up to 1.1M input tokens vs 1M for Claude Sonnet 4.6. If you regularly push past the smaller ceiling, the comparison ends there.

Live cost calculator

Type in any token counts - both prices update instantly. Uses base input/output rates (no cache discount, no long-context tier).

Claude Sonnet 4.6: $0.0105 per request

GPT-5.5: $0.0130 per request

Claude Sonnet 4.6 is $0.0025 cheaper per request (19.2% less).
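The delta line is a difference plus a percentage against the pricier model. The token shape behind the example figures isn't stated on the page, but 1,000 input / 500 output reproduces them at base rates (an assumption, labeled as such below):

```python
# Assumed token shape (1,000 in / 500 out) that reproduces the example figures
# above at base rates; not stated by the page itself.
claude = (1_000 * 3.00 + 500 * 15.00) / 1_000_000  # $0.0105
gpt = (1_000 * 3.00 + 500 * 20.00) / 1_000_000     # $0.0130

delta = gpt - claude             # $0.0025 per request
pct_cheaper = delta / gpt * 100  # 19.2%, measured against the pricier model
```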

Try both in the estimator →

Drop your actual prompt in; tokens are counted with each provider's own tokenizer, so the dollar figure matches what lands on your invoice.

Frequently asked

Which is cheaper, Claude Sonnet 4.6 or GPT-5.5?
On a typical 1,000-input / 2,000-output request, Claude Sonnet 4.6 costs ~$0.0330 vs ~$0.0430 on GPT-5.5. Because the input rates are identical and Claude's output rate is lower, Claude Sonnet 4.6 is cheaper at every workload shape - see the cost ladder above.
What's the difference in per-token pricing?
Claude Sonnet 4.6 charges $3.00 per 1M input tokens and $15.00 per 1M output tokens. GPT-5.5 charges $3.00 / $20.00 per 1M.
Which has the bigger context window?
GPT-5.5 is larger: 1.1M tokens vs 1M for Claude Sonnet 4.6.
Is there a cached-input discount on either?
Claude Sonnet 4.6 does not publish a cached-input rate. GPT-5.5 caches input at $0.30 per 1M (90% off the $3.00 base rate). Workloads with repeated static prefixes see the biggest savings.
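A back-of-envelope sketch of that cache saving on GPT-5.5, assuming a hypothetical request with a 9k-token static prefix and 1k fresh tokens (the split is my example, not a published figure):

```python
# Cost with and without GPT-5.5's cached-input rate. All rates are $/1M tokens.
def cached_request_cost(cached_in, fresh_in, tokens_out,
                        in_rate, out_rate, cached_rate):
    return (cached_in * cached_rate + fresh_in * in_rate
            + tokens_out * out_rate) / 1_000_000

# GPT-5.5: $3.00 in, $20.00 out, $0.30 cached in.
with_cache = cached_request_cost(9_000, 1_000, 2_000, 3.00, 20.00, 0.30)  # $0.0457
no_cache = cached_request_cost(0, 10_000, 2_000, 3.00, 20.00, 0.30)       # $0.0700
```

At this split, caching cuts the request cost by roughly a third; output tokens dominate what remains.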
How fresh is this comparison?
Claude Sonnet 4.6 was re-verified on 2026-04-06 and GPT-5.5 on 2026-04-29 against each provider's published rate card. Calcis re-checks every row on a rolling schedule and re-deploys when a provider changes pricing.

Claude Sonnet 4.6 verified 2026-04-06 · GPT-5.5 verified 2026-04-29. Rate cards at Anthropic and OpenAI.