Pricing comparison

Gemini 2.5 Flash vs GPT-5.4 nano

Per-token pricing, full-workload cost ladders, and monthly volume projections. Numbers sourced directly from each provider's rate card.

Google

Gemini 2.5 Flash

Input
$0.30 / 1M
Output
$2.50 / 1M
Cached input
$0.030 / 1M
Context
1M
Max output
-

OpenAI

GPT-5.4 nano

Input
$0.20 / 1M
Output
$1.25 / 1M
Cached input
$0.020 / 1M
Context
400K
Max output
128K
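Both cached-input rates are a 90% discount on the base input rate, so the effective input cost depends on how much of each prompt hits the cache. A minimal sketch (hypothetical helper, rates hard-coded from the cards above):

```python
def blended_input_rate(base_per_1m: float, cached_per_1m: float, hit_rate: float) -> float:
    """Effective $/1M input when `hit_rate` of input tokens are served from cache."""
    return hit_rate * cached_per_1m + (1.0 - hit_rate) * base_per_1m

# GPT-5.4 nano with 80% of input tokens cached (e.g. a long static system prompt):
print(blended_input_rate(0.20, 0.020, 0.80))  # ~$0.056 per 1M input tokens
```

A workload with a large shared prefix can cut its input bill by more than half this way; output tokens are unaffected.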

Cost per request

Four common workload shapes, from a short chat turn with a 1:2 input-to-output ratio (the standard chat/completion pattern) to large, input-heavy prompts. Long-context surcharges apply automatically where the provider charges them.

Scenario        | Tokens (in / out) | Gemini 2.5 Flash | GPT-5.4 nano | Winner
Short prompt    | 100 / 200         | $0.0005          | $0.0003      | GPT-5.4 nano
Typical request | 1,000 / 2,000     | $0.0053          | $0.0027      | GPT-5.4 nano
Long document   | 10,000 / 5,000    | $0.0155          | $0.0083      | GPT-5.4 nano
Large prompt    | 100,000 / 10,000  | $0.0550          | $0.0325      | GPT-5.4 nano
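The ladder is plain per-token arithmetic. A minimal sketch that reproduces the rows, assuming base rates only (no cache discount, no long-context tier):

```python
# Per-1M-token rates (USD) from the rate cards above.
RATES = {
    "gemini-2.5-flash": {"input": 0.30, "output": 2.50},
    "gpt-5.4-nano": {"input": 0.20, "output": 1.25},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Base cost of one request, ignoring cache discounts and long-context tiers."""
    r = RATES[model]
    return (input_tokens * r["input"] + output_tokens * r["output"]) / 1_000_000

# The "typical request" row: 1,000 in / 2,000 out.
print(round(request_cost("gemini-2.5-flash", 1_000, 2_000), 4))  # 0.0053
print(round(request_cost("gpt-5.4-nano", 1_000, 2_000), 4))      # 0.0027
```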

Monthly bill at scale

Projected monthly cost at typical request volume, assuming the "typical request" shape above (1k in, 2k out).

Traffic         | Req / month | Gemini 2.5 Flash | GPT-5.4 nano | Delta
Small SaaS      | 1,000       | $5.30            | $2.70        | GPT-5.4 nano -$2.60
Growing product | 10,000      | $53.00           | $27.00       | GPT-5.4 nano -$26.00
Heavy usage     | 100,000     | $530.00          | $270.00      | GPT-5.4 nano -$260.00
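The projection is linear: the per-request cost of the "typical request" shape times monthly volume. A quick sketch (hypothetical helper; per-request figures taken from the cost ladder):

```python
def monthly_cost(per_request_usd: float, requests_per_month: int) -> float:
    """Straight-line projection, assuming no volume discounts apply."""
    return per_request_usd * requests_per_month

# "Growing product" row: 10,000 typical requests (1k in / 2k out) per month.
print(round(monthly_cost(0.0053, 10_000), 2))  # Gemini 2.5 Flash: 53.0
print(round(monthly_cost(0.0027, 10_000), 2))  # GPT-5.4 nano: 27.0
```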

Which should you use?

For the typical chat-shape request (~1k input, 2k output), GPT-5.4 nano comes out roughly 49% cheaper ($0.0027 vs $0.0053). If you're picking one as the default, that's usually the right choice on cost alone.

GPT-5.4 nano wins on both sides of the bill - cheaper input and cheaper output. If its quality is acceptable for your workload, there's no cost argument for Gemini 2.5 Flash.

Context window differs: Gemini 2.5 Flash accepts 1M input tokens vs GPT-5.4 nano's 400K. If you regularly push past the smaller ceiling, the comparison ends there.

Live cost calculator

Type in any token counts - both prices update instantly. Uses base input/output rates (no cache discount, no long-context tier).

Gemini 2.5 Flash: $0.0015 per request
GPT-5.4 nano: $0.0008 per request

GPT-5.4 nano is $0.0007 cheaper per request (46.8% less).

Try both in the estimator →

Drop your actual prompt in; tokens are counted with each provider's own tokenizer, so the dollar figure matches what lands on your invoice.

Frequently asked

Which is cheaper, Gemini 2.5 Flash or GPT-5.4 nano?
On a typical 1,000-input / 2,000-output request, GPT-5.4 nano costs ~$0.0027 vs ~$0.0053 on Gemini 2.5 Flash. Because GPT-5.4 nano's input and output rates are both lower, it stays cheaper at every workload shape - see the cost ladder above.
What's the difference in per-token pricing?
Gemini 2.5 Flash charges $0.30 per 1M input tokens and $2.50 per 1M output tokens. GPT-5.4 nano charges $0.20 / $1.25 per 1M.
Which has the bigger context window?
Gemini 2.5 Flash is larger: 1M tokens vs GPT-5.4 nano's 400K.
Is there a cached-input discount on either?
Gemini 2.5 Flash caches at $0.030 per 1M (90% off). GPT-5.4 nano caches at $0.020 per 1M (90% off). Workloads with repeated static prefixes see the biggest savings.
How fresh is this comparison?
Gemini 2.5 Flash was re-verified on 2026-04-06 and GPT-5.4 nano on 2026-04-06 against each provider's published rate card. Calcis re-checks every row on a rolling schedule and re-deploys when a provider changes pricing.

Gemini 2.5 Flash verified 2026-04-06 · GPT-5.4 nano verified 2026-04-06. Rate cards at Google and OpenAI.