Pricing comparison
GPT-5 vs Gemini 2.5 Flash-Lite
Per-token pricing, full-workload cost ladders, and monthly volume projections. Numbers sourced directly from each provider's rate card.
OpenAI
GPT-5
- Input
- $1.25 / 1M
- Output
- $10.00 / 1M
- Cached input
- $0.125 / 1M
- Context
- 400K
- Max output
- 128K
Gemini 2.5 Flash-Lite
- Input
- $0.10 / 1M
- Output
- $0.40 / 1M
- Cached input
- $0.010 / 1M
- Context
- 1M
- Max output
- -
Cost per request
Four common workload shapes. The "typical request" row uses a 1:2 input-to-output ratio (a standard chat/completion pattern); the larger shapes skew input-heavy. Long-context surcharges apply automatically where the provider charges them.
| Scenario | Tokens (in / out) | GPT-5 | Gemini 2.5 Flash-Lite | Winner |
|---|---|---|---|---|
| Short prompt | 100 / 200 | $0.0021 | $0.0001 | Gemini 2.5 Flash-Lite |
| Typical request | 1,000 / 2,000 | $0.0213 | $0.0009 | Gemini 2.5 Flash-Lite |
| Long document | 10,000 / 5,000 | $0.0625 | $0.0030 | Gemini 2.5 Flash-Lite |
| Large prompt | 100,000 / 10,000 | $0.2250 | $0.0140 | Gemini 2.5 Flash-Lite |
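The ladder above is straight per-token arithmetic. A minimal sketch, using the rates from the cards above (function name is illustrative, not a provider API):

```python
# Per-request cost from per-1M-token rates (rates taken from the cards above).
def cost_per_request(tokens_in: int, tokens_out: int,
                     rate_in: float, rate_out: float) -> float:
    """Rates are USD per 1M tokens; returns USD per request."""
    return (tokens_in * rate_in + tokens_out * rate_out) / 1_000_000

# "Typical request": 1,000 input / 2,000 output tokens
gpt5 = cost_per_request(1_000, 2_000, 1.25, 10.00)        # ~$0.02125
flash_lite = cost_per_request(1_000, 2_000, 0.10, 0.40)   # ~$0.0009
```

Note this uses the base rates only; cached-input discounts would lower the first term for any cache-hit tokens.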
Monthly bill at scale
Projected monthly cost at typical request volume, assuming the "typical request" shape above (1k in, 2k out).
| Traffic | Req / month | GPT-5 | Gemini 2.5 Flash-Lite | Delta |
|---|---|---|---|---|
| Small SaaS | 1,000 | $21.25 | $0.90 | Gemini 2.5 Flash-Lite -$20.35 |
| Growing product | 10,000 | $212.50 | $9.00 | Gemini 2.5 Flash-Lite -$203.50 |
| Heavy usage | 100,000 | $2,125 | $90.00 | Gemini 2.5 Flash-Lite -$2,035 |
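The monthly figures are just the per-request cost multiplied by volume. A sketch, assuming the "typical request" shape (1k in / 2k out) and the base rates above:

```python
# Per-request cost for the "typical request" shape (1k in / 2k out), base rates.
PER_REQ_GPT5 = (1_000 * 1.25 + 2_000 * 10.00) / 1_000_000   # ~$0.02125
PER_REQ_FLASH = (1_000 * 0.10 + 2_000 * 0.40) / 1_000_000   # ~$0.0009

def monthly_cost(per_request_usd: float, requests_per_month: int) -> float:
    """Projected monthly bill in USD, rounded to cents."""
    return round(per_request_usd * requests_per_month, 2)

# e.g. "Growing product" tier at 10,000 requests/month
monthly_cost(PER_REQ_GPT5, 10_000)    # ~$212.50
monthly_cost(PER_REQ_FLASH, 10_000)   # ~$9.00
```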
Which should you use?
For the typical chat-shape request (~1k input, 2k output), GPT-5 costs roughly 24x more ($0.0213 vs $0.0009), meaning Gemini 2.5 Flash-Lite is about 96% cheaper. If you're picking one as the default, that's usually the right choice on cost alone.
Gemini 2.5 Flash-Lite wins on both sides of the bill - cheaper input and cheaper output. If its quality meets your bar, there's no cost argument for GPT-5.
Context window differs: Gemini 2.5 Flash-Lite accepts up to 1M input tokens vs GPT-5's 400K. If you regularly push past the smaller ceiling, the comparison ends there.
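Both models offer a 90% cached-input discount (rates in the cards above), which matters for workloads with a large repeated prefix. A blended effective input rate for a given cache-hit fraction can be sketched as (the 80% hit rate below is a hypothetical workload, not provider data):

```python
# Blended effective input rate when a fraction of input tokens are cache hits.
def blended_input_rate(base_rate: float, cached_rate: float,
                       cache_hit_fraction: float) -> float:
    """Rates in USD per 1M tokens; cache_hit_fraction in [0, 1]."""
    return cached_rate * cache_hit_fraction + base_rate * (1 - cache_hit_fraction)

# Hypothetical workload where 80% of input tokens hit the cache:
gpt5_blended = blended_input_rate(1.25, 0.125, 0.8)    # $0.35 / 1M effective
flash_blended = blended_input_rate(0.10, 0.010, 0.8)   # $0.028 / 1M effective
```

Even at high cache-hit rates the ranking doesn't change, since the discount percentage is the same on both sides.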
Live cost calculator
Type in any token counts - both prices update instantly. Uses base input/output rates (no cache discount, no long-context tier).
GPT-5
$0.0063
per request
Gemini 2.5 Flash-Lite
$0.0003
per request
Try both in the estimator →
Drop your actual prompt in; tokens are counted with each provider's own tokenizer, so the dollar figure matches what lands on your invoice.
Frequently asked
- Which is cheaper, GPT-5 or Gemini 2.5 Flash-Lite?
- On a typical 1,000-input / 2,000-output request, Gemini 2.5 Flash-Lite costs ~$0.0009 vs ~$0.0213 on GPT-5. Because Flash-Lite's input and output rates are both lower, it stays cheaper even for lopsided workloads - see the cost ladder above.
- What's the difference in per-token pricing?
- GPT-5 charges $1.25 per 1M input tokens and $10.00 per 1M output tokens. Gemini 2.5 Flash-Lite charges $0.10 / $0.40 per 1M.
- Which has the bigger context window?
- Gemini 2.5 Flash-Lite's 1M-token context window is larger than GPT-5's 400K.
- Is there a cached-input discount on either?
- GPT-5 caches at $0.125 per 1M (90% off). Gemini 2.5 Flash-Lite caches at $0.010 per 1M (90% off). Workloads with repeated static prefixes see the biggest savings.
- How fresh is this comparison?
- Both rate cards were re-verified on 2026-04-06 against each provider's published pricing. Calcis re-checks every row on a rolling schedule and re-deploys when a provider changes pricing.
GPT-5 verified 2026-04-06 · Gemini 2.5 Flash-Lite verified 2026-04-06. Rate cards at OpenAI and Google.