Know before you send.
Exact token counts and spend projections for any prompt, across every major model.
Prompt
Prompts aren't stored. OpenAI token counts run locally; Claude and Gemini prompts are sent to the providers' count-only endpoints. Privacy details
Target model
INPUT
$0.90
/ 1M tok
OUTPUT
$5.50
/ 1M tok
CONTEXT
400K
tokens
Reasoning effort
1.0× baseline. Higher effort lets the model reason longer: reasoning tokens bill at the output rate, so cost scales with the effort level.
Medium · baseline
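The arithmetic behind the per-request figure can be sketched as follows. This is a minimal illustration, not the tool's actual implementation: the rates are the GPT-5.5 mini figures shown on this page ($0.90 input, $5.50 output per 1M tokens), and the function name and reasoning-token parameter are hypothetical.

```python
# Rates from the pricing card above, converted to $ per token.
INPUT_RATE = 0.90 / 1_000_000
OUTPUT_RATE = 5.50 / 1_000_000

def request_cost(prompt_tokens: int, response_tokens: int,
                 reasoning_tokens: int = 0) -> float:
    """Projected spend for a single request (illustrative sketch).

    Reasoning tokens bill at the output rate, which is why raising
    the effort level increases cost even when the visible response
    stays the same size.
    """
    return (prompt_tokens * INPUT_RATE
            + (response_tokens + reasoning_tokens) * OUTPUT_RATE)

# Example: a 1,200-token prompt with a ~400-token projected response.
cost = request_cost(1_200, 400)  # → 0.00328
```

Doubling the reasoning budget (e.g. `reasoning_tokens=400`) adds another 400 × $5.50/1M to the bill, which is the "cost scales with the level" behavior the effort slider describes.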
Forecast
Projected spend · single request
$0.00
Input volume
0
tokens
Prompt · 0 tok · $0.00
Response · ~0 tok projected
Monthly spend projection
1K req/mo
$0.00
10K req/mo
$0.00
100K req/mo
$0.00
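The monthly tiers above are a straight multiplication of the single-request figure by request volume. A minimal sketch, assuming the hypothetical `request_cost` rates from this page's pricing card:

```python
def monthly_spend(cost_per_request: float, requests_per_month: int) -> float:
    """Projected monthly spend: per-request cost scaled by volume."""
    return cost_per_request * requests_per_month

# Example: $0.00328 per request across the three volume tiers shown.
tiers = {n: monthly_spend(0.00328, n) for n in (1_000, 10_000, 100_000)}
# 1K req/mo ≈ $3.28, 10K ≈ $32.80, 100K ≈ $328.00
```

Because the projection is linear in volume, any error in the per-request estimate (e.g. a longer-than-projected response) scales up by the same factor at higher tiers.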
GPT-5.5 mini · 400K context
Estimates only. Actual charges are set by the LLM provider at request time and may differ. Learn more.
Modelling a multi-turn workflow? Open the Session Simulator →