Know before you send.
Exact token counts and spend projections for any prompt, across every major model.
Prompt
Target model
Reasoning effort
1.0× baseline
Higher effort lets the model reason longer; reasoning tokens bill at the output rate, so cost scales with the effort level.
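The billing rule above can be sketched in a few lines. This is an illustrative calculation only: the rate and token counts are hypothetical placeholders, not this tool's or any provider's actual numbers.

```python
# Illustrative sketch: reasoning tokens are billed at the output rate,
# so projected spend scales linearly with the effort multiplier.
OUTPUT_RATE_PER_MTOK = 10.00  # hypothetical $ per 1M output tokens

def reasoning_cost(base_reasoning_tokens: int, effort_multiplier: float) -> float:
    """Cost of projected reasoning tokens at a given effort level (1.0x = baseline)."""
    projected_tokens = base_reasoning_tokens * effort_multiplier
    return projected_tokens * OUTPUT_RATE_PER_MTOK / 1_000_000

baseline = reasoning_cost(2_000, 1.0)  # baseline effort
doubled = reasoning_cost(2_000, 2.0)   # 2x effort -> 2x reasoning spend
```

Doubling the effort multiplier doubles the projected reasoning spend, which is why the forecast moves with this slider.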
Forecast
Projected spend · single request
$0.00
Input volume
0
tokens
Prompt · 0 tok · $0.00
Response · ~0 tok projected
At 1,000 requests / month · $0.00
At 100,000 requests / month · $0.00
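The forecast panel's math reduces to per-token rates applied to input and projected output, then scaled by request volume. A minimal sketch, assuming hypothetical per-million-token rates and token counts (not any provider's real pricing):

```python
# Illustrative forecast math: per-request spend from token counts and
# per-million-token rates, then scaled to monthly request volumes.
INPUT_RATE_PER_MTOK = 1.25    # hypothetical $ per 1M input tokens
OUTPUT_RATE_PER_MTOK = 10.00  # hypothetical $ per 1M output tokens

def projected_spend(prompt_tokens: int, projected_output_tokens: int) -> float:
    """Projected cost of a single request, in dollars."""
    input_cost = prompt_tokens * INPUT_RATE_PER_MTOK / 1_000_000
    output_cost = projected_output_tokens * OUTPUT_RATE_PER_MTOK / 1_000_000
    return input_cost + output_cost

per_request = projected_spend(prompt_tokens=8_000, projected_output_tokens=1_500)
monthly_1k = per_request * 1_000      # at 1,000 requests / month
monthly_100k = per_request * 100_000  # at 100,000 requests / month
```

With these placeholder numbers, 8,000 input tokens and 1,500 projected output tokens cost $0.025 per request, so the two monthly tiers differ by exactly 100×.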
GPT-5
400K context window
Modelling a multi-turn workflow? Open the Session Simulator →