Prompt vs Completion Cost Calculator

Paste your prompt and the model's completion separately to see the exact token count and USD cost for each side of the API call.


Most LLM APIs charge differently for prompt tokens (what you send) versus completion tokens (what the model generates). Completions typically cost 3–5× more per token because generating text is computationally more intensive than processing it.
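The pricing split above can be sketched as a small function. The per-million-token prices below are illustrative assumptions, not any provider's actual rates; the 5× input/output ratio mirrors the 3–5× range mentioned above.

```python
# Sketch of the prompt-vs-completion cost split. The default prices are
# assumed for illustration only; real per-token rates vary by provider
# and model.
def api_call_cost(prompt_tokens: int, completion_tokens: int,
                  input_price_per_m: float = 3.00,    # assumed USD per 1M input tokens
                  output_price_per_m: float = 15.00   # assumed USD per 1M output tokens
                  ) -> dict:
    """Return the USD cost of each side of a single API call."""
    input_cost = prompt_tokens * input_price_per_m / 1_000_000
    output_cost = completion_tokens * output_price_per_m / 1_000_000
    return {
        "input_cost": round(input_cost, 6),
        "output_cost": round(output_cost, 6),
        "total_cost": round(input_cost + output_cost, 6),
    }

# A 1,000-token prompt with a 500-token completion: the output side
# dominates despite being half the length, because of the price ratio.
print(api_call_cost(1_000, 500))
```

With these assumed prices, the 500-token completion costs $0.0075 while the 1,000-token prompt costs only $0.003, so most of the bill comes from the shorter side of the call.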

Why Output Tokens Cost More

During generation, the model must run a full forward pass for every token it produces. Processing your input prompt (prefill) is highly parallelizable and batched across the GPU, making it cheaper per token. Use this tool to understand the true cost split of any API call.
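To estimate the cost split without calling the API, you first need token counts. A minimal sketch, using the common rule of thumb of roughly 4 characters per token for English prose; a real tokenizer (e.g. tiktoken for OpenAI models) gives exact counts.

```python
# Rough token estimate from raw text. The ~4 characters-per-token ratio
# is an approximation for English prose, not an exact tokenizer.
def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Estimate the token count of a string from its character length."""
    return max(1, round(len(text) / chars_per_token))

prompt = "Summarize the following article in three bullet points."
print(estimate_tokens(prompt))  # 55 characters -> roughly 14 tokens
```

This heuristic drifts for code, non-English text, and heavy punctuation, so treat it as a ballpark figure for budgeting rather than a billing-accurate count.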

Tips to Reduce Completion Costs

- Set a `max_tokens` cap so responses cannot run longer than you need.
- Ask for concise formats (bullet points, short JSON) instead of open-ended prose.
- Route drafts and low-stakes calls to a cheaper model, reserving the expensive one for final output.
