Try it live
See PromptDiff in action. No sign-up required.
How it works
From prompt to comparison in milliseconds. One request, all the data you need to make the right model choice.
POST your prompt
Send your prompt and choose which models to compare. Include system instructions or variables as needed.
We run all models in parallel
PromptDiff calls each model simultaneously, measuring latency, counting tokens, and computing costs in real time.
Get structured results
Receive a unified JSON response with each model's output, latency, cost, and token breakdown. Compare and decide.
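The per-model fields in that response can be compared programmatically. A minimal Python sketch over a sample response — the field names here ("results", "latency_ms", "cost_usd", and so on) are illustrative assumptions, not the documented schema:

```python
# Hypothetical shape of a PromptDiff comparison response.
# Field names are assumptions for illustration, not the documented schema.
sample_response = {
    "results": [
        {
            "model": "gpt-4o-mini",
            "output": "async/await lets you write asynchronous code...",
            "latency_ms": 812,
            "cost_usd": 0.00012,
            "tokens": {"prompt": 14, "completion": 96},
        },
        {
            "model": "claude-3-haiku",
            "output": "The async/await syntax pauses execution...",
            "latency_ms": 640,
            "cost_usd": 0.00009,
            "tokens": {"prompt": 14, "completion": 88},
        },
    ]
}

# Compare and decide: pick the cheapest and the fastest result.
cheapest = min(sample_response["results"], key=lambda r: r["cost_usd"])
fastest = min(sample_response["results"], key=lambda r: r["latency_ms"])
print(cheapest["model"])  # claude-3-haiku in this sample
print(fastest["model"])   # claude-3-haiku in this sample
```

The same one-liner generalizes to any criterion — for example, cost per completion token — once the real schema is known.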
Simple API, powerful results
Integrate in minutes. Works with any HTTP client.
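As one sketch of "any HTTP client", here is the same request built with Python's standard library — the endpoint, headers, and payload mirror the curl example, and the request is only constructed, not sent (the API key is a placeholder):

```python
import json
import urllib.request

# Build the compare request (stdlib only; "pd_your_api_key" is a placeholder).
payload = {
    "prompt": "Explain async/await in JavaScript in one paragraph.",
    "models": ["gpt-4o-mini", "claude-3-haiku", "gemini-1.5-flash"],
    "options": {"temperature": 0.7, "max_tokens": 300},
}
req = urllib.request.Request(
    "https://promptdiff.bizmarq.com/api/v1/compare",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer pd_your_api_key",
    },
    method="POST",
)

# To send it: urllib.request.urlopen(req) returns the unified JSON response.
print(req.method, req.full_url)
```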
curl -X POST https://promptdiff.bizmarq.com/api/v1/compare \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer pd_your_api_key" \
  -d '{
    "prompt": "Explain async/await in JavaScript in one paragraph.",
    "models": ["gpt-4o-mini", "claude-3-haiku", "gemini-1.5-flash"],
    "options": {
      "temperature": 0.7,
      "max_tokens": 300
    }
  }'
Supported models
All major providers in one comparison. More added regularly.
Check the docs for the complete and up-to-date model list.
Simple, transparent pricing
Start free. Pay only for what you use.
Note: PromptDiff pricing covers the comparison service only; underlying LLM usage is billed separately by each model provider.