Build Your Prompt
Choose a provider, fill in the sections you need, then click Generate to produce a ready-to-use prompt file or paste-ready text.
Max Tokens
The upper limit on the number of tokens the model may generate. One token corresponds to roughly 0.75 English words.
| Value | ~Words | Use For |
|---|---|---|
| 64–256 | 50–190 | Short answers |
| 512 | ~380 | Summaries |
| 1024 | ~760 | Detailed replies |
| 2048 | ~1500 | Articles, code |
| 4096+ | 3000+ | Long documents |
Setting it too low truncates the response mid-sentence.
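The token-to-word estimate above can be expressed as a one-liner. Note that the 0.75 ratio is a rough rule of thumb for English text, not an exact conversion:

```python
def tokens_to_words(max_tokens: int) -> int:
    """Rough English estimate: ~0.75 words per token."""
    return round(max_tokens * 0.75)

print(tokens_to_words(512))   # ~384 words, matching the table row for 512
print(tokens_to_words(2048))  # ~1536 words
```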
Temperature
Controls randomness. Higher = more creative; lower = more focused and deterministic.
| Value | Character | Best For |
|---|---|---|
| 0.0 | Deterministic | Fact lookup, classification |
| 0.1–0.3 | Very focused | Code, data extraction |
| 0.5–0.7 | Balanced | General Q&A, summaries |
| 0.8–1.0 | Creative | Brainstorming, copywriting |
| 1.2–2.0 | Highly random | Experimental / poetry |
Avoid combining a high Temperature with a low Top P.
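To see what Temperature does mechanically, here is a minimal sketch of temperature-scaled softmax over a toy set of logits. This illustrates the general technique only; it is not any provider's actual sampler:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Temperature rescales logits before softmax:
    low T sharpens the distribution, high T flattens it."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                          # subtract max for numeric stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]
print(softmax_with_temperature(logits, 0.2))  # near-deterministic: top token dominates
print(softmax_with_temperature(logits, 1.5))  # flatter: probability spread more evenly
```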
Top P (Nucleus Sampling)
Tokens are ranked by probability, and only the smallest set whose cumulative probability reaches P is considered. This filters out unlikely tokens without a hard count cut-off.
| Value | Pool | Effect |
|---|---|---|
| 0.5 | 50% mass | Very tight, predictable |
| 0.75 | 75% mass | Focused + some variety |
| 0.90 | 90% mass | Balanced (common default) |
| 0.95 | 95% mass | Creative, broad vocabulary |
| 1.0 | All tokens | No filtering applied |
Adjust either Temperature or Top P — tuning both makes behaviour harder to predict.
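The nucleus-filtering step can be sketched over a toy vocabulary. Again, this is an illustration of the mechanism, not a provider's implementation:

```python
def nucleus_filter(probs, top_p):
    """Keep the smallest set of most-probable tokens whose
    cumulative probability reaches top_p."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, cumulative = [], 0.0
    for token, p in ranked:
        kept.append(token)
        cumulative += p
        if cumulative >= top_p:
            break
    return kept

vocab = {"the": 0.5, "a": 0.3, "an": 0.15, "zzz": 0.05}
print(nucleus_filter(vocab, 0.75))  # ['the', 'a'] — 0.8 mass covers the 0.75 target
print(nucleus_filter(vocab, 1.0))   # all tokens kept: no filtering applied
```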
Top K
At each step only the K most probable tokens are considered. A hard limit on vocabulary breadth.
| Value | Pool | Effect |
|---|---|---|
| 1 | Greedy | Always picks most likely |
| 10–20 | Small | Coherent, conservative |
| 40 | Medium | Good balance (Anthropic default) |
| 100–200 | Large | Diverse, creative outputs |
| 0 / -1 | Disabled | No K filtering (use Top P only) |
Not all APIs expose Top K. When unsupported the field is disabled.
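Top K is the simplest filter of the three; a toy sketch (not any API's real code) shows the hard cut-off, with `k <= 0` treated as "disabled" to mirror the table above:

```python
def top_k_filter(probs, k):
    """Keep only the k most probable tokens; k <= 0 disables filtering."""
    if k <= 0:
        return list(probs)
    ranked = sorted(probs, key=probs.get, reverse=True)
    return ranked[:k]

vocab = {"the": 0.5, "a": 0.3, "an": 0.15, "zzz": 0.05}
print(top_k_filter(vocab, 1))  # ['the'] — greedy: always the most likely token
print(top_k_filter(vocab, 0))  # disabled: every token passes through
```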
1. Choose a provider and model, fill in the required fields, then click ⚡ GENERATE PROMPT to build your prompt.
2. ⚡ GENERATE PROMPT generates a prompt specific to the AI platform and model selected; the output format adapts per provider.
3. 🌐 UNIVERSAL PROMPT generates a generic, structured prompt as a .txt file that can be uploaded or pasted into any AI assistant or platform.
4. 🐍 PYTHON CODE generates a ready-to-run Python script, with all imports and API setup, suitable for embedding in a program that uses the provider's API.
5. 💾 SAVE CONFIG saves your current prompt, including all entered data and selections, to a .promptjson file for later reuse.
6. 📂 LOAD CONFIG restores a previously saved configuration from a .promptjson file created with 💾 SAVE CONFIG.
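The save/load round-trip can be approximated in plain Python with the standard `json` module. The field names below are a hypothetical schema chosen for illustration; the actual .promptjson layout is defined by the tool:

```python
import json
import os
import tempfile

# Hypothetical config schema -- illustrative only, not the tool's real format.
config = {
    "provider": "anthropic",
    "max_tokens": 1024,
    "temperature": 0.7,
    "top_p": 0.9,
    "prompt": "Summarise the attached report.",
}

path = os.path.join(tempfile.gettempdir(), "my_prompt.promptjson")

# SAVE CONFIG: serialise the current settings to disk.
with open(path, "w", encoding="utf-8") as f:
    json.dump(config, f, indent=2)

# LOAD CONFIG: read the file back into a dict.
with open(path, encoding="utf-8") as f:
    loaded = json.load(f)

print(loaded == config)  # True — JSON round-trips this dict losslessly
```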