Universal AI Prompt Builder
v3.0 · multi-provider

Build Your Prompt

Choose a provider, fill in the sections you need, then click Generate to produce a ready-to-use prompt file or paste-ready text.

Select AI Provider
Claude (Anthropic)
ChatGPT (OpenAI)
Gemini (Google)
DeepSeek (DeepSeek AI)
Copilot (Microsoft)
Grok (xAI)
Perplexity (Perplexity AI)
Mistral (Mistral AI)
Qwen (Alibaba Cloud)
Generates a YAML-style prompt file compatible with the Claude API and the Claude.ai chat interface.
01
Model
Which Claude model to target
Required +
Model
⚡ This is a reasoning model — it thinks before responding. Chain-of-thought instructions may be redundant. Temperature is typically fixed at 1. Parameters like top_p may have limited effect.
02
System Prompt
Role, persona, tone & global constraints
Required +
ℹ️ Copilot does not expose a system prompt API. These instructions will be included as the opening of your chat message.
Role / Persona (e.g. "a senior software engineer")
Primary Goal (e.g. "help users debug code")
Tone
Output Language
Audience (who reads the output?)
Always do
Never do
03
Conversation History
Prior turns for multi-turn context
Optional +
04
User Message
Your core task or question
Required +
Task / Question
Context (background info)
Input Data (code, CSV, text to process)
05
Output Format
Structure and shape of the response
Optional +
Format Type
06
Examples (Few-Shot)
Input/output pairs to guide the model
Optional +
07
Chain-of-Thought Reasoning
Ask the model to reason before answering
Optional +
08
Response Parameters
Token limits & sampling controls
Optional +
Max Tokens ?

Max Tokens

Upper limit on tokens the model may generate. ~0.75 words per token in English.

Value   | ~Words | Use For
64–256  | 50–190 | Short answers
512     | ~380   | Summaries
1024    | ~760   | Detailed replies
2048    | ~1500  | Articles, code
4096+   | 3000+  | Long documents

Setting this too low truncates the response mid-sentence.
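The ~0.75 words-per-token rule of thumb above can be expressed as a tiny helper. This is a rough estimate only; real tokenizers vary by language and content, and exact counts require the provider's own tokenizer:

```python
def estimate_words(max_tokens: int) -> int:
    """Rough English word count for a token budget (~0.75 words/token)."""
    return round(max_tokens * 0.75)

print(estimate_words(512))   # → 384, close to the ~380 in the table
print(estimate_words(1024))  # → 768
```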

Temperature 0–1 ?

Temperature

Controls randomness. Higher = more creative; lower = more focused and deterministic.

Value   | Character     | Best For
0.0     | Deterministic | Fact lookup, classification
0.1–0.3 | Very focused  | Code, data extraction
0.5–0.7 | Balanced      | General Q&A, summaries
0.8–1.0 | Creative      | Brainstorming, copywriting
1.2–2.0 | Highly random | Experimental / poetry

Avoid combining a high Temperature with a low Top P.
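Under the hood, temperature rescales the model's logits before sampling. A minimal illustration using a plain softmax (a conceptual sketch; actual provider implementations may differ):

```python
import math

def softmax_with_temperature(logits, temperature):
    """Lower temperature sharpens the distribution (more deterministic);
    higher temperature flattens it (more random)."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]
print(softmax_with_temperature(logits, 0.2))  # top token dominates
print(softmax_with_temperature(logits, 1.5))  # probabilities even out
```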

Top P N/A ?

Top P (Nucleus Sampling)

Only the smallest set of most-probable tokens whose cumulative probability reaches P is considered. This filters out unlikely tokens without a hard count cut-off.

Value | Pool       | Effect
0.5   | 50% mass   | Very tight, predictable
0.75  | 75% mass   | Focused + some variety
0.90  | 90% mass   | Balanced (common default)
0.95  | 95% mass   | Creative, broad vocabulary
1.0   | All tokens | No filtering applied

Adjust either Temperature or Top P — tuning both makes behaviour harder to predict.
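Nucleus sampling can be sketched as follows; this is an illustrative re-implementation of the idea, not any provider's actual code:

```python
def nucleus_filter(probs, top_p):
    """Return indices of the smallest set of tokens whose cumulative
    probability reaches top_p, in descending probability order."""
    ranked = sorted(enumerate(probs), key=lambda kv: kv[1], reverse=True)
    kept, cumulative = [], 0.0
    for idx, p in ranked:
        kept.append(idx)
        cumulative += p
        if cumulative >= top_p - 1e-9:  # small epsilon for float rounding
            break
    return kept

probs = [0.5, 0.3, 0.1, 0.06, 0.04]
print(nucleus_filter(probs, 0.90))  # → [0, 1, 2]: 90% of the mass
print(nucleus_filter(probs, 0.50))  # → [0]: very tight
```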

Top K N/A ?

Top K

At each step only the K most probable tokens are considered. A hard limit on vocabulary breadth.

Value   | Pool     | Effect
1       | Greedy   | Always picks the most likely token
10–20   | Small    | Coherent, conservative
40      | Medium   | Good balance (Anthropic default)
100–200 | Large    | Diverse, creative outputs
0 / -1  | Disabled | No K filtering (use Top P only)

Not all APIs expose Top K; when unsupported, the field is disabled.
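Top K is simpler than Top P: a fixed-size cut rather than a probability-mass cut. A conceptual sketch, including the "0 / -1 disables it" convention from the table:

```python
def top_k_filter(probs, k):
    """Return indices of the k most probable tokens.
    k=1 is greedy decoding; k <= 0 disables the filter entirely."""
    if k <= 0:
        return list(range(len(probs)))
    ranked = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    return ranked[:k]

probs = [0.1, 0.6, 0.2, 0.1]
print(top_k_filter(probs, 1))  # → [1]: greedy, always the most likely
print(top_k_filter(probs, 2))  # → [1, 2]
print(top_k_filter(probs, 0))  # → [0, 1, 2, 3]: filtering disabled
```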

09
Stop Sequences
Strings that halt generation
Optional +
One per line
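Conceptually, a stop sequence cuts generation at its first occurrence. A rough client-side equivalent for intuition only; real APIs stop during generation and never emit the sequence itself:

```python
def apply_stop_sequences(text, stops):
    """Truncate text at the earliest occurrence of any stop sequence."""
    cut = len(text)
    for stop in stops:
        idx = text.find(stop)
        if idx != -1:
            cut = min(cut, idx)
    return text[:cut]

print(apply_stop_sequences("Answer: 42\n###\nscratch work", ["###", "END"]))
# → "Answer: 42\n" (everything from "###" onward is dropped)
```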
10
Metadata & Caching
Tracking IDs & prompt cache control
Optional +
User ID
Session ID
11
Tools / Function Calling
Functions the model can invoke
Optional +
12
Vision / File Input
Images and documents as input
Optional +
PROMPT CONFIG
Generated Prompt
Fill in the form above and click Generate
💡 How to Use
  1. Choose a provider and model, fill in the required fields, then click ⚡ GENERATE PROMPT to build your prompt.
  2. ⚡ GENERATE PROMPT produces a prompt tailored to the selected AI platform and model; the output format adapts per provider.
  3. 🌐 UNIVERSAL PROMPT produces a generic, structured prompt as a .txt file that can be uploaded or pasted into any AI assistant or platform.
  4. 🐍 PYTHON CODE produces a ready-to-run Python script with all imports and API setup, suitable for embedding in a program that uses the provider's API.
  5. 💾 SAVE CONFIG saves your current prompt (all entered data and selections) to a .promptjson file for later reuse.
  6. 📂 LOAD CONFIG reloads a previously saved configuration from a .promptjson file created with 💾 SAVE CONFIG.
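The save/load round trip above is plain JSON under the hood. A hypothetical sketch of what it could look like; the field names and .promptjson schema shown here are assumptions for illustration, not the tool's actual format:

```python
import json

# Assumed example structure -- the real .promptjson schema is defined by the tool.
config = {
    "provider": "claude",
    "model": "example-model",  # placeholder, not a real model ID
    "system_prompt": {
        "role": "a senior software engineer",
        "goal": "help users debug code",
    },
    "parameters": {"max_tokens": 1024, "temperature": 0.7},
}

# SAVE CONFIG: serialize the form state to disk.
with open("my_prompt.promptjson", "w", encoding="utf-8") as f:
    json.dump(config, f, indent=2)

# LOAD CONFIG: read it back to repopulate the form.
with open("my_prompt.promptjson", encoding="utf-8") as f:
    loaded = json.load(f)

assert loaded == config  # round trip preserves every field
```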
