The free prompt engineering guide
The seven dimensions of a great AI prompt — for ChatGPT, Claude, Gemini, and any other large language model. Practical, no theory, worked examples throughout.
Why prompt engineering matters
The difference between a mediocre AI response and a brilliant one almost always comes down to the prompt, not the model. The same GPT-4, Claude Sonnet, or Gemini that produces filler from a vague request will produce a polished, on-brief answer from a well-engineered one. This is true across every major model and every task.
Promptrace breaks down what “well-engineered” actually means into seven dimensions. Each one is independently measurable, each one has a meaningful effect on output quality, and together they predict almost all of the variance in how good a prompt is. You can score any prompt against these dimensions for free using the Promptrace prompt scorer.
The rest of this guide walks through each dimension with examples of what works and what doesn't.
1. Role
Scorer weight: 12%. Tell the AI who to be before you tell it what to do.
Models behave very differently depending on the role you assign them. "You are a senior B2B marketing strategist with 10 years of experience" produces a noticeably more rigorous answer than "Write me a marketing plan." The role acts as a soft prior over tone, vocabulary, and the kind of evidence the model will reach for. For best results, name the seniority, the domain, and (if it matters) the audience the role usually speaks to.
- Bad: "Help me with marketing."
- Good: "You are a senior performance marketer at a B2B SaaS company."
2. Task clarity
Scorer weight: 18%. Use specific action verbs and one task per sentence.
Vague verbs like "help with" or "do something about" force the model to guess what you mean. Strong action verbs — write, analyse, compare, summarise, generate, classify, refactor, translate — give the model an unambiguous directive. If you have multiple tasks, list them as a numbered set rather than burying them in a single run-on sentence.
- Bad: "Look at my landing page and let me know what you think."
- Good: "(1) Analyse the headline of my landing page below for clarity. (2) List three concrete rewrites. (3) Explain which would best appeal to a CFO."
3. Specificity
Scorer weight: 20%. Numbers, proper nouns, and concrete details anchor the model.
Prompts containing numbers, named entities, dates, and quoted text consistently outperform vague ones. "Write a blog post" produces filler. "Write a 1,200-word blog post for CTOs at Series-B fintechs explaining why event sourcing makes audit logs cheaper" produces something usable. Specificity is the single highest-weight dimension in the Promptrace scorer because it correlates more strongly with output quality than any other factor.
- Bad: "Write a few social posts."
- Good: "Write 5 LinkedIn posts of 80–120 words each, targeting head-of-marketing roles at UK Series-B SaaS companies, about the cost of bad onboarding."
4. Context
Scorer weight: 15%. Give the model the background a human collaborator would need.
Audience, purpose, tone, prior decisions, constraints from the brief — everything you'd tell a freelancer in a kickoff call belongs in the prompt. Models have no memory of who you are, what you're building, or who you're writing for unless you tell them. A two-sentence context paragraph at the top of a prompt is usually the cheapest possible quality upgrade.
- Bad: "Write a product description."
- Good: "Write a product description for a £45 wool blanket sold by a UK direct-to-consumer brand whose voice is warm, plain-spoken, and slightly self-deprecating. The audience is 30–45 year-old urban professionals buying for their own homes."
5. Output format
Scorer weight: 15%. Specify exactly what shape the answer should take.
Bullet list? Markdown table? JSON? Numbered steps? Word count? Section headings? If you don't specify, you get the model's default — which is usually a wall of prose. Output format constraints also dramatically reduce hallucination on structured tasks: "Return only valid JSON matching this schema" is far more reliable than "give me the data."
- Bad: "Compare these three tools."
- Good: "Compare these three tools as a markdown table with columns for Price, Best for, Killer feature, and Biggest weakness. Limit each cell to 12 words."
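The "Return only valid JSON matching this schema" pattern pays off downstream, because you can reject malformed replies before they reach your application. A minimal sketch (the key names here are hypothetical, matching the table columns above; adapt them to whatever schema your prompt specifies):

```python
import json

# Hypothetical required keys, mirroring the table columns in the example prompt.
REQUIRED_KEYS = {"price", "best_for", "killer_feature", "biggest_weakness"}

def parse_model_reply(reply: str) -> dict:
    """Reject anything that isn't valid JSON with the keys the prompt asked for."""
    data = json.loads(reply)  # raises an error on prose or markdown-wrapped output
    missing = REQUIRED_KEYS - data.keys()
    if missing:
        raise ValueError(f"model omitted keys: {sorted(missing)}")
    return data

reply = ('{"price": "$29/mo", "best_for": "solo founders", '
         '"killer_feature": "templates", "biggest_weakness": "no API"}')
print(parse_model_reply(reply)["price"])  # → $29/mo
```

If the model answers with prose instead of JSON, `json.loads` fails loudly, which is exactly the signal you want for a retry.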
6. Constraints
Scorer weight: 10%. Say what NOT to do, and how long the answer should be.
Constraints prevent the model from wandering. "Don't use marketing buzzwords." "Avoid the words 'leverage' and 'utilise'." "Maximum 200 words." "Write in British English." Each one is a guardrail that costs you nothing and meaningfully tightens the output.
- Bad: "Write a CEO update."
- Good: "Write a CEO update under 250 words. No bullet points. No buzzwords. British English. End with one specific ask of the board."
7. Examples
Scorer weight: 10%. Show the model one or two worked examples of what good looks like.
Few-shot prompting is one of the most consistently effective techniques in the literature. A single example of the input/output mapping you want is often worth a paragraph of explanation. If you have a "good" version and a "bad" version, show both — the contrast helps the model understand the direction you want to push in.
- Bad: "Write taglines in our voice."
- Good: "Write 5 taglines in our voice. Examples of past taglines that worked: 'Boring is the new bold.' 'We don't do confetti.' Examples of taglines that did NOT match our voice: 'Unleash your potential.' 'Empower your journey.'"
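When you call a model programmatically, the same contrast pattern can be assembled as a message list. A sketch, assuming the common role/content message shape used by most chat-completion APIs (adapt it to your provider's client library):

```python
# Minimal sketch: build a few-shot prompt from good and bad examples.
# The taglines are the ones from the guide above; the message structure
# is an assumption about your chat API's expected format.
def build_few_shot_prompt(task: str, good: list[str], bad: list[str]) -> list[dict]:
    examples = (
        "Taglines that matched our voice:\n"
        + "\n".join(f"- {t}" for t in good)
        + "\n\nTaglines that did NOT match our voice:\n"
        + "\n".join(f"- {t}" for t in bad)
    )
    return [
        {"role": "system", "content": "You write taglines in our brand voice."},
        {"role": "user", "content": f"{examples}\n\n{task}"},
    ]

messages = build_few_shot_prompt(
    "Write 5 new taglines.",
    good=["Boring is the new bold.", "We don't do confetti."],
    bad=["Unleash your potential.", "Empower your journey."],
)
```

Keeping the good and bad examples in a list makes it trivial to grow the set as you discover which outputs land.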
Putting it all together
A great AI prompt isn't a magic incantation — it's a clear brief written for a fast, eager, slightly literal collaborator who has no memory of you. Assign the role, state the task, give the context, name the format, set the constraints, show an example, and you'll consistently get usable answers on the first try.
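The seven weights quoted above sum to 100%. Purely as an illustration of the arithmetic (the Promptrace scorer's actual method isn't documented here), combining per-dimension ratings into one score with those weights looks like:

```python
# Illustrative only: a weighted sum over per-dimension ratings (0-100),
# using the weights quoted in this guide. How the real Promptrace scorer
# combines dimensions is an assumption, not documented behaviour.
WEIGHTS = {
    "role": 0.12, "task_clarity": 0.18, "specificity": 0.20,
    "context": 0.15, "output_format": 0.15, "constraints": 0.10,
    "examples": 0.10,
}

def overall_score(ratings: dict[str, float]) -> float:
    return sum(WEIGHTS[d] * ratings[d] for d in WEIGHTS)

ratings = {d: 80 for d in WEIGHTS}    # a prompt rated 80 on every dimension
print(round(overall_score(ratings)))  # → 80, since the weights sum to 1.0
```

The weighting explains why skipping specificity (20%) hurts your score twice as much as skipping constraints or examples (10% each).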
The fastest way to internalise the framework is to score your real prompts against it and see which dimensions you skip. The Promptrace scorer is free and doesn't require an account. You can also browse the AI prompt library for hundreds of examples that already score 85+.
Score your first prompt now
Free, instant, no signup. See exactly which dimensions your prompts are missing.
Open the prompt scorer →