# Prompt Templates Library

Battle-tested prompt patterns for common AI tasks. Chain-of-thought, few-shot, role-playing, and more. Copy, paste, and customize.

## Description
A collection of proven prompt templates for various use cases. Each template includes the pattern, example, and tips for customization.
---

## Chain-of-Thought (CoT)
Force step-by-step reasoning for complex problems.
### Template
Solve this problem step by step:
{problem}
Think through each step carefully before giving your final answer.
Show your reasoning.
### Example
Solve this problem step by step:
A store sells apples for $2 each. If I buy 5 apples and pay with a $20 bill,
how much change do I get? Then, if I use half that change to buy oranges
at $1 each, how many oranges can I buy?
Think through each step carefully before giving your final answer.
Show your reasoning.
### Pro Tips
- Add “Let’s think step by step” to trigger reasoning
- For math: “Show your work”
- For code: “Explain your approach before writing code”
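If you fill the template programmatically, a format string is enough. A minimal Python sketch (`cot_prompt` is an illustrative helper, not a library function):

```python
# Minimal sketch: fill the chain-of-thought template programmatically.
# `cot_prompt` is an illustrative helper, not part of any library.
COT_TEMPLATE = """Solve this problem step by step:

{problem}

Think through each step carefully before giving your final answer.
Show your reasoning."""

def cot_prompt(problem: str) -> str:
    """Return the chain-of-thought template filled with a problem statement."""
    return COT_TEMPLATE.format(problem=problem.strip())

prompt = cot_prompt("A store sells apples for $2 each. If I buy 5 apples "
                    "and pay with a $20 bill, how much change do I get?")
```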
---

## Few-Shot Learning
Teach by example.
### Template
{task_description}
Examples:
Input: {example_1_input}
Output: {example_1_output}
Input: {example_2_input}
Output: {example_2_output}
Input: {example_3_input}
Output: {example_3_output}
Now do this one:
Input: {actual_input}
Output:
### Example
Convert informal text to formal business English.
Examples:
Input: hey can u send me that report asap?
Output: Dear colleague, could you please send me the report at your earliest convenience?
Input: thx for the quick reply!
Output: Thank you for your prompt response.
Input: gonna be late to the meeting, sry
Output: I apologize, but I will be arriving late to the meeting.
Now do this one:
Input: cant make it tmrw, something came up
Output:
### Pro Tips
- 3-5 examples are usually optimal
- Include edge cases in your examples
- Order matters: put the most relevant examples last
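Building this prompt from a list of example pairs keeps the examples easy to reorder and swap. A sketch (the helper name and signature are illustrative):

```python
def build_few_shot_prompt(task_description, examples, actual_input):
    """Assemble a few-shot prompt from (input, output) example pairs.

    `examples` is a list of (input, output) tuples; ordering is preserved,
    so put the most relevant pair last.
    """
    lines = [task_description, "", "Examples:", ""]
    for example_input, example_output in examples:
        lines += [f"Input: {example_input}", f"Output: {example_output}", ""]
    lines += ["Now do this one:", f"Input: {actual_input}", "Output:"]
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    "Convert informal text to formal business English.",
    [
        ("thx for the quick reply!", "Thank you for your prompt response."),
        ("gonna be late to the meeting, sry",
         "I apologize, but I will be arriving late to the meeting."),
    ],
    "cant make it tmrw, something came up",
)
```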
---

## Role-Based Prompting
Assign an expert persona.
### Template
You are a {role} with {X} years of experience in {domain}.
Your expertise includes:
- {skill_1}
- {skill_2}
- {skill_3}
{communication_style}
{task}
### Example
You are a senior software architect with 15 years of experience in distributed systems.
Your expertise includes:
- Microservices design and migration
- High-availability and fault tolerance
- Performance optimization at scale
Communicate in a clear, technical manner. Use diagrams (ASCII) when helpful.
Challenge assumptions and suggest alternatives.
Review this system design and identify potential issues:
[System design description here]
### Pro Tips
- Be specific about expertise areas
- Include communication preferences
- Add constraints (“Never recommend X” or “Always consider Y”)
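The template's fields can be assembled from structured data, which makes personas reusable across tasks. A hedged sketch (`role_prompt` is an illustrative helper):

```python
def role_prompt(role, years, domain, skills, communication_style, task):
    """Compose a role-based prompt from its parts."""
    skills_block = "\n".join(f"- {skill}" for skill in skills)
    return (
        f"You are a {role} with {years} years of experience in {domain}.\n\n"
        f"Your expertise includes:\n{skills_block}\n\n"
        f"{communication_style}\n\n"
        f"{task}"
    )

prompt = role_prompt(
    role="senior software architect",
    years=15,
    domain="distributed systems",
    skills=["Microservices design and migration",
            "High-availability and fault tolerance"],
    communication_style="Communicate in a clear, technical manner.",
    task="Review this system design and identify potential issues:",
)
```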
---

## Structured Output
Get consistent, parseable responses.
### Template
{task}
Respond in the following JSON format:
```json
{
  "field_1": "description",
  "field_2": ["list", "items"],
  "field_3": {
    "nested": "object"
  }
}
```

Only output valid JSON. No additional text.
### Example
Analyze the sentiment of this product review and extract key points.
Review: “The laptop is incredibly fast and the display is gorgeous. However, the battery life is disappointing and it runs quite hot under load. Still, for the price, it’s a solid choice.”
Respond in the following JSON format:
```json
{
  "sentiment": "positive|negative|mixed",
  "score": 0.0-1.0,
  "pros": ["list of positives"],
  "cons": ["list of negatives"],
  "summary": "one sentence summary"
}
```
Only output valid JSON. No additional text.
### Pro Tips
- Include type hints in schema
- Add "Only output valid JSON" to prevent extra text
- For complex schemas, provide a filled example
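On the consuming side, models sometimes wrap the JSON in a code fence despite the instruction. A small defensive parser (a sketch, not tied to any particular SDK) handles both cases:

```python
import json
import re

def parse_json_response(text: str) -> dict:
    """Parse a JSON object from a model response.

    Strips an optional ```json ... ``` fence first, since models
    sometimes add one despite being told to output JSON only.
    """
    match = re.search(r"```(?:json)?\s*(.*?)\s*```", text, re.DOTALL)
    payload = match.group(1) if match else text.strip()
    return json.loads(payload)

fenced = '```json\n{"sentiment": "mixed", "score": 0.6}\n```'
bare = '{"sentiment": "mixed", "score": 0.6}'
```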
---
## Critique and Improve
Self-correction pattern.
### Template
{task}
After completing the task:
- Critique your own work - what could be better?
- Address those critiques
- Present the improved version
Show all three stages.
### Example
Write a function to validate email addresses in Python.
After completing the task:
- Critique your own work - what could be better?
- Address those critiques
- Present the improved version
Show all three stages.
### Pro Tips
- Works great for code, writing, and design
- Add specific critique dimensions ("focus on performance and edge cases")
- Can iterate multiple times ("repeat this process twice")
---
## Constraint-Based Prompting
Set clear boundaries.
### Template
{task}
Constraints:
- {constraint_1}
- {constraint_2}
- {constraint_3}
Requirements:
- {requirement_1}
- {requirement_2}
Do NOT:
- {anti_pattern_1}
- {anti_pattern_2}
### Example
Explain blockchain technology to a beginner.
Constraints:
- Use no more than 200 words
- Avoid technical jargon
- Use at least one analogy
Requirements:
- Cover what it is and why it matters
- Mention one real-world use case
Do NOT:
- Recommend specific cryptocurrencies
- Include investment advice
- Use the word “decentralized” without explaining it
### Pro Tips
- Explicit constraints beat implicit expectations
- "Do NOT" sections are powerful guardrails
- Include word/length limits when relevant
---
## Multi-Turn Refinement
Progressive improvement.
### Template
[Turn 1] {initial_request}
[Turn 2] Good start. Now improve it by focusing on {aspect_1}.
[Turn 3] Better. Now also consider {aspect_2} and {aspect_3}.
[Turn 4] Final polish: make it {final_quality}.
### Example
[Turn 1] Write a tagline for a sustainable fashion brand.
[Turn 2] Good start. Now make it more memorable and punchy - under 6 words.
[Turn 3] Better. Now also make it hint at environmental impact without being preachy.
[Turn 4] Final polish: make it sound premium, not granola.
### Pro Tips
- Each turn should focus on one dimension
- Acknowledge progress before requesting changes
- Keep a backup of good intermediate versions
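In API terms, each refinement turn appends to a running message list. A sketch assuming the common role/content chat-message shape (`next_messages` is an illustrative helper):

```python
def next_messages(history, model_reply, next_instruction):
    """Append the model's last reply and the next refinement instruction.

    Assumes the common {"role": ..., "content": ...} chat-message shape.
    Returns a new list, so intermediate versions can be kept as backups.
    """
    return history + [
        {"role": "assistant", "content": model_reply},
        {"role": "user", "content": next_instruction},
    ]

history = [{"role": "user",
            "content": "Write a tagline for a sustainable fashion brand."}]
# Turn 2: acknowledge progress, then request one focused change.
history = next_messages(
    history,
    "Wear the Change You Wish to See",  # hypothetical model reply
    "Good start. Now make it more memorable and punchy - under 6 words.",
)
```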
---
## Comparison Matrix
Ask for structured analysis.
### Template
Compare {option_1}, {option_2}, and {option_3} for {use_case}.
Create a comparison table with these dimensions:
- {dimension_1}
- {dimension_2}
- {dimension_3}
- {dimension_4}
Then provide a recommendation based on {criteria}.
### Example
Compare PostgreSQL, MongoDB, and DynamoDB for a social media application.
Create a comparison table with these dimensions:
- Query flexibility
- Horizontal scaling
- Operational complexity
- Cost at scale
- Ecosystem/tooling
Then provide a recommendation based on a startup with limited ops resources expecting rapid growth.
---
## Prompt Chaining (Meta)
Use multiple prompts in sequence.
### Pattern
Prompt 1: Generate raw ideas
↓
Prompt 2: Evaluate and rank ideas
↓
Prompt 3: Expand best ideas
↓
Prompt 4: Critique and improve
↓
Prompt 5: Final polish
### Example Chain
```python
# Pseudo-code for prompt chaining
ideas = llm("Generate 10 startup ideas for AI in healthcare")
ranked = llm(f"Rank these by feasibility and impact:\n{ideas}")
expanded = llm(f"Expand on the top 3 ideas:\n{ranked}")
critiqued = llm(f"Critique these ideas and suggest improvements:\n{expanded}")
final = llm(f"Polish into a pitch deck outline:\n{critiqued}")
```

### Pro Tips
- Each step should have a clear, focused goal
- Pass only relevant context between steps
- Save intermediate outputs for debugging
---

## Quick Reference
| Pattern | Best For | Key Phrase |
|---|---|---|
| Chain-of-Thought | Math, logic, planning | “Let’s think step by step” |
| Few-Shot | Format matching, classification | “Here are some examples” |
| Role-Based | Expert tasks, consistency | “You are a [role]” |
| Structured Output | Parsing, automation | “Respond in JSON format” |
| Critique & Improve | Quality iteration | “Now critique your work” |
| Constraints | Guardrails, focus | “Do NOT / You MUST” |
| Comparison | Decision making | “Compare X, Y, Z” |
| Chaining | Complex workflows | Multiple focused prompts |
---

## Usage Tips
- Combine patterns: Role + CoT + Constraints often works great
- Iterate: First prompt rarely perfect; refine based on output
- Test edge cases: What happens with unusual inputs?
- Version control: Save prompts that work well
- Temperature: Lower (0.1-0.3) for consistency, higher (0.7-0.9) for creativity