    Prompt Engineering 101

    Master the art of writing effective prompts. Learn chain-of-thought, few-shot prompting, role assignment, and output formatting techniques with real before/after examples.

    AgentShelf Team · February 1, 2025 · 10 min read

    Why Prompt Engineering Matters

    The same AI model can produce a vague, unhelpful response or a precise, actionable answer — the only difference is how you ask. Prompt engineering is the practice of structuring your input to reliably get the output you need. It's not magic; it's communication design applied to language models.

    Good prompts can eliminate much of the back-and-forth you'd otherwise spend refining AI outputs. For developers building on LLM APIs, better prompts mean fewer tokens, lower costs, and more predictable outputs.

    The Anatomy of a Good Prompt

    Every effective prompt has up to five components. You don't always need all five, but knowing each one helps:

    • Role: Who should the AI be? ("You are a senior financial analyst...")
    • Context: What background information does it need? ("I'm building a B2B SaaS for HR teams...")
    • Task: What exactly should it do? ("Write a 3-paragraph executive summary...")
    • Format: How should the output be structured? ("Respond in JSON with keys: title, summary, action_items")
    • Constraints: What limits apply? ("Under 200 words, no technical jargon, written for a C-suite audience")
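    The five components above can be assembled mechanically. A minimal sketch in Python (the `build_prompt` helper and its field names are illustrative, not part of any library):

    ```python
    def build_prompt(task, role=None, context=None, fmt=None, constraints=None):
        """Assemble a prompt from the five components; only the task is required."""
        parts = []
        if role:
            parts.append(f"You are {role}.")
        if context:
            parts.append(f"Context: {context}")
        parts.append(f"Task: {task}")
        if fmt:
            parts.append(f"Output format: {fmt}")
        if constraints:
            parts.append(f"Constraints: {constraints}")
        return "\n".join(parts)

    prompt = build_prompt(
        task="Write a 3-paragraph executive summary of Q3 results.",
        role="a senior financial analyst",
        fmt="plain prose, three paragraphs",
        constraints="under 200 words, no technical jargon",
    )
    ```

    Keeping each component on its own line makes it easy to drop or swap parts as you iterate on a prompt.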

    Technique 1: Chain-of-Thought Prompting

    For any task requiring reasoning, logic, or multi-step calculation, appending "Let's think step by step" (or asking the model to reason before answering) dramatically improves accuracy.

    Without chain-of-thought:

    Prompt: "If a store sells 3 items at $4.50 each and gives a 15% discount, what's the total?"
    Response: "$11.47" (often wrong)

    With chain-of-thought:

    Prompt: "If a store sells 3 items at $4.50 each and gives a 15% discount, what's the total? Think step by step."
    Response: "3 items × $4.50 = $13.50. Discount: $13.50 × 0.15 = $2.025. Total: $13.50 - $2.025 = $11.48."

    Chain-of-thought works because it forces the model to generate intermediate reasoning tokens before the answer, reducing the chance of the model "jumping" to a wrong conclusion.
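    As a sanity check on the worked example, the arithmetic can be reproduced exactly (using Python's `Decimal` to avoid binary-float rounding surprises near the half-cent):

    ```python
    from decimal import Decimal, ROUND_HALF_UP

    subtotal = Decimal("4.50") * 3                 # 3 items at $4.50 -> 13.50
    discount = subtotal * Decimal("0.15")          # 15% of 13.50 -> 2.0250
    # Round the final total to cents, half-up, as a cashier would
    total = (subtotal - discount).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
    ```

    This confirms the chain-of-thought answer of $11.48; the exact pre-rounding total is $11.475.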

    Technique 2: Few-Shot Prompting

    Show 2–3 examples of the input/output pattern you want before giving the actual request. This is especially powerful for consistent formatting, tone matching, and domain-specific outputs.

    Classify the sentiment of product reviews as Positive, Negative, or Neutral.
    
    Review: "Arrived on time, exactly as described."
    Sentiment: Positive
    
    Review: "The zipper broke after two uses. Very disappointed."
    Sentiment: Negative
    
    Review: "It's okay. Does what it says."
    Sentiment: Neutral
    
    Review: "Absolutely love it, already ordered a second one!"
    Sentiment:

    The model now has a crystal-clear pattern. Without examples, it might return "The sentiment is positive" — with examples, it returns exactly "Positive".
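    Building a few-shot prompt is just string assembly. A small sketch using the review examples above (the `few_shot_prompt` helper is illustrative):

    ```python
    def few_shot_prompt(instruction, examples, query):
        """Build a few-shot prompt: instruction, labeled examples, then the open query."""
        blocks = [instruction, ""]
        for review, sentiment in examples:
            blocks += [f'Review: "{review}"', f"Sentiment: {sentiment}", ""]
        blocks += [f'Review: "{query}"', "Sentiment:"]
        return "\n".join(blocks)

    examples = [
        ("Arrived on time, exactly as described.", "Positive"),
        ("The zipper broke after two uses. Very disappointed.", "Negative"),
        ("It's okay. Does what it says.", "Neutral"),
    ]
    prompt = few_shot_prompt(
        "Classify the sentiment of product reviews as Positive, Negative, or Neutral.",
        examples,
        "Absolutely love it, already ordered a second one!",
    )
    ```

    Ending the prompt with the bare label `Sentiment:` nudges the model to complete the pattern with a single word rather than a full sentence.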

    Technique 3: Role Assignment

    Assigning a specific expert persona produces more authoritative, domain-appropriate responses.

    Generic: "Explain how to improve my website's SEO."

    With role: "You are an SEO consultant with 10 years of experience specializing in e-commerce sites. Explain the three highest-impact changes I can make to improve my product pages' search rankings."

    The role doesn't just change tone — it shifts the model toward vocabulary, frameworks, and recommendations appropriate for that domain.

    Technique 4: Output Format Control

    Be explicit about the format you want. LLMs are highly responsive to format instructions.

    • JSON: "Respond with a JSON object. Fields: name (string), tags (array of strings), priority (1-5 integer)"
    • Markdown table: "Format the comparison as a markdown table with columns: Feature, Tool A, Tool B"
    • Numbered list: "Give me exactly 5 recommendations, numbered, each under 30 words"
    • XML/structured tags: "Wrap your response in <analysis> and <recommendation> tags"

    When building applications, request JSON output explicitly and, where the provider supports it, enforce it at the API level; OpenAI, for example, supports response_format: { type: "json_object" }.
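    Even with JSON mode enabled, it's prudent to validate the parsed object before using it. A defensive sketch (the field names follow the JSON example above; `parse_tool_record` is a hypothetical helper, not a library function):

    ```python
    import json

    def parse_tool_record(raw: str) -> dict:
        """Parse a model's JSON reply and check the fields the prompt asked for."""
        data = json.loads(raw)
        if not isinstance(data.get("name"), str):
            raise ValueError("missing or non-string 'name'")
        if not isinstance(data.get("tags"), list):
            raise ValueError("missing or non-list 'tags'")
        if data.get("priority") not in range(1, 6):
            raise ValueError("'priority' must be an integer 1-5")
        return data

    record = parse_tool_record('{"name": "AgentShelf", "tags": ["ai", "tools"], "priority": 2}')
    ```

    Failing loudly on a malformed reply lets you retry the request instead of propagating bad data downstream.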

    Technique 5: Constraint Setting

    Constraints focus the model and prevent it from padding responses with filler. Common constraints:

    • Word/character limits: "In exactly 50 words..."
    • Audience level: "Explain as if to a 12-year-old" or "Assume the reader is a PhD in machine learning"
    • Tone: "Use a formal, professional tone" / "Keep it conversational and friendly"
    • Scope: "Focus only on open-source solutions" / "Only consider options under $100/month"
    • What to exclude: "Do not include caveats or disclaimers"
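    Models don't always honor constraints, so checking the response programmatically is a useful safety net. A small sketch (the `check_constraints` helper and its parameters are illustrative):

    ```python
    def check_constraints(text: str, max_words: int = None, forbidden: tuple = ()) -> list:
        """Return a list of constraint violations; an empty list means the response passes."""
        problems = []
        words = len(text.split())
        if max_words is not None and words > max_words:
            problems.append(f"over {max_words} words ({words})")
        for word in forbidden:
            if word.lower() in text.lower():
                problems.append(f"contains forbidden word '{word}'")
        return problems

    issues = check_constraints(
        "We are passionate about results.",
        max_words=50,
        forbidden=("passionate",),
    )
    ```

    When a check fails, you can re-prompt with the violation appended ("Your previous answer used the word 'passionate'; rewrite without it.").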

    Before/After Examples

    Scenario 1: Writing a bio

    Bad: "Write a bio for me."

    Good: "Write a 3-sentence professional bio for a product manager with 5 years of experience at SaaS startups. Tone: confident but approachable. Do not use the word 'passionate'."

    Scenario 2: Code review

    Bad: "Review this code."

    Good: "You are a senior Python developer. Review the following function for: (1) correctness, (2) edge cases not handled, (3) performance issues. Format as a numbered list. Be specific — point to line numbers."

    Scenario 3: Email drafting

    Bad: "Write an email to a client about a delay."

    Good: "Draft a professional email to a B2B client informing them their project delivery will be delayed by 2 weeks due to unexpected technical issues. Acknowledge the inconvenience, explain briefly without over-detailing, offer a specific revised date, and include a small gesture of goodwill. Keep it under 150 words."

    Scenario 4: Data extraction

    Bad: "Extract the key info from this job posting."

    Good: "Extract structured data from this job posting. Return JSON with: title, company, location (remote/hybrid/onsite), salary_range (null if not listed), required_years_experience, top_5_required_skills (array)."

    Scenario 5: Summarization

    Bad: "Summarize this article."

    Good: "Summarize this article in 3 bullet points, each under 20 words. Focus on: the core finding, the method used, and the practical implication. Avoid jargon."

    Common Mistakes to Avoid

    • Being too vague: "Help me with marketing" gives the model no direction. Always specify the task.
    • Asking for too much at once: Break complex requests into sequential prompts or use chain-of-thought prompting.
    • Assuming context: The model doesn't know your industry, audience, or goals unless you tell it.
    • Over-relying on negative instructions: "Don't include..." works, but "Focus only on..." is usually more reliable.
    • Not specifying format: If you need structured data, always specify the exact format.

    Prompt Chaining for Complex Tasks

    For tasks too complex for a single prompt, break them into a chain. Each output becomes the input for the next step:

    1. Prompt 1: "Given this brief, generate 10 potential blog post angles."
    2. Prompt 2: "From these 10 angles, pick the 3 most likely to perform well for SEO and explain why."
    3. Prompt 3: "Write a full outline for angle #2."
    4. Prompt 4: "Write the full article from this outline."
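    The four-step chain above can be sketched as a loop in which each step's output is interpolated into the next prompt. Here `call_llm` is a stand-in stub for whatever API client you actually use:

    ```python
    def call_llm(prompt: str) -> str:
        """Stand-in for a real LLM API call; replace with your client of choice."""
        return f"[model output for: {prompt[:40]}...]"

    def run_chain(brief: str) -> str:
        # Each template receives the previous step's output as {prev}
        steps = [
            "Given this brief, generate 10 potential blog post angles:\n{prev}",
            "From these 10 angles, pick the 3 most likely to perform well for SEO and explain why:\n{prev}",
            "Write a full outline for angle #2:\n{prev}",
            "Write the full article from this outline:\n{prev}",
        ]
        result = brief
        for template in steps:
            result = call_llm(template.format(prev=result))
        return result

    article = run_chain("A brief about AI prompting tools.")
    ```

    Chaining also gives you a natural place to inspect or edit intermediate outputs (e.g. rejecting weak angles before spending tokens on a full article).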

    Quick Reference Cheat Sheet

    • Add role: "You are an expert [role]..."
    • Force reasoning: "Think step by step before answering."
    • Get JSON: "Respond only with a JSON object. No prose."
    • Consistent format: Provide 2–3 examples before your request
    • Constrain length: "In exactly N words / bullet points / sentences"
    • Audience: "Explain to [specific audience]"
    • Verify: Add "If you are unsure, say so." to reduce hallucinations