
Prompt Engineering for Developers: Write Prompts That Actually Work

Sabir Soft
Sabir Lkhaloufi
  • April 10, 2026
  • 4 min read

Prompt engineering has a reputation for being vague or mystical — "just ask it nicely" kind of advice. That's not useful. This guide treats prompting as what it actually is: a programming discipline with learnable patterns and measurable outcomes.

If you're building on top of LLMs, the quality of your prompts directly determines the quality of your product. Here's how to write prompts that produce consistent, useful results.

The Mental Model: LLMs Are Pattern Completers

Before techniques, understand what's happening. An LLM isn't reasoning from first principles — it's predicting the most likely continuation of the text you give it. Your prompt is the beginning of a document, and the model completes it.

This means: the more your prompt looks like the beginning of a document where a good answer would follow, the better your results.

A prompt like "explain async" looks like the beginning of a confused question. A prompt like "Explain JavaScript's async/await to a developer who understands Promises but hasn't used async syntax before. Use a practical example with error handling." looks like the beginning of a high-quality technical explanation.

Technique 1: Be Explicit About Format

The model doesn't know you want JSON, a bulleted list, or a specific structure unless you say so.

// Vague
"List the main differences between REST and GraphQL"

// Explicit
"Compare REST and GraphQL APIs. Format your response as a markdown table with these columns: 
Aspect | REST | GraphQL
Cover these aspects: data fetching, over-fetching, versioning, tooling, learning curve."

For structured data output, specify the exact schema:

"Analyze the following code and return a JSON object with this exact structure:
{
  "issues": [
    {
      "severity": "critical" | "warning" | "info",
      "line": number,
      "description": string,
      "suggestion": string
    }
  ],
  "summary": string,
  "score": number (0-100)
}

Code to analyze:
[code here]"
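In application code, it helps to keep a schema prompt like this in one place so every call site asks for the same structure. A minimal sketch — the `buildAnalysisPrompt` name and the trailing "JSON only" instruction are illustrative additions, not from any library:

```typescript
// Keep the schema in one constant so it can't drift between call sites.
const ANALYSIS_SCHEMA = `{
  "issues": [
    {
      "severity": "critical" | "warning" | "info",
      "line": number,
      "description": string,
      "suggestion": string
    }
  ],
  "summary": string,
  "score": number (0-100)
}`

// Hypothetical builder: embeds the schema and the code into one prompt.
function buildAnalysisPrompt(code: string): string {
  return `Analyze the following code and return a JSON object with this exact structure:
${ANALYSIS_SCHEMA}

Respond with the JSON object only, no prose and no markdown fences.

Code to analyze:
${code}`
}
```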

Technique 2: Few-Shot Examples

Showing examples of what you want is dramatically more effective than describing it:

"Rewrite these variable names to follow our naming convention.

Convention rules:
- Boolean variables start with 'is', 'has', or 'can'
- Arrays use plural nouns
- Callback functions start with 'on' or 'handle'

Examples:
Input: active → Output: isActive
Input: users_list → Output: users
Input: clicked → Output: onClick

Now rewrite these:
Input: loaded
Input: error_messages
Input: submit"

The model understands your convention from the examples better than it would from a written description alone.
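Few-shot prompts are also easy to assemble from data, which keeps the examples editable without touching the template. A sketch under illustrative names (`FewShotExample`, `buildFewShotPrompt` are not from any library):

```typescript
interface FewShotExample {
  input: string
  output: string
}

// Hypothetical builder: instruction, then worked examples, then new inputs.
function buildFewShotPrompt(
  instruction: string,
  examples: FewShotExample[],
  inputs: string[],
): string {
  const shots = examples
    .map((e) => `Input: ${e.input} → Output: ${e.output}`)
    .join('\n')
  const queries = inputs.map((i) => `Input: ${i}`).join('\n')
  return `${instruction}\n\nExamples:\n${shots}\n\nNow rewrite these:\n${queries}`
}
```

Storing the example pairs as data means you can grow or prune them as you discover which cases the model gets wrong.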

Technique 3: Chain of Thought

For complex tasks, asking the model to think step-by-step dramatically improves accuracy:

// Without chain of thought — often wrong on complex logic
"Does this function have any security vulnerabilities?"

// With chain of thought — much more thorough
"Analyze this function for security vulnerabilities. 
Think through it step by step:
1. What inputs does the function accept?
2. Are those inputs validated or sanitized?
3. What operations are performed with the inputs?
4. What could an attacker do with control over those inputs?
5. What's your final assessment?

Function:
[code here]"

In code, you can enforce this with explicit instructions:

const systemPrompt = `You are a security code reviewer.
When analyzing code, you MUST follow this process:
1. First, identify all input sources
2. Trace each input through the code  
3. Note any operations that could be dangerous with malicious input
4. Only after this analysis, state your findings
 
Always show your reasoning before your conclusion.`
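The same pattern can be factored into a small wrapper so any task gets the numbered-steps treatment. A sketch with illustrative names:

```typescript
// Hypothetical wrapper: prepends numbered reasoning steps to any task prompt.
function withChainOfThought(task: string, steps: string[]): string {
  const numbered = steps.map((s, i) => `${i + 1}. ${s}`).join('\n')
  return `${task}
Think through it step by step:
${numbered}

Only after completing every step, state your final answer.`
}
```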

Technique 4: Personas and Context

Giving the model a specific persona shapes its responses significantly:

// Generic — produces textbook-level explanation
"Explain database indexing"

// With persona — produces targeted, practical explanation
"You are a senior database engineer explaining database indexing to a junior 
developer who just joined your team. They understand SQL basics but have never 
thought about query performance. Explain indexing using a concrete example from 
a blog application with posts and users tables. Focus on when to add an index 
and the trade-offs, not just what an index is."

Technique 5: Constraints and Negative Instructions

Tell the model what NOT to do:

"Explain how to implement rate limiting in Express.js.

Rules:
- Do not use any external libraries — only built-in Node.js features
- Do not explain what rate limiting is — assume I know
- Do not include basic Express setup boilerplate
- Keep the implementation under 30 lines
- Do include a comment explaining the algorithm"

Constraints focus the output and prevent the common problem of getting a generic 1000-word essay when you wanted a specific 20-line answer.
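If the same constraints apply across many prompts, appending them programmatically keeps them consistent. A minimal sketch (the `withRules` name is illustrative):

```typescript
// Hypothetical helper: appends a Rules block to any task prompt.
function withRules(task: string, rules: string[]): string {
  const ruleLines = rules.map((r) => `- ${r}`).join('\n')
  return `${task}\n\nRules:\n${ruleLines}`
}
```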

Technique 6: Output Length Control

Models default to whatever length feels "complete." Control it explicitly:

// For concise answers
"In exactly 2-3 sentences, explain what a closure is in JavaScript."

// For structured long-form
"Write a technical explanation of React's reconciliation algorithm.
Structure: 
- Introduction (1 paragraph)
- How it works (3-4 paragraphs with examples)
- Performance implications (2 paragraphs)
- Common mistakes (bulleted list)
Total target: 600-800 words"

Building Reusable Prompt Templates

In production systems, prompt templates should be version-controlled, testable functions:

// lib/prompts.ts
interface CodeReviewOptions {
  language: string
  focusArea?: 'security' | 'performance' | 'readability' | 'all'
  reviewerLevel?: 'junior' | 'senior' | 'principal'
}
 
export function buildCodeReviewPrompt(code: string, options: CodeReviewOptions): string {
  const { language, focusArea = 'all', reviewerLevel = 'senior' } = options
 
  return `You are a ${reviewerLevel} ${language} engineer performing a code review.
 
${focusArea !== 'all' ? `Focus specifically on ${focusArea} issues. Ignore other categories unless they are critical.` : ''}
 
Review the following ${language} code:
 
\`\`\`${language.toLowerCase()}
${code}
\`\`\`
 
Return your review as JSON:
{
  "summary": "2-3 sentence overview",
  "criticalIssues": [{"line": number, "issue": string, "fix": string}],
  "suggestions": [{"line": number, "suggestion": string}],
  "positives": [string],
  "overallRating": "approve" | "approve_with_changes" | "request_changes"
}`
}

Testing Your Prompts

Treat prompts like code — they need tests:

// tests/prompts.test.ts
import { buildCodeReviewPrompt } from '../lib/prompts'
import { claude } from '../lib/claude'
 
describe('Code review prompt', () => {
  it('returns valid JSON for simple function', async () => {
    const code = `function add(a, b) { return a + b }`
    const prompt = buildCodeReviewPrompt(code, { language: 'JavaScript' })
 
    const response = await claude.messages.create({
      model: 'claude-haiku-4-5-20251001', // Use cheaper model for tests
      max_tokens: 512,
      messages: [{ role: 'user', content: prompt }],
    })
 
    const text = response.content[0].type === 'text' ? response.content[0].text : ''
    const parsed = JSON.parse(text) // assumes the model returned raw JSON; this throws if it adds markdown fences
 
    expect(parsed).toHaveProperty('summary')
    expect(parsed).toHaveProperty('criticalIssues')
    expect(Array.isArray(parsed.criticalIssues)).toBe(true)
  })
})
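One failure mode worth guarding against in tests like this: models sometimes wrap JSON output in markdown fences even when asked for raw JSON, which makes a bare `JSON.parse` throw. A defensive extraction sketch (the `extractJson` name is illustrative):

```typescript
// Hypothetical helper: strips an optional ```json fence before parsing.
function extractJson(text: string): unknown {
  const fenced = text.match(/```(?:json)?\s*([\s\S]*?)```/)
  const raw = fenced ? fenced[1] : text
  return JSON.parse(raw.trim())
}
```

Parsing through a helper like this makes both tests and production parsing tolerant of minor formatting drift between model versions.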

Common Mistakes

1. Being vague about what "good" looks like. "Write good documentation" means nothing. "Write JSDoc documentation with a description, @param for each argument, @returns, and one usage @example" gives the model a target.

2. Expecting the model to read your mind. Every assumption you have about format, tone, length, or audience needs to be stated.

3. Not iterating. The first prompt is never the best prompt. Test with edge cases, refine, and version your prompts.

4. Ignoring temperature. For structured output (JSON, code), use temperature 0. For creative tasks, use 0.7-1.0. Many developers never touch this setting.

5. Putting everything in the user message. Use the system prompt for instructions/persona and the user message for the actual task. The model treats them differently.
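Mistakes 4 and 5 can be encoded once in a request builder, so structured calls always get temperature 0 and the persona always lands in the system prompt. A sketch of the request shape the Anthropic SDK accepts (the model name is a placeholder — substitute whatever your project targets):

```typescript
interface CallOptions {
  system: string      // persona and standing instructions
  user: string        // the actual task
  structured: boolean // true when expecting JSON or code
}

// Hypothetical builder: deterministic for structured output, warmer otherwise.
function buildRequest(opts: CallOptions) {
  return {
    model: 'claude-haiku-4-5-20251001', // placeholder, pick per workload
    max_tokens: 1024,
    temperature: opts.structured ? 0 : 0.7,
    system: opts.system,
    messages: [{ role: 'user' as const, content: opts.user }],
  }
}
```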

Key Takeaways

  • Prompts are code — version control them, test them, refine them
  • Explicit format instructions prevent the most common output quality issues
  • Few-shot examples are more effective than written descriptions of what you want
  • Chain-of-thought dramatically improves accuracy on complex analysis tasks
  • Use constraints ("do not...") to focus responses and eliminate common unwanted patterns
  • Test prompts with a cheaper model variant before running production workloads