The AI tooling landscape has matured significantly. We're past the "wow it can write Hello World" phase — these tools are now saving real hours in real workflows. But not all of them are worth your time, and the hype around some of them is way ahead of reality.
I've spent the last year integrating various AI tools into a daily development workflow — building web apps, writing API services, reviewing PRs, writing documentation. Here's my honest breakdown of what actually makes a difference.
Before listing tools, it helps to think in terms of where AI actually helps in a dev workflow: autocomplete while you type, multi-file edits and refactors, debugging and reasoning about errors, code review, test generation, terminal and infrastructure work, and documentation.

Let's go category by category.
GitHub Copilot is still the most widely used. The autocomplete is excellent for repetitive patterns — generating test cases, writing boilerplate, completing obvious next steps. It struggles with anything that spans multiple files or depends on broader project context, which is where the tools below come in.
// Write a comment like this and Copilot usually nails it
// Function to calculate compound interest given principal, rate, time in years, and compounding frequency
function calculateCompoundInterest(
  principal: number,
  annualRate: number,
  years: number,
  compoundingsPerYear: number = 12
): number {
  // Copilot will typically complete this correctly
  return principal * Math.pow(1 + annualRate / compoundingsPerYear, compoundingsPerYear * years);
}

Cursor is an IDE built on VS Code with Claude/GPT deeply integrated. The key difference from Copilot is the "Composer" feature — you describe changes across multiple files and it applies them.
This is genuinely powerful. You can say "extract this logic into a custom hook and update all components that use it" and watch it do it. Not perfectly every time, but often well enough to save 15 minutes.
When to use Cursor over VS Code + Copilot: when you're doing refactoring work or building new features that span multiple files.
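To make that concrete, here is a rough sketch of what a Composer prompt like "extract this fetch logic into a usePosts hook" tends to produce; the hook name and endpoint are illustrative, not from a real project:

import { useEffect, useState } from 'react';

type Post = { id: string; title: string };

// Extracted hook: the duplicated fetch-and-set-state logic moves here
export function usePosts() {
  const [posts, setPosts] = useState<Post[]>([]);
  const [loading, setLoading] = useState(true);

  useEffect(() => {
    let cancelled = false;
    fetch('/api/posts')
      .then((res) => res.json())
      .then((data) => {
        if (!cancelled) {
          setPosts(data);
          setLoading(false);
        }
      })
      .catch(() => setLoading(false));
    // Ignore the response if the component unmounts before it arrives
    return () => {
      cancelled = true;
    };
  }, []);

  return { posts, loading };
}

Composer will then rewrite each component to consume the hook, and that fan-out across files is the part worth reviewing by hand.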
Codeium is free, fast, and surprisingly good. If you're on a budget and don't need the multi-file editing features of Cursor, its autocomplete is comparable to Copilot for single-file tasks.
I covered this in detail in my Claude vs ChatGPT comparison, but the short version is the workflow I've settled into: Copilot for autocomplete while writing, Claude for any question that requires more than 30 seconds of thinking.
The most underused pattern is pasting error messages with full stack traces into Claude and asking it to reason about root cause — not just what the error says, but why it happened.
# Example: instead of just googling this error, paste the full trace + relevant code
TypeError: Cannot read properties of undefined (reading 'map')
at PostList (PostList.tsx:23:15)
at renderWithHooks (react-dom.development.js:14985:18)
...

Claude will often identify that the issue is an async data fetch that hasn't resolved yet, or a missing null check — not just tell you "the variable is undefined."
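For context, the fix it points you toward usually looks something like this (a hypothetical PostList, not the real component behind the trace):

import React from 'react';

type Post = { id: string; title: string };

function PostList({ posts }: { posts?: Post[] }) {
  // Guard the not-yet-loaded state instead of calling .map on undefined
  if (!posts) return <p>Loading...</p>;

  return (
    <ul>
      {posts.map((post) => (
        <li key={post.id}>{post.title}</li>
      ))}
    </ul>
  );
}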
AI code review is an underrated use case. It's available 24/7, never gets tired, and never skips a file because it's late on a Friday.
Security patterns:
// AI will catch this immediately
app.get('/user/:id', (req, res) => {
  const query = `SELECT * FROM users WHERE id = ${req.params.id}`; // SQL injection
  db.query(query, (err, result) => res.json(result));
});

Performance issues:
// AI will flag the N+1 query problem here
const posts = await Post.findAll();
const postsWithAuthors = await Promise.all(
  posts.map(async (post) => ({
    ...post,
    author: await User.findById(post.authorId), // N queries for N posts
  }))
);

It also catches missing error handling, accessibility issues, and type safety gaps. Feed your PR diff to Claude with the prompt: "Review this code for security issues, performance problems, and missing error handling." The output is often genuinely useful.
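The missing-error-handling case is just as mechanical to catch. A sketch of what that looks like, reusing the same hypothetical Express app and Post model as the examples above:

// AI review will flag both failure paths here
app.get('/posts/:id', async (req, res) => {
  const post = await Post.findById(req.params.id); // no try/catch: a DB error becomes an unhandled rejection
  res.json(post); // no 404 when the post doesn't exist
});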
Writing tests is the task most developers delay. AI makes it significantly faster.
// Paste this function to Claude and ask for comprehensive tests
export function parseQueryString(url: string): Record<string, string> {
  const params: Record<string, string> = {};
  const queryString = url.split('?')[1];
  if (!queryString) return params;
  queryString.split('&').forEach(pair => {
    const [key, value] = pair.split('=');
    if (key) params[decodeURIComponent(key)] = decodeURIComponent(value || '');
  });
  return params;
}

Claude will generate edge case tests you probably wouldn't have thought of — empty strings, encoded characters, duplicate keys, URLs without query strings, malformed input.
The key insight: don't just ask for "write tests." Ask for "write tests covering all edge cases and failure modes." The quality difference is significant.
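For the parseQueryString function above, that prompt typically yields tests along these lines (Vitest shown; the import path is illustrative):

import { describe, expect, it } from 'vitest';
import { parseQueryString } from './parseQueryString';

describe('parseQueryString', () => {
  it('returns an empty object when there is no query string', () => {
    expect(parseQueryString('https://example.com/path')).toEqual({});
  });

  it('decodes percent-encoded keys and values', () => {
    expect(parseQueryString('/search?q=hello%20world')).toEqual({ q: 'hello world' });
  });

  it('treats a key with no value as an empty string', () => {
    expect(parseQueryString('/page?draft')).toEqual({ draft: '' });
  });

  it('keeps the last occurrence of a duplicate key', () => {
    expect(parseQueryString('/page?tag=a&tag=b')).toEqual({ tag: 'b' });
  });
});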
If you're not using Copilot in your terminal, start now. The gh copilot extension lets you explain unfamiliar commands and describe what you want to do in plain English:
gh copilot explain "git rebase -i HEAD~3"
gh copilot suggest "find all files modified in the last 7 days larger than 10MB"

AI has also become genuinely useful for infrastructure-as-code. Paste a description of your architecture and get a working Terraform config as a starting point.
# Generated starting point for a basic ECS service with load balancer
resource "aws_ecs_service" "api" {
name = "api-service"
cluster = aws_ecs_cluster.main.id
task_definition = aws_ecs_task_definition.api.arn
desired_count = 2
load_balancer {
target_group_arn = aws_lb_target_group.api.arn
container_name = "api"
container_port = 3000
}
}Always review AI-generated infrastructure code carefully — the logical structure is usually right but permissions, VPC configuration, and security groups often need adjustment.
AI writes excellent documentation. Feed it your function and ask for JSDoc comments. Feed it your API endpoint and ask for an OpenAPI spec. Feed it your README and ask for improvements.
// Before: undocumented function
function retryWithBackoff(fn, maxRetries, baseDelay) {
  // ...implementation
}

// After asking Claude to document it
/**
 * Retries an async function with exponential backoff on failure.
 *
 * @param fn - The async function to retry
 * @param maxRetries - Maximum number of retry attempts (default: 3)
 * @param baseDelay - Initial delay in milliseconds before first retry (default: 1000)
 * @returns Promise resolving to the function's return value
 * @throws Error after all retries are exhausted
 *
 * @example
 * const data = await retryWithBackoff(() => fetch('/api/data'), 3, 500);
 */
async function retryWithBackoff<T>(
  fn: () => Promise<T>,
  maxRetries: number = 3,
  baseDelay: number = 1000
): Promise<T> {
  // ...implementation unchanged
}

A few mistakes to avoid:

1. Accepting code without reading it. The biggest mistake. AI generates plausible-looking code that can be wrong. Always read it.
2. Not providing context. "Fix this bug" gets worse results than "This function processes payments. It fails when the user has multiple pending transactions. Here's the error and here's the relevant code."
3. Using AI for everything. Simple lookups, basic syntax questions, well-documented APIs — just read the docs. AI is for complex reasoning, not replacing documentation.
4. Not iterating. The first response is rarely the best. Follow up with "now add error handling" or "refactor this to be more testable."
5. Ignoring hallucinations in library versions. AI will sometimes cite library methods that don't exist or were removed in a recent version. Always verify against official docs.
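To make the last point concrete, the snippet below is a made-up but typical example; the suggested call looks plausible but isn't part of the real Fetch API:

// AI-suggested snippet: Response has no body_json() method, so this fails type-checking and throws at runtime
const res = await fetch('/api/data');
const broken = await res.body_json();

// The actual method on the Fetch API's Response object is json()
const data = await res.json();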