AI Assisted Coding

Going from -1 to 0, then 0 to 100 with agentic coding.

That first perfect React component ChatGPT conjured from a single prompt. The dopamine surge when Cursor autocompletes your exact thought. The intoxicating promise: maybe you will never think about semicolons again.

Then you try to ship something real.

Suddenly, the AI is hallucinating APIs that do not exist. It is rewriting your carefully crafted database schema. It is introducing bugs faster than you can catch them. You cannot get away with vibes.

Welcome to negative productivity. Welcome to -1.

I will explain the deep truths behind this. Everything to learn and unlearn.

# The Mirror

People often expect AI to handle tasks perfectly, then face disappointment when it does not. That gap points to a deeper truth. AI coding is not failing them. It is exposing every weakness in their development process.

AI is the most brutal mirror you will ever encounter for your engineering practices. It reflects the quality of everything you put in. Messy requirements become messier implementations. Unclear mental models lead to unreadable code. Ad hoc processes give rise to rigorously chaotic systems.

The models, even the most advanced ones, should be thought of as savants, not geniuses. A savant is brilliant in narrow domains but completely lost without proper context and guidance. You would not ask a math savant to improvise a business presentation.

Yet that is what we do when we dump vague requirements into Cursor and expect perfection. AI needs carefully designed environments and guardrails to succeed.

# The Multiplier Effect

The defining characteristic of software engineering is not writing code, just as writing is not wrist exercise with ink on paper.

Software engineering is the art and science of maintaining unambiguous mental models to solve real business or economic problems. The principal task is architecting and nurturing vast sociotechnical systems, with code as just one of their many representations.

Until AI can fully take over these systems and replace the humans maintaining them, it must operate within and benefit from the same structure. In other words, AI thrives in environments where humans also thrive. Your team’s software fundamentals matter.

AI is a multiplier of engineering capabilities.

A senior engineer with sharp communication skills, strong fundamentals, and refined taste will extract 10x value. Meanwhile, those who treat coding like a series of Stack Overflow copy-pastes end up with expensive technical debt.

The best engineers possess what I call the mechanic's touch: the ability to feel when a system is heading in the right direction. Strong foundations help them master new tools quickly. Most crucially, they understand that AI mirrors the prompter's sensibilities and taste, so they spend years sharpening both.

# The Collaboration Shift

Working with a junior teammate means giving clear instructions, sharing your codebase context, and reviewing their work closely. Yet most developers use AI coding tools with vague prompts, expecting perfect results without collaboration.

Get out of the solo-coding mindset. Enter the pair-programming frame.

The shift from -1 to 0 begins when you treat AI like a pair-programming partner: outline requirements, share your system patterns and constraints, and validate incremental suggestions.

Rely on one-line prompts and bypass these steps, and Cursor will churn out flawed code with bold confidence, seeding hidden bugs and costly rewrites, much like an overconfident intern.

# The Environment

AI thrives in the same environments where humans thrive.

A codebase with good test coverage, automated linting, continuous integration, clear documentation, consistent patterns, and well-defined features isn’t just easier for humans—it’s exponentially easier for AI.

I’ve seen it again and again. Put an AI coding agent in a clean, well-structured codebase, and it cycles through the agentic loop smoothly, self-correcting with your test suite and static analysis tools. Drop the same AI into a messy codebase with inconsistent patterns and no guardrails, and it flails, just like a human would.

The environment shapes the outcome. Clean systems produce clean AI interactions. Messy systems produce messy ones.

# The Path to Zero

Mastering AI coding means becoming the kind of engineer who thrives with amplified intelligence. Many assume the tool itself drives success, but that is misleading: without best practices, you will ship chaos faster.

The requisites for successful AI coding are not AI-specific skills. They are just great engineering fundamentals, and they are essential if you want AI to amplify your capabilities instead of your chaos.

Software development is fundamentally a sociotechnical system, and AI is just another participant in that system.

Build the foundational practices that make any complex software project successful, and you will create systems where intelligence, artificial or otherwise, can succeed.

# The Right Strategy

# Model Economics

Use the best coding model available. Do not try to save credits and cost by using a worse model. The quality of a good model compounds across every interaction.

# Context

Your skill with AI coding is your skill in providing context. For an LLM, context shapes everything.

  • You cannot drop the entire codebase as context. Models with massive stated limits have much smaller effective windows. Gemini Pro claims 1M tokens but degrades after 600k. Claude Sonnet similarly performs best within its practical limits, not theoretical ones.
  • LLMs get distracted easily. Cluttered context creates confused output. @-mention only relevant files. Link only helpful documentation. Every piece of context should serve the immediate task.
  • Create detailed system architecture docs that become part of every AI interaction.

# Rules & Personas

Use specific personas that trigger significantly better outputs: “Act as Linus Torvalds and John Carmack debating the best implementation…”

Linus Torvalds works so well because he has three decades of being an asshole online, which neutralizes the sycophancy built into models. You get opinionated, no-nonsense code and commentary.

Combine this with constraints that define not just who to code as, but how to code and how to course-correct when an approach starts to fail.

  • Build a knowledge base that serves as your project rules (here are mine). It should outline the app flow, best practices for every part of your tech stack, how to use the dev tooling, coding standards and patterns, and common mistakes the LLMs have made when working with the code.

  • Follow test-driven development (TDD).

  • Implement in a specific order, e.g. stubs → pseudocode → data layer → business logic → frontend.

Why that specific order? Traditional "jump right to code" feels faster, but AI thrives on structure, guardrails, and incremental context. This layered path is like building a runway for the AI to "land" on cleanly.
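
To make that concrete, here is a minimal sketch of the layered progression. The invoice domain and every name in it are invented for illustration:

```typescript
// 1. Stubs: types and signatures only. This fixes the contract the AI must
//    hit before any implementation exists.
interface Invoice {
  id: string;
  amountCents: number;
  paidAt: Date | null;
}

// 2. Pseudocode: the plan lives as comments inside the stub, so the AI fills
//    in steps instead of inventing a design.
async function markInvoicePaid(id: string): Promise<Invoice> {
  // - load the invoice from the data layer
  // - throw if it is already paid
  // - set paidAt, persist, and return the updated record
  throw new Error("not implemented");
}

// 3. Data layer next (a repository backed by your real database), then the
//    business logic inside markInvoicePaid, and only then the frontend that
//    calls it. Each layer lands on a contract the previous step established.
```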

  • Attach documentation through Cursor (@Docs) or via llms.txt for the libraries you are using. This helps LLMs reference accurate function schemas instead of hallucinating them.

MCP (Model Context Protocol) is incredible. MCP lets AI tools directly access your development environment and external services. Take full advantage of it.

https://cursor.directory/mcp
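
As a taste of what this looks like, here is a minimal sketch of a custom MCP server using the official TypeScript SDK. The server name, tool name, and behavior are all invented; check the SDK docs for the current API:

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// A tiny server exposing one hypothetical tool the agent can call.
const server = new McpServer({ name: "project-tools", version: "1.0.0" });

server.tool(
  "run_tests",                                         // tool name the agent sees
  { pattern: z.string().describe("Test file glob") },  // typed input schema
  async ({ pattern }) => {
    // Shell out to your real test runner here and capture its output.
    return { content: [{ type: "text", text: `Ran tests matching ${pattern}` }] };
  }
);

// Cursor (or any MCP client) talks to this process over stdio.
await server.connect(new StdioServerTransport());
```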

Task-master is insanely good for shipping stuff.

# How to Actually Do Stuff

Feature development becomes systematic when you separate thinking from coding. These models excel at writing code but are weak at design thinking, so we emphasise the structured planning stages below.

  • Capture
    • Voice dump raw thoughts into a transcription tool
    • Explain in detail (what, why, how); don’t hesitate to overshare. Be specific.
    • Refine with AI: Some key questions to ensure you’re working in the Problem Space instead of Solution Space:
      • “Reverse engineer the problem I’m trying to solve based on this dump.”
      • “Take a step back. What open-ended questions should I be asking myself before diving into implementation?”
    • Generate a [feature-name]-idea.md file summarizing the core concept, goals, and key considerations.
  • Explore
    • Start with an open discussion of the problem space.
    • Provide ALL relevant context (code, tech specs, architecture, library docs), capturing in extreme detail how these elements relate to the proposed changes.
    • Work through expectation & scope until you have a shared understanding. Document key constraints and acceptance criteria.
    • Ask for potential approaches. Don’t take AI suggestions for granted. Ask it to justify its choices, present alternatives, and think about advantages, drawbacks, and edge cases.
    • Goal: Pick the best approach and create a detailed PRD.
  • Planning
    • Use Task-master to break the PRD down into small tasks and make a step-by-step plan.
    • Play with the complexity of each task and its subtasks until the breakdown feels right.
  • Execution
    • For frontend changes, make sure you have an MCP connected to Chrome to visually verify changes.
    • For backend changes, make sure it builds unit tests and integration tests as part of the change.
    • The system runs in a loop: implement code, run tests/verifications, and let the AI self-correct based on failures or your feedback, until the task is complete and passes all checks (a sketch of this loop follows the list).
  • Testing
    • At the end, take a git diff and ask Gemini 2.5 Pro which parts of your changes are most at risk of breaking your app, then make sure tests covering those areas are created and passing (a second sketch below shows one way to script this).
    • Have your Linus and Carmack personas independently review and grade the git diff of commits and the overall state of the feature branch using your F to A+ rubric. Their feedback guides further refinement.
    • Follow continuous integration.
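
Here is the shape of that execution loop as a sketch. runAgent and runChecks are hypothetical glue around your coding agent and your verification tooling, not real APIs:

```typescript
type CheckResult = { passed: boolean; failures: string[] };

// Hypothetical adapters: wire these to your coding agent and your test suite.
async function runAgent(task: string, feedback: string[]): Promise<void> {
  /* ask the agent to implement or revise code, passing failures back in */
}
async function runChecks(): Promise<CheckResult> {
  /* run tests, typecheck, and lint; collect any failures */
  return { passed: true, failures: [] };
}

async function executeTask(task: string, maxAttempts = 5): Promise<boolean> {
  let feedback: string[] = [];
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    await runAgent(task, feedback);    // implement / self-correct
    const result = await runChecks();  // verify against the guardrails
    if (result.passed) return true;    // done: everything is green
    feedback = result.failures;        // feed failures into the next pass
  }
  return false; // cap the loop and escalate to a human instead of flailing
}
```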

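And a sketch of the diff-risk review as a script, using the @google/generative-ai SDK. The model name and prompt wording are assumptions; adapt them to whatever model you use:

```typescript
import { execSync } from "node:child_process";
import { GoogleGenerativeAI } from "@google/generative-ai";

// Collect everything that changed on the feature branch.
const diff = execSync("git diff main...HEAD", { encoding: "utf8" });

const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY!);
const model = genAI.getGenerativeModel({ model: "gemini-2.5-pro" });

const { response } = await model.generateContent(
  "Review this diff. Which changes are most at risk of breaking the app, " +
    `and which tests should exist before merging?\n\n${diff}`
);
console.log(response.text());
```
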
This structured approach might seem heavy initially, but it pays dividends in practice.

AI writes elegant code but makes poor design choices, reinforcing wrong directions rather than course-correcting. This works fine for greenfield experiments but fails in real codebases.

Collaborative discipline and methodical planning solve this gap, bringing agentic coding's productivity gains to real engineering work.

You think. AI types. The process ensures proper sequencing and sustainable reliance on AI coding.

# Tests & Refactoring

Writing tests is important. It is tempting to skip, but when the cost of intelligence is cents, do not compromise here. Tests also significantly improve code quality.

  • While planning out features, think of acceptance tests and basic unit tests for small components.
  • Write these first, then iterate on the code while keeping the tests passing (see the sketch after this list).
  • After that, refactor while maintaining test coverage. Ask for code improvements, structure code with SOLID principles, keep functions short.
  • Write integration & E2E tests.
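
A minimal sketch of that test-first loop with Vitest (assumed as the runner; slugify and its spec are invented for illustration). Write the spec before any implementation exists:

```typescript
import { describe, expect, it } from "vitest";
import { slugify } from "./slugify";

describe("slugify", () => {
  // Acceptance behavior agreed on during planning, written before the code.
  it("lowercases and hyphenates words", () => {
    expect(slugify("Hello World")).toBe("hello-world");
  });

  it("strips characters that are not URL-safe", () => {
    expect(slugify("Rock & Roll!")).toBe("rock-roll");
  });
});
```

Then iterate on the implementation until everything is green, and refactor freely afterwards; the tests pin the behavior down:

```typescript
// slugify.ts
export function slugify(input: string): string {
  return input
    .toLowerCase()
    .replace(/[^a-z0-9\s-]/g, "") // drop anything that is not URL-safe
    .trim()
    .replace(/[\s-]+/g, "-"); // collapse whitespace/hyphen runs into one hyphen
}
```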

Use https://www.coderabbit.ai/ide

Use https://github.com/browser-use/vibetest-use

# Create Extensive Documentation

  • Create lots of detailed documentation easily by feeding codebases to the LLM. For example:
    • Explain functionality, create a knowledge base
    • Summarise all the current metrics being collected
    • Identify missing test cases more intelligently

There’s a good reason to do this: documentation is now cheap to generate, and it feeds back into making the LLMs (and humans) on your project a lot more effective.

# Browser Access

Cursor already saves you from jumping between AI tools and your code editor. But you still have to watch your browser’s developer console, then go back to Cursor to tell it what you found.

Why keep doing this back-and-forth? Give Cursor browser access so everything works together without the extra steps.

https://browsertools.agentdesk.ai/installation

https://github.com/coffeeblackai/browser-use-examples

# YOLO Mode

If you have implemented everything up to this point, you have already far exceeded the baseline AI coding experience. Beyond this, things are not critical, but they still help.

Cursor has a YOLO mode that auto-executes commands, which is usually fine for things like npm scripts. Enabling it saves time. Build a whitelist of commands that are acceptable for your use case, for example:

Any kind of test command is always allowed (vitest, npm test, nr test, etc.), as are basic build commands (build, tsc) and creating files and directories (touch, mkdir).

# Cloning Stuff

Use Lovable to quickly spin up high-quality frontends with minimal effort: it combines curated UI components with solid design sense to deliver polished results fast. Then export to Cursor.

Or you might want to borrow proven designs and have AI rebuild them into production-ready UI. The sauce to clone the UI/UX of great apps: Mobbin -> Figma -> Cursor + Figma MCP


# UI/UXR Analysis

Google AI Studio + Gemini 2.5 can analyze a video recording of a user journey from your app and give you a senior-level UX/UXR analysis:

Act as a senior UX researcher and UX design expert. Please carefully analyze this [video description], and document all the flows, steps and issues to resolve in detail.

# Security Checklist

Before you launch your vibe-coded apps, do this so you are not sorry:

  • Enable RLS on every table
  • Add rate limiting with Vercel Middleware (see the sketch after this list)
  • Authorization checks
  • Server-side validation and sanitization
  • CAPTCHA on auth forms
  • CSRF & XSS protection
  • Don’t store sensitive data in the browser
  • Proper error handling
  • Secure Cookies
  • File upload security
  • Use LLMs to help you optimise databases and tune configuration. When doing so, provide context on the infrastructure, hardware, and query plans.
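
For the rate-limiting item, here is a minimal sketch of a Next.js middleware.ts on Vercel using @upstash/ratelimit. The limits and matcher are assumptions; tune them for your app:

```typescript
import { NextResponse, type NextRequest } from "next/server";
import { Ratelimit } from "@upstash/ratelimit";
import { Redis } from "@upstash/redis";

// Sliding window: at most 10 requests per 10 seconds per client IP.
const ratelimit = new Ratelimit({
  redis: Redis.fromEnv(), // reads UPSTASH_REDIS_REST_URL / _TOKEN from env
  limiter: Ratelimit.slidingWindow(10, "10 s"),
});

export async function middleware(request: NextRequest) {
  // Derive a client identifier; behind Vercel the first x-forwarded-for hop
  // is the caller's IP.
  const ip = request.headers.get("x-forwarded-for")?.split(",")[0] ?? "unknown";
  const { success } = await ratelimit.limit(ip);
  if (!success) {
    return new NextResponse("Too many requests", { status: 429 });
  }
  return NextResponse.next();
}

// Only guard API routes; widen the matcher as needed.
export const config = { matcher: "/api/:path*" };
```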