How I Actually Use AI to Build: Claude Code, Cursor, and the Reality

AI Development Workflow

Author: Navas

Published: January 10, 2026

Category: AI

I've been using AI tools since late 2022. Here's what actually works, what doesn't, and how my workflow has evolved from patching code gaps to shipping entire platforms.

The honest version

I've been using AI for development since ChatGPT dropped in late 2022. Back then, I was finishing my computer science degree and desperately needed help shipping my final year project - a career guidance chatbot. AI helped me patch gaps in my coding knowledge. Copy-paste debugging, learning syntax I'd forgotten, understanding error messages.

Three years later, the workflow looks completely different.

What I actually use

Claude Code is my primary tool now. It can read my entire project, understand context, write code, run tests, execute bash commands, and deploy. It's not autocomplete - it's genuine pair programming with something that's read more documentation than I ever will.

The game-changer was MCP integrations. I've connected Claude to my databases (Neon), deployment platform (Vercel), project management (Linear), and file storage. When I'm building a feature, Claude can check the current schema, implement the change, and verify the deployment - all in one conversation. No tab switching, no context loss.
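
For reference, wiring those integrations up is mostly configuration. Here's a minimal sketch of a project-level .mcp.json in the standard MCP client format - the server package names, env vars, and credentials below are placeholders, not the real values, so check each provider's documentation for the actual commands:

  {
    "mcpServers": {
      "neon": {
        "command": "npx",
        "args": ["-y", "<neon-mcp-server-package>"],
        "env": { "NEON_API_KEY": "<your-neon-api-key>" }
      },
      "linear": {
        "command": "npx",
        "args": ["-y", "<linear-mcp-server-package>"]
      }
    }
  }

Once the servers are registered, Claude can call their tools from inside the same session - check a schema, inspect a deployment, update a Linear issue - which is what makes the "all in one conversation" part possible.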

Cursor handles the daily editing. VS Code with AI superpowers. The inline completions are useful, and chatting with your codebase helps when navigating unfamiliar code. But it's a complement to Claude Code, not a replacement.

What the workflow looks like

On a recent client project - Athletic AbhyAn - here's how it actually went:

Discovery: Pure human work. Understanding what the founders needed, what problems they were solving, what success looked like. No AI here.

Architecture: I sketched the data model and key flows, then talked through assumptions with Claude. "Here's what I'm thinking - what am I missing?" Useful for stress-testing decisions.

Scaffolding: Claude set up the project structure, installed dependencies, created the database schema. With Neon MCP connected, it created the actual database and ran migrations directly. What would have taken an afternoon happened in minutes.
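
To make the schema step concrete: the client project's real schema isn't shown here, but this is a minimal sketch of the kind of thing a scaffolding pass produces, assuming a Postgres database on Neon defined with Drizzle ORM and hypothetical table names:

  import { pgTable, serial, text, integer, timestamp } from "drizzle-orm/pg-core";

  // Hypothetical tables for illustration only - not the client project's real schema.
  export const athletes = pgTable("athletes", {
    id: serial("id").primaryKey(),
    name: text("name").notNull(),
    sport: text("sport"),
    createdAt: timestamp("created_at").defaultNow().notNull(),
  });

  export const trainingSessions = pgTable("training_sessions", {
    id: serial("id").primaryKey(),
    athleteId: integer("athlete_id").references(() => athletes.id).notNull(),
    notes: text("notes"),
    scheduledFor: timestamp("scheduled_for").notNull(),
  });

With the schema defined, a migration tool like drizzle-kit can generate the SQL; the Neon MCP connection is what let Claude run those statements against the real database instead of handing me a script to paste somewhere.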

Building: This is where AI shines. I describe what I want, Claude writes the code, I review and iterate. The feedback loop is tight. If something doesn't work, we debug together. If I don't like the approach, we try something else.

Polish: Animations, loading states, error handling, accessibility. AI helps, but this is where human judgment matters. What feels right? What's the appropriate level of feedback? These aren't questions with obvious answers.
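
As a small example of the kind of detail that needs a human call: a loading state is trivial to generate, but deciding what it communicates isn't. Here's a minimal sketch, assuming a React and TypeScript front end (which this post doesn't actually specify):

  // A sketch, not the project's actual component: an accessible loading indicator.
  // role="status" is announced politely by screen readers, so the user hears that
  // something is happening without being interrupted mid-task.
  export function LoadingState({ label = "Loading…" }: { label?: string }) {
    return (
      <div role="status">
        <span>{label}</span>
      </div>
    );
  }

The code is the easy part. Choosing the label, the timing, and whether a spinner is even the right feedback is the judgment work AI doesn't do for you.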

What AI can't do

Here's what I've learned: AI is excellent at the "how" but struggles with the "what" and "why."

It can't understand your client's business context. It can't make product decisions that balance competing priorities. It can't build relationships and trust. And it definitely can't take responsibility when things go wrong.

I've also caught AI making confident mistakes. Security issues, performance problems, UX antipatterns - suggested with complete certainty. Trust but verify. Always.

The mindset shift

I used to feel like an imposter when building things. The gap between what I could envision and what I could execute was demoralising. AI closed that gap.

Now I think of myself as the product lead and the AI as my engineering team. I handle discovery, architecture, design, and quality control. The AI handles implementation under my direction. We're collaborators.

This framing keeps me accountable for the thinking. AI can write code, but it can't decide what to build or why it matters. That's still my job. And honestly, that's the valuable part.

Practical advice

If you're starting out with AI-assisted development:

Start with one tool and learn it properly. I'd recommend Claude with a good IDE integration.

Invest in context. The better you can explain what you want and why, the better the AI performs. This is a skill. Writing good prompts is product thinking in miniature.

Review everything. Don't trust, verify. Especially for security, performance, and anything user-facing.

Ship fast, iterate faster. AI makes iteration cheap. Don't try to get it perfect on the first pass.

The tools will keep getting better. But the fundamentals won't change: understand the problem, ship something real, learn from what happens.

