Claude Code Guide

I was a top 0.01% Cursor user.
Here's why I switched to Claude Code 2.0.

You have 6-7 articles bookmarked about Claude Code. You've seen the wave. You want to be a part of it. Here's a comprehensive guide from someone who's been using coding AI since 2021 and read all those Claude Code guides so you don't have to.

Cursor vs Claude Code
You'll be on the right side after reading this

This is a guide that combines:

  1. my experience from 5 years of coding with AI
  2. my experience with Claude Code
  3. 10+ articles and countless X posts I consolidated about Claude Code (references at the bottom)
  4. my setup
  5. advanced tips

After this article, the only limit will be your own ideas.

The journey

It's March of 2023. GitHub Copilot is our frontier of AI coding.

ChatGPT is still a novelty. Model improvement isn't taken for granted.

GPT-4 gets released.

Instantly it's clear that this is paradigm shifting.

We could create a loop of AI thinking with some tools to search the web and write code for us. The smell of AGI is in the air!

We decide to call these loops "agents". I was lucky enough to help build the first AI agent, AutoGPT, which went mega viral and is to this day the fastest-growing repo to 100k stars.

But it didn't really work. If you got lucky, every once in a while you could get an almost working tic tac toe game.

For anything more complex? Forget about it.

Cursor came along in 2023 with promises. I tried and churned in October 2023 and again in May 2024. Good old copy and paste from ChatGPT was still better.

Then in September 2024 Cursor Composer came out.

From that moment, 90% of my code became AI generated.

I lived in that editor. I pushed it to its limits, wrote an internal guide on best practices that I never published, and figured out every trick: surgical cursor placement, context window management, Cursor rules, the perfect prompting patterns for different scenarios.

I had found the answer to all my problems. I even got an email from the team that I was a top 0.01% Cursor user.

Cursor Top 0.01%
Tweet link

I tried Claude Code earlier this year. Churned.

The workflow felt like a step backward from what I had built with Cursor. The model wasn't quite there yet; I still needed to know what was going on in the code more often than not.

Why would I ever use a tool that's barely as good and has a 10x worse UX?

What's changed

Enter Claude Code 2.0.

The UX has evolved. The harness is more flexible and robust. Bugs have been fixed. But that's all secondary.

The truth is that whatever RLHF Anthropic did on Opus 4.5 completely changed the equation. We've now evolved to the next level of abstraction.

Abstraction Progression

You no longer need to review the code. Or instruct the model at the level of files or functions. You can test behaviors instead.

I built a genetic algorithm simulator with interactive visualizations showing evolution in real-time including complex fitness functions, selection pressure, mutation rates, and more in 1 day. I didn't write a single line of the code.

Genetic Algorithm Simulator
Github link


Everyone has a magic wand now. You just have to figure out how to use it.

Why I switched

A skeptic on Twitter asks:

"Can someone explain to me why people use Claude Code instead of Cursor?"

— @ohabryka

Until a month ago, I shared the sentiment. Here's my answer:

  • Async-first mindset: Being in the IDE lends itself to instinctive code review and perfectionism. But we've ascended to the next level of abstraction, and the terminal-native workflow is a forcing function for taking that next step.
  • RLHF'd for its own scaffold: Claude models (especially Opus 4.5+) perform noticeably better in Claude Code. File searching, tool use, everything is tuned for this interface.
  • Cost efficiency: Claude Code plans seem to deliver more bang per token than Cursor's.
  • Customizability: DIY is native and composability is built in.

When to use Cursor

  • Pixel-perfect frontend: I often still find myself in the loop to get a pixel-perfect UI.
  • Learning: When iterating on something educational for yourself, the feedback loop is much quicker.
  • Prevent context pollution: If it's a small change unrelated to any of my Claude Code terminals, it's easiest to do it in Cursor.

Recommendation: Use Cursor as your default if you

a) are an organic coder who finds abstracting all code away to behavior scary, or
b) want to learn how to code.

Use Claude Code if you

a) never plan on learning and just care about outputs, or
b) are an abstraction maximalist.

There is a VSCode extension for Claude Code which can ease your transition. But the UX still isn't nearly as good as Cursor's. And it defeats the purpose.

You should be abstraction maxxing.

My current setup

Claude Code with Opus 4.5 for most tasks. Planning, code generation, complex refactors, architectural decisions.

Cursor with GPT 5.2 / Sonnet 4.5 when I need tight feedback loops. Learning, UI perfection, small changes.

ChatGPT for a few things: (a) programming related questions that don't need project context (like setting up an A100 VM in Azure), (b) second opinions on plans, and (c) when I don't understand an output or need clarification on something Claude said.

Ghostty as my terminal. Made by the cofounder of HashiCorp. Fast, no flickering, natively supports terminal splitting, better text editing experience, and native image support.

Wispr for voice-to-text. If you work from home or have your own office, not having to type all the time is valuable. Begone, carpal tunnel. I need to be in a certain mood to use it.

The 5 pillars of agentic coding

Capture the Alpha

Don't worry about taking in-depth notes. To make your life easier, I've encoded the entire alpha of this article in these two commands: /setup-claude-code (global, run once per machine) and /setup-repo (per project). They interview you for what you need and set everything up. Just paste this into your Claude Code chat and run it:
Download these commands to ~/.claude/commands/:

1. /setup-claude-code (run once per machine - installs all other commands):
https://gist.github.com/SilenNaihin/3f6b6ccdc1f24911cfdec78dbc334547

2. /setup-repo (run once per project):
https://gist.github.com/SilenNaihin/e402188c89aab94de61df3da1c10d6ca

Fetch each gist and save as [command-name].md in ~/.claude/commands/

Then run /setup-claude-code to install everything else.

It was written in the scriptures. Buckle up.

Agentic Tradeoffs

Context

Here are some tips I've picked up over time on context management:

  1. "spawn a subagent to do deep research on this topic" Spawn subagents for parallel work. They do not pollute the main agent's context. They can individually do work and add just the valuable context from their work to the main agent's context.
  2. /compact - Others are iffy about compacting, but often the tradeoff to stay in the same chat and eat the compacting is worth it. Their system for compacting is smart.
  3. /transfer-context: That said, after compacting enough times or doing any unrelated task, quality will degrade. Don't be afraid to create new chats. If you need to transfer context, just tell the model to write a prompt for a new chat containing the relevant context and files for the task (for anything advanced, create .md files, although I find managing these .md files annoying). Here's a gist of my /transfer-context command.
  4. /context: Shows you how much context you have left. You'll get a report like this:
    Context Report
    This is also shown in the bottom right of the terminal, and Claude will tell you when you're running low on context.
    Make the decision to compact or to switch chats at this point. Don't wait until it hits 0%: outputs degrade, and if it compacts in the middle of a task it will forget potentially relevant context you've given it earlier in the chat.
  5. Maintain focus: one chat = roughly one task. If a chat is focused on a single task it will have more relevant context. The definition of a 'task' is miles broader than what it used to be. Test the limits and see what works for you.
  6. Generating things in a chat that already has context will always perform best, whether it's docs, tests, or related code. Sometimes for one off changes (like a bug fix), I'll do them within the chat context, commit changes, and then rewind the conversation back to save context.

Use /resume to continue from a previous chat.

Note on context limits: Claude Code has a 200k context limit. You hit the wall faster than alternatives like Codex (400k) or Gemini (1M).

Planning

Your time spent planning is directly proportional to agent output.

Rule of thumb: a good prompt will save you 3 minutes of time on follow up prompts and debugging for every 1 minute you spend planning.

Shift+Tab twice: Enter plan mode. I use it, but only for larger tasks or when the exact shape of what I want is unclear. Note: plan mode saves to a .md in the global ~/.claude folder, which isn't accessible within your repo. I'll either ask Claude to create a plan.md in the repo after plan mode, or skip plan mode entirely and plan in-chat.

Three Approaches

  1. Plan mode dialogue: Start a conversation, ask questions, let it explore code, create a plan together. When you're happy, say "write plan to docs/*.md and start coding." or if in plan mode, "yes, and bypass permissions."
  2. Sprint-style todo list: For larger projects, set up a progress.txt and structured task list (prd.json). More on this in the Advanced section.
  3. Generate, revert, plan: Run your prompt, see what it generates, then revert and fold what you learned into the final plan.
Planning principles

After creating your plan, use our /interview-me-planmd command which interviews you in depth about your plan before building. (See this X post, I've also tried it myself and found it genuinely effective.)

Read @plan.md and interview me in detail using the AskUserQuestionTool about literally anything: technical implementation, UI & UX, concerns, tradeoffs, etc. Make sure the questions are not obvious. Be very in-depth and continue interviewing me continually until it's complete, then write the plan to the file.

Opus 4.5 is amazing at explaining things and makes stellar ASCII diagrams. My exploration involves asking lots of questions, clarifying requirements, understanding where/how/why to make changes.

Backwards compatibility: Models are currently RLHF'd so far into oblivion that you need to explicitly ask them not to maintain backwards compatibility.

Watch out for overengineering: Claude models love to do too much. Extra files, flexibility you didn't ask for, unnecessary abstractions. Be as explicit with what NOT to do as possible. Pete puts it well: "We want the simplest change possible. We don't care about migration. Code readability matters most, and we're happy to make bigger changes to achieve it."

Keep in mind: Coding agents are better at creating new files than editing existing ones. It can often be valuable to tweak the seed prompt and reset all the code from scratch.

Closing the loop

There's a classic XKCD about programmers spending a week automating a task that takes 5 minutes.

With agentic coding, this equation has flipped. Closing the loop is now almost always worth it. The cost of automation has collapsed. What used to take a week now takes a conversation.

If you find yourself doing something more than once, close the loop. If you spend a lot of time doing x thing, close the loop.

  • Make commands for repeated prompts (see the sketch after this list)
  • Make agents for repeated work
  • Update your CLAUDE.md
  • Make prompts in .mds (like Cursor rules!)
  • Change tsconfig and other config files
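
As a concrete example of the first bullet, a custom command is just a markdown prompt file dropped into ~/.claude/commands/; the file name becomes the slash command. A minimal sketch, where the /fix-types command and its prompt are hypothetical:

```bash
# Custom commands are markdown prompt files; the filename becomes
# the slash command. /fix-types here is a hypothetical example.
mkdir -p ~/.claude/commands
cat > ~/.claude/commands/fix-types.md <<'EOF'
Run the TypeScript compiler and read every error it reports.
Fix the underlying types; never silence errors with `any` or
`@ts-ignore`. Re-run the compiler until the build is clean.
EOF
# /fix-types is now available in every Claude Code chat.
```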

Verifiability

The only way you or the model know that you're right is if you can verify the outputs.

Before, you had to be in the code. Then with Cursor, you had to approve every edit. Now, just test behaviors with interface tests.

Interface tests are about knowing what's wrong and being able to explain it.

For UI this means looking, for UX this means clicking around, and for APIs this means making requests and checking the responses.
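
To make that concrete, an API-level interface test can be a tiny script that both you and the agent can run. A minimal sketch, assuming a hypothetical local server with /health and /users endpoints:

```bash
#!/bin/bash
# Minimal interface test: hit the API and check the responses.
# The endpoints and the expected 201 status are illustrative assumptions.
set -e
curl -sf http://localhost:3000/health > /dev/null \
  || { echo "FAIL: health check"; exit 1; }

status=$(curl -s -o /dev/null -w '%{http_code}' \
  -X POST http://localhost:3000/users \
  -H 'Content-Type: application/json' \
  -d '{"name":"test-user"}')
[ "$status" = "201" ] || { echo "FAIL: expected 201, got $status"; exit 1; }

echo "PASS: interface tests"
```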

A good way to think about closing the loop is to make it easy for you to verify by making it easy for the agent to verify.

For large refactors: Ask Claude to build comprehensive interface tests beforehand. This ensures you got the refactor right. The tests become your verification layer.

Writing tests: The best tests are written in the same context as the code they are testing.

Let Jesus take the wheel. For production apps, test in staging or dev on a PR. Integration tests are your safety net. If they pass, ship it.

Debugging

AI writes code fast, but debugging AI code requires different skills than debugging your own. You didn't write it, so you don't have the mental model.

The debugging loop

When something fails, use systematic debugging. I have a /debug command that triggers thorough investigation:

  1. Create hypotheses for what's wrong
  2. Read ALL related code (take your time)
  3. Add strategic logging to verify assumptions
  4. Tackle it differently in a new chat
  5. Try a different model
  6. Revert and try again
  7. Worst case: dive into the code yourself

The rule of three: If you've explained the same thing three times and Claude still isn't getting it, more explaining won't help. Change something.

Show instead of tell: If Claude keeps misunderstanding, show it a minimal example of what you want the output to look like. Claude is good at following examples.

Start fresh: If you're making lots of changes to your plan, start a new session. Get the agent to summarize the situation, what has been tried, and learnings. Copy paste into a new Claude session.

Council of models

Different models have different blind spots. When stuck, get fresh perspectives:

You can automatically review your PRs and commits. Claude can catch issues, suggest improvements, and provide context aware feedback before human review even begins.

You can do this via a Stop hook (more on these later) with Claude Code in headless mode (-p) that triggers on every commit, or via PRs. When I've used automated reviewing, it was always at the PR level.
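
Here's a minimal sketch of the commit-level variant, assuming only that headless mode accepts piped input; the prompt wording and review filename are illustrative:

```bash
#!/bin/bash
# Post-commit review via headless mode. Save as .git/hooks/post-commit
# and `chmod +x` it. The prompt and output filename are illustrative.
git show HEAD \
  | claude -p "Review this commit diff. Flag bugs, risky changes, and anything that silences errors instead of fixing them. Be terse." \
  > ".claude-review-$(git rev-parse --short HEAD).md"
```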

If you have access, use Codex for PR review. You don't want the same inductive biases that wrote the code reviewing it. Codex catches things Claude misses and vice versa.

Refactoring and cleanup

Tools to use: knip (dead code finder) and jscpd (code duplication detection).

Run /refactor to do a focused cleanup session with these tools.

I refactor when I either feel pain because Claude is making mistakes, or after large additions to codebases. I'm not the only one of the opinion that doing this continuously kills momentum. Treat it as a distinct phase.

Claude won't understand your preferences around code cleanliness. Over time, add context to your CLAUDE.md that reveals your preferences and reduces refactoring time.

Tips for an effective Claude Coder

Always bias towards using the most powerful models. Use /model to switch to Opus 4.5. The cost difference is negligible compared to the quality difference.

Use @ to mention files directly in your prompt. Sometimes you need to type @/ to get the full file list to appear.

Keyboard shortcuts I use the most

Shift+Tab twice: Plan mode.

Ctrl+R: Search through prompt history (similar to terminal backsearch).

Esc+Esc: Access /rewind checkpointing. Reverts to a previous checkpoint when Claude messes something up. Can rewind both code and conversation.

!: You can type any bash command in the chat by prefixing your message with !.

Rewind

Useful Mac shortcuts:

  • Shift+Enter: Add newlines without sending the message.
  • Cmd+Option+C (Raycast): Access full clipboard history. Essential for copying multiple things.
  • Option+Arrow: Skip by words. Cmd+Arrow: Jump to start/end of line.

The "just ask Claude" mindset

You can often ask Claude to do things you think you have to do manually. Changing default permissions, configuration, anything file-related.

Get in the mindset: just ask Claude. It knows how to do things like create custom commands (and it will search the web and figure it out if it doesn't know).

Using 12 parallel terminals at once

I often don't even have an IDE open for a repo anymore. I have 12 terminals open at once, actively working in 1-8 at any given time. Typically 2 per project: one for context management or Ralph, one for active multithreading.

Parallel terminals

Making real progress in 4 projects at once requires the projects to be more execution than thinking. And I have to be locked in off a Celsius and a pack of Zyns (metaphorically, I don't consume either. Just hard drugs like life).

The natural instinct is to use git worktrees to isolate parallel work.

But hammering the same branch is the best approach when running multiple instances, for speed and simplicity. In practice, if you do this right, you'll rarely need to use worktrees. Armin agrees. So does Pete.

Think in terms of the "blast radius" of one of your terminals. Evaluate the scope of your changes prior to sending your prompt. If it overlaps with another instance, you should be getting Claude to do it in that instance. You'll find that it's truly rare that this mindset doesn't work.

Worst case if there are errors or you miscalculated, you can always revert or fix it. The cost of the rare times this happens is worth it.

Our /commit-smart command helps make contextual commits. It only commits files that the Claude Code instance touched, which allows me to revert a specific change without losing unrelated work.

For solo projects I just push to main directly. When working with others:

  1. if it's one person I'll have a branch called silen which I periodically create a PR for and merge in, or
  2. if it's multiple collaborators I'll create branches and check out the Claude instances on that branch, and
  3. if it's a more established repo I'll create a second worktree, and have two terminals associated with two different branches.

Another tip: when one session is doing something like a refactor but I already have my next prompt typed out for a feature add, I can run !sleep 600 in that second instance and then send in my prompt.

Your CLAUDE.md

The default /init command analyzes your project and generates a starter configuration. Our /setup-repo command goes further to configure your repo in the first place. It includes best practices (also referencing the /init command) for agentic repos, and interviews you for what you need.

Generally, what you include should be driven by pain. Outside of the basic template that /setup-repo gives you (a sketch of the result follows this list):

  • Project summary + directory structure: Gives Claude immediate orientation.
  • Main dependencies and patterns: If you use domain-driven design, microservices, specific frameworks, document it.
  • Non-standard organizational choices: Anything that would confuse someone (or Claude) new to the codebase.
  • Tooling and repo layout info: Minimize the amount the agent needs to search.
  • Comments only where needed (minimize Chesterton's fence errors): Add comments on code that's referenced elsewhere or would be hard to understand without context. You don't need human-readable documentation everywhere, you can generate that later if needed.
  • Monorepos need extra guidance: Claude is worse at monorepos. Be explicit about which package you're working in, use full paths from repo root, and document package-specific scripts.
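
Pulling those points together, here's a hypothetical CLAUDE.md skeleton; every project detail in it is a placeholder, not a prescription:

```bash
# A hypothetical CLAUDE.md skeleton following the points above;
# the stack, paths, and scripts below are all placeholders.
cat > CLAUDE.md <<'EOF'
# Project
Web app: TypeScript + Next.js frontend, Prisma + Postgres backend.

## Layout
- apps/web    — Next.js frontend (UI tasks live here)
- apps/api    — REST API
- packages/db — Prisma schema; run `pnpm db:generate` after edits

## Conventions
- Leaf components are presentational; business logic lives in parents.
- Do not maintain backwards compatibility unless explicitly asked.
- Run `pnpm test` and `pnpm typecheck` before declaring a task done.
EOF
```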

Use /update-claudemd in the same chat as you discover a gotcha, add a new pattern, or change the project structure. It aims to update conservatively, adding only the signal.

Domain playbooks

Frontend

Great prompting and guidelines help most. UI is hard to explain and difficult to verify.

I found that giving it access to take screenshots of the UI was slow and imperfect, but it's possible this won't be the case a month down the line.

The interactions with the books and podcasts sections on my website took forever. Human judgment, screenshots, explaining, and organic coding.

  • Keep leaf components presentational: Business logic lives in parent components. Separating concerns makes it easy to audit props and spot irregularities. It also helps the agent find patterns to follow.
  • For the last mile, jump into your IDE: AI struggles with pixel perfect work. You'll likely need some organic coding to make it perfect.
  • Responsiveness gets forgotten: Models are bad at remembering responsiveness matters. Tell it to keep it in mind from the start, and expect to do some organic coding to make it perfect.
  • frontend-design plugin: From the Claude Code plugin store. Helps with design decisions and component structure. Just ask Claude to install it.

Screenshots

Drag screenshots into chat.

Pete mentions that at least 50% of his prompts contain screenshots. I use them sparingly. They can be powerful for UI fixes but are slow and imperfect for iterative frontend work.

I like to generate UIs or visual components in Nano Banana Pro and paste the screenshot to get Claude to generate it.

Gotchas

  • Silencing linters: There have been times when agents biased towards shutting the linter up with eslint-disable instead of fixing issues. If you notice this happening, use eslint-comments/no-restricted-disable (config sketch after this list).
  • Make styling reference docs: Create styling components and component reference markdown files. Reference them always. Set up tailwind.config with main colors, spacing tokens, etc.
  • Install Vercel's React best practices skill: Vercel released a skill that encodes React patterns and conventions. Worth installing for any React/Next.js project.
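
For the linter gotcha above, a config sketch, assuming eslint-plugin-eslint-comments (which provides that rule) is installed:

```bash
# Forbid eslint-disable comments entirely with
# eslint-comments/no-restricted-disable. Assumes
# eslint-plugin-eslint-comments is installed; "*" restricts every rule.
cat > .eslintrc.json <<'EOF'
{
  "plugins": ["eslint-comments"],
  "rules": {
    "eslint-comments/no-restricted-disable": ["error", "*"]
  }
}
EOF
```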

Backend

Verifiability is straightforward here. Some advice:

  • Use an ORM for schema-as-context: The entire DB schema lives in a file that AI understands. There are other ways to do this, but I've loved using Prisma with coding agents (sketch after this list).
  • Realistic seed data: Invest in tooling to keep your local DB populated with realistic data. This lets agents self verify in a realistic way.
  • Generate API docs and Postman workspaces: When working with APIs, ask Claude to generate documentation and a Postman collection so you can easily test endpoints yourself.
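
And for schema-as-context, a minimal sketch of the Prisma version; the models are placeholders:

```bash
# Schema-as-context: the whole DB schema in one file the agent can
# read and edit. User/Post are placeholder examples; datasource and
# generator blocks are omitted for brevity.
mkdir -p prisma
cat >> prisma/schema.prisma <<'EOF'
model User {
  id    Int    @id @default(autoincrement())
  email String @unique
  posts Post[]
}

model Post {
  id       Int    @id @default(autoincrement())
  title    String
  author   User   @relation(fields: [authorId], references: [id])
  authorId Int
}
EOF
```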

AI research

I'm just starting to experiment with this. I've given Claude access to a VM with an A100 to see what experiments it performs.

Karpathy shared his experience:

Claude has been running my nanochat experiments since morning. It writes implementations, debugs them with toy examples, writes tests and makes them fail/pass, launches training runs, babysits them by tailing logs and pulling stats from wandb, keeps a running markdown file of highlights, keeps a running record of runs and results so far, presents results in nice tables, we just finished some profiling, noticed inefficiencies in the optimizer resolved them and measured improvements.

It's not perfect but I'm used to doing all of these things manually, so just seeing it running on the side cranking away at larger scope problems and coordinating all these flows in relatively coherent ways is definitely a new experience and a complete change of workflow.

— Andrej Karpathy

So far it seems to me that models are able to grok pretty much anything and are great partners to think through things. Paste in a document about your hypothesis, and they can help you execute on it.

Have fun. But remember, extraordinary claims require extraordinary evidence. Avoid getting one shotted into psychosis land.

Learning

Claude Code is great for generating Jupyter notebooks (.ipynb) and using a large prompt for learning concepts. But when you're sitting there trying to grok things and want to stay in the loop, Cursor's Cmd+K and chat are best (along with ChatGPT).

  • Questioning your assumptions: Ask it to challenge your understanding.
  • Fill-in-the-blank code: Instead of generating everything, have it create scaffolding with gaps for you to fill.

Advanced Claude Code usage

On Mac, run caffeinate -dimsu to prevent your laptop from sleeping while Claude works. Start a task, close your laptop, go places.

When you paste something longer, Claude simplifies this in the terminal to [Pasted text #x +x lines]. 9/10 times I like this.

The other 10% of the time, here's my workaround: I send a bash command like !sleep 100, press Enter on my queued prompt, then press the up arrow. This brings the fully expanded queued prompt back into your terminal.

We used to get shiny rainbow text whenever we typed 'ultrathink', which would maximize Claude's thinking.

ultrathink old

But it's 'on' by default now.

ultrathink deprecated

Ralph for larger projects

Unfortunately, for almost everything, Ralph is more of a pain to get working than it's worth. Sorry to disappoint the hype.

Ralph basically puts a bunch of Claude Code instances into a loop and coordinates them using a prd.json and progress.txt. I do sometimes use it when I'm starting a new project.
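
Conceptually it's just a loop. A hand-rolled sketch of the idea (not the actual Ralph repo; the sentinel string is an illustrative convention):

```bash
# A Ralph-style loop: each iteration is a fresh headless run that
# reads PROMPT.md and updates progress.txt. Not the real Ralph repo;
# the ALL_TASKS sentinel is an illustrative convention (note the
# PASSED wording, per the keyword warning below).
while ! grep -q 'ALL_TASKS: PASSED' progress.txt; do
  claude -p "$(cat PROMPT.md)" --dangerously-skip-permissions
done
```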

If you are brave enough to give it a shot, the /setup-ralph command sets up all the Ralph files for you.

Ralph exits early when it detects keywords like "done", "complete", or "finished" in your progress files. Use status terms like PASSED/PENDING instead to avoid premature exits.

Also, Claude can confuse itself into thinking it's advising about Ralph rather than being the agent. Make your PROMPT.md direct: "You are the agent. Do the work."

The key to Ralph: keep an accompanying chat open to guide it and check on progress. Ralph runs in the background; you steer from the side. When starting Ralph, I tell my monitoring chat: "Sleep for 30 seconds, then check if Ralph is executing correctly. Repeat 3 times." This catches early issues before you walk away.

Ralph 1
The Ralph repo
Ralph 2
Running Ralph for one of my projects

Code on your phone

Use vibetunnel.sh with Tailscale and you can run Claude Code from your phone, anywhere.

Phone coding
The VibeTunnel desktop connection

Hooks, Subagents, Skills, and MCP

Custom subagents are spawned instances that don't pollute your main context but can report back directly to it. I have custom agents for different types of deep research, and a claude-critic agent for opinions. A friend uses a /f command in a subagent to find relevant files and context without cluttering the main agent.

Use cases: large refactoring (subagent for each logical group of files), code review pipelines (style-checker, security-scanner, test-coverage in parallel), research tasks (explore subagent for unfamiliar codebases).
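
Defining one is a markdown file with frontmatter; here's a sketch as I understand the format (verify with /agents), using a hypothetical deep-research agent:

```bash
# A sketch of a custom subagent definition. The deep-research agent
# and its tool list are a hypothetical example.
mkdir -p ~/.claude/agents
cat > ~/.claude/agents/deep-research.md <<'EOF'
---
name: deep-research
description: Researches a topic across the codebase and the web, then reports back only the findings relevant to the main task.
tools: Read, Grep, Glob, WebSearch
---
You are a research subagent. Investigate the topic you are given,
read all related code, and return a terse summary containing only
the context the main agent needs. Do not modify any files.
EOF
```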

Hooks execute on specific events (tool call, stop, etc.). I've experimented but nothing has stuck. One use case: running Prettier on .ts files after Claude finishes. A good mental model for when to use hooks: a) specific things you do at a certain point (like after chat) often, and b) it can be done through a bash command.
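
As a sketch of that Prettier use case, a Stop hook in project settings; the hook schema here is my best reading of the docs, so verify with /hooks before relying on it:

```bash
# Run Prettier over .ts files whenever Claude finishes responding.
# The settings schema is my best understanding; check /hooks first.
mkdir -p .claude
cat > .claude/settings.json <<'EOF'
{
  "hooks": {
    "Stop": [
      {
        "hooks": [
          { "type": "command", "command": "npx prettier --write '**/*.ts' --log-level warn" }
        ]
      }
    ]
  }
}
EOF
```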

I've heard about running a "Do more" prompt when Claude finishes via the Stop hook to keep it working for hours.

Skills are folders of files (scripts, prompts, etc.) where the LLM decides when and what to load. They're a superset of commands, coming with their own executable code and many potential prompt files. Use cases: code review standards, commit message conventions, database query patterns, API documentation formats. Vercel's React best practices skill is worth installing for React/Next.js projects.
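
A sketch of what a skill looks like on disk, as I understand the format; the commit-conventions skill is a hypothetical example:

```bash
# A skill is a folder with a SKILL.md whose description tells the
# model when to load it. This skill is a hypothetical example.
mkdir -p ~/.claude/skills/commit-conventions
cat > ~/.claude/skills/commit-conventions/SKILL.md <<'EOF'
---
name: commit-conventions
description: Use when writing commit messages in this user's repositories.
---
Write commits as <type>(<scope>): <summary>.
Types: feat, fix, refactor, docs, chore. Keep the summary under
72 characters and explain the "why" in the body.
EOF
```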

MCP (Model Context Protocol) lets Claude talk to external services directly. Connect to GitHub, Slack, databases, issue trackers. Use cases: implement features from JIRA issues, query PostgreSQL directly, integrate Figma designs, draft Gmail responses, summarize Slack threads. Run /mcp to see your connections.
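
Wiring one up from the CLI looks like this; `claude mcp add` is the real subcommand, but the GitHub server package here is an illustrative choice:

```bash
# Add an MCP server, then verify the connection. Any stdio MCP
# server works the same way; this package is just an example.
claude mcp add github -- npx -y @modelcontextprotocol/server-github
claude mcp list   # or run /mcp inside a chat
```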

Advanced Concepts

Here's a guide that goes more in depth on these. I think you should get started without reading it. If you feel something is missing or you try to close the loop some other way and it doesn't work, come back and read it.

Headless Mode

The -p flag runs Claude Code in headless mode. It runs your prompt and outputs the result without entering the interactive interface. This means you can script it, pipe output to other tools, chain it with bash commands, integrate into automated workflows.
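
A few hedged one-liners (verify flag names with `claude --help`):

```bash
# One-shot prompt; the result prints to stdout.
claude -p "Summarize this repo's architecture in five bullets"

# Pipe data in...
git diff | claude -p "Write a conventional commit message for this diff"

# ...and ask for structured output.
claude -p "List the TODO comments in src/ as JSON" --output-format json
```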

People use this for automatic PR reviews, automatic support ticket responses, documentation updates. All logged and auditable.

The bottom line

To recap: the entire alpha of this article is encoded in the two commands from "Capture the Alpha" above, /setup-claude-code (global, run once per machine) and /setup-repo (per project). They interview you for what you need and set everything up.

This will all change. Quickly. Here's what I believe will persist:

  • Planning leverage will only increase. As models get better, a well-structured prompt/repo pays off even more.
  • The ability to quickly verify work will remain important.
  • Close the loop. If you do something more than once, abstract it. Commands, docs, components. This principle predates AI.
  • Don't be lazy. Figure out what needs to be done logically. You don't actually have to code, but do the hard thinking. A lot of times you'll realize what you're trying to do is either simpler or harder than you think.
  • There's no "right answer." The only way to create your best system is to create it yourself by being in the loop. Best is biased by taste and experience. Experiment, iterate, and discover what works for you.

The people whose work informed this guide, and resources worth exploring:

People

  • steipete — Prolific writer on AI coding workflows
  • vibekanban — Comprehensive vibe coding guide
  • sankalp — Experience reports on Claude Code
  • addyo — LLM coding workflow breakdown

Key Posts

Tools Mentioned

  • VibeTunnel — Terminal multiplexer for mobile coding
  • knip — Dead code finder
  • jscpd — Code duplication detection
  • agent-scripts — steipete's AGENTS.MD and scripts
  • Ralph — For larger sprint-style development
  • Wispr — Voice-to-text for coding

Command Summary

All commands I recommend, organized by type and frequency.

Command    Type     Usage
/compact   Default  Every chat
/context   Default  Every chat
/rewind    Default  Situational
/resume    Default  Situational
/agents    Default  Situational
/clear     Default  Situational

Custom commands (gists linked):

Command               Usage                         Link
/setup-claude-code    Once per machine              gist
/setup-repo           Once per project              gist
/setup-ralph          When using Ralph              gist
/commit-smart         Every commit                  gist
/interview-me-planmd  Planning                      gist
/update-claudemd      End of session                gist
/transfer-context     When context degrades         gist
/debug                When stuck                    gist
/refactor             After large additions         gist
/ensemble-opinion     Rarely (multi-model)          gist
/codex-delegate       Rarely (requires Codex CLI)   gist
/gemini-delegate      Rarely (requires Gemini CLI)  gist

This guide captures a moment in time. The tools will change, the models will improve, but the principles of planning, verifiability, and closing the loop will persist. Take what works, leave what doesn't, and develop your own workflow through experimentation.