A Field Report

Building My Personal AI Coding Workflow

How I replaced an over-engineered system with a leaner one that combines three independent tools into a single integrated stack — designed to be understandable even if you don't write code yourself.

What you'll find in here
  1. Why I built this
  2. The three systems that make it work
  3. System one: The Brain
  4. System two: Archon
  5. System three: My Personal Workflow Repo
  6. How they fit together
  7. A day in the life
  8. The plumbing that makes it real
  9. Trade-offs and honest costs
  10. What you'd need to recreate this
  11. Credits and resources

Why I built this

I'm not a software engineer. I'm a builder who works with AI coding assistants to make things — most recently, native iOS and Android apps. For most of the past year I used BMAD method v6 as my workflow. BMAD is a framework that gives AI agents very specific roles (analyst, project manager, architect, scrum master, developer, QA) and shepherds them through tightly defined phases. It worked, but it felt heavy. A lot of ceremony before any code got written. A lot of artifacts to maintain. A lot of clicking between agent personas.

What I wanted was something lighter — a workflow with fewer moving parts but with the same end-to-end coverage, from idea to merged pull request. So I built one, drawing on three independent tools that each happened to solve part of the problem perfectly. The result is what this document describes.

If you've felt the same friction with whatever you're using today, this might give you a model worth borrowing from.

The three systems that make it work

The whole stack is built on three independent open-source tools. None of them know about each other directly — they're glued together by a personal "workflow repo" I keep on my Mac. A single slash command wires any new project into the same setup, and symlinks mean any improvement I make to my commands propagates everywhere automatically.

The Brain

A federated knowledge graph that indexes every project on my Mac. It gives the AI a permanent memory across all my work — so when I start a new project, the AI can reference what I built in earlier projects without me having to repeat myself.

Archon

A workflow engine that takes a GitHub issue (or any task) and runs it end-to-end autonomously — research, plan, implement, test, open a pull request, review itself, fix its own findings. I trigger it from Slack, GitHub, or my terminal.

My Workflow Repo

A folder of slash-commands and project templates — most of them drawn directly from Cole Medin's Dynamous Agentic Coding course, with my own additions for Archon and Brain integration. It gets wired into every new project so each one starts from the same baseline.

Each of these can run on its own. What's interesting is how they reinforce each other when used together.

System one: The Brain

Think of the Brain as a personal Wikipedia, except it builds itself automatically from the projects I already have on my computer. Every code repository, every set of notes, every PDF library — anything I tell it to ingest — gets parsed into a giant interconnected graph of concepts, files, functions, and ideas.

Once that graph exists, I can ask questions about it from anywhere: from my AI coding tool, from the terminal via the CLI, or from Slack on my phone.

The Brain refreshes itself automatically at 2am every night. New code I wrote during the day shows up in the graph by the next morning. I never manually re-index anything.
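The nightly refresh can be as simple as a single cron entry. This is a sketch under assumptions: the brain reindex command name is mine for illustration, and on macOS a launchd job would be the more native equivalent.

```shell
# Hypothetical crontab line (add with 'crontab -e'): re-index everything
# at 2am and append output to a log. 'brain reindex' is an assumed CLI name.
0 2 * * * "$HOME/bin/brain" reindex --all >> "$HOME/Library/Logs/brain.log" 2>&1
```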

A note on privacy

The Brain has a built-in "sensitivity tier" system. Some projects are marked shareable — those can be queried from Slack and could (in the future) sync to a cloud server. Most are marked confidential — those only respond to local queries from my Mac, never reach Slack, never leave my machine. The default for new projects is confidential, fail-safe.
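To make the fail-safe default concrete, here's a minimal shell sketch of the idea. The .brain-tier file name and the function names are my illustration, not the Brain's actual implementation:

```shell
#!/bin/sh
# Sketch of a fail-safe sensitivity check (file name and function names
# are illustrative, not the Brain's real code).
tier_of() {
  # Read the project's tier marker; a missing file means confidential.
  cat "$1/.brain-tier" 2>/dev/null || echo "confidential"
}
allow_slack_query() {
  # Only explicitly shareable projects may be queried from Slack.
  [ "$(tier_of "$1")" = "shareable" ]
}

mkdir -p /tmp/proj_shareable /tmp/proj_new
echo "shareable" > /tmp/proj_shareable/.brain-tier

allow_slack_query /tmp/proj_shareable && echo "shareable project: Slack OK"
allow_slack_query /tmp/proj_new || echo "new project: local only (default)"
```

The property that matters is the default: a project with no marker at all behaves as confidential, so forgetting to classify something can never leak it to Slack.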

What's underneath: graphify, and what I built on top

The Brain isn't all my own code. The engine underneath is an open-source tool called graphify, which does the heavy lifting — it parses source code, markdown, PDFs, transcripts, and 25+ other file types into a graph of interconnected nodes. Graphify alone gives you one corpus, one graph, and a CLI to query it.

What I built around graphify is a set of capabilities it doesn't include on its own: federation across many projects, a privacy model, a phone-accessible Slack interface, and automatic self-maintenance. Each addressed something specific that I needed for the way I wanted to work.

Why build this layer on top instead of just using graphify directly? Graphify is excellent as a parser-and-graph-builder, but it doesn't have a privacy model, doesn't federate, doesn't have a phone-accessible interface, and doesn't self-maintain. Those four gaps are what made it not quite ready to be a personal AI memory I could trust across my entire computer. The Brain is what fills them. Graphify does the hard part; my Brain wraps it in the boring-but-essential infrastructure that makes it safe and convenient.

The Brain runs entirely on my computer. No cloud dependencies. Open source. It's the foundation everything else builds on, because it solves the biggest weakness AI coding tools have today: they forget everything between sessions. The Brain gives them a memory that's mine, that I control, and that grows continuously as I work.

System two: Archon

Archon is the most ambitious piece. It's a "workflow engine for AI" — like a recipe book where each recipe is a YAML file describing a sequence of steps the AI should take. Some steps are deterministic (run a test, push a branch, make a git commit) and some steps are AI-driven (plan a feature, review a PR, write code).
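To make "recipe" concrete, here is a hypothetical sketch of the idea. This is illustrative only, not Archon's actual workflow schema: deterministic steps run fixed commands, AI steps delegate to the model.

```yaml
# Illustrative sketch only; not Archon's real schema.
name: fix-github-issue
steps:
  - kind: deterministic        # fixed command, same result every run
    run: git switch -c fix/issue-${ISSUE_NUMBER}
  - kind: ai                   # delegated to the model
    prompt: Research the issue, explore the codebase, write a plan.
  - kind: ai
    prompt: Implement the plan in the isolated working copy.
  - kind: deterministic
    run: npm test
  - kind: ai
    prompt: Review the resulting diff and fix any findings.
  - kind: deterministic
    run: gh pr create --draft --fill
```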

Where Archon shines is autonomous delegation. I don't have to babysit it. Here's the workflow I use most:

I file a GitHub issue → comment @archon fix this → Archon does the rest.

"The rest" means: Archon receives the GitHub webhook, classifies whether it's a bug or feature, does web research on relevant libraries, explores my codebase to understand context, writes an implementation plan, executes the plan in an isolated copy of my repo (so my main code is never touched), runs the tests, opens a draft pull request, runs its own comprehensive review with multiple AI agents, fixes any review findings it can address, and finally posts a comment back on the issue with the PR link.

I tested this end-to-end while writing this document. A real bug got reported, I commented @archon fix this, and ~12 minutes later a draft PR appeared with a clean, accurate fix and a detailed report. Two commits — one for the implementation, one for review fixes. Zero intervention from me during those 12 minutes.

Archon ships with 20+ pre-built workflows — fix-github-issue is just one. There are workflows for code review, refactoring, conflict resolution, PR validation, feature development, and more. I can also write my own workflows in YAML for processes that are unique to how I work.

When NOT to delegate

Archon is great for well-scoped tasks with clear specs. It's not great for exploratory work where I'm still figuring out what I want. For that, I drive interactively using the commands in my workflow repo (next section). The rule of thumb: delegate when you'd be able to write a clear ticket for a contractor; drive interactively when you wouldn't.

System three: My Personal Workflow Repo

Credit upfront

Before I describe what's in this repo: the bulk of the slash-commands and the entire mental model come from Cole Medin's Dynamous Agentic Coding course, co-created with Rasmus Widing. I'm an integrator and a customizer — they did the hard work of designing the commands. If anything in this section sounds clever, the cleverness is almost certainly theirs. Course details and signup: community.dynamous.ai.

This is the glue: a folder on my Mac containing my slash commands, subagents, and skills, plus the template CLAUDE.md that seeds each new project.

When I start a new project, I run a single slash command — /init-workflow — from inside the new directory. It symlinks the .claude/commands/, .claude/agents/, and .claude/skills/ folders from my workflow repo into the project, and copies the CLAUDE.md template fresh so I can customize it for that project's tech stack and conventions. Every project gets the same baseline, but each one is individually tuned.

The symlink approach is the key trick. When I improve a command in my workflow repo, every project I've ever onboarded inherits the improvement automatically — no chasing down copies, no re-running setup, no version drift. Meanwhile, CLAUDE.md stays as an independent copy in each project so edits don't leak between them.
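In shell terms, the setup step amounts to something like the following. The paths are illustrative (my actual /init-workflow is a Claude Code slash command), but the link-versus-copy split is the real design:

```shell
#!/bin/sh
# Approximation of what /init-workflow does (paths illustrative).
WORKFLOW_REPO="${WORKFLOW_REPO:-$HOME/workflow-repo}"
mkdir -p .claude

# Shared folders are symlinked: improving a command in the workflow repo
# instantly updates every project that links to it.
for d in commands agents skills; do
  ln -sfn "$WORKFLOW_REPO/.claude/$d" ".claude/$d"
done

# CLAUDE.md is copied, never linked, so per-project edits stay independent.
[ -f CLAUDE.md ] || cp "$WORKFLOW_REPO/templates/CLAUDE.md" CLAUDE.md 2>/dev/null || true
```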

The core command library — /prime, /plan-feature, /execute, /code-review, /code-review-fix, /validate, /commit, /create-prd, /init-project, /rca, /implement-fix, the code-reviewer subagent, and the sectioned CLAUDE.md template — comes directly from the Dynamous Agentic Coding course by Cole Medin and Rasmus Widing. I'm using the most refined versions of their commands with light project-specific tweaks. Cole has spent serious time on these — the planning command alone is around 470 lines of careful prompt engineering, with five explicit analysis phases and a full plan template that produces context-rich output an execution agent can implement in one pass.

My own contribution is mostly the integration layer on top, plus a handful of focused tweaks: the /init-workflow setup command, the Brain and Archon integration pieces, and a few shim commands that just call the upstream tools.

Everything else — the PIV loop itself, the planning rigor, the validation pyramid, the original command structure, the careful prompt engineering — is Cole and Rasmus's design. Without their groundwork the workflow repo would be a shadow of what it is.

How they fit together

Here's the picture I keep in my head:

My new project's repo  <->  Workflow Repo commands
             ↑↓
Brain (knowledge)   Archon (execution)   GitHub + Slack (channels)

The Workflow Repo is the cockpit. The Brain is the memory. Archon is the autopilot. GitHub and Slack are the steering wheel — they're how I trigger things from outside the cockpit.

When I start a brand-new project, I run /init-workflow once to wire the workflow into that directory. Then in a typical session I might:

  1. Open a new project. Run /prime — the AI reads the project's CLAUDE.md, the relevant reference docs, and gets oriented.
  2. Ask the AI a vague question about whether I've built something similar before. The AI queries the Brain across all my projects and finds a pattern from a project I built six months ago.
  3. Use /plan-feature to produce a detailed plan for what I want to build today. The AI uses what it learned from the Brain to make better suggestions.
  4. Use /execute to implement it step by step. I watch and approve each step.
  5. Use /validate to run tests and linters.
  6. Use /commit to make a clean git commit with a structured message.

Later that evening, I notice a bug while testing. I:

  1. Open the GitHub repo on my phone.
  2. File an issue describing the bug.
  3. Comment @archon fix this on the issue.
  4. Close my phone.
  5. Come back ~15 minutes later to find a draft PR ready to review.

Meanwhile, the Brain has been indexing all that new work in the background. Tomorrow's session will know about today's changes automatically.

A day in the life

Here are some concrete scenarios from real workdays:

Morning — exploring a new feature idea

I open my main app project in my AI tool. I run /prime, then I ask: "I want to add an offline mode that syncs back when the connection returns. Have I ever built something like this before across my projects?" The AI queries the Brain, which checks my construction app, my home page, my course materials. It finds a sync pattern I used in a different project two years ago and pulls the relevant files into the conversation. I use that pattern as a starting point.

Afternoon — clearing tickets

I have a backlog of small GitHub issues — typos in docs, a missing tooltip, a styling tweak. I comment @archon fix this on three of them in a row from my phone while waiting at a school pickup. By the time I get home, three draft PRs are waiting. I review and merge.

Evening — Slack DM to the Brain from my phone

I'm thinking about a project on the weekend. I message my Brain bot in Slack: "what were the design decisions in my home page's authentication?" The answer comes back right there in Slack, with file citations, summarized in plain English. I don't have to open my laptop.

Late-night thought — handing off a feature

I have an idea for a feature I want built but I'm not going to do it myself tonight. I file a GitHub issue describing the feature carefully and comment @archon build this on it. Go to bed. Wake up to a draft PR.

The plumbing that makes it real

Most of the magic is just configuration. None of this requires custom code on my end; it's about connecting things that already exist: GitHub webhooks to trigger Archon, a Cloudflare tunnel so those webhooks can reach my Mac, and Slack apps for the chat interfaces.

The total amount of code I wrote myself to make all this work is small — mostly configuration files, a few shim commands that just call the upstream tools, and the template CLAUDE.md with my project rules. The hard work is in Archon, the Brain, and the AI tool itself. I just connected the pieces.

Trade-offs and honest costs

This wouldn't be a useful field report without the downsides. Here's what I gave up to get here:

What it costs, and what you get in return:
Subscription usage allowance, not API spending. Archon is configured to use my Claude subscription (via global auth from claude /login) instead of pay-per-token API credits. That means I never see a surprise bill — but each workflow run consumes a real chunk of my 5-hour-window allowance, because Archon uses Claude Opus with extended context across many phases (research → plan → implement → review → self-fix). On Claude Max I can run several big workflows per window; on Claude Pro fewer. If I cap the window, Archon just slows down or queues; it doesn't keep spending. For fixes I'd otherwise spend 30+ minutes on, the trade is great. For trivial fixes I do them by hand to preserve allowance for the bigger work. The big win is predictable cost — flat monthly subscription instead of variable per-token billing.
Setup time. The first time I set everything up, it took me a few hours over multiple sessions — including a few wrong turns and recoveries. Once set up, the system runs itself. Subsequent projects take ~15 minutes to onboard into the workflow.
A learning curve. Even though I don't write much code myself, I had to learn enough about how the pieces connect to debug when something didn't work. I now understand the tools well enough to extend them as my needs evolve.
Trust in autonomous systems. Letting Archon write code while I'm not watching takes a leap of faith. Archon's self-review and validation steps catch most mistakes. Anything that gets through can be reviewed before merge — Archon opens PRs as drafts by default, never auto-merges.
Maintenance. Three different open-source projects mean three different update cycles to track. Each one is in active development by people who care. Updates have been mostly painless.

The bigger philosophical trade: I'm now more dependent on these specific tools than I was when I was just typing prompts into a chat window. If any one of them goes away, parts of my workflow break. I accept that — they're all open source, so worst case I can keep running older versions, but there's a real risk.

What you'd need to recreate this

If you wanted to build something similar, the rough shopping list is:

  1. An AI coding tool that supports MCP and slash commands. I use Claude Code. Cursor and others would also work but the commands would need adapting.
  2. Claude access — Archon can use either your Claude Pro / Max subscription (via local claude /login, recommended — no per-token billing) or an Anthropic API key. The Brain separately uses a small amount of Claude Haiku allowance for the Slack-side prose summarization.
  3. A GitHub account and the GitHub CLI installed locally. Personal Access Token with issue / PR / contents permissions.
  4. A domain you own and Cloudflare, for the tunnel. Free Cloudflare plan is enough.
  5. Bun, Node, or another JavaScript runtime to run Archon from source.
  6. The Archon project (open source; see the credits section below). About 30 minutes to install and configure.
  7. A federated knowledge graph tool — I use a custom one called Brain built on top of graphify. The pattern is more important than the specific tool.
  8. One or two Slack apps created in your own workspace (free for personal use).
  9. A personal workflow repo — this is the folder you build yourself, drawing from your own preferred commands. Cole Medin's Dynamous Agentic Coding course is the best starting point I know of. Once you have it, write yourself a one-shot setup command (mine is a Claude Code slash command called /init-workflow) that symlinks the right folders into any new project — so you're not copy-pasting files every time, and so improvements you make to commands propagate to all your existing projects automatically.

For someone non-technical, the realistic minimum to get the autonomous-fix-from-issue loop working is probably 4-6 hours of focused setup time, including a couple of debugging detours. You can get to "Brain on its own" in less than an hour.

A pragmatic suggestion

Don't try to build everything at once. Start with just one piece — most likely Archon, because it gives the most visible payoff. Get the GitHub-webhook-triggered fix-issue loop working on one repo. Live with it for a week. Then add the workflow repo. Then add the Brain. The stack is only as valuable as your understanding of it; piecemeal adoption gives you time to learn what each piece is doing.

Credits and resources

None of this is original work on my part. I'm an integrator, not a builder of these underlying tools. Credit where it's due: graphify, the open-source engine underneath the Brain; Archon, the open-source workflow engine; and the Dynamous Agentic Coding course by Cole Medin and Rasmus Widing (community.dynamous.ai), which supplied the command library and the mental model.

If you're in the Dynamous community, I'd love feedback on this writeup, especially from anyone running a similar stack. And if you do try to set something like this up yourself, please share what you find — the more of us working through these patterns, the better the patterns get.