How I replaced an over-engineered system with a leaner one that combines three independent tools into a single integrated stack — designed to be understandable even if you don't write code yourself.
I'm not a software engineer. I'm a builder who works with AI coding assistants to make things — most recently, native iOS and Android apps. For most of the past year I used BMAD method v6 as my workflow. BMAD is a framework that gives AI agents very specific roles (analyst, project manager, architect, scrum master, developer, QA) and shepherds them through tightly defined phases. It worked, but it felt heavy. A lot of ceremony before any code got written. A lot of artifacts to maintain. A lot of clicking between agent personas.
What I wanted was something lighter — a workflow with fewer moving parts but with the same end-to-end coverage, from idea to merged pull request. So I built one, drawing on three independent tools that each happened to solve part of the problem perfectly. The result is what this document describes.
If you've felt the same friction with whatever you're using today, this might give you a model worth borrowing from.
The whole stack is built on three independent open-source tools. None of them know about each other directly — they're glued together by a personal "workflow repo" I keep on my Mac. A single slash command wires any new project into the same setup, and symlinks mean any improvement I make to my commands propagates everywhere automatically.
- **The Brain.** A federated knowledge graph that indexes every project on my Mac. It gives the AI a permanent memory across all my work — so when I start a new project, the AI can reference what I built in earlier projects without me having to repeat myself.
- **Archon.** A workflow engine that takes a GitHub issue (or any task) and runs it end-to-end autonomously — research, plan, implement, test, open a pull request, review itself, fix its own findings. I trigger it from Slack, GitHub, or my terminal.
- **The workflow repo.** A folder of slash-commands and project templates — most of them drawn directly from Cole Medin's Dynamous Agentic Coding course, with my own additions for Archon and Brain integration. It gets wired into every new project so each one starts from the same baseline.
Each of these can run on its own. What's interesting is how they reinforce each other when used together.
Think of the Brain as a personal Wikipedia, except it builds itself automatically from the projects I already have on my computer. Every code repository, every set of notes, every PDF library — anything I tell it to ingest — gets parsed into a giant interconnected graph of concepts, files, functions, and ideas.
Once that graph exists, I can ask questions about it from anywhere:
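A couple of examples of the kind of thing I ask (the questions are illustrative; /brain-query is the terminal wrapper described later):

```
/brain-query "Have I ever implemented offline sync, in any project?"
/brain-query "Which of my repos touch push notifications, and how?"
/brain-query "Summarize the design decisions in my home page's authentication."
```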
The Brain refreshes itself automatically at 2am every night. New code I wrote during the day shows up in the graph by the next morning. I never manually re-index anything.
The Brain has a built-in "sensitivity tier" system. Some projects are marked shareable — those can be queried from Slack and could (in the future) sync to a cloud server. Most are marked confidential — those only respond to local queries from my Mac, never reach Slack, never leave my machine. The default for new projects is confidential, fail-safe.
The Brain isn't all my own code. The engine underneath is an open-source tool called graphify, which does the heavy lifting — it parses source code, markdown, PDFs, transcripts, and 25+ other file types into a graph of interconnected nodes. Graphify alone gives you one corpus, one graph, and a CLI to query it.
What I built around graphify is a set of capabilities it doesn't include on its own. Each addressed something specific that I needed for the way I wanted to work:
- **A sensitivity and privacy layer.** Every project gets a tier (confidential by default, shareable only if I opt in). Confidential content is mathematically blocked from reaching Slack or any future cloud replica. The filter is tested with canary strings to verify there's no accidental leakage. Without this layer, I couldn't safely have a Slack bot that answers questions about my projects — it would be a privacy disaster.
- **Federation across projects.** Graphify gives you one corpus and one graph; the Brain indexes every project on my Mac so a single question can span all of them.
- **A Slack interface.** Answers carry [repo:path] citations so I can trace any claim back to source. Filtered through the sensitivity layer before anything leaves the local process. Without this, I couldn't query the Brain from my phone — and being able to ask the Brain things from my couch turns out to be one of its most-used features.
- **Self-maintenance.** Graphify alone requires manual extract runs. My Brain has a daily refresh at 02:00 baked in as a launchd agent, plus a self-healing doctor command for a macOS-specific Spotlight quirk that periodically breaks the Python virtual environment. Without this, my Brain would slowly go stale and I'd have to remember to re-index manually.

Why build this layer on top instead of just using graphify directly? Graphify is excellent as a parser-and-graph-builder, but it doesn't have a privacy model, doesn't federate, doesn't have a phone-accessible interface, and doesn't self-maintain. Those four gaps are what made it not quite ready to be a personal AI memory I could trust across my entire computer. The Brain is what fills them. Graphify does the hard part; my Brain wraps it in the boring-but-essential infrastructure that makes it safe and convenient.
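To make the fail-safe behavior concrete, here is a minimal sketch of how a filter like that can be verified with canary strings. This is illustrative Python, not the Brain's actual code; the tier names mirror the article, everything else is hypothetical:

```python
# Illustrative sketch of a fail-safe sensitivity filter (not the Brain's real code).
CANARY = "CANARY-confidential-7f3a"  # planted in a confidential repo on purpose

def filter_for_slack(results):
    """Only explicitly 'shareable' results may leave the local process.

    A missing or unknown tier falls through to blocked, matching the
    confidential-by-default behavior described above.
    """
    return [r for r in results if r.get("tier") == "shareable"]

def test_canary_never_leaks():
    results = [
        {"tier": "shareable", "text": "public API notes"},
        {"tier": "confidential", "text": CANARY},
        {"text": CANARY},  # no tier at all: must also be blocked
    ]
    visible = filter_for_slack(results)
    assert all(CANARY not in r["text"] for r in visible), "leak detected"

test_canary_never_leaks()
print("canary check passed: nothing confidential reached the Slack path")
```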
The Brain runs entirely on my computer. No cloud dependencies. Open source. It's the foundation everything else builds on, because it solves the biggest weakness AI coding tools have today: they forget everything between sessions. The Brain gives them a memory that's mine, that I control, and that grows continuously as I work.
Archon is the most ambitious piece. It's a "workflow engine for AI" — like a recipe book where each recipe is a YAML file describing a sequence of steps the AI should take. Some steps are deterministic (run a test, push a branch, make a git commit) and some steps are AI-driven (plan a feature, review a PR, write code).
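To make that concrete, here is a toy recipe in the same spirit. The field names are my guesses, not Archon's actual schema; the point is the mix of deterministic and AI-driven steps:

```yaml
# Illustrative only — field names are guesses, not Archon's real schema.
name: fix-small-bug
steps:
  - id: plan
    kind: ai                 # AI-driven: explore the repo, reason about the fix
    prompt: "Read the issue, explore the codebase, write a short fix plan."
  - id: implement
    kind: ai
    prompt: "Apply the plan in the isolated working copy."
  - id: test
    kind: command            # deterministic: a plain shell command, no AI
    run: "swift test"
  - id: open-pr
    kind: command
    run: "gh pr create --draft --fill"
```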
Where Archon shines is autonomous delegation. I don't have to babysit it. Here's the workflow I use most:
@archon fix this → Archon does the rest
"The rest" means: Archon receives the GitHub webhook, classifies whether it's a bug or feature, does web research on relevant libraries, explores my codebase to understand context, writes an implementation plan, executes the plan in an isolated copy of my repo (so my main code is never touched), runs the tests, opens a draft pull request, runs its own comprehensive review with multiple AI agents, fixes any review findings it can address, and finally posts a comment back on the issue with the PR link.
I tested this end-to-end while writing this document. A real bug got reported, I commented @archon fix this, and ~12 minutes later a draft PR appeared with a clean, accurate fix and a detailed report. Two commits — one for the implementation, one for review fixes. Zero intervention from me during those 12 minutes.
Archon ships with 20+ pre-built workflows — fix-github-issue is just one. There are workflows for code review, refactoring, conflict resolution, PR validation, feature development, and more. I can also write my own workflows in YAML for processes that are unique to how I work.
Archon is great for well-scoped tasks with clear specs. It's not great for exploratory work where I'm still figuring out what I want. For that, I drive interactively using the commands in my workflow repo (next section). The rule of thumb: delegate when you'd be able to write a clear ticket for a contractor; drive interactively when you wouldn't.
Before I describe what's in this repo: the bulk of the slash-commands and the entire mental model come from Cole Medin's Dynamous Agentic Coding course, co-created with Rasmus Widing. I'm an integrator and a customizer — they did the hard work of designing the commands. If anything in this section sounds clever, the cleverness is almost certainly theirs. Course details and signup: community.dynamous.ai.
This is the glue. It's a folder on my Mac that contains:
- Core session commands: /prime (load project context at session start), /plan-feature (produce a deep implementation plan), /execute (run that plan task-by-task), /code-review (review what was just built), /validate (run tests and linters), /commit (make a clean git commit).
- Archon wrappers: /archon/fix-issue 42 or /archon/pr-review 17 — so I can trigger Archon from inside my coding session without leaving the terminal.
- Brain commands: /brain-onboard (register a new project with the Brain) and /brain-query (ask the Brain something from terminal).
- A template CLAUDE.md file that tells the AI how to behave on a per-project basis: which tech stack, which conventions, when to invoke Archon, when to consult the Brain, what sensitivity rules to follow (see the sketch after this list).

When I start a new project, I run a single slash command — /init-workflow — from inside the new directory. It symlinks the .claude/commands/, .claude/agents/, and .claude/skills/ folders from my workflow repo into the project, and copies the CLAUDE.md template fresh so I can customize it for that project's tech stack and conventions. Every project gets the same baseline, but each one is individually tuned.
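Here is a trimmed sketch of what one of those per-project CLAUDE.md files might open with (contents are illustrative, not my actual file):

```markdown
# CLAUDE.md — project rules (illustrative)

## Stack
Native iOS app. Swift, SwiftUI, Swift Package Manager.

## Conventions
- Run /validate before every /commit.
- New features start from /plan-feature; never improvise large changes.

## Delegation
- Well-scoped, ticket-shaped tasks: hand off via /archon/fix-issue.
- Before re-implementing a pattern, check prior art with /brain-query.

## Sensitivity
This project is confidential: local queries only, never surfaced in Slack.
```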
The symlink approach is the key trick. When I improve a command in my workflow repo, every project I've ever onboarded inherits the improvement automatically — no chasing down copies, no re-running setup, no version drift. Meanwhile, CLAUDE.md stays as an independent copy in each project so edits don't leak between them.
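In shell terms, the whole trick is roughly this (paths and file layout are hypothetical; the real command lives in my workflow repo):

```bash
#!/usr/bin/env bash
# Roughly what /init-workflow does — illustrative, not the actual script.
WORKFLOW="$HOME/workflow-repo"        # hypothetical location of the workflow repo

mkdir -p .claude
for dir in commands agents skills; do
  # Symlink: every project points at the one shared copy,
  # so improvements to commands propagate everywhere instantly.
  ln -sfn "$WORKFLOW/.claude/$dir" ".claude/$dir"
done

# Copy, not symlink: CLAUDE.md is tuned per project,
# so edits must never leak between projects.
cp "$WORKFLOW/templates/CLAUDE.md" ./CLAUDE.md
```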
The core command library — /prime, /plan-feature, /execute, /code-review, /code-review-fix, /validate, /commit, /create-prd, /init-project, /rca, /implement-fix, the code-reviewer subagent, and the sectioned CLAUDE.md template — comes directly from the Dynamous Agentic Coding course by Cole Medin and Rasmus Widing. I'm using the most refined versions of their commands with light project-specific tweaks. Cole has spent serious time on these — the planning command alone is around 470 lines of careful prompt engineering, with five explicit analysis phases and a full plan template that produces context-rich output an execution agent can implement in one pass.
My own contribution is mostly the integration layer on top, plus a handful of focused tweaks. Here's specifically what I added and why:
- The Module 10 variants of /prime, /plan-feature, and /execute rather than the course's canonical .agents/ versions. Why: the Module 10 variants already have Archon RAG integration hooks baked into the prompts. Choosing them gave me Archon-awareness from day one without me having to rewrite the planning or execution prompts.
- Archon wrapper commands: /archon/fix-issue, /archon/pr-review, /archon/create-issue, /archon/feature-dev. Why: Archon runs via its own CLI by default. I wrote thin slash-command wrappers so I can trigger Archon workflows from inside Claude Code without switching to a terminal. Same end behavior, less friction.
- Brain commands: /brain-onboard, /brain-query, /brain-status. Why: the course doesn't include the Brain at all (the Brain is a separate project). These commands integrate the federated knowledge graph into the same slash-command palette as everything else, so I never have to context-switch to remember which tool to reach for.
- The /init-workflow command. A one-shot setup command that symlinks the right folders into any new project. Why: the course's adoption procedure is manual copy-paste of files. I wanted something I could run with one keystroke, and I wanted improvements I make to my commands later to propagate automatically to every project I've ever onboarded — without me chasing down stale copies.
- A stack-agnostic /init-project. Why: the course's /init-project is FastAPI-specific (because the course's example app uses FastAPI). I work in Swift and Kotlin on native mobile apps, so I rewrote it with placeholders for any tech stack.

Everything else — the PIV loop itself, the planning rigor, the validation pyramid, the original command structure, the careful prompt engineering — is Cole and Rasmus's design. Without their groundwork the workflow repo would be a shadow of what it is.
Here's the picture I keep in my head:
The Workflow Repo is the cockpit. The Brain is the memory. Archon is the autopilot. GitHub and Slack are the steering wheel — they're how I trigger things from outside the cockpit.
When I start a brand-new project, I run /init-workflow once to wire the workflow into that directory. Then in a typical session I might:
1. Run /prime — the AI reads the project's CLAUDE.md, the relevant reference docs, and gets oriented.
2. Run /plan-feature to produce a detailed plan for what I want to build today. The AI uses what it learned from the Brain to make better suggestions.
3. Run /execute to implement it step by step. I watch and approve each step.
4. Run /validate to run tests and linters.
5. Run /commit to make a clean git commit with a structured message.

Later that evening, I notice a bug while testing. I:
1. File a quick GitHub issue describing the bug.
2. Comment @archon fix this on the issue.

Meanwhile, the Brain has been indexing all that new work in the background. Tomorrow's session will know about today's changes automatically.
Here are some concrete scenarios from real workdays:
I open my main app project in my AI tool. I run /prime, then I ask: "I want to add an offline mode that syncs back when the connection returns. Have I ever built something like this before across my projects?" The AI queries the Brain, which checks my construction app, my home page, my course materials. It finds a sync pattern I used in a different project two years ago and pulls the relevant files into the conversation. I use that pattern as a starting point.
I have a backlog of small GitHub issues — typos in docs, a missing tooltip, a styling tweak. I comment @archon fix this on three of them in a row from my phone while waiting at a school pickup. By the time I get home, three draft PRs are waiting. I review and merge.
I'm thinking about a project on the weekend. I message my Brain bot in Slack: "what were the design decisions in my home page's authentication?" The Brain answers from my phone, with file citations, summarized in plain English. I don't have to open my laptop.
I have an idea for a feature I want built but I'm not going to do it myself tonight. I file a GitHub issue describing the feature carefully and comment @archon build this on it. Go to bed. Wake up to a draft PR.
Most of the magic is just configuration. None of this requires custom code on my end — it's about connecting things that already exist: a GitHub webhook pointed at Archon, a Slack app pointed at the Brain, a launchd agent for the nightly refresh, and symlinks wiring the workflow repo into each project.
The total amount of code I wrote myself to make all this work is small — mostly configuration files, a few shim commands that just call the upstream tools, and the template CLAUDE.md with my project rules. The hard work is in Archon, the Brain, and the AI tool itself. I just connected the pieces.
This wouldn't be a useful field report without the downsides. Here's what I gave up to get here:
| What it costs | What you get |
|---|---|
| Subscription usage allowance, not API spending. Archon is configured to use my Claude subscription (via global auth from claude /login) instead of pay-per-token API credits. That means I never see a surprise bill — but each workflow run consumes a real chunk of my 5-hour-window allowance, because Archon uses Claude Opus with extended context across many phases (research → plan → implement → review → self-fix). On Claude Max I can run several big workflows per window; on Claude Pro fewer. If I cap the window, Archon just slows down or queues; it doesn't keep spending. | For fixes I'd otherwise spend 30+ minutes on, the trade is great. For trivial fixes I do them by hand to preserve allowance for the bigger work. The big win is predictable cost — flat monthly subscription instead of variable per-token billing. |
| Setup time. The first time I set everything up, it took me a few hours over multiple sessions — including a few wrong turns and recoveries. | Once set up, the system runs itself. Subsequent projects take ~15 minutes to onboard into the workflow. |
| A learning curve. Even though I don't write much code myself, I had to learn enough about how the pieces connect to debug when something didn't work. | I now understand the tools well enough to extend them as my needs evolve. |
| Trust in autonomous systems. Letting Archon write code while I'm not watching takes a leap of faith. | Archon's self-review and validation steps catch most mistakes. Anything that gets through can be reviewed before merge — Archon opens PRs as drafts by default, never auto-merges. |
| Maintenance. Three different open-source projects mean three different update cycles to track. | Each one is in active development by people who care. Updates have been mostly painless. |
The bigger philosophical trade: I'm now more dependent on these specific tools than I was when I was just typing prompts into a chat window. If any one of them goes away, parts of my workflow break. I accept that — they're all open source, so worst case I can keep running older versions, but there's a real risk.
If you wanted to build something similar, the rough shopping list is:
- The three tools themselves: graphify for the Brain, Archon for the autonomous workflows, and a command library for the workflow repo (Cole Medin's Dynamous Agentic Coding course is where mine comes from).
- A Claude subscription (authenticated via claude /login, recommended — no per-token billing) or an Anthropic API key. The Brain separately uses a small amount of Claude Haiku allowance for the Slack-side prose summarization.
- A GitHub repo with webhooks enabled, and optionally a Slack workspace, so you can trigger Archon and query the Brain from outside the terminal.
- A one-shot setup command (like my /init-workflow) that symlinks the right folders into any new project — so you're not copy-pasting files every time, and so improvements you make to commands propagate to all your existing projects automatically.

For someone non-technical, the realistic minimum to get the autonomous-fix-from-issue loop working is probably 4-6 hours of focused setup time, including a couple of debugging detours. You can get the Brain running on its own in under an hour.
Don't try to build everything at once. Start with just one piece — most likely Archon, because it gives the most visible payoff. Get the GitHub-webhook-triggered fix-issue loop working on one repo. Live with it for a week. Then add the workflow repo. Then add the Brain. The whole stack is only valuable in context; piecemeal adoption gives you time to understand what each piece is doing.
None of this is original work on my part. I'm an integrator, not a builder of these underlying tools. Credit where it's due:

- Graphify: the open-source engine underneath the Brain, doing the hard part of parsing code, markdown, PDFs, and transcripts into a knowledge graph.
- Archon: the workflow engine behind the autonomous fix-from-issue loop.
- Cole Medin and Rasmus Widing: their Dynamous Agentic Coding course (community.dynamous.ai) is the source of the command library, the templates, and the whole mental model of the workflow repo.
If you're in the Dynamous community, I'd love feedback on this writeup, especially from anyone running a similar stack. And if you do try to set something like this up yourself, please share what you find — the more of us working through these patterns, the better the patterns get.