<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>AI on Alfero Chingono</title><link>https://www.chingono.com/tags/ai/</link><description>Recent content in AI on Alfero Chingono</description><generator>Hugo -- gohugo.io</generator><language>en-us</language><lastBuildDate>Fri, 03 Apr 2026 20:02:39 -0400</lastBuildDate><atom:link href="https://www.chingono.com/tags/ai/index.xml" rel="self" type="application/rss+xml"/><item><title>How I Run SonarQube in My Own CI Pipeline (And Let AI Fix What It Finds)</title><link>https://www.chingono.com/blog/2026/03/05/how-i-run-sonarqube-in-my-own-ci-pipeline-and-let-ai-fix-what-it-finds/</link><pubDate>Thu, 05 Mar 2026 09:00:00 +0000</pubDate><guid>https://www.chingono.com/blog/2026/03/05/how-i-run-sonarqube-in-my-own-ci-pipeline-and-let-ai-fix-what-it-finds/</guid><description>&lt;p&gt;I wrote in 2024 about &lt;a class="link" href="https://www.chingono.com/blog/2024/09/05/automating-owasp-scan-reports-in-azure-devops/" &gt;automating OWASP scan reports in Azure DevOps&lt;/a&gt; because I wanted security scanning to become part of the delivery flow instead of an afterthought.&lt;/p&gt;
&lt;p&gt;This post is the next step in that same direction.&lt;/p&gt;
&lt;p&gt;The thing I wanted from SonarQube was not another dashboard full of guilt. I wanted a loop that could actually create work, route it, fix it, and come back cleaner on the next scan.&lt;/p&gt;
&lt;p&gt;That changed the design completely.&lt;/p&gt;
&lt;h2 id="the-real-goal-was-not-run-sonarqube"&gt;The real goal was not &amp;ldquo;run SonarQube&amp;rdquo;
&lt;/h2&gt;&lt;p&gt;Running SonarQube is easy.&lt;/p&gt;
&lt;p&gt;Turning findings into a useful engineering loop is the hard part.&lt;/p&gt;
&lt;p&gt;The pattern I have found most practical looks like this:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;run the scan on a schedule&lt;/li&gt;
&lt;li&gt;translate findings into issues with enough structure to act on&lt;/li&gt;
&lt;li&gt;let AI or agents handle the obvious remediation work&lt;/li&gt;
&lt;li&gt;keep human review as the merge gate&lt;/li&gt;
&lt;li&gt;rescan and repeat&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;That is what I have been doing across FireFly and CueMarshal.&lt;/p&gt;
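&lt;p&gt;Step 5 only works if the loop is idempotent: a rescan must not re-file issues that already exist. A minimal sketch of that deduplication step, assuming each finding carries a stable key that gets embedded in the issue it creates (an illustrative convention, not the actual FireFly code):&lt;/p&gt;

```python
# Sketch: keep the loop idempotent across rescans by deduplicating on a
# stable finding key. The shape of the finding dicts and the convention of
# embedding the key in issue titles are assumptions for illustration.

def new_findings(findings, existing_issue_titles):
    """Return only findings whose key is not already referenced by an issue."""
    fresh = []
    for finding in findings:
        key = finding["key"]
        if not any(key in title for title in existing_issue_titles):
            fresh.append(finding)
    return fresh
```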
&lt;h2 id="the-firefly-version-temporary-sonarqube-durable-issues"&gt;The FireFly version: temporary SonarQube, durable issues
&lt;/h2&gt;&lt;p&gt;In &lt;a class="link" href="https://github.com/achingono/firefly" target="_blank" rel="noopener"
&gt;FireFly&lt;/a&gt;, the workflow is intentionally self-contained.&lt;/p&gt;
&lt;p&gt;The scheduled GitHub Action spins up a SonarQube Community service container, sets the admin password, creates the project, generates an analysis token, runs the scanner in Docker, and then uses the SonarQube API to fetch open issues.&lt;/p&gt;
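&lt;p&gt;The fetch step at the end is small. SonarQube exposes unresolved findings through its Web API at &lt;code&gt;/api/issues/search&lt;/code&gt;; a sketch of building that query, where the base URL and project key are placeholders:&lt;/p&gt;

```python
# Sketch of the "fetch open issues" step against SonarQube's Web API.
# The /api/issues/search endpoint and its componentKeys/resolved parameters
# are standard SonarQube; the URL and project key below are placeholders.
from urllib.parse import urlencode

def issues_search_url(base_url, project_key, page_size=500):
    """Build the query for open (unresolved) issues in one project."""
    params = urlencode({
        "componentKeys": project_key,
        "resolved": "false",
        "ps": page_size,  # SonarQube caps page size at 500
    })
    return f"{base_url}/api/issues/search?{params}"
```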
&lt;p&gt;From there, the workflow does something I think is more useful than just failing the pipeline: it turns findings into &lt;strong&gt;GitHub issues with meaningful labels&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;The labels encode both issue type and severity:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;sonar&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;sonar: bug&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;sonar: vulnerability&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;sonar: security hotspot&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;sonar: blocker&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;sonar: critical&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;sonar: major&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;That small step matters a lot. Once the findings live as first-class issues in the repo, they stop being hidden inside a scan report and start participating in the normal engineering workflow.&lt;/p&gt;
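&lt;p&gt;The label scheme above reduces to a small pure mapping. The label strings come from the workflow; the function shape here is illustrative rather than FireFly&amp;rsquo;s actual code:&lt;/p&gt;

```python
# Sketch of the label scheme as a pure mapping from a SonarQube issue's
# type and severity to GitHub labels. Illustrative, not the FireFly source.

def labels_for(issue):
    """Derive GitHub labels from a SonarQube issue's type and severity."""
    type_names = {
        "BUG": "bug",
        "VULNERABILITY": "vulnerability",
        "SECURITY_HOTSPOT": "security hotspot",
    }
    labels = ["sonar"]
    if issue["type"] in type_names:
        labels.append(f"sonar: {type_names[issue['type']]}")
    labels.append(f"sonar: {issue['severity'].lower()}")
    return labels
```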
&lt;p&gt;The FireFly workflow also keeps the body format clean: key, severity, type, rule, file, line, and the actual message. That makes the issue understandable without forcing someone to click back into SonarQube every time.&lt;/p&gt;
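&lt;p&gt;That body format can be sketched as a template. The field names follow SonarQube&amp;rsquo;s issue payload (&lt;code&gt;component&lt;/code&gt; is the file); the exact layout here is an assumption:&lt;/p&gt;

```python
# Sketch of the issue body described above. Field names mirror SonarQube's
# issue payload (key, severity, type, rule, component, line, message); the
# Markdown layout itself is an assumption, not FireFly's exact template.

def issue_body(finding):
    file_path = finding.get("component", "unknown")
    line = finding.get("line", "n/a")
    return "\n".join([
        f"**Key:** {finding['key']}",
        f"**Severity:** {finding['severity']}",
        f"**Type:** {finding['type']}",
        f"**Rule:** {finding['rule']}",
        f"**File:** {file_path}",
        f"**Line:** {line}",
        "",
        finding["message"],
    ])
```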
&lt;h2 id="the-cuemarshal-version-findings-re-enter-the-agent-loop"&gt;The CueMarshal version: findings re-enter the agent loop
&lt;/h2&gt;&lt;p&gt;CueMarshal takes the pattern further.&lt;/p&gt;
&lt;p&gt;There, SonarQube is not just a quality gate. It is a signal source for the self-improvement system.&lt;/p&gt;
&lt;p&gt;The scan runs on a schedule, the quality gate is checked, and when issues remain, they are picked up by the self-improvement workflow. That workflow runs deterministic scanners, produces a findings JSON file, and lets AI select the high-value, automation-friendly items to turn into actual repository work.&lt;/p&gt;
&lt;p&gt;At that point the flow becomes very CueMarshal-like:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;finding becomes issue&lt;/li&gt;
&lt;li&gt;issue gets labels such as &lt;code&gt;self-improvement&lt;/code&gt; and &lt;code&gt;source:sonar&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;developer agent works the task&lt;/li&gt;
&lt;li&gt;reviewer agent reviews it&lt;/li&gt;
&lt;li&gt;human still controls the merge&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;That is the part I care about most. Static analysis becomes part of an operational loop instead of a reporting loop.&lt;/p&gt;
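&lt;p&gt;In CueMarshal the selection itself is AI-driven, but the shape of the step is easy to sketch as a deterministic pre-filter over the findings JSON. The heuristic here (severe enough to matter, scoped to a concrete line) is hypothetical:&lt;/p&gt;

```python
# Hypothetical pre-filter for "high-value, automation-friendly" candidates.
# In CueMarshal this selection is AI-driven; this sketch only shows the shape
# of the step: filter by severity, require a concrete location, cap the batch.

def automation_candidates(findings, max_items=5):
    severe = {"BLOCKER", "CRITICAL", "MAJOR"}
    picked = [
        f for f in findings
        if f["severity"] in severe and f.get("line") is not None
    ]
    # Most severe first, then stable by key so batches are reproducible
    order = ["BLOCKER", "CRITICAL", "MAJOR"]
    picked.sort(key=lambda f: (order.index(f["severity"]), f["key"]))
    return picked[:max_items]
```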
&lt;h2 id="what-ai-actually-fixed"&gt;What AI actually fixed
&lt;/h2&gt;&lt;p&gt;This pattern became more convincing to me once I could see it in the commit history instead of just in a diagram.&lt;/p&gt;
&lt;p&gt;In FireFly, the SonarQube-driven fixes moved through recognizable stages:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;critical auth and data exposure issues&lt;/li&gt;
&lt;li&gt;medium-severity issues in the LLM, tracer, and execution paths&lt;/li&gt;
&lt;li&gt;blocker and critical tracer problems&lt;/li&gt;
&lt;li&gt;remaining major issues in non-UI files&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In CueMarshal, the same loop showed up in a different form:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;bug-class findings resolved&lt;/li&gt;
&lt;li&gt;cognitive-complexity hotspots refactored&lt;/li&gt;
&lt;li&gt;scan-flow issues fixed so the SonarQube pipeline itself became more reliable&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;That is the detail that made the whole approach feel real to me. The AI was not &amp;ldquo;doing security&amp;rdquo; in some theatrical sense. It was participating in a bounded remediation loop with concrete input, reviewable output, and a cleaner next scan.&lt;/p&gt;
&lt;h2 id="what-i-still-keep-human"&gt;What I still keep human
&lt;/h2&gt;&lt;p&gt;I do not think all static analysis findings should be auto-fixed blindly.&lt;/p&gt;
&lt;p&gt;Some changes affect security-sensitive behavior. Some touch core orchestration logic. Some need architectural judgment more than mechanical cleanup.&lt;/p&gt;
&lt;p&gt;That is why I still care so much about review gates, protected areas, and explicit pull requests. AI can do triage. AI can do a surprising amount of repair work. But the system becomes trustworthy only when people retain approval authority over the consequential parts.&lt;/p&gt;
&lt;p&gt;This is the same design instinct behind CueMarshal more broadly: automate aggressively, but make the control points obvious.&lt;/p&gt;
&lt;h2 id="why-i-like-this-pattern"&gt;Why I like this pattern
&lt;/h2&gt;&lt;p&gt;The more repositories I maintain, the less patience I have for passive quality tooling.&lt;/p&gt;
&lt;p&gt;If a scan only tells me what is wrong, it is useful.
If a scan creates the next actionable task, it is much more useful.
If that task can be routed through an AI-assisted workflow and still land in a human-reviewed PR, then the tool has become part of delivery rather than commentary on delivery.&lt;/p&gt;
&lt;p&gt;That is the threshold I care about now.&lt;/p&gt;
&lt;p&gt;I still think DAST and pipeline security automation matter deeply; that earlier OWASP post still reflects that. But SonarQube plus an AI remediation loop feels like the next generation of the same idea: make quality signals operational, not ornamental.&lt;/p&gt;
&lt;p&gt;If you want the broader architecture around this, &lt;a class="link" href="https://www.chingono.com/blog/2025/08/28/designing-multi-agent-systems-lessons-from-building-an-8-agent-engineering-orchestra/" &gt;Designing Multi-Agent Systems: Lessons from Building an 8-Agent Engineering Orchestra&lt;/a&gt; covers the orchestration side, and &lt;a class="link" href="https://www.chingono.com/blog/2025/02/15/why-i-started-building-my-own-devops-platform-and-what-i-learned/" &gt;Why I Started Building My Own DevOps Platform&lt;/a&gt; covers the bigger motivation.&lt;/p&gt;
&lt;p&gt;References:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a class="link" href="https://github.com/achingono/firefly/blob/main/.github/workflows/scheduled-scan.yml" target="_blank" rel="noopener"
&gt;FireFly scheduled SonarQube workflow&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://github.com/achingono/firefly/blob/main/sonar-project.properties" target="_blank" rel="noopener"
&gt;FireFly SonarQube project configuration&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://github.com/cuemarshal/cuemarshal/blob/main/docs/operations/self-improvement.md" target="_blank" rel="noopener"
&gt;CueMarshal self-improvement design&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://github.com/cuemarshal/cuemarshal/blob/main/docs/architecture/overview.md" target="_blank" rel="noopener"
&gt;CueMarshal architecture overview&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;</description></item><item><title>MCP in Practice: What Anthropic's Model Context Protocol Actually Means for Developers</title><link>https://www.chingono.com/blog/2025/03/20/mcp-in-practice-what-anthropics-model-context-protocol-actually-means-for-developers/</link><pubDate>Thu, 20 Mar 2025 09:00:00 +0000</pubDate><guid>https://www.chingono.com/blog/2025/03/20/mcp-in-practice-what-anthropics-model-context-protocol-actually-means-for-developers/</guid><description>&lt;p&gt;When Anthropic announced the &lt;a class="link" href="https://www.anthropic.com/news/model-context-protocol" target="_blank" rel="noopener"
&gt;Model Context Protocol&lt;/a&gt;, the most interesting part to me was not &amp;ldquo;LLMs can call tools.&amp;rdquo; We already knew that. The interesting part was that someone was finally trying to standardize the connection.&lt;/p&gt;
&lt;p&gt;That may sound like a small distinction, but it is the difference between a clever demo and an architecture you can actually build on.&lt;/p&gt;
&lt;p&gt;For developers, MCP matters because it turns tool access into something more portable, more inspectable, and less bespoke. Instead of wiring every model to every internal system in a slightly different way, you get a shared protocol for secure, two-way connections between AI clients and the systems where work actually lives.&lt;/p&gt;
&lt;p&gt;In other words: fewer one-off connectors, fewer weird wrappers, and less glue code pretending to be strategy.&lt;/p&gt;
&lt;h2 id="the-real-problem-mcp-solves"&gt;The real problem MCP solves
&lt;/h2&gt;&lt;p&gt;Without a protocol, most AI integrations end up with the same shape:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;custom JSON formats&lt;/li&gt;
&lt;li&gt;hand-rolled function schemas&lt;/li&gt;
&lt;li&gt;transport logic mixed into business logic&lt;/li&gt;
&lt;li&gt;a different adapter for every new client&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;You can absolutely ship systems that way. Many people already have. But you pay for it later in duplication, debugging, and lock-in.&lt;/p&gt;
&lt;p&gt;Anthropic&amp;rsquo;s framing resonated with me because it describes a problem I had already been running into while building CueMarshal. I did not need agents that could merely &amp;ldquo;use tools.&amp;rdquo; I needed a stable way for different parts of the system to use the &lt;strong&gt;same tools&lt;/strong&gt; in different contexts.&lt;/p&gt;
&lt;p&gt;That is where MCP becomes practical.&lt;/p&gt;
&lt;h2 id="what-it-changed-in-my-own-thinking"&gt;What it changed in my own thinking
&lt;/h2&gt;&lt;p&gt;In CueMarshal, I ended up with three MCP servers:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;a &lt;strong&gt;Gitea MCP server&lt;/strong&gt; for issues, pull requests, repositories, workflows, and search&lt;/li&gt;
&lt;li&gt;a &lt;strong&gt;Conductor MCP server&lt;/strong&gt; for task coordination and agent state&lt;/li&gt;
&lt;li&gt;a &lt;strong&gt;System MCP server&lt;/strong&gt; for costs, runners, and health&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;That split was not arbitrary. It reflected a design choice: organize tool access around bounded responsibilities instead of dumping everything into one giant catch-all toolbox.&lt;/p&gt;
&lt;p&gt;Even more important, the same MCP server code supports &lt;strong&gt;two transports&lt;/strong&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;stdio&lt;/strong&gt; for agent runners&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;HTTP/SSE&lt;/strong&gt; for the long-running chat/orchestration layer&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This is the part I think many developers will underestimate. The value is not just that the model can invoke a tool. The value is that your tool layer stops being trapped inside one execution model.&lt;/p&gt;
&lt;p&gt;The CueMarshal runners can spawn MCP servers directly as child processes. The Conductor can hold long-lived connections to those same tool surfaces over the network. Same capability, different runtime, no duplicated tool logic.&lt;/p&gt;
&lt;p&gt;That is not just elegant. It is operationally useful.&lt;/p&gt;
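&lt;p&gt;The principle is worth seeing in miniature. This is not the MCP SDK, just an illustration of the separation it enforces: tool handlers registered once, with any transport (a stdio loop, an HTTP handler) reduced to parsing a request and calling the same dispatcher. All names here are hypothetical:&lt;/p&gt;

```python
# Illustration of the transport-agnostic tool layer, not the MCP SDK.
# Handlers are registered once; a stdio runner and an HTTP server would
# both end up calling dispatch() with the same request shape.
import json

TOOLS = {}

def tool(name):
    def register(fn):
        TOOLS[name] = fn
        return fn
    return register

@tool("get_issue")
def get_issue(args):
    # Stand-in for a real Gitea API call
    return {"number": args["number"], "state": "open"}

def dispatch(request_json):
    """Shared entry point: every transport parses a request and calls this."""
    request = json.loads(request_json)
    handler = TOOLS[request["tool"]]
    return json.dumps(handler(request.get("args", {})))
```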
&lt;h2 id="mcp-is-really-about-interface-discipline"&gt;MCP is really about interface discipline
&lt;/h2&gt;&lt;p&gt;One thing building AI systems teaches you very quickly is that &amp;ldquo;prompting&amp;rdquo; gets too much credit for problems that are really interface problems.&lt;/p&gt;
&lt;p&gt;If the tool schema is vague, the model will behave vaguely.&lt;/p&gt;
&lt;p&gt;If the permissions are broad, the behavior will feel risky.&lt;/p&gt;
&lt;p&gt;If the transport is brittle, the whole system looks flaky even when the reasoning is fine.&lt;/p&gt;
&lt;p&gt;What I like about MCP is that it nudges teams toward better engineering habits:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Typed tools instead of implied behavior&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Separation between protocol and implementation&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Reusable tool layers across multiple clients&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Clearer permission boundaries&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;That discipline matters even if you never use Anthropic&amp;rsquo;s stack directly.&lt;/p&gt;
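&lt;p&gt;&amp;ldquo;Typed tools instead of implied behavior&amp;rdquo; can be made concrete in a few lines: validate arguments against a declared schema before the tool ever runs. The schema shape below is a simplified stand-in for JSON Schema, not MCP&amp;rsquo;s actual wire format:&lt;/p&gt;

```python
# Simplified stand-in for JSON Schema validation (not MCP's wire format):
# reject a tool call before it runs if required arguments are missing or
# an argument has the wrong type.

def validate_args(schema, args):
    """Return a list of problems; an empty list means the call is well-typed."""
    problems = []
    for name, expected_type in schema["properties"].items():
        if name in schema.get("required", []) and name not in args:
            problems.append(f"missing required argument: {name}")
        elif name in args and not isinstance(args[name], expected_type):
            problems.append(f"{name} has wrong type")
    return problems
```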
&lt;h2 id="what-developers-should-actually-do-with-it"&gt;What developers should actually do with it
&lt;/h2&gt;&lt;p&gt;My advice is to treat MCP less like a product feature and more like a systems design decision.&lt;/p&gt;
&lt;p&gt;If you are building AI-assisted software delivery, internal automation, or even just richer developer tools, start by asking:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;What are the real systems my assistant needs to access?&lt;/li&gt;
&lt;li&gt;Which of those interactions deserve typed, validated interfaces?&lt;/li&gt;
&lt;li&gt;Which capabilities should be shared across chat, automation, and background agents?&lt;/li&gt;
&lt;li&gt;Where do I want auditability and permission scoping to live?&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;That line of thinking will produce a better architecture whether you adopt MCP tomorrow or not.&lt;/p&gt;
&lt;p&gt;In my own work, it pushed me away from raw &lt;code&gt;curl&lt;/code&gt;-driven integration and toward a universal tool layer. Once I made that shift, a lot of downstream problems became easier: orchestration, reuse, security boundaries, and even explanation. It is easier to trust a system when you can say, very plainly, &amp;ldquo;here are the tools it has, here is what they do, and here is how they are invoked.&amp;rdquo;&lt;/p&gt;
&lt;h2 id="what-mcp-does-not-solve"&gt;What MCP does &lt;strong&gt;not&lt;/strong&gt; solve
&lt;/h2&gt;&lt;p&gt;MCP does not magically make an agent reliable.&lt;/p&gt;
&lt;p&gt;It does not fix poor workflow design.&lt;/p&gt;
&lt;p&gt;It does not remove the need for human review.&lt;/p&gt;
&lt;p&gt;And it definitely does not turn vague prompts into good engineering.&lt;/p&gt;
&lt;p&gt;What it does is give you a cleaner control plane for connecting models to real systems. That is already a meaningful improvement.&lt;/p&gt;
&lt;p&gt;For me, that is why MCP feels important. Not because it adds more AI theater, but because it reduces architectural friction in a place where friction compounds very fast.&lt;/p&gt;
&lt;p&gt;If you are curious how that idea plays out in a larger system, I wrote more about the broader coordination problem in &lt;a class="link" href="https://www.chingono.com/blog/2025/02/15/why-i-started-building-my-own-devops-platform-and-what-i-learned/" &gt;Why I Started Building My Own DevOps Platform&lt;/a&gt; and the orchestration lessons in &lt;a class="link" href="https://www.chingono.com/blog/2025/08/28/designing-multi-agent-systems-lessons-from-building-an-8-agent-engineering-orchestra/" &gt;Designing Multi-Agent Systems: Lessons from Building an 8-Agent Engineering Orchestra&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;References:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a class="link" href="https://www.anthropic.com/news/model-context-protocol" target="_blank" rel="noopener"
&gt;Introducing the Model Context Protocol&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://modelcontextprotocol.io/quickstart" target="_blank" rel="noopener"
&gt;MCP quickstart and specification&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://github.com/cuemarshal/cuemarshal/blob/main/docs/features/mcp-servers/overview.md" target="_blank" rel="noopener"
&gt;CueMarshal MCP server overview&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://github.com/cuemarshal/cuemarshal/blob/main/docs/architecture/overview.md" target="_blank" rel="noopener"
&gt;CueMarshal architecture overview&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;</description></item><item><title>Why I Started Building My Own DevOps Platform (And What I Learned)</title><link>https://www.chingono.com/blog/2025/02/15/why-i-started-building-my-own-devops-platform-and-what-i-learned/</link><pubDate>Sat, 15 Feb 2025 09:00:00 +0000</pubDate><guid>https://www.chingono.com/blog/2025/02/15/why-i-started-building-my-own-devops-platform-and-what-i-learned/</guid><description>&lt;p&gt;For a while, I had the same reaction to most AI-for-software-delivery demos: impressive in a narrow way, but not something I would trust with real work. One tool could write code. Another could summarize a diff. Another could review a pull request. But the hard part of software delivery is rarely one isolated step. It is the handoff between steps.&lt;/p&gt;
&lt;p&gt;That was the itch that eventually pushed me to start building &lt;a class="link" href="https://github.com/cuemarshal/cuemarshal" target="_blank" rel="noopener"
&gt;CueMarshal&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;I did not start with the ambition to build &amp;ldquo;an AI company&amp;rdquo; or some abstract autonomous future. I started because I wanted a more coherent delivery system: one place where a task could move from idea to issue to branch to pull request to review without losing context every time responsibility changed hands.&lt;/p&gt;
&lt;h2 id="the-problem-i-actually-wanted-to-solve"&gt;The problem I actually wanted to solve
&lt;/h2&gt;&lt;p&gt;CI/CD was never the whole problem. In many teams, the pipeline is the most deterministic part of the process. The mess usually lives around it:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;the design decision that only exists in a chat thread&lt;/li&gt;
&lt;li&gt;the issue that says too little&lt;/li&gt;
&lt;li&gt;the reviewer who has to reconstruct intent from commit history&lt;/li&gt;
&lt;li&gt;the documentation that is always &amp;ldquo;we&amp;rsquo;ll do it after&amp;rdquo;&lt;/li&gt;
&lt;li&gt;the growing pile of tools that all know a little, but none of them own the workflow&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;What I wanted was not another dashboard. I wanted a delivery surface that respected how engineering work already happens.&lt;/p&gt;
&lt;p&gt;That led me to a simple conviction: &lt;strong&gt;Git should be the source of truth, not just the storage layer.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;If work already becomes legible through issues, branches, pull requests, labels, and reviews, then the orchestration layer should live there too. Not beside it. Not behind it. Inside it.&lt;/p&gt;
&lt;h2 id="why-i-built-it-myself"&gt;Why I built it myself
&lt;/h2&gt;&lt;p&gt;There were three constraints that mattered to me from day one.&lt;/p&gt;
&lt;p&gt;First, I wanted the system to be &lt;strong&gt;self-hosted&lt;/strong&gt;. A lot of AI tooling assumes you are comfortable sending your code, your process, and your delivery metadata into someone else&amp;rsquo;s black box. Many teams are not. I wanted an approach that made data sovereignty a feature, not an apology.&lt;/p&gt;
&lt;p&gt;Second, I wanted the system to be &lt;strong&gt;role-aware&lt;/strong&gt;. Real software delivery is not &amp;ldquo;one super-agent with a clever prompt.&amp;rdquo; Design, implementation, review, testing, DevOps, and documentation are different jobs. Sometimes one person does multiple jobs, but the jobs are still different. That distinction matters.&lt;/p&gt;
&lt;p&gt;Third, I wanted &lt;strong&gt;human control to remain the final gate&lt;/strong&gt;. I am interested in automation, not surrender. If an AI system cannot work inside a reviewable pull-request workflow, I do not think it is mature enough for serious engineering work.&lt;/p&gt;
&lt;p&gt;Those constraints eventually turned into the shape CueMarshal has now: a conductor service in TypeScript, specialized agents for architecture, development, review, testing, DevOps, docs, and linting, a Git-native workflow in Gitea, and a tool layer built around MCP so the same system can reason over structured interfaces instead of raw shell scripts and ad-hoc API calls.&lt;/p&gt;
&lt;h2 id="the-architecture-came-later-the-principles-came-first"&gt;The architecture came later. The principles came first.
&lt;/h2&gt;&lt;p&gt;Long before the implementation solidified, the design principles were already obvious to me.&lt;/p&gt;
&lt;h3 id="1-git-is-a-better-coordination-layer-than-most-agent-uis"&gt;1. Git is a better coordination layer than most agent UIs
&lt;/h3&gt;&lt;p&gt;An issue is a task. A branch is a workstream. A pull request is a proposal. A review is a decision record. A merge is a controlled state change.&lt;/p&gt;
&lt;p&gt;That sounds almost too obvious to say out loud, but it changed how I thought about the whole problem. Once I stopped treating Git as the place where code merely ends up, and started treating it as the place where engineering decisions become inspectable, the rest of the architecture got much simpler.&lt;/p&gt;
&lt;h3 id="2-specialization-beats-a-do-everything-agent"&gt;2. Specialization beats a &amp;ldquo;do everything&amp;rdquo; agent
&lt;/h3&gt;&lt;p&gt;In CueMarshal, the system is intentionally split into named roles: Marshal for orchestration, Ava for architecture, Dave for implementation, Reese for review, Tess for testing, Devin for DevOps, Dot for docs, and Linton for linting.&lt;/p&gt;
&lt;p&gt;That is not branding for its own sake. It is an operational choice.&lt;/p&gt;
&lt;p&gt;The moment one agent tries to be planner, coder, reviewer, tester, and documentarian all at once, you lose clarity. You also lose accountability. Specialization makes prompts sharper, tool permissions narrower, and outputs easier to judge.&lt;/p&gt;
&lt;h3 id="3-tool-contracts-matter-more-than-prompt-cleverness"&gt;3. Tool contracts matter more than prompt cleverness
&lt;/h3&gt;&lt;p&gt;One of the biggest lessons from building CueMarshal is that the quality of an agentic system is heavily constrained by the quality of its interfaces.&lt;/p&gt;
&lt;p&gt;If an agent is forced to improvise around loosely structured APIs, fragile shell commands, or browser automation for tasks that should be typed and validated, the system becomes harder to trust. This is one reason MCP clicked for me so quickly later on: it gave a clean shape to something I already knew was essential.&lt;/p&gt;
&lt;p&gt;Good tool contracts do not just help the model. They help the human operator understand what the system is even allowed to do.&lt;/p&gt;
&lt;h3 id="4-stateless-workers-are-a-feature-not-a-bug"&gt;4. Stateless workers are a feature, not a bug
&lt;/h3&gt;&lt;p&gt;CueMarshal&amp;rsquo;s runners are intentionally stateless. They reconstruct context from the repository, the issue, the pull request, and the tool layer every time.&lt;/p&gt;
&lt;p&gt;That may sound less magical than the &amp;ldquo;persistent AI teammate&amp;rdquo; narrative, but it is much easier to reason about. It scales better. It fails more cleanly. And it produces a better audit trail.&lt;/p&gt;
&lt;p&gt;In practice, that has made me more skeptical of systems that depend on hidden memory to feel smart.&lt;/p&gt;
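&lt;p&gt;The stateless pattern is simple enough to sketch: every run rebuilds its context from durable sources rather than carrying memory forward. The fetcher arguments are injected stand-ins for real Git and issue-tracker lookups:&lt;/p&gt;

```python
# Sketch of the stateless-runner idea: no hidden memory, just reconstruction
# from durable sources on every run. The fetchers are injected stand-ins for
# real Git and issue-tracker lookups, not CueMarshal's actual interfaces.

def build_context(issue_number, fetch_issue, fetch_branch, fetch_pr):
    """Reconstruct everything a run needs from the repo and the tracker."""
    return {
        "issue": fetch_issue(issue_number),
        "branch": fetch_branch(issue_number),
        "pull_request": fetch_pr(issue_number),
    }
```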
&lt;h3 id="5-human-control-is-product-design"&gt;5. Human control is product design
&lt;/h3&gt;&lt;p&gt;The more I worked on this, the more convinced I became that &amp;ldquo;human in the loop&amp;rdquo; is not enough as a slogan. It has to be built into the workflow itself.&lt;/p&gt;
&lt;p&gt;That is why I prefer issue-driven execution, reviewable pull requests, typed tools, explicit handoffs, and merge control. Those are not bureaucratic constraints. They are the difference between a system that can support real engineering and a system that is only good for demos.&lt;/p&gt;
&lt;h2 id="what-i-learned-from-building-in-public"&gt;What I learned from building in public
&lt;/h2&gt;&lt;p&gt;The most useful part of this project has not been proving that agents can write code. We already knew that. The useful part has been learning where coordination breaks, where trust gets earned, and what kinds of structure make AI assistance actually usable.&lt;/p&gt;
&lt;p&gt;It also made one thing clearer for me: the next layer of software delivery is not &amp;ldquo;more CI/CD.&amp;rdquo; It is better orchestration around the work humans and machines are already doing together.&lt;/p&gt;
&lt;p&gt;That is the reason I started building CueMarshal, and it is still the reason I keep working on it.&lt;/p&gt;
&lt;p&gt;If you want the more technical follow-up, I wrote about &lt;a class="link" href="https://www.chingono.com/blog/2025/03/20/mcp-in-practice-what-anthropics-model-context-protocol-actually-means-for-developers/" &gt;what MCP actually changed for developers&lt;/a&gt; and the coordination lessons from &lt;a class="link" href="https://www.chingono.com/blog/2025/08/28/designing-multi-agent-systems-lessons-from-building-an-8-agent-engineering-orchestra/" &gt;building an eight-agent engineering orchestra&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;References:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a class="link" href="https://github.com/cuemarshal/cuemarshal" target="_blank" rel="noopener"
&gt;CueMarshal repository&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://github.com/cuemarshal/cuemarshal/blob/main/docs/architecture/overview.md" target="_blank" rel="noopener"
&gt;CueMarshal architecture overview&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://github.com/cuemarshal/cuemarshal/blob/main/docs/features/agents/overview.md" target="_blank" rel="noopener"
&gt;CueMarshal agent profiles&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;</description></item></channel></rss>