RelayPlane, shipped solo with the pipeline
RelayPlane is my open-source AI proxy. Every day I run the same supervised AI engineering pipeline I sell to clients on RelayPlane itself. This is what one senior engineer plus the pipeline produces on their own product, not a client case study.
RelayPlane is mine: MIT-licensed, public on GitHub at github.com/RelayPlane/proxy. It is a local-first AI proxy that does cost-aware model routing across Anthropic, OpenAI, OpenRouter, and Ollama, and it works with Claude Code, Cursor, Continue, Windsurf, and MCP clients out of the box. It is also the most rigorous test rig I have for the engineering pipeline I sell to Continuum clients.
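To make cost-aware routing concrete, here is a minimal sketch of the idea, assuming a per-million-token price table and a rough capability score per route. The provider names match the ones above, but the types, prices, scores, and the pickRoute function are illustrative stand-ins, not RelayPlane's actual routing code.

```ts
// Illustrative sketch of cost-aware routing; prices and scores are made up.
type Provider = "anthropic" | "openai" | "openrouter" | "ollama";

interface Route {
  provider: Provider;
  model: string;        // placeholder names, not real model IDs
  costPerMTok: number;  // USD per million tokens, illustrative
  capability: number;   // rough quality score, 0..10, illustrative
}

const routes: Route[] = [
  { provider: "ollama",     model: "local-small",     costPerMTok: 0,   capability: 4 },
  { provider: "openrouter", model: "hosted-cheap",    costPerMTok: 0.5, capability: 6 },
  { provider: "openai",     model: "hosted-mid",      costPerMTok: 2,   capability: 8 },
  { provider: "anthropic",  model: "hosted-frontier", costPerMTok: 8,   capability: 10 },
];

// Cheapest route that clears the task's complexity bar.
function pickRoute(requiredCapability: number): Route {
  const viable = routes.filter((r) => r.capability >= requiredCapability);
  if (viable.length === 0) throw new Error("no route meets the capability bar");
  return viable.reduce((best, r) => (r.costPerMTok < best.costPerMTok ? r : best));
}

// An easy prompt routes to free local Ollama; a hard one to a frontier model.
console.log(pickRoute(3).provider); // "ollama"
console.log(pickRoute(9).provider); // "anthropic"
```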
Every PR that lands on RelayPlane goes through the same pipeline a client engagement uses: tasks get scoped, agents work in parallel branches, tests run before I see the diff, and lint and types must pass before the diff lands in my review queue. I read every line. The agents do the volume. I do the judgment.
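In practice that gate is just a hard sequence of checks that an agent branch must clear before a human sees it. A minimal sketch, assuming a standard npm project layout; the script is illustrative, not RelayPlane's actual CI:

```ts
// Hypothetical pre-review gate for an agent branch; illustrative only.
import { execSync } from "node:child_process";

// Each gate must pass before the branch enters the human review queue.
const gates = [
  { name: "tests", cmd: "npm test" },
  { name: "lint",  cmd: "npm run lint" },
  { name: "types", cmd: "npx tsc --noEmit" },
];

for (const gate of gates) {
  try {
    execSync(gate.cmd, { stdio: "inherit" });
  } catch {
    console.error(`gate failed: ${gate.name}; branch stays out of the review queue`);
    process.exit(1);
  }
}
console.log("all gates passed; diff is ready for human review");
```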
Why this is the proof piece, not a client case study
A pipeline that survives daily exposure to its own maintainer on their own product is a pipeline you can sell. If the system generates noise, I see it before any client does. If the senior review layer is not catching enough, I feel it before any client does. RelayPlane is the test instrument; the pipeline is the product.
What the pipeline has shipped on RelayPlane
Cost tracking, complexity-based routing, Ollama fallback, MCP client compatibility, and the npm distribution itself: all of it shipped through the supervised pipeline. The code is public. You can read the commit history.
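Of those, the Ollama fallback is the easiest to show in miniature: try the hosted provider, and on any failure route the same request to a local model. The sketch below uses Ollama's standard local generate endpoint (POST /api/generate on localhost:11434); everything else, including the function names and the model tag, is a hypothetical stand-in, not RelayPlane's code.

```ts
// Hypothetical hosted-to-local fallback sketch; the Ollama endpoint is real,
// the surrounding names are illustrative.
async function callOllama(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model: "llama3.1", prompt, stream: false }),
  });
  const data = await res.json();
  return data.response; // Ollama returns the completion in the "response" field
}

async function completeWithFallback(
  prompt: string,
  callHosted: (p: string) => Promise<string>, // stand-in for a hosted provider call
): Promise<string> {
  try {
    return await callHosted(prompt); // hosted provider first, for quality
  } catch (err) {
    console.warn("hosted call failed, falling back to local Ollama:", err);
    return callOllama(prompt); // local model keeps requests flowing offline
  }
}
```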