The official guide to operating the Ussyverse's AI development orchestrator.
Full documentation is available on GitHub. This page is the Geoffrussy manual. Openclawssy has its own separate documentation and should not be treated as a Geoffrussy module.
Openclawssy is an alternative to OpenClaw for the sensible, thinking gentleman 🎩🚬. Use the links below for project-accurate setup and protocol docs.
PROTOTYPE IN DEVELOPMENT: Do not use in production yet.
Geoffrussy is written in Go and distributed as a single binary with no external dependencies beyond an internet connection for API access. You can install via the Go toolchain or download a pre-built binary from GitHub Releases.
# Install via Go toolchain (requires Go 1.21+)
$ go install github.com/mojomast/geoffrussy@latest
# Or build from source
$ git clone https://github.com/mojomast/geoffrussy.git
$ cd geoffrussy && go build -o geoffrussy .
# Verify installation
$ geoffrussy version

Before your first run, initialize the configuration. This sets up your model routing table — which AI models to use for the Planner (architecture and reasoning) and the Executor (code generation).
Strategy Guide: Use a high-reasoning model for the Planner — Claude, GPT-4, or Gemini Pro work well. For the Executor, configure a fast, cheap model like Llama or GLM via Ollama to keep costs low while maintaining velocity. Geoffrussy supports OpenAI, Anthropic, GLM, and Ollama providers.
$ geoffrussy init

This creates a ~/.geoffrussy/config.yaml file where you define your model routing, API keys, and preferences.
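As a rough sketch of what a two-tier routing configuration could look like, here is a hypothetical config.yaml. The key names and model identifiers below are assumptions for illustration, not the project's documented schema — run `geoffrussy init` and inspect the generated file for the real keys.

```yaml
# Hypothetical ~/.geoffrussy/config.yaml — field names are illustrative
models:
  planner:
    provider: anthropic    # high-reasoning model for architecture and task ordering
    model: claude-sonnet
  executor:
    provider: ollama       # fast, cheap local model for code generation
    model: llama3
api_keys:
  anthropic: ${ANTHROPIC_API_KEY}
```

The point of the split is cost control: the expensive model only sees planning-scale context, while the bulk of token traffic goes to the cheap Executor model.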
This is where the magic happens. Instead of writing a prompt, you enter a dialogue. Geoffrussy uses the integrated DevUssy engine to run a five-phase interactive interview about your project.
After the interview, Geoffrussy generates a devplan.md — a structured development plan with phases, tasks, acceptance criteria, and architecture decisions. This is the core artifact that drives everything downstream.
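The exact layout of the generated devplan.md is not shown in this guide; the following is a hedged sketch of a plan containing the four elements named above (phases, tasks, acceptance criteria, architecture decisions). All section names and task contents are illustrative assumptions.

```markdown
<!-- Illustrative devplan.md sketch — structure is an assumption, not the generated format -->
# DevPlan: example-service

## Architecture Decisions
- REST API in Go, SQLite for persistence

## Phase 1: Scaffolding
- [ ] Task 1.1: Initialize module and directory layout
  - Acceptance: `go build ./...` succeeds
- [ ] Task 1.2: Add HTTP server skeleton
  - Acceptance: `GET /healthz` returns 200
```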
Once you approve the plan, Geoffrussy begins the orchestration loop. The full pipeline is: interview -> design -> plan -> review -> develop.
The Planner reads the DevPlan and uses your configured high-reasoning model (Claude, GPT-4, Gemini) to reason about dependencies, architectural integrity, and task ordering.
The Executor dispatches implementation tasks to fast, cost-effective models (Llama, GLM, or any Ollama-compatible model). Built-in cost tracking shows exactly what each step costs.
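The Planner/Executor split described above can be sketched as a small routing table plus a cost tracker. This is an illustrative Go sketch of the pattern, not Geoffrussy's actual internals; all type names and prices are assumptions.

```go
package main

import "fmt"

// Role identifies a stage of the pipeline that needs a model.
type Role string

const (
	Planner  Role = "planner"
	Executor Role = "executor"
)

// ModelConfig is a hypothetical per-model entry: provider, model name,
// and an illustrative price in USD per 1K input tokens.
type ModelConfig struct {
	Provider      string
	Model         string
	CostPerKTokIn float64
}

// RoutingTable maps each role to the model it should use.
type RoutingTable map[Role]ModelConfig

// PickModel returns the model configured for a role, falling back to the
// cheap Executor model when a role has no explicit entry.
func (rt RoutingTable) PickModel(r Role) ModelConfig {
	if m, ok := rt[r]; ok {
		return m
	}
	return rt[Executor]
}

// CostTracker accumulates spend per role, mirroring the built-in
// per-step cost reporting described above.
type CostTracker map[Role]float64

// Record charges kTokens (thousands of tokens) against a role.
func (ct CostTracker) Record(r Role, m ModelConfig, kTokens float64) {
	ct[r] += kTokens * m.CostPerKTokIn
}

func main() {
	rt := RoutingTable{
		Planner:  {Provider: "anthropic", Model: "claude", CostPerKTokIn: 0.003},
		Executor: {Provider: "ollama", Model: "llama3", CostPerKTokIn: 0.0}, // local = free
	}
	ct := CostTracker{}
	ct.Record(Planner, rt.PickModel(Planner), 12)   // one planning pass, 12K tokens
	ct.Record(Executor, rt.PickModel(Executor), 40) // bulk code generation, 40K tokens
	fmt.Printf("planner spend: $%.4f\n", ct[Planner])   // $0.0360
	fmt.Printf("executor spend: $%.4f\n", ct[Executor]) // $0.0000
}
```

The design choice the sketch captures is that expensive reasoning tokens are confined to the Planner role, while unconfigured roles fall back to the cheap Executor model by default.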