Terminal-first multi-agent coding assistant

Spettro brings planning, approval, and coding into one terminal.

A Go + Bubble Tea interface with configurable agents, live tool traces, local or hosted model routing, and session persistence built into the workflow.

Product

Built around repository work, explicit permissions, and real agent handoffs.

Spettro is not trying to hide the workflow. The product is the workflow: trust the folder, connect a model, plan the change, approve execution when needed, and keep the session alive long enough to finish serious work.

Go 1.26+ and Bubble Tea TUI
Configurable agent manifest and prompts
Approval-native execution flow
Sessions, tasks, compact, and resume
Docs

Everything important is already documented in the repo.

FAQ

Questions that come up once you start thinking about using it for real work.

What makes Spettro different from a single-agent terminal assistant?

Spettro is built around explicit roles and handoffs. Planning, coding, review, docs, git, test, and explorer agents can each own part of the work, instead of a single assistant handling every phase the same way.

Does it support local models?

Yes. The docs cover OpenAI-compatible local endpoints such as LM Studio or Ollama, alongside hosted providers.

How does approval work?

Spettro exposes ask-first, restricted, and yolo permission modes. In stricter flows, plans pause before execution and continue only after approval.

What does first-time setup look like?

The documented path is straightforward: launch Spettro, confirm project trust, run /connect to add an API key or local endpoint, run /models to select a model, then start in plan mode.
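A sketch of that first session; the prompts and exact output shown here are illustrative, not verbatim Spettro UI:

```
$ spettro
Trust this folder? (y/n) y
> /connect     # add an API key or a local OpenAI-compatible endpoint
> /models      # pick the model to route to
> [plan mode]  # describe the change and review the plan before executing
```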

Can it work with Anthropic and OpenAI-compatible providers at the same time?

Yes. The provider layer supports native Anthropic plus OpenAI-compatible APIs and local OpenAI-compatible endpoints, with model metadata loaded from models.dev and cached locally.

What happens in ask-first mode?

In ask-first mode, the normal path is to generate a plan first, review it, and then run /approve so the coding agent executes the queued plan instead of acting immediately.

Can I resume longer conversations later?

Yes. The docs describe persistent session storage for messages, tasks, and agent events, plus /clear, /resume, and /compact so longer threads stay manageable instead of vanishing.

Where does Spettro store its state?

It uses both global and project-local storage. Global state lives under ~/.spettro for config, encrypted keys, trusted paths, sessions, and cached models. Project-local state lives in .spettro for things like PLAN.md, allowed_commands.json, hooks, and optional index data.
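Laid out as a tree; entries not named above (the subdirectory names under sessions and cached models, for example) are assumptions for illustration:

```
~/.spettro/                  # global state
├── config.json              # configuration; API keys stored encrypted
├── trusted_paths            # folders you have approved
├── sessions/                # persisted conversations
└── models/                  # cached model metadata

<repo>/.spettro/             # project-local state
├── PLAN.md                  # current plan
├── allowed_commands.json    # persisted command approvals
├── hooks/                   # event hooks
└── index/                   # optional index data
```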

Are API keys stored in plain text?

No. The configuration docs state that keys are encrypted with AES-GCM and are not stored in plaintext inside config.json.
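As an illustration of that scheme (a minimal sketch of AES-GCM sealing in Go, not Spettro's actual implementation), encrypting a key prepends a random nonce so the ciphertext is self-contained and tamper-evident:

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"fmt"
)

// seal encrypts plaintext with AES-256-GCM and prepends the random nonce.
func seal(key, plaintext []byte) ([]byte, error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		return nil, err
	}
	return gcm.Seal(nonce, nonce, plaintext, nil), nil
}

// open splits off the nonce, then decrypts and authenticates the rest.
func open(key, sealed []byte) ([]byte, error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce, ct := sealed[:gcm.NonceSize()], sealed[gcm.NonceSize():]
	return gcm.Open(nil, nonce, ct, nil)
}

func main() {
	key := make([]byte, 32) // 32 bytes selects AES-256
	if _, err := rand.Read(key); err != nil {
		panic(err)
	}
	sealed, err := seal(key, []byte("sk-example-api-key"))
	if err != nil {
		panic(err)
	}
	plain, err := open(key, sealed)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(plain))
}
```

Because GCM is authenticated, any byte flipped in the stored ciphertext makes open return an error instead of garbage plaintext.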

Can I customize the agent workflow per repository?

Yes. Spettro can load a project-level spettro.agents.toml and prompt files in agents/, so each repository can define its own default agent, runtime settings, and handoff structure.
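A minimal sketch of what such a manifest could look like; the field names below are illustrative guesses, not the documented schema:

```toml
# spettro.agents.toml — hypothetical field names
default_agent = "planner"

[agents.planner]
prompt = "agents/planner.md"
handoff = ["coder", "reviewer"]

[agents.coder]
prompt = "agents/coder.md"
```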

Does Spettro support hooks and command policies?

Yes. Global and project-local hooks can run on events like PreToolUse, PostToolUse, PermissionRequest, and SessionStart, and command approvals can be persisted per project in .spettro/allowed_commands.json.
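A persisted approval file might look roughly like this; the exact shape of `.spettro/allowed_commands.json` is an assumption here:

```json
{
  "allowed": [
    "go test ./...",
    "git status"
  ]
}
```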

What should I check if provider setup fails?

The troubleshooting docs recommend running /connect again, verifying your API key or local endpoint, checking that the endpoint responds to /v1/models, and then confirming that the model selected in /models actually exists for that provider.
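The endpoint check is easy to do by hand. Assuming an LM Studio-style local server on its default port (your port and base path may differ):

```shell
# List the models the local OpenAI-compatible endpoint advertises.
curl -s http://localhost:1234/v1/models
```

If this returns an empty list or a connection error, fix the server before debugging anything inside Spettro.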

What if /approve does nothing useful?

The docs call out three common causes: no plan was generated first, there is no pending plan to execute, or the current permission mode is not what you think it is.

How do I keep large-repo sessions from blowing up context?

Use /budget to adjust token limits, /compact to summarize active context, /compact auto to control automatic compaction, and narrower prompts or focused file mentions when repository scans get heavy.
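In practice that loop looks something like this; the arguments shown are illustrative:

```
> /budget 120000   # cap token usage for this session
> /compact         # summarize the active context now
> /compact auto    # configure automatic compaction
```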