Santra Documentation
Santra is a repository-aware coding assistant that runs in your terminal. It reads your codebase, reasons about it, and makes precise edits using a swarm of specialised sub-agents — all routed through whichever model you choose.
Key properties:
- Provider-agnostic. Works with any OpenAI-compatible API, Anthropic, or Nvidia NIM.
- Bring your own key. Requests go directly from your machine to your provider.
- Multi-agent. An orchestrator spawns specialised sub-agents for file search, reading, implementation, review, and reasoning.
- Terminal-native. Built with Ink (React for the terminal). No Electron, no browser.
- MIT licensed. Fork, extend, and embed freely.
Installation
Install globally via npm, pnpm, or yarn. Requires Node.js 18+.
See the install guide for provider-specific setup and alternative package managers.
Quickstart
1. Set your API key
2. Run in your project
3. Type a task
The agent will ask for your approval before applying any file changes. You'll see a diff with line counts and can accept, reject, or provide feedback.
Agent system
Santra uses a dynamic multi-agent architecture. An orchestrator agent handles your request directly or delegates subtasks to specialist sub-agents via spawn_agent or spawn_agents (parallel).
Built-in agents
| Agent | Role | Tools |
|---|---|---|
| orchestrator | Entry point. Reads request, explores repo, delegates or acts directly. | all tools |
| file-picker | Locates relevant files using glob and ripgrep. | search_files, code_search, glob, list_directory |
| reader | Synthesises architecture and implementation context from files. | read_file, read_subtree, list_directory |
| executor | Makes precise edits — str_replace for surgical changes, write_file for new files. | read_file, write_file, str_replace, apply_patch |
| reviewer | Critiques diffs and flags risks. Does not make edits. | read_file |
| thinker | Reasons through complex decisions. No file tools — pure reasoning. | task_completed |
Execution flow
The orchestrator is always the entry point. It reads your prompt, classifies the task, explores the repository, and either acts directly or delegates to specialists. Specialists return their output to the orchestrator, which synthesises the result.
Agents share conversation context between runs. The orchestrator can spawn multiple agents in parallel using spawn_agents for independent subtasks (e.g. simultaneously reading two different files).
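The fan-out/fan-in shape of parallel delegation can be sketched as below. This is an illustrative model of spawn_agents semantics only — the types, the runSubAgent function, and the prompts are hypothetical, not Santra's internals:

```typescript
// Sketch of spawn_agents-style parallel delegation. The SubTask type and
// runSubAgent are hypothetical illustrations, not Santra's actual API.
type SubTask = { agentId: string; prompt: string };

async function runSubAgent(task: SubTask): Promise<string> {
  // A real implementation would run a full model loop per specialist;
  // here we just echo the delegation to show the shape of the flow.
  return `${task.agentId}: done (${task.prompt})`;
}

// The orchestrator fans independent subtasks out in parallel ...
async function spawnAgents(tasks: SubTask[]): Promise<string[]> {
  return Promise.all(tasks.map(runSubAgent));
}

// ... and synthesises the results after all specialists return.
spawnAgents([
  { agentId: "reader", prompt: "summarise src/auth.ts" },
  { agentId: "reader", prompt: "summarise src/session.ts" },
]).then((results) => console.log(results));
```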
Change approval
Every file write surfaces a diff overlay in the TUI. You can:
- Accept all — apply changes immediately
- Reject — discard the change
- Feedback — type a correction; the agent revises
Tools reference
Agents have access to 23 built-in tools. Each tool is defined in packages/shared/ and executed locally by the agent runtime.
File operations
| Tool | Description |
|---|---|
| read_file | Read full text content of a file. Truncates at 40 KB. |
| write_file | Create a new file or fully overwrite an existing one. |
| str_replace | Surgical edit — replace an exact string in a file. |
| apply_patch | Apply a unified diff patch across one or more files. |
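The str_replace behaviour can be pictured as follows. This is a simplified sketch of exact-string editing; the uniqueness check is an assumption about how such tools typically guard against ambiguous edits, not a description of Santra's implementation:

```typescript
// Sketch of a str_replace-style edit: the old string must appear exactly
// once, otherwise the edit is rejected as ambiguous. Illustration only,
// not Santra's actual implementation.
function strReplace(content: string, oldStr: string, newStr: string): string {
  const first = content.indexOf(oldStr);
  if (first === -1) throw new Error("old string not found");
  if (content.indexOf(oldStr, first + 1) !== -1) {
    throw new Error("old string is not unique; provide more context");
  }
  return content.slice(0, first) + newStr + content.slice(first + oldStr.length);
}

strReplace("const x = 1;\nconst y = 1;", "const x = 1;", "const x = 2;");
// → "const x = 2;\nconst y = 1;"
```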
Search & navigation
| Tool | Description |
|---|---|
| list_directory | List immediate children of a directory with type metadata. |
| search_files | Find files matching a glob pattern. |
| search_text | Ripgrep-based content search with regex support. |
| code_search | Semantic code search (ripgrep with type hints). |
| glob | Fast pattern-based file matching. |
| read_subtree | Recursive directory snapshot with content limits (12 KB default). |
| get_cwd | Return the current working directory. |
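For readers unfamiliar with glob patterns, a minimal matcher for a subset of the syntax ('*' and '**') looks roughly like this. It is an assumption about how search_files-style tools behave generally, not Santra's code:

```typescript
// Minimal illustration of glob-style matching ('*' = any characters within
// one path segment, '**' = any characters across segments). A sketch of the
// general technique, not Santra's implementation.
function globToRegExp(glob: string): RegExp {
  const escaped = glob.replace(/[.+^${}()|[\]\\]/g, "\\$&");
  const pattern = escaped
    .replace(/\*\*/g, "\u0000")   // placeholder so '**' survives the '*' pass
    .replace(/\*/g, "[^/]*")
    .replace(/\u0000/g, ".*");
  return new RegExp(`^${pattern}$`);
}

globToRegExp("src/**/*.ts").test("src/agents/reader.ts"); // matches
```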
Agent control
| Tool | Description |
|---|---|
| spawn_agent | Delegate a bounded subtask to a single specialist agent. |
| spawn_agents | Run multiple specialist agents in parallel. |
| lookup_agent_info | Inspect registered agent metadata. |
| task_completed | Signal task completion with an optional summary. |
| set_output | Set structured output for programmatic consumers. |
| set_messages | Replace or append conversation messages. |
| suggest_followups | Return suggested follow-up actions to the user. |
Interactive & external
| Tool | Description |
|---|---|
| ask_user | Pause the run and request clarification. Supports multi-choice questions. |
| web_search | Search the public web for current information. |
| read_docs | Read documentation from a URL or local path. |
| run_terminal_command | Execute a shell command with configurable timeout. |
| write_todos | Lightweight planning memory scoped to the current run. |
Files larger than 40 KB are automatically truncated. Standard directories like node_modules, .git, dist, and .next are excluded from all search and listing operations.
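The exclusion behaviour can be pictured as a simple path filter. The directory names come from the text above; the function itself is an illustrative sketch, not Santra's source:

```typescript
// Illustrative sketch of the standard-directory exclusion described above.
const EXCLUDED_DIRS = new Set(["node_modules", ".git", "dist", ".next"]);

function isExcluded(relativePath: string): boolean {
  // A path is skipped if any of its segments is an excluded directory.
  return relativePath.split("/").some((seg) => EXCLUDED_DIRS.has(seg));
}

isExcluded("node_modules/react/index.js"); // → true
isExcluded("src/app/page.tsx");            // → false
```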
Commands
Type / in the input to see available commands. The TUI shows a suggestion overlay as you type.
| Command | Description |
|---|---|
| /resume | Browse and reopen a saved session. Supports fuzzy filtering. |
| /clear | Wipe the current transcript and cumulative history. |
| /copy | Copy the visible transcript to clipboard (pbcopy / xclip / wl-copy). |
| /model | Show the active model name. |
Keyboard shortcuts
| Key | Action |
|---|---|
| Enter | Send the current prompt. |
| Esc | Stop the running agent, or clear input. |
| F2 | Toggle between scroll mode and select mode (for text selection). |
| ↑ / ↓ | In scroll mode: scroll transcript. In input: navigate suggestions. |
| j / k | Scroll transcript (vim-style, scroll mode only). |
| PgUp / PgDn | Scroll by page. |
| Home / End | Jump to top or bottom of transcript. |
| Tab | Accept the top suggestion in the suggestion overlay. |
| Ctrl+C | Exit Santra. |
Providers
Santra uses environment variables for provider configuration. Any server implementing the OpenAI chat-completions API is compatible.
| Provider | Environment variables |
|---|---|
| Anthropic | ANTHROPIC_API_KEY |
| OpenAI | OPENAI_API_KEY |
| Nvidia NIM | OPENAI_BASE_URL, OPENAI_API_KEY |
| Groq | OPENAI_BASE_URL, OPENAI_API_KEY |
| Ollama | OPENAI_BASE_URL=http://localhost:11434/v1, OPENAI_API_KEY=ollama |
| Any compat. | OPENAI_BASE_URL, OPENAI_API_KEY |
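The way any OpenAI-compatible client resolves these variables can be sketched as below. The default base URL is an assumption about typical client behaviour, not taken from Santra's source:

```typescript
// Hypothetical sketch of endpoint selection from the environment variables
// in the table above; the fallback URL is an assumption, not Santra's code.
function resolveEndpoint(env: Record<string, string | undefined>) {
  return {
    baseURL: env.OPENAI_BASE_URL ?? "https://api.openai.com/v1",
    apiKey: env.OPENAI_API_KEY ?? "",
  };
}

// Pointing at a local Ollama server, as in the table above:
const local = resolveEndpoint({
  OPENAI_BASE_URL: "http://localhost:11434/v1",
  OPENAI_API_KEY: "ollama",
});
```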
See the providers page for complete setup instructions including Nvidia NIM and local inference.
Configuration
Environment variables
| Variable | Description |
|---|---|
| ANTHROPIC_API_KEY | Anthropic API key for Claude models. |
| OPENAI_API_KEY | Key for OpenAI or any OpenAI-compatible provider. |
| OPENAI_BASE_URL | Override the OpenAI base URL to point at any compatible server. |
| SANTRA_MODEL | Pin a specific model ID (overrides the agent default). |
Session files
Sessions are auto-saved to .santra-logs/ in your project directory. Each session file uses an ISO timestamp as its ID. Use /resume to browse and reopen saved sessions.
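A rough sketch of the naming scheme described above — the file extension and the character substitution are assumptions for illustration, not taken from Santra's source:

```typescript
// Hypothetical sketch: derive a session file path from an ISO timestamp ID.
// The .json extension and the character substitution are assumptions.
function sessionPath(date: Date): string {
  const id = date.toISOString().replace(/[:.]/g, "-"); // ':' is invalid in many filesystems
  return `.santra-logs/${id}.json`;
}

sessionPath(new Date("2024-05-01T12:30:00Z"));
// → ".santra-logs/2024-05-01T12-30-00-000Z.json"
```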
Custom agents
Drop agent definition files into a .agents/ directory in your project root. Santra loads them automatically on startup.
Each agent file exports an object with the following shape:
Fields
| Field | Required | Description |
|---|---|---|
| id | yes | Unique identifier for this agent. Used by spawn_agent. |
| name | yes | Human-readable display name. |
| description | yes | Short description of the agent's role. |
| systemPrompt | yes | The full system prompt string. |
| toolNames | yes | Array of tool names this agent has access to. |
| canBeSpawned | no | If true, the orchestrator can delegate to this agent. Default: false. |
The toolNames array controls which tools the agent has access to. Use the smallest set that fits the agent's role — this reduces context and keeps the agent focused.
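Putting the fields together, a custom agent file might look like this. Only the field names come from the table above; the file name, id, prompt, and tool list are illustrative assumptions:

```typescript
// .agents/docs-writer.ts — hypothetical example. Field names follow the
// fields table above; everything else (id, prompt, tools) is illustrative.
const docsWriter = {
  id: "docs-writer",
  name: "Docs Writer",
  description: "Writes and updates project documentation.",
  systemPrompt:
    "You are a documentation specialist. Read the relevant source files, " +
    "then write clear, accurate docs. Prefer small, focused edits.",
  toolNames: ["read_file", "write_file", "str_replace", "task_completed"],
  canBeSpawned: true, // allow the orchestrator to delegate to this agent
};

export default docsWriter;
```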