Santra Documentation

Santra is a repository-aware coding assistant that runs in your terminal. It reads your codebase, reasons about it, and makes precise edits using a swarm of specialised sub-agents — all routed through whichever model you choose.

Key properties:

  • Provider-agnostic. Works with any OpenAI-compatible API, Anthropic, or Nvidia NIM.
  • Bring your own key. Requests go directly from your machine to your provider.
  • Multi-agent. An orchestrator spawns specialised sub-agents for file search, reading, implementation, review, and reasoning.
  • Terminal-native. Built with Ink (React for the terminal). No Electron, no browser.
  • MIT licensed. Fork, extend, and embed freely.

Installation

Install globally via npm, pnpm, or yarn. Requires Node.js 18+.

$ npm install -g santra
$ santra --version
santra v1.0.0

See the install guide for provider-specific setup and alternative package managers.

Quickstart

1. Set your API key

# Anthropic (recommended)
$ export ANTHROPIC_API_KEY=sk-ant-...

# Or Ollama (local, no key needed)
$ export OPENAI_BASE_URL=http://localhost:11434/v1
$ export OPENAI_API_KEY=ollama

2. Run in your project

$ cd my-project
$ santra

● santra / my-project
ready for input…

3. Type a task

Add input validation to the /api/users POST route

[thinking] Reading route file…
[file-picker] found src/api/users.ts
[executor] str_replace → validation middleware added
[reviewer] diff approved

The agent will ask for your approval before applying any file changes. You'll see a diff with line counts and can accept, reject, or provide feedback.

Agent system

Santra uses a dynamic multi-agent architecture. An orchestrator agent handles your request directly or delegates subtasks to specialist sub-agents via spawn_agent or spawn_agents (parallel).

Built-in agents

  • orchestrator: Entry point. Reads the request, explores the repo, delegates or acts directly. Tools: all tools.
  • file-picker: Locates relevant files using glob and ripgrep. Tools: search_files, code_search, glob, list_directory.
  • reader: Synthesises architecture and implementation context from files. Tools: read_file, read_subtree, list_directory.
  • executor: Makes precise edits (str_replace for surgical changes, write_file for new files). Tools: read_file, write_file, str_replace, apply_patch.
  • reviewer: Critiques diffs and flags risks. Does not make edits. Tools: read_file.
  • thinker: Reasons through complex decisions. No file tools; pure reasoning. Tools: task_completed.

Execution flow

The orchestrator is always the entry point. It reads your prompt, classifies the task, explores the repository, and either acts directly or delegates to specialists. Specialists return their output to the orchestrator, which synthesises the result.

Agents share conversation context between runs. The orchestrator can spawn multiple agents in parallel using spawn_agents for independent subtasks (e.g. simultaneously reading two different files).
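As an illustration of that flow, a parallel delegation step can be sketched like this. The `runAgent` function and the `SubTask` shape are hypothetical stand-ins, not Santra's internal API; only the spawn_agents semantics (independent subtasks, concurrent execution, results returned together) come from the text above.

```typescript
// Hypothetical sketch of spawn_agents-style parallel delegation.
// runAgent and SubTask are illustrative, not Santra's internal API.
type SubTask = { agentId: string; prompt: string };

async function runAgent(task: SubTask): Promise<string> {
  // Stand-in for a real sub-agent run; returns a one-line summary.
  return `${task.agentId}: ${task.prompt} (done)`;
}

// spawn_agents semantics: independent subtasks run concurrently and all
// results come back to the orchestrator together.
async function spawnAgents(tasks: SubTask[]): Promise<string[]> {
  return Promise.all(tasks.map(runAgent));
}

// e.g. reading two different files at the same time:
(async () => {
  const summaries = await spawnAgents([
    { agentId: "reader", prompt: "summarise src/api/users.ts" },
    { agentId: "reader", prompt: "summarise src/api/auth.ts" },
  ]);
  console.log(summaries.join("\n"));
})();
```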

Change approval

Every file write surfaces a diff overlay in the TUI. You can:

  • Accept all — apply changes immediately
  • Reject — discard the change
  • Feedback — type a correction; the agent revises

Tools reference

Agents have access to 23 built-in tools. Each tool is defined in packages/shared/ and executed locally by the agent runtime.

File operations

Tool         Description
read_file    Read full text content of a file. Truncates at 40 KB.
write_file   Create a new file or fully overwrite an existing one.
str_replace  Surgical edit: replace an exact string in a file.
apply_patch  Apply a unified diff patch across one or more files.
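The str_replace behaviour above can be sketched as plain exact-string replacement. The uniqueness check below is an assumed safeguard for illustration, not documented Santra behaviour:

```typescript
// Sketch of exact-string replacement in the spirit of str_replace.
// The uniqueness check is an assumption, not documented Santra behaviour.
function strReplace(content: string, oldStr: string, newStr: string): string {
  const first = content.indexOf(oldStr);
  if (first === -1) {
    throw new Error("old string not found; no edit applied");
  }
  // Require a unique match so the edit is unambiguous.
  if (content.indexOf(oldStr, first + 1) !== -1) {
    throw new Error("old string matches more than once; be more specific");
  }
  return content.slice(0, first) + newStr + content.slice(first + oldStr.length);
}
```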

Search & navigation

Tool            Description
list_directory  List immediate children of a directory with type metadata.
search_files    Find files matching a glob pattern.
search_text     Ripgrep-based content search with regex support.
code_search     Semantic code search (ripgrep with type hints).
glob            Fast pattern-based file matching.
read_subtree    Recursive directory snapshot with content limits (12 KB default).
get_cwd         Return the current working directory.

Agent control

Tool               Description
spawn_agent        Delegate a bounded subtask to a single specialist agent.
spawn_agents       Run multiple specialist agents in parallel.
lookup_agent_info  Inspect registered agent metadata.
task_completed     Signal task completion with an optional summary.
set_output         Set structured output for programmatic consumers.
set_messages       Replace or append conversation messages.
suggest_followups  Return suggested follow-up actions to the user.

Interactive & external

Tool                  Description
ask_user              Pause the run and request clarification. Supports multi-choice questions.
web_search            Search the public web for current information.
read_docs             Read documentation from a URL or local path.
run_terminal_command  Execute a shell command with configurable timeout.
write_todos           Lightweight planning memory scoped to the current run.

Files larger than 40 KB are automatically truncated. Standard directories like node_modules, .git, dist, and .next are excluded from all search and listing operations.
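That exclusion rule can be pictured as a simple path filter. Only the directory names come from the text above; the segment-matching logic here is an assumption for illustration:

```typescript
// Sketch of the directory filter described above. The exclusion list comes
// from the docs; the segment-based matching logic is an assumption.
const EXCLUDED_DIRS = new Set(["node_modules", ".git", "dist", ".next"]);

function isExcluded(relPath: string): boolean {
  // Exclude a path if any of its directory segments is on the list.
  return relPath.split("/").some((seg) => EXCLUDED_DIRS.has(seg));
}
```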

Commands

Type / in the input to see available commands. The TUI shows a suggestion overlay as you type.

Command  Description
/resume  Browse and reopen a saved session. Supports fuzzy filtering.
/clear   Wipe the current transcript and cumulative history.
/copy    Copy the visible transcript to the clipboard (pbcopy / xclip / wl-copy).
/model   Show the active model name.

Keyboard shortcuts

Key          Action
Enter        Send the current prompt.
Esc          Stop the running agent, or clear the input.
F2           Toggle between scroll mode and select mode (for text selection).
↑ / ↓        In scroll mode: scroll the transcript. In input: navigate suggestions.
j / k        Scroll the transcript (vim-style, scroll mode only).
PgUp / PgDn  Scroll by page.
Home / End   Jump to the top or bottom of the transcript.
Tab          Accept the top suggestion in the suggestion overlay.
Ctrl+C       Exit Santra.

Providers

Santra uses environment variables for provider configuration. Any server implementing the OpenAI chat-completions API is compatible.

Provider               Environment variables
Anthropic              ANTHROPIC_API_KEY
OpenAI                 OPENAI_API_KEY
Nvidia NIM             OPENAI_BASE_URL, OPENAI_API_KEY
Groq                   OPENAI_BASE_URL, OPENAI_API_KEY
Ollama                 OPENAI_BASE_URL=http://localhost:11434/v1, OPENAI_API_KEY=ollama
Any compatible server  OPENAI_BASE_URL, OPENAI_API_KEY
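One way to picture this configuration is a small resolver over the environment. The precedence order below is purely an assumption for illustration; Santra's actual selection logic is not documented here:

```typescript
// Hypothetical provider resolver for the table above. The precedence
// (base URL first, then Anthropic, then OpenAI) is an assumption.
type Env = Record<string, string | undefined>;

function resolveProvider(env: Env): string {
  if (env.OPENAI_BASE_URL) return `openai-compatible (${env.OPENAI_BASE_URL})`;
  if (env.ANTHROPIC_API_KEY) return "anthropic";
  if (env.OPENAI_API_KEY) return "openai";
  throw new Error("no provider configured");
}
```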

See the providers page for complete setup instructions including Nvidia NIM and local inference.

Configuration

Environment variables

Variable           Description
ANTHROPIC_API_KEY  Anthropic API key for Claude models.
OPENAI_API_KEY     Key for OpenAI or any OpenAI-compatible provider.
OPENAI_BASE_URL    Override the OpenAI base URL to point at any compatible server.
SANTRA_MODEL       Pin a specific model ID (overrides the agent default).

Session files

Sessions are auto-saved to .santra-logs/ in your project directory. Each session file uses an ISO timestamp as its ID. Use /resume to browse and reopen saved sessions.
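A session file name of that shape can be sketched as follows. The character sanitisation (":" and "." are not filename-safe everywhere) and the .json extension are assumptions, not Santra's documented naming:

```typescript
// Sketch: deriving a session file name from an ISO timestamp, as described
// above. The sanitisation and the .json extension are assumptions.
function sessionFileName(now: Date = new Date()): string {
  const id = now.toISOString().replace(/[:.]/g, "-");
  return `.santra-logs/${id}.json`;
}
```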

Custom agents

Drop agent definition files into a .agents/ directory in your project root. Santra loads them automatically on startup.

Each agent file exports an object with the following shape:

export const myAgent = {
  id: "my-agent",
  name: "My Agent",
  description: "What this agent does.",
  systemPrompt: `Your custom system prompt here.`,
  toolNames: ["read_file", "str_replace", "task_completed"],
  canBeSpawned: true,
};

Fields

Field         Required  Description
id            yes       Unique identifier for this agent. Used by spawn_agent.
name          yes       Human-readable display name.
description   yes       Short description of the agent's role.
systemPrompt  yes       The full system prompt string.
toolNames     yes       Array of tool names this agent has access to.
canBeSpawned  no        If true, the orchestrator can delegate to this agent. Default: false.

The toolNames array controls which tools the agent has access to. Use the smallest set that fits the agent's role; this reduces context and keeps the agent focused.
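A loader for these agent files might validate the required fields from the table above like this. This is a sketch: the default-filling and error handling are assumptions, not Santra's actual loader:

```typescript
// Sketch of validating the agent shape described above; the validation
// strategy and error messages are illustrative assumptions.
interface AgentDef {
  id: string;
  name: string;
  description: string;
  systemPrompt: string;
  toolNames: string[];
  canBeSpawned?: boolean;
}

function validateAgent(def: Partial<AgentDef>): AgentDef {
  const required = ["id", "name", "description", "systemPrompt", "toolNames"] as const;
  for (const field of required) {
    if (def[field] === undefined) {
      throw new Error(`agent definition missing required field: ${field}`);
    }
  }
  // canBeSpawned defaults to false, per the field table.
  return { canBeSpawned: false, ...def } as AgentDef;
}
```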