Python autonomous agent · open source
DULUS
// hunt. patch. ship.

~12K lines of readable Python. Any model — Claude, GPT, Gemini, DeepSeek, Kimi, Qwen, and 14 free models via NVIDIA NIM. No build step. No gatekeeping.

27
Built-in tools
11
Providers
263+
Unit tests
dulus — interactive session
▲ DULUS v1.01.20 · ready
0
Tool calls executed
and counting
40+
Models supported
11 providers
263+
Unit tests
all green
12K
Lines of Python
readable. forgiving.
// loadout

Everything in the clip

🤖
01

Multi-Provider

Anthropic · OpenAI · Gemini · DeepSeek · Kimi · Qwen · Zhipu · MiniMax · Ollama · LM Studio · custom endpoints. /model to switch mid-session.

🔧
02

27 Built-in Tools

Read, Write, Edit, Bash, Glob, Grep, WebFetch, WebSearch, NotebookEdit, GetDiagnostics, Memory, Tasks, Agents, Skills, and more. Everything the agent needs.

🔌
03

MCP Integration

Drop a .mcp.json. Any MCP server registers instantly as mcp__server__tool. stdio, SSE, HTTP. Manage with /mcp.
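A `.mcp.json` can be as small as this. Hedged sketch: the field names follow the `mcpServers` layout common to other MCP clients (stdio servers via `command`/`args`, remote servers via `url`); Dulus's exact schema may differ, and the server names and token are placeholders.

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": { "GITHUB_TOKEN": "ghp_..." }
    },
    "docs": { "url": "https://mcp.example.com/sse" }
  }
}
```

If the layout above holds, the tools register as `mcp__github__*` and `mcp__docs__*` as soon as the file is picked up.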

🧩
04

Plugin System

Auto-Adapter onboards any Python repo with zero manifest. Hot-reload in-session. No restart. Tools appear immediately.

🦅
05

Sub-Agents

Spawn typed agents — coder, reviewer, researcher, tester — each in its own git worktree. Agents communicate via message passing.

🎙️
06

Voice Input

Offline STT via Whisper. No API key. No cloud. /voice lang zh · /voice device. Hint domain terms with voice_keyterms.txt.

🧠
07

Brainstorm Mode

Multi-persona AI debate. Dulus generates expert roles and has them argue. Council of ghosts. Skeptic PM, Staff Eng 2037, Hot-take Intern.

⚡
08

SSJ Developer Mode

Ten workflow shortcuts behind one keystroke. Refactor → review → test → commit → ship. Chained. Unattended. /ssj.

📡
09

Telegram Bridge

Run Dulus from your phone. Slash commands, vision, and voice from Telegram. Poke a long-running agent from the bus. /telegram token id.

💾
10

Checkpoints

Auto-snapshot conversation + files every turn. Break something? /checkpoint 042 and files + context rewind together.

🧬
11

Persistent Memory

Dual-scope (user + project). Ranked by confidence × recency. Mark memories gold to pin them forever. /memory consolidate.

📋
12

Plan Mode

Read-only analysis phase before touching anything. Only plan.md is writable. Think first, break things later. /plan.

// bring your own brain

Works with every model worth knowing

Swap models mid-session with /model <name>. Auto-detection handles provider prefix. Colon syntax also works.
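Prefix detection is a one-liner to sketch. Hypothetical parser, not Dulus's shipped code, assuming `/model` accepts `provider/model`, `provider:model`, or a bare model name (the provider table here is abbreviated):

```python
KNOWN_PROVIDERS = {"anthropic", "openai", "gemini", "deepseek", "kimi",
                   "qwen", "zhipu", "minimax", "ollama", "lmstudio", "custom"}

def parse_model(spec: str, default_provider: str = "anthropic") -> tuple[str, str]:
    """Split 'provider/model' or 'provider:model' into (provider, model)."""
    for sep in ("/", ":"):
        if sep in spec:
            prefix, name = spec.split(sep, 1)
            if prefix in KNOWN_PROVIDERS:
                return prefix, name
    return default_provider, spec   # bare name: fall back to default provider

assert parse_model("ollama/qwen2.5-coder") == ("ollama", "qwen2.5-coder")
assert parse_model("openai:gpt-4o") == ("openai", "gpt-4o")
```

Note the prefix check: a colon inside a model tag like `qwen2.5-coder:32b` is not mistaken for a provider separator.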

11
Cloud + Local Providers
40+
Models Ready Today
via OpenAI-compat endpoints
Anthropic
Claude Opus · Sonnet · Haiku
ANTHROPIC_API_KEY
OpenAI
GPT-4o · o3 · o1
OPENAI_API_KEY
Google
Gemini 2.5 · Flash
GEMINI_API_KEY
DeepSeek
Chat · Reasoner · V3
DEEPSEEK_API_KEY
Kimi
Moonshot · K2.5
MOONSHOT_API_KEY
Qwen
Max · Plus · QwQ
DASHSCOPE_API_KEY
Zhipu
GLM-4 · Flash
ZHIPU_API_KEY
MiniMax
Text-01 · VL-01
MINIMAX_API_KEY
Ollama
Any local model
NO KEY NEEDED
LM Studio
Local · GUI
NO KEY NEEDED
Custom
OpenAI-compat
CUSTOM_BASE_URL
Free Tier

14 frontier models.
Zero cost.

NVIDIA NIM hosts frontier models at 40 RPM each, free. Sign up at build.nvidia.com and Dulus routes to them automatically — with fallback when limits hit.

14
Models
40
RPM each
AUTO
Fallback
DeepSeek R1
REASONING
DeepSeek V3
INSTRUCT
Kimi K2.5
LONG CONTEXT
GLM-4
ZHIPU AI
MiniMax T-01
TEXT + VISION
Mistral Nemotron
NVIDIA-TUNED
Llama 3.3 70B
META
Llama 3.1 405B
META · FLAGSHIP
Llama Nemotron
REASONING
Qwen2.5 Coder
ALIBABA
Qwen3 235B A22B
MoE
Phi-4
MICROSOFT
Gemma 3 27B
GOOGLE
Mistral Large
INSTRUCT
AUTO-FALLBACK: deepseek-r1 → kimi-k2.5 → llama-3.3-70b → mistral-nemotron → … 14 models deep. zero downtime.
Get free NVIDIA key ↗
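Auto-fallback is an ordered retry under the hood. A sketch, not the shipped router: the model IDs come from the chain above, `call_model` is a hypothetical callable, and rotation happens on a rate-limit error (HTTP 429):

```python
class RateLimited(Exception):
    """Raised when a model hits its per-model 40 RPM cap (HTTP 429)."""

FALLBACK_CHAIN = ["deepseek-r1", "kimi-k2.5", "llama-3.3-70b", "mistral-nemotron"]

def complete_with_fallback(prompt: str, call_model) -> tuple[str, str]:
    """Try each free NIM model in order; return (model_used, reply)."""
    last = None
    for model in FALLBACK_CHAIN:
        try:
            return model, call_model(model, prompt)
        except RateLimited as exc:
            last = exc              # this model is throttled; rotate to next
    raise RuntimeError("all fallback models rate-limited") from last
```

Extend `FALLBACK_CHAIN` to the full 14-model list and the chain absorbs per-model limits without the session ever stalling.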
// zero to flight in 30 seconds

Quick Start

01 · Clone

Get the code

Clone the repo. No monorepo, no workspace, no lockfile drama. Just a folder.

02 · Install

One command

Use uv tool install . for a global install or pip install -r requirements.txt and run directly. No build step.

03 · Key

Set a model key

Any of the provider keys. Or skip entirely and use Ollama locally — no API key needed.

04 · Fly

Start hunting

Type dulus. Hit Enter. Tell it what to do. /help if you need a map.

bash
# clone
git clone https://github.com/KevRojo/Dulus
cd Dulus

# install (pick one)
uv tool install .                  # ← recommended
pip install -r requirements.txt    # ← or direct

# set a key
export ANTHROPIC_API_KEY=sk-ant-...
# or: OPENAI_API_KEY, GEMINI_API_KEY, NVIDIA_API_KEY, ...

# go
dulus
bash · local models (no key)
ollama pull qwen2.5-coder
dulus --model ollama/qwen2.5-coder

# or use NVIDIA's free tier
export NVIDIA_API_KEY=nvapi-...
dulus --model nvidia-web/deepseek-r1
bash · useful flags
dulus --model gpt-4o               # pick model
dulus --accept-all -p "init repo"  # non-interactive
dulus --thinking                   # extended thinking
git diff | dulus -p "write commit" # pipe in
// /brainstorm in action

The Mesa Redonda

Dulus spawns model personas and has them argue your problem in parallel — then lets you interrupt, address one directly, or stop the whole table mid-debate.

DEBATE TOPIC · Should we migrate the API to full async/await? · LIVE · ROUND 1
C
Claude
Sonnet 4
D
DeepSeek
R1
K
Kimi
K2.5
G
Gemini
2.5 Pro
/b confirm your current path → DeepSeek only
// interrupt anatomy
While agents run in parallel, you keep full control. Drop into any agent's context at any time without stopping the others.
/a → agent A (Claude)
/b → agent B (DeepSeek)
/c → agent C (Kimi)
/d → agent D (Gemini)
/stop → halt all agents
/mesa → broadcast to all
// agent activity feed

The Flock, Online

Sub-agents work autonomously in parallel. Every push, review, and message is logged in real time. The flock never sleeps.

ACTIVE AGENTS
agent://coder
feat/api-v2 · 12 tools used
agent://reviewer
feat/api-v2 · 4 issues found
agent://tester
ci/test-suite · 63/64 ✓
agent://researcher
spec/rfc-042 · reading docs
0
tools fired this session
INTER-AGENT MESSAGES
coder → reviewer
Pushed auth refactor to worktree. Can you check line 87?
just now
reviewer → coder
@rate_limit missing on /users endpoint. Also UserOut leaks .email.
12s ago
// /plugin install · auto-adapter

Any repo.
Zero manifest.

Plugins are not built-in — that's the point. Dulus ships with zero plugins by default. When you need one, point it at any Python repo and the Auto-Adapter reads the code, generates the manifest, installs deps, and registers the tools live. No YAML. No API. No manifest file required. This is a Dulus-exclusive feature.

auto-adapter · live install
$ /plugin install dorks@https://repo.url/dorks
Running: git clone --depth 1 → ~/.dulus/plugins/dorks
No plugin manifest found.
Would you like Dulus to auto-adapt this repository?
This uses AI to analyze the repo and generate a plugin
manifest. It may take a few minutes. [Y/n] Y
Missing manifest for 'dorks', attempting auto-adaptation...
✦ Read(dorks)
  [OK] → Detected 2 files
✦ Bash(pip install beautifulsoup4==4.13.5 requests==2.32.5 yagoogle)
  Running: python.exe -m pip install --quiet beautifulsoup4==4.13.5
  [OK] → Success
✦ Write(dorks/plugin_tool.py)
  [OK] → Success
Running adapter worker for 'dorks'...
[OK] → plugin_tool.py compiles (no SyntaxError)
[OK] → plugin_tool.py imports without runtime errors
[OK] → TOOL_DEFS and TOOL_SCHEMAS are exported
[OK] → TOOL_DEFS contains valid ToolDef objects (all 3)
[OK] → Tool 'list_dork_categories' runs successfully
[OK] → Tool 'list_google_dorks' runs successfully
✓ Dependencies installed for 'dorks'.
✓ Plugin 'dorks' installed successfully (user scope).
✓ Reloaded plugins: 31 tools registered, 7 modules cleared

$ hey, how many plugins do we have?
🦅🔥 We've got 7 plugins installed and active:
1 sherlock  – hunts usernames across every social network
2 fastcli   – speedtest to measure your internet speed
3 mempalace – local memory with semantic search
4 composio  – connection to 1000+ apps
5 yfinance  – market data (stocks, prices, etc.)
6 art       – ASCII art with 600+ fonts and 700+ pieces
7 dorks     – Google dorks and passive search
Want me to show you the tools inside any of them? 💪
// how it works
01
Point it at any Python repo
Dulus clones the repo. It needs no manifest, API, or configuration.
02
The Auto-Adapter analyzes the code
AI reads the repo, generates plugin_tool.py, installs dependencies, and verifies the exports.
03
Tools registered hot
No restart. The tools appear in the current session immediately.
04
Dulus uses them on its own
The agent calls the tools automatically whenever the prompt requires it.
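Step 02's verification gates can be sketched with the standard library. Hypothetical checker: it assumes the generated plugin_tool.py must export TOOL_DEFS and TOOL_SCHEMAS, as the install transcript shows; the real adapter worker surely does more.

```python
import importlib.util

def verify_plugin_module(path: str) -> list[str]:
    """Compile + import a generated plugin_tool.py and check its exports."""
    passed: list[str] = []
    source = open(path, encoding="utf-8").read()
    compile(source, path, "exec")            # raises SyntaxError if broken
    passed.append("compiles (no SyntaxError)")
    spec = importlib.util.spec_from_file_location("plugin_tool", path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)          # raises on import-time errors
    passed.append("imports without runtime errors")
    for name in ("TOOL_DEFS", "TOOL_SCHEMAS"):
        if not hasattr(module, name):
            raise ValueError(f"missing export: {name}")
    passed.append("TOOL_DEFS and TOOL_SCHEMAS are exported")
    return passed
```

Each returned string mirrors one `[OK] →` line from the transcript; any failure raises instead, which is the adapter's cue to regenerate.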
// example: active plugins
sherlock
hunts usernames across every social network
OSINT
yfinance
market data · stocks · real-time prices
FINANCE
art
ASCII art with 600+ fonts and 700+ pieces
CREATIVE
dorks
Google dorks and automated passive search
SEARCH
mempalace
local memory with semantic search
MEMORY
fastcli
speedtest · measures your internet speed
NETWORK
// install any repo
/plugin install nombre@https://repo.url/repo
No manifest · no configuration · Dulus figures it out on its own
/plugin list
/plugin enable dorks
/plugin disable art
/plugin update sherlock
/plugin uninstall dorks
/plugin reload
// /skills · composio · anthropic-compatible

800+ Skills.
Ready to drop in.

Dulus connects natively to Composio — the largest library of Anthropic-compatible tools. GitHub, Slack, Linear, Notion, Jira, Gmail, Google Sheets, Postgres, Stripe… inject any skill in seconds.

800+
Skills available
1
Command to install
MCP
Protocol compatible
Composable chains
⊕ skill injection · composio
# browse and install any skill
$ /skills
▲ loading composio skill registry...
✓ 800+ skills available

# inject a skill into this session
$ /skills inject github
✓ github skill loaded · 32 tools
▲ tools registered: create_issue · merge_pr
▲ review_code · get_diff…

# use immediately — no restart
$ /skills inject particle-playground
✓ particle-playground loaded · 1 tool

# now ask dulus to use it
[Dulus] » create a fireworks particle system
→ skill particle_playground.generate_prompt
✓ prompt generated · 6 parameters set
✓ canvas code written to fireworks.html
// popular skills
particle-playground · live demo · injected skill ↗ open full
↑ This is a live Composio skill running inside Dulus's sandbox. Tweak the controls — the prompt updates in real time. Copy it and paste into Dulus.
// zero API spend · playwright · session harvest

Use the chats you
already pay for.

Dulus can talk to Claude, Kimi, Gemini and DeepSeek through their browser sessions — no API key, no per-token billing. Your Pro subscription becomes a Dulus provider.

$0.00
API cost per token
5
Web providers
AUTO
Cookie harvest
Context via Pro plan
⬡ session harvest · playwright
# one-time setup: capture your browser session
$ /harvest
▲ opening Claude.ai in Chromium...
  log in normally, then press Enter
✓ session captured · cookies saved
✓ claude-web provider ready

# harvest other providers
$ /harvest-kimi
✓ kimi-web provider ready
$ /harvest-gemini
✓ gemini-web provider ready

# use them exactly like any other provider
$ dulus --model claude-web "refactor auth"
▲ routing via claude.ai web session...
→ read src/auth/session.py ✓
→ edit src/auth/session.py ✓
→ test tests/auth/** ✓ 42 passed

# claude thinks it's in its own chat UI
# but dulus is orchestrating every tool call
▲ tokens billed: $0.00 · session: claude_pro
Powered by Playwright · headless browser automation
Claude
claude-web
ACTIVE
claude.ai Pro session. Opus 4, Sonnet 4, full context window. Your subscription, Dulus's talons.
/harvest
Claude Code
claude-code-web
ACTIVE
Claude Code's browser session. Agentic mode, full tool belt, zero API bill.
/harvest
Kimi
kimi-web
ACTIVE
kimi.ai web session. 128k context, K2.5 reasoning. Harvest once, use forever.
/harvest-kimi
Gemini
gemini-web
ACTIVE
Google Gemini 2.5 Pro via browser. 1M context. Your Google One subscription, weaponized.
/harvest-gemini
DeepSeek
deepseek-web
ACTIVE
DeepSeek V3 / R1 via chat.deepseek.com. Free tier, no key, reasoning mode included.
/harvest-deepseek
How it works
01 Dulus opens a Chromium window via Playwright
02 You log in normally — Dulus captures the session cookies
03 Subsequent requests replay those cookies headlessly
04 The model sees its own web UI; Dulus sees the output
05 Tool calls, streaming, context — all proxied transparently
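Step 03 is plain cookie replay. A minimal sketch assuming Playwright's standard storage_state JSON layout ({"cookies": [{"name", "value", "domain", ...}]}); the actual harvesting and transparent proxying are far more involved.

```python
import json

def cookie_header(storage_state_path: str, domain: str) -> str:
    """Build a Cookie: header value from a Playwright storage_state file."""
    state = json.load(open(storage_state_path, encoding="utf-8"))
    pairs = [
        f'{c["name"]}={c["value"]}'
        for c in state.get("cookies", [])
        # keep cookies whose domain matches, treating ".claude.ai" == "claude.ai"
        if domain.endswith(c.get("domain", "").lstrip("."))
    ]
    return "; ".join(pairs)
```

Attach the resulting header to headless requests and the site sees the same logged-in session you captured in step 02.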
// every model. one cli.

Pick your brain.
We'll handle the rest.

One flag. Any provider. Dulus speaks every dialect — cloud, local, free, paid. Switch mid-session with /model.

model switcher · live demo
# same prompt. different brain. zero config change.
OpenAI
OPENAI_API_KEY
gpt-4o · gpt-4o-mini · o3 · o4-mini · o1
5 models
Google Gemini
GEMINI_API_KEY
gemini-2.5-pro · gemini-2.0-flash · gemini-1.5-pro
3 models
DeepSeek
DEEPSEEK_API_KEY
deepseek-v3 · deepseek-r1 (reasoner)
2 models
Kimi / Moonshot
MOONSHOT_API_KEY
kimi-k2.5 · moonshot-v1-8k/32k/128k
4 models
Qwen
DASHSCOPE_API_KEY
qwen-max · qwen-plus · qwen-turbo · qwq-32b
4 models
MiniMax
MINIMAX_API_KEY
MiniMax-Text-01 · MiniMax-VL-01 · abab6.5s
3 models
Zhipu / GLM
ZHIPU_API_KEY
glm-4-plus · glm-4 · glm-4-flash
3 models
N
NVIDIA NIM
NVIDIA_API_KEY
FREE TIER
14 models · 40 RPM each · auto-fallback · no credit card
deepseek-r1 · kimi-k2.5 · llama-3.3-70b · mistral-nemotron…
14 models FREE
Ollama
NO KEY NEEDED
LOCAL
any GGUF model · qwen2.5-coder · llama3.3 · mistral · phi4
∞ models
LM Studio
NO KEY NEEDED
LOCAL
any local model via OpenAI-compat server
∞ models
Custom Endpoint
CUSTOM_BASE_URL
any OpenAI-compat server · vLLM · TGI · remote GPU
∞ models
// zero cloud. zero key.

Runs Offline.
Completely.

no internet required
# pull a model from ollama.com
$ ollama pull qwen2.5-coder
pulling manifest... ████████████ 100%
✓ model ready

# point dulus at it
$ dulus --model ollama/qwen2.5-coder
▲ DULUS ollama/qwen2.5-coder · local
✓ model loaded · 0ms cold start
✓ no API key · no telemetry · no network
[Dulus] [0%] »
  • Air-gapped
    No packets leave your machine. Works on flights, submarines, government networks.
  • 🦙
    Any Ollama model
    Everything on ollama.com — Llama 3, Mistral, Phi-4, Gemma, Qwen, DeepSeek local…
  • LM Studio compatible
    Running LM Studio? Point CUSTOM_BASE_URL at it. Same Dulus, zero changes.
  • Full tool support
    Function-calling models (qwen2.5-coder, llama3.3, phi4) get every Dulus tool — no cloud required.
PRO TIP
For coding: ollama/qwen2.5-coder:32b
For reasoning: ollama/qwq
For speed: ollama/phi4-mini
// /voice · /tts

Talk. Listen. Ship.

Full offline voice pipeline. Whisper in, Kokoro out. No cloud. No subscription. Your machine, your voice.

🎙️
Voice Input
Whisper · offline · multilingual
/voice
listening_
# press-and-hold to record
$ /voice
✓ Whisper loaded · base.en model
✓ mic: MacBook Pro Microphone
● recording... speak now
✓ transcribed: "refactor the auth module"
▲ 🦅 Sharpening talons on the AST...
  • Offline Whisper — no API key
  • Any microphone · /voice device
  • Multilingual · /voice lang zh
  • Hint domain terms via voice_keyterms.txt
🔊
TTS — Dulus Talks Back
Kokoro · offline · natural voice
/tts
speaking_
# enable voice output
$ /tts
✓ Kokoro engine loaded
✓ voice: af_heart · 24kHz

# dulus now speaks its responses aloud
▶ playing: "I've refactored auth.py. Tests pass."
  • Kokoro TTS — fully offline
  • No ElevenLabs, no latency, no cost
  • Natural voice · multiple voice profiles
  • Streams audio as response generates
// /telegram token chat_id

Dulus in Your Pocket.

Full Dulus in Telegram. Slash commands, model switching, file sharing, streaming responses. Poke a long-running agent from the bus.

🦅
Dulus Bot
● online
refactor auth, no compromise
🦅 Dulus · claude-sonnet
On it. Reading session.py and tokens.py…
read grep edit ✓
/model nvidia-web/deepseek-r1
🦅 Switched → deepseek-r1
Model changed. Continuing...
✓ Done
Auth refactored. 3 files, +142 -218. Tests: 42/42 ✓
/checkpoint list
  • 📲
    Full Dulus in Telegram
    Every slash command, every model, every tool — from your phone.
  • Streaming responses
    Responses stream in real-time as Telegram messages. Long tasks post progress updates.
  • 📎
    File sharing
    Send code files, get diffs back. Send a screenshot to the vision model.
  • 🔑
    One env var
    TELEGRAM_BOT_TOKEN — that's the whole config. Auto-starts next launch.
SETUP
1. Create a bot via @BotFather
2. /telegram <token> <chat_id>
3. Done — persists across restarts
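Under the hood, a bridge like this only needs Telegram's Bot API sendMessage endpoint. Minimal sketch (not Dulus's actual bridge; error handling and polling omitted):

```python
import json
from urllib import request

API = "https://api.telegram.org"

def build_send_message(token: str, chat_id: str, text: str):
    """Return (url, json_body) for a Telegram Bot API sendMessage call."""
    url = f"{API}/bot{token}/sendMessage"
    body = json.dumps({"chat_id": chat_id, "text": text}).encode()
    return url, body

def send(token: str, chat_id: str, text: str) -> None:
    """Post one message; a real bridge should inspect the JSON reply."""
    url, body = build_send_message(token, chat_id, text)
    req = request.Request(url, data=body,
                          headers={"Content-Type": "application/json"})
    request.urlopen(req)
```

Streaming responses then reduce to calling `send` (or editMessageText) as chunks arrive.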
// /ssj · developer mode

Developer Mode:
Unlocked.

SSJ = Super Saiyan. When you need to see everything. Token counts, provider debug logs, stream latency, tool inspector, prompt viewer. Nothing hidden.

⚡ SSJ MODE ACTIVE
══════════════════════════════════════
⚡ SSJ DEVELOPER MODE
══════════════════════════════════════
[1] Raw token counts         ON
[2] Provider debug logs      ON
[3] Stream latency timers    ON
[4] Tool call inspector      ON
[5] Prompt injection viewer  ON
[6] Memory trace             ON
[0] Exit SSJ
──────────────────────────────────────
tokens   in: 4,892  out: 1,247  cost: $0.0041
latency  first_token=420ms  total=3.2s
tools    read×3 edit×1 bash×1 grep×2
»
🔢
Raw token counts
Input, output, context usage — every turn, every tool call.
🔍
Tool call inspector
See exactly what the model called, with what args, and what came back.
⏱️
Stream latency timers
Time to first token, total generation time, per-tool latency.
💉
Prompt injection viewer
See the full system prompt, memory injections, and context assembly.
// /remember · /checkpoint

Never Lose Context.
Ever.

Like git commits for your conversations. Persistent memory survives sessions. Checkpoints let you rewind files and context together.

🧬

Persistent Memory

Facts, preferences, project context — remembered across sessions. Ranked by confidence × recency.

memory
$ /remember "always use anyio for async"
✓ saved · confidence: 1.0

$ /memory search async
♛ anyio for async       [conf: 1.0 · gold]
  auth_module_patterns  [conf: 0.94]
  team_preferences      [conf: 0.79]

$ /memory consolidate
✓ 3 new memories distilled from session
💾

Checkpoints

Auto-snapshot conversation + files every turn. Break something? Rewind. Files and context restored together.

checkpoints
$ /checkpoint list
#041 pre-refactor    2h ago (files: 14)
#042 pre-migration   1h ago (files: 8)
#043 post-auth-fix   [current]

# something went wrong, rewind
$ /checkpoint 041
✓ files restored · 14 files rewound
✓ context restored to #041
SESSION TIMELINE
start
#041
pre-refactor
edits
#042
pre-migration
💥 broke it
↺ rewind
#041
#043
current
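The rewind mechanics fit in a dozen lines. In-memory sketch with a hypothetical class; Dulus presumably persists its snapshots to disk, but the invariant is the same: files and context are stored and restored as one unit.

```python
import copy

class Checkpoints:
    """Snapshot (files, context) every turn; rewind both together."""

    def __init__(self) -> None:
        self._snaps: dict[int, tuple[dict, list]] = {}
        self._next = 0

    def snapshot(self, files: dict[str, str], context: list[str]) -> int:
        cid = self._next
        # deep-copy so later edits never mutate the stored snapshot
        self._snaps[cid] = (copy.deepcopy(files), copy.deepcopy(context))
        self._next += 1
        return cid

    def rewind(self, cid: int) -> tuple[dict[str, str], list[str]]:
        files, context = self._snaps[cid]
        return copy.deepcopy(files), copy.deepcopy(context)
```

Because every turn snapshots both halves, `/checkpoint 041` can never leave files and conversation out of sync.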
// / + tab to explore

Every Command.
One Cheat Sheet.

// /webchat [port]

Dulus in the Browser.

No terminal required. Spin up a local web UI with one command — Flask backend, full streaming, task manager, personas, everything. Same Dulus, glass UI.

kimi/kimi-k2.5
Later we'll stream and run Dulus, no problem! 130 ↑, 1084 🐦
just now
ok, what's up? Repo status
3s ago
🦅 Dulus claude-sonnet-4 · streaming
Analyzing the repo... 3 untracked files. ⚠ backend/tasks.py has uncommitted changes. The last commit was a new engine. Ready to push whenever you want.
→ read ✓ grep ✓ bash ⧗ write
Talk to Dulus [ Enter to send · Shift+Enter for a new line ]
One command
Just /webchat — starts Flask on localhost:5000. LAN-accessible too.
Full streaming
Token-by-token output, tool call indicators, model badge. No refresh.
Task Manager baked in
Create tasks, track agents, view status — same window, TASK MANAGER button.
📱
Mobile ready
LAN URL printed on startup. Open on your phone. Full Dulus from the couch.
$ dulus
[Dulus] » /webchat
✓ WebChat listening → http://localhost:5000
From phone (same wifi) → http://10.0.0.6:5000
Stop with: /webchat stop
// /task · task manager

Tasks. Tracked.
Agents. Assigned.

Create, assign, filter, and close tasks from the REPL, the WebChat, or the Desktop GUI. Agents report progress automatically. Everything in one board.

Dulus Task Board
@kimi-code:0 @kimi-code2:0 @kimi-code3:0 Total: 3 · 10% done
Pending 2
#1
Refactor auth module
created via REPL · 2h ago
#2
Write e2e tests for /users
created via webchat · 45m ago
In Progress 1
#3
Update OpenAPI schema
started 12m ago · 4 tools used
Completed 1
#0
love dulus
created via REPL · completed 30m ago
// task commands
task manager
# create from REPL
$ /task create "refactor auth"
✓ #1 created · pending

# assign to agent
$ /task assign 1 kimi-code
✓ #1 → @kimi-code

# check status
$ /task list
✓ #1 in-progress · @kimi-code
  #2 pending     · @kimi-code2

# close it out
$ /task done 1
✓ #1 → completed
ALSO AVAILABLE IN
WebChat → TASK MANAGER button
Desktop GUI → Tasks view
Agents → auto-create tasks via REPL
// python dulus_gui.py

Native Desktop GUI.

Full PyQt app. Sidebar history, persona switching, integrated task board, tool panel, theme selector, settings dialog. Every Dulus feature, no terminal required.

Dulus
kimi-code/kimi-for-coding
● Ready
History
16:55 New conversation ×
16:54 hey ×
16:26 New conversation ×
23:48 hey daughter, how are you? ×
06:14 hi daughter, how are you? ×
23:09 [MemPalace — relevant memories pre-l…] ×
🦅
New conversation
Start typing or activate a task
📎 Type a message... 🎙
  • 🖼
    PyQt6 native app
    Runs on Windows, macOS, Linux. Native menus, keyboard shortcuts, system tray.
  • 🎭
    Persona switcher
    Swap Dulus's personality mid-session. Sidebar shows active persona with one click.
  • 📋
    Integrated task board
    Full kanban view inside the GUI. Create tasks, watch agents move them to done.
  • 🔧
    Tool panel
    Visual tool inspector. See every tool call live, with args and output.
  • 🎨
    Themes
    Dark, light, and custom themes via gui/themes.py. Hot-swap without restart.
LAUNCH
python dulus_gui.py
python dulus_gui.py --theme dark
GUI + terminal run side-by-side
// questions we actually get

FAQ

Which local models can actually drive Dulus's tools?
Use a model with native function-calling support: qwen2.5-coder, llama3.3, mistral, phi4. Base models without tool-use fine-tuning won't dispatch tools reliably.
How do I point Dulus at my own OpenAI-compatible server?
In the REPL: /config custom_base_url=http://your-server:8000/v1, then /model custom/your-model-name. Any OpenAI-compatible server works.
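Every OpenAI-compatible server accepts the same POST to /chat/completions. A minimal request builder; the server URL and model name are the same placeholders as in the answer above:

```python
import json

def build_chat_request(base_url: str, model: str, prompt: str):
    """Build (url, json_body) for an OpenAI-compatible /chat/completions call."""
    url = base_url.rstrip("/") + "/chat/completions"
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return url, json.dumps(body)

url, body = build_chat_request("http://your-server:8000/v1", "your-model-name", "hi")
assert url == "http://your-server:8000/v1/chat/completions"
```

vLLM, TGI, LM Studio, and remote GPUs all answer this same shape, which is why one custom_base_url setting is enough.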
Is --accept-all safe to use?
--accept-all auto-approves every write and shell command. On prod: don't. Use plan mode (read-only; only plan.md is writable) or the default auto mode, which prompts before writes. Use your brain — Dulus will use its talons.
How do I get voice input to recognize my project's jargon?
Add domain terms to .dulus/voice_keyterms.txt, one per line. Whisper respects the hint list. Works great for obscure package names, internal project names, and acronyms.
How do I see what a session is costing me?
Type /cost in the REPL. Dulus tracks token usage and estimates USD cost for every turn, broken down by model. Session totals persist across /save and /load.
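The per-turn estimate is simple arithmetic. Sketch with illustrative $/1M-token rates; the rate table and model keys here are examples, not Dulus's actual price list:

```python
# (input_rate, output_rate) in USD per 1M tokens — illustrative values only
RATES = {"claude-sonnet": (3.00, 15.00), "gpt-4o-mini": (0.15, 0.60)}

def turn_cost(model: str, tokens_in: int, tokens_out: int) -> float:
    """Estimate the USD cost of one turn from its token counts."""
    rate_in, rate_out = RATES[model]
    return (tokens_in * rate_in + tokens_out * rate_out) / 1_000_000
```

Summing `turn_cost` over all turns, grouped by model, yields exactly the per-model session breakdown /cost prints.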
Can I add my own spinner lines?
Yes, and please do. Edit dulus/spinners.py, add your line, and PR it. Bonus points for a cultural reference we'll understand in 2046. The current record holder: "☕ If I'm taking so long, don't worry, I'm just talking to your mom."