Troubleshooting
Something broken? Let's fix it.
Installation Issues
"command not found: claudish"
With npx (no install):
npx claudish@latest --version
Global install:
npm install -g claudish
# or
bun install -g claudish
Verify:
which claudish
claudish --version
"Node.js version too old"
Claudish requires Node.js 18+.
node --version # Should be 18.x or higher
# Update Node.js
nvm install 20
nvm use 20
"Claude Code not installed"
Claudish needs the official Claude Code CLI.
# Check if installed
claude --version
# If not, get it from:
# https://claude.ai/claude-code
API Key Issues
"OPENROUTER_API_KEY not found"
Set the environment variable:
export OPENROUTER_API_KEY='sk-or-v1-your-key'
Or add to .env:
echo "OPENROUTER_API_KEY=sk-or-v1-your-key" >> .env
"Invalid API key"
- Check at openrouter.ai/keys
- Make sure the key starts with sk-or-v1-
- Check for extra spaces or quotes
# Debug
echo "Key: [$OPENROUTER_API_KEY]" # Spot extra characters
"Insufficient credits"
Check your balance at openrouter.ai/activity.
Free tier gives $5. After that, add credits.
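If the key itself is valid, the same key-info endpoint reports usage and limit figures you can compare against your balance (the usage/limit field names are an assumption; check OpenRouter's API docs):
curl -s https://openrouter.ai/api/v1/auth/key -H "Authorization: Bearer $OPENROUTER_API_KEY" | jq '.data | {usage, limit}'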
Model Issues
"Model not found"
Verify the model exists:
claudish --models your-model-name
Common mistakes:
- Typo in model name
- Model was removed from OpenRouter
- Using the wrong format (should be provider/model-name)
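To cross-check an ID against OpenRouter's catalog directly, you can filter the public models endpoint (the same one used under "Network error" below; assumes jq is installed):
curl -s https://openrouter.ai/api/v1/models | jq -r '.data[].id' | grep -i grok   # Swap grok for your model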
"Model doesn't support tools"
Some models can't use Claude Code's file/bash tools.
Check capabilities:
claudish --top-models
# Look for ✓ in the "Tools" column
Use a model with tool support:
- x-ai/grok-code-fast-1 ✓
- openai/gpt-5.1-codex ✓
- google/gemini-3-pro-preview ✓
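To check a specific model yourself, the models endpoint lists each model's supported parameters; whether "tools" appears there is a reasonable proxy for tool support (the supported_parameters field name is an assumption, confirm against OpenRouter's API docs):
curl -s https://openrouter.ai/api/v1/models | jq '.data[] | select(.id == "x-ai/grok-code-fast-1") | .supported_parameters'
# Expect "tools" in the list for tool-capable models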
"Context length exceeded"
Your prompt + history exceeded the model's limit.
Solutions:
- Start a fresh session
- Use a model with larger context (Gemini 3 Pro has 1M)
- Reduce context by being more specific
Connection Issues
"Connection refused" / "ECONNREFUSED"
The proxy server couldn't start.
Check if port is in use:
lsof -i :3456 # Replace with your port
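If something you don't need is holding the port, you can free it (check the PID first so you don't kill the wrong process):
kill $(lsof -ti :3456)   # Same port as above; lsof -t prints just the PID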
Use a different port:
claudish --port 4567 "your prompt"
Or let Claudish pick automatically:
unset CLAUDISH_PORT
claudish "your prompt"
"Timeout" / "Request timed out"
OpenRouter or the model provider is slow/down.
Check OpenRouter status: Visit status.openrouter.ai
Try a different model:
claudish --model minimax/minimax-m2 "your prompt" # Usually fast
"Network error"
Check your internet connection:
curl https://openrouter.ai/api/v1/models
If that fails, it's a network issue on your end.
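Corporate proxies and VPNs are a common culprit. It's worth checking whether proxy variables are set and whether they allow traffic to openrouter.ai:
env | grep -i proxy   # HTTP_PROXY / HTTPS_PROXY, if set, must permit openrouter.ai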
Runtime Issues
"Unexpected token" / JSON parse error
The model returned invalid output. This happens occasionally with some models.
Solutions:
- Retry the request
- Try a different model
- Simplify your prompt
"Tool execution failed"
The model tried to use a tool incorrectly.
Common causes:
- Model doesn't understand Claude Code's tool format
- Complex tool call the model can't handle
- Sandbox restrictions blocked the operation
Solutions:
- Try a model known to work well (grok-code-fast-1, gpt-5.1-codex)
- Use the --dangerous flag to disable the sandbox (careful!)
- Simplify the task
"Session hung" / No response
The model is thinking... or stuck.
Kill and restart:
# Ctrl+C to cancel
# Then restart
claudish --model x-ai/grok-code-fast-1 "your prompt"
Interactive Mode Issues
"Readline error" / stdin issues
Claudish's interactive mode has careful stdin handling, but conflicts can occur.
Solutions:
- Exit and restart Claudish
- Use single-shot mode instead
- Check for other processes using stdin
"Model selector not showing"
Make sure you're in a TTY:
tty # Should show /dev/ttys* or similar
If piping input, the selector is skipped. Pass the --model flag instead:
echo "prompt" | claudish --model x-ai/grok-code-fast-1 --stdin
MCP Server Issues
"MCP server not starting"
Test it manually:
OPENROUTER_API_KEY=sk-or-v1-... claudish --mcp
# Should output: [claudish] MCP server started
If nothing happens, check your API key is set correctly.
"Tools not appearing in Claude"
- Restart Claude Code after adding MCP config
- Check your settings file syntax (valid JSON?)
- Verify the path:
~/.config/claude-code/settings.json
Correct config:
{
"mcpServers": {
"claudish": {
"command": "claudish",
"args": ["--mcp"],
"env": {
"OPENROUTER_API_KEY": "sk-or-v1-..."
}
}
}
}
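A quick way to rule out a syntax problem is to run the settings file through a JSON parser (jq here; any validator works):
jq . ~/.config/claude-code/settings.json   # Prints the config if valid, a parse error with line number if not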
"run_prompt returns error"
"Model not found"
Check the model ID is correct. Use list_models tool first to see available models.
"API key invalid" The API key in your MCP config might be wrong. Check it at openrouter.ai/keys.
"Rate limited" OpenRouter has rate limits. Wait a moment and try again, or check your account limits.
"MCP mode works but CLI doesn't" (or vice versa)
The CLI and the MCP server read their API keys from different places, so one can work while the other fails:
- CLI: uses OPENROUTER_API_KEY from the environment or .env
- MCP: uses the key from Claude Code's MCP settings
Make sure both have valid keys.
Performance Issues
"Slow responses"
Causes:
- Model is slow (some are)
- OpenRouter routing delay
- Large context
Solutions:
- Use a faster model (grok-code-fast-1 is quick)
- Reduce context size
- Check OpenRouter status
"High token usage"
Check your usage:
claudish --audit-costs # If using cost tracking
Reduce usage:
- Be more specific in prompts
- Don't include unnecessary files
- Use single-shot mode for one-off tasks
Debug Mode
When all else fails, enable debug logging:
claudish --debug --verbose --model x-ai/grok-code-fast-1 "your prompt"
This creates logs/claudish_*.log with detailed information.
Share the log (redact sensitive info) when reporting issues.
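Before pasting a log anywhere, a sketch like this strips anything that looks like an OpenRouter key (pattern assumed from the sk-or-v1- prefix; still review the output yourself):
sed 's/sk-or-v1-[A-Za-z0-9_-]*/sk-or-v1-REDACTED/g' logs/claudish_*.log > claudish_log_redacted.txt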
Getting Help
Check documentation:
Report a bug: github.com/MadAppGang/claude-code/issues
Include:
- Claudish version (claudish --version)
- Node.js version (node --version)
- Error message (full)
- Steps to reproduce
- Debug log (if possible)
FAQ
"Is my code sent to OpenRouter?" Yes. OpenRouter routes it to your chosen model provider. Check their privacy policies.
"Can I use this with private/enterprise models?" If they're accessible via OpenRouter, yes. Use custom model ID option.
"Why isn't X model working?" Not all models support Claude Code's tool-use protocol. Stick to recommended models.
"Can I run multiple instances?" Yes. Each instance gets its own proxy port automatically.