Intelligent CLI assistant that intercepts failed commands with AI-powered feedback
Parrot combines a 6-tier intelligence system with modern generative AI to deliver context-aware, personality-matched responses for every command failure. Works immediately with zero dependencies, seamlessly upgrades with API or local LLM backends. Implements ML-inspired algorithms (TF-IDF, Markov chains, adversarial generation, vector embeddings) in pure Go with no external ML dependencies.
Seamlessly falls back from API → Local LLM → 6-tier intelligence
TF-IDF, Markov chains, ensemble voting, adversarial generation, RL
Works immediately after installation with ~3000 built-in insults
Native hooks for Bash, Zsh, and Fish shells with async mode
Mild, sarcastic, or savage response styles with context adaptation
Detects late night coding, CI environments, git branches, project types
Permission, network, syntax, merge conflicts, build failures, and more
< 200KB memory footprint, instant fallback responses
| Target | Description |
|---|---|
| `make build` | Production build with optimizations |
| `make dev` | Development build with debug symbols |
| `make test` | Run test suite |
| `make rpm` | Build RPM package |
| `make clean` | Remove build artifacts |
After installation, run the guided setup wizard for complete configuration:
The 6-step wizard walks through system check, intelligence level selection, backend configuration, shell integration, personality choice, and testing.
Parrot works immediately with its built-in 6-tier intelligence system. For AI-enhanced responses:
Then edit `~/.config/parrot/config.toml` and set `provider = "anthropic"` and `endpoint = "https://api.anthropic.com/v1"`.
| Model | Size | Use Case |
|---|---|---|
| `qwen2.5:0.5b` | ~500MB | Ultra-light, older hardware |
| `phi3.5:3.8b` | ~2.3GB | Best speed/quality balance |
| `llama3.2:3b` | ~2GB | Default, good quality |
| `mistral:7b` | ~4.1GB | Higher quality, slower |
| Flag | Description |
|---|---|
| `--version` | Show version number |
| `--help`, `-h` | Show help text |
| `--spicy` | Use quality mode (richer responses, slower) |
| `--debug` | Show which backend generated the response |
Parrot searches for configuration in this priority order:
1. `PARROT_CONFIG` environment variable
2. `/etc/parrot/config.toml` (system-wide, for RPM installs)
3. `~/.config/parrot/config.toml` (XDG user config)
4. `~/.parrot.toml` (home directory)
5. `./parrot.toml` (current directory, for development)

# Minimal: Use local Ollama only
[general]
personality = "sarcastic"
[api]
enabled = false
[local]
enabled = true
model = "llama3.2:3b"

# API Backend Configuration
[api]
enabled = true # Enable cloud LLM backend
provider = "openai" # openai, anthropic, or custom
endpoint = "https://api.openai.com/v1" # API base URL
api_key = "" # Or use PARROT_API_KEY env
model = "gpt-3.5-turbo" # gpt-4, gpt-4o, claude-3-haiku
timeout = 3 # Seconds (keep low for shell UX)
# Local Ollama Backend Configuration
[local]
enabled = true # Enable local LLM backend
provider = "ollama" # Currently only ollama supported
endpoint = "http://127.0.0.1:11434" # Ollama server URL
model = "llama3.2:3b" # phi3.5:3.8b, qwen2.5:0.5b, etc.
timeout = 5 # Local needs more time
# General Behavior
[general]
personality = "savage" # mild, sarcastic, or savage
generation_mode = "snappy" # snappy (fast) or spicy (quality)
fallback_mode = false # Force 6-tier fallback only
debug = false # Show backend in output
colors = true # Colored terminal output
enhanced = false                 # Fancy formatting with borders

| Option | Type | Default | Description |
|---|---|---|---|
| `enabled` | bool | `true` | Enable API backend (OpenAI, Anthropic, or compatible) |
| `provider` | string | `openai` | API provider: openai, anthropic, or custom |
| `endpoint` | string | `https://api.openai.com/v1` | API base URL (change for Anthropic or self-hosted) |
| `api_key` | string | - | Your API key (or use `PARROT_API_KEY` environment variable) |
| `model` | string | `gpt-3.5-turbo` | Model name (gpt-4, gpt-4o, claude-3-haiku, etc.) |
| `timeout` | int | 3 | Request timeout in seconds (keep low for shell responsiveness) |
| Option | Type | Default | Description |
|---|---|---|---|
| `enabled` | bool | `true` | Enable local Ollama backend for private, offline LLM |
| `provider` | string | `ollama` | Local provider (currently only ollama supported) |
| `endpoint` | string | `http://127.0.0.1:11434` | Ollama server URL (default localhost) |
| `model` | string | `llama3.2:3b` | Model to use (phi3.5:3.8b, qwen2.5:0.5b, mistral:7b) |
| `timeout` | int | 5 | Request timeout in seconds (local models may need more time) |
| Option | Type | Default | Description |
|---|---|---|---|
| `personality` | string | `savage` | Response style: mild (gentle), sarcastic (witty), or savage (brutal) |
| `generation_mode` | string | `snappy` | Speed/quality tradeoff: snappy (fast) or spicy (richer, slower) |
| `fallback_mode` | bool | `false` | Force fallback responses only (disable all AI backends) |
| `debug` | bool | `false` | Show which backend generated the response |
| `colors` | bool | `true` | Enable colored terminal output (respects NO_COLOR) |
| `enhanced` | bool | `false` | Fancy formatting with borders and decorations |
Environment variables override config file values. This allows per-session or per-shell customization:
| Option | Type | Default | Description |
|---|---|---|---|
| `PARROT_CONFIG` | path | - | Override config file location |
| `PARROT_API_KEY` | string | - | API key for cloud providers |
| `PARROT_API_ENDPOINT` | string | - | Override API endpoint |
| `PARROT_API_MODEL` | string | - | Override API model |
| `PARROT_OLLAMA_ENDPOINT` | string | - | Override Ollama endpoint |
| `PARROT_OLLAMA_MODEL` | string | - | Override Ollama model |
| `PARROT_PERSONALITY` | string | - | Override personality setting |
| `PARROT_MODE` | string | - | Override generation mode |
| `PARROT_DEBUG` | bool | - | Enable debug output |
| `PARROT_ASYNC` | bool | - | Run in background (non-blocking) |
| `PARROT_FALLBACK_ONLY` | bool | - | Force fallback mode |
| `NO_COLOR` | bool | - | Disable colored output (standard) |
| `PARROT_NO_COLOR` | bool | - | Disable colored output (parrot-specific) |
| `OLLAMA_KEEP_ALIVE` | duration | - | Keep Ollama model loaded (e.g., "1h", "30m") |
# Parrot configuration via environment
export PARROT_API_KEY="sk-proj-..."
export PARROT_PERSONALITY="savage"
export PARROT_MODE="snappy"
export PARROT_ASYNC=true
export OLLAMA_KEEP_ALIVE="1h"
# Source parrot hooks
source /usr/share/parrot/parrot-hook.sh

[api]
enabled = true
provider = "openai"
endpoint = "https://api.openai.com/v1"
model = "gpt-4o-mini" # Cost-effective, fast
# model = "gpt-4o" # Higher quality
# model = "gpt-4-turbo" # Best quality
timeout = 3

[api]
enabled = true
provider = "anthropic"
endpoint = "https://api.anthropic.com/v1"
model = "claude-3-haiku-20240307" # Fast, cost-effective
# model = "claude-3-sonnet-20240229" # Balanced
# model = "claude-3-opus-20240229" # Best quality
timeout = 3

[api]
enabled = true
provider = "custom"
endpoint = "http://localhost:1234/v1" # LM Studio default
model = "local-model" # Model name in your server
api_key = "lm-studio" # Some servers need a placeholder
timeout = 5

Parrot is an intelligent CLI tool that intercepts failed command executions and responds with witty, context-aware feedback. It combines a sophisticated 6-tier intelligence system with modern generative AI to deliver responses that are relevant, humorous, and occasionally helpful.
Go 1.21+ with no external ML dependencies. All ML-inspired algorithms (TF-IDF, Markov chains, adversarial generation, vector embeddings) are implemented in pure Go for maximum portability and minimal footprint.
Parrot follows a layered architecture with graceful degradation at each level:
User Command Fails (Exit Code != 0)
↓
Shell Hook (bash/zsh/fish)
↓
parrot mock "command" "exit_code"
↓
Load Configuration
- Check PARROT_CONFIG env
- Load /etc/parrot/config.toml
- Load ~/.config/parrot/config.toml
- Apply environment overrides
↓
Context Extraction
├── Command type detection (40+ types)
├── Error classification (20+ categories)
├── Intent parsing (what were they trying to do?)
└── Environment context (CI, git, project, time)
↓
LLM Manager (Smart Fallback)
├── 1. Try API backend (OpenAI/Anthropic)
│ └── Timeout: 3s default
├── 2. Try Local backend (Ollama)
│ └── Timeout: 5s default
└── 3. Fall through to 6-tier intelligence
↓
Response Processing
├── Clean LLM artifacts
├── Apply personality filter
└── Format for terminal
↓
Output (colored, formatted)

Parrot supports three backend types with automatic fallback:
Cloud-based LLM providers for highest quality responses. Supports OpenAI, Anthropic Claude, and any OpenAI-compatible API endpoint.
Ollama-based local LLM for 100% private, offline operation. No data leaves your machine.
Built-in 6-tier intelligence system. Guaranteed to work with zero external dependencies. Uses ML-inspired algorithms for context-aware selection from ~3000 insults.
The LLM manager (`internal/llm/manager.go`) coordinates backend selection and handles the fallback chain:
func (m *Manager) Generate(ctx Context) Response {
// 1. Try API backend if enabled and configured
if m.config.API.Enabled && m.apiBackend.Ready() {
resp, err := m.apiBackend.Generate(ctx, m.config.API.Timeout)
if err == nil {
return m.clean(resp, "api")
}
}
// 2. Try local Ollama if enabled and running
if m.config.Local.Enabled && m.localBackend.Ready() {
timeout := m.getLocalTimeout()
resp, err := m.localBackend.Generate(ctx, timeout)
if err == nil {
return m.clean(resp, "local")
}
}
// 3. Fall through to 6-tier intelligence
return m.fallbackIntelligence.Generate(ctx)
}

LLM responses are processed before display:
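The exact pipeline lives in the LLM manager; as a rough sketch of the kind of cleanup involved (function name and marker list are illustrative assumptions, not Parrot's actual code):

```go
import "strings"

// Hypothetical sketch: trim whitespace and wrapping quotes, strip common
// chat-template markers, and keep only the first line so the output
// stays one-liner sized for a shell prompt.
func cleanResponse(raw string) string {
	s := strings.TrimSpace(raw)
	s = strings.Trim(s, "\"'") // models often quote short completions
	for _, marker := range []string{"</s>", "<|end|>", "Assistant:"} {
		s = strings.ReplaceAll(s, marker, "")
	}
	if i := strings.IndexByte(s, '\n'); i >= 0 {
		s = s[:i]
	}
	return strings.TrimSpace(s)
}
```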
Parrot caches responses to avoid repeated LLM calls for the same failure patterns:
The cache uses LRU (Least Recently Used) eviction to maintain a reasonable size. Default capacity is 1000 entries.
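A minimal sketch of such an LRU cache in Go (the key layout of command type, exit code, and personality is an assumption about what identifies a "failure pattern"):

```go
import "container/list"

// Sketch of an LRU response cache. The on-disk format of
// ~/.parrot/response_cache is not shown here.
type ResponseCache struct {
	capacity int                       // default 1000 entries
	order    *list.List                // front = most recently used
	entries  map[string]*list.Element  // key -> element holding *cacheEntry
}

type cacheEntry struct{ key, response string }

func NewResponseCache(capacity int) *ResponseCache {
	return &ResponseCache{
		capacity: capacity,
		order:    list.New(),
		entries:  make(map[string]*list.Element),
	}
}

func (c *ResponseCache) Get(key string) (string, bool) {
	el, ok := c.entries[key]
	if !ok {
		return "", false
	}
	c.order.MoveToFront(el)
	return el.Value.(*cacheEntry).response, true
}

func (c *ResponseCache) Put(key, response string) {
	if el, ok := c.entries[key]; ok {
		el.Value.(*cacheEntry).response = response
		c.order.MoveToFront(el)
		return
	}
	c.entries[key] = c.order.PushFront(&cacheEntry{key, response})
	if c.order.Len() > c.capacity { // evict the least recently used entry
		back := c.order.Back()
		c.order.Remove(back)
		delete(c.entries, back.Value.(*cacheEntry).key)
	}
}
```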
Parrot integrates with your shell to automatically intercept failed commands.
The `parrot install` command auto-detects your shell and installs the appropriate hooks.
For Bash, parrot uses `PROMPT_COMMAND` to check the exit status of the previous command:
# Bash hook mechanism
__parrot_last_command=""
__parrot_last_exit_code=0
parrot_debug_trap() {
__parrot_last_command="$BASH_COMMAND"
}
parrot_prompt_command() {
local exit_code=$?
if [[ $exit_code -ne 0 && -n "$__parrot_last_command" ]]; then
if [[ "$PARROT_ASYNC" == "true" ]]; then
parrot mock "$__parrot_last_command" "$exit_code" &
else
parrot mock "$__parrot_last_command" "$exit_code"
fi
fi
__parrot_last_command=""
}
trap 'parrot_debug_trap' DEBUG
PROMPT_COMMAND="parrot_prompt_command; ${PROMPT_COMMAND}"
Add to `~/.bashrc`:
export OLLAMA_KEEP_ALIVE="1h"
source /usr/share/parrot/parrot-hook.sh
For Zsh, parrot uses `preexec` and `precmd` hooks:
# Zsh hook mechanism
autoload -Uz add-zsh-hook
__parrot_last_command=""
parrot_preexec() {
__parrot_last_command="$1"
}
parrot_precmd() {
local exit_code=$?
if [[ $exit_code -ne 0 && -n "$__parrot_last_command" ]]; then
if [[ "$PARROT_ASYNC" == "true" ]]; then
parrot mock "$__parrot_last_command" "$exit_code" &
else
parrot mock "$__parrot_last_command" "$exit_code"
fi
fi
__parrot_last_command=""
}
add-zsh-hook preexec parrot_preexec
add-zsh-hook precmd parrot_precmd
Add to `~/.zshrc`:
export OLLAMA_KEEP_ALIVE="1h"
source /usr/share/parrot/parrot-hook.sh
For Fish, parrot uses the `fish_postexec` event:
function parrot_postexec --on-event fish_postexec
set -l last_status $status
set -l last_cmd $argv[1]
if test $last_status -ne 0
if test "$PARROT_ASYNC" = "true"
parrot mock "$last_cmd" "$last_status" &
else
parrot mock "$last_cmd" "$last_status"
end
end
end
The error classifier (`internal/llm/error_classifier.go`) recognizes 20+ error categories from exit codes, command patterns, and keywords:
| Category | Exit Codes | Keywords |
|---|---|---|
| permission | 126 | EACCES |
| not_found | 127 | no such file |
| syntax | 2 | syntax error |
| network | - | ECONNREFUSED |
| timeout | 130, 143 | timed out |
| segfault | 139 | SIGSEGV |
| memory | 137 | ENOMEM, OOM |
| dependency | - | module not found |
| authentication | - | 401, unauthorized |
| merge_conflict | - | CONFLICT |
| test_failure | - | test failed |
| build_failure | - | build failed |
| docker | 125 | container |
| npm | - | ERR! |
| git | - | fatal: |
| rust | - | error[E |
| python | - | Traceback |
| go | - | panic: |
| java | - | Exception |
| kubernetes | - | kubectl |
Classification follows a priority order:
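The concrete order isn't spelled out here, but a plausible sketch (hypothetical function, keyword table abbreviated from the categories above) checks the strongest signals first: specific exit codes, then keywords in the captured output, then command-name heuristics:

```go
import "strings"

// Illustrative sketch, not the actual classifier.
func classify(exitCode int, command, output string) string {
	switch exitCode { // 1. Specific exit codes are the strongest signal
	case 126:
		return "permission"
	case 127:
		return "not_found"
	case 137:
		return "memory"
	case 139:
		return "segfault"
	}
	keywords := map[string]string{ // 2. Error-output keywords
		"ECONNREFUSED": "network",
		"CONFLICT":     "merge_conflict",
		"Traceback":    "python",
		"panic:":       "go",
	}
	for kw, category := range keywords {
		if strings.Contains(output, kw) {
			return category
		}
	}
	if strings.HasPrefix(command, "git ") { // 3. Command-name heuristics
		return "git"
	}
	return "general"
}
```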
Parrot supports three personality levels that affect response tone and severity:
Gentle, constructive feedback. Severity level 1-4. Good for professional environments or users who prefer encouragement over roasting.
🦜 That command didn't work. Maybe check the documentation?
🦜 Hmm, something went wrong. The error message might have clues.
🦜 That didn't go as planned. Double-check the syntax?

Witty, clever mocking. Severity level 4-7. Balanced humor and feedback.
🦜 Git rejected your code harder than your last pull request.
🦜 Permission denied. Even your own computer doesn't trust you.
🦜 npm install failed. The node_modules folder is crying somewhere.

Brutal, devastating roasts. Severity level 6-10. Not recommended for sensitive souls.
🦜 Your docker build failed harder than your career trajectory.
🦜 Segfault. Your code quality mirrors your life choices.
🦜 Git rejected you. Just like everyone else eventually does.

Parrot extracts context from the environment to deliver more relevant responses:
Different responses for late night coding vs. morning:
🦜 It's 3 AM. The bugs aren't the only thing that needs sleep.
🦜 Late night coding? Bold strategy. Let's see if it pays off.

Branch-aware responses:
Detects CI environments via environment variables:
Detects project type from configuration files:
Parrot persists data for continuous learning and improvement:
| File | Purpose | Size |
|---|---|---|
| ~/.parrot/history.json | User failure history, streaks, patterns | ~10-50KB |
| ~/.parrot/context_graph.json | Contextual memory graph for Tier 6 | ~20-100KB |
| ~/.parrot/command_history.json | Edit distance matcher data | ~10-30KB |
| ~/.parrot/response_cache | LRU response cache | ~50-200KB |
Parrot implements a sophisticated 6-tier intelligence system that progressively applies more advanced techniques to generate context-aware, relevant insults:
Tier 6: Advanced ML Techniques
├── Contextual Memory Graph
├── GAN-inspired Adversarial Generator
├── Edit Distance Matcher (Levenshtein)
└── Contextual Vector Embeddings (32-dim)
↓ (if no high-quality match)
Tier 5: Hybrid Ensemble ML
├── TF-IDF Scoring
├── Markov Chain Generation
├── Tag-based Scoring
└── 5-Factor Weighted Voting
↓ (if no match > 0.7)
Tier 4: Historical Learning
├── Failure Streak Tracking
├── Pattern-based Insults
└── Dynamic Template Generation
↓ (if no history match)
Tier 3: LLM-like Context
├── CI/CD Detection
├── Error Pattern Recognition
└── Build System Awareness
↓ (if no context match)
Tier 2: Environment Detection
├── Time of Day
├── Git Branch
└── Project Type
↓ (if no env match)
Tier 1: Basic Context Matching
├── Command Type (40+)
├── Exit Code
└── Argument Patterns

The foundation tier extracts basic information from the failed command:
| Code | Meaning | Response Focus |
|---|---|---|
| 1 | General error | Generic failure jokes |
| 2 | Misuse of command | Syntax/usage jokes |
| 126 | Permission denied | Permission jokes |
| 127 | Command not found | Typo/install jokes |
| 130 | Interrupted (Ctrl-C) | Impatience jokes |
| 137 | Killed (SIGKILL/OOM) | Memory jokes |
| 139 | Segfault | Pointer/memory jokes |
| 143 | Terminated (SIGTERM) | Kill jokes |
- `--force` / `-f`: Force-related jokes
- `sudo`: Privilege escalation jokes
- `-r` / `--recursive`: Recursion jokes
- `--help` / `--version`: RTFM jokes

Tier 2 considers the broader environment context:
- `/tmp`: Temporary code jokes
- `/root`: Root user jokes
- `~/Desktop`: Desktop development jokes
- `/var/log`: Log analysis jokes

Detected from the `SHELL` environment variable: bash, zsh, fish, ksh, etc.
Tier 3 applies more sophisticated context understanding:
Checks environment variables for CI systems:
CI Environment Variables Checked:
- GITHUB_ACTIONS, GITHUB_WORKFLOW
- GITLAB_CI, GITLAB_PIPELINE_ID
- JENKINS_URL, BUILD_NUMBER
- CIRCLECI, CIRCLE_JOB
- TRAVIS, TRAVIS_JOB_ID
- AZURE_PIPELINES, BUILD_BUILDID
- CI (generic)

CI responses include team-shame elements: "Everyone on the team got your failure notification."
Parses error output for specific patterns:
Tracks consecutive failures of the same command:
Tier 4 uses persistent history to personalize responses:
Tracks consecutive and total failures per command type:
{
"npm_install": {
"total_failures": 47,
"current_streak": 3,
"max_streak": 7,
"last_failure": "2024-12-15T14:23:00Z"
},
"git_push": {
"total_failures": 23,
"current_streak": 0,
"max_streak": 4,
"last_failure": "2024-12-14T09:15:00Z"
}
}

Uses history to generate personalized responses:
🦜 You've failed npm install 47 times. At this point it's not
the dependencies, it's you.
🦜 Git has rejected you 23 times. Take the hint.

Combines templates with history data for infinite variations:
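A hedged sketch of what such template expansion could look like (template wording and thresholds are invented for illustration, not Parrot's shipped templates):

```go
import "fmt"

// Hypothetical template expansion combining canned shapes with the
// per-command history shown above.
func fromTemplate(cmd string, totalFailures, streak int) string {
	switch {
	case streak >= 3:
		return fmt.Sprintf("🦜 That's %d %s failures in a row. Consistency!", streak, cmd)
	case totalFailures > 20:
		return fmt.Sprintf("🦜 %s has failed %d times for you. It's a lifestyle now.", cmd, totalFailures)
	default:
		return fmt.Sprintf("🦜 %s failed. Again.", cmd)
	}
}
```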
Tier 5 implements ML-inspired techniques in pure Go:
Term Frequency-Inverse Document Frequency for semantic matching:
TF-IDF(term, insult, corpus) = TF(term, insult) × IDF(term, corpus)
Where:
- TF = frequency of term in insult / total terms in insult
- IDF = log(total insults / insults containing term)
Context terms are extracted from:
- Command tokens
- Error keywords
- Environment context

Generates novel insults from a trained model:
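A toy bigram version of the idea (the shipped chain is trained on the ~3000-insult corpus and is more elaborate):

```go
import (
	"math/rand"
	"strings"
)

// Sketch of a bigram Markov chain: learn which word follows which,
// then walk the table to produce a novel sentence.
type MarkovChain struct {
	next map[string][]string // word -> observed following words
}

func (m *MarkovChain) Train(corpus []string) {
	m.next = make(map[string][]string)
	for _, insult := range corpus {
		words := strings.Fields(insult)
		for i := 0; i < len(words)-1; i++ {
			m.next[words[i]] = append(m.next[words[i]], words[i+1])
		}
	}
}

func (m *MarkovChain) Generate(seed string, maxWords int) string {
	out := []string{seed}
	word := seed
	for i := 0; i < maxWords; i++ {
		candidates := m.next[word]
		if len(candidates) == 0 {
			break
		}
		word = candidates[rand.Intn(len(candidates))]
		out = append(out, word)
	}
	return strings.Join(out, " ")
}
```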
Ensemble combines multiple scoring methods:
Final Score = weighted average of:
1. TF-IDF semantic similarity (weight: 0.25)
2. Tag match score (weight: 0.20)
3. Personality fit (weight: 0.15)
4. Novelty score (weight: 0.25)
5. Context relevance (weight: 0.15)
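Transcribed into Go, the vote is a straightforward weighted sum (factor scoring functions are assumed to return values in [0, 1]):

```go
// The weights match the list above; factor names are illustrative.
var ensembleWeights = map[string]float64{
	"tfidf":       0.25, // semantic similarity
	"tags":        0.20, // tag match
	"personality": 0.15, // personality fit
	"novelty":     0.25, // not used recently
	"context":     0.15, // context relevance
}

// ensembleScore combines per-factor scores into the final score;
// per the tier pyramid, candidates above 0.7 stop the fallback at Tier 5.
func ensembleScore(scores map[string]float64) float64 {
	total := 0.0
	for factor, weight := range ensembleWeights {
		total += weight * scores[factor]
	}
	return total
}
```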
The most sophisticated tier implements cutting-edge ML concepts in pure Go:

Tracks failure sequences as a directed graph to predict and respond to patterns:
Graph Structure:
Nodes = (command_type, exit_code, context)
Edges = transitions between failure states
Weight = frequency × recency
Example graph:
git_add(1) → git_commit(1) → git_push(1)
↑ ↓
└───────────┘ (common cycle)
When git_add fails, if the user typically follows with
git_commit which also fails, proactively mock the pattern.

The graph identifies common failure sequences and generates responses like:
🦜 Ah, the classic git-add-commit-push trifecta of failure.
🦜 Let me guess: npm install, npm run build, npm test - all failed?
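A compact sketch of how such a transition graph can be kept and queried (node and edge layout are assumptions, not the actual `~/.parrot/context_graph.json` schema; recency decay is omitted for brevity):

```go
// A failure state: what failed and how.
type FailureNode struct {
	CommandType string // e.g. "git_push"
	ExitCode    int
}

type FailureGraph struct {
	edges map[FailureNode]map[FailureNode]float64 // weight ~ frequency
	last  *FailureNode
}

func NewFailureGraph() *FailureGraph {
	return &FailureGraph{edges: make(map[FailureNode]map[FailureNode]float64)}
}

// Record adds a transition from the previous failure to this one.
func (g *FailureGraph) Record(n FailureNode) {
	if g.last != nil {
		if g.edges[*g.last] == nil {
			g.edges[*g.last] = make(map[FailureNode]float64)
		}
		g.edges[*g.last][n]++
	}
	g.last = &n
}

// PredictNext returns the failure that most often follows n, letting the
// response mock the whole pattern preemptively.
func (g *FailureGraph) PredictNext(n FailureNode) (FailureNode, bool) {
	var best FailureNode
	bestWeight := 0.0
	for next, w := range g.edges[n] {
		if w > bestWeight {
			best, bestWeight = next, w
		}
	}
	return best, bestWeight > 0
}
```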
Implements a GAN-inspired generator/critic system for creating novel insults:

Adversarial Generation Loop:
1. Generator produces 5 candidate insults
2. Critic scores each on 5 dimensions
3. Best candidate selected if quality >= 0.6
4. If all fail, fall back to lower tier
Quality Dimensions:
┌─────────────┬────────┬──────────────────────────┐
│ Dimension │ Weight │ Description │
├─────────────┼────────┼──────────────────────────┤
│ Relevance │ 0.30 │ Context match (command, │
│ │ │ error, environment) │
├─────────────┼────────┼──────────────────────────┤
│ Novelty │ 0.25 │ Not used recently │
├─────────────┼────────┼──────────────────────────┤
│ Brutality │ 0.20 │ Matches personality │
├─────────────┼────────┼──────────────────────────┤
│ Coherence │ 0.15 │ Grammatically valid │
├─────────────┼────────┼──────────────────────────┤
│ Length │ 0.10 │ Optimal range (20-150) │
└─────────────┴────────┴──────────────────────────┘

Candidate: "Your docker build failed harder than your last deployment"
Scores:
Relevance: 0.90 (docker context match)
Novelty: 0.85 (first time used today)
Brutality: 0.75 (savage personality match)
Coherence: 1.00 (grammatically perfect)
Length: 1.00 (55 chars, optimal range)
──────────────────────────────────────────────
Weighted: 0.87 (> 0.6 threshold, ACCEPTED)

32-dimensional vector space for semantic similarity:
Embedding Dimensions (32 total):
- [0-7]: Command type (one-hot, 8 categories)
- [8-15]: Error category (one-hot, 8 categories)
- [16-19]: Exit code features (normalized, bucketed)
- [20-23]: Time features (hour, day of week, normalized)
- [24-27]: Context features (CI, git, sudo, etc.)
- [28-31]: History features (streak, total, recency)
Similarity = cosine(context_embedding, insult_embedding)
Insult embeddings are pre-computed at startup.
Context embedding is computed on-the-fly.

Uses Levenshtein distance to find similar past commands:
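The core is the standard dynamic-programming Levenshtein distance; for example, "gti push" is two edits away from "git push". A minimal two-row sketch (uses Go 1.21's built-in `min`; byte-wise comparison, which is fine for ASCII command lines):

```go
// levenshtein returns the minimum number of single-character insertions,
// deletions, and substitutions needed to turn a into b.
func levenshtein(a, b string) int {
	prev := make([]int, len(b)+1)
	curr := make([]int, len(b)+1)
	for j := range prev {
		prev[j] = j // distance from the empty prefix of a
	}
	for i := 1; i <= len(a); i++ {
		curr[0] = i
		for j := 1; j <= len(b); j++ {
			cost := 1
			if a[i-1] == b[j-1] {
				cost = 0
			}
			curr[j] = min(prev[j]+1, curr[j-1]+1, prev[j-1]+cost)
		}
		prev, curr = curr, prev
	}
	return prev[len(b)]
}
```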
Tracks response effectiveness to improve selection weights:
Weight Update Rule:
w_new = w_old + α × (reward - baseline) × gradient
Where:
- α = learning rate (0.01)
- reward = effectiveness signal (+1, 0, -1)
- baseline = running average reward
- gradient = feature activation
Example:
If "git rejected you" gets positive signal for git push failures,
that insult's weight increases for git-related contexts.

Over time, the system learns which insults work best for each user's patterns.
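The update rule transcribes directly into Go (a minimal sketch; reward and gradient inputs are assumed to be computed elsewhere from the effectiveness signal):

```go
const alpha = 0.01 // learning rate, per the rule above

// updateWeight nudges an insult's weight for a context feature up or
// down based on how the response was received (reward in {-1, 0, +1},
// baseline = running average reward, gradient = feature activation).
func updateWeight(w, reward, baseline, gradient float64) float64 {
	return w + alpha*(reward-baseline)*gradient
}
```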
`parrot mock <command> <exit_code>`

The core command - generates a response for a failed command. This is what shell hooks call.

`parrot install`

Install shell hooks for automatic command failure detection. Auto-detects your shell.

`parrot setup`

Complete guided setup wizard with 6 steps: system check, intelligence level selection, backend configuration, shell integration, personality choice, and testing.

`parrot configure`

Interactive configuration wizard for modifying existing settings.

`parrot status`

Show current configuration, backend health, and statistics.

`parrot config init [path]`

Create a sample configuration file with all options documented.

`parrot firstrun`

Quick first-run experience - minimal setup for immediate use.
| Flag | Description |
|---|---|
| `--version` | Show version number |
| `--help`, `-h` | Show help text |
| `--spicy` | Use quality mode (richer responses, slower) |
| `--debug` | Show which backend generated the response |
| `--config <path>` | Use specific config file |
| Flag | Description |
|---|---|
| `--spicy` | Use quality mode (longer timeout, richer prompt) |
| `--fallback` | Force fallback mode (skip LLM backends) |
| `--personality <level>` | Override personality (mild/sarcastic/savage) |
| `--no-color` | Disable colored output |
Shell hook says parrot not in PATH.
Parrot uses fallback instead of API.
Common causes: invalid API key, wrong endpoint for provider, network issues, rate limiting.
Local backend unavailable.
Commands fail but no parrot response.
Slow or hanging after command fails.
Async mode runs parrot in the background so your shell never blocks.
Local LLM taking 5+ seconds.
Or use a smaller model: `qwen2.5:0.5b` or `phi3.5:3.8b`.
LLM responses taking several seconds.
Or set `generation_mode = "snappy"` in the config.
Getting the same responses over and over.
The novelty scoring in Tier 5-6 should prevent this, but clearing the cache resets learning.
Settings in config file not taking effect.
Check config priority: `PARROT_CONFIG` env → `/etc/parrot/config.toml` → `~/.config/parrot/config.toml` → `~/.parrot.toml` → `./parrot.toml`
Getting 401 or other errors with Claude.
Ensure correct configuration:
[api]
provider = "anthropic"
endpoint = "https://api.anthropic.com/v1"
model = "claude-3-haiku-20240307"
api_key = "sk-ant-..."               # Note: starts with sk-ant-

Enable comprehensive debugging to see which backend was used:
| Dependency | Minimum Version | Notes |
|---|---|---|
| Go | 1.21+ | Required for building from source |
| GNU Make | 3.81+ | Optional, for make targets |
| Ollama | 0.1.0+ | Optional, for local LLM backend |
- `parrot --help` for command reference
- `parrot status` for diagnostics
- `parrot status` output