Stable v1.8.9

🦜 parrot

Intelligent CLI assistant that intercepts failed commands with AI-powered feedback

Parrot combines a 6-tier intelligence system with modern generative AI to deliver context-aware, personality-matched responses for every command failure. It works immediately with zero dependencies and upgrades seamlessly to API or local LLM backends. It implements ML-inspired algorithms (TF-IDF, Markov chains, adversarial generation, vector embeddings) in pure Go, with no external ML dependencies.

🎯 RPM Available 🏛️ AUR Available
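
The built-in tiers rely on classic information-retrieval techniques rather than an external ML runtime. As a rough illustration of the kind of pure-Go TF-IDF weighting involved (a sketch only, not parrot's actual implementation; all names here are hypothetical):

go
package main

import (
	"fmt"
	"math"
	"strings"
)

// tfidf computes a TF-IDF weight for each term in doc, given a small
// corpus of reference documents (e.g. known error messages).
func tfidf(doc string, corpus []string) map[string]float64 {
	terms := strings.Fields(strings.ToLower(doc))

	// Term frequency: share of the document each term accounts for.
	tf := make(map[string]float64)
	for _, t := range terms {
		tf[t] += 1.0 / float64(len(terms))
	}

	// Inverse document frequency, smoothed to avoid division by zero.
	// Substring matching stands in for real tokenization here.
	weights := make(map[string]float64)
	for t, f := range tf {
		df := 0
		for _, d := range corpus {
			if strings.Contains(strings.ToLower(d), t) {
				df++
			}
		}
		weights[t] = f * math.Log(float64(len(corpus)+1)/float64(df+1))
	}
	return weights
}

func main() {
	corpus := []string{
		"fatal: no upstream branch configured",
		"permission denied while opening file",
		"command not found",
	}
	fmt.Println(tfidf("fatal: no upstream branch", corpus))
}

Rare terms score higher than common ones, which is what lets a fallback tier match an unseen failure against its closest known error without any external ML dependency.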

Installation

RPM (Fedora/RHEL/CentOS)

terminal
$ sudo dnf config-manager --add-repo https://repos.musicsian.com/musicsian.repo
$ sudo dnf install parrot

AUR (Arch Linux)

terminal
$ yay -S parrot    # or paru -S parrot

Homebrew (macOS/Linux)

terminal
$ brew tap tenseleyFlow/parrot
$ brew install parrot

From Source

terminal
$ git clone https://github.com/tenseleyFlow/parrot.git
$ cd parrot
$ make build           # builds optimized binary
$ sudo make install    # installs to /usr/local/bin

Build Options

Target        Description
make build    Production build with optimizations
make dev      Development build with debug symbols
make test     Run test suite
make rpm      Build RPM package
make clean    Remove build artifacts
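
The gap between the production and development targets usually comes down to linker flags. A hypothetical hand-rolled equivalent of make build for a Go project (not necessarily what this Makefile actually runs):

terminal
$ go build -ldflags "-s -w" -o parrot .    # -s -w strips symbol tables and debug info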

Quick Setup

After installation, run the guided setup wizard for complete configuration:

terminal
$ parrot setup

The 6-step wizard walks through system check, intelligence level selection, backend configuration, shell integration, personality choice, and testing.

Manual Setup (Alternative)

terminal
$ parrot install                # install shell hooks
$ source ~/.bashrc              # or ~/.zshrc, or restart fish
$ parrot mock "git push" "1"    # test it works
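
Hooks of this kind typically work by checking the previous command's exit status from a prompt hook. A simplified bash illustration of the idea, using the documented mock subcommand as a stand-in for whatever entry point the real hook calls (this is not parrot's actual hook):

bash
# Runs just before each prompt; at that point $? still holds the
# exit status of the command the user last ran.
_parrot_hook() {
    local status=$?
    if [ "$status" -ne 0 ]; then
        # fc -ln -1 prints the most recent history entry.
        parrot mock "$(fc -ln -1)" "$status"
    fi
}
PROMPT_COMMAND=_parrot_hook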

Optional: Configure AI Backend

Parrot works immediately with its built-in 6-tier intelligence system. For AI-enhanced responses:

OpenAI API

terminal
$export PARROT_API_KEY="sk-proj-..."
$echo 'export PARROT_API_KEY="sk-..."' >> ~/.bashrc

Anthropic Claude

terminal
$export PARROT_API_KEY="sk-ant-..."
$parrot config init# create config file

Then edit ~/.config/parrot/config.toml, setting provider = "anthropic" and endpoint = "https://api.anthropic.com/v1".
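
For reference, those two settings would appear in the file like this (any surrounding sections or additional keys are omitted here and may differ):

toml
provider = "anthropic"
endpoint = "https://api.anthropic.com/v1"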

Local Ollama (100% Private)

terminal
$ ollama pull llama3.2:3b          # download model (~2GB)
$ ollama serve                     # start the server
$ export OLLAMA_KEEP_ALIVE="1h"    # keep model hot
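
Once the server is running, any client can speak Ollama's HTTP API directly. A minimal non-streaming request in Go, shown only to illustrate the protocol a local backend talks over (not parrot's actual client code):

go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// Ollama listens on localhost:11434 by default.
	body, err := json.Marshal(map[string]any{
		"model":  "llama3.2:3b",
		"prompt": "Explain this shell error in one sentence: fatal: no upstream branch",
		"stream": false,
	})
	if err != nil {
		panic(err)
	}

	resp, err := http.Post("http://localhost:11434/api/generate",
		"application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// With "stream": false the full completion arrives as one JSON object.
	var out struct {
		Response string `json:"response"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		panic(err)
	}
	fmt.Println(out.Response)
}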

Recommended Ollama Models

Model         Size     Use Case
qwen2.5:0.5b  ~500MB   Ultra-light, older hardware
phi3.5:3.8b   ~2.3GB   Best speed/quality balance
llama3.2:3b   ~2GB     Default, good quality
mistral:7b    ~4.1GB   Higher quality, slower

Verify Installation

terminal
$ parrot --version
parrot version 1.8.9
$ parrot status
🦜 Parrot Status
API Backend:    ✓ enabled
Local Backend:  ✓ enabled (ollama)
Fallback:       ✓ always available

CLI Flags

Flag          Description
--version     Show version number
--help, -h    Show help text
--spicy       Use quality mode (richer responses, slower)
--debug       Show which backend generated the response
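
For example, to check which backend answers a simulated failure (combining the mock subcommand with --debug; exact flag placement here is illustrative):

terminal
$ parrot mock "git push" "1" --debug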
