A command-line interface tool for querying Large Language Models (LLMs) with advanced features like context injection, command suggestions, and streaming output.
- 🤖 Support for multiple LLM providers (OpenAI, etc.)
- 📝 Context injection from various sources:
  - Shell history (`--hist`)
  - Directory listings (`--here`)
  - File contents (`--file`)
- 💡 Command suggestions mode (`--cmd`)
- 🔄 Optional streaming output (`--stream`)
- 🎨 Beautiful progress display and colored output
- 💾 Response caching
- 🔁 Automatic retry with exponential backoff
- 🔒 Secure API key management
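The retry-with-exponential-backoff behavior can be pictured with a small shell sketch. This is illustrative only: `flaky` is a hypothetical stand-in for an API request that fails once and then succeeds, and the delays and limits here are not necessarily q's actual defaults (q's real retry logic is implemented in Rust).

```sh
# Illustrative retry loop with exponential backoff (1s, 2s, 4s, ...).
# `flaky` simulates a request that fails on the first attempt.
rm -f /tmp/q_backoff_demo
flaky() {
  if [ -f /tmp/q_backoff_demo ]; then
    return 0
  else
    touch /tmp/q_backoff_demo
    return 1
  fi
}

max_retries=3
delay=1
attempt=1
until flaky; do
  if [ "$attempt" -ge "$max_retries" ]; then
    echo "giving up after $attempt attempts"
    exit 1
  fi
  echo "attempt $attempt failed; retrying in ${delay}s"
  sleep "$delay"
  delay=$((delay * 2))   # double the wait after every failure
  attempt=$((attempt + 1))
done
echo "succeeded on attempt $attempt"
```

Doubling the delay between attempts keeps transient failures cheap to recover from while backing off quickly when the provider is genuinely unavailable.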
Requirements:
- Git
- Rust and Cargo (will be automatically installed if missing)
Install via Homebrew:

```sh
brew tap rfushimi/tap
brew install q
```
Alternatively, you can install with a single command:
```sh
curl -sSL https://raw.githubusercontent.com/rfushimi/q/refs/heads/main/install.sh | bash
```
This will:
- Install Rust if needed
- Clone the repository
- Build from source
- Install the binary to `~/.bin/` (default)
- Add the directory to your PATH if needed
You can customize the installation directory by setting the `BIN_DIR` environment variable:

```sh
BIN_DIR=/usr/local/bin curl -sSL https://raw.githubusercontent.com/rfushimi/q/refs/heads/main/install.sh | bash
```
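If the installer cannot update your shell profile (or you used a custom `BIN_DIR`), you can add the directory to your PATH yourself. The line below assumes the default `~/.bin/` location and a bash/zsh setup:

```sh
# Prepend the install directory to PATH for the current session;
# add this line to ~/.bashrc or ~/.zshrc to make it permanent.
export PATH="$HOME/.bin:$PATH"
```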
Basic query:
```sh
q "What is Rust?"
```
With context:
```sh
# Include shell history context
q --hist "What did I do wrong in my last command?"

# Include current directory listing
q --here "What are the main source files?"

# Include file content
q --file src/main.rs "What does this code do?"
```
Command suggestions:
```sh
q --cmd "How do I find large files?"
```
Streaming output:
```sh
q --stream "Explain quantum computing"
```
API keys are stored in configuration files:
```sh
# Set OpenAI API key
q set-key openai YOUR_API_KEY
```
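If you want to double-check that a stored key file is not readable by other users, something like the following works. Note that `~/.config/q/keys.toml` is a hypothetical path chosen purely for illustration, not necessarily where q actually writes its configuration:

```sh
# Hypothetical config location -- adjust to wherever q stores keys.
cfg="$HOME/.config/q/keys.toml"
mkdir -p "$(dirname "$cfg")"
touch "$cfg"
# Restrict the file so only the owner can read or write it.
chmod 600 "$cfg"
ls -l "$cfg"
```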
Options:
```
  -H, --hist           Include shell history context
  -P, --provider       Select LLM provider [default: gemini]
  -M, --model          Select model name (e.g., gemini-2.0-flash, gpt-3.5-turbo)
  -D, --here           Include current directory listing
  -F, --file <FILE>    Include file content
  -C, --cmd            Get command suggestions
      --stream         Enable streaming output
      --no-cache       Disable response caching
      --retries <N>    Maximum retry attempts [default: 3]
      --debug          Show debug information
  -h, --help           Print help
  -V, --version        Print version
```
Requirements:
- Rust 1.70 or later
- API key for OpenAI (or another supported provider)
Build from source:
```sh
git clone https://github.com/rfushimi/q.git
cd q
cargo build --release

# Install the binary
cargo install --path .
```
Run tests:
```sh
cargo test
```
MIT License - see LICENSE for details.