diff --git a/README.md b/README.md
index 1c1c00e5..bc8e1255 100644
--- a/README.md
+++ b/README.md
@@ -5,8 +5,8 @@
Cortex Linux
- Cortex is an AI layer for Linux Debian/Ubuntu
- Instead of memorizing commands, googling errors, and copy-pasting from Stack Overflow — describe what you need.
+ AI-Powered Package Manager for Debian/Ubuntu
+ Install software using natural language. No more memorizing package names.
@@ -14,7 +14,7 @@
-
+
@@ -40,8 +40,8 @@
## What is Cortex?
-Cortex is an AI layer for Linux Debian/Ubuntu
-Instead of memorizing commands, googling errors, and copy-pasting from Stack Overflow — describe what you need.
+Cortex is an AI-native package manager that understands what you want to install, even when you don't know the exact package name.
+
```bash
# Instead of googling "what's the package name for PDF editing on Ubuntu?"
cortex install "something to edit PDFs"
@@ -64,15 +64,11 @@ cortex install "tools for video compression"
| Feature | Description |
|---------|-------------|
| **Natural Language** | Describe what you need in plain English |
-| **Voice Input** | Hands-free mode with Whisper speech recognition ([F9 to speak](docs/VOICE_INPUT.md)) |
| **Dry-Run Default** | Preview all commands before execution |
| **Sandboxed Execution** | Commands run in Firejail isolation |
| **Full Rollback** | Undo any installation with `cortex rollback` |
-| **Role Management** | AI-driven system personality detection and tailored recommendations |
-| **Docker Permission Fixer** | Fix root-owned bind mount issues automatically |
| **Audit Trail** | Complete history in `~/.cortex/history.db` |
| **Hardware-Aware** | Detects GPU, CPU, memory for optimized packages |
-| **Predictive Error Prevention** | AI-driven checks for potential installation failures |
| **Multi-LLM Support** | Works with Claude, GPT-4, or local Ollama models |
---
@@ -97,12 +93,8 @@ python3 -m venv venv
source venv/bin/activate
# 3. Install Cortex
-# Using pyproject.toml (recommended)
pip install -e .
-# Or install with dev dependencies
-pip install -e ".[dev]"
-
# 4. Configure AI Provider (choose one):
## Option A: Ollama (FREE - Local LLM, no API key needed)
@@ -118,8 +110,6 @@ echo 'OPENAI_API_KEY=your-key-here' > .env
cortex --version
```
-> **💡 Zero-Config:** If you already have API keys from Claude CLI (`~/.config/anthropic/`) or OpenAI CLI (`~/.config/openai/`), Cortex will auto-detect them! Environment variables work immediately without prompting. See [Zero Config API Keys](docs/ZERO_CONFIG_API_KEYS.md).
-
### First Run
```bash
@@ -130,26 +120,60 @@ cortex install nginx --dry-run
cortex install nginx --execute
```
----
+### AI Command Execution Setup (`ask --do`)
+
+For the full AI-powered command execution experience, run the setup script:
-## 🚀 Upgrade to Pro
+```bash
+# Full setup (Ollama + Watch Service + Shell Hooks)
+./scripts/setup_ask_do.sh
-Unlock advanced features with Cortex Pro:
+# Or use Python directly
+python scripts/setup_ask_do.py
-| Feature | Community (Free) | Pro ($20/mo) | Enterprise ($99/mo) |
-|---------|------------------|--------------|---------------------|
-| Natural language commands | ✅ | ✅ | ✅ |
-| Hardware detection | ✅ | ✅ | ✅ |
-| Installation history | 7 days | 90 days | Unlimited |
-| GPU/CUDA optimization | Basic | Advanced | Advanced |
-| Systems per license | 1 | 5 | 100 |
-| Cloud LLM connectors | ❌ | ✅ | ✅ |
-| Priority support | ❌ | ✅ | ✅ |
-| SSO/SAML | ❌ | ❌ | ✅ |
-| Compliance reports | ❌ | ❌ | ✅ |
-| Support | Community | Priority | Dedicated |
+# Options:
+# --no-docker Skip Docker/Ollama setup (use cloud LLM only)
+# --model phi Use a smaller model (2GB instead of 4GB)
+# --skip-watch Skip watch service installation
+# --uninstall Remove all components
+```
+
+This script will:
+1. **Set up Ollama** with a local LLM (Mistral by default) in Docker
+2. **Install the Watch Service** for terminal monitoring
+3. **Configure Shell Hooks** for command logging
+4. **Verify everything works**
+
+#### Quick Start After Setup
+
+```bash
+# Start an interactive AI session
+cortex ask --do
+
+# Or with a specific task
+cortex ask --do "install nginx and configure it for reverse proxy"
+
+# Check watch service status
+cortex watch --status
+```
-**[Compare Plans →](https://cortexlinux.com/pricing)** | **[Start Free Trial →](https://cortexlinux.com/pricing)**
+#### Manual Setup (Alternative)
+
+If you prefer manual setup:
+
+```bash
+# Install the Cortex Watch service (runs automatically on login)
+cortex watch --install --service
+
+# Check status
+cortex watch --status
+
+# For Ollama (optional - for local LLM)
+docker run -d --name ollama -p 11434:11434 -v ollama:/root/.ollama ollama/ollama
+docker exec ollama ollama pull mistral
+```
+
+This enables Cortex to monitor your terminal activity during manual intervention mode, providing real-time AI feedback and error detection.
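+
+If you set Ollama up manually, you can also confirm that the local endpoint is reachable before pointing Cortex at it. A minimal check (the `/api/tags` endpoint simply lists the models you have pulled):
+
+```python
+# Minimal sketch: verify the Ollama container started above is reachable.
+import json
+import urllib.request
+
+OLLAMA_URL = "http://localhost:11434"  # matches the docker run command above
+
+try:
+    with urllib.request.urlopen(f"{OLLAMA_URL}/api/tags", timeout=5) as resp:
+        models = [m["name"] for m in json.loads(resp.read()).get("models", [])]
+    print("Ollama is up. Pulled models:", ", ".join(models) or "(none yet)")
+except OSError as exc:
+    print(f"Ollama is not reachable at {OLLAMA_URL}: {exc}")
+```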
---
@@ -169,16 +193,6 @@ cortex history
cortex rollback
```
-### Role Management
-
-```bash
-# Auto-detect your system role using AI analysis of local context and patterns
-cortex role detect
-
-# Manually set your system role to receive specific AI recommendations
-cortex role set
-```
-
### Command Reference
| Command | Description |
@@ -186,35 +200,27 @@ cortex role set
| `cortex install <query>` | Install packages matching natural language query |
| `cortex install --dry-run` | Preview installation plan (default) |
| `cortex install --execute` | Execute the installation |
-| `cortex docker permissions` | Fix file ownership for Docker bind mounts |
-| `cortex role detect` | Automatically identifies the system's purpose |
-| `cortex role set ` | Manually declare a system role |
+| `cortex ask <question>` | Ask questions about your system |
+| `cortex ask --do` | Interactive AI command execution mode |
| `cortex sandbox <package>` | Test packages in Docker sandbox |
| `cortex history` | View all past installations |
| `cortex rollback <id>` | Undo a specific installation |
+| `cortex watch --install --service` | Install terminal monitoring service |
+| `cortex watch --status` | Check terminal monitoring status |
| `cortex --version` | Show version information |
| `cortex --help` | Display help message |
-#### Daemon Commands
-
-| Command | Description |
-|---------|-------------|
-| `cortex daemon install --execute` | Install and enable the cortexd daemon |
-| `cortex daemon uninstall --execute` | Stop and remove the daemon |
-| `cortex daemon ping` | Test daemon connectivity |
-| `cortex daemon version` | Show daemon version |
-| `cortex daemon config` | Show daemon configuration |
-| `cortex daemon reload-config` | Reload daemon configuration |
-
### Configuration
Cortex stores configuration in `~/.cortex/`:
```
~/.cortex/
-├── config.yaml # User preferences
-├── history.db # Installation history (SQLite)
-└── audit.log # Detailed audit trail
+├── config.yaml # User preferences
+├── history.db # Installation history (SQLite)
+├── audit.log # Detailed audit trail
+├── terminal_watch.log # Terminal monitoring log
+└── watch_service.log # Watch service logs
```
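+
+Because `history.db` is a standard SQLite file, you can inspect the audit trail directly when scripting around Cortex. A minimal sketch that only lists the tables it contains (the schema itself isn't documented here, so nothing beyond table names is assumed):
+
+```python
+# Minimal sketch: list the tables Cortex keeps in its history database.
+# The schema is intentionally not assumed here; use `.schema` in the
+# sqlite3 shell to explore the actual columns.
+import sqlite3
+from pathlib import Path
+
+db_path = Path.home() / ".cortex" / "history.db"
+conn = sqlite3.connect(db_path)
+try:
+    rows = conn.execute(
+        "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name"
+    ).fetchall()
+    print("Tables in history.db:", [name for (name,) in rows])
+finally:
+    conn.close()
+```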
---
@@ -238,10 +244,10 @@ Cortex stores configuration in `~/.cortex/`:
│ LLM Router │
│ Claude / GPT-4 / Ollama │
│ │
-│ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │
-│ │ Anthropic │ │ OpenAI │ │ Ollama │ │
-│ │ Claude │ │ GPT-4 │ │ Local │ │
-│ └─────────────┘ └─────────────┘ └─────────────┘ │
+│ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │
+│ │ Anthropic │ │ OpenAI │ │ Ollama │ │
+│ │ Claude │ │ GPT-4 │ │ Local │ │
+│ └─────────────┘ └─────────────┘ └─────────────┘ │
└─────────────────────────────────────────────────────────────────┘
│
▼
@@ -269,45 +275,31 @@ Cortex stores configuration in `~/.cortex/`:
```
cortex/
-├── cortex/ # Main Python package
+├── cortex/ # Main package
│ ├── cli.py # Command-line interface
+│ ├── ask.py # AI Q&A and command execution
│ ├── coordinator.py # Installation orchestration
│ ├── llm_router.py # Multi-LLM routing
-│ ├── daemon_client.py # IPC client for cortexd
│ ├── packages.py # Package manager wrapper
│ ├── hardware_detection.py
│ ├── installation_history.py
+│ ├── watch_service.py # Terminal monitoring service
+│ ├── do_runner/ # AI command execution
+│ │ ├── handler.py # Main execution handler
+│ │ ├── terminal.py # Terminal monitoring
+│ │ ├── diagnosis.py # Error diagnosis & auto-fix
+│ │ └── verification.py # Conflict detection
│ └── utils/ # Utility modules
-├── daemon/ # C++ background daemon (cortexd)
-│ ├── src/ # Daemon source code
-│ ├── include/ # Header files
-│ ├── tests/ # Unit & integration tests
-│ ├── scripts/ # Build and setup scripts
-│ └── README.md # Daemon documentation
-├── tests/ # Python test suite
+├── tests/ # Test suite
├── docs/ # Documentation
+│ └── ASK_DO_ARCHITECTURE.md # ask --do deep dive
├── examples/ # Example scripts
└── scripts/ # Utility scripts
+ ├── setup_ask_do.py # Full ask --do setup
+ ├── setup_ask_do.sh # Bash setup alternative
+ └── setup_ollama.py # Ollama-only setup
```
-### Background Daemon (cortexd)
-
-Cortex includes an optional C++ background daemon for system-level operations:
-
-```bash
-# Install the daemon
-cortex daemon install --execute
-
-# Check daemon status
-cortex daemon ping
-cortex daemon version
-
-# Run daemon tests (no installation required)
-cortex daemon run-tests
-```
-
-See [daemon/README.md](daemon/README.md) for full documentation.
-
---
## Safety & Security
@@ -334,27 +326,16 @@ Found a vulnerability? Please report it responsibly:
## Troubleshooting
-"No API key found"
-
-Cortex auto-detects API keys from multiple locations. If none are found:
+"ANTHROPIC_API_KEY not set"
```bash
-# Option 1: Set environment variables (used immediately, no save needed)
-export ANTHROPIC_API_KEY=sk-ant-your-key
-cortex install nginx --dry-run
-
-# Option 2: Save directly to Cortex config
-echo 'ANTHROPIC_API_KEY=sk-ant-your-key' > ~/.cortex/.env
-
-# Option 3: Use Ollama (free, local, no key needed)
-export CORTEX_PROVIDER=ollama
-python scripts/setup_ollama.py
+# Verify .env file exists
+cat .env
+# Should show: ANTHROPIC_API_KEY=sk-ant-...
-# Option 4: If you have Claude CLI installed, Cortex will find it automatically
-# Just run: cortex install nginx --dry-run
+# If missing, create it:
+echo 'ANTHROPIC_API_KEY=your-actual-key' > .env
```
-
-See [Zero Config API Keys](docs/ZERO_CONFIG_API_KEYS.md) for details.
@@ -414,9 +395,6 @@ pip install -e .
- [x] Hardware detection (GPU/CPU/Memory)
- [x] Firejail sandboxing
- [x] Dry-run preview mode
-- [x] Docker bind-mount permission fixer
-- [x] Automatic Role Discovery (AI-driven system context sensing)
-- [x] Predictive Error Prevention (pre-install compatibility checks)
### In Progress
- [ ] Conflict resolution UI
@@ -472,37 +450,11 @@ pip install -e ".[dev]"
# Install pre-commit hooks
pre-commit install
-```
-
-### Running Tests
-**Python Tests:**
-
-```bash
-# Run all Python tests
+# Run tests
pytest tests/ -v
-
-# Run with coverage
-pytest tests/ -v --cov=cortex
```
-**Daemon Tests (C++):**
-
-```bash
-# Build daemon with tests
-cd daemon && ./scripts/build.sh Release --with-tests
-
-# Run all daemon tests (no daemon installation required)
-cortex daemon run-tests
-
-# Run specific test types
-cortex daemon run-tests --unit # Unit tests only
-cortex daemon run-tests --integration # Integration tests only
-cortex daemon run-tests -t config # Specific test
-```
-
-> **Note:** Daemon tests run against a static library and don't require the daemon to be installed as a systemd service. They test the code directly.
-
See [CONTRIBUTING.md](CONTRIBUTING.md) for detailed guidelines.
---
@@ -524,7 +476,7 @@ See [CONTRIBUTING.md](CONTRIBUTING.md) for detailed guidelines.
## License
-BUSL-1.1 (Business Source License 1.1) - Free for personal use on 1 system. See [LICENSE](LICENSE) for details.
+Apache 2.0 - See [LICENSE](LICENSE) for details.
---
diff --git a/cortex/ask.py b/cortex/ask.py
index 6af08362..d9f81ee0 100644
--- a/cortex/ask.py
+++ b/cortex/ask.py
@@ -5,6 +5,7 @@
educational content and tracks learning progress.
"""
+import atexit
import json
import logging
import os
@@ -13,6 +14,7 @@
import shutil
import sqlite3
import subprocess
+import sys
from datetime import datetime, timezone
from pathlib import Path
from typing import Any
@@ -25,6 +27,335 @@
# Maximum number of tokens to request from LLM
MAX_TOKENS = 2000
+# Cortex Terminal Theme Colors - Dracula-Inspired
+_CORTEX_BG = "#282a36" # Dracula background
+_CORTEX_FG = "#f8f8f2" # Dracula foreground
+_CORTEX_PURPLE = "#bd93f9" # Dracula purple
+_CORTEX_CURSOR = "#ff79c6" # Dracula pink for cursor
+_ORIGINAL_COLORS_SAVED = False
+
+# Available Themes
+THEMES = {
+ "dracula": {
+ "name": "Dracula",
+ "bg": "#282a36",
+ "fg": "#f8f8f2",
+ "cursor": "#ff79c6",
+ "primary": "#bd93f9",
+ "secondary": "#ff79c6",
+ "success": "#50fa7b",
+ "warning": "#f1fa8c",
+ "error": "#ff5555",
+ "info": "#8be9fd",
+ "muted": "#6272a4",
+ },
+ "nord": {
+ "name": "Nord",
+ "bg": "#2e3440",
+ "fg": "#eceff4",
+ "cursor": "#88c0d0",
+ "primary": "#81a1c1",
+ "secondary": "#88c0d0",
+ "success": "#a3be8c",
+ "warning": "#ebcb8b",
+ "error": "#bf616a",
+ "info": "#5e81ac",
+ "muted": "#4c566a",
+ },
+ "monokai": {
+ "name": "Monokai",
+ "bg": "#272822",
+ "fg": "#f8f8f2",
+ "cursor": "#f92672",
+ "primary": "#ae81ff",
+ "secondary": "#f92672",
+ "success": "#a6e22e",
+ "warning": "#e6db74",
+ "error": "#f92672",
+ "info": "#66d9ef",
+ "muted": "#75715e",
+ },
+ "gruvbox": {
+ "name": "Gruvbox",
+ "bg": "#282828",
+ "fg": "#ebdbb2",
+ "cursor": "#fe8019",
+ "primary": "#b8bb26",
+ "secondary": "#fe8019",
+ "success": "#b8bb26",
+ "warning": "#fabd2f",
+ "error": "#fb4934",
+ "info": "#83a598",
+ "muted": "#928374",
+ },
+ "catppuccin": {
+ "name": "Catppuccin Mocha",
+ "bg": "#1e1e2e",
+ "fg": "#cdd6f4",
+ "cursor": "#f5c2e7",
+ "primary": "#cba6f7",
+ "secondary": "#f5c2e7",
+ "success": "#a6e3a1",
+ "warning": "#f9e2af",
+ "error": "#f38ba8",
+ "info": "#89b4fa",
+ "muted": "#6c7086",
+ },
+ "tokyo-night": {
+ "name": "Tokyo Night",
+ "bg": "#1a1b26",
+ "fg": "#c0caf5",
+ "cursor": "#bb9af7",
+ "primary": "#7aa2f7",
+ "secondary": "#bb9af7",
+ "success": "#9ece6a",
+ "warning": "#e0af68",
+ "error": "#f7768e",
+ "info": "#7dcfff",
+ "muted": "#565f89",
+ },
+}
+
+# Current active theme
+_CURRENT_THEME = "dracula"
+
+
+def get_current_theme() -> dict:
+ """Get the current active theme colors."""
+ return THEMES.get(_CURRENT_THEME, THEMES["dracula"])
+
+
+def set_theme(theme_name: str) -> bool:
+ """Set the active theme by name."""
+ global _CURRENT_THEME, _CORTEX_BG, _CORTEX_FG, _CORTEX_CURSOR
+
+ if theme_name not in THEMES:
+ return False
+
+ _CURRENT_THEME = theme_name
+ theme = THEMES[theme_name]
+ _CORTEX_BG = theme["bg"]
+ _CORTEX_FG = theme["fg"]
+ _CORTEX_CURSOR = theme["cursor"]
+
+ # Apply theme to terminal
+ _set_terminal_theme()
+ return True
+
+
+def show_theme_selector() -> str | None:
+ """Show interactive theme selector with arrow key navigation.
+
+ Returns:
+ Selected theme name or None if cancelled
+ """
+ import sys
+ import termios
+ import tty
+
+ theme_list = list(THEMES.keys())
+ current_idx = theme_list.index(_CURRENT_THEME) if _CURRENT_THEME in theme_list else 0
+ num_themes = len(theme_list)
+
+ # ANSI codes
+ CLEAR_LINE = "\033[2K"
+ MOVE_UP = "\033[A"
+ HIDE_CURSOR = "\033[?25l"
+ SHOW_CURSOR = "\033[?25h"
+
+ # Colors (ANSI)
+ PURPLE = "\033[38;2;189;147;249m"
+ PINK = "\033[38;2;255;121;198m"
+ GRAY = "\033[38;2;108;112;134m"
+ WHITE = "\033[38;2;248;248;242m"
+ RESET = "\033[0m"
+ BOLD = "\033[1m"
+
+ def get_key():
+ """Get a single keypress."""
+ fd = sys.stdin.fileno()
+ old_settings = termios.tcgetattr(fd)
+ try:
+ tty.setraw(fd)
+ ch = sys.stdin.read(1)
+ if ch == "\x1b":
+ ch2 = sys.stdin.read(1)
+ if ch2 == "[":
+ ch3 = sys.stdin.read(1)
+ if ch3 == "A":
+ return "up"
+ elif ch3 == "B":
+ return "down"
+ return "esc"
+ elif ch == "\r" or ch == "\n":
+ return "enter"
+ elif ch == "q" or ch == "\x03":
+ return "quit"
+ return ch
+ finally:
+ termios.tcsetattr(fd, termios.TCSADRAIN, old_settings)
+
+ def get_theme_color(theme, color_type):
+ """Get ANSI color code for theme color."""
+ hex_color = theme.get(color_type, "#ffffff")
+ r = int(hex_color[1:3], 16)
+ g = int(hex_color[3:5], 16)
+ b = int(hex_color[5:7], 16)
+ return f"\033[38;2;{r};{g};{b}m"
+
+ def draw_menu():
+ """Draw the menu."""
+ # Header
+ print(
+ f"\r{CLEAR_LINE} {PURPLE}{BOLD}◉{RESET} {PINK}{BOLD}Select Theme{RESET} {GRAY}(↑/↓ navigate, Enter select, q cancel){RESET}"
+ )
+ print(f"\r{CLEAR_LINE}")
+
+ # Theme options
+ for idx, theme_key in enumerate(theme_list):
+ theme = THEMES[theme_key]
+ primary = get_theme_color(theme, "primary")
+ secondary = get_theme_color(theme, "secondary")
+ success = get_theme_color(theme, "success")
+ error = get_theme_color(theme, "error")
+
+ if idx == current_idx:
+ print(
+ f"\r{CLEAR_LINE} {primary}→ {BOLD}{theme['name']}{RESET} {primary}■{secondary}■{success}■{error}■{RESET}"
+ )
+ else:
+ print(f"\r{CLEAR_LINE} {GRAY}{theme['name']}{RESET}")
+
+ print(f"\r{CLEAR_LINE}")
+
+ def clear_menu():
+ """Move cursor up and clear all menu lines."""
+ total_lines = num_themes + 3 # header + blank + themes + blank
+ for _ in range(total_lines):
+ sys.stdout.write(f"{MOVE_UP}{CLEAR_LINE}")
+ sys.stdout.flush()
+
+ # Initial draw
+ try:
+ sys.stdout.write(HIDE_CURSOR)
+ sys.stdout.flush()
+ print() # Initial blank line
+ draw_menu()
+
+ while True:
+ key = get_key()
+
+ if key == "up":
+ current_idx = (current_idx - 1) % num_themes
+ clear_menu()
+ draw_menu()
+ elif key == "down":
+ current_idx = (current_idx + 1) % num_themes
+ clear_menu()
+ draw_menu()
+ elif key == "enter":
+ clear_menu()
+ sys.stdout.write(SHOW_CURSOR)
+ sys.stdout.flush()
+ return theme_list[current_idx]
+ elif key == "quit" or key == "esc":
+ clear_menu()
+ sys.stdout.write(SHOW_CURSOR)
+ sys.stdout.flush()
+ return None
+ except Exception:
+ sys.stdout.write(SHOW_CURSOR)
+ sys.stdout.flush()
+ return None
+ finally:
+ sys.stdout.write(SHOW_CURSOR)
+ sys.stdout.flush()
+
+
+def _set_terminal_theme():
+ """Set terminal colors to Dracula-inspired theme using OSC escape sequences."""
+ global _ORIGINAL_COLORS_SAVED
+
+ # Only works on terminals that support OSC sequences (most modern terminals)
+ if not sys.stdout.isatty():
+ return
+
+ try:
+ # Set background color (OSC 11) - Dracula dark
+ sys.stdout.write(f"\033]11;{_CORTEX_BG}\007")
+ # Set foreground color (OSC 10) - Dracula light
+ sys.stdout.write(f"\033]10;{_CORTEX_FG}\007")
+ # Set cursor color to pink (OSC 12)
+ sys.stdout.write(f"\033]12;{_CORTEX_CURSOR}\007")
+ sys.stdout.flush()
+
+ _ORIGINAL_COLORS_SAVED = True
+
+ # Register cleanup to restore colors on exit
+ atexit.register(_restore_terminal_theme)
+ except Exception:
+ pass # Silently fail if terminal doesn't support escape sequences
+
+
+def _restore_terminal_theme():
+ """Restore terminal to default colors."""
+ global _ORIGINAL_COLORS_SAVED
+
+ if not _ORIGINAL_COLORS_SAVED or not sys.stdout.isatty():
+ return
+
+ try:
+ # Reset to terminal defaults
+ sys.stdout.write("\033]110\007") # Reset foreground to default
+ sys.stdout.write("\033]111\007") # Reset background to default
+ sys.stdout.write("\033]112\007") # Reset cursor to default
+ sys.stdout.flush()
+
+ _ORIGINAL_COLORS_SAVED = False
+ except Exception:
+ pass
+
+
+def _print_cortex_banner():
+ """Print a Cortex session banner with Dracula-inspired styling."""
+ if not sys.stdout.isatty():
+ return
+
+ from rich.console import Console
+ from rich.padding import Padding
+ from rich.panel import Panel
+ from rich.text import Text
+
+ # Dracula colors
+ PURPLE = "#bd93f9"
+ PINK = "#ff79c6"
+ CYAN = "#8be9fd"
+ GRAY = "#6272a4"
+
+ console = Console()
+
+ # Build banner content
+ banner_text = Text()
+ banner_text.append("◉ CORTEX", style=f"bold {PINK}")
+ banner_text.append(" AI-Powered Terminal Session\n", style=CYAN)
+ banner_text.append("Type your request • ", style=GRAY)
+ banner_text.append("Ctrl+C to exit", style=GRAY)
+
+ # Create panel with fixed width
+ panel = Panel(
+ banner_text,
+ border_style=PURPLE,
+ padding=(0, 1),
+ width=45, # Fixed width
+ )
+
+ # Fixed left margin of 3
+ padded_panel = Padding(panel, (0, 0, 0, 3))
+
+ console.print()
+ console.print(padded_panel)
+ console.print()
+
class SystemInfoGatherer:
"""Gathers local system information for context-aware responses."""
@@ -366,6 +697,7 @@ def __init__(
api_key: str,
provider: str = "claude",
model: str | None = None,
+ do_mode: bool = False,
):
"""Initialize the ask handler.
@@ -373,12 +705,25 @@ def __init__(
api_key: API key for the LLM provider
provider: Provider name ("openai", "claude", or "ollama")
model: Optional model name override
+ do_mode: If True, enable execution mode with do_runner
"""
self.api_key = api_key
self.provider = provider.lower()
self.model = model or self._default_model()
self.info_gatherer = SystemInfoGatherer()
self.learning_tracker = LearningTracker()
+ self.do_mode = do_mode
+
+ # Initialize do_handler if in do_mode
+ self._do_handler = None
+ if do_mode:
+ try:
+ from cortex.do_runner.handler import DoHandler
+
+ # Create LLM callback for DoHandler
+ self._do_handler = DoHandler(llm_callback=self._do_llm_callback)
+ except ImportError:
+ pass
# Initialize cache
try:
@@ -431,6 +776,524 @@ def _initialize_client(self):
else:
raise ValueError(f"Unsupported provider: {self.provider}")
+ def _do_llm_callback(self, request: str, context: dict | None = None) -> dict:
+ """LLM callback for DoHandler - generates structured do_commands responses."""
+ system_prompt = self._get_do_system_prompt()
+
+ # Build context string if provided
+ context_str = ""
+ if context:
+ context_str = f"\n\nContext:\n{json.dumps(context, indent=2)}"
+
+ full_prompt = f"{request}{context_str}"
+
+ try:
+ if self.provider == "openai":
+ response = self.client.chat.completions.create(
+ model=self.model,
+ messages=[
+ {"role": "system", "content": system_prompt},
+ {"role": "user", "content": full_prompt},
+ ],
+ temperature=0.3,
+ max_tokens=MAX_TOKENS,
+ )
+ content = response.choices[0].message.content or ""
+ elif self.provider == "claude":
+ response = self.client.messages.create(
+ model=self.model,
+ max_tokens=MAX_TOKENS,
+ temperature=0.3,
+ system=system_prompt,
+ messages=[{"role": "user", "content": full_prompt}],
+ )
+ content = response.content[0].text or ""
+ elif self.provider == "ollama":
+ import urllib.request
+
+ url = f"{self.ollama_url}/api/generate"
+ prompt = f"{system_prompt}\n\nRequest: {full_prompt}"
+ data = json.dumps(
+ {
+ "model": self.model,
+ "prompt": prompt,
+ "stream": False,
+ "options": {"temperature": 0.3, "num_predict": MAX_TOKENS},
+ }
+ ).encode("utf-8")
+ req = urllib.request.Request(
+ url, data=data, headers={"Content-Type": "application/json"}
+ )
+ with urllib.request.urlopen(req, timeout=60) as resp:
+ result = json.loads(resp.read().decode("utf-8"))
+ content = result.get("response", "")
+ else:
+ return {"response_type": "error", "error": f"Unsupported provider: {self.provider}"}
+
+ # Parse JSON from response
+ json_match = re.search(r"\{[\s\S]*\}", content)
+ if json_match:
+ return json.loads(json_match.group())
+
+ # If no JSON, return as answer
+ return {"response_type": "answer", "answer": content.strip()}
+
+ except Exception as e:
+ return {"response_type": "error", "error": str(e)}
+
+ def _get_do_system_prompt(self) -> str:
+ """Get system prompt for do_mode - generates structured commands."""
+ return """You are a Linux system automation assistant. Your job is to translate user requests into executable shell commands.
+
+RESPONSE FORMAT - You MUST respond with valid JSON in one of these formats:
+
+For actionable requests (install, configure, run, etc.):
+{
+ "response_type": "do_commands",
+ "reasoning": "Brief explanation of what you're going to do",
+ "do_commands": [
+ {
+ "command": "the actual shell command",
+ "purpose": "what this command does",
+ "requires_sudo": true/false
+ }
+ ]
+}
+
+For informational requests or when you cannot generate commands:
+{
+ "response_type": "answer",
+ "answer": "Your response text here"
+}
+
+RULES:
+1. Generate safe, well-tested commands
+2. Set requires_sudo: true for commands that need root privileges
+3. Break complex tasks into multiple commands
+4. For package installation, use apt on Debian/Ubuntu
+5. Include verification commands when appropriate
+6. NEVER include dangerous commands (rm -rf /, etc.)
+7. Always respond with valid JSON only - no extra text"""
+
+ def _handle_do_request(self, question: str) -> str:
+ """Handle a request in do_mode - generates and executes commands.
+
+ Args:
+ question: The user's request
+
+ Returns:
+ Summary of what was done or the answer
+ """
+ from rich.console import Console
+ from rich.padding import Padding
+ from rich.panel import Panel
+ from rich.prompt import Confirm
+ from rich.text import Text
+
+ # Dracula-Inspired Theme Colors
+ PURPLE = "#bd93f9" # Dracula purple
+ PURPLE_LIGHT = "#ff79c6" # Dracula pink
+ PURPLE_DARK = "#6272a4" # Dracula comment
+ WHITE = "#f8f8f2" # Dracula foreground
+ GRAY = "#6272a4" # Dracula comment
+ GREEN = "#50fa7b" # Dracula green
+ RED = "#ff5555" # Dracula red
+ YELLOW = "#f1fa8c" # Dracula yellow
+ CYAN = "#8be9fd" # Dracula cyan
+ ORANGE = "#ffb86c" # Dracula orange
+
+ # Icons (round/circle based)
+ ICON_THINKING = "◐"
+ ICON_PLAN = "◉"
+ ICON_CMD = "❯"
+ ICON_SUCCESS = "●"
+ ICON_ERROR = "●"
+ ICON_ARROW = "→"
+ ICON_LOCK = "◉"
+
+ console = Console()
+
+ # Fixed layout constants
+ LEFT_MARGIN = 3
+ INDENT = " "
+ BOX_WIDTH = 70 # Fixed box width
+
+ def print_padded(text: str) -> None:
+ """Print text with left margin."""
+ console.print(f"{INDENT}{text}")
+
+ def print_panel(content, **kwargs) -> None:
+ """Print a panel with fixed width and left margin."""
+ panel = Panel(content, width=BOX_WIDTH, **kwargs)
+ padded = Padding(panel, (0, 0, 0, LEFT_MARGIN))
+ console.print(padded)
+
+ # Processing indicator
+ console.print()
+ print_padded(
+ f"[{PURPLE}]{ICON_THINKING}[/{PURPLE}] [{PURPLE_LIGHT}]Analyzing results... Step 1[/{PURPLE_LIGHT}]"
+ )
+ console.print()
+
+ llm_response = self._do_llm_callback(question)
+
+ if not llm_response:
+ print_panel(
+ f"[{RED}]{ICON_ERROR} I couldn't process that request. Please try again.[/{RED}]",
+ border_style=RED,
+ padding=(0, 2),
+ )
+ return ""
+
+ response_type = llm_response.get("response_type", "")
+
+ # If it's just an answer (informational), return it in a panel
+ if response_type == "answer":
+ answer = llm_response.get("answer", "No response generated.")
+ print_panel(
+ f"[{WHITE}]{answer}[/{WHITE}]",
+ border_style=PURPLE,
+ title=f"[bold {PURPLE_LIGHT}]{ICON_SUCCESS} Answer[/bold {PURPLE_LIGHT}]",
+ title_align="left",
+ padding=(0, 2),
+ )
+ return ""
+
+ # If it's an error, return the error message in a panel
+ if response_type == "error":
+ print_panel(
+ f"[{RED}]{llm_response.get('error', 'Unknown error')}[/{RED}]",
+ border_style=RED,
+ title=f"[bold {RED}]{ICON_ERROR} Error[/bold {RED}]",
+ title_align="left",
+ padding=(0, 2),
+ )
+ return ""
+
+ # Handle do_commands - execute with confirmation
+ if response_type == "do_commands" and llm_response.get("do_commands"):
+ do_commands = llm_response["do_commands"]
+ reasoning = llm_response.get("reasoning", "")
+
+ # Show reasoning
+ if reasoning:
+ print_panel(
+ f"[{WHITE}]{reasoning}[/{WHITE}]",
+ border_style=PURPLE,
+ title=f"[bold {PURPLE_LIGHT}]{ICON_PLAN} Gathering info[/bold {PURPLE_LIGHT}]",
+ title_align="left",
+ padding=(0, 2),
+ )
+ console.print()
+
+ # Build commands list
+ commands_text = Text()
+ for i, cmd_info in enumerate(do_commands, 1):
+ cmd = cmd_info.get("command", "")
+ purpose = cmd_info.get("purpose", "")
+ needs_sudo = cmd_info.get("requires_sudo", False)
+
+ # Number and lock icon
+ if needs_sudo:
+ commands_text.append(f" {ICON_LOCK} ", style=YELLOW)
+ else:
+ commands_text.append(f" {ICON_CMD} ", style=PURPLE)
+
+ commands_text.append(f"{i}. ", style=f"bold {WHITE}")
+ commands_text.append(f"{cmd}\n", style=f"bold {PURPLE_LIGHT}")
+ if purpose:
+ commands_text.append(f" {ICON_ARROW} ", style=GRAY)
+ commands_text.append(f"{purpose}\n", style=GRAY)
+ commands_text.append("\n")
+
+ print_panel(
+ commands_text,
+ border_style=PURPLE,
+ title=f"[bold {PURPLE_LIGHT}]Commands[/bold {PURPLE_LIGHT}]",
+ title_align="left",
+ padding=(0, 2),
+ )
+ console.print()
+
+ if not Confirm.ask(
+ f"{INDENT}[{PURPLE}]Execute these commands?[/{PURPLE}]", default=True
+ ):
+ console.print()
+ print_padded(f"[{YELLOW}]{ICON_ERROR} Skipped by user[/{YELLOW}]")
+ return ""
+
+ # Execute header
+ console.print()
+ print_padded(f"[{PURPLE_LIGHT}]Executing...[/{PURPLE_LIGHT}]")
+
+ results = []
+ for idx, cmd_info in enumerate(do_commands, 1):
+ cmd = cmd_info.get("command", "")
+ purpose = cmd_info.get("purpose", "Execute command")
+ needs_sudo = cmd_info.get("requires_sudo", False)
+
+ # Show step indicator
+ console.print()
+ print_padded(
+ f"[{PURPLE_LIGHT}]Analyzing results... Step {idx + 1}[/{PURPLE_LIGHT}]"
+ )
+ console.print()
+
+ # Build command display
+ cmd_text = Text()
+ cmd_text.append(f" {ICON_CMD} ", style=PURPLE)
+ cmd_text.append(f"{cmd}", style=f"bold {PURPLE_LIGHT}")
+
+ # Show command panel
+ print_panel(
+ cmd_text,
+ border_style=PURPLE_DARK,
+ title=f"[{GRAY}]{ICON_PLAN} Gathering info[/{GRAY}]",
+ title_align="left",
+ padding=(0, 2),
+ )
+
+ # Show spinner while executing
+ with console.status(f"[{PURPLE}]Running...[/{PURPLE}]", spinner="dots") as status:
+ # Execute via DoHandler if available
+ if self._do_handler:
+ success, stdout, stderr = self._do_handler._execute_single_command(
+ cmd, needs_sudo
+ )
+ else:
+ # Fallback to direct subprocess
+ import subprocess
+
+ try:
+ exec_cmd = cmd
+ if needs_sudo and not cmd.startswith("sudo"):
+ exec_cmd = f"sudo {cmd}"
+ result = subprocess.run(
+ exec_cmd, shell=True, capture_output=True, text=True, timeout=120
+ )
+ success = result.returncode == 0
+ stdout = result.stdout.strip()
+ stderr = result.stderr.strip()
+ except Exception as e:
+ success = False
+ stdout = ""
+ stderr = str(e)
+
+ if success:
+ if stdout:
+ output_lines = stdout.split("\n")
+ line_count = len(output_lines)
+ truncated_lines = output_lines[:8] # Show up to 8 lines
+
+ # Build output text
+ output_text = Text()
+ for line in truncated_lines:
+ if line.strip():
+ # Truncate long lines at 80 chars
+ display_line = line[:80] + ("..." if len(line) > 80 else "")
+ output_text.append(f"{display_line}\n", style=WHITE)
+
+ console.print()
+ print_padded(
+ f"[{GREEN}]{ICON_SUCCESS} Got {line_count} lines of output[/{GREEN}]"
+ )
+ console.print()
+
+ print_panel(
+ output_text,
+ border_style=PURPLE_DARK,
+ title=f"[{GRAY}]Output[/{GRAY}]",
+ title_align="left",
+ padding=(0, 2),
+ )
+ else:
+ print_padded(f"[{GREEN}]{ICON_SUCCESS} Command succeeded[/{GREEN}]")
+
+ results.append(("success", cmd, stdout))
+ else:
+ console.print()
+ print_padded(f"[{YELLOW}]⚠ Command failed:[/{YELLOW}]")
+ if stderr:
+ # Wrap error message
+ error_text = stderr[:200] + ("..." if len(stderr) > 200 else "")
+ print_padded(f" [{GRAY}]{error_text}[/{GRAY}]")
+ results.append(("failed", cmd, stderr))
+
+ # Generate LLM-based summary
+ console.print()
+ return self._generate_execution_summary(question, results, do_commands)
+
+ # Default fallback
+ return self._format_answer(llm_response.get("answer", "Request processed."))
+
+ def _generate_execution_summary(
+ self, question: str, results: list, commands: list[dict]
+ ) -> str:
+ """Generate a comprehensive summary after command execution.
+
+ Args:
+ question: The original user request
+ results: List of (status, command, output/error) tuples
+ commands: The original command list with purposes
+
+ Returns:
+ Formatted summary with answer
+ """
+ success_count = sum(1 for r in results if r[0] == "success")
+ fail_count = len(results) - success_count
+
+ # Build execution results for LLM - include actual outputs!
+ execution_results = []
+ for i, result in enumerate(results):
+ cmd_info = commands[i] if i < len(commands) else {}
+ status = "✓" if result[0] == "success" else "✗"
+ entry = {
+ "command": result[1],
+ "purpose": cmd_info.get("purpose", ""),
+ "status": status,
+ }
+ # Include command output (truncate to reasonable size)
+ if len(result) > 2 and result[2]:
+ if result[0] == "success":
+ entry["output"] = result[2][:1000] # Include successful output
+ else:
+ entry["error"] = result[2][:500] # Include error message
+ execution_results.append(entry)
+
+ # Generate LLM summary with command outputs
+ summary_prompt = f"""The user asked: "{question}"
+
+The following commands were executed with their outputs:
+"""
+ for i, entry in enumerate(execution_results, 1):
+ summary_prompt += f"\n{i}. [{entry['status']}] {entry['command']}"
+ if entry.get("purpose"):
+ summary_prompt += f"\n Purpose: {entry['purpose']}"
+ if entry.get("output"):
+ summary_prompt += f"\n Output:\n{entry['output']}"
+ if entry.get("error"):
+ summary_prompt += f"\n Error: {entry['error']}"
+
+ summary_prompt += f"""
+
+Execution Summary: {success_count} succeeded, {fail_count} failed.
+
+IMPORTANT: You MUST extract and report the ACTUAL DATA from the command outputs above.
+DO NOT give a generic summary like "commands ran successfully" or "I checked your disk usage".
+
+Instead, give the user the REAL ANSWER with SPECIFIC VALUES from the output. For example:
+- If they asked about disk usage, tell them: "Your root partition is 45% full (67GB used of 150GB)"
+- If they asked about memory, tell them: "You have 8.2GB RAM in use out of 16GB total"
+- If they asked about a service status, tell them: "nginx is running (active) since 2 days ago"
+
+Extract the key numbers, percentages, sizes, statuses from the command outputs and present them clearly.
+
+Respond with just the answer containing the actual data, no JSON."""
+
+ try:
+ # Call LLM for summary
+ if self.provider == "openai":
+ response = self.client.chat.completions.create(
+ model=self.model,
+ messages=[
+ {
+ "role": "system",
+ "content": "You are a helpful assistant that summarizes command execution results concisely.",
+ },
+ {"role": "user", "content": summary_prompt},
+ ],
+ temperature=0.3,
+ max_tokens=500,
+ )
+ summary = response.choices[0].message.content or ""
+ elif self.provider == "claude":
+ response = self.client.messages.create(
+ model=self.model,
+ max_tokens=500,
+ temperature=0.3,
+ system="You are a helpful assistant that summarizes command execution results concisely.",
+ messages=[{"role": "user", "content": summary_prompt}],
+ )
+ summary = response.content[0].text or ""
+ elif self.provider == "ollama":
+ import urllib.request
+
+ url = f"{self.ollama_url}/api/generate"
+ data = json.dumps(
+ {
+ "model": self.model,
+ "prompt": f"Summarize concisely:\n{summary_prompt}",
+ "stream": False,
+ "options": {"temperature": 0.3, "num_predict": 500},
+ }
+ ).encode("utf-8")
+ req = urllib.request.Request(
+ url, data=data, headers={"Content-Type": "application/json"}
+ )
+ with urllib.request.urlopen(req, timeout=30) as resp:
+ result = json.loads(resp.read().decode("utf-8"))
+ summary = result.get("response", "")
+ else:
+ summary = ""
+
+ if summary.strip():
+ return self._format_answer(summary.strip())
+ except Exception:
+ pass # Fall back to basic summary
+
+ # Fallback basic summary
+ if fail_count == 0:
+ return self._format_answer(
+ f"✅ Successfully completed your request. All {success_count} command(s) executed successfully."
+ )
+ else:
+ return self._format_answer(
+ f"Completed with issues: {success_count} command(s) succeeded, {fail_count} failed. Check the output above for details."
+ )
+
+ def _format_answer(self, answer: str) -> str:
+ """Format the final answer with a clear summary section in Dracula theme.
+
+ Args:
+ answer: The answer text
+
+ Returns:
+ Empty string (output is printed directly to console)
+ """
+ from rich.console import Console
+ from rich.padding import Padding
+ from rich.panel import Panel
+
+ # Dracula Theme Colors
+ PURPLE = "#bd93f9"
+ WHITE = "#f8f8f2"
+ GREEN = "#50fa7b"
+ ICON_SUCCESS = "●"
+
+ console = Console()
+
+ if not answer:
+ answer = "Request completed."
+
+ # Create panel with fixed width
+ panel = Panel(
+ f"[{WHITE}]{answer}[/{WHITE}]",
+ border_style=PURPLE,
+ title=f"[bold {GREEN}]{ICON_SUCCESS} Summary[/bold {GREEN}]",
+ title_align="left",
+ padding=(0, 2),
+ width=70, # Fixed width
+ )
+
+ # Add left margin
+ padded = Padding(panel, (0, 0, 0, 3))
+
+ console.print()
+ console.print(padded)
+ console.print()
+
+ return ""
+
def _get_system_prompt(self, context: dict[str, Any]) -> str:
return f"""You are a helpful Linux system assistant and tutor. You help users with both system-specific questions AND educational queries about Linux, packages, and best practices.
@@ -577,6 +1440,10 @@ def ask(self, question: str, system_prompt: str | None = None) -> str:
question = question.strip()
+ # In do_mode, use DoHandler for command execution
+ if self.do_mode and self._do_handler:
+ return self._handle_do_request(question)
+
# Use provided system prompt or generate default
if system_prompt is None:
context = self.info_gatherer.gather_context()
diff --git a/cortex/cli.py b/cortex/cli.py
index 267228b0..c35bb4bc 100644
--- a/cortex/cli.py
+++ b/cortex/cli.py
@@ -852,8 +852,13 @@ def _confirm_risky_operation(self, prediction: FailurePrediction) -> bool:
# --- End Sandbox Commands ---
- def ask(self, question: str) -> int:
- """Answer a natural language question about the system."""
+ def ask(self, question: str, do_mode: bool = False) -> int:
+ """Answer a natural language question about the system.
+
+ Args:
+ question: The natural language question to answer
+ do_mode: If True, enable execution mode where AI can run commands
+ """
api_key = self._get_api_key()
if not api_key:
return 1
@@ -865,11 +870,18 @@ def ask(self, question: str) -> int:
handler = AskHandler(
api_key=api_key,
provider=provider,
+ do_mode=do_mode,
)
- answer = handler.ask(question)
- # Render as markdown for proper formatting in terminal
- console.print(Markdown(answer))
- return 0
+
+ if do_mode:
+ # Interactive execution mode
+ return self._run_interactive_do_session(handler, question)
+ else:
+ # Standard ask mode
+ answer = handler.ask(question)
+ # Render as markdown for proper formatting in terminal
+ console.print(Markdown(answer))
+ return 0
except ImportError as e:
# Provide a helpful message if provider SDK is missing
self._print_error(str(e))
@@ -884,6 +896,83 @@ def ask(self, question: str) -> int:
self._print_error(str(e))
return 1
+ def _run_interactive_do_session(self, handler: AskHandler, initial_question: str | None) -> int:
+ """Run an interactive session with execution capabilities.
+
+ Args:
+ handler: The AskHandler configured for do_mode
+ initial_question: Optional initial question to start with
+ """
+ from rich.prompt import Prompt
+
+ # Import and apply Cortex terminal theme at session start
+ from cortex.ask import _print_cortex_banner, _restore_terminal_theme, _set_terminal_theme
+
+ try:
+ _set_terminal_theme()
+ _print_cortex_banner()
+ except Exception:
+ pass # Silently continue if theming fails
+
+ question = initial_question
+
+ # Dracula theme colors for prompt
+ PURPLE_LIGHT = "#ff79c6" # Dracula pink
+ GRAY = "#6272a4" # Dracula comment
+ INDENT = " " # Fixed 3-space indent
+
+ try:
+ while True:
+ try:
+ if not question:
+ question = Prompt.ask(
+ f"{INDENT}[bold {PURPLE_LIGHT}]What would you like to do?[/bold {PURPLE_LIGHT}]"
+ )
+
+ if not question or question.lower() in ["exit", "quit", "q"]:
+ console.print(f"{INDENT}[{GRAY}]Goodbye![/{GRAY}]")
+ return 0
+
+ # Handle /theme command
+ if question.strip().lower() == "/theme":
+ from cortex.ask import get_current_theme, set_theme, show_theme_selector
+
+ selected = show_theme_selector()
+ if selected:
+ set_theme(selected)
+ theme = get_current_theme()
+ console.print(
+ f"{INDENT}[{theme['success']}]● Theme changed to {theme['name']}[/{theme['success']}]"
+ )
+ else:
+ console.print(f"{INDENT}[{GRAY}]Theme selection cancelled[/{GRAY}]")
+
+ # Re-print banner with new theme
+ _print_cortex_banner()
+ question = None
+ continue
+
+ # Process the question
+ result = handler.ask(question)
+ if result:
+ console.print(Markdown(result))
+
+ # Reset for next iteration
+ question = None
+
+ except KeyboardInterrupt:
+ console.print(f"\n{INDENT}[{GRAY}]Session ended.[/{GRAY}]")
+ return 0
+ except Exception as e:
+ self._print_error(f"Error: {e}")
+ question = None
+ finally:
+ # Always restore terminal theme when session ends
+ try:
+ _restore_terminal_theme()
+ except Exception:
+ pass
+
def _ask_with_session_key(self, question: str, api_key: str, provider: str) -> int:
"""Answer a question using provided session API key without re-prompting.
@@ -4766,6 +4855,11 @@ def main():
action="store_true",
help="Use voice input (press F9 to record)",
)
+ ask_parser.add_argument(
+ "--do",
+ action="store_true",
+ help="Enable execution mode - AI can execute commands with your approval",
+ )
# Voice command - continuous voice mode
voice_parser = subparsers.add_parser(
@@ -5496,6 +5590,7 @@ def main():
model = getattr(args, "model", None)
return cli.voice(continuous=not getattr(args, "single", False), model=model)
elif args.command == "ask":
+ do_mode = getattr(args, "do", False)
# Handle --mic flag for voice input
if getattr(args, "mic", False):
try:
@@ -5510,7 +5605,7 @@ def main():
return 1
cx_print(f"Question: {transcript}", "info")
- return cli.ask(transcript)
+ return cli.ask(transcript, do_mode=do_mode)
except ImportError:
cli._print_error("Voice dependencies not installed.")
cx_print("Install with: pip install cortex-linux[voice]", "info")
@@ -5518,10 +5613,11 @@ def main():
except VoiceInputError as e:
cli._print_error(f"Voice input error: {e}")
return 1
- if not args.question:
+ # In do_mode, question is optional (interactive mode)
+ if not args.question and not do_mode:
cli._print_error("Please provide a question or use --mic for voice input")
return 1
- return cli.ask(args.question)
+ return cli.ask(args.question, do_mode=do_mode)
elif args.command == "install":
# Handle --mic flag for voice input
if getattr(args, "mic", False):
diff --git a/cortex/demo.py b/cortex/demo.py
index 730d4805..808e9646 100644
--- a/cortex/demo.py
+++ b/cortex/demo.py
@@ -1,601 +1,137 @@
-"""
-Cortex Interactive Demo
-Interactive 5-minute tutorial showcasing all major Cortex features
-"""
+"""Interactive demo for Cortex Linux."""
-import secrets
+import subprocess
import sys
-import time
-from datetime import datetime, timedelta
from rich.console import Console
+from rich.markdown import Markdown
from rich.panel import Panel
-from rich.table import Table
+from rich.prompt import Confirm, Prompt
from cortex.branding import show_banner
-from cortex.hardware_detection import SystemInfo, detect_hardware
-
-
-class CortexDemo:
- """Interactive Cortex demonstration"""
-
- def __init__(self) -> None:
- self.console = Console()
- self.hw: SystemInfo | None = None
- self.is_interactive = sys.stdin.isatty()
- self.installation_id = self._generate_id()
-
- def clear_screen(self) -> None:
- """Clears the terminal screen"""
- self.console.clear()
-
- def _generate_id(self) -> str:
- """Generate a fake installation ID for demo"""
- return secrets.token_hex(8)
-
- def _generate_past_date(self, days_ago: int, hours: int = 13, minutes: int = 11) -> str:
- """Generate a date string for N days ago"""
- past = datetime.now() - timedelta(days=days_ago)
- past = past.replace(hour=hours, minute=minutes, second=51)
- return past.strftime("%Y-%m-%d %H:%M:%S")
-
- def _is_gpu_vendor(self, model: str, keywords: list[str]) -> bool:
- """Check if GPU model matches any vendor keywords."""
- model_upper = str(model).upper()
- return any(kw in model_upper for kw in keywords)
-
- def run(self) -> int:
- """Main demo entry point"""
- try:
- self.clear_screen()
- show_banner()
-
- self.console.print("\n[bold cyan]🎬 Cortex Interactive Demo[/bold cyan]")
- self.console.print("[dim]Learn Cortex by typing real commands (~5 minutes)[/dim]\n")
-
- intro_text = """
-Cortex is an AI-powered universal package manager that:
-
- • 🧠 [cyan]Understands natural language[/cyan] - No exact syntax needed
- • 🔍 [cyan]Plans before installing[/cyan] - Shows you what it will do first
- • 🔒 [cyan]Checks hardware compatibility[/cyan] - Prevents bad installs
- • 📦 [cyan]Works with all package managers[/cyan] - apt, brew, npm, pip...
- • 🎯 [cyan]Smart stacks[/cyan] - Pre-configured tool bundles
- • 🔄 [cyan]Safe rollback[/cyan] - Undo any installation
-
-[bold]This is interactive - you'll type real commands![/bold]
-[dim](Just type commands as shown - any input works for learning!)[/dim]
- """
-
- self.console.print(Panel(intro_text, border_style="cyan"))
-
- if not self._wait_for_user("\nPress Enter to start..."):
- return 0
-
- # Detect hardware for smart demos
- self.hw = detect_hardware()
-
- # Run all sections (now consolidated to 3)
- sections = [
- ("AI Intelligence & Understanding", self._section_ai_intelligence),
- ("Smart Stacks & Workflows", self._section_smart_stacks),
- ("History & Safety Features", self._section_history_safety),
- ]
-
- for i, (name, section_func) in enumerate(sections, 1):
- self.clear_screen()
- self.console.print(f"\n[dim]━━━ Section {i} of {len(sections)}: {name} ━━━[/dim]\n")
-
- if not section_func():
- self.console.print(
- "\n[yellow]Demo interrupted. Thanks for trying Cortex![/yellow]"
- )
- return 1
-
- # Show finale
- self.clear_screen()
- self._show_finale()
-
- return 0
-
- except (KeyboardInterrupt, EOFError):
- self.console.print(
- "\n\n[yellow]Demo interrupted. Thank you for trying Cortex![/yellow]"
- )
- return 1
-
- def _wait_for_user(self, message: str = "\nPress Enter to continue...") -> bool:
- """Wait for user input"""
- try:
- if self.is_interactive:
- self.console.print(f"[dim]{message}[/dim]")
- input()
- else:
- time.sleep(2) # Auto-advance in non-interactive mode
- return True
- except (KeyboardInterrupt, EOFError):
- return False
-
- def _prompt_command(self, command: str) -> bool:
- """
- Prompt user to type a command.
- Re-prompts on empty input to ensure user provides something.
- """
- try:
- if self.is_interactive:
- while True:
- self.console.print(f"\n[yellow]Try:[/yellow] [bold]{command}[/bold]")
- self.console.print("\n[bold green]$[/bold green] ", end="")
- user_input = input()
-
- # If empty, re-prompt and give hint
- if not user_input.strip():
- self.console.print(
- "[dim]Type the command above or anything else to continue[/dim]"
- )
- continue
-
- break
-
- self.console.print("[green]✓[/green] [dim]Let's see what Cortex does...[/dim]\n")
- else:
- self.console.print(f"\n[yellow]Command:[/yellow] [bold]{command}[/bold]\n")
- time.sleep(1)
-
- return True
- except (KeyboardInterrupt, EOFError):
- return False
-
- def _simulate_cortex_output(self, packages: list[str], show_execution: bool = False) -> None:
- """Simulate real Cortex output with CX branding"""
-
- # Understanding phase
- with self.console.status("[cyan]CX[/cyan] Understanding request...", spinner="dots"):
- time.sleep(0.8)
-
- # Planning phase
- with self.console.status("[cyan]CX[/cyan] Planning installation...", spinner="dots"):
- time.sleep(1.0)
-
- pkg_str = " ".join(packages)
- self.console.print(f" [cyan]CX[/cyan] │ Installing {pkg_str}...\n")
- time.sleep(0.5)
-
- # Show generated commands
- self.console.print("[bold]Generated commands:[/bold]")
- self.console.print(" 1. [dim]sudo apt update[/dim]")
-
- for i, pkg in enumerate(packages, 2):
- self.console.print(f" {i}. [dim]sudo apt install -y {pkg}[/dim]")
-
- if not show_execution:
- self.console.print(
- "\n[yellow]To execute these commands, run with --execute flag[/yellow]"
- )
- self.console.print("[dim]Example: cortex install docker --execute[/dim]\n")
- else:
- # Simulate execution
- self.console.print("\n[cyan]Executing commands...[/cyan]\n")
- time.sleep(0.5)
-
- total_steps = len(packages) + 1
- for step in range(1, total_steps + 1):
- self.console.print(f"[{step}/{total_steps}] ⏳ Step {step}")
- if step == 1:
- self.console.print(" Command: [dim]sudo apt update[/dim]")
- else:
- self.console.print(
- f" Command: [dim]sudo apt install -y {packages[step - 2]}[/dim]"
- )
- time.sleep(0.8)
- self.console.print()
-
- self.console.print(
- f" [cyan]CX[/cyan] [green]✓[/green] {pkg_str} installed successfully!\n"
- )
-
- # Show installation ID
- self.console.print(f"📝 Installation recorded (ID: {self.installation_id})")
- self.console.print(
- f" To rollback: [cyan]cortex rollback {self.installation_id}[/cyan]\n"
- )
-
- def _section_ai_intelligence(self) -> bool:
- """Section 1: AI Intelligence - NLP, Planning, and Hardware Awareness"""
- self.console.print("[bold cyan]🧠 AI Intelligence & Understanding[/bold cyan]\n")
-
- # Part 1: Natural Language Understanding
- self.console.print("[bold]Part 1: Natural Language Understanding[/bold]")
- self.console.print(
- "Cortex understands what you [italic]mean[/italic], not just exact syntax."
- )
- self.console.print("Ask questions in plain English:\n")
-
- if not self._prompt_command('cortex ask "I need tools for Python web development"'):
- return False
-
- # Simulate AI response
- with self.console.status("[cyan]CX[/cyan] Understanding your request...", spinner="dots"):
- time.sleep(1.0)
- with self.console.status("[cyan]CX[/cyan] Analyzing requirements...", spinner="dots"):
- time.sleep(1.2)
-
- self.console.print(" [cyan]CX[/cyan] [green]✓[/green] [dim]Recommendations ready[/dim]\n")
- time.sleep(0.5)
-
- # Show AI response
- response = """For Python web development on your system, here are the essential tools:
-
-[bold]Web Frameworks:[/bold]
- • [cyan]FastAPI[/cyan] - Modern, fast framework with automatic API documentation
- • [cyan]Flask[/cyan] - Lightweight, flexible microframework
- • [cyan]Django[/cyan] - Full-featured framework with ORM and admin interface
-
-[bold]Development Tools:[/bold]
- • [cyan]uvicorn[/cyan] - ASGI server for FastAPI
- • [cyan]gunicorn[/cyan] - WSGI server for production
- • [cyan]python3-venv[/cyan] - Virtual environments
-
-Install a complete stack with: [cyan]cortex stack webdev[/cyan]
- """
-
- self.console.print(Panel(response, border_style="cyan", title="AI Response"))
- self.console.print()
-
- self.console.print("[bold green]💡 Key Feature:[/bold green]")
- self.console.print(
- "Cortex's AI [bold]understands intent[/bold] and provides smart recommendations.\n"
- )
-
- if not self._wait_for_user():
- return False
-
- # Part 2: Smart Planning
- self.console.print("\n[bold]Part 2: Transparent Planning[/bold]")
- self.console.print("Let's install Docker and Node.js together.")
- self.console.print("[dim]Cortex will show you the plan before executing anything.[/dim]")
-
- if not self._prompt_command('cortex install "docker nodejs"'):
- return False
-
- # Simulate the actual output
- self._simulate_cortex_output(["docker.io", "nodejs"], show_execution=False)
-
- self.console.print("[bold green]🔒 Transparency & Safety:[/bold green]")
- self.console.print(
- "Cortex [bold]shows you exactly what it will do[/bold] before making any changes."
- )
- self.console.print("[dim]No surprises, no unwanted modifications to your system.[/dim]\n")
-
- if not self._wait_for_user():
- return False
-
- # Part 3: Hardware-Aware Intelligence
- self.console.print("\n[bold]Part 3: Hardware-Aware Intelligence[/bold]")
- self.console.print(
- "Cortex detects your hardware and prevents incompatible installations.\n"
- )
-
- # Detect GPU (check both dedicated and integrated)
- gpu = getattr(self.hw, "gpu", None) if self.hw else None
- gpu_info = gpu[0] if (gpu and len(gpu) > 0) else None
-
- # Check for NVIDIA
- nvidia_keywords = ["NVIDIA", "GTX", "RTX", "GEFORCE", "QUADRO", "TESLA"]
- has_nvidia = gpu_info and self._is_gpu_vendor(gpu_info.model, nvidia_keywords)
-
- # Check for AMD (dedicated or integrated Radeon)
- amd_keywords = ["AMD", "RADEON", "RENOIR", "VEGA", "NAVI", "RX "]
- has_amd = gpu_info and self._is_gpu_vendor(gpu_info.model, amd_keywords)
-
- if has_nvidia:
- # NVIDIA GPU - show successful CUDA install
- self.console.print(f"[cyan]Detected GPU:[/cyan] {gpu_info.model}")
- self.console.print("Let's install CUDA for GPU acceleration:")
-
- if not self._prompt_command("cortex install cuda"):
- return False
-
- with self.console.status("[cyan]CX[/cyan] Understanding request...", spinner="dots"):
- time.sleep(0.8)
- with self.console.status(
- "[cyan]CX[/cyan] Checking hardware compatibility...", spinner="dots"
- ):
- time.sleep(1.0)
-
- self.console.print(
- " [cyan]CX[/cyan] [green]✓[/green] NVIDIA GPU detected - CUDA compatible!\n"
- )
- time.sleep(0.5)
-
- self.console.print("[bold]Generated commands:[/bold]")
- self.console.print(" 1. [dim]sudo apt update[/dim]")
- self.console.print(" 2. [dim]sudo apt install -y nvidia-cuda-toolkit[/dim]\n")
-
- self.console.print(
- "[green]✅ Perfect! CUDA will work great on your NVIDIA GPU.[/green]\n"
- )
-
- elif has_amd:
- # AMD GPU - show Cortex catching the mistake
- self.console.print(f"[cyan]Detected GPU:[/cyan] {gpu_info.model}")
- self.console.print("Let's try to install CUDA...")
-
- if not self._prompt_command("cortex install cuda"):
- return False
-
- with self.console.status("[cyan]CX[/cyan] Understanding request...", spinner="dots"):
- time.sleep(0.8)
- with self.console.status(
- "[cyan]CX[/cyan] Checking hardware compatibility...", spinner="dots"
- ):
- time.sleep(1.2)
-
- self.console.print("\n[yellow]⚠️ Hardware Compatibility Warning:[/yellow]")
- time.sleep(0.8)
- self.console.print(f"[cyan]Your GPU:[/cyan] {gpu_info.model}")
- self.console.print("[red]NVIDIA CUDA will not work on AMD hardware![/red]\n")
- time.sleep(1.0)
-
- self.console.print(
- "[cyan]🤖 Cortex suggests:[/cyan] Install ROCm instead (AMD's GPU framework)"
- )
- time.sleep(0.8)
- self.console.print("\n[bold]Recommended alternative:[/bold]")
- self.console.print(" [cyan]cortex install rocm[/cyan]\n")
-
- self.console.print("[green]✅ Cortex prevented an incompatible installation![/green]\n")
-
- else:
- # No GPU - show Python dev tools
- self.console.print("[cyan]No dedicated GPU detected - CPU mode[/cyan]")
- self.console.print("Let's install Python development tools:")
-
- if not self._prompt_command("cortex install python-dev"):
- return False
-
- with self.console.status("[cyan]CX[/cyan] Understanding request...", spinner="dots"):
- time.sleep(0.8)
- with self.console.status("[cyan]CX[/cyan] Planning installation...", spinner="dots"):
- time.sleep(1.0)
-
- self.console.print("[bold]Generated commands:[/bold]")
- self.console.print(" 1. [dim]sudo apt update[/dim]")
- self.console.print(" 2. [dim]sudo apt install -y python3-dev[/dim]")
- self.console.print(" 3. [dim]sudo apt install -y python3-pip[/dim]")
- self.console.print(" 4. [dim]sudo apt install -y python3-venv[/dim]\n")
-
- self.console.print("[bold green]💡 The Difference:[/bold green]")
- self.console.print("Traditional package managers install whatever you ask for.")
- self.console.print(
- "Cortex [bold]checks compatibility FIRST[/bold] and prevents problems!\n"
- )
-
- return self._wait_for_user()
-
- def _section_smart_stacks(self) -> bool:
- """Section 2: Smart Stacks & Complete Workflows"""
- self.console.print("[bold cyan]📚 Smart Stacks - Complete Workflows[/bold cyan]\n")
-
- self.console.print("Stacks are pre-configured bundles of tools for common workflows.")
- self.console.print("Install everything you need with one command.\n")
-
- # List stacks
- if not self._prompt_command("cortex stack --list"):
- return False
-
- self.console.print() # Visual spacing before stacks table
-
- # Show stacks table
- stacks_table = Table(title="📦 Available Stacks", show_header=True)
- stacks_table.add_column("Stack", style="cyan", width=12)
- stacks_table.add_column("Description", style="white", width=22)
- stacks_table.add_column("Packages", style="dim", width=35)
-
- stacks_table.add_row("ml", "Machine Learning (GPU)", "PyTorch, CUDA, Jupyter, pandas...")
- stacks_table.add_row("ml-cpu", "Machine Learning (CPU)", "PyTorch CPU-only version")
- stacks_table.add_row("webdev", "Web Development", "Node, npm, nginx, postgres")
- stacks_table.add_row("devops", "DevOps Tools", "Docker, kubectl, terraform, ansible")
- stacks_table.add_row("data", "Data Science", "Python, pandas, jupyter, postgres")
-
- self.console.print(stacks_table)
- self.console.print(
- "\n [cyan]CX[/cyan] │ Use: [cyan]cortex stack [/cyan] to install a stack\n"
- )
-
- if not self._wait_for_user():
- return False
-
- # Install webdev stack
- self.console.print("\nLet's install the Web Development stack:")
-
- if not self._prompt_command("cortex stack webdev"):
- return False
-
- self.console.print(" [cyan]CX[/cyan] [green]✓[/green] ")
- self.console.print("🚀 Installing stack: [bold]Web Development[/bold]\n")
-
- # Simulate full stack installation
- self._simulate_cortex_output(["nodejs", "npm", "nginx", "postgresql"], show_execution=True)
-
- self.console.print(" [cyan]CX[/cyan] [green]✓[/green] ")
- self.console.print("[green]✅ Stack 'Web Development' installed successfully![/green]")
- self.console.print("[green]Installed 4 packages[/green]\n")
-
- self.console.print("[bold green]💡 Benefit:[/bold green]")
- self.console.print(
- "One command sets up your [bold]entire development environment[/bold].\n"
- )
-
- self.console.print("\n[cyan]💡 Tip:[/cyan] Create custom stacks for your team's workflow!")
- self.console.print(' [dim]cortex stack create "mystack" package1 package2...[/dim]\n')
-
- return self._wait_for_user()
-
- def _section_history_safety(self) -> bool:
- """Section 3: History Tracking & Safety Features"""
- self.console.print("[bold cyan]🔒 History & Safety Features[/bold cyan]\n")
-
- # Part 1: Installation History
- self.console.print("[bold]Part 1: Installation History[/bold]")
- self.console.print("Cortex keeps a complete record of all installations.")
- self.console.print("Review what you've installed anytime:\n")
-
- if not self._prompt_command("cortex history"):
- return False
-
- self.console.print()
-
- # Show history table
- history_table = Table(show_header=True)
- history_table.add_column("ID", style="dim", width=18)
- history_table.add_column("Date", style="cyan", width=20)
- history_table.add_column("Operation", style="white", width=12)
- history_table.add_column("Packages", style="yellow", width=25)
- history_table.add_column("Status", style="green", width=10)
-
- history_table.add_row(
- self.installation_id,
- self._generate_past_date(0),
- "install",
- "nginx, nodejs +2",
- "success",
- )
- history_table.add_row(
- self._generate_id(),
- self._generate_past_date(1, 13, 13),
- "install",
- "docker",
- "success",
- )
- history_table.add_row(
- self._generate_id(),
- self._generate_past_date(1, 14, 25),
- "install",
- "python3-dev",
- "success",
- )
- history_table.add_row(
- self._generate_id(),
- self._generate_past_date(2, 18, 29),
- "install",
- "postgresql",
- "success",
- )
-
- self.console.print(history_table)
- self.console.print()
-
- self.console.print("[bold green]💡 Tracking Feature:[/bold green]")
- self.console.print(
- "Every installation is tracked. You can [bold]review or undo[/bold] any operation.\n"
- )
-
- if not self._wait_for_user():
- return False
-
- # Part 2: Rollback Functionality
- self.console.print("\n[bold]Part 2: Safe Rollback[/bold]")
- self.console.print("Made a mistake? Installed something wrong?")
- self.console.print("Cortex can [bold]roll back any installation[/bold].\n")
-
- self.console.print(
- f"Let's undo our webdev stack installation (ID: {self.installation_id}):"
- )
-
- if not self._prompt_command(f"cortex rollback {self.installation_id}"):
- return False
-
- self.console.print()
- with self.console.status("[cyan]CX[/cyan] Loading installation record...", spinner="dots"):
- time.sleep(0.8)
- with self.console.status("[cyan]CX[/cyan] Planning rollback...", spinner="dots"):
- time.sleep(1.0)
- with self.console.status("[cyan]CX[/cyan] Removing packages...", spinner="dots"):
- time.sleep(1.2)
-
- rollback_id = self._generate_id()
- self.console.print(
- f" [cyan]CX[/cyan] [green]✓[/green] Rollback successful (ID: {rollback_id})\n"
- )
-
- self.console.print(
- "[green]✅ All packages from that installation have been removed.[/green]\n"
- )
-
- self.console.print("[bold green]💡 Peace of Mind:[/bold green]")
- self.console.print(
- "Try anything fearlessly - you can always [bold]roll back[/bold] to a clean state.\n"
- )
-
- return self._wait_for_user()
-
- def _show_finale(self) -> None:
- """Show finale with comparison table and next steps"""
- self.console.print("\n" + "=" * 70)
- self.console.print(
- "[bold green]🎉 Demo Complete - You've Mastered Cortex Basics![/bold green]"
- )
- self.console.print("=" * 70 + "\n")
-
- # Show comparison table (THE WOW FACTOR)
- self.console.print("\n[bold]Why Cortex is Different:[/bold]\n")
-
- comparison_table = Table(
- title="Cortex vs Traditional Package Managers", show_header=True, border_style="cyan"
- )
- comparison_table.add_column("Feature", style="cyan", width=20)
- comparison_table.add_column("Traditional (apt/brew)", style="yellow", width=25)
- comparison_table.add_column("Cortex", style="green", width=25)
-
- comparison_table.add_row("Planning", "Installs immediately", "Shows plan first")
- comparison_table.add_row("Search", "Exact string match", "Semantic/Intent based")
- comparison_table.add_row(
- "Hardware Aware", "Installs anything", "Checks compatibility first"
- )
- comparison_table.add_row("Natural Language", "Strict syntax only", "AI understands intent")
- comparison_table.add_row("Stacks", "Manual script creation", "One-command workflows")
- comparison_table.add_row("Safety", "Manual backups", "Automatic rollback")
- comparison_table.add_row("Multi-Manager", "Choose apt/brew/npm", "One tool, all managers")
-
- self.console.print(comparison_table)
- self.console.print()
-
- # Key takeaways
- summary = """
-[bold]What You've Learned:[/bold]
-
- ✓ [cyan]AI-Powered Understanding[/cyan] - Natural language queries
- ✓ [cyan]Transparent Planning[/cyan] - See commands before execution
- ✓ [cyan]Hardware-Aware[/cyan] - Prevents incompatible installations
- ✓ [cyan]Smart Stacks[/cyan] - Complete workflows in one command
- ✓ [cyan]Full History[/cyan] - Track every installation
- ✓ [cyan]Safe Rollback[/cyan] - Undo anything, anytime
-
-[bold cyan]Ready to use Cortex?[/bold cyan]
-
-Essential commands:
- $ [cyan]cortex wizard[/cyan] # Configure your API key (recommended first step!)
- $ [cyan]cortex install "package"[/cyan] # Install packages
- $ [cyan]cortex ask "question"[/cyan] # Get AI recommendations
- $ [cyan]cortex stack --list[/cyan] # See available stacks
- $ [cyan]cortex stack [/cyan] # Install a complete stack
- $ [cyan]cortex history[/cyan] # View installation history
- $ [cyan]cortex rollback [/cyan] # Undo an installation
- $ [cyan]cortex doctor[/cyan] # Check system health
- $ [cyan]cortex --help[/cyan] # See all commands
-
-[dim]GitHub: github.com/cortexlinux/cortex[/dim]
- """
-
- self.console.print(Panel(summary, border_style="green", title="🚀 Next Steps"))
- self.console.print("\n[bold]Thank you for trying Cortex! Happy installing! 🎉[/bold]\n")
+
+console = Console()
+
+
+def _run_cortex_command(args: list[str], capture: bool = False) -> tuple[int, str]:
+ """Run a cortex command and return exit code and output."""
+ cmd = ["cortex"] + args
+ if capture:
+ result = subprocess.run(cmd, capture_output=True, text=True)
+ return result.returncode, result.stdout + result.stderr
+ else:
+ result = subprocess.run(cmd)
+ return result.returncode, ""
+
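+# Example (sketch, not used by the demo itself): capture=True returns the exit
+# code plus combined stdout/stderr instead of streaming to the terminal, e.g.
+#   code, output = _run_cortex_command(["history", "--limit", "5"], capture=True)
+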
+
+def _wait_for_enter():
+ """Wait for user to press enter."""
+ console.print("\n[dim]Press Enter to continue...[/dim]")
+ input()
+
+
+def _section(title: str, problem: str):
+ """Display a compact section header."""
+ console.print(f"\n[bold cyan]{'─' * 50}[/bold cyan]")
+ console.print(f"[bold white]{title}[/bold white]")
+ console.print(f"[dim]{problem}[/dim]\n")
def run_demo() -> int:
- """
- Entry point for the interactive Cortex demo.
- Teaches users Cortex through hands-on practice.
- """
- demo = CortexDemo()
- return demo.run()
+ """Run the interactive Cortex demo."""
+ console.clear()
+ show_banner()
+
+ # ─────────────────────────────────────────────────────────────────
+ # INTRODUCTION
+ # ─────────────────────────────────────────────────────────────────
+
+ intro = """
+**Cortex** - The AI-native package manager for Linux.
+
+In this demo you'll try:
+• **Ask** - Query your system in natural language
+• **Install** - Install packages with AI interpretation
+• **Rollback** - Undo installations safely
+"""
+ console.print(Panel(Markdown(intro), title="[cyan]Demo[/cyan]", border_style="cyan"))
+ _wait_for_enter()
+
+ # ─────────────────────────────────────────────────────────────────
+ # ASK COMMAND
+ # ─────────────────────────────────────────────────────────────────
+
+ _section("🔍 Ask Command", "Query your system without memorizing Linux commands.")
+
+ console.print("[dim]Examples: 'What Python version?', 'How much disk space?'[/dim]\n")
+
+ user_question = Prompt.ask(
+ "[cyan]What would you like to ask?[/cyan]", default="What version of Python is installed?"
+ )
+
+ console.print(f'\n[yellow]$[/yellow] cortex ask "{user_question}"\n')
+ _run_cortex_command(["ask", user_question])
+
+ _wait_for_enter()
+
+ # ─────────────────────────────────────────────────────────────────
+ # INSTALL COMMAND
+ # ─────────────────────────────────────────────────────────────────
+
+ _section("📦 Install Command", "Describe what you want - Cortex finds the right packages.")
+
+ console.print("[dim]Examples: 'a web server', 'python dev tools', 'docker'[/dim]\n")
+
+ user_install = Prompt.ask(
+ "[cyan]What would you like to install?[/cyan]", default="a lightweight text editor"
+ )
+
+ console.print(f'\n[yellow]$[/yellow] cortex install "{user_install}" --dry-run\n')
+ _run_cortex_command(["install", user_install, "--dry-run"])
+
+ console.print()
+ if Confirm.ask("Actually install this?", default=False):
+ console.print(f'\n[yellow]$[/yellow] cortex install "{user_install}" --execute\n')
+ _run_cortex_command(["install", user_install, "--execute"])
+
+ _wait_for_enter()
+
+ # ─────────────────────────────────────────────────────────────────
+ # ROLLBACK COMMAND
+ # ─────────────────────────────────────────────────────────────────
+
+ _section("⏪ Rollback Command", "Undo any installation by reverting to the previous state.")
+
+ console.print("[dim]First, let's see your installation history with IDs:[/dim]\n")
+ console.print("[yellow]$[/yellow] cortex history --limit 5\n")
+ _run_cortex_command(["history", "--limit", "5"])
+
+ _wait_for_enter()
+
+ if Confirm.ask("Preview a rollback?", default=False):
+ console.print("\n[cyan]Copy an installation ID from the history above:[/cyan]")
+ console.print("[dim]$ cortex rollback [/dim]", end="")
+ rollback_id = input().strip()
+
+ if rollback_id:
+ console.print(f"\n[yellow]$[/yellow] cortex rollback {rollback_id} --dry-run\n")
+ _run_cortex_command(["rollback", rollback_id, "--dry-run"])
+
+ if Confirm.ask("Actually rollback?", default=False):
+ console.print(f"\n[yellow]$[/yellow] cortex rollback {rollback_id}\n")
+ _run_cortex_command(["rollback", rollback_id])
+
+ # ─────────────────────────────────────────────────────────────────
+ # SUMMARY
+ # ─────────────────────────────────────────────────────────────────
+
+ console.print(f"\n[bold cyan]{'─' * 50}[/bold cyan]")
+ console.print("[bold green]✓ Demo Complete![/bold green]\n")
+ console.print("[dim]Commands: ask, install, history, rollback, stack, status[/dim]")
+ console.print("[dim]Run 'cortex --help' for more.[/dim]\n")
+
+ return 0
+
+
+if __name__ == "__main__":
+ sys.exit(run_demo())
diff --git a/cortex/do_runner.py b/cortex/do_runner.py
new file mode 100644
index 00000000..ea56ecc4
--- /dev/null
+++ b/cortex/do_runner.py
@@ -0,0 +1,55 @@
+"""Do Runner Module for Cortex.
+
+This file provides backward compatibility by re-exporting all classes
+from the modular do_runner package.
+
+For new code, prefer importing directly from the package:
+    from cortex.do_runner import DoHandler, CommandStatus
+"""
+
+# Re-export everything from the modular package
+from cortex.do_runner import (
+ AutoFixer,
+ CommandLog,
+ CommandStatus,
+ ConflictDetector,
+ CortexUserManager,
+ DoHandler,
+ DoRun,
+ DoRunDatabase,
+ ErrorDiagnoser,
+ FileUsefulnessAnalyzer,
+ ProtectedPathsManager,
+ RunMode,
+ TaskNode,
+ TaskTree,
+ TaskTreeExecutor,
+ TaskType,
+ TerminalMonitor,
+ VerificationRunner,
+ get_do_handler,
+ setup_cortex_user,
+)
+
+__all__ = [
+ "CommandLog",
+ "CommandStatus",
+ "DoRun",
+ "RunMode",
+ "TaskNode",
+ "TaskTree",
+ "TaskType",
+ "DoRunDatabase",
+ "CortexUserManager",
+ "ProtectedPathsManager",
+ "TerminalMonitor",
+ "TaskTreeExecutor",
+ "AutoFixer",
+ "ErrorDiagnoser",
+ "ConflictDetector",
+ "FileUsefulnessAnalyzer",
+ "VerificationRunner",
+ "DoHandler",
+ "get_do_handler",
+ "setup_cortex_user",
+]
diff --git a/cortex/do_runner/__init__.py b/cortex/do_runner/__init__.py
new file mode 100644
index 00000000..135159b2
--- /dev/null
+++ b/cortex/do_runner/__init__.py
@@ -0,0 +1,121 @@
+"""
+Do Runner Module for Cortex.
+
+Enables the ask command to write, read, and execute commands to solve problems.
+Manages privilege escalation, command logging, and user confirmation flows.
+
+This module is organized into the following submodules:
+- models: Data classes and enums (CommandStatus, RunMode, TaskType, etc.)
+- database: DoRunDatabase for storing run history
+- managers: CortexUserManager, ProtectedPathsManager
+- terminal: TerminalMonitor for watching terminal activity
+- executor: TaskTreeExecutor for advanced command execution
+- diagnosis: ErrorDiagnoser, AutoFixer for error handling
+- verification: ConflictDetector, VerificationRunner, FileUsefulnessAnalyzer
+- handler: Main DoHandler class
+"""
+
+from .database import DoRunDatabase
+from .diagnosis import (
+ ALL_ERROR_PATTERNS,
+ LOGIN_REQUIREMENTS,
+ UBUNTU_PACKAGE_MAP,
+ UBUNTU_SERVICE_MAP,
+ AutoFixer,
+ ErrorDiagnoser,
+ LoginHandler,
+ LoginRequirement,
+ get_error_category,
+ get_severity,
+ is_critical_error,
+)
+
+# New structured diagnosis engine
+from .diagnosis_v2 import (
+ ERROR_PATTERNS,
+ DiagnosisEngine,
+ DiagnosisResult,
+ ErrorCategory,
+ ErrorStackEntry,
+ ExecutionResult,
+ FixCommand,
+ FixPlan,
+ VariableResolution,
+ get_diagnosis_engine,
+)
+from .executor import TaskTreeExecutor
+from .handler import (
+ DoHandler,
+ get_do_handler,
+ setup_cortex_user,
+)
+from .managers import (
+ CortexUserManager,
+ ProtectedPathsManager,
+)
+from .models import (
+ CommandLog,
+ CommandStatus,
+ DoRun,
+ RunMode,
+ TaskNode,
+ TaskTree,
+ TaskType,
+)
+from .terminal import TerminalMonitor
+from .verification import (
+ ConflictDetector,
+ FileUsefulnessAnalyzer,
+ VerificationRunner,
+)
+
+__all__ = [
+ # Models
+ "CommandLog",
+ "CommandStatus",
+ "DoRun",
+ "RunMode",
+ "TaskNode",
+ "TaskTree",
+ "TaskType",
+ # Database
+ "DoRunDatabase",
+ # Managers
+ "CortexUserManager",
+ "ProtectedPathsManager",
+ # Terminal
+ "TerminalMonitor",
+ # Executor
+ "TaskTreeExecutor",
+ # Diagnosis (legacy)
+ "AutoFixer",
+ "ErrorDiagnoser",
+ "LoginHandler",
+ "LoginRequirement",
+ "LOGIN_REQUIREMENTS",
+ "UBUNTU_PACKAGE_MAP",
+ "UBUNTU_SERVICE_MAP",
+ "ALL_ERROR_PATTERNS",
+ "get_error_category",
+ "get_severity",
+ "is_critical_error",
+ # Diagnosis v2 (structured)
+ "DiagnosisEngine",
+ "ErrorCategory",
+ "DiagnosisResult",
+ "FixCommand",
+ "FixPlan",
+ "VariableResolution",
+ "ExecutionResult",
+ "ErrorStackEntry",
+ "ERROR_PATTERNS",
+ "get_diagnosis_engine",
+ # Verification
+ "ConflictDetector",
+ "FileUsefulnessAnalyzer",
+ "VerificationRunner",
+ # Handler
+ "DoHandler",
+ "get_do_handler",
+ "setup_cortex_user",
+]
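+
+# Example (illustrative): consumers can import the public surface from the package
+# root rather than from individual submodules, e.g.
+#   from cortex.do_runner import DoHandler, DoRunDatabase, RunMode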
diff --git a/cortex/do_runner/database.py b/cortex/do_runner/database.py
new file mode 100644
index 00000000..d5defe5a
--- /dev/null
+++ b/cortex/do_runner/database.py
@@ -0,0 +1,498 @@
+"""Database module for storing do run history."""
+
+import datetime
+import hashlib
+import json
+import os
+import sqlite3
+from pathlib import Path
+from typing import Any
+
+from rich.console import Console
+
+from .models import CommandLog, CommandStatus, DoRun, RunMode
+
+console = Console()
+
+
+class DoRunDatabase:
+ """SQLite database for storing do run history."""
+
+ def __init__(self, db_path: Path | None = None):
+ self.db_path = db_path or Path.home() / ".cortex" / "do_runs.db"
+ self._ensure_directory()
+ self._init_db()
+
+ def _ensure_directory(self):
+ """Ensure the database directory exists with proper permissions."""
+ try:
+ self.db_path.parent.mkdir(parents=True, exist_ok=True)
+ if not os.access(self.db_path.parent, os.W_OK):
+ raise OSError(f"Directory {self.db_path.parent} is not writable")
+ except OSError:
+ alt_path = Path("/tmp") / ".cortex" / "do_runs.db"
+ alt_path.parent.mkdir(parents=True, exist_ok=True)
+ self.db_path = alt_path
+ console.print(
+ f"[yellow]Warning: Using alternate database path: {self.db_path}[/yellow]"
+ )
+
+ def _init_db(self):
+ """Initialize the database schema."""
+ try:
+ with sqlite3.connect(str(self.db_path)) as conn:
+ conn.execute("""
+ CREATE TABLE IF NOT EXISTS do_runs (
+ run_id TEXT PRIMARY KEY,
+ session_id TEXT,
+ summary TEXT NOT NULL,
+ commands_log TEXT NOT NULL,
+ commands_list TEXT,
+ mode TEXT NOT NULL,
+ user_query TEXT,
+ started_at TEXT,
+ completed_at TEXT,
+ files_accessed TEXT,
+ privileges_granted TEXT,
+ full_data TEXT,
+ total_commands INTEGER DEFAULT 0,
+ successful_commands INTEGER DEFAULT 0,
+ failed_commands INTEGER DEFAULT 0,
+ skipped_commands INTEGER DEFAULT 0
+ )
+ """)
+
+ # Create sessions table
+ conn.execute("""
+ CREATE TABLE IF NOT EXISTS do_sessions (
+ session_id TEXT PRIMARY KEY,
+ started_at TEXT,
+ ended_at TEXT,
+ total_runs INTEGER DEFAULT 0,
+ total_queries TEXT
+ )
+ """)
+
+ conn.execute("""
+ CREATE TABLE IF NOT EXISTS do_run_commands (
+ id INTEGER PRIMARY KEY AUTOINCREMENT,
+ run_id TEXT NOT NULL,
+ command_index INTEGER NOT NULL,
+ command TEXT NOT NULL,
+ purpose TEXT,
+ status TEXT NOT NULL,
+ output_truncated TEXT,
+ error_truncated TEXT,
+ duration_seconds REAL DEFAULT 0,
+ timestamp TEXT,
+ useful INTEGER DEFAULT 1,
+ FOREIGN KEY (run_id) REFERENCES do_runs(run_id)
+ )
+ """)
+
+ conn.execute("""
+ CREATE INDEX IF NOT EXISTS idx_do_runs_started
+ ON do_runs(started_at DESC)
+ """)
+
+ conn.execute("""
+ CREATE INDEX IF NOT EXISTS idx_do_run_commands_run_id
+ ON do_run_commands(run_id)
+ """)
+
+ self._migrate_schema(conn)
+ conn.commit()
+ except sqlite3.OperationalError as e:
+ raise OSError(f"Failed to initialize database at {self.db_path}: {e}")
+
+ def _migrate_schema(self, conn: sqlite3.Connection):
+ """Add new columns to existing tables if they don't exist."""
+ cursor = conn.execute("PRAGMA table_info(do_runs)")
+ existing_columns = {row[1] for row in cursor.fetchall()}
+
+ new_columns = [
+ ("total_commands", "INTEGER DEFAULT 0"),
+ ("successful_commands", "INTEGER DEFAULT 0"),
+ ("failed_commands", "INTEGER DEFAULT 0"),
+ ("skipped_commands", "INTEGER DEFAULT 0"),
+ ("commands_list", "TEXT"),
+ ("session_id", "TEXT"),
+ ]
+
+ for col_name, col_type in new_columns:
+ if col_name not in existing_columns:
+ try:
+ conn.execute(f"ALTER TABLE do_runs ADD COLUMN {col_name} {col_type}")
+ except sqlite3.OperationalError:
+ pass
+
+ cursor = conn.execute("""
+ SELECT run_id, full_data FROM do_runs
+ WHERE total_commands IS NULL OR total_commands = 0 OR commands_list IS NULL
+ """)
+
+ for row in cursor.fetchall():
+ run_id = row[0]
+ try:
+ full_data = json.loads(row[1]) if row[1] else {}
+ commands = full_data.get("commands", [])
+ total = len(commands)
+ success = sum(1 for c in commands if c.get("status") == "success")
+ failed = sum(1 for c in commands if c.get("status") == "failed")
+ skipped = sum(1 for c in commands if c.get("status") == "skipped")
+
+ commands_list = json.dumps([c.get("command", "") for c in commands])
+
+ conn.execute(
+ """
+ UPDATE do_runs SET
+ total_commands = ?,
+ successful_commands = ?,
+ failed_commands = ?,
+ skipped_commands = ?,
+ commands_list = ?
+ WHERE run_id = ?
+ """,
+ (total, success, failed, skipped, commands_list, run_id),
+ )
+
+ for idx, cmd in enumerate(commands):
+ exists = conn.execute(
+ "SELECT 1 FROM do_run_commands WHERE run_id = ? AND command_index = ?",
+ (run_id, idx),
+ ).fetchone()
+
+ if not exists:
+ output = cmd.get("output", "")[:250] if cmd.get("output") else ""
+ error = cmd.get("error", "")[:250] if cmd.get("error") else ""
+ conn.execute(
+ """
+ INSERT INTO do_run_commands
+ (run_id, command_index, command, purpose, status,
+ output_truncated, error_truncated, duration_seconds, timestamp, useful)
+ VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
+ """,
+ (
+ run_id,
+ idx,
+ cmd.get("command", ""),
+ cmd.get("purpose", ""),
+ cmd.get("status", "pending"),
+ output,
+ error,
+ cmd.get("duration_seconds", 0),
+ cmd.get("timestamp", ""),
+ 1 if cmd.get("useful", True) else 0,
+ ),
+ )
+ except (json.JSONDecodeError, KeyError):
+ pass
+
+ def _generate_run_id(self) -> str:
+ """Generate a unique run ID."""
+ timestamp = datetime.datetime.now().strftime("%Y%m%d%H%M%S%f")
+ random_part = hashlib.sha256(os.urandom(16)).hexdigest()[:8]
+ return f"do_{timestamp}_{random_part}"
+
+ def _truncate_output(self, text: str, max_length: int = 250) -> str:
+ """Truncate output to specified length."""
+ if not text:
+ return ""
+ if len(text) <= max_length:
+ return text
+ return text[:max_length] + "... [truncated]"
+
+ def save_run(self, run: DoRun) -> str:
+ """Save a do run to the database with detailed command information."""
+ if not run.run_id:
+ run.run_id = self._generate_run_id()
+
+ commands_log = run.get_commands_log_string()
+
+ total_commands = len(run.commands)
+ successful_commands = sum(1 for c in run.commands if c.status == CommandStatus.SUCCESS)
+ failed_commands = sum(1 for c in run.commands if c.status == CommandStatus.FAILED)
+ skipped_commands = sum(1 for c in run.commands if c.status == CommandStatus.SKIPPED)
+
+ commands_list = json.dumps([cmd.command for cmd in run.commands])
+
+ with sqlite3.connect(str(self.db_path)) as conn:
+ conn.execute(
+ """
+ INSERT OR REPLACE INTO do_runs
+ (run_id, session_id, summary, commands_log, commands_list, mode, user_query, started_at,
+ completed_at, files_accessed, privileges_granted, full_data,
+ total_commands, successful_commands, failed_commands, skipped_commands)
+ VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
+ """,
+ (
+ run.run_id,
+ run.session_id or None,
+ run.summary,
+ commands_log,
+ commands_list,
+ run.mode.value,
+ run.user_query,
+ run.started_at,
+ run.completed_at,
+ json.dumps(run.files_accessed),
+ json.dumps(run.privileges_granted),
+ json.dumps(run.to_dict()),
+ total_commands,
+ successful_commands,
+ failed_commands,
+ skipped_commands,
+ ),
+ )
+
+ conn.execute("DELETE FROM do_run_commands WHERE run_id = ?", (run.run_id,))
+
+ for idx, cmd in enumerate(run.commands):
+ conn.execute(
+ """
+ INSERT INTO do_run_commands
+ (run_id, command_index, command, purpose, status,
+ output_truncated, error_truncated, duration_seconds, timestamp, useful)
+ VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
+ """,
+ (
+ run.run_id,
+ idx,
+ cmd.command,
+ cmd.purpose,
+ cmd.status.value,
+ self._truncate_output(cmd.output, 250),
+ self._truncate_output(cmd.error, 250),
+ cmd.duration_seconds,
+ cmd.timestamp,
+ 1 if cmd.useful else 0,
+ ),
+ )
+
+ conn.commit()
+
+ return run.run_id
+
+ def get_run(self, run_id: str) -> DoRun | None:
+ """Get a specific run by ID."""
+ with sqlite3.connect(str(self.db_path)) as conn:
+ conn.row_factory = sqlite3.Row
+ cursor = conn.execute("SELECT * FROM do_runs WHERE run_id = ?", (run_id,))
+ row = cursor.fetchone()
+
+ if row:
+ full_data = json.loads(row["full_data"])
+ run = DoRun(
+ run_id=full_data["run_id"],
+ summary=full_data["summary"],
+ mode=RunMode(full_data["mode"]),
+ commands=[CommandLog.from_dict(c) for c in full_data["commands"]],
+ started_at=full_data.get("started_at", ""),
+ completed_at=full_data.get("completed_at", ""),
+ user_query=full_data.get("user_query", ""),
+ files_accessed=full_data.get("files_accessed", []),
+ privileges_granted=full_data.get("privileges_granted", []),
+ session_id=row["session_id"] if "session_id" in row.keys() else "",
+ )
+ return run
+ return None
+
+ def get_run_commands(self, run_id: str) -> list[dict[str, Any]]:
+ """Get detailed command information for a run."""
+ with sqlite3.connect(str(self.db_path)) as conn:
+ conn.row_factory = sqlite3.Row
+ cursor = conn.execute(
+ """
+ SELECT command_index, command, purpose, status,
+ output_truncated, error_truncated, duration_seconds, timestamp, useful
+ FROM do_run_commands
+ WHERE run_id = ?
+ ORDER BY command_index
+ """,
+ (run_id,),
+ )
+
+ commands = []
+ for row in cursor:
+ commands.append(
+ {
+ "index": row["command_index"],
+ "command": row["command"],
+ "purpose": row["purpose"],
+ "status": row["status"],
+ "output": row["output_truncated"],
+ "error": row["error_truncated"],
+ "duration": row["duration_seconds"],
+ "timestamp": row["timestamp"],
+ "useful": bool(row["useful"]),
+ }
+ )
+ return commands
+
+ def get_run_stats(self, run_id: str) -> dict[str, Any] | None:
+ """Get command statistics for a run."""
+ with sqlite3.connect(str(self.db_path)) as conn:
+ conn.row_factory = sqlite3.Row
+ cursor = conn.execute(
+ """
+ SELECT run_id, summary, total_commands, successful_commands,
+ failed_commands, skipped_commands, started_at, completed_at
+ FROM do_runs WHERE run_id = ?
+ """,
+ (run_id,),
+ )
+ row = cursor.fetchone()
+
+ if row:
+ return {
+ "run_id": row["run_id"],
+ "summary": row["summary"],
+ "total_commands": row["total_commands"] or 0,
+ "successful_commands": row["successful_commands"] or 0,
+ "failed_commands": row["failed_commands"] or 0,
+ "skipped_commands": row["skipped_commands"] or 0,
+ "started_at": row["started_at"],
+ "completed_at": row["completed_at"],
+ }
+ return None
+
+ def get_commands_list(self, run_id: str) -> list[str]:
+ """Get just the list of commands for a run."""
+ with sqlite3.connect(str(self.db_path)) as conn:
+ conn.row_factory = sqlite3.Row
+ cursor = conn.execute("SELECT commands_list FROM do_runs WHERE run_id = ?", (run_id,))
+ row = cursor.fetchone()
+
+ if row and row["commands_list"]:
+ try:
+ return json.loads(row["commands_list"])
+ except (json.JSONDecodeError, TypeError):
+ pass
+
+ cursor = conn.execute(
+ "SELECT command FROM do_run_commands WHERE run_id = ? ORDER BY command_index",
+ (run_id,),
+ )
+ return [row["command"] for row in cursor.fetchall()]
+
+ def get_recent_runs(self, limit: int = 20) -> list[DoRun]:
+ """Get recent do runs."""
+ with sqlite3.connect(str(self.db_path)) as conn:
+ conn.row_factory = sqlite3.Row
+ cursor = conn.execute(
+ "SELECT full_data, session_id FROM do_runs ORDER BY started_at DESC LIMIT ?",
+ (limit,),
+ )
+ runs = []
+ for row in cursor:
+ full_data = json.loads(row["full_data"])
+ run = DoRun(
+ run_id=full_data["run_id"],
+ summary=full_data["summary"],
+ mode=RunMode(full_data["mode"]),
+ commands=[CommandLog.from_dict(c) for c in full_data["commands"]],
+ started_at=full_data.get("started_at", ""),
+ completed_at=full_data.get("completed_at", ""),
+ user_query=full_data.get("user_query", ""),
+ files_accessed=full_data.get("files_accessed", []),
+ privileges_granted=full_data.get("privileges_granted", []),
+ )
+ run.session_id = row["session_id"]
+ runs.append(run)
+ return runs
+
+ def create_session(self) -> str:
+ """Create a new session and return the session ID."""
+ session_id = f"session_{datetime.datetime.now().strftime('%Y%m%d%H%M%S%f')}_{hashlib.md5(str(datetime.datetime.now().timestamp()).encode()).hexdigest()[:8]}"
+
+ with sqlite3.connect(str(self.db_path)) as conn:
+ conn.execute(
+ """INSERT INTO do_sessions (session_id, started_at, total_runs, total_queries)
+ VALUES (?, ?, 0, '[]')""",
+ (session_id, datetime.datetime.now().isoformat()),
+ )
+ conn.commit()
+
+ return session_id
+
+ def update_session(
+ self, session_id: str, query: str | None = None, increment_runs: bool = False
+ ):
+ """Update a session with new query or run count."""
+ with sqlite3.connect(str(self.db_path)) as conn:
+ if increment_runs:
+ conn.execute(
+ "UPDATE do_sessions SET total_runs = total_runs + 1 WHERE session_id = ?",
+ (session_id,),
+ )
+
+ if query:
+ # Get current queries
+ cursor = conn.execute(
+ "SELECT total_queries FROM do_sessions WHERE session_id = ?", (session_id,)
+ )
+ row = cursor.fetchone()
+ if row:
+ queries = json.loads(row[0]) if row[0] else []
+ queries.append(query)
+ conn.execute(
+ "UPDATE do_sessions SET total_queries = ? WHERE session_id = ?",
+ (json.dumps(queries), session_id),
+ )
+
+ conn.commit()
+
+ def end_session(self, session_id: str):
+ """Mark a session as ended."""
+ with sqlite3.connect(str(self.db_path)) as conn:
+ conn.execute(
+ "UPDATE do_sessions SET ended_at = ? WHERE session_id = ?",
+ (datetime.datetime.now().isoformat(), session_id),
+ )
+ conn.commit()
+
+ def get_session_runs(self, session_id: str) -> list[DoRun]:
+ """Get all runs in a session."""
+ with sqlite3.connect(str(self.db_path)) as conn:
+ conn.row_factory = sqlite3.Row
+ cursor = conn.execute(
+ "SELECT full_data FROM do_runs WHERE session_id = ? ORDER BY started_at ASC",
+ (session_id,),
+ )
+ runs = []
+ for row in cursor:
+ full_data = json.loads(row["full_data"])
+ run = DoRun(
+ run_id=full_data["run_id"],
+ summary=full_data["summary"],
+ mode=RunMode(full_data["mode"]),
+ commands=[CommandLog.from_dict(c) for c in full_data["commands"]],
+ started_at=full_data.get("started_at", ""),
+ completed_at=full_data.get("completed_at", ""),
+ user_query=full_data.get("user_query", ""),
+ )
+ run.session_id = session_id
+ runs.append(run)
+ return runs
+
+ def get_recent_sessions(self, limit: int = 10) -> list[dict]:
+ """Get recent sessions with their run counts."""
+ with sqlite3.connect(str(self.db_path)) as conn:
+ conn.row_factory = sqlite3.Row
+ cursor = conn.execute(
+ """SELECT session_id, started_at, ended_at, total_runs, total_queries
+ FROM do_sessions ORDER BY started_at DESC LIMIT ?""",
+ (limit,),
+ )
+ sessions = []
+ for row in cursor:
+ sessions.append(
+ {
+ "session_id": row["session_id"],
+ "started_at": row["started_at"],
+ "ended_at": row["ended_at"],
+ "total_runs": row["total_runs"],
+ "queries": json.loads(row["total_queries"]) if row["total_queries"] else [],
+ }
+ )
+ return sessions
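+
+
+# Example (illustrative sketch): persisting and inspecting a run. `some_run` is a
+# hypothetical DoRun built elsewhere (e.g. by the handler); it is not defined here.
+#
+#   db = DoRunDatabase()                  # defaults to ~/.cortex/do_runs.db
+#   run_id = db.save_run(some_run)        # also fills the do_run_commands table
+#   stats = db.get_run_stats(run_id)      # per-run success/failure/skip counts
+#   recent = db.get_recent_runs(limit=5)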
diff --git a/cortex/do_runner/diagnosis.py b/cortex/do_runner/diagnosis.py
new file mode 100644
index 00000000..4df91adc
--- /dev/null
+++ b/cortex/do_runner/diagnosis.py
@@ -0,0 +1,4090 @@
+"""
+Comprehensive Error Diagnosis and Auto-Fix for Cortex Do Runner.
+
+Handles all categories of Linux system errors:
+1. Command & Shell Errors
+2. File & Directory Errors
+3. Permission & Ownership Errors
+4. Process & Execution Errors
+5. Memory & Resource Errors
+6. Disk & Filesystem Errors
+7. Networking Errors
+8. Package Manager Errors
+9. User & Authentication Errors
+10. Device & Hardware Errors
+11. Compilation & Build Errors
+12. Archive & Compression Errors
+13. Shell Script Errors
+14. Environment & PATH Errors
+15. Miscellaneous System Errors
+16. Docker & Container Errors
+17. Login/Credential Required Errors
+18. Config File Errors (nginx, Apache, MySQL)
+"""
+
+import os
+import re
+import shutil
+import subprocess
+from collections.abc import Callable
+from dataclasses import dataclass, field
+from typing import Any
+
+from rich.console import Console
+
+console = Console()
+
+
+# ============================================================================
+# Error Pattern Definitions by Category
+# ============================================================================
+
+
+@dataclass
+class ErrorPattern:
+ """Defines an error pattern and its fix strategy."""
+
+ pattern: str
+ error_type: str
+ category: str
+ description: str
+ can_auto_fix: bool = False
+ fix_strategy: str = ""
+ severity: str = "error" # error, warning, critical
+
+
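+# Example (illustrative only, not executed): an entry mirrors the patterns listed
+# below; each regex is intended to be searched against a failed command's output,
+# with the actual matching and fix selection handled by ErrorDiagnoser later in
+# this module.
+#
+#   _sample = ErrorPattern(
+#       pattern=r"command not found",
+#       error_type="command_not_found",
+#       category="command_shell",
+#       description="Command not installed",
+#       can_auto_fix=True,
+#       fix_strategy="install_package",
+#   )
+#   # re.search(_sample.pattern, stderr_text) would suggest an install_package fix
+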
+# Category 1: Command & Shell Errors
+COMMAND_SHELL_ERRORS = [
+ # Timeout errors (check first for our specific message)
+ ErrorPattern(
+ r"[Cc]ommand timed out after \d+ seconds",
+ "command_timeout",
+ "timeout",
+ "Command timed out - operation took too long",
+ True,
+ "retry_with_longer_timeout",
+ ),
+ ErrorPattern(
+ r"[Tt]imed out",
+ "timeout",
+ "timeout",
+ "Operation timed out",
+ True,
+ "retry_with_longer_timeout",
+ ),
+ ErrorPattern(
+ r"[Tt]imeout",
+ "timeout",
+ "timeout",
+ "Operation timed out",
+ True,
+ "retry_with_longer_timeout",
+ ),
+ # Standard command errors
+ ErrorPattern(
+ r"command not found",
+ "command_not_found",
+ "command_shell",
+ "Command not installed",
+ True,
+ "install_package",
+ ),
+ ErrorPattern(
+ r"No such file or directory",
+ "not_found",
+ "command_shell",
+ "File or directory not found",
+ True,
+ "create_path",
+ ),
+ ErrorPattern(
+ r"Permission denied",
+ "permission_denied",
+ "command_shell",
+ "Permission denied",
+ True,
+ "use_sudo",
+ ),
+ ErrorPattern(
+ r"Operation not permitted",
+ "operation_not_permitted",
+ "command_shell",
+ "Operation not permitted (may need root)",
+ True,
+ "use_sudo",
+ ),
+ ErrorPattern(
+ r"Not a directory",
+ "not_a_directory",
+ "command_shell",
+ "Expected directory but found file",
+ False,
+ "check_path",
+ ),
+ ErrorPattern(
+ r"Is a directory",
+ "is_a_directory",
+ "command_shell",
+ "Expected file but found directory",
+ False,
+ "check_path",
+ ),
+ ErrorPattern(
+ r"Invalid argument",
+ "invalid_argument",
+ "command_shell",
+ "Invalid argument passed",
+ False,
+ "check_args",
+ ),
+ ErrorPattern(
+ r"Too many arguments",
+ "too_many_args",
+ "command_shell",
+ "Too many arguments provided",
+ False,
+ "check_args",
+ ),
+ ErrorPattern(
+ r"[Mm]issing operand",
+ "missing_operand",
+ "command_shell",
+ "Required argument missing",
+ False,
+ "check_args",
+ ),
+ ErrorPattern(
+ r"[Aa]mbiguous redirect",
+ "ambiguous_redirect",
+ "command_shell",
+ "Shell redirect is ambiguous",
+ False,
+ "fix_redirect",
+ ),
+ ErrorPattern(
+ r"[Bb]ad substitution",
+ "bad_substitution",
+ "command_shell",
+ "Shell variable substitution error",
+ False,
+ "fix_syntax",
+ ),
+ ErrorPattern(
+ r"[Uu]nbound variable",
+ "unbound_variable",
+ "command_shell",
+ "Variable not set",
+ True,
+ "set_variable",
+ ),
+ ErrorPattern(
+ r"[Ss]yntax error near unexpected token",
+ "syntax_error_token",
+ "command_shell",
+ "Shell syntax error",
+ False,
+ "fix_syntax",
+ ),
+ ErrorPattern(
+ r"[Uu]nexpected EOF",
+ "unexpected_eof",
+ "command_shell",
+ "Unclosed quote or bracket",
+ False,
+ "fix_syntax",
+ ),
+ ErrorPattern(
+ r"[Cc]annot execute binary file",
+ "cannot_execute_binary",
+ "command_shell",
+ "Binary incompatible with system",
+ False,
+ "check_architecture",
+ ),
+ ErrorPattern(
+ r"[Ee]xec format error",
+ "exec_format_error",
+ "command_shell",
+ "Invalid executable format",
+ False,
+ "check_architecture",
+ ),
+ ErrorPattern(
+ r"[Ii]llegal option",
+ "illegal_option",
+ "command_shell",
+ "Unrecognized command option",
+ False,
+ "check_help",
+ ),
+ ErrorPattern(
+ r"[Ii]nvalid option",
+ "invalid_option",
+ "command_shell",
+ "Invalid command option",
+ False,
+ "check_help",
+ ),
+ ErrorPattern(
+ r"[Rr]ead-only file ?system",
+ "readonly_fs",
+ "command_shell",
+ "Filesystem is read-only",
+ True,
+ "remount_rw",
+ ),
+ ErrorPattern(
+ r"[Ii]nput/output error",
+ "io_error",
+ "command_shell",
+ "I/O error (disk issue)",
+ False,
+ "check_disk",
+ "critical",
+ ),
+ ErrorPattern(
+ r"[Tt]ext file busy",
+ "text_file_busy",
+ "command_shell",
+ "File is being executed",
+ True,
+ "wait_retry",
+ ),
+ ErrorPattern(
+ r"[Aa]rgument list too long",
+ "arg_list_too_long",
+ "command_shell",
+ "Too many arguments for command",
+ True,
+ "use_xargs",
+ ),
+ ErrorPattern(
+ r"[Bb]roken pipe",
+ "broken_pipe",
+ "command_shell",
+ "Pipe closed unexpectedly",
+ False,
+ "check_pipe",
+ ),
+]
+
+# Category 2: File & Directory Errors
+FILE_DIRECTORY_ERRORS = [
+ ErrorPattern(
+ r"[Ff]ile exists",
+ "file_exists",
+ "file_directory",
+ "File already exists",
+ True,
+ "backup_overwrite",
+ ),
+ ErrorPattern(
+ r"[Ff]ile name too long",
+ "filename_too_long",
+ "file_directory",
+ "Filename exceeds limit",
+ False,
+ "shorten_name",
+ ),
+ ErrorPattern(
+ r"[Tt]oo many.*symbolic links",
+ "symlink_loop",
+ "file_directory",
+ "Symbolic link loop detected",
+ True,
+ "fix_symlink",
+ ),
+ ErrorPattern(
+ r"[Ss]tale file handle",
+ "stale_handle",
+ "file_directory",
+ "NFS file handle stale",
+ True,
+ "remount_nfs",
+ ),
+ ErrorPattern(
+ r"[Dd]irectory not empty",
+ "dir_not_empty",
+ "file_directory",
+ "Directory has contents",
+ True,
+ "rm_recursive",
+ ),
+ ErrorPattern(
+ r"[Cc]ross-device link",
+ "cross_device_link",
+ "file_directory",
+ "Cannot link across filesystems",
+ True,
+ "copy_instead",
+ ),
+ ErrorPattern(
+ r"[Tt]oo many open files",
+ "too_many_files",
+ "file_directory",
+ "File descriptor limit reached",
+ True,
+ "increase_ulimit",
+ ),
+ ErrorPattern(
+ r"[Qq]uota exceeded",
+ "quota_exceeded",
+ "file_directory",
+ "Disk quota exceeded",
+ False,
+ "check_quota",
+ ),
+ ErrorPattern(
+ r"[Oo]peration timed out",
+ "operation_timeout",
+ "file_directory",
+ "Operation timed out",
+ True,
+ "increase_timeout",
+ ),
+]
+
+# Category 3: Permission & Ownership Errors
+PERMISSION_ERRORS = [
+ ErrorPattern(
+ r"[Aa]ccess denied", "access_denied", "permission", "Access denied", True, "use_sudo"
+ ),
+ ErrorPattern(
+ r"[Aa]uthentication fail",
+ "auth_failure",
+ "permission",
+ "Authentication failed",
+ False,
+ "check_credentials",
+ ),
+ ErrorPattern(
+ r"[Ii]nvalid user", "invalid_user", "permission", "User does not exist", True, "create_user"
+ ),
+ ErrorPattern(
+ r"[Ii]nvalid group",
+ "invalid_group",
+ "permission",
+ "Group does not exist",
+ True,
+ "create_group",
+ ),
+ ErrorPattern(
+ r"[Nn]ot owner", "not_owner", "permission", "Not the owner of file", True, "use_sudo"
+ ),
+]
+
+# Category 4: Process & Execution Errors
+PROCESS_ERRORS = [
+ ErrorPattern(
+ r"[Nn]o such process",
+ "no_such_process",
+ "process",
+ "Process does not exist",
+ False,
+ "check_pid",
+ ),
+ ErrorPattern(
+ r"[Pp]rocess already running",
+ "already_running",
+ "process",
+ "Process already running",
+ True,
+ "kill_existing",
+ ),
+ ErrorPattern(
+ r"[Pp]rocess terminated",
+ "process_terminated",
+ "process",
+ "Process was terminated",
+ False,
+ "check_logs",
+ ),
+ ErrorPattern(
+ r"[Kk]illed",
+ "killed",
+ "process",
+ "Process was killed (OOM?)",
+ False,
+ "check_memory",
+ "critical",
+ ),
+ ErrorPattern(
+ r"[Ss]egmentation fault",
+ "segfault",
+ "process",
+ "Memory access violation",
+ False,
+ "debug_crash",
+ "critical",
+ ),
+ ErrorPattern(
+ r"[Bb]us error",
+ "bus_error",
+ "process",
+ "Bus error (memory alignment)",
+ False,
+ "debug_crash",
+ "critical",
+ ),
+ ErrorPattern(
+ r"[Ff]loating point exception",
+ "fpe",
+ "process",
+ "Floating point exception",
+ False,
+ "debug_crash",
+ ),
+ ErrorPattern(
+ r"[Ii]llegal instruction",
+ "illegal_instruction",
+ "process",
+ "CPU instruction error",
+ False,
+ "check_architecture",
+ "critical",
+ ),
+ ErrorPattern(
+ r"[Tt]race.*trap", "trace_trap", "process", "Debugger trap", False, "check_debugger"
+ ),
+ ErrorPattern(
+ r"[Rr]esource temporarily unavailable",
+ "resource_unavailable",
+ "process",
+ "Resource busy",
+ True,
+ "wait_retry",
+ ),
+ ErrorPattern(
+ r"[Tt]oo many processes",
+ "too_many_processes",
+ "process",
+ "Process limit reached",
+ True,
+ "increase_ulimit",
+ ),
+ ErrorPattern(
+ r"[Oo]peration canceled",
+ "operation_canceled",
+ "process",
+ "Operation was canceled",
+ False,
+ "check_timeout",
+ ),
+]
+
+# Category 5: Memory & Resource Errors
+MEMORY_ERRORS = [
+ ErrorPattern(
+ r"[Oo]ut of memory", "oom", "memory", "Out of memory", True, "free_memory", "critical"
+ ),
+ ErrorPattern(
+ r"[Cc]annot allocate memory",
+ "cannot_allocate",
+ "memory",
+ "Memory allocation failed",
+ True,
+ "free_memory",
+ "critical",
+ ),
+ ErrorPattern(
+ r"[Mm]emory exhausted",
+ "memory_exhausted",
+ "memory",
+ "Memory exhausted",
+ True,
+ "free_memory",
+ "critical",
+ ),
+ ErrorPattern(
+ r"[Ss]tack overflow",
+ "stack_overflow",
+ "memory",
+ "Stack overflow",
+ False,
+ "increase_stack",
+ "critical",
+ ),
+ ErrorPattern(
+ r"[Dd]evice or resource busy",
+ "device_busy",
+ "memory",
+ "Device or resource busy",
+ True,
+ "wait_retry",
+ ),
+ ErrorPattern(
+ r"[Nn]o space left on device",
+ "no_space",
+ "memory",
+ "Disk full",
+ True,
+ "free_disk",
+ "critical",
+ ),
+ ErrorPattern(
+ r"[Dd]isk quota exceeded",
+ "disk_quota",
+ "memory",
+ "Disk quota exceeded",
+ False,
+ "check_quota",
+ ),
+ ErrorPattern(
+ r"[Ff]ile table overflow",
+ "file_table_overflow",
+ "memory",
+ "System file table full",
+ True,
+ "increase_ulimit",
+ "critical",
+ ),
+]
+
+# Category 6: Disk & Filesystem Errors
+FILESYSTEM_ERRORS = [
+ ErrorPattern(
+ r"[Ww]rong fs type",
+ "wrong_fs_type",
+ "filesystem",
+ "Wrong filesystem type",
+ False,
+ "check_fstype",
+ ),
+ ErrorPattern(
+ r"[Ff]ilesystem.*corrupt",
+ "fs_corrupt",
+ "filesystem",
+ "Filesystem corrupted",
+ False,
+ "fsck",
+ "critical",
+ ),
+ ErrorPattern(
+ r"[Ss]uperblock invalid",
+ "superblock_invalid",
+ "filesystem",
+ "Superblock invalid",
+ False,
+ "fsck",
+ "critical",
+ ),
+ ErrorPattern(
+ r"[Mm]ount point does not exist",
+ "mount_point_missing",
+ "filesystem",
+ "Mount point missing",
+ True,
+ "create_mountpoint",
+ ),
+ ErrorPattern(
+ r"[Dd]evice is busy",
+ "device_busy_mount",
+ "filesystem",
+ "Device busy (in use)",
+ True,
+ "lazy_umount",
+ ),
+ ErrorPattern(
+ r"[Nn]ot mounted", "not_mounted", "filesystem", "Filesystem not mounted", True, "mount_fs"
+ ),
+ ErrorPattern(
+ r"[Aa]lready mounted",
+ "already_mounted",
+ "filesystem",
+ "Already mounted",
+ False,
+ "check_mount",
+ ),
+ ErrorPattern(
+ r"[Bb]ad magic number",
+ "bad_magic",
+ "filesystem",
+ "Bad magic number in superblock",
+ False,
+ "fsck",
+ "critical",
+ ),
+ ErrorPattern(
+ r"[Ss]tructure needs cleaning",
+ "needs_cleaning",
+ "filesystem",
+ "Filesystem needs fsck",
+ False,
+ "fsck",
+ ),
+ ErrorPattern(
+ r"[Jj]ournal has aborted",
+ "journal_aborted",
+ "filesystem",
+ "Journal aborted",
+ False,
+ "fsck",
+ "critical",
+ ),
+]
+
+# Category 7: Networking Errors
+NETWORK_ERRORS = [
+ ErrorPattern(
+ r"[Nn]etwork is unreachable",
+ "network_unreachable",
+ "network",
+ "Network unreachable",
+ True,
+ "check_network",
+ ),
+ ErrorPattern(
+ r"[Nn]o route to host", "no_route", "network", "No route to host", True, "check_routing"
+ ),
+ ErrorPattern(
+ r"[Cc]onnection refused",
+ "connection_refused",
+ "network",
+ "Connection refused",
+ True,
+ "check_service",
+ ),
+ ErrorPattern(
+ r"[Cc]onnection timed out",
+ "connection_timeout",
+ "network",
+ "Connection timed out",
+ True,
+ "check_firewall",
+ ),
+ ErrorPattern(
+ r"[Cc]onnection reset by peer",
+ "connection_reset",
+ "network",
+ "Connection reset",
+ False,
+ "check_remote",
+ ),
+ ErrorPattern(
+ r"[Hh]ost is down", "host_down", "network", "Remote host down", False, "check_host"
+ ),
+ ErrorPattern(
+ r"[Tt]emporary failure in name resolution",
+ "dns_temp_fail",
+ "network",
+ "DNS temporary failure",
+ True,
+ "retry_dns",
+ ),
+ ErrorPattern(
+ r"[Nn]ame or service not known",
+ "dns_unknown",
+ "network",
+ "DNS lookup failed",
+ True,
+ "check_dns",
+ ),
+ ErrorPattern(
+ r"[Dd]NS lookup failed", "dns_failed", "network", "DNS lookup failed", True, "check_dns"
+ ),
+ ErrorPattern(
+ r"[Aa]ddress already in use",
+ "address_in_use",
+ "network",
+ "Port already in use",
+ True,
+ "find_port_user",
+ ),
+ ErrorPattern(
+ r"[Cc]annot assign requested address",
+ "cannot_assign_addr",
+ "network",
+ "Address not available",
+ False,
+ "check_interface",
+ ),
+ ErrorPattern(
+ r"[Pp]rotocol not supported",
+ "protocol_not_supported",
+ "network",
+ "Protocol not supported",
+ False,
+ "check_protocol",
+ ),
+ ErrorPattern(
+ r"[Ss]ocket operation on non-socket",
+ "not_socket",
+ "network",
+ "Invalid socket operation",
+ False,
+ "check_fd",
+ ),
+]
+
+# Category 8: Package Manager Errors (Ubuntu/Debian apt)
+PACKAGE_ERRORS = [
+ ErrorPattern(
+ r"[Uu]nable to locate package",
+ "package_not_found",
+ "package",
+ "Package not found",
+ True,
+ "update_repos",
+ ),
+ ErrorPattern(
+ r"[Pp]ackage.*not found",
+ "package_not_found",
+ "package",
+ "Package not found",
+ True,
+ "update_repos",
+ ),
+ ErrorPattern(
+ r"[Ff]ailed to fetch",
+ "fetch_failed",
+ "package",
+ "Failed to download package",
+ True,
+ "change_mirror",
+ ),
+ ErrorPattern(
+ r"[Hh]ash [Ss]um mismatch",
+ "hash_mismatch",
+ "package",
+ "Package checksum mismatch",
+ True,
+ "clean_apt",
+ ),
+ ErrorPattern(
+ r"[Rr]epository.*not signed",
+ "repo_not_signed",
+ "package",
+ "Repository not signed",
+ True,
+ "add_key",
+ ),
+ ErrorPattern(
+ r"[Gg][Pp][Gg] error", "gpg_error", "package", "GPG signature error", True, "fix_gpg"
+ ),
+ ErrorPattern(
+ r"[Dd]ependency problems",
+ "dependency_problems",
+ "package",
+ "Dependency issues",
+ True,
+ "fix_dependencies",
+ ),
+ ErrorPattern(
+ r"[Uu]nmet dependencies",
+ "unmet_dependencies",
+ "package",
+ "Unmet dependencies",
+ True,
+ "fix_dependencies",
+ ),
+ ErrorPattern(
+ r"[Bb]roken packages", "broken_packages", "package", "Broken packages", True, "fix_broken"
+ ),
+ ErrorPattern(
+ r"[Vv]ery bad inconsistent state",
+ "inconsistent_state",
+ "package",
+ "Package in bad state",
+ True,
+ "force_reinstall",
+ ),
+ ErrorPattern(
+ r"[Cc]onflicts with",
+ "package_conflict",
+ "package",
+ "Package conflict",
+ True,
+ "resolve_conflict",
+ ),
+ ErrorPattern(
+ r"dpkg.*lock", "dpkg_lock", "package", "Package manager locked", True, "clear_lock"
+ ),
+ ErrorPattern(r"apt.*lock", "apt_lock", "package", "APT locked", True, "clear_lock"),
+ ErrorPattern(
+ r"E: Could not get lock",
+ "could_not_get_lock",
+ "package",
+ "Package manager locked",
+ True,
+ "clear_lock",
+ ),
+]
+
+# Category 9: User & Authentication Errors
+USER_AUTH_ERRORS = [
+ ErrorPattern(
+ r"[Uu]ser does not exist",
+ "user_not_exist",
+ "user_auth",
+ "User does not exist",
+ True,
+ "create_user",
+ ),
+ ErrorPattern(
+ r"[Gg]roup does not exist",
+ "group_not_exist",
+ "user_auth",
+ "Group does not exist",
+ True,
+ "create_group",
+ ),
+ ErrorPattern(
+ r"[Aa]ccount expired",
+ "account_expired",
+ "user_auth",
+ "Account expired",
+ False,
+ "renew_account",
+ ),
+ ErrorPattern(
+ r"[Pp]assword expired",
+ "password_expired",
+ "user_auth",
+ "Password expired",
+ False,
+ "change_password",
+ ),
+ ErrorPattern(
+ r"[Ii]ncorrect password",
+ "wrong_password",
+ "user_auth",
+ "Wrong password",
+ False,
+ "check_password",
+ ),
+ ErrorPattern(
+ r"[Aa]ccount locked",
+ "account_locked",
+ "user_auth",
+ "Account locked",
+ False,
+ "unlock_account",
+ ),
+]
+
+# Category 16: Docker/Container Errors
+DOCKER_ERRORS = [
+ # Container name conflicts
+ ErrorPattern(
+ r"[Cc]onflict.*container name.*already in use",
+ "container_name_conflict",
+ "docker",
+ "Container name already in use",
+ True,
+ "remove_or_rename_container",
+ ),
+ ErrorPattern(
+ r"is already in use by container",
+ "container_name_conflict",
+ "docker",
+ "Container name already in use",
+ True,
+ "remove_or_rename_container",
+ ),
+ # Container not found
+ ErrorPattern(
+ r"[Nn]o such container",
+ "container_not_found",
+ "docker",
+ "Container does not exist",
+ True,
+ "check_container_name",
+ ),
+ ErrorPattern(
+ r"[Ee]rror: No such container",
+ "container_not_found",
+ "docker",
+ "Container does not exist",
+ True,
+ "check_container_name",
+ ),
+ # Image not found
+ ErrorPattern(
+ r"[Uu]nable to find image",
+ "image_not_found",
+ "docker",
+ "Docker image not found locally",
+ True,
+ "pull_image",
+ ),
+ ErrorPattern(
+ r"[Rr]epository.*not found",
+ "image_not_found",
+ "docker",
+ "Docker image repository not found",
+ True,
+ "check_image_name",
+ ),
+ ErrorPattern(
+ r"manifest.*not found",
+ "manifest_not_found",
+ "docker",
+ "Image manifest not found",
+ True,
+ "check_image_tag",
+ ),
+ # Container already running/stopped
+ ErrorPattern(
+ r"is already running",
+ "container_already_running",
+ "docker",
+ "Container is already running",
+ True,
+ "stop_or_use_existing",
+ ),
+ ErrorPattern(
+ r"is not running",
+ "container_not_running",
+ "docker",
+ "Container is not running",
+ True,
+ "start_container",
+ ),
+ # Port conflicts
+ ErrorPattern(
+ r"[Pp]ort.*already allocated",
+ "port_in_use",
+ "docker",
+ "Port is already in use",
+ True,
+ "free_port_or_use_different",
+ ),
+ ErrorPattern(
+ r"[Bb]ind.*address already in use",
+ "port_in_use",
+ "docker",
+ "Port is already in use",
+ True,
+ "free_port_or_use_different",
+ ),
+ # Volume errors
+ ErrorPattern(
+ r"[Vv]olume.*not found",
+ "volume_not_found",
+ "docker",
+ "Docker volume not found",
+ True,
+ "create_volume",
+ ),
+ ErrorPattern(
+ r"[Mm]ount.*denied",
+ "mount_denied",
+ "docker",
+ "Mount point access denied",
+ True,
+ "check_mount_permissions",
+ ),
+ # Network errors
+ ErrorPattern(
+ r"[Nn]etwork.*not found",
+ "network_not_found",
+ "docker",
+ "Docker network not found",
+ True,
+ "create_network",
+ ),
+ # Daemon errors
+ ErrorPattern(
+ r"[Cc]annot connect to the Docker daemon",
+ "docker_daemon_not_running",
+ "docker",
+ "Docker daemon is not running",
+ True,
+ "start_docker_daemon",
+ ),
+ ErrorPattern(
+ r"[Ii]s the docker daemon running",
+ "docker_daemon_not_running",
+ "docker",
+ "Docker daemon is not running",
+ True,
+ "start_docker_daemon",
+ ),
+ # OOM errors
+ ErrorPattern(
+ r"[Oo]ut of memory",
+ "container_oom",
+ "docker",
+ "Container ran out of memory",
+ True,
+ "increase_memory_limit",
+ ),
+ # Exec errors
+ ErrorPattern(
+ r"[Oo]CI runtime.*not found",
+ "runtime_not_found",
+ "docker",
+ "Container runtime not found",
+ False,
+ "check_docker_installation",
+ ),
+]
+
+# Category 17: Login/Credential Required Errors
+LOGIN_REQUIRED_ERRORS = [
+ # Docker/Container registry login errors
+ ErrorPattern(
+ r"[Uu]sername.*[Rr]equired",
+ "docker_username_required",
+ "login_required",
+ "Docker username required",
+ True,
+ "prompt_docker_login",
+ ),
+ ErrorPattern(
+ r"[Nn]on-null [Uu]sername",
+ "docker_username_required",
+ "login_required",
+ "Docker username required",
+ True,
+ "prompt_docker_login",
+ ),
+ ErrorPattern(
+ r"unauthorized.*authentication required",
+ "docker_auth_required",
+ "login_required",
+ "Docker authentication required",
+ True,
+ "prompt_docker_login",
+ ),
+ ErrorPattern(
+ r"denied.*requested access",
+ "docker_access_denied",
+ "login_required",
+ "Docker registry access denied",
+ True,
+ "prompt_docker_login",
+ ),
+ ErrorPattern(
+ r"denied:.*access",
+ "docker_access_denied",
+ "login_required",
+ "Docker registry access denied",
+ True,
+ "prompt_docker_login",
+ ),
+ ErrorPattern(
+ r"access.*denied",
+ "docker_access_denied",
+ "login_required",
+ "Docker registry access denied",
+ True,
+ "prompt_docker_login",
+ ),
+ ErrorPattern(
+ r"no basic auth credentials",
+ "docker_no_credentials",
+ "login_required",
+ "Docker credentials not found",
+ True,
+ "prompt_docker_login",
+ ),
+ ErrorPattern(
+ r"docker login",
+ "docker_login_needed",
+ "login_required",
+ "Docker login required",
+ True,
+ "prompt_docker_login",
+ ),
+ # ghcr.io (GitHub Container Registry) specific errors
+ ErrorPattern(
+ r"ghcr\.io.*denied",
+ "ghcr_access_denied",
+ "login_required",
+ "GitHub Container Registry access denied - login required",
+ True,
+ "prompt_docker_login",
+ ),
+ ErrorPattern(
+ r"Head.*ghcr\.io.*denied",
+ "ghcr_access_denied",
+ "login_required",
+ "GitHub Container Registry access denied - login required",
+ True,
+ "prompt_docker_login",
+ ),
+ # Generic registry denied patterns
+ ErrorPattern(
+ r"Error response from daemon.*denied",
+ "registry_access_denied",
+ "login_required",
+ "Container registry access denied - login may be required",
+ True,
+ "prompt_docker_login",
+ ),
+ ErrorPattern(
+ r"pull access denied",
+ "pull_access_denied",
+ "login_required",
+ "Pull access denied - login required or image doesn't exist",
+ True,
+ "prompt_docker_login",
+ ),
+ ErrorPattern(
+ r"requested resource.*denied",
+ "resource_access_denied",
+ "login_required",
+ "Resource access denied - authentication required",
+ True,
+ "prompt_docker_login",
+ ),
+ # Git credential errors
+ ErrorPattern(
+ r"[Cc]ould not read.*[Uu]sername",
+ "git_username_required",
+ "login_required",
+ "Git username required",
+ True,
+ "prompt_git_login",
+ ),
+ ErrorPattern(
+ r"[Ff]atal:.*[Aa]uthentication failed",
+ "git_auth_failed",
+ "login_required",
+ "Git authentication failed",
+ True,
+ "prompt_git_login",
+ ),
+ ErrorPattern(
+ r"[Pp]assword.*authentication.*removed",
+ "git_token_required",
+ "login_required",
+ "Git token required (password auth disabled)",
+ True,
+ "prompt_git_token",
+ ),
+ ErrorPattern(
+ r"[Pp]ermission denied.*publickey",
+ "git_ssh_required",
+ "login_required",
+ "Git SSH key required",
+ True,
+ "setup_git_ssh",
+ ),
+ # npm login errors
+ ErrorPattern(
+ r"npm ERR!.*E401",
+ "npm_auth_required",
+ "login_required",
+ "npm authentication required",
+ True,
+ "prompt_npm_login",
+ ),
+ ErrorPattern(
+ r"npm ERR!.*ENEEDAUTH",
+ "npm_need_auth",
+ "login_required",
+ "npm authentication needed",
+ True,
+ "prompt_npm_login",
+ ),
+ ErrorPattern(
+ r"You must be logged in",
+ "npm_login_required",
+ "login_required",
+ "npm login required",
+ True,
+ "prompt_npm_login",
+ ),
+ # AWS credential errors
+ ErrorPattern(
+ r"[Uu]nable to locate credentials",
+ "aws_no_credentials",
+ "login_required",
+ "AWS credentials not configured",
+ True,
+ "prompt_aws_configure",
+ ),
+ ErrorPattern(
+ r"[Ii]nvalid[Aa]ccess[Kk]ey",
+ "aws_invalid_key",
+ "login_required",
+ "AWS access key invalid",
+ True,
+ "prompt_aws_configure",
+ ),
+ ErrorPattern(
+ r"[Ss]ignature.*[Dd]oes[Nn]ot[Mm]atch",
+ "aws_secret_invalid",
+ "login_required",
+ "AWS secret key invalid",
+ True,
+ "prompt_aws_configure",
+ ),
+ ErrorPattern(
+ r"[Ee]xpired[Tt]oken",
+ "aws_token_expired",
+ "login_required",
+ "AWS token expired",
+ True,
+ "prompt_aws_configure",
+ ),
+ # PyPI/pip login errors
+ ErrorPattern(
+ r"HTTPError: 403.*upload",
+ "pypi_auth_required",
+ "login_required",
+ "PyPI authentication required",
+ True,
+ "prompt_pypi_login",
+ ),
+ # Generic credential prompts
+ ErrorPattern(
+ r"[Ee]nter.*[Uu]sername",
+ "username_prompt",
+ "login_required",
+ "Username required",
+ True,
+ "prompt_credentials",
+ ),
+ ErrorPattern(
+ r"[Ee]nter.*[Pp]assword",
+ "password_prompt",
+ "login_required",
+ "Password required",
+ True,
+ "prompt_credentials",
+ ),
+ ErrorPattern(
+ r"[Aa]ccess [Tt]oken.*[Rr]equired",
+ "token_required",
+ "login_required",
+ "Access token required",
+ True,
+ "prompt_token",
+ ),
+ ErrorPattern(
+ r"[Aa][Pp][Ii].*[Kk]ey.*[Rr]equired",
+ "api_key_required",
+ "login_required",
+ "API key required",
+ True,
+ "prompt_api_key",
+ ),
+]
+
+# Category 10: Device & Hardware Errors
+DEVICE_ERRORS = [
+ ErrorPattern(
+ r"[Nn]o such device", "no_device", "device", "Device not found", False, "check_device"
+ ),
+ ErrorPattern(
+ r"[Dd]evice not configured",
+ "device_not_configured",
+ "device",
+ "Device not configured",
+ False,
+ "configure_device",
+ ),
+ ErrorPattern(
+ r"[Hh]ardware error",
+ "hardware_error",
+ "device",
+ "Hardware error",
+ False,
+ "check_hardware",
+ "critical",
+ ),
+ ErrorPattern(
+ r"[Dd]evice offline", "device_offline", "device", "Device offline", False, "bring_online"
+ ),
+ ErrorPattern(
+ r"[Mm]edia not present", "no_media", "device", "No media in device", False, "insert_media"
+ ),
+ ErrorPattern(
+ r"[Rr]ead error",
+ "read_error",
+ "device",
+ "Device read error",
+ False,
+ "check_disk",
+ "critical",
+ ),
+ ErrorPattern(
+ r"[Ww]rite error",
+ "write_error",
+ "device",
+ "Device write error",
+ False,
+ "check_disk",
+ "critical",
+ ),
+]
+
+# Category 11: Compilation & Build Errors
+BUILD_ERRORS = [
+ ErrorPattern(
+ r"[Nn]o rule to make target",
+ "no_make_rule",
+ "build",
+ "Make target not found",
+ False,
+ "check_makefile",
+ ),
+ ErrorPattern(
+ r"[Mm]issing separator",
+ "missing_separator",
+ "build",
+ "Makefile syntax error",
+ False,
+ "fix_makefile",
+ ),
+ ErrorPattern(
+ r"[Uu]ndefined reference",
+ "undefined_reference",
+ "build",
+ "Undefined symbol",
+ True,
+ "add_library",
+ ),
+ ErrorPattern(
+ r"[Ss]ymbol lookup error", "symbol_lookup", "build", "Symbol not found", True, "fix_ldpath"
+ ),
+ ErrorPattern(
+ r"[Ll]ibrary not found",
+ "library_not_found",
+ "build",
+ "Library not found",
+ True,
+ "install_lib",
+ ),
+ ErrorPattern(
+ r"[Hh]eader.*not found",
+ "header_not_found",
+ "build",
+ "Header file not found",
+ True,
+ "install_dev",
+ ),
+ ErrorPattern(
+ r"[Rr]elocation error", "relocation_error", "build", "Relocation error", True, "fix_ldpath"
+ ),
+ ErrorPattern(
+ r"[Cc]ompilation terminated",
+ "compilation_failed",
+ "build",
+ "Compilation failed",
+ False,
+ "check_errors",
+ ),
+]
+
+# Category 12: Archive & Compression Errors
+ARCHIVE_ERRORS = [
+ ErrorPattern(
+ r"[Uu]nexpected end of file",
+ "unexpected_eof_archive",
+ "archive",
+ "Archive truncated",
+ False,
+ "redownload",
+ ),
+ ErrorPattern(
+ r"[Cc]orrupt archive",
+ "corrupt_archive",
+ "archive",
+ "Archive corrupted",
+ False,
+ "redownload",
+ ),
+ ErrorPattern(
+ r"[Ii]nvalid tar magic",
+ "invalid_tar",
+ "archive",
+ "Invalid tar archive",
+ False,
+ "check_format",
+ ),
+ ErrorPattern(
+ r"[Cc]hecksum error", "checksum_error", "archive", "Checksum mismatch", False, "redownload"
+ ),
+ ErrorPattern(
+ r"[Nn]ot in gzip format", "not_gzip", "archive", "Not gzip format", False, "check_format"
+ ),
+ ErrorPattern(
+ r"[Dd]ecompression failed",
+ "decompress_failed",
+ "archive",
+ "Decompression failed",
+ False,
+ "check_format",
+ ),
+]
+
+# Category 13: Shell Script Errors
+SCRIPT_ERRORS = [
+ ErrorPattern(
+ r"[Bb]ad interpreter",
+ "bad_interpreter",
+ "script",
+ "Interpreter not found",
+ True,
+ "fix_shebang",
+ ),
+ ErrorPattern(
+ r"[Ll]ine \d+:.*command not found",
+ "script_cmd_not_found",
+ "script",
+ "Command in script not found",
+ True,
+ "install_dependency",
+ ),
+ ErrorPattern(
+ r"[Ii]nteger expression expected",
+ "integer_expected",
+ "script",
+ "Expected integer",
+ False,
+ "fix_syntax",
+ ),
+ ErrorPattern(
+ r"[Cc]onditional binary operator expected",
+ "conditional_expected",
+ "script",
+ "Expected conditional",
+ False,
+ "fix_syntax",
+ ),
+]
+
+# Category 14: Environment & PATH Errors
+ENVIRONMENT_ERRORS = [
+ ErrorPattern(
+ r"[Vv]ariable not set",
+ "var_not_set",
+ "environment",
+ "Environment variable not set",
+ True,
+ "set_variable",
+ ),
+ ErrorPattern(
+ r"[Pp][Aa][Tt][Hh] not set",
+ "path_not_set",
+ "environment",
+ "PATH not configured",
+ True,
+ "set_path",
+ ),
+ ErrorPattern(
+ r"[Ee]nvironment corrupt",
+ "env_corrupt",
+ "environment",
+ "Environment corrupted",
+ True,
+ "reset_env",
+ ),
+ ErrorPattern(
+ r"[Ll]ibrary path not found",
+ "lib_path_missing",
+ "environment",
+ "Library path missing",
+ True,
+ "set_ldpath",
+ ),
+ ErrorPattern(
+ r"LD_LIBRARY_PATH", "ld_path_issue", "environment", "Library path issue", True, "set_ldpath"
+ ),
+]
+
+# Category 16: Config File Errors (Nginx, Apache, etc.)
+CONFIG_ERRORS = [
+ # Nginx errors
+ ErrorPattern(
+ r"nginx:.*\[emerg\]",
+ "nginx_config_error",
+ "config",
+ "Nginx configuration error",
+ True,
+ "fix_nginx_config",
+ ),
+ ErrorPattern(
+ r"nginx.*syntax.*error",
+ "nginx_syntax_error",
+ "config",
+ "Nginx syntax error",
+ True,
+ "fix_nginx_config",
+ ),
+ ErrorPattern(
+ r"nginx.*unexpected",
+ "nginx_unexpected",
+ "config",
+ "Nginx unexpected token",
+ True,
+ "fix_nginx_config",
+ ),
+ ErrorPattern(
+ r"nginx.*unknown directive",
+ "nginx_unknown_directive",
+ "config",
+ "Nginx unknown directive",
+ True,
+ "fix_nginx_config",
+ ),
+ ErrorPattern(
+ r"nginx.*test failed",
+ "nginx_test_failed",
+ "config",
+ "Nginx config test failed",
+ True,
+ "fix_nginx_config",
+ ),
+ ErrorPattern(
+ r"nginx.*could not open",
+ "nginx_file_error",
+ "config",
+ "Nginx could not open file",
+ True,
+ "fix_nginx_permissions",
+ ),
+ # Apache errors
+ ErrorPattern(
+ r"apache.*syntax error",
+ "apache_syntax_error",
+ "config",
+ "Apache syntax error",
+ True,
+ "fix_apache_config",
+ ),
+ ErrorPattern(
+ r"apache2?ctl.*configtest",
+ "apache_config_error",
+ "config",
+ "Apache config test failed",
+ True,
+ "fix_apache_config",
+ ),
+ ErrorPattern(
+ r"[Ss]yntax error on line \d+",
+ "config_line_error",
+ "config",
+ "Config syntax error at line",
+ True,
+ "fix_config_line",
+ ),
+ # MySQL/MariaDB errors
+ ErrorPattern(
+ r"mysql.*error.*config",
+ "mysql_config_error",
+ "config",
+ "MySQL configuration error",
+ True,
+ "fix_mysql_config",
+ ),
+ # PostgreSQL errors
+ ErrorPattern(
+ r"postgres.*error.*config",
+ "postgres_config_error",
+ "config",
+ "PostgreSQL configuration error",
+ True,
+ "fix_postgres_config",
+ ),
+ # Generic config errors
+ ErrorPattern(
+ r"configuration.*syntax",
+ "generic_config_syntax",
+ "config",
+ "Configuration syntax error",
+ True,
+ "fix_config_syntax",
+ ),
+ ErrorPattern(
+ r"invalid.*configuration",
+ "invalid_config",
+ "config",
+ "Invalid configuration",
+ True,
+ "fix_config_syntax",
+ ),
+ ErrorPattern(
+ r"[Cc]onfig.*parse error",
+ "config_parse_error",
+ "config",
+ "Config parse error",
+ True,
+ "fix_config_syntax",
+ ),
+]
+
+# Category 15: Service & System Errors
+SERVICE_ERRORS = [
+ ErrorPattern(
+ r"[Ss]ervice failed to start",
+ "service_failed",
+ "service",
+ "Service failed to start",
+ True,
+ "check_service_logs",
+ ),
+ ErrorPattern(
+ r"[Uu]nit.*failed",
+ "unit_failed",
+ "service",
+ "Systemd unit failed",
+ True,
+ "check_service_logs",
+ ),
+ ErrorPattern(
+ r"[Jj]ob for.*\.service failed",
+ "job_failed",
+ "service",
+ "Service job failed",
+ True,
+ "check_service_logs",
+ ),
+ ErrorPattern(
+ r"[Ff]ailed to start.*\.service",
+ "start_failed",
+ "service",
+ "Failed to start service",
+ True,
+ "check_service_logs",
+ ),
+ ErrorPattern(
+ r"[Dd]ependency failed",
+ "dependency_failed",
+ "service",
+ "Service dependency failed",
+ True,
+ "start_dependency",
+ ),
+ ErrorPattern(
+ r"[Ii]nactive.*dead",
+ "service_inactive",
+ "service",
+ "Service not running",
+ True,
+ "start_service",
+ ),
+ ErrorPattern(
+ r"[Mm]asked", "service_masked", "service", "Service is masked", True, "unmask_service"
+ ),
+ ErrorPattern(
+ r"[Ee]nabled-runtime",
+ "service_enabled_runtime",
+ "service",
+ "Service enabled at runtime",
+ False,
+ "check_service",
+ ),
+ ErrorPattern(
+ r"[Cc]ontrol process exited with error",
+ "control_process_error",
+ "service",
+ "Service control process failed",
+ True,
+ "check_service_logs",
+ ),
+ ErrorPattern(
+ r"[Aa]ctivation.*timed out",
+ "activation_timeout",
+ "service",
+ "Service activation timed out",
+ True,
+ "check_service_logs",
+ ),
+]
+
+# Combine all error patterns
+ALL_ERROR_PATTERNS = (
+ DOCKER_ERRORS # Check Docker errors first (common)
+ + LOGIN_REQUIRED_ERRORS # Check login errors (interactive)
+ + CONFIG_ERRORS # Check config errors (more specific)
+ + COMMAND_SHELL_ERRORS
+ + FILE_DIRECTORY_ERRORS
+ + PERMISSION_ERRORS
+ + PROCESS_ERRORS
+ + MEMORY_ERRORS
+ + FILESYSTEM_ERRORS
+ + NETWORK_ERRORS
+ + PACKAGE_ERRORS
+ + USER_AUTH_ERRORS
+ + DEVICE_ERRORS
+ + BUILD_ERRORS
+ + ARCHIVE_ERRORS
+ + SCRIPT_ERRORS
+ + ENVIRONMENT_ERRORS
+ + SERVICE_ERRORS
+)
+
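+
+# Illustrative sketch (not part of the module's API; the helper name is
+# hypothetical): one plausible way to scan a stderr string against the combined
+# table and return the first matching ErrorPattern. The ErrorDiagnoser class
+# below pre-compiles the same patterns for repeated use; this uncompiled
+# variant only demonstrates the matching order.
+def _first_matching_pattern(stderr: str) -> ErrorPattern | None:
+    """Return the first ErrorPattern whose regex matches `stderr`, if any."""
+    for ep in ALL_ERROR_PATTERNS:
+        if re.search(ep.pattern, stderr, re.IGNORECASE | re.MULTILINE):
+            return ep
+    return None
+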
+
+# ============================================================================
+# Login/Credential Requirements Configuration
+# ============================================================================
+
+
+@dataclass
+class LoginRequirement:
+ """Defines credentials required for a service login."""
+
+ service: str
+ display_name: str
+ command_pattern: str # Regex to match commands that need this login
+ required_fields: list # List of field names needed
+ field_prompts: dict # Field name -> prompt text
+ field_secret: dict # Field name -> whether to hide input
+ login_command_template: str # Template for login command
+ env_vars: dict = field(default_factory=dict) # Optional env var alternatives
+ signup_url: str = ""
+ docs_url: str = ""
+
+
+# Login requirements for various services
+LOGIN_REQUIREMENTS = {
+ "docker": LoginRequirement(
+ service="docker",
+ display_name="Docker Registry",
+ command_pattern=r"docker\s+(login|push|pull)",
+ required_fields=["registry", "username", "password"],
+ field_prompts={
+ "registry": "Registry URL (press Enter for Docker Hub)",
+ "username": "Username",
+ "password": "Password or Access Token",
+ },
+ field_secret={"registry": False, "username": False, "password": True},
+ login_command_template="docker login {registry} -u {username} -p {password}",
+ env_vars={"username": "DOCKER_USERNAME", "password": "DOCKER_PASSWORD"},
+ signup_url="https://hub.docker.com/signup",
+ docs_url="https://docs.docker.com/docker-hub/access-tokens/",
+ ),
+ "ghcr": LoginRequirement(
+ service="ghcr",
+ display_name="GitHub Container Registry",
+ command_pattern=r"docker.*ghcr\.io",
+ required_fields=["username", "token"],
+ field_prompts={
+ "username": "GitHub Username",
+ "token": "GitHub Personal Access Token (with packages scope)",
+ },
+ field_secret={"username": False, "token": True},
+ login_command_template="echo {token} | docker login ghcr.io -u {username} --password-stdin",
+ env_vars={"token": "GITHUB_TOKEN", "username": "GITHUB_USER"},
+ signup_url="https://github.com/join",
+ docs_url="https://docs.github.com/en/packages/working-with-a-github-packages-registry/working-with-the-container-registry",
+ ),
+ "git_https": LoginRequirement(
+ service="git_https",
+ display_name="Git (HTTPS)",
+ command_pattern=r"git\s+(clone|push|pull|fetch).*https://",
+ required_fields=["username", "token"],
+ field_prompts={
+ "username": "Git Username",
+ "token": "Personal Access Token",
+ },
+ field_secret={"username": False, "token": True},
+ login_command_template="git config --global credential.helper store && echo 'https://{username}:{token}@github.com' >> ~/.git-credentials",
+ env_vars={"token": "GIT_TOKEN", "username": "GIT_USER"},
+ docs_url="https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/creating-a-personal-access-token",
+ ),
+ "npm": LoginRequirement(
+ service="npm",
+ display_name="npm Registry",
+ command_pattern=r"npm\s+(login|publish|adduser)",
+ required_fields=["username", "password", "email"],
+ field_prompts={
+ "username": "npm Username",
+ "password": "npm Password",
+ "email": "Email Address",
+ },
+ field_secret={"username": False, "password": True, "email": False},
+ login_command_template="npm login", # npm login is interactive
+ signup_url="https://www.npmjs.com/signup",
+ docs_url="https://docs.npmjs.com/creating-and-viewing-access-tokens",
+ ),
+ "aws": LoginRequirement(
+ service="aws",
+ display_name="AWS",
+ command_pattern=r"aws\s+",
+ required_fields=["access_key_id", "secret_access_key", "region"],
+ field_prompts={
+ "access_key_id": "AWS Access Key ID",
+ "secret_access_key": "AWS Secret Access Key",
+ "region": "Default Region (e.g., us-east-1)",
+ },
+ field_secret={"access_key_id": False, "secret_access_key": True, "region": False},
+ login_command_template="aws configure set aws_access_key_id {access_key_id} && aws configure set aws_secret_access_key {secret_access_key} && aws configure set region {region}",
+ env_vars={
+ "access_key_id": "AWS_ACCESS_KEY_ID",
+ "secret_access_key": "AWS_SECRET_ACCESS_KEY",
+ "region": "AWS_DEFAULT_REGION",
+ },
+ docs_url="https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html",
+ ),
+ "pypi": LoginRequirement(
+ service="pypi",
+ display_name="PyPI",
+ command_pattern=r"(twine|pip).*upload",
+ required_fields=["username", "token"],
+ field_prompts={
+ "username": "PyPI Username (use __token__ for API token)",
+ "token": "PyPI Password or API Token",
+ },
+ field_secret={"username": False, "token": True},
+ login_command_template="", # Uses ~/.pypirc
+ signup_url="https://pypi.org/account/register/",
+ docs_url="https://pypi.org/help/#apitoken",
+ ),
+ "gcloud": LoginRequirement(
+ service="gcloud",
+ display_name="Google Cloud",
+ command_pattern=r"gcloud\s+",
+ required_fields=[], # Interactive browser auth
+ field_prompts={},
+ field_secret={},
+ login_command_template="gcloud auth login",
+ docs_url="https://cloud.google.com/sdk/docs/authorizing",
+ ),
+ "kubectl": LoginRequirement(
+ service="kubectl",
+ display_name="Kubernetes",
+ command_pattern=r"kubectl\s+",
+ required_fields=["kubeconfig"],
+ field_prompts={
+ "kubeconfig": "Path to kubeconfig file (or press Enter for ~/.kube/config)",
+ },
+ field_secret={"kubeconfig": False},
+ login_command_template="export KUBECONFIG={kubeconfig}",
+ env_vars={"kubeconfig": "KUBECONFIG"},
+ docs_url="https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/",
+ ),
+ "heroku": LoginRequirement(
+ service="heroku",
+ display_name="Heroku",
+ command_pattern=r"heroku\s+",
+ required_fields=["api_key"],
+ field_prompts={
+ "api_key": "Heroku API Key",
+ },
+ field_secret={"api_key": True},
+ login_command_template="heroku auth:token", # Interactive
+ env_vars={"api_key": "HEROKU_API_KEY"},
+ signup_url="https://signup.heroku.com/",
+ docs_url="https://devcenter.heroku.com/articles/authentication",
+ ),
+}
+
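+
+# Illustrative sketch (hypothetical helper, not used elsewhere): how a caller
+# might map a command to its LoginRequirement and pick up credentials already
+# present in environment variables before prompting interactively.
+# LoginHandler.check_env_credentials below implements the real lookup.
+def _env_credentials_for(cmd: str) -> dict[str, str]:
+    """Collect env-var credentials for the first service whose pattern matches `cmd`."""
+    for req in LOGIN_REQUIREMENTS.values():
+        if re.search(req.command_pattern, cmd, re.IGNORECASE):
+            return {
+                name: os.environ[env_var]
+                for name, env_var in req.env_vars.items()
+                if env_var in os.environ
+            }
+    return {}
+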
+
+# ============================================================================
+# Ubuntu Package Mappings
+# ============================================================================
+
+UBUNTU_PACKAGE_MAP = {
+ # Commands to packages
+ "nginx": "nginx",
+ "apache2": "apache2",
+ "httpd": "apache2",
+ "mysql": "mysql-server",
+ "mysqld": "mysql-server",
+ "postgres": "postgresql",
+ "psql": "postgresql-client",
+ "redis": "redis-server",
+ "redis-server": "redis-server",
+ "mongo": "mongodb",
+ "mongod": "mongodb",
+ "node": "nodejs",
+ "npm": "npm",
+ "yarn": "yarnpkg",
+ "python": "python3",
+ "python3": "python3",
+ "pip": "python3-pip",
+ "pip3": "python3-pip",
+ "docker": "docker.io",
+ "docker-compose": "docker-compose",
+ "git": "git",
+ "curl": "curl",
+ "wget": "wget",
+ "vim": "vim",
+ "nano": "nano",
+ "emacs": "emacs",
+ "gcc": "gcc",
+ "g++": "g++",
+ "make": "make",
+ "cmake": "cmake",
+ "java": "default-jdk",
+ "javac": "default-jdk",
+ "ruby": "ruby",
+ "gem": "ruby",
+ "go": "golang-go",
+ "cargo": "cargo",
+ "rustc": "rustc",
+ "php": "php",
+ "composer": "composer",
+ "ffmpeg": "ffmpeg",
+ "imagemagick": "imagemagick",
+ "convert": "imagemagick",
+ "htop": "htop",
+ "tree": "tree",
+ "jq": "jq",
+ "nc": "netcat-openbsd",
+ "netcat": "netcat-openbsd",
+ "ss": "iproute2",
+ "ip": "iproute2",
+ "dig": "dnsutils",
+ "nslookup": "dnsutils",
+ "zip": "zip",
+ "unzip": "unzip",
+ "tar": "tar",
+ "gzip": "gzip",
+ "rsync": "rsync",
+ "ssh": "openssh-client",
+ "sshd": "openssh-server",
+ "screen": "screen",
+ "tmux": "tmux",
+ "awk": "gawk",
+ "sed": "sed",
+ "grep": "grep",
+ "setfacl": "acl",
+ "getfacl": "acl",
+ "lsof": "lsof",
+ "strace": "strace",
+ # System monitoring tools
+ "sensors": "lm-sensors",
+ "sensors-detect": "lm-sensors",
+ "iotop": "iotop",
+ "iftop": "iftop",
+ "nmap": "nmap",
+ "netstat": "net-tools",
+ "ifconfig": "net-tools",
+ "smartctl": "smartmontools",
+ "hdparm": "hdparm",
+ # Optional tools (may not be in all repos)
+ "snap": "snapd",
+ "flatpak": "flatpak",
+}
+
+UBUNTU_SERVICE_MAP = {
+ "nginx": "nginx",
+ "apache": "apache2",
+ "mysql": "mysql",
+ "postgresql": "postgresql",
+ "redis": "redis-server",
+ "mongodb": "mongod",
+ "docker": "docker",
+ "ssh": "ssh",
+ "cron": "cron",
+ "ufw": "ufw",
+}
+
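+
+# Illustrative sketch (hypothetical helper): resolving a missing command to an
+# installable apt package via UBUNTU_PACKAGE_MAP, falling back to the command
+# name itself when no mapping exists. ErrorDiagnoser._generate_fix_commands
+# below applies the same lookup for the "install_package" strategy.
+def _apt_install_hint(missing_command: str) -> str:
+    """Return an apt-get command likely to provide `missing_command`."""
+    package = UBUNTU_PACKAGE_MAP.get(missing_command, missing_command)
+    return f"sudo apt-get install -y {package}"
+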
+
+# ============================================================================
+# Error Diagnoser Class
+# ============================================================================
+
+
+class ErrorDiagnoser:
+ """Comprehensive error diagnosis for all system error types."""
+
+ def __init__(self):
+ self._compile_patterns()
+
+ def _compile_patterns(self):
+ """Pre-compile regex patterns for performance."""
+ self._compiled_patterns = []
+ for ep in ALL_ERROR_PATTERNS:
+ try:
+ compiled = re.compile(ep.pattern, re.IGNORECASE | re.MULTILINE)
+ self._compiled_patterns.append((compiled, ep))
+ except re.error:
+ console.print(f"[yellow]Warning: Invalid pattern: {ep.pattern}[/yellow]")
+
+ def extract_path_from_error(self, stderr: str, cmd: str) -> str | None:
+ """Extract the problematic file path from an error message."""
+ patterns = [
+ r"cannot (?:access|open|create|stat|read|write) ['\"]?([/\w\.\-_]+)['\"]?",
+ r"['\"]([/\w\.\-_]+)['\"]?: (?:Permission denied|No such file)",
+ r"open\(\) ['\"]([/\w\.\-_]+)['\"]? failed",
+ r"failed to open ['\"]?([/\w\.\-_]+)['\"]?",
+ r"couldn't open (?:temporary )?file ([/\w\.\-_]+)",
+ r"([/\w\.\-_]+): Permission denied",
+ r"([/\w\.\-_]+): No such file or directory",
+ r"mkdir: cannot create directory ['\"]?([/\w\.\-_]+)['\"]?",
+ r"touch: cannot touch ['\"]?([/\w\.\-_]+)['\"]?",
+ r"cp: cannot (?:create|stat|access) ['\"]?([/\w\.\-_]+)['\"]?",
+ ]
+
+ for pattern in patterns:
+ match = re.search(pattern, stderr, re.IGNORECASE)
+ if match:
+ path = match.group(1)
+ if path.startswith("/"):
+ return path
+
+ # Extract from command itself
+ for part in cmd.split():
+ if part.startswith("/") and any(
+ c in part for c in ["/etc/", "/var/", "/usr/", "/home/", "/opt/", "/tmp/"]
+ ):
+ return part
+
+ return None
+
+ def extract_service_from_error(self, stderr: str, cmd: str) -> str | None:
+ """Extract service name from error message or command."""
+ cmd_parts = cmd.split()
+
+ # From systemctl/service commands
+ for i, part in enumerate(cmd_parts):
+ if part in ["systemctl", "service"]:
+ for j in range(i + 1, len(cmd_parts)):
+ candidate = cmd_parts[j]
+ if candidate not in [
+ "start",
+ "stop",
+ "restart",
+ "reload",
+ "status",
+ "enable",
+ "disable",
+ "is-active",
+ "is-enabled",
+ "-q",
+ "--quiet",
+ "--no-pager",
+ ]:
+ return candidate.replace(".service", "")
+
+ # From error message
+ patterns = [
+ r"(?:Unit|Service) ([a-zA-Z0-9\-_]+)(?:\.service)? (?:not found|failed|could not)",
+ r"Failed to (?:start|stop|restart|enable|disable) ([a-zA-Z0-9\-_]+)",
+ r"([a-zA-Z0-9\-_]+)\.service",
+ ]
+
+ for pattern in patterns:
+ match = re.search(pattern, stderr, re.IGNORECASE)
+ if match:
+ return match.group(1).replace(".service", "")
+
+ return None
+
+ def extract_package_from_error(self, stderr: str, cmd: str) -> str | None:
+ """Extract package name from error."""
+ patterns = [
+ r"[Uu]nable to locate package ([a-zA-Z0-9\-_\.]+)",
+ r"[Pp]ackage '?([a-zA-Z0-9\-_\.]+)'? (?:is )?not (?:found|installed)",
+ r"[Nn]o package '?([a-zA-Z0-9\-_\.]+)'? (?:found|available)",
+ r"apt.*install.*?([a-zA-Z0-9\-_\.]+)",
+ ]
+
+ for pattern in patterns:
+ match = re.search(pattern, stderr + " " + cmd, re.IGNORECASE)
+ if match:
+ return match.group(1)
+
+ return None
+
+ def extract_port_from_error(self, stderr: str) -> int | None:
+ """Extract port number from error."""
+ patterns = [
+ r"[Pp]ort (\d+)",
+ r"[Aa]ddress.*:(\d+)",
+ r":(\d{2,5})\s",
+ ]
+
+ for pattern in patterns:
+ match = re.search(pattern, stderr)
+ if match:
+ port = int(match.group(1))
+ if 1 <= port <= 65535:
+ return port
+
+ return None
+
+ def _extract_container_name(self, stderr: str) -> str | None:
+ """Extract Docker container name from error message."""
+ patterns = [
+ r'container name ["\'/]([a-zA-Z0-9_\-]+)["\'/]',
+ r'["\'/]([a-zA-Z0-9_\-]+)["\'/] is already in use',
+ r'container ["\']?([a-zA-Z0-9_\-]+)["\']?',
+ r"No such container:?\s*([a-zA-Z0-9_\-]+)",
+ ]
+
+ for pattern in patterns:
+ match = re.search(pattern, stderr, re.IGNORECASE)
+ if match:
+ return match.group(1)
+
+ return None
+
+ def _extract_image_name(self, stderr: str, cmd: str) -> str | None:
+ """Extract Docker image name from error or command."""
+ # From command
+ if "docker" in cmd:
+ parts = cmd.split()
+ for i, part in enumerate(parts):
+ if part in ["run", "pull", "push"]:
+ # Look for image name after flags
+ for j in range(i + 1, len(parts)):
+ candidate = parts[j]
+                        if not candidate.startswith("-") and ("/" in candidate or ":" in candidate):
+ return candidate
+ elif not candidate.startswith("-") and j == len(parts) - 1:
+ return candidate
+
+ # From error
+ patterns = [
+ r'[Uu]nable to find image ["\']([^"\']+)["\']',
+ r'repository ["\']?([^"\':\s]+(?::[^"\':\s]+)?)["\']? not found',
+ r"manifest for ([^\s]+) not found",
+ ]
+
+ for pattern in patterns:
+ match = re.search(pattern, stderr)
+ if match:
+ return match.group(1)
+
+ return None
+
+ def _extract_port(self, stderr: str) -> str | None:
+ """Extract port from Docker error."""
+ patterns = [
+ r"[Pp]ort (\d+)",
+ r":(\d+)->",
+ r"address.*:(\d+)",
+ r"-p\s*(\d+):",
+ ]
+
+ for pattern in patterns:
+ match = re.search(pattern, stderr)
+ if match:
+ return match.group(1)
+
+ return None
+
+ def extract_config_file_and_line(self, stderr: str) -> tuple[str | None, int | None]:
+ """Extract config file path and line number from error."""
+ patterns = [
+ r"in\s+(/[^\s:]+):(\d+)", # "in /path:line"
+ r"at\s+(/[^\s:]+):(\d+)", # "at /path:line"
+ r"(/[^\s:]+):(\d+):", # "/path:line:"
+ r"line\s+(\d+)\s+of\s+(/[^\s:]+)", # "line X of /path"
+ r"(/[^\s:]+)\s+line\s+(\d+)", # "/path line X"
+ ]
+
+ for pattern in patterns:
+ match = re.search(pattern, stderr, re.IGNORECASE)
+ if match:
+ groups = match.groups()
+ if groups[0].startswith("/"):
+ return groups[0], int(groups[1])
+ elif len(groups) > 1 and groups[1].startswith("/"):
+ return groups[1], int(groups[0])
+
+ return None, None
+
+ def extract_command_from_error(self, stderr: str) -> str | None:
+ """Extract the failing command name from error."""
+ patterns = [
+ r"'([a-zA-Z0-9\-_]+)'.*command not found",
+ r"([a-zA-Z0-9\-_]+): command not found",
+ r"bash: ([a-zA-Z0-9\-_]+):",
+ r"/usr/bin/env: '?([a-zA-Z0-9\-_]+)'?:",
+ ]
+
+ for pattern in patterns:
+ match = re.search(pattern, stderr, re.IGNORECASE)
+ if match:
+ return match.group(1)
+
+ return None
+
+ def diagnose_error(self, cmd: str, stderr: str) -> dict[str, Any]:
+ """
+ Comprehensive error diagnosis using pattern matching.
+
+ Returns a detailed diagnosis dict with:
+ - error_type: Specific error type
+ - category: Error category (command_shell, network, etc.)
+ - description: Human-readable description
+ - fix_commands: Suggested fix commands
+ - can_auto_fix: Whether we can auto-fix
+ - fix_strategy: Strategy name for auto-fixer
+ - extracted_info: Extracted paths, services, etc.
+ - severity: error, warning, or critical
+ """
+ diagnosis = {
+ "error_type": "unknown",
+ "category": "unknown",
+            "description": stderr[:300],
+ "fix_commands": [],
+ "can_auto_fix": False,
+ "fix_strategy": "",
+ "extracted_path": None,
+ "extracted_info": {},
+ "severity": "error",
+ }
+
+ stderr_lower = stderr.lower()
+
+ # Extract common info
+ diagnosis["extracted_path"] = self.extract_path_from_error(stderr, cmd)
+ diagnosis["extracted_info"]["service"] = self.extract_service_from_error(stderr, cmd)
+ diagnosis["extracted_info"]["package"] = self.extract_package_from_error(stderr, cmd)
+ diagnosis["extracted_info"]["port"] = self.extract_port_from_error(stderr)
+
+ config_file, line_num = self.extract_config_file_and_line(stderr)
+ if config_file:
+ diagnosis["extracted_info"]["config_file"] = config_file
+ diagnosis["extracted_info"]["line_num"] = line_num
+
+ # Match against compiled patterns
+ for compiled, ep in self._compiled_patterns:
+ if compiled.search(stderr):
+ diagnosis["error_type"] = ep.error_type
+ diagnosis["category"] = ep.category
+ diagnosis["description"] = ep.description
+ diagnosis["can_auto_fix"] = ep.can_auto_fix
+ diagnosis["fix_strategy"] = ep.fix_strategy
+ diagnosis["severity"] = ep.severity
+
+ # Generate fix commands based on category and strategy
+ self._generate_fix_commands(diagnosis, cmd, stderr)
+
+ return diagnosis
+
+ # Fallback: try generic patterns
+ if "permission denied" in stderr_lower:
+ diagnosis["error_type"] = "permission_denied"
+ diagnosis["category"] = "permission"
+ diagnosis["description"] = "Permission denied"
+ diagnosis["can_auto_fix"] = True
+ diagnosis["fix_strategy"] = "use_sudo"
+ if not cmd.strip().startswith("sudo"):
+ diagnosis["fix_commands"] = [f"sudo {cmd}"]
+
+ elif "not found" in stderr_lower or "no such" in stderr_lower:
+ diagnosis["error_type"] = "not_found"
+ diagnosis["category"] = "file_directory"
+ diagnosis["description"] = "File or directory not found"
+ if diagnosis["extracted_path"]:
+ diagnosis["can_auto_fix"] = True
+ diagnosis["fix_strategy"] = "create_path"
+
+ return diagnosis
+
+ def _generate_fix_commands(self, diagnosis: dict, cmd: str, stderr: str) -> None:
+ """Generate specific fix commands based on the error type and strategy."""
+ strategy = diagnosis.get("fix_strategy", "")
+ extracted = diagnosis.get("extracted_info", {})
+ path = diagnosis.get("extracted_path")
+
+ # Permission/Sudo strategies
+ if strategy == "use_sudo":
+ if not cmd.strip().startswith("sudo"):
+ diagnosis["fix_commands"] = [f"sudo {cmd}"]
+
+ # Path creation strategies
+ elif strategy == "create_path":
+ if path:
+ parent = os.path.dirname(path)
+ if parent:
+ diagnosis["fix_commands"] = [f"sudo mkdir -p {parent}"]
+
+ # Package installation
+ elif strategy == "install_package":
+ missing_cmd = self.extract_command_from_error(stderr) or cmd.split()[0]
+ pkg = UBUNTU_PACKAGE_MAP.get(missing_cmd, missing_cmd)
+ diagnosis["fix_commands"] = ["sudo apt-get update", f"sudo apt-get install -y {pkg}"]
+ diagnosis["extracted_info"]["missing_command"] = missing_cmd
+ diagnosis["extracted_info"]["suggested_package"] = pkg
+
+ # Service management
+ elif strategy == "start_service" or strategy == "check_service":
+ service = extracted.get("service")
+ if service:
+ diagnosis["fix_commands"] = [
+ f"sudo systemctl start {service}",
+ f"sudo systemctl status {service}",
+ ]
+
+ elif strategy == "check_service_logs":
+ service = extracted.get("service")
+ if service:
+ # For web servers, check for port conflicts and common issues
+ if service in ("apache2", "httpd", "nginx"):
+ diagnosis["fix_commands"] = [
+ # First check what's using port 80
+ "sudo lsof -i :80 -t | head -1",
+ # Stop conflicting services
+ "sudo systemctl stop nginx 2>/dev/null || true",
+ "sudo systemctl stop apache2 2>/dev/null || true",
+ # Test config
+                        f"sudo {'nginx' if service == 'nginx' else 'apache2ctl'} -t 2>&1 || true",
+ # Now try starting
+ f"sudo systemctl start {service}",
+ ]
+                elif service in ("mysql", "mariadb", "postgresql", "postgres"):
+                    is_mysql = "mysql" in service or "mariadb" in service
+                    db_owner = "mysql:mysql" if is_mysql else "postgres:postgres"
+                    db_dir = "mysql" if is_mysql else "postgresql"
+                    diagnosis["fix_commands"] = [
+                        # Check disk space
+                        "df -h /var/lib 2>/dev/null | tail -1",
+                        # Check permissions
+                        f"sudo chown -R {db_owner} /var/lib/{db_dir} 2>/dev/null || true",
+                        # Restart
+                        f"sudo systemctl start {service}",
+                    ]
+ else:
+ # Generic service - check logs and try restart
+ diagnosis["fix_commands"] = [
+ f"sudo journalctl -u {service} -n 20 --no-pager 2>&1 | tail -10",
+ f"sudo systemctl reset-failed {service} 2>/dev/null || true",
+ f"sudo systemctl start {service}",
+ ]
+
+ elif strategy == "unmask_service":
+ service = extracted.get("service")
+ if service:
+ diagnosis["fix_commands"] = [
+ f"sudo systemctl unmask {service}",
+ f"sudo systemctl start {service}",
+ ]
+
+ # Config file fixes
+ elif strategy in ["fix_nginx_config", "fix_nginx_permissions"]:
+ config_file = extracted.get("config_file")
+ line_num = extracted.get("line_num")
+ if config_file:
+ diagnosis["fix_commands"] = [
+ "sudo nginx -t 2>&1",
+ f"# Check config at: {config_file}" + (f":{line_num}" if line_num else ""),
+ ]
+ else:
+ diagnosis["fix_commands"] = [
+ "sudo nginx -t 2>&1",
+ "# Check /etc/nginx/nginx.conf and sites-enabled/*",
+ ]
+
+ elif strategy == "fix_apache_config":
+ config_file = extracted.get("config_file")
+ diagnosis["fix_commands"] = [
+ "sudo apache2ctl configtest",
+ "sudo apache2ctl -S", # Show virtual hosts
+ ]
+ if config_file:
+ diagnosis["fix_commands"].append(f"# Check config at: {config_file}")
+
+ elif strategy in ["fix_config_syntax", "fix_config_line"]:
+ config_file = extracted.get("config_file")
+ line_num = extracted.get("line_num")
+ if config_file and line_num:
+ diagnosis["fix_commands"] = [
+ f"sudo head -n {line_num + 5} {config_file} | tail -n 10",
+ f"# Edit: sudo nano +{line_num} {config_file}",
+ ]
+ elif config_file:
+ diagnosis["fix_commands"] = [
+ f"sudo cat {config_file}",
+ f"# Edit: sudo nano {config_file}",
+ ]
+
+ elif strategy == "fix_mysql_config":
+ diagnosis["fix_commands"] = [
+ "sudo mysql --help --verbose 2>&1 | grep -A 1 'Default options'",
+ "# Edit: sudo nano /etc/mysql/mysql.conf.d/mysqld.cnf",
+ ]
+
+ elif strategy == "fix_postgres_config":
+ diagnosis["fix_commands"] = [
+ "sudo -u postgres psql -c 'SHOW config_file;'",
+ "# Edit: sudo nano /etc/postgresql/*/main/postgresql.conf",
+ ]
+
+ # Package manager
+ elif strategy == "clear_lock":
+ diagnosis["fix_commands"] = [
+ "sudo rm -f /var/lib/dpkg/lock-frontend",
+ "sudo rm -f /var/lib/dpkg/lock",
+ "sudo rm -f /var/cache/apt/archives/lock",
+ "sudo dpkg --configure -a",
+ ]
+
+ elif strategy == "update_repos":
+ pkg = extracted.get("package")
+ diagnosis["fix_commands"] = ["sudo apt-get update"]
+ if pkg:
+ diagnosis["fix_commands"].append(f"apt-cache search {pkg}")
+
+ elif strategy == "fix_dependencies":
+ diagnosis["fix_commands"] = [
+ "sudo apt-get install -f",
+ "sudo dpkg --configure -a",
+ "sudo apt-get update",
+ "sudo apt-get upgrade",
+ ]
+
+ elif strategy == "fix_broken":
+ diagnosis["fix_commands"] = [
+ "sudo apt-get install -f",
+ "sudo dpkg --configure -a",
+ "sudo apt-get clean",
+ "sudo apt-get update",
+ ]
+
+ elif strategy == "clean_apt":
+ diagnosis["fix_commands"] = [
+ "sudo apt-get clean",
+ "sudo rm -rf /var/lib/apt/lists/*",
+ "sudo apt-get update",
+ ]
+
+ elif strategy == "fix_gpg":
+ diagnosis["fix_commands"] = [
+ "sudo apt-key adv --refresh-keys --keyserver keyserver.ubuntu.com",
+ "sudo apt-get update",
+ ]
+
+ # Docker strategies
+ elif strategy == "remove_or_rename_container":
+ container_name = self._extract_container_name(stderr)
+ if container_name:
+ diagnosis["fix_commands"] = [
+ f"docker rm -f {container_name}",
+                    f"# Or rename: docker rename {container_name} {container_name}_old",
+ ]
+ diagnosis["suggestion"] = (
+ f"Container '{container_name}' already exists. Removing it and retrying."
+ )
+ else:
+ diagnosis["fix_commands"] = [
+ "docker ps -a",
+                    "# Then: docker rm -f <container_name>",
+ ]
+
+ elif strategy == "stop_or_use_existing":
+ container_name = self._extract_container_name(stderr)
+ diagnosis["fix_commands"] = [
+                f"docker stop {container_name}" if container_name else "docker stop <container_name>",
+                "# Or connect to existing: docker exec -it <container_name> /bin/sh",
+ ]
+
+ elif strategy == "start_container":
+ container_name = self._extract_container_name(stderr)
+ diagnosis["fix_commands"] = [
+                f"docker start {container_name}" if container_name else "docker start <container_name>"
+ ]
+
+ elif strategy == "pull_image":
+ image_name = self._extract_image_name(stderr, cmd)
+ diagnosis["fix_commands"] = [
+                f"docker pull {image_name}" if image_name else "docker pull <image_name>"
+ ]
+
+ elif strategy == "free_port_or_use_different":
+ port = self._extract_port(stderr)
+ if port:
+ diagnosis["fix_commands"] = [
+ f"sudo lsof -i :{port}",
+ f"# Kill process using port: sudo kill $(sudo lsof -t -i:{port})",
+ f"# Or use different port: -p {int(port)+1}:{port}",
+ ]
+ else:
+ diagnosis["fix_commands"] = ["docker ps", "# Check which ports are in use"]
+
+ elif strategy == "start_docker_daemon":
+ diagnosis["fix_commands"] = [
+ "sudo systemctl start docker",
+ "sudo systemctl status docker",
+ ]
+
+ elif strategy == "create_volume":
+ volume_name = extracted.get("volume")
+ diagnosis["fix_commands"] = [
+ (
+ f"docker volume create {volume_name}"
+ if volume_name
+                    else "docker volume create <volume_name>"
+ )
+ ]
+
+ elif strategy == "create_network":
+ network_name = extracted.get("network")
+ diagnosis["fix_commands"] = [
+ (
+ f"docker network create {network_name}"
+ if network_name
+                    else "docker network create <network_name>"
+ )
+ ]
+
+ elif strategy == "check_container_name":
+ diagnosis["fix_commands"] = [
+ "docker ps -a",
+ "# Check container names and use correct one",
+ ]
+
+ # Timeout strategies
+ elif strategy == "retry_with_longer_timeout":
+ # Check if this is an interactive command that needs TTY
+ interactive_patterns = [
+ "docker exec -it",
+ "docker run -it",
+ "-ti ",
+ "ollama run",
+ "ollama chat",
+ ]
+ is_interactive = any(p in cmd.lower() for p in interactive_patterns)
+
+ if is_interactive:
+ diagnosis["fix_commands"] = [
+ "# This is an INTERACTIVE command that requires a terminal (TTY)",
+ "# Run it manually in a separate terminal window:",
+ f"# {cmd}",
+ ]
+ diagnosis["description"] = "Interactive command cannot run in background"
+ diagnosis["suggestion"] = (
+ "This command needs interactive input. Please run it in a separate terminal."
+ )
+ else:
+ diagnosis["fix_commands"] = [
+ "# This command timed out - it may still be running or need more time",
+ "# For docker pull: The image may be very large, try again with better network",
+ "# Check if the operation completed in background",
+ ]
+ diagnosis["suggestion"] = (
+ "The operation timed out. This often happens with large downloads. You can retry manually."
+ )
+ diagnosis["can_auto_fix"] = False # Let user decide what to do
+
+ # Network strategies
+ elif strategy == "check_network":
+ diagnosis["fix_commands"] = ["ping -c 2 8.8.8.8", "ip route", "cat /etc/resolv.conf"]
+
+ elif strategy == "check_dns":
+ diagnosis["fix_commands"] = [
+ "cat /etc/resolv.conf",
+ "systemd-resolve --status",
+ "sudo systemctl restart systemd-resolved",
+ ]
+
+ elif strategy == "check_service":
+ port = extracted.get("port")
+ if port:
+ diagnosis["fix_commands"] = [
+ f"sudo ss -tlnp sport = :{port}",
+ f"sudo lsof -i :{port}",
+ ]
+
+ elif strategy == "find_port_user":
+ port = extracted.get("port")
+ if port:
+ diagnosis["fix_commands"] = [
+ f"sudo lsof -i :{port}",
+ f"sudo ss -tlnp sport = :{port}",
+                    "# Kill process: sudo kill <PID>",
+ ]
+
+ elif strategy == "check_firewall":
+ diagnosis["fix_commands"] = ["sudo ufw status", "sudo iptables -L -n"]
+
+ # Disk/Memory strategies
+ elif strategy == "free_disk":
+ diagnosis["fix_commands"] = [
+ "df -h",
+ "sudo apt-get clean",
+ "sudo apt-get autoremove -y",
+ "sudo journalctl --vacuum-size=100M",
+ "du -sh /var/log/*",
+ ]
+
+ elif strategy == "free_memory":
+ diagnosis["fix_commands"] = [
+ "free -h",
+ "sudo sync && echo 3 | sudo tee /proc/sys/vm/drop_caches",
+ "top -b -n 1 | head -20",
+ ]
+
+ elif strategy == "increase_ulimit":
+ diagnosis["fix_commands"] = [
+ "ulimit -a",
+ "# Add to /etc/security/limits.conf:",
+ "# * soft nofile 65535",
+ "# * hard nofile 65535",
+ ]
+
+ # Filesystem strategies
+ elif strategy == "remount_rw":
+ if path:
+ mount_point = self._find_mount_point(path)
+ if mount_point:
+ diagnosis["fix_commands"] = [f"sudo mount -o remount,rw {mount_point}"]
+
+ elif strategy == "create_mountpoint":
+ if path:
+ diagnosis["fix_commands"] = [f"sudo mkdir -p {path}"]
+
+ elif strategy == "mount_fs":
+ diagnosis["fix_commands"] = ["mount", "cat /etc/fstab"]
+
+ # User strategies
+ elif strategy == "create_user":
+ # Extract username from error if possible
+ match = re.search(r"user '?([a-zA-Z0-9_-]+)'?", stderr, re.IGNORECASE)
+ if match:
+ user = match.group(1)
+ diagnosis["fix_commands"] = [f"sudo useradd -m {user}", f"sudo passwd {user}"]
+
+ elif strategy == "create_group":
+ match = re.search(r"group '?([a-zA-Z0-9_-]+)'?", stderr, re.IGNORECASE)
+ if match:
+ group = match.group(1)
+ diagnosis["fix_commands"] = [f"sudo groupadd {group}"]
+
+ # Build strategies
+ elif strategy == "install_lib":
+ lib_match = re.search(r"library.*?([a-zA-Z0-9_-]+)", stderr, re.IGNORECASE)
+ if lib_match:
+ lib = lib_match.group(1)
+ diagnosis["fix_commands"] = [
+ f"apt-cache search {lib}",
+ f"# Install with: sudo apt-get install lib{lib}-dev",
+ ]
+
+ elif strategy == "install_dev":
+ header_match = re.search(r"([a-zA-Z0-9_/]+\.h)", stderr)
+ if header_match:
+ header = header_match.group(1)
+ diagnosis["fix_commands"] = [
+ f"apt-file search {header}",
+ "# Install the -dev package that provides this header",
+ ]
+
+ elif strategy == "fix_ldpath":
+ diagnosis["fix_commands"] = [
+ "sudo ldconfig",
+ "echo $LD_LIBRARY_PATH",
+ "cat /etc/ld.so.conf.d/*.conf",
+ ]
+
+ # Wait/Retry strategies
+ elif strategy == "wait_retry":
+ diagnosis["fix_commands"] = ["sleep 2", f"# Then retry: {cmd}"]
+
+ # Script strategies
+ elif strategy == "fix_shebang":
+ if path:
+ diagnosis["fix_commands"] = [
+ f"head -1 {path}",
+ "# Fix shebang line to point to correct interpreter",
+ "# e.g., #!/usr/bin/env python3",
+ ]
+
+ # Environment strategies
+ elif strategy == "set_variable":
+ var_match = re.search(r"([A-Z_]+).*not set", stderr, re.IGNORECASE)
+ if var_match:
+ var = var_match.group(1)
+ diagnosis["fix_commands"] = [
+ f"export {var}=",
+                    f"# Add to ~/.bashrc: export {var}=<value>",
+ ]
+
+ elif strategy == "set_path":
+ diagnosis["fix_commands"] = [
+ "echo $PATH",
+ "export PATH=$PATH:/usr/local/bin",
+ "# Add to ~/.bashrc",
+ ]
+
+ elif strategy == "set_ldpath":
+ diagnosis["fix_commands"] = [
+ "echo $LD_LIBRARY_PATH",
+ "export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH",
+ "sudo ldconfig",
+ ]
+
+ # Backup/Overwrite strategy
+ elif strategy == "backup_overwrite":
+ if path:
+ diagnosis["fix_commands"] = [
+ f"sudo mv {path} {path}.backup",
+ f"# Then retry: {cmd}",
+ ]
+
+ # Symlink strategy
+ elif strategy == "fix_symlink":
+ if path:
+ diagnosis["fix_commands"] = [
+ f"ls -la {path}",
+ f"readlink -f {path}",
+ f"# Remove broken symlink: sudo rm {path}",
+ ]
+
+ # Directory not empty
+ elif strategy == "rm_recursive":
+ if path:
+ diagnosis["fix_commands"] = [
+ f"ls -la {path}",
+ f"# Remove recursively (CAUTION): sudo rm -rf {path}",
+ ]
+
+ # Copy instead of link
+ elif strategy == "copy_instead":
+ diagnosis["fix_commands"] = [
+ "# Use cp instead of ln/mv for cross-device operations",
+                "# cp -a <source> <destination>",
+ ]
+
+ def _find_mount_point(self, path: str) -> str | None:
+ """Find the mount point for a given path."""
+ try:
+ path = os.path.abspath(path)
+ while path != "/":
+ if os.path.ismount(path):
+ return path
+ path = os.path.dirname(path)
+ return "/"
+        except Exception:
+ return None
+
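+
+# Usage sketch (illustrative; the sample command, stderr text, and the demo
+# function name are hypothetical): diagnosing a failed command and printing the
+# suggested fix commands. Which pattern wins depends on the ordering of
+# ALL_ERROR_PATTERNS.
+def _demo_diagnose() -> None:
+    diagnoser = ErrorDiagnoser()
+    diagnosis = diagnoser.diagnose_error(
+        cmd="systemctl start nginx",
+        stderr="Job for nginx.service failed because the control process exited with error code.",
+    )
+    print(f"{diagnosis['category']}/{diagnosis['error_type']}: {diagnosis['description']}")
+    for fix in diagnosis["fix_commands"]:
+        print(fix)
+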
+
+# ============================================================================
+# Login Handler Class
+# ============================================================================
+
+
+class LoginHandler:
+ """Handles interactive login/credential prompts for various services."""
+
+ CREDENTIALS_FILE = os.path.expanduser("~/.cortex/credentials.json")
+
+ def __init__(self):
+ self.cached_credentials: dict[str, dict] = {}
+ self._ensure_credentials_dir()
+ self._load_saved_credentials()
+
+ def _ensure_credentials_dir(self) -> None:
+ """Ensure the credentials directory exists with proper permissions."""
+ cred_dir = os.path.dirname(self.CREDENTIALS_FILE)
+ if not os.path.exists(cred_dir):
+ os.makedirs(cred_dir, mode=0o700, exist_ok=True)
+
+ def _encode_credential(self, value: str) -> str:
+ """Encode a credential value (basic obfuscation, not encryption)."""
+ import base64
+
+ return base64.b64encode(value.encode()).decode()
+
+ def _decode_credential(self, encoded: str) -> str:
+ """Decode a credential value."""
+ import base64
+
+ try:
+ return base64.b64decode(encoded.encode()).decode()
+ except Exception:
+ return ""
+
+ def _load_saved_credentials(self) -> None:
+ """Load saved credentials from file."""
+ import json
+
+ if not os.path.exists(self.CREDENTIALS_FILE):
+ return
+
+ try:
+ with open(self.CREDENTIALS_FILE) as f:
+ saved = json.load(f)
+
+ # Decode all saved credentials
+ for service, creds in saved.items():
+ decoded = {}
+ for field, value in creds.items():
+ if field.startswith("_"): # metadata fields
+ decoded[field] = value
+ else:
+ decoded[field] = self._decode_credential(value)
+ self.cached_credentials[service] = decoded
+
+ except (OSError, json.JSONDecodeError) as e:
+ console.print(f"[dim]Note: Could not load saved credentials: {e}[/dim]")
+
+ def _save_credentials(self, service: str, credentials: dict[str, str]) -> None:
+ """Save credentials to file."""
+ import json
+ from datetime import datetime
+
+ # Load existing credentials
+ all_creds = {}
+ if os.path.exists(self.CREDENTIALS_FILE):
+ try:
+ with open(self.CREDENTIALS_FILE) as f:
+ all_creds = json.load(f)
+ except (OSError, json.JSONDecodeError):
+ pass
+
+ # Encode new credentials
+ encoded = {}
+ for field_name, value in credentials.items():
+ if value: # Only save non-empty values
+ encoded[field_name] = self._encode_credential(value)
+
+ # Add metadata
+ encoded["_saved_at"] = datetime.now().isoformat()
+
+ all_creds[service] = encoded
+
+ # Save to file with restricted permissions
+ try:
+ with open(self.CREDENTIALS_FILE, "w") as f:
+ json.dump(all_creds, f, indent=2)
+ os.chmod(self.CREDENTIALS_FILE, 0o600) # Read/write only for owner
+ console.print(f"[green]✓ Credentials saved to {self.CREDENTIALS_FILE}[/green]")
+ except OSError as e:
+ console.print(f"[yellow]Warning: Could not save credentials: {e}[/yellow]")
+
+ def _delete_saved_credentials(self, service: str) -> None:
+ """Delete saved credentials for a service."""
+ import json
+
+ if not os.path.exists(self.CREDENTIALS_FILE):
+ return
+
+ try:
+ with open(self.CREDENTIALS_FILE) as f:
+ all_creds = json.load(f)
+
+ if service in all_creds:
+ del all_creds[service]
+
+ with open(self.CREDENTIALS_FILE, "w") as f:
+ json.dump(all_creds, f, indent=2)
+
+ console.print(f"[dim]Removed saved credentials for {service}[/dim]")
+ except (OSError, json.JSONDecodeError):
+ pass
+
+ def _has_saved_credentials(self, service: str) -> bool:
+ """Check if we have saved credentials for a service."""
+ return service in self.cached_credentials and bool(self.cached_credentials[service])
+
+ def _ask_use_saved(self, service: str, requirement: LoginRequirement) -> bool:
+ """Ask user if they want to use saved credentials."""
+ saved = self.cached_credentials.get(service, {})
+
+ # Show what we have saved (without showing secrets)
+ saved_fields = []
+ for field_name in requirement.required_fields:
+ if field_name in saved and saved[field_name]:
+ if requirement.field_secret.get(field_name, False):
+ saved_fields.append(f"{field_name}=****")
+ else:
+ value = saved[field_name]
+ # Truncate long values
+ if len(value) > 20:
+ value = value[:17] + "..."
+ saved_fields.append(f"{field_name}={value}")
+
+ if not saved_fields:
+ return False
+
+ console.print()
+ console.print(f"[cyan]📁 Found saved credentials for {requirement.display_name}:[/cyan]")
+ console.print(f"[dim] {', '.join(saved_fields)}[/dim]")
+
+ if "_saved_at" in saved:
+ console.print(f"[dim] Saved: {saved['_saved_at'][:19]}[/dim]")
+
+ console.print()
+ try:
+ response = input("Use saved credentials? (y/n/delete): ").strip().lower()
+ except (EOFError, KeyboardInterrupt):
+ return False
+
+ if response in ["d", "delete", "del", "remove"]:
+ self._delete_saved_credentials(service)
+ if service in self.cached_credentials:
+ del self.cached_credentials[service]
+ return False
+
+ return response in ["y", "yes", ""]
+
+ def _ask_save_credentials(self, service: str, credentials: dict[str, str]) -> None:
+ """Ask user if they want to save credentials for next time."""
+ console.print()
+ console.print("[cyan]💾 Save these credentials for next time?[/cyan]")
+ console.print(f"[dim] Credentials will be stored in {self.CREDENTIALS_FILE}[/dim]")
+        console.print("[dim] (base64-encoded, not encrypted; file readable only by you)[/dim]")
+
+ try:
+ response = input("Save credentials? (y/n): ").strip().lower()
+ except (EOFError, KeyboardInterrupt):
+ return
+
+ if response in ["y", "yes"]:
+ self._save_credentials(service, credentials)
+ # Also update cache
+ self.cached_credentials[service] = credentials.copy()
+
+ def detect_login_requirement(self, cmd: str, stderr: str) -> LoginRequirement | None:
+ """Detect which service needs login based on command and error."""
+ cmd_lower = cmd.lower()
+ stderr_lower = stderr.lower()
+
+ # Check for specific registries in docker commands
+ if "docker" in cmd_lower:
+ if "ghcr.io" in cmd_lower or "ghcr.io" in stderr_lower:
+ return LOGIN_REQUIREMENTS.get("ghcr")
+ if "gcr.io" in cmd_lower or "gcr.io" in stderr_lower:
+ return LOGIN_REQUIREMENTS.get("gcloud")
+ return LOGIN_REQUIREMENTS.get("docker")
+
+ # Check other services
+ for service, req in LOGIN_REQUIREMENTS.items():
+ if re.search(req.command_pattern, cmd, re.IGNORECASE):
+ return req
+
+ return None
+
+ def check_env_credentials(self, requirement: LoginRequirement) -> dict[str, str]:
+ """Check if credentials are available in environment variables."""
+ found = {}
+ for field_name, env_var in requirement.env_vars.items():
+ value = os.environ.get(env_var)
+ if value:
+ found[field_name] = value
+ return found
+
+ def prompt_for_credentials(
+ self, requirement: LoginRequirement, pre_filled: dict[str, str] | None = None
+ ) -> dict[str, str] | None:
+ """Prompt user for required credentials."""
+ import getpass
+
+ console.print()
+ console.print(
+ f"[bold cyan]🔐 {requirement.display_name} Authentication Required[/bold cyan]"
+ )
+ console.print()
+
+ if requirement.signup_url:
+ console.print(f"[dim]Don't have an account? Sign up at: {requirement.signup_url}[/dim]")
+ if requirement.docs_url:
+ console.print(f"[dim]Documentation: {requirement.docs_url}[/dim]")
+ console.print()
+
+ # Check for existing env vars
+ env_creds = self.check_env_credentials(requirement)
+ if env_creds:
+ console.print(
+ f"[green]Found credentials in environment: {', '.join(env_creds.keys())}[/green]"
+ )
+
+ credentials = pre_filled.copy() if pre_filled else {}
+ credentials.update(env_creds)
+
+ try:
+ for field in requirement.required_fields:
+ if field in credentials and credentials[field]:
+ console.print(
+ f"[dim]{requirement.field_prompts[field]}: (using existing)[/dim]"
+ )
+ continue
+
+ prompt_text = requirement.field_prompts.get(field, f"Enter {field}")
+ is_secret = requirement.field_secret.get(field, False)
+
+ # Handle special defaults
+ default_value = ""
+ if field == "registry":
+ default_value = "docker.io"
+ elif field == "region":
+ default_value = "us-east-1"
+ elif field == "kubeconfig":
+ default_value = os.path.expanduser("~/.kube/config")
+
+ if default_value:
+ prompt_text = f"{prompt_text} [{default_value}]"
+
+ console.print(f"[bold]{prompt_text}:[/bold] ", end="")
+
+ if is_secret:
+ value = getpass.getpass("")
+ else:
+ try:
+ value = input()
+ except (EOFError, KeyboardInterrupt):
+ console.print("\n[yellow]Authentication cancelled.[/yellow]")
+ return None
+
+ # Use default if empty
+ if not value and default_value:
+ value = default_value
+ console.print(f"[dim]Using default: {default_value}[/dim]")
+
+ if not value and field != "registry": # registry can be empty for Docker Hub
+ console.print(f"[red]Error: {field} is required.[/red]")
+ return None
+
+ credentials[field] = value
+
+ return credentials
+
+ except (EOFError, KeyboardInterrupt):
+ console.print("\n[yellow]Authentication cancelled.[/yellow]")
+ return None
+
+ def execute_login(
+ self, requirement: LoginRequirement, credentials: dict[str, str]
+ ) -> tuple[bool, str, str]:
+ """Execute the login command with provided credentials."""
+
+ # Build the login command
+ if not requirement.login_command_template:
+ return False, "", "No login command template defined"
+
+ # Handle special cases
+ if requirement.service == "docker" and credentials.get("registry") in ["", "docker.io"]:
+ credentials["registry"] = "" # Docker Hub doesn't need registry in command
+
+ # Format the command
+ try:
+ login_cmd = requirement.login_command_template.format(**credentials)
+ except KeyError as e:
+ return False, "", f"Missing credential: {e}"
+
+ # For Docker, use stdin for password to avoid it showing in ps
+ if requirement.service in ["docker", "ghcr"]:
+ password = credentials.get("password") or credentials.get("token", "")
+ username = credentials.get("username", "")
+ registry = credentials.get("registry", "")
+
+ if requirement.service == "ghcr":
+ registry = "ghcr.io"
+
+ # Build safe command
+ if registry:
+ cmd_parts = ["docker", "login", registry, "-u", username, "--password-stdin"]
+ else:
+ cmd_parts = ["docker", "login", "-u", username, "--password-stdin"]
+
+ console.print(
+ f"[dim]Executing: docker login {registry or 'docker.io'} -u {username}[/dim]"
+ )
+
+ try:
+ process = subprocess.Popen(
+ cmd_parts,
+ stdin=subprocess.PIPE,
+ stdout=subprocess.PIPE,
+ stderr=subprocess.PIPE,
+ text=True,
+ )
+ stdout, stderr = process.communicate(input=password, timeout=60)
+ return process.returncode == 0, stdout.strip(), stderr.strip()
+ except subprocess.TimeoutExpired:
+ process.kill()
+ return False, "", "Login timed out"
+ except Exception as e:
+ return False, "", str(e)
+
+ # For other services, execute directly
+ console.print("[dim]Executing login...[/dim]")
+ try:
+ result = subprocess.run(
+ login_cmd,
+ shell=True,
+ capture_output=True,
+ text=True,
+ timeout=120,
+ )
+ return result.returncode == 0, result.stdout.strip(), result.stderr.strip()
+ except subprocess.TimeoutExpired:
+ return False, "", "Login timed out"
+ except Exception as e:
+ return False, "", str(e)
+
+ def handle_login(self, cmd: str, stderr: str) -> tuple[bool, str]:
+ """
+ Main entry point: detect login requirement, prompt, and execute.
+
+ Returns:
+ (success, message)
+ """
+ requirement = self.detect_login_requirement(cmd, stderr)
+
+ if not requirement:
+ return False, "Could not determine which service needs authentication"
+
+ used_saved = False
+ credentials = None
+
+ # Check for saved credentials first
+ if self._has_saved_credentials(requirement.service):
+ if self._ask_use_saved(requirement.service, requirement):
+ # Use saved credentials
+ credentials = self.cached_credentials.get(requirement.service, {}).copy()
+ # Remove metadata fields
+ credentials = {k: v for k, v in credentials.items() if not k.startswith("_")}
+ used_saved = True
+
+ console.print("[cyan]Using saved credentials...[/cyan]")
+ success, stdout, login_stderr = self.execute_login(requirement, credentials)
+
+ if success:
+ console.print(
+ f"[green]✓ Successfully logged in to {requirement.display_name} using saved credentials[/green]"
+ )
+ return True, f"Logged in to {requirement.display_name} using saved credentials"
+ else:
+ console.print(
+ f"[yellow]Saved credentials didn't work: {login_stderr[:100] if login_stderr else 'Unknown error'}[/yellow]"
+ )
+ console.print("[dim]Let's enter new credentials...[/dim]")
+ credentials = None
+ used_saved = False
+
+ # Prompt for new credentials if we don't have valid ones
+ if not credentials:
+ # Pre-fill with any partial saved credentials (like username)
+ pre_filled = {}
+ if requirement.service in self.cached_credentials:
+ saved = self.cached_credentials[requirement.service]
+ for field in requirement.required_fields:
+ if (
+ field in saved
+ and saved[field]
+ and not requirement.field_secret.get(field, False)
+ ):
+ pre_filled[field] = saved[field]
+
+ credentials = self.prompt_for_credentials(
+ requirement, pre_filled if pre_filled else None
+ )
+
+ if not credentials:
+ return False, "Authentication cancelled by user"
+
+ # Execute login
+ success, stdout, login_stderr = self.execute_login(requirement, credentials)
+
+ if success:
+ console.print(f"[green]✓ Successfully logged in to {requirement.display_name}[/green]")
+
+ # Ask to save credentials if they weren't from saved file
+ if not used_saved:
+ self._ask_save_credentials(requirement.service, credentials)
+
+ # Update session cache
+ self.cached_credentials[requirement.service] = credentials.copy()
+
+ return True, f"Successfully authenticated with {requirement.display_name}"
+ else:
+ error_msg = login_stderr or "Login failed"
+ console.print(f"[red]✗ Login failed: {error_msg}[/red]")
+
+ # Offer to retry
+ console.print()
+ try:
+ retry = input("Would you like to try again? (y/n): ").strip().lower()
+ except (EOFError, KeyboardInterrupt):
+ retry = "n"
+
+ if retry in ["y", "yes"]:
+ # Clear cached credentials for this service since they failed
+ if requirement.service in self.cached_credentials:
+ del self.cached_credentials[requirement.service]
+ return self.handle_login(cmd, stderr) # Recursive retry
+
+ return False, f"Login failed: {error_msg}"
+
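+
+# Usage sketch (illustrative): one way a caller might wire LoginHandler into a
+# single retry after a successful interactive login. `run_command` is a
+# hypothetical callable returning (returncode, stdout, stderr).
+def _retry_after_login(run_command: Callable[[str], tuple[int, str, str]], cmd: str) -> bool:
+    returncode, _stdout, stderr = run_command(cmd)
+    if returncode != 0:
+        handler = LoginHandler()
+        ok, message = handler.handle_login(cmd, stderr)
+        console.print(message)
+        if ok:
+            returncode, _stdout, stderr = run_command(cmd)
+    return returncode == 0
+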
+
+# ============================================================================
+# Auto-Fixer Class
+# ============================================================================
+
+
+class AutoFixer:
+ """Auto-fixes errors based on diagnosis."""
+
+ def __init__(self, llm_callback: Callable[[str, dict], dict] | None = None):
+ self.diagnoser = ErrorDiagnoser()
+ self.llm_callback = llm_callback
+ # Track all attempted fixes across multiple calls to avoid repeating
+ self._attempted_fixes: dict[str, set[str]] = {} # cmd -> set of fix commands tried
+ self._attempted_strategies: dict[str, set[str]] = {} # cmd -> set of strategies tried
+
+ def _get_fix_key(self, cmd: str) -> str:
+ """Generate a key for tracking fixes for a command."""
+ # Normalize the command (strip sudo, whitespace)
+ normalized = cmd.strip()
+ if normalized.startswith("sudo "):
+ normalized = normalized[5:].strip()
+ return normalized
+
+ def _is_fix_attempted(self, original_cmd: str, fix_cmd: str) -> bool:
+ """Check if a fix command has already been attempted for this command."""
+ key = self._get_fix_key(original_cmd)
+ fix_normalized = fix_cmd.strip()
+
+ if key not in self._attempted_fixes:
+ return False
+
+ return fix_normalized in self._attempted_fixes[key]
+
+ def _mark_fix_attempted(self, original_cmd: str, fix_cmd: str) -> None:
+ """Mark a fix command as attempted."""
+ key = self._get_fix_key(original_cmd)
+
+ if key not in self._attempted_fixes:
+ self._attempted_fixes[key] = set()
+
+ self._attempted_fixes[key].add(fix_cmd.strip())
+
+ def _is_strategy_attempted(self, original_cmd: str, strategy: str, error_type: str) -> bool:
+ """Check if a strategy has been attempted for this command/error combination."""
+ key = f"{self._get_fix_key(original_cmd)}:{error_type}"
+
+ if key not in self._attempted_strategies:
+ return False
+
+ return strategy in self._attempted_strategies[key]
+
+ def _mark_strategy_attempted(self, original_cmd: str, strategy: str, error_type: str) -> None:
+ """Mark a strategy as attempted for this command/error combination."""
+ key = f"{self._get_fix_key(original_cmd)}:{error_type}"
+
+ if key not in self._attempted_strategies:
+ self._attempted_strategies[key] = set()
+
+ self._attempted_strategies[key].add(strategy)
+
+ def reset_attempts(self, cmd: str | None = None) -> None:
+ """Reset attempted fixes tracking. If cmd is None, reset all."""
+ if cmd is None:
+ self._attempted_fixes.clear()
+ self._attempted_strategies.clear()
+ else:
+ key = self._get_fix_key(cmd)
+ if key in self._attempted_fixes:
+ del self._attempted_fixes[key]
+ # Clear all strategies for this command
+ to_delete = [k for k in self._attempted_strategies if k.startswith(key)]
+ for k in to_delete:
+ del self._attempted_strategies[k]
+
+ def _get_llm_fix(self, cmd: str, stderr: str, diagnosis: dict) -> dict | None:
+ """Use LLM to diagnose error and suggest fix commands.
+
+ This is called when pattern matching fails to identify the error.
+ """
+ if not self.llm_callback:
+ return None
+
+ context = {
+ "error_command": cmd,
+ "error_output": stderr[:1000], # Truncate for LLM context
+ "current_diagnosis": diagnosis,
+ }
+
+ # Create a targeted prompt for error diagnosis
+ prompt = f"""Analyze this Linux command error and provide fix commands.
+
+FAILED COMMAND: {cmd}
+
+ERROR OUTPUT:
+{stderr[:800]}
+
+Provide a JSON response with:
+1. "fix_commands": list of shell commands to fix this error (in order)
+2. "reasoning": brief explanation of the error and fix
+
+Focus on common issues:
+- Docker: container already exists (docker rm -f <name>), port conflicts, daemon not running
+- Permissions: use sudo, create directories
+- Services: systemctl start/restart
+- Files: mkdir -p, touch, chown
+
+Example response:
+{{"fix_commands": ["docker rm -f ollama", "docker run ..."], "reasoning": "Container 'ollama' already exists, removing it first"}}"""
+
+ try:
+ response = self.llm_callback(prompt, context)
+
+ if response and response.get("response_type") != "error":
+ # Check if the response contains fix commands directly
+ if response.get("fix_commands"):
+ return {
+ "fix_commands": response["fix_commands"],
+ "reasoning": response.get("reasoning", "AI-suggested fix"),
+ }
+
+ # Check if it's a do_commands response
+ if response.get("do_commands"):
+ return {
+                        "fix_commands": [c["command"] for c in response["do_commands"]],
+ "reasoning": response.get("reasoning", "AI-suggested fix"),
+ }
+
+ # Try to parse answer as fix suggestion
+ if response.get("answer"):
+ # Extract commands from natural language response
+ answer = response["answer"]
+ commands = []
+ for line in answer.split("\n"):
+ line = line.strip()
+ if (
+ line.startswith("$")
+ or line.startswith("sudo ")
+ or line.startswith("docker ")
+ ):
+ commands.append(line.lstrip("$ "))
+ if commands:
+ return {"fix_commands": commands, "reasoning": "Extracted from AI response"}
+
+ return None
+
+ except Exception as e:
+ console.print(f"[dim] LLM fix generation failed: {e}[/dim]")
+ return None
+
+ def _execute_command(
+ self, cmd: str, needs_sudo: bool = False, timeout: int = 120
+ ) -> tuple[bool, str, str]:
+ """Execute a single command."""
+ import sys
+
+ try:
+ if needs_sudo and not cmd.strip().startswith("sudo"):
+ cmd = f"sudo {cmd}"
+
+ # Handle comments
+ if cmd.strip().startswith("#"):
+ return True, "", ""
+
+ # For sudo commands, we need to handle the password prompt specially
+ is_sudo = cmd.strip().startswith("sudo")
+
+ if is_sudo:
+ # Flush output before sudo to ensure clean state
+ sys.stdout.flush()
+ sys.stderr.flush()
+
+ result = subprocess.run(
+ cmd,
+ shell=True,
+ capture_output=True,
+ text=True,
+ timeout=timeout,
+ )
+
+ if is_sudo:
+ # After sudo, ensure console is in clean state
+ # Print empty line to reset cursor position after potential password prompt
+ sys.stdout.write("\n")
+ sys.stdout.flush()
+
+ return result.returncode == 0, result.stdout.strip(), result.stderr.strip()
+ except subprocess.TimeoutExpired:
+ return False, "", f"Command timed out after {timeout} seconds"
+ except Exception as e:
+ return False, "", str(e)
+
+ def auto_fix_error(
+ self,
+ cmd: str,
+ stderr: str,
+ diagnosis: dict[str, Any],
+ max_attempts: int = 5,
+ ) -> tuple[bool, str, list[str]]:
+ """
+ General-purpose auto-fix system with retry logic.
+
+ Tracks attempted fixes to avoid repeating the same fixes.
+
+ Returns:
+ Tuple of (fixed, message, commands_executed)
+ """
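+        # Illustrative return values (sketch):
+        #   (True,  "Fixed after 2 attempt(s): Installed curl", [...executed commands...])
+        #   (False, "Could not fix after 5 attempts (1 skipped as duplicates)", [...])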
+ all_commands_executed = []
+ current_stderr = stderr
+ current_diagnosis = diagnosis
+ attempt = 0
+ skipped_attempts = 0
+ max_skips = 3 # Max attempts to skip before giving up
+
+ while attempt < max_attempts and skipped_attempts < max_skips:
+ attempt += 1
+ error_type = current_diagnosis.get("error_type", "unknown")
+ strategy = current_diagnosis.get("fix_strategy", "")
+ category = current_diagnosis.get("category", "unknown")
+
+ # Check if this strategy was already attempted for this error
+ if self._is_strategy_attempted(cmd, strategy, error_type):
+ console.print(
+ f"[dim] Skipping already-tried strategy: {strategy} for {error_type}[/dim]"
+ )
+ skipped_attempts += 1
+
+ # Try to get a different diagnosis by re-analyzing
+ if current_stderr:
+ # Force a different approach by marking current strategy as exhausted
+ current_diagnosis["fix_strategy"] = ""
+ current_diagnosis["can_auto_fix"] = False
+ continue
+
+ # Mark this strategy as attempted
+ self._mark_strategy_attempted(cmd, strategy, error_type)
+
+ # Check fix commands that would be generated
+ fix_commands = current_diagnosis.get("fix_commands", [])
+
+ # Filter out already-attempted fix commands
+ new_fix_commands = []
+ for fix_cmd in fix_commands:
+ if fix_cmd.startswith("#"): # Comments are always allowed
+ new_fix_commands.append(fix_cmd)
+ elif self._is_fix_attempted(cmd, fix_cmd):
+ console.print(f"[dim] Skipping already-executed: {fix_cmd[:50]}...[/dim]")
+ else:
+ new_fix_commands.append(fix_cmd)
+
+ # If all fix commands were already tried, skip this attempt
+ if fix_commands and not new_fix_commands:
+ console.print(f"[dim] All fix commands already tried for {error_type}[/dim]")
+ skipped_attempts += 1
+ continue
+
+ # Update diagnosis with filtered commands
+ current_diagnosis["fix_commands"] = new_fix_commands
+
+ # Reset skip counter since we found something new to try
+ skipped_attempts = 0
+
+ severity = current_diagnosis.get("severity", "error")
+
+ # Visual grouping for auto-fix attempts
+ from rich.panel import Panel
+ from rich.text import Text
+
+ fix_title = Text()
+ fix_title.append("🔧 AUTO-FIX ", style="bold yellow")
+ fix_title.append(f"Attempt {attempt}/{max_attempts}", style="dim")
+
+ severity_color = "red" if severity == "critical" else "yellow"
+ fix_content = Text()
+ if severity == "critical":
+ fix_content.append("⚠️ CRITICAL: ", style="bold red")
+ fix_content.append(f"[{category}] ", style="dim")
+ fix_content.append(error_type, style=f"bold {severity_color}")
+
+ console.print()
+ console.print(
+ Panel(
+ fix_content,
+ title=fix_title,
+ title_align="left",
+ border_style=severity_color,
+ padding=(0, 1),
+ )
+ )
+
+ # Ensure output is flushed before executing fixes
+ import sys
+
+ sys.stdout.flush()
+
+ fixed, message, commands = self.apply_single_fix(cmd, current_stderr, current_diagnosis)
+
+ # Mark all executed commands as attempted
+ for exec_cmd in commands:
+ self._mark_fix_attempted(cmd, exec_cmd)
+ all_commands_executed.extend(commands)
+
+ if fixed:
+ # Check if it's just a "use sudo" suggestion
+ if message == "Will retry with sudo":
+ sudo_cmd = f"sudo {cmd}" if not cmd.startswith("sudo") else cmd
+
+ # Check if we already tried sudo
+ if self._is_fix_attempted(cmd, sudo_cmd):
+ console.print("[dim] Already tried sudo, skipping...[/dim]")
+ skipped_attempts += 1
+ continue
+
+ self._mark_fix_attempted(cmd, sudo_cmd)
+ success, stdout, new_stderr = self._execute_command(sudo_cmd)
+ all_commands_executed.append(sudo_cmd)
+
+ if success:
+ console.print(
+ Panel(
+ "[bold green]✓ Fixed with sudo[/bold green]",
+ border_style="green",
+ padding=(0, 1),
+ expand=False,
+ )
+ )
+ return (
+ True,
+ f"Fixed with sudo after {attempt} attempt(s)",
+ all_commands_executed,
+ )
+ else:
+ current_stderr = new_stderr
+ current_diagnosis = self.diagnoser.diagnose_error(cmd, new_stderr)
+ continue
+
+ # Verify the original command now works
+ console.print(
+ Panel(
+ f"[bold cyan]✓ Fix applied:[/bold cyan] {message}\n[dim]Verifying original command...[/dim]",
+ border_style="cyan",
+ padding=(0, 1),
+ expand=False,
+ )
+ )
+
+ verify_cmd = f"sudo {cmd}" if not cmd.startswith("sudo") else cmd
+ success, stdout, new_stderr = self._execute_command(verify_cmd)
+ all_commands_executed.append(verify_cmd)
+
+ if success:
+ console.print(
+ Panel(
+ "[bold green]✓ Verified![/bold green] Command now succeeds",
+ border_style="green",
+ padding=(0, 1),
+ expand=False,
+ )
+ )
+ return (
+ True,
+ f"Fixed after {attempt} attempt(s): {message}",
+ all_commands_executed,
+ )
+ else:
+ new_diagnosis = self.diagnoser.diagnose_error(cmd, new_stderr)
+
+ if new_diagnosis["error_type"] == error_type:
+ console.print(
+ " [dim yellow]Same error persists, trying different approach...[/dim yellow]"
+ )
+ else:
+ console.print(
+ f" [yellow]New error: {new_diagnosis['error_type']}[/yellow]"
+ )
+
+ current_stderr = new_stderr
+ current_diagnosis = new_diagnosis
+ else:
+ console.print(f" [dim red]Fix attempt failed: {message}[/dim red]")
+ console.print(" [dim]Trying fallback...[/dim]")
+
+ # Try with sudo as fallback
+ sudo_fallback = f"sudo {cmd}"
+ if not cmd.strip().startswith("sudo") and not self._is_fix_attempted(
+ cmd, sudo_fallback
+ ):
+ self._mark_fix_attempted(cmd, sudo_fallback)
+ success, _, new_stderr = self._execute_command(sudo_fallback)
+ all_commands_executed.append(sudo_fallback)
+
+ if success:
+ return True, "Fixed with sudo fallback", all_commands_executed
+
+ current_stderr = new_stderr
+ current_diagnosis = self.diagnoser.diagnose_error(cmd, new_stderr)
+ else:
+ if cmd.strip().startswith("sudo"):
+ console.print("[dim] Already running with sudo, no more fallbacks[/dim]")
+ else:
+ console.print("[dim] Sudo fallback already tried[/dim]")
+ break
+
+ # Final summary of what was attempted
+ unique_attempts = len(self._attempted_fixes.get(self._get_fix_key(cmd), set()))
+ if unique_attempts > 0:
+ console.print(f"[dim] Total unique fixes attempted: {unique_attempts}[/dim]")
+
+ return (
+ False,
+ f"Could not fix after {attempt} attempts ({skipped_attempts} skipped as duplicates)",
+ all_commands_executed,
+ )
+
+ def apply_single_fix(
+ self,
+ cmd: str,
+ stderr: str,
+ diagnosis: dict[str, Any],
+ ) -> tuple[bool, str, list[str]]:
+ """Apply a single fix attempt based on the error diagnosis."""
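+        # Illustrative shape of the diagnosis dict this method reads (sketch only;
+        # real values come from the diagnoser):
+        #   {"error_type": "command_not_found", "category": "package",
+        #    "fix_strategy": "install_package", "fix_commands": [],
+        #    "extracted_info": {"missing_command": "htop"}, "extracted_path": None}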
+ error_type = diagnosis.get("error_type", "unknown")
+ category = diagnosis.get("category", "unknown")
+ strategy = diagnosis.get("fix_strategy", "")
+ fix_commands = diagnosis.get("fix_commands", [])
+ extracted = diagnosis.get("extracted_info", {})
+ path = diagnosis.get("extracted_path")
+
+ commands_executed = []
+
+ # Strategy-based fixes
+
+ # === Use Sudo ===
+ if strategy == "use_sudo" or error_type in [
+ "permission_denied",
+ "operation_not_permitted",
+ "access_denied",
+ ]:
+ if not cmd.strip().startswith("sudo"):
+ console.print("[dim] Adding sudo...[/dim]")
+ return True, "Will retry with sudo", []
+
+ # === Create Path ===
+ if strategy == "create_path" or error_type == "not_found":
+ missing_path = path or extracted.get("missing_path")
+
+ if missing_path:
+ parent_dir = os.path.dirname(missing_path)
+
+ if parent_dir and not os.path.exists(parent_dir):
+ console.print(f"[dim] Creating directory: {parent_dir}[/dim]")
+ mkdir_cmd = f"sudo mkdir -p {parent_dir}"
+ success, _, mkdir_err = self._execute_command(mkdir_cmd)
+ commands_executed.append(mkdir_cmd)
+
+ if success:
+ return True, f"Created directory {parent_dir}", commands_executed
+ else:
+ return False, f"Failed to create directory: {mkdir_err}", commands_executed
+
+ # === Install Package ===
+ if strategy == "install_package" or error_type == "command_not_found":
+ missing_cmd = extracted.get(
+ "missing_command"
+ ) or self.diagnoser.extract_command_from_error(stderr)
+ if not missing_cmd:
+ missing_cmd = cmd.split()[0] if cmd.split() else ""
+
+ suggested_pkg = UBUNTU_PACKAGE_MAP.get(missing_cmd, missing_cmd)
+
+ if missing_cmd:
+ console.print(f"[dim] Installing package: {suggested_pkg}[/dim]")
+
+ # Update repos first
+ update_cmd = "sudo apt-get update"
+ self._execute_command(update_cmd)
+ commands_executed.append(update_cmd)
+
+ # Install package
+ install_cmd = f"sudo apt-get install -y {suggested_pkg}"
+ success, _, install_err = self._execute_command(install_cmd)
+ commands_executed.append(install_cmd)
+
+ if success:
+ return True, f"Installed {suggested_pkg}", commands_executed
+ else:
+ # Try without suggested package mapping
+ if suggested_pkg != missing_cmd:
+ install_cmd2 = f"sudo apt-get install -y {missing_cmd}"
+ success, _, _ = self._execute_command(install_cmd2)
+ commands_executed.append(install_cmd2)
+ if success:
+ return True, f"Installed {missing_cmd}", commands_executed
+
+ return False, f"Failed to install: {install_err[:100]}", commands_executed
+
+ # === Clear Package Lock ===
+ if strategy == "clear_lock" or error_type in [
+ "dpkg_lock",
+ "apt_lock",
+ "could_not_get_lock",
+ ]:
+ console.print("[dim] Clearing package locks...[/dim]")
+
+ lock_cmds = [
+ "sudo rm -f /var/lib/dpkg/lock-frontend",
+ "sudo rm -f /var/lib/dpkg/lock",
+ "sudo rm -f /var/cache/apt/archives/lock",
+ "sudo dpkg --configure -a",
+ ]
+
+ for lock_cmd in lock_cmds:
+ self._execute_command(lock_cmd)
+ commands_executed.append(lock_cmd)
+
+ return True, "Cleared package locks", commands_executed
+
+ # === Fix Dependencies ===
+ if strategy in ["fix_dependencies", "fix_broken"]:
+ console.print("[dim] Fixing package dependencies...[/dim]")
+
+ fix_cmds = [
+ "sudo apt-get install -f -y",
+ "sudo dpkg --configure -a",
+ ]
+
+ for fix_cmd in fix_cmds:
+ success, _, _ = self._execute_command(fix_cmd)
+ commands_executed.append(fix_cmd)
+
+ return True, "Attempted dependency fix", commands_executed
+
+ # === Start Service ===
+ if strategy in ["start_service", "check_service"] or error_type in [
+ "service_inactive",
+ "service_not_running",
+ ]:
+ service = extracted.get("service")
+
+ if service:
+ console.print(f"[dim] Starting service: {service}[/dim]")
+ start_cmd = f"sudo systemctl start {service}"
+ success, _, start_err = self._execute_command(start_cmd)
+ commands_executed.append(start_cmd)
+
+ if success:
+ return True, f"Started service {service}", commands_executed
+ else:
+ # Try enable --now
+ enable_cmd = f"sudo systemctl enable --now {service}"
+ success, _, _ = self._execute_command(enable_cmd)
+ commands_executed.append(enable_cmd)
+ if success:
+ return True, f"Enabled and started {service}", commands_executed
+ return False, f"Failed to start {service}: {start_err[:100]}", commands_executed
+
+ # === Unmask Service ===
+ if strategy == "unmask_service" or error_type == "service_masked":
+ service = extracted.get("service")
+
+ if service:
+ console.print(f"[dim] Unmasking service: {service}[/dim]")
+ unmask_cmd = f"sudo systemctl unmask {service}"
+ success, _, _ = self._execute_command(unmask_cmd)
+ commands_executed.append(unmask_cmd)
+
+ if success:
+ start_cmd = f"sudo systemctl start {service}"
+ self._execute_command(start_cmd)
+ commands_executed.append(start_cmd)
+ return True, f"Unmasked and started {service}", commands_executed
+
+ # === Free Disk Space ===
+ if strategy == "free_disk" or error_type == "no_space":
+ console.print("[dim] Cleaning up disk space...[/dim]")
+
+ cleanup_cmds = [
+ "sudo apt-get clean",
+ "sudo apt-get autoremove -y",
+ "sudo journalctl --vacuum-size=100M",
+ ]
+
+ for cleanup_cmd in cleanup_cmds:
+ self._execute_command(cleanup_cmd)
+ commands_executed.append(cleanup_cmd)
+
+ return True, "Freed disk space", commands_executed
+
+ # === Free Memory ===
+ if strategy == "free_memory" or error_type in [
+ "oom",
+ "cannot_allocate",
+ "memory_exhausted",
+ ]:
+ console.print("[dim] Freeing memory...[/dim]")
+
+ mem_cmds = [
+ "sudo sync",
+ "echo 3 | sudo tee /proc/sys/vm/drop_caches",
+ ]
+
+ for mem_cmd in mem_cmds:
+ self._execute_command(mem_cmd)
+ commands_executed.append(mem_cmd)
+
+ return True, "Freed memory caches", commands_executed
+
+ # === Fix Config Syntax (all config error types) ===
+ config_error_types = [
+ "config_syntax_error",
+ "nginx_config_error",
+ "nginx_syntax_error",
+ "nginx_unexpected",
+ "nginx_unknown_directive",
+ "nginx_test_failed",
+ "apache_syntax_error",
+ "apache_config_error",
+ "config_line_error",
+ "mysql_config_error",
+ "postgres_config_error",
+ "generic_config_syntax",
+ "invalid_config",
+ "config_parse_error",
+ "syntax_error",
+ ]
+
+ if error_type in config_error_types or category == "config":
+ config_file = extracted.get("config_file")
+ line_num = extracted.get("line_num")
+
+ # Try to extract config file/line from error if not already done
+ if not config_file:
+ config_file, line_num = self.diagnoser.extract_config_file_and_line(stderr)
+
+ if config_file and line_num:
+ console.print(f"[dim] Config error at {config_file}:{line_num}[/dim]")
+ fixed, msg = self.fix_config_syntax(config_file, line_num, stderr, cmd)
+ if fixed:
+ # Verify the fix (e.g., nginx -t)
+ if "nginx" in error_type or "nginx" in cmd.lower():
+ verify_cmd = "sudo nginx -t"
+ v_success, _, v_stderr = self._execute_command(verify_cmd)
+ commands_executed.append(verify_cmd)
+ if v_success:
+ return True, f"{msg} - nginx config now valid", commands_executed
+ else:
+ console.print("[yellow] Config still has errors[/yellow]")
+ # Re-diagnose for next iteration
+ return False, f"{msg} but still has errors", commands_executed
+ return True, msg, commands_executed
+ else:
+ return False, msg, commands_executed
+ else:
+ # Can't find specific line, provide general guidance
+ if "nginx" in error_type or "nginx" in cmd.lower():
+ console.print("[dim] Testing nginx config...[/dim]")
+                    test_cmd = "sudo nginx -t"  # nginx -t writes its report to stderr, which is parsed below
+ success, stdout, test_err = self._execute_command(test_cmd)
+ commands_executed.append(test_cmd)
+ if not success:
+ # Try to extract file/line from test output
+ cf, ln = self.diagnoser.extract_config_file_and_line(test_err)
+ if cf and ln:
+ fixed, msg = self.fix_config_syntax(cf, ln, test_err, cmd)
+ if fixed:
+ return True, msg, commands_executed
+ return False, "Could not identify config file/line to fix", commands_executed
+
+ # === Network Fixes ===
+ if category == "network":
+ if strategy == "check_dns" or error_type in [
+ "dns_temp_fail",
+ "dns_unknown",
+ "dns_failed",
+ ]:
+ console.print("[dim] Restarting DNS resolver...[/dim]")
+ dns_cmd = "sudo systemctl restart systemd-resolved"
+ success, _, _ = self._execute_command(dns_cmd)
+ commands_executed.append(dns_cmd)
+ if success:
+ return True, "Restarted DNS resolver", commands_executed
+
+ if strategy == "find_port_user" or error_type == "address_in_use":
+ port = extracted.get("port")
+ if port:
+ console.print(f"[dim] Port {port} in use, checking...[/dim]")
+ lsof_cmd = f"sudo lsof -i :{port}"
+ success, stdout, _ = self._execute_command(lsof_cmd)
+ commands_executed.append(lsof_cmd)
+ if stdout:
+ console.print(f"[dim] Process using port: {stdout[:100]}[/dim]")
+ return (
+ False,
+ f"Port {port} is in use - kill the process first",
+ commands_executed,
+ )
+
+ # === Remount Read-Write ===
+ if strategy == "remount_rw" or error_type == "readonly_fs":
+ if path:
+ console.print("[dim] Remounting filesystem read-write...[/dim]")
+ # Find mount point
+ mount_point = "/"
+ check_path = os.path.abspath(path) if path else "/"
+ while check_path != "/":
+ if os.path.ismount(check_path):
+ mount_point = check_path
+ break
+ check_path = os.path.dirname(check_path)
+
+ remount_cmd = f"sudo mount -o remount,rw {mount_point}"
+ success, _, remount_err = self._execute_command(remount_cmd)
+ commands_executed.append(remount_cmd)
+ if success:
+ return True, f"Remounted {mount_point} read-write", commands_executed
+
+ # === Fix Symlink Loop ===
+ if strategy == "fix_symlink" or error_type == "symlink_loop":
+ if path:
+ console.print(f"[dim] Fixing symlink: {path}[/dim]")
+ # Check if it's a broken symlink
+ if os.path.islink(path):
+ rm_cmd = f"sudo rm {path}"
+ success, _, _ = self._execute_command(rm_cmd)
+ commands_executed.append(rm_cmd)
+ if success:
+ return True, f"Removed broken symlink {path}", commands_executed
+
+ # === Wait and Retry ===
+ if strategy == "wait_retry" or error_type in [
+ "resource_unavailable",
+ "text_file_busy",
+ "device_busy",
+ ]:
+ import time
+
+ console.print("[dim] Waiting for resource...[/dim]")
+ time.sleep(2)
+ return True, "Waited 2 seconds", commands_executed
+
+ # === Use xargs for long argument lists ===
+ if strategy == "use_xargs" or error_type == "arg_list_too_long":
+ console.print("[dim] Argument list too long - need to use xargs or loop[/dim]")
+ return False, "Use xargs or a loop to process files in batches", commands_executed
+
+ # === Execute provided fix commands ===
+ if fix_commands:
+ console.print("[dim] Executing fix commands...[/dim]")
+ for fix_cmd in fix_commands:
+ if fix_cmd.startswith("#"):
+ continue # Skip comments
+ success, stdout, err = self._execute_command(fix_cmd)
+ commands_executed.append(fix_cmd)
+ if not success and err:
+ console.print(f"[dim] Warning: {fix_cmd} failed: {err[:50]}[/dim]")
+
+ if commands_executed:
+ return True, f"Executed {len(commands_executed)} fix commands", commands_executed
+
+ # === Try LLM-based fix if available ===
+ if self.llm_callback and error_type == "unknown":
+ console.print("[dim] Using AI to diagnose error...[/dim]")
+ llm_fix = self._get_llm_fix(cmd, stderr, diagnosis)
+ if llm_fix:
+ fix_commands = llm_fix.get("fix_commands", [])
+ reasoning = llm_fix.get("reasoning", "AI-suggested fix")
+
+ if fix_commands:
+ console.print(f"[cyan] 🤖 AI diagnosis: {reasoning}[/cyan]")
+ for fix_cmd in fix_commands:
+ if self._is_fix_attempted(cmd, fix_cmd):
+ console.print(f"[dim] Skipping (already tried): {fix_cmd}[/dim]")
+ continue
+
+ console.print(f"[dim] Executing: {fix_cmd}[/dim]")
+ self._mark_fix_attempted(cmd, fix_cmd)
+
+ needs_sudo = fix_cmd.strip().startswith("sudo") or "docker" in fix_cmd
+                        success, _, _ = self._execute_command(
+ fix_cmd, needs_sudo=needs_sudo
+ )
+ commands_executed.append(fix_cmd)
+
+ if success:
+ console.print(f"[green] ✓ Fixed: {fix_cmd}[/green]")
+ return True, reasoning, commands_executed
+
+ if commands_executed:
+ return True, "Executed AI-suggested fixes", commands_executed
+
+ # === Fallback: try with sudo ===
+ if not cmd.strip().startswith("sudo"):
+ console.print("[dim] Fallback: will try with sudo...[/dim]")
+ return True, "Will retry with sudo", []
+
+ return False, f"No fix strategy for {error_type}", commands_executed
+
+ def fix_config_syntax(
+ self,
+ config_file: str,
+ line_num: int,
+ stderr: str,
+ original_cmd: str,
+ ) -> tuple[bool, str]:
+ """Fix configuration file syntax errors."""
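+        # Strategy sketch: read the file, inspect the offending line, then apply a
+        # targeted in-place sed edit (add a missing ";" or quote, or comment the line out).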
+ console.print(f"[dim] Analyzing config: {config_file}:{line_num}[/dim]")
+
+ # Read the config file
+ success, config_content, read_err = self._execute_command(f"sudo cat {config_file}")
+ if not success or not config_content:
+ return False, f"Could not read {config_file}: {read_err}"
+
+ lines = config_content.split("\n")
+ if line_num > len(lines) or line_num < 1:
+ return False, f"Invalid line number {line_num}"
+
+ problem_line = lines[line_num - 1]
+ console.print(f"[dim] Line {line_num}: {problem_line.strip()[:60]}...[/dim]")
+
+ stderr_lower = stderr.lower()
+
+ # Duplicate entry
+ if "duplicate" in stderr_lower:
+ console.print("[cyan] Commenting out duplicate entry...[/cyan]")
+ fix_cmd = f"sudo sed -i '{line_num}s/^/# DUPLICATE: /' {config_file}"
+ success, _, _ = self._execute_command(fix_cmd)
+ if success:
+ return True, f"Commented out duplicate at line {line_num}"
+
+ # Missing semicolon (for nginx, etc.)
+ if "unexpected" in stderr_lower or "expecting" in stderr_lower:
+ stripped = problem_line.strip()
+ if stripped and not stripped.endswith((";", "{", "}", ":", ",", "#", ")")):
+ console.print("[cyan] Adding missing semicolon...[/cyan]")
+ escaped_line = stripped.replace("/", "\\/").replace("&", "\\&")
+ fix_cmd = f"sudo sed -i '{line_num}s/.*/ {escaped_line};/' {config_file}"
+ success, _, _ = self._execute_command(fix_cmd)
+ if success:
+ return True, f"Added semicolon at line {line_num}"
+
+ # Unknown directive
+ if "unknown" in stderr_lower and ("directive" in stderr_lower or "option" in stderr_lower):
+ console.print("[cyan] Commenting out unknown directive...[/cyan]")
+ fix_cmd = f"sudo sed -i '{line_num}s/^/# UNKNOWN: /' {config_file}"
+ success, _, _ = self._execute_command(fix_cmd)
+ if success:
+ return True, f"Commented out unknown directive at line {line_num}"
+
+ # Invalid value/argument
+ if "invalid" in stderr_lower:
+ console.print("[cyan] Commenting out line with invalid value...[/cyan]")
+ fix_cmd = f"sudo sed -i '{line_num}s/^/# INVALID: /' {config_file}"
+ success, _, _ = self._execute_command(fix_cmd)
+ if success:
+ return True, f"Commented out invalid line at line {line_num}"
+
+ # Unterminated string
+ if "unterminated" in stderr_lower or ("string" in stderr_lower and "quote" in stderr_lower):
+ if problem_line.count('"') % 2 == 1:
+ console.print("[cyan] Adding missing double quote...[/cyan]")
+ fix_cmd = f"sudo sed -i '{line_num}s/$/\"/' {config_file}"
+ success, _, _ = self._execute_command(fix_cmd)
+ if success:
+ return True, f"Added missing quote at line {line_num}"
+ elif problem_line.count("'") % 2 == 1:
+ console.print("[cyan] Adding missing single quote...[/cyan]")
+ fix_cmd = f'sudo sed -i "{line_num}s/$/\'/" {config_file}'
+ success, _, _ = self._execute_command(fix_cmd)
+ if success:
+ return True, f"Added missing quote at line {line_num}"
+
+ # Fallback: comment out problematic line
+ console.print("[cyan] Fallback: commenting out problematic line...[/cyan]")
+ fix_cmd = f"sudo sed -i '{line_num}s/^/# ERROR: /' {config_file}"
+ success, _, _ = self._execute_command(fix_cmd)
+ if success:
+ return True, f"Commented out problematic line {line_num}"
+
+ return False, "Could not identify a fix for this config error"
+
+
+# ============================================================================
+# Utility Functions
+# ============================================================================
+
+
+def get_error_category(error_type: str) -> str:
+ """Get the category for an error type."""
+ for pattern in ALL_ERROR_PATTERNS:
+ if pattern.error_type == error_type:
+ return pattern.category
+ return "unknown"
+
+
+def get_severity(error_type: str) -> str:
+ """Get the severity for an error type."""
+ for pattern in ALL_ERROR_PATTERNS:
+ if pattern.error_type == error_type:
+ return pattern.severity
+ return "error"
+
+
+def is_critical_error(error_type: str) -> bool:
+ """Check if an error type is critical."""
+ return get_severity(error_type) == "critical"
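+# Example (illustrative): is_critical_error("no_space") is True only when the matching
+# entry in ALL_ERROR_PATTERNS declares severity "critical"; unknown error types fall
+# back to category "unknown" and severity "error".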
diff --git a/cortex/do_runner/diagnosis_v2.py b/cortex/do_runner/diagnosis_v2.py
new file mode 100644
index 00000000..7607a06f
--- /dev/null
+++ b/cortex/do_runner/diagnosis_v2.py
@@ -0,0 +1,1931 @@
+"""
+Cortex Diagnosis System v2
+
+A structured error diagnosis and resolution system with the following flow:
+1. Categorize error type (file, login, package, syntax, input, etc.)
+2. LLM generates fix commands with variable placeholders
+3. Resolve variables from query, LLM, or system_info_generator
+4. Execute fix commands and log output
+5. If error, push to stack and repeat
+6. Test original command, if still fails, repeat
+
+Uses a stack-based approach for tracking command errors.
+"""
+
+import json
+import os
+import re
+import subprocess
+import time
+from collections.abc import Callable
+from dataclasses import dataclass, field
+from enum import Enum
+from typing import Any
+
+from rich.console import Console
+from rich.panel import Panel
+from rich.table import Table
+from rich.tree import Tree
+
+console = Console()
+
+
+# =============================================================================
+# ERROR CATEGORIES
+# =============================================================================
+
+
+class ErrorCategory(str, Enum):
+ """Broad categories of errors that can occur during command execution."""
+
+ # File & Directory Errors (LOCAL)
+ FILE_NOT_FOUND = "file_not_found"
+ FILE_EXISTS = "file_exists"
+ DIRECTORY_NOT_FOUND = "directory_not_found"
+ PERMISSION_DENIED_LOCAL = "permission_denied_local" # Local file/dir permission
+ READ_ONLY_FILESYSTEM = "read_only_filesystem"
+ DISK_FULL = "disk_full"
+
+ # URL/Link Permission Errors (REMOTE)
+ PERMISSION_DENIED_URL = "permission_denied_url" # URL/API permission
+ ACCESS_DENIED_REGISTRY = "access_denied_registry" # Container registry
+ ACCESS_DENIED_REPO = "access_denied_repo" # Git/package repo
+ ACCESS_DENIED_API = "access_denied_api" # API endpoint
+
+ # Authentication & Login Errors
+ LOGIN_REQUIRED = "login_required"
+ AUTH_FAILED = "auth_failed"
+ TOKEN_EXPIRED = "token_expired"
+ INVALID_CREDENTIALS = "invalid_credentials"
+
+ # Legacy - for backward compatibility
+ PERMISSION_DENIED = "permission_denied" # Will be resolved to LOCAL or URL
+
+ # Package & Resource Errors
+ PACKAGE_NOT_FOUND = "package_not_found"
+ IMAGE_NOT_FOUND = "image_not_found"
+ RESOURCE_NOT_FOUND = "resource_not_found"
+ DEPENDENCY_MISSING = "dependency_missing"
+ VERSION_CONFLICT = "version_conflict"
+
+ # Command Errors
+ COMMAND_NOT_FOUND = "command_not_found"
+ SYNTAX_ERROR = "syntax_error"
+ INVALID_ARGUMENT = "invalid_argument"
+ MISSING_ARGUMENT = "missing_argument"
+ DEPRECATED_SYNTAX = "deprecated_syntax"
+
+ # Service & Process Errors
+ SERVICE_NOT_RUNNING = "service_not_running"
+ SERVICE_FAILED = "service_failed"
+ PORT_IN_USE = "port_in_use"
+ PROCESS_KILLED = "process_killed"
+ TIMEOUT = "timeout"
+
+ # Network Errors
+ NETWORK_UNREACHABLE = "network_unreachable"
+ CONNECTION_REFUSED = "connection_refused"
+ DNS_FAILED = "dns_failed"
+ SSL_ERROR = "ssl_error"
+
+ # Configuration Errors
+ CONFIG_SYNTAX_ERROR = "config_syntax_error"
+ CONFIG_INVALID_VALUE = "config_invalid_value"
+ CONFIG_MISSING_KEY = "config_missing_key"
+
+ # Resource Errors
+ OUT_OF_MEMORY = "out_of_memory"
+ CPU_LIMIT = "cpu_limit"
+ QUOTA_EXCEEDED = "quota_exceeded"
+
+ # Unknown
+ UNKNOWN = "unknown"
+
+
+# Error pattern definitions for each category
+ERROR_PATTERNS: dict[ErrorCategory, list[tuple[str, str]]] = {
+ # File & Directory
+ ErrorCategory.FILE_NOT_FOUND: [
+ (r"No such file or directory", "file"),
+ (r"cannot open '([^']+)'.*No such file", "file"),
+ (r"stat\(\): cannot stat '([^']+)'", "file"),
+ (r"File not found:? ([^\n]+)", "file"),
+ ],
+ ErrorCategory.FILE_EXISTS: [
+ (r"File exists", "file"),
+ (r"cannot create.*File exists", "file"),
+ ],
+ ErrorCategory.DIRECTORY_NOT_FOUND: [
+ (r"No such file or directory:.*/$", "directory"),
+ (r"cannot access '([^']+/)': No such file or directory", "directory"),
+ (r"mkdir: cannot create directory '([^']+)'.*No such file", "parent_directory"),
+ ],
+ # Local file/directory permission denied
+ ErrorCategory.PERMISSION_DENIED_LOCAL: [
+ (r"Permission denied.*(/[^\s:]+)", "path"),
+ (r"cannot open '([^']+)'.*Permission denied", "path"),
+ (r"cannot create.*'([^']+)'.*Permission denied", "path"),
+ (r"cannot access '([^']+)'.*Permission denied", "path"),
+ (r"Operation not permitted.*(/[^\s:]+)", "path"),
+ (r"EACCES.*(/[^\s]+)", "path"),
+ ],
+ # URL/Link permission denied (registries, APIs, repos)
+ ErrorCategory.PERMISSION_DENIED_URL: [
+ (r"403 Forbidden.*https?://([^\s/]+)", "host"),
+ (r"401 Unauthorized.*https?://([^\s/]+)", "host"),
+ (r"Access denied.*https?://([^\s/]+)", "host"),
+ ],
+ ErrorCategory.ACCESS_DENIED_REGISTRY: [
+ (r"denied: requested access to the resource is denied", "registry"),
+        (r"pull access denied", "registry"),  # generic match; the capturing variant below extracts the image
+ (r"pull access denied for ([^\s,]+)", "image"),
+ (r"unauthorized: authentication required.*registry", "registry"),
+ (r"Error response from daemon.*denied", "registry"),
+ (r"UNAUTHORIZED.*registry", "registry"),
+ (r"unauthorized to access repository", "registry"),
+ ],
+ ErrorCategory.ACCESS_DENIED_REPO: [
+ (r"Repository not found.*https?://([^\s]+)", "repo"),
+ (r"fatal: could not read from remote repository", "repo"),
+ (r"Permission denied \(publickey\)", "repo"),
+ (r"Host key verification failed", "host"),
+ (r"remote: Permission to ([^\s]+) denied", "repo"),
+ ],
+ ErrorCategory.ACCESS_DENIED_API: [
+ (r"API.*access denied", "api"),
+ (r"AccessDenied.*Access denied", "api"), # AWS-style error
+ (r"403.*API", "api"),
+ (r"unauthorized.*api", "api"),
+ (r"An error occurred \(AccessDenied\)", "api"), # AWS CLI error
+ (r"not authorized to perform", "api"),
+ ],
+ # Legacy pattern for generic permission denied
+ ErrorCategory.PERMISSION_DENIED: [
+ (r"Permission denied", "resource"),
+ (r"Operation not permitted", "operation"),
+ (r"Access denied", "resource"),
+ (r"EACCES", "resource"),
+ ],
+ ErrorCategory.READ_ONLY_FILESYSTEM: [
+ (r"Read-only file system", "filesystem"),
+ ],
+ ErrorCategory.DISK_FULL: [
+ (r"No space left on device", "device"),
+ (r"Disk quota exceeded", "quota"),
+ ],
+ # Authentication & Login
+ ErrorCategory.LOGIN_REQUIRED: [
+ (r"Login required", "service"),
+ (r"Authentication required", "service"),
+ (r"401 Unauthorized", "service"),
+ (r"not logged in", "service"),
+ (r"must be logged in", "service"),
+ (r"Non-null Username Required", "service"),
+ ],
+ ErrorCategory.AUTH_FAILED: [
+ (r"Authentication failed", "service"),
+ (r"invalid username or password", "credentials"),
+ (r"403 Forbidden", "access"),
+ (r"access denied", "resource"),
+ ],
+ ErrorCategory.TOKEN_EXPIRED: [
+ (r"token.*expired", "token"),
+ (r"session expired", "session"),
+ (r"credential.*expired", "credential"),
+ ],
+ ErrorCategory.INVALID_CREDENTIALS: [
+ (r"invalid.*credentials?", "type"),
+ (r"bad credentials", "type"),
+ (r"incorrect password", "auth"),
+ ],
+ # Package & Resource
+ ErrorCategory.PACKAGE_NOT_FOUND: [
+ (r"Unable to locate package ([^\s]+)", "package"),
+ (r"Package ([^\s]+) is not available", "package"),
+ (r"No package ([^\s]+) available", "package"),
+ (r"E: Package '([^']+)' has no installation candidate", "package"),
+ (r"error: package '([^']+)' not found", "package"),
+ (r"ModuleNotFoundError: No module named '([^']+)'", "module"),
+ ],
+ ErrorCategory.IMAGE_NOT_FOUND: [
+ (r"manifest.*not found", "image"),
+ (r"image.*not found", "image"),
+ (r"repository does not exist", "repository"),
+ (r"Error response from daemon: manifest for ([^\s]+) not found", "image"),
+ # Note: "pull access denied" moved to ACCESS_DENIED_REGISTRY
+ ],
+ ErrorCategory.RESOURCE_NOT_FOUND: [
+ (r"resource.*not found", "resource"),
+ (r"404 Not Found", "url"),
+ (r"could not find ([^\n]+)", "resource"),
+ (r"No matching distribution found for ([^\s]+)", "package"),
+ (r"Could not find a version that satisfies the requirement ([^\s]+)", "package"),
+ ],
+ ErrorCategory.DEPENDENCY_MISSING: [
+ (r"Depends:.*but it is not going to be installed", "dependency"),
+ (r"unmet dependencies", "packages"),
+ (r"dependency.*not satisfied", "dependency"),
+ (r"peer dep missing", "dependency"),
+ ],
+ ErrorCategory.VERSION_CONFLICT: [
+ (r"version conflict", "packages"),
+ (r"incompatible version", "version"),
+ (r"requires.*but ([^\s]+) is installed", "conflict"),
+ ],
+ # Command Errors
+ ErrorCategory.COMMAND_NOT_FOUND: [
+ (r"command not found", "command"),
+ (r"not found", "binary"),
+ (r"is not recognized as", "command"),
+ (r"Unknown command", "subcommand"),
+ ],
+ ErrorCategory.SYNTAX_ERROR: [
+ (r"syntax error", "location"),
+ (r"parse error", "location"),
+ (r"unexpected token", "token"),
+ (r"near unexpected", "token"),
+ ],
+ ErrorCategory.INVALID_ARGUMENT: [
+ (r"invalid.*argument", "argument"),
+ (r"unrecognized option", "option"),
+ (r"unknown option", "option"),
+ (r"illegal option", "option"),
+ (r"bad argument", "argument"),
+ ],
+ ErrorCategory.MISSING_ARGUMENT: [
+ (r"missing.*argument", "argument"),
+ (r"requires.*argument", "argument"),
+ (r"missing operand", "operand"),
+ (r"option.*requires an argument", "option"),
+ ],
+ ErrorCategory.DEPRECATED_SYNTAX: [
+ (r"deprecated", "feature"),
+ (r"obsolete", "feature"),
+ (r"use.*instead", "replacement"),
+ ],
+ # Service & Process
+ ErrorCategory.SERVICE_NOT_RUNNING: [
+ (r"is not running", "service"),
+ (r"service.*stopped", "service"),
+ (r"inactive \(dead\)", "service"),
+ (r"Unit.*not found", "unit"),
+ (r"Failed to connect to", "service"),
+ (r"could not be found", "service"),
+ (r"Unit ([^\s]+)\.service could not be found", "service"),
+ ],
+ ErrorCategory.SERVICE_FAILED: [
+ (r"failed to start", "service"),
+ (r"service.*failed", "service"),
+ (r"Job.*failed", "job"),
+ (r"Main process exited", "process"),
+ ],
+ ErrorCategory.PORT_IN_USE: [
+ (r"Address already in use", "port"),
+ (r"port.*already.*use", "port"),
+ (r"bind\(\): Address already in use", "port"),
+ (r"EADDRINUSE", "port"),
+ ],
+ ErrorCategory.PROCESS_KILLED: [
+ (r"Killed", "signal"),
+ (r"SIGKILL", "signal"),
+ (r"Out of memory", "oom"),
+ ],
+ ErrorCategory.TIMEOUT: [
+ (r"timed out", "operation"),
+ (r"timeout", "operation"),
+ (r"deadline exceeded", "operation"),
+ ],
+ # Network
+ ErrorCategory.NETWORK_UNREACHABLE: [
+ (r"Network is unreachable", "network"),
+ (r"No route to host", "host"),
+ (r"Could not resolve host", "host"),
+ ],
+ ErrorCategory.CONNECTION_REFUSED: [
+ (r"Connection refused", "target"),
+ (r"ECONNREFUSED", "target"),
+ (r"couldn't connect to host", "host"),
+ ],
+ ErrorCategory.DNS_FAILED: [
+ (r"Name or service not known", "hostname"),
+ (r"Temporary failure in name resolution", "dns"),
+ (r"DNS lookup failed", "hostname"),
+ ],
+ ErrorCategory.SSL_ERROR: [
+ (r"SSL.*error", "ssl"),
+ (r"certificate.*error", "certificate"),
+ (r"CERT_", "certificate"),
+ ],
+ # Configuration
+ ErrorCategory.CONFIG_SYNTAX_ERROR: [
+ (r"configuration.*syntax.*error", "config"),
+ (r"invalid configuration", "config"),
+ (r"parse error in", "config"),
+ (r"nginx:.*emerg.*", "nginx_config"),
+ (r"Failed to parse", "config"),
+ ],
+ ErrorCategory.CONFIG_INVALID_VALUE: [
+ (r"invalid value", "config"),
+ (r"unknown directive", "directive"),
+ (r"invalid parameter", "parameter"),
+ ],
+ ErrorCategory.CONFIG_MISSING_KEY: [
+ (r"missing.*key", "key"),
+ (r"required.*not set", "key"),
+ (r"undefined variable", "variable"),
+ ],
+ # Resource
+ ErrorCategory.OUT_OF_MEMORY: [
+ (r"Out of memory", "memory"),
+ (r"Cannot allocate memory", "memory"),
+ (r"MemoryError", "memory"),
+ (r"OOMKilled", "oom"),
+ ],
+ ErrorCategory.QUOTA_EXCEEDED: [
+ (r"quota exceeded", "quota"),
+ (r"limit reached", "limit"),
+ (r"rate limit", "rate"),
+ ],
+}
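+# Example (illustrative): the stderr line "E: Unable to locate package htop" matches
+# the PACKAGE_NOT_FOUND pattern above and yields extracted_info == {"package": "htop"}
+# during categorization.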
+
+
+# =============================================================================
+# DATA STRUCTURES
+# =============================================================================
+
+
+@dataclass
+class DiagnosisResult:
+ """Result of error diagnosis (Step 1)."""
+
+ category: ErrorCategory
+ error_message: str
+ extracted_info: dict[str, str] = field(default_factory=dict)
+ confidence: float = 1.0
+ raw_stderr: str = ""
+
+
+@dataclass
+class FixCommand:
+ """A single fix command with variable placeholders."""
+
+ command_template: str # Command with {variable} placeholders
+ purpose: str
+ variables: list[str] = field(default_factory=list) # Variable names found
+ requires_sudo: bool = False
+
+ def __post_init__(self):
+ # Extract variables from template
+ self.variables = re.findall(r"\{(\w+)\}", self.command_template)
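+        # Illustrative example (sketch): FixCommand("sudo chown {user} {file_path}", "fix ownership")
+        # leaves self.variables == ["user", "file_path"].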
+
+
+@dataclass
+class FixPlan:
+ """Plan for fixing an error (Step 2 output)."""
+
+ category: ErrorCategory
+ commands: list[FixCommand]
+ reasoning: str
+ all_variables: set[str] = field(default_factory=set)
+
+ def __post_init__(self):
+ # Collect all unique variables
+ for cmd in self.commands:
+ self.all_variables.update(cmd.variables)
+
+
+@dataclass
+class VariableResolution:
+ """Resolution for a variable (Step 3)."""
+
+ name: str
+ value: str
+ source: str # "query", "llm", "system_info", "default"
+
+
+@dataclass
+class ExecutionResult:
+ """Result of executing a fix command (Step 4)."""
+
+ command: str
+ success: bool
+ stdout: str
+ stderr: str
+ execution_time: float
+
+
+@dataclass
+class ErrorStackEntry:
+ """Entry in the error stack for tracking."""
+
+ original_command: str
+ intent: str
+ error: str
+ category: ErrorCategory
+ fix_plan: FixPlan | None = None
+ fix_attempts: int = 0
+ timestamp: float = field(default_factory=time.time)
+
+
+# =============================================================================
+# DIAGNOSIS ENGINE
+# =============================================================================
+
+
+class DiagnosisEngine:
+ """
+ Main diagnosis engine implementing the structured error resolution flow.
+
+ Flow:
+ 1. Categorize error type
+ 2. LLM generates fix commands with variables
+ 3. Resolve variables
+ 4. Execute fix commands
+ 5. If error, push to stack and repeat
+ 6. Test original command
+ """
+
+ MAX_FIX_ATTEMPTS = 5
+ MAX_STACK_DEPTH = 10
+
+ # Known URL/remote service patterns in commands
+ URL_COMMAND_PATTERNS = [
+ r"docker\s+(pull|push|login)",
+ r"git\s+(clone|push|pull|fetch|remote)",
+ r"npm\s+(publish|login|install.*@)",
+ r"pip\s+install.*--index-url",
+ r"curl\s+",
+ r"wget\s+",
+ r"aws\s+",
+ r"gcloud\s+",
+ r"kubectl\s+",
+ r"helm\s+",
+ r"az\s+", # Azure CLI
+ r"gh\s+", # GitHub CLI
+ ]
+
+ # Known registries and their authentication services
+ KNOWN_SERVICES = {
+ "ghcr.io": "ghcr",
+ "docker.io": "docker",
+ "registry.hub.docker.com": "docker",
+ "github.com": "git_https",
+ "gitlab.com": "git_https",
+ "bitbucket.org": "git_https",
+ "registry.npmjs.org": "npm",
+ "pypi.org": "pypi",
+ "amazonaws.com": "aws",
+ "gcr.io": "gcloud",
+ }
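+    # Example (illustrative): a "pull access denied" error against ghcr.io maps to
+    # service "ghcr" via KNOWN_SERVICES, steering the fallback fix plan toward
+    # "docker login ghcr.io".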
+
+ def __init__(
+ self,
+ api_key: str | None = None,
+ provider: str = "claude",
+ model: str | None = None,
+ debug: bool = False,
+ ):
+ self.api_key = (
+ api_key or os.environ.get("ANTHROPIC_API_KEY") or os.environ.get("OPENAI_API_KEY")
+ )
+ self.provider = provider.lower()
+ self.model = model or self._default_model()
+ self.debug = debug
+
+ # Error stack for tracking command errors
+ self.error_stack: list[ErrorStackEntry] = []
+
+ # Resolution cache to avoid re-resolving same variables
+ self.variable_cache: dict[str, str] = {}
+
+ # Execution history for logging
+ self.execution_history: list[dict[str, Any]] = []
+
+ # Initialize LoginHandler for credential management
+ self._login_handler = None
+ try:
+ from cortex.do_runner.diagnosis import LoginHandler
+
+ self._login_handler = LoginHandler()
+ except ImportError:
+ pass
+
+ self._initialize_client()
+
+ def _default_model(self) -> str:
+ if self.provider == "openai":
+ return "gpt-4o"
+ elif self.provider == "claude":
+ return "claude-sonnet-4-20250514"
+ return "gpt-4o"
+
+ def _initialize_client(self):
+ """Initialize the LLM client."""
+ if not self.api_key:
+ console.print("[yellow]⚠ No API key found - LLM features disabled[/yellow]")
+ self.client = None
+ return
+
+ if self.provider == "openai":
+ try:
+ from openai import OpenAI
+
+ self.client = OpenAI(api_key=self.api_key)
+ except ImportError:
+ self.client = None
+ elif self.provider == "claude":
+ try:
+ from anthropic import Anthropic
+
+ self.client = Anthropic(api_key=self.api_key)
+ except ImportError:
+ self.client = None
+ else:
+ self.client = None
+
+ # =========================================================================
+ # PERMISSION TYPE DETECTION
+ # =========================================================================
+
+ def _is_url_based_permission_error(
+ self, command: str, stderr: str
+ ) -> tuple[bool, str | None, str | None]:
+ """
+ Determine if permission denied is for a local file/dir or a URL/link.
+
+ Returns:
+ Tuple of (is_url_based, service_name, url_or_host)
+ """
+ # Check if command involves known remote operations
+ is_remote_command = any(
+ re.search(pattern, command, re.IGNORECASE) for pattern in self.URL_COMMAND_PATTERNS
+ )
+
+ # Check stderr for URL patterns
+ url_patterns = [
+ r"https?://([^\s/]+)",
+ r"([a-zA-Z0-9.-]+\.(io|com|org|net))",
+ r"registry[.\s]",
+ r"(ghcr\.io|docker\.io|gcr\.io|quay\.io)",
+ ]
+
+ found_host = None
+ for pattern in url_patterns:
+ match = re.search(pattern, stderr, re.IGNORECASE)
+ if match:
+ found_host = match.group(1) if match.groups() else match.group(0)
+ break
+
+ # Also check command for URLs/hosts
+ if not found_host:
+ for pattern in url_patterns:
+ match = re.search(pattern, command, re.IGNORECASE)
+ if match:
+ found_host = match.group(1) if match.groups() else match.group(0)
+ break
+
+ # Determine service
+ service = None
+ if found_host:
+ for host_pattern, svc in self.KNOWN_SERVICES.items():
+ if host_pattern in found_host.lower():
+ service = svc
+ break
+
+ # Detect service from command if not found from host
+ if not service:
+ if "git " in command.lower():
+ service = "git_https"
+ if not found_host:
+ found_host = "git remote"
+ elif "aws " in command.lower():
+ service = "aws"
+ if not found_host:
+ found_host = "aws"
+ elif "docker " in command.lower():
+ service = "docker"
+ elif "npm " in command.lower():
+ service = "npm"
+
+ # Git-specific patterns
+ git_remote_patterns = [
+ "remote:" in stderr.lower(),
+ "permission to" in stderr.lower() and ".git" in stderr.lower(),
+ "denied to" in stderr.lower(),
+ "could not read from remote repository" in stderr.lower(),
+ "fatal: authentication failed" in stderr.lower(),
+ ]
+
+ # AWS-specific patterns
+ aws_patterns = [
+ "accessdenied" in stderr.lower().replace(" ", ""),
+ "an error occurred" in stderr.lower() and "denied" in stderr.lower(),
+ "not authorized" in stderr.lower(),
+ ]
+
+ # If it's a remote command with a host or URL-based error patterns
+ is_url_based = (
+ bool(is_remote_command and found_host)
+ or any(
+ [
+ "401" in stderr,
+ "403" in stderr,
+ "unauthorized" in stderr.lower(),
+ "authentication required" in stderr.lower(),
+ "login required" in stderr.lower(),
+ "access denied" in stderr.lower() and found_host,
+ "pull access denied" in stderr.lower(),
+ "denied: requested access" in stderr.lower(),
+ ]
+ )
+ or any(git_remote_patterns)
+ or any(aws_patterns)
+ )
+
+ if is_url_based:
+ console.print("[cyan] 🌐 Detected URL-based permission error[/cyan]")
+ console.print(f"[dim] Host: {found_host or 'unknown'}[/dim]")
+ console.print(f"[dim] Service: {service or 'unknown'}[/dim]")
+
+ return is_url_based, service, found_host
+
+ def _is_local_file_permission_error(self, command: str, stderr: str) -> tuple[bool, str | None]:
+ """
+ Check if permission error is for a local file/directory.
+
+ Returns:
+ Tuple of (is_local_file, file_path)
+ """
+ # Check for local path patterns in stderr
+ local_patterns = [
+ r"Permission denied.*(/[^\s:]+)",
+ r"cannot open '([^']+)'.*Permission denied",
+ r"cannot create.*'([^']+)'.*Permission denied",
+ r"cannot access '([^']+)'.*Permission denied",
+ r"cannot read '([^']+)'",
+ r"failed to open '([^']+)'",
+ r"open\(\) \"([^\"]+)\" failed",
+ ]
+
+ for pattern in local_patterns:
+ match = re.search(pattern, stderr, re.IGNORECASE)
+ if match:
+ path = match.group(1)
+ # Verify it's a local path (starts with / or ./)
+ if path.startswith("/") or path.startswith("./"):
+ console.print("[cyan] 📁 Detected local file permission error[/cyan]")
+ console.print(f"[dim] Path: {path}[/dim]")
+ return True, path
+
+ # Check command for local paths being accessed
+ path_match = re.search(r"(/[^\s]+)", command)
+ if path_match and "permission denied" in stderr.lower():
+ path = path_match.group(1)
+ console.print("[cyan] 📁 Detected local file permission error (from command)[/cyan]")
+ console.print(f"[dim] Path: {path}[/dim]")
+ return True, path
+
+ return False, None
+
+ def _resolve_permission_error_type(
+ self,
+ command: str,
+ stderr: str,
+ current_category: ErrorCategory,
+ ) -> tuple[ErrorCategory, dict[str, str]]:
+ """
+ Resolve generic PERMISSION_DENIED to specific LOCAL or URL category.
+
+ Returns:
+ Tuple of (refined_category, additional_info)
+ """
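+        # Illustrative refinements (sketch):
+        #   "docker pull ghcr.io/org/app" + "pull access denied"  -> ACCESS_DENIED_REGISTRY
+        #   "cp app.conf /etc/nginx/"     + "Permission denied"   -> PERMISSION_DENIED_LOCAL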
+ additional_info = {}
+
+ # Only process if it's a generic permission error
+ permission_categories = [
+ ErrorCategory.PERMISSION_DENIED,
+ ErrorCategory.PERMISSION_DENIED_LOCAL,
+ ErrorCategory.PERMISSION_DENIED_URL,
+ ErrorCategory.ACCESS_DENIED_REGISTRY,
+ ErrorCategory.ACCESS_DENIED_REPO,
+ ErrorCategory.ACCESS_DENIED_API,
+ ErrorCategory.AUTH_FAILED,
+ ]
+
+ if current_category not in permission_categories:
+ return current_category, additional_info
+
+ # Check URL-based first (more specific)
+ is_url, service, host = self._is_url_based_permission_error(command, stderr)
+ if is_url:
+ additional_info["service"] = service or "unknown"
+ additional_info["host"] = host or "unknown"
+
+ # Determine more specific category
+ if "registry" in stderr.lower() or service in ["docker", "ghcr", "gcloud"]:
+ return ErrorCategory.ACCESS_DENIED_REGISTRY, additional_info
+ elif "git" in command.lower() or service in ["git_https"]:
+ return ErrorCategory.ACCESS_DENIED_REPO, additional_info
+ elif "api" in stderr.lower() or service in ["aws", "gcloud", "azure"]:
+ # AWS, GCloud, Azure are API-based services
+ return ErrorCategory.ACCESS_DENIED_API, additional_info
+ elif (
+ "aws " in command.lower()
+ or "az " in command.lower()
+ or "gcloud " in command.lower()
+ ):
+ # Cloud CLI commands are API-based
+ return ErrorCategory.ACCESS_DENIED_API, additional_info
+ else:
+ return ErrorCategory.PERMISSION_DENIED_URL, additional_info
+
+ # Check local file
+ is_local, path = self._is_local_file_permission_error(command, stderr)
+ if is_local:
+ additional_info["path"] = path or ""
+ return ErrorCategory.PERMISSION_DENIED_LOCAL, additional_info
+
+ # Default to local for generic permission denied
+ return ErrorCategory.PERMISSION_DENIED_LOCAL, additional_info
+
+ # =========================================================================
+ # STEP 1: Categorize Error
+ # =========================================================================
+
+ def categorize_error(self, command: str, stderr: str, stdout: str = "") -> DiagnosisResult:
+ """
+ Step 1: Categorize the error type.
+
+ Examines stderr (and stdout) to determine the broad category of error.
+ For permission errors, distinguishes between local file/dir and URL/link.
+ """
+ self._log_step(1, "Categorizing error type")
+
+ combined_output = f"{stderr}\n{stdout}".lower()
+
+ best_match: tuple[ErrorCategory, dict[str, str], float] | None = None
+
+ for category, patterns in ERROR_PATTERNS.items():
+ for pattern, info_key in patterns:
+ match = re.search(pattern, stderr, re.IGNORECASE)
+ if match:
+ extracted_info = {info_key: match.group(1) if match.groups() else ""}
+
+ # Calculate confidence based on pattern specificity
+ confidence = len(pattern) / 50.0 # Longer patterns = more specific
+ confidence = min(confidence, 1.0)
+
+ if best_match is None or confidence > best_match[2]:
+ best_match = (category, extracted_info, confidence)
+
+ if best_match:
+ category, extracted_info, confidence = best_match
+
+ # Refine permission errors to LOCAL or URL
+ refined_category, additional_info = self._resolve_permission_error_type(
+ command, stderr, category
+ )
+ extracted_info.update(additional_info)
+
+ result = DiagnosisResult(
+ category=refined_category,
+ error_message=stderr[:500],
+ extracted_info=extracted_info,
+ confidence=confidence,
+ raw_stderr=stderr,
+ )
+ else:
+ result = DiagnosisResult(
+ category=ErrorCategory.UNKNOWN,
+ error_message=stderr[:500],
+ confidence=0.0,
+ raw_stderr=stderr,
+ )
+
+ self._print_diagnosis(result, command)
+ return result
+
+ # =========================================================================
+ # STEP 2: Generate Fix Plan via LLM
+ # =========================================================================
+
+ def generate_fix_plan(self, command: str, intent: str, diagnosis: DiagnosisResult) -> FixPlan:
+ """
+ Step 2: LLM generates fix commands with variable placeholders.
+
+ Context given: command, intent, error, category
+ Output: List of commands with {variable} placeholders
+ """
+ self._log_step(2, "Generating fix plan via LLM")
+
+ if not self.client:
+ # Fallback to rule-based fix generation
+ return self._generate_fallback_fix_plan(command, intent, diagnosis)
+
+ system_prompt = self._get_fix_generation_prompt()
+
+ user_prompt = f"""Generate fix commands for this error:
+
+**Command:** `{command}`
+**Intent:** {intent}
+**Error Category:** {diagnosis.category.value}
+**Error Message:** {diagnosis.error_message}
+**Extracted Info:** {json.dumps(diagnosis.extracted_info)}
+
+Provide fix commands with variable placeholders in {{curly_braces}} for any values that need to be determined at runtime.
+
+Respond with JSON:
+{{
+ "reasoning": "explanation of the fix approach",
+ "commands": [
+ {{
+ "command": "command with {{variable}} placeholders",
+ "purpose": "what this command does",
+ "requires_sudo": true/false
+ }}
+ ]
+}}"""
+
+ try:
+ response = self._call_llm(system_prompt, user_prompt)
+
+ # Parse response
+ json_match = re.search(r"\{[\s\S]*\}", response)
+ if json_match:
+ data = json.loads(json_match.group())
+
+ commands = []
+ for cmd_data in data.get("commands", []):
+ commands.append(
+ FixCommand(
+ command_template=cmd_data.get("command", ""),
+ purpose=cmd_data.get("purpose", ""),
+ requires_sudo=cmd_data.get("requires_sudo", False),
+ )
+ )
+
+ plan = FixPlan(
+ category=diagnosis.category,
+ commands=commands,
+ reasoning=data.get("reasoning", ""),
+ )
+
+ self._print_fix_plan(plan)
+ return plan
+
+ except Exception as e:
+ console.print(f"[yellow]⚠ LLM fix generation failed: {e}[/yellow]")
+
+ # Fallback
+ return self._generate_fallback_fix_plan(command, intent, diagnosis)
+
+ def _get_fix_generation_prompt(self) -> str:
+ return """You are a Linux system error diagnosis expert. Generate shell commands to fix errors.
+
+RULES:
+1. Use {variable} placeholders for values that need to be determined at runtime
+2. Common variables: {file_path}, {package_name}, {service_name}, {user}, {port}, {config_file}
+3. Commands should be atomic and specific
+4. Include sudo only when necessary
+5. Order commands logically (prerequisites first)
+
+VARIABLE NAMING:
+- {file_path} - path to a file
+- {dir_path} - path to a directory
+- {package} - package name to install
+- {service} - systemd service name
+- {user} - username
+- {port} - port number
+- {config_file} - configuration file path
+- {config_line} - line number in config
+- {image} - Docker/container image name
+- {registry} - Container registry URL
+- {username} - Login username
+- {token} - Auth token or password
+
+EXAMPLE OUTPUT:
+{
+ "reasoning": "Permission denied on /etc/nginx - need sudo to write, also backup first",
+ "commands": [
+ {
+ "command": "sudo cp {config_file} {config_file}.backup",
+ "purpose": "Backup the configuration file before modifying",
+ "requires_sudo": true
+ },
+ {
+ "command": "sudo sed -i 's/{old_value}/{new_value}/' {config_file}",
+ "purpose": "Fix the configuration value",
+ "requires_sudo": true
+ }
+ ]
+}"""
+
+ def _generate_fallback_fix_plan(
+ self, command: str, intent: str, diagnosis: DiagnosisResult
+ ) -> FixPlan:
+ """Generate a fix plan using rules when LLM is unavailable."""
+ commands: list[FixCommand] = []
+ reasoning = f"Rule-based fix for {diagnosis.category.value}"
+
+ category = diagnosis.category
+ info = diagnosis.extracted_info
+
+ # LOCAL permission denied - use sudo
+ if category == ErrorCategory.PERMISSION_DENIED_LOCAL:
+ path = info.get("path", "")
+ reasoning = "Local file/directory permission denied - using elevated privileges"
+ commands.append(
+ FixCommand(
+ command_template=f"sudo {command}",
+ purpose=f"Retry with elevated privileges for local path{' ' + path if path else ''}",
+ requires_sudo=True,
+ )
+ )
+
+ # URL-based permission - handle login
+ elif category in [
+ ErrorCategory.PERMISSION_DENIED_URL,
+ ErrorCategory.ACCESS_DENIED_REGISTRY,
+ ErrorCategory.ACCESS_DENIED_REPO,
+ ErrorCategory.ACCESS_DENIED_API,
+ ]:
+ service = info.get("service", "unknown")
+ host = info.get("host", "unknown")
+ reasoning = f"URL/remote access denied - requires authentication to {service or host}"
+
+ # Generate login command based on service
+ if service == "docker" or service == "ghcr" or "registry" in category.value:
+ registry = host if host != "unknown" else "{registry}"
+ commands.extend(
+ [
+ FixCommand(
+ command_template=f"docker login {registry}",
+ purpose=f"Login to container registry {registry}",
+ ),
+ FixCommand(
+ command_template=command,
+ purpose="Retry original command after login",
+ ),
+ ]
+ )
+ elif service == "git_https" or "repo" in category.value:
+ commands.extend(
+ [
+ FixCommand(
+ command_template="git config --global credential.helper store",
+ purpose="Enable credential storage for git",
+ ),
+ FixCommand(
+ command_template=command,
+ purpose="Retry original command (will prompt for credentials)",
+ ),
+ ]
+ )
+ elif service == "npm":
+ commands.extend(
+ [
+ FixCommand(
+ command_template="npm login",
+ purpose="Login to npm registry",
+ ),
+ FixCommand(
+ command_template=command,
+ purpose="Retry original command after login",
+ ),
+ ]
+ )
+ elif service == "aws":
+ commands.extend(
+ [
+ FixCommand(
+ command_template="aws configure",
+ purpose="Configure AWS credentials",
+ ),
+ FixCommand(
+ command_template=command,
+ purpose="Retry original command after configuration",
+ ),
+ ]
+ )
+ else:
+ # Generic login placeholder
+ commands.append(
+ FixCommand(
+ command_template="{login_command}",
+ purpose=f"Login to {service or host}",
+ )
+ )
+ commands.append(
+ FixCommand(
+ command_template=command,
+ purpose="Retry original command after login",
+ )
+ )
+
+ # Legacy generic permission denied - try to determine type
+ elif category == ErrorCategory.PERMISSION_DENIED:
+ commands.append(
+ FixCommand(
+ command_template=f"sudo {command}",
+ purpose="Retry with elevated privileges",
+ requires_sudo=True,
+ )
+ )
+
+ elif category == ErrorCategory.FILE_NOT_FOUND:
+ file_path = info.get("file", "{file_path}")
+ commands.append(
+ FixCommand(
+ command_template=f"touch {file_path}",
+ purpose="Create missing file",
+ )
+ )
+
+ elif category == ErrorCategory.DIRECTORY_NOT_FOUND:
+ dir_path = info.get("directory", info.get("parent_directory", "{dir_path}"))
+ commands.append(
+ FixCommand(
+ command_template=f"mkdir -p {dir_path}",
+ purpose="Create missing directory",
+ )
+ )
+
+ elif category == ErrorCategory.COMMAND_NOT_FOUND:
+ # Try to guess package from command
+ cmd_name = command.split()[0] if command else "{package}"
+ commands.append(
+ FixCommand(
+ command_template="sudo apt install -y {package}",
+                    purpose=f"Install package providing '{cmd_name}'",
+ requires_sudo=True,
+ )
+ )
+
+ elif category == ErrorCategory.SERVICE_NOT_RUNNING:
+ service = info.get("service", "{service}")
+ commands.append(
+ FixCommand(
+ command_template=f"sudo systemctl start {service}",
+ purpose="Start the service",
+ requires_sudo=True,
+ )
+ )
+
+ elif category == ErrorCategory.LOGIN_REQUIRED:
+ service = info.get("service", "{service}")
+ commands.append(
+ FixCommand(
+ command_template="{login_command}",
+ purpose=f"Login to {service}",
+ )
+ )
+
+ elif category == ErrorCategory.PACKAGE_NOT_FOUND:
+ package = info.get("package", "{package}")
+ commands.extend(
+ [
+ FixCommand(
+ command_template="sudo apt update",
+ purpose="Update package lists",
+ requires_sudo=True,
+ ),
+ FixCommand(
+ command_template=f"sudo apt install -y {package}",
+ purpose="Install the package",
+ requires_sudo=True,
+ ),
+ ]
+ )
+
+ elif category == ErrorCategory.PORT_IN_USE:
+ port = info.get("port", "{port}")
+ commands.extend(
+ [
+ FixCommand(
+ command_template=f"sudo lsof -i :{port}",
+ purpose="Find process using the port",
+ requires_sudo=True,
+ ),
+ FixCommand(
+ command_template="sudo kill -9 {pid}",
+ purpose="Kill the process using the port",
+ requires_sudo=True,
+ ),
+ ]
+ )
+
+ elif category == ErrorCategory.CONFIG_SYNTAX_ERROR:
+ config_file = info.get("config", info.get("nginx_config", "{config_file}"))
+ commands.extend(
+ [
+ FixCommand(
+ command_template=f"cat -n {config_file}",
+ purpose="Show config file with line numbers",
+ ),
+ FixCommand(
+ command_template=f"sudo nano {config_file}",
+ purpose="Edit config file to fix syntax",
+ requires_sudo=True,
+ ),
+ ]
+ )
+
+ else:
+ # Generic retry with sudo
+ commands.append(
+ FixCommand(
+ command_template=f"sudo {command}",
+ purpose="Retry with elevated privileges",
+ requires_sudo=True,
+ )
+ )
+
+ plan = FixPlan(
+ category=diagnosis.category,
+ commands=commands,
+ reasoning=reasoning,
+ )
+
+ self._print_fix_plan(plan)
+ return plan
+
+ # =========================================================================
+ # STEP 3: Resolve Variables
+ # =========================================================================
+
+ def resolve_variables(
+ self,
+ fix_plan: FixPlan,
+ original_query: str,
+ command: str,
+ diagnosis: DiagnosisResult,
+ ) -> dict[str, str]:
+ """
+ Step 3: Resolve variable values using:
+ 1. Extract from original query
+ 2. LLM call with context
+ 3. system_info_command_generator
+ """
+ self._log_step(3, "Resolving variables")
+
+ if not fix_plan.all_variables:
+ console.print("[dim] No variables to resolve[/dim]")
+ return {}
+
+ console.print(f"[cyan] Variables to resolve: {', '.join(fix_plan.all_variables)}[/cyan]")
+
+ resolved: dict[str, str] = {}
+
+ for var_name in fix_plan.all_variables:
+ # Check cache first
+ if var_name in self.variable_cache:
+ resolved[var_name] = self.variable_cache[var_name]
+ console.print(f"[dim] {var_name}: {resolved[var_name]} (cached)[/dim]")
+ continue
+
+ # Try extraction from diagnosis info
+ value = self._try_extract_from_diagnosis(var_name, diagnosis)
+ if value:
+ resolved[var_name] = value
+ console.print(f"[green] ✓ {var_name}: {value} (from error)[/green]")
+ continue
+
+ # Try extraction from query
+ value = self._try_extract_from_query(var_name, original_query)
+ if value:
+ resolved[var_name] = value
+ console.print(f"[green] ✓ {var_name}: {value} (from query)[/green]")
+ continue
+
+ # Try system_info_command_generator
+ value = self._try_system_info(var_name, command, diagnosis)
+ if value:
+ resolved[var_name] = value
+ console.print(f"[green] ✓ {var_name}: {value} (from system)[/green]")
+ continue
+
+ # Fall back to LLM
+ value = self._try_llm_resolution(var_name, original_query, command, diagnosis)
+ if value:
+ resolved[var_name] = value
+ console.print(f"[green] ✓ {var_name}: {value} (from LLM)[/green]")
+ continue
+
+ # Prompt user as last resort
+ console.print(f"[yellow] ⚠ Could not resolve {var_name}[/yellow]")
+ try:
+ from rich.prompt import Prompt
+
+ value = Prompt.ask(f" Enter value for {var_name}")
+ if value:
+ resolved[var_name] = value
+ console.print(f"[green] ✓ {var_name}: {value} (from user)[/green]")
+ except Exception:
+ pass
+
+ # Update cache
+ self.variable_cache.update(resolved)
+
+ return resolved
+
+ def _try_extract_from_diagnosis(self, var_name: str, diagnosis: DiagnosisResult) -> str | None:
+ """Try to extract variable from diagnosis extracted_info."""
+ # Map variable names to diagnosis info keys
+ mappings = {
+ "file_path": ["file", "path"],
+ "dir_path": ["directory", "parent_directory", "dir"],
+ "package": ["package", "module"],
+ "service": ["service", "unit"],
+ "port": ["port"],
+ "config_file": ["config", "nginx_config", "config_file"],
+ "user": ["user"],
+ "image": ["image", "repository"],
+ }
+
+ keys_to_check = mappings.get(var_name, [var_name])
+ for key in keys_to_check:
+ if key in diagnosis.extracted_info and diagnosis.extracted_info[key]:
+ return diagnosis.extracted_info[key]
+
+ return None
+
+ def _try_extract_from_query(self, var_name: str, query: str) -> str | None:
+ """Try to extract variable from the original query."""
+ # Pattern-based extraction from query
+ patterns = {
+ "file_path": [r"file\s+['\"]?([/\w.-]+)['\"]?", r"([/\w]+\.\w+)"],
+ "dir_path": [r"directory\s+['\"]?([/\w.-]+)['\"]?", r"folder\s+['\"]?([/\w.-]+)['\"]?"],
+ "package": [r"install\s+(\w[\w-]*)", r"package\s+(\w[\w-]*)"],
+ "service": [r"service\s+(\w[\w-]*)", r"(\w+)\.service"],
+ "port": [r"port\s+(\d+)", r":(\d{2,5})"],
+ "image": [r"image\s+([^\s]+)", r"docker.*\s+([^\s]+:[^\s]*)"],
+ }
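+        # Example (illustrative): the query "install htop on this machine" matches the
+        # first "package" pattern and yields "htop"; "free up port 8080" matches the
+        # first "port" pattern and yields "8080".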
+
+ if var_name in patterns:
+ for pattern in patterns[var_name]:
+ match = re.search(pattern, query, re.IGNORECASE)
+ if match:
+ return match.group(1)
+
+ return None
+
+ def _try_system_info(
+ self, var_name: str, command: str, diagnosis: DiagnosisResult
+ ) -> str | None:
+ """Use system_info_command_generator to get variable value."""
+ try:
+ from cortex.system_info_generator import SystemInfoGenerator
+
+ # System info commands for different variable types
+ system_queries = {
+ "user": "whoami",
+ "home_dir": "echo $HOME",
+ "current_dir": "pwd",
+ }
+
+ if var_name in system_queries:
+ result = subprocess.run(
+ system_queries[var_name],
+ shell=True,
+ capture_output=True,
+ text=True,
+ timeout=5,
+ )
+ if result.returncode == 0 and result.stdout.strip():
+ return result.stdout.strip()
+
+ # For package commands, try to find the package
+ if var_name == "package":
+ cmd_name = command.split()[0] if command else ""
+ # Common command-to-package mappings for Ubuntu
+ package_map = {
+ "nginx": "nginx",
+ "docker": "docker.io",
+ "python": "python3",
+ "pip": "python3-pip",
+ "node": "nodejs",
+ "npm": "npm",
+ "git": "git",
+ "curl": "curl",
+ "wget": "wget",
+ "htop": "htop",
+ "vim": "vim",
+ "nano": "nano",
+ }
+ if cmd_name in package_map:
+ return package_map[cmd_name]
+
+ # Try apt-file search if available
+ result = subprocess.run(
+ f"apt-file search --regexp 'bin/{cmd_name}$' 2>/dev/null | head -1 | cut -d: -f1",
+ shell=True,
+ capture_output=True,
+ text=True,
+ timeout=10,
+ )
+ if result.returncode == 0 and result.stdout.strip():
+ return result.stdout.strip()
+
+ # For service names, try systemctl
+ if var_name == "service":
+ # Extract service name from command if present
+ service_match = re.search(r"systemctl\s+\w+\s+(\S+)", command)
+ if service_match:
+ return service_match.group(1)
+
+ except Exception as e:
+ if self.debug:
+ console.print(f"[dim] System info failed for {var_name}: {e}[/dim]")
+
+ return None
+
+ def _try_llm_resolution(
+ self,
+ var_name: str,
+ query: str,
+ command: str,
+ diagnosis: DiagnosisResult,
+ ) -> str | None:
+ """Use LLM to resolve variable value."""
+ if not self.client:
+ return None
+
+ prompt = f"""Extract the value for variable '{var_name}' from this context:
+
+Query: {query}
+Command: {command}
+Error Category: {diagnosis.category.value}
+Error: {diagnosis.error_message[:200]}
+
+Respond with ONLY the value, nothing else. If you cannot determine the value, respond with "UNKNOWN"."""
+
+ try:
+ response = self._call_llm("You extract specific values from context.", prompt)
+ value = response.strip().strip("\"'")
+ if value and value.upper() != "UNKNOWN":
+ return value
+ except Exception:
+ pass
+
+ return None
+
+ # =========================================================================
+ # URL AUTHENTICATION HANDLING
+ # =========================================================================
+
+ def handle_url_authentication(
+ self,
+ command: str,
+ diagnosis: DiagnosisResult,
+ ) -> tuple[bool, str]:
+ """
+ Handle URL-based permission errors by prompting for login.
+
+ Uses LoginHandler to:
+ 1. Detect the service/website
+ 2. Prompt for credentials
+ 3. Store credentials for future use
+ 4. Execute login command
+
+ Returns:
+ Tuple of (success, message)
+ """
+ console.print("\n[bold cyan]🔐 URL Authentication Required[/bold cyan]")
+
+ if not self._login_handler:
+ console.print("[yellow]⚠ LoginHandler not available[/yellow]")
+ return False, "LoginHandler not available"
+
+ service = diagnosis.extracted_info.get("service", "unknown")
+ host = diagnosis.extracted_info.get("host", "")
+
+ console.print(f"[dim] Service: {service}[/dim]")
+ console.print(f"[dim] Host: {host}[/dim]")
+
+ try:
+ # Use LoginHandler to manage authentication
+ login_req = self._login_handler.detect_login_requirement(command, diagnosis.raw_stderr)
+
+ if login_req:
+ console.print(f"\n[cyan]📝 Login to {login_req.display_name}[/cyan]")
+
+ # Handle login (will prompt, execute, and optionally save credentials)
+ success, message = self._login_handler.handle_login(command, diagnosis.raw_stderr)
+
+ if success:
+ console.print(f"[green]✓ {message}[/green]")
+ return True, message
+ else:
+ console.print(f"[yellow]⚠ {message}[/yellow]")
+ return False, message
+ else:
+ # No matching login requirement, try generic approach
+ console.print("[yellow] Unknown service, trying generic login...[/yellow]")
+ return self._handle_generic_login(command, diagnosis)
+
+ except Exception as e:
+ console.print(f"[red]✗ Authentication error: {e}[/red]")
+ return False, str(e)
+
+ def _handle_generic_login(
+ self,
+ command: str,
+ diagnosis: DiagnosisResult,
+ ) -> tuple[bool, str]:
+ """Handle login for unknown services with interactive prompts."""
+ from rich.prompt import Confirm, Prompt
+
+ host = diagnosis.extracted_info.get("host", "unknown service")
+
+ console.print(f"\n[cyan]Login required for: {host}[/cyan]")
+
+ try:
+ # Prompt for credentials
+ username = Prompt.ask("Username")
+ if not username:
+ return False, "Username is required"
+
+ password = Prompt.ask("Password", password=True)
+
+ # Determine login command based on command context
+ login_cmd = None
+
+ if "docker" in command.lower():
+ registry = diagnosis.extracted_info.get("host", "")
+ login_cmd = f"docker login {registry}" if registry else "docker login"
+ elif "git" in command.lower():
+ # Store git credentials
+ subprocess.run("git config --global credential.helper store", shell=True)
+ login_cmd = None # Git will prompt automatically
+ elif "npm" in command.lower():
+ login_cmd = "npm login"
+ elif "pip" in command.lower() or "pypi" in host.lower():
+ login_cmd = f"pip config set global.index-url https://{username}:{{password}}@pypi.org/simple/"
+
+ if login_cmd:
+ console.print(f"[dim] Running: {login_cmd}[/dim]")
+
+ # Execute login with password via stdin if needed
+ if "{password}" in login_cmd:
+ login_cmd = login_cmd.replace("{password}", password)
+ result = subprocess.run(login_cmd, shell=True, capture_output=True, text=True)
+ else:
+ # Interactive login
+ result = subprocess.run(
+ login_cmd,
+ shell=True,
+ input=f"{username}\n{password}\n",
+ capture_output=True,
+ text=True,
+ )
+
+ if result.returncode == 0:
+ # Offer to save credentials
+ if self._login_handler and Confirm.ask(
+ "Save credentials for future use?", default=True
+ ):
+ self._login_handler._save_credentials(
+ host,
+ {
+ "username": username,
+ "password": password,
+ },
+ )
+ console.print("[green]✓ Credentials saved[/green]")
+
+ return True, f"Logged in to {host}"
+ else:
+ return False, f"Login failed: {result.stderr[:200]}"
+
+ return False, "Could not determine login command"
+
+ except KeyboardInterrupt:
+ return False, "Login cancelled"
+ except Exception as e:
+ return False, str(e)
+
+ # =========================================================================
+ # STEP 4: Execute Fix Commands
+ # =========================================================================
+
+ def execute_fix_commands(
+ self, fix_plan: FixPlan, resolved_variables: dict[str, str]
+ ) -> list[ExecutionResult]:
+ """
+ Step 4: Execute fix commands with resolved variables.
+ """
+ self._log_step(4, "Executing fix commands")
+
+ results: list[ExecutionResult] = []
+
+ for i, fix_cmd in enumerate(fix_plan.commands, 1):
+ # Substitute variables
+ command = fix_cmd.command_template
+ for var_name, value in resolved_variables.items():
+ command = command.replace(f"{{{var_name}}}", value)
+
+ # Check for unresolved variables
+ unresolved = re.findall(r"\{(\w+)\}", command)
+ if unresolved:
+ console.print(
+ f"[yellow] ⚠ Skipping command with unresolved variables: {unresolved}[/yellow]"
+ )
+ results.append(
+ ExecutionResult(
+ command=command,
+ success=False,
+ stdout="",
+ stderr=f"Unresolved variables: {unresolved}",
+ execution_time=0,
+ )
+ )
+ continue
+
+ console.print(f"\n[cyan] [{i}/{len(fix_plan.commands)}] {command}[/cyan]")
+ console.print(f"[dim] └─ {fix_cmd.purpose}[/dim]")
+
+ # Execute
+ start_time = time.time()
+ try:
+ result = subprocess.run(
+ command,
+ shell=True,
+ capture_output=True,
+ text=True,
+ timeout=120,
+ )
+ execution_time = time.time() - start_time
+
+ exec_result = ExecutionResult(
+ command=command,
+ success=result.returncode == 0,
+ stdout=result.stdout.strip(),
+ stderr=result.stderr.strip(),
+ execution_time=execution_time,
+ )
+
+ if exec_result.success:
+ console.print(f"[green] ✓ Success ({execution_time:.2f}s)[/green]")
+ if exec_result.stdout and self.debug:
+ console.print(f"[dim] Output: {exec_result.stdout[:200]}[/dim]")
+ else:
+ console.print(f"[red] ✗ Failed: {exec_result.stderr[:200]}[/red]")
+
+ results.append(exec_result)
+
+ # Log to history
+ self.execution_history.append(
+ {
+ "command": command,
+ "success": exec_result.success,
+ "stderr": exec_result.stderr[:500],
+ "timestamp": time.time(),
+ }
+ )
+
+ except subprocess.TimeoutExpired:
+ console.print("[red] ✗ Timeout after 120s[/red]")
+ results.append(
+ ExecutionResult(
+ command=command,
+ success=False,
+ stdout="",
+ stderr="Command timed out",
+ execution_time=120,
+ )
+ )
+ except Exception as e:
+ console.print(f"[red] ✗ Error: {e}[/red]")
+ results.append(
+ ExecutionResult(
+ command=command,
+ success=False,
+ stdout="",
+ stderr=str(e),
+ execution_time=time.time() - start_time,
+ )
+ )
+
+ return results
+
+ # =========================================================================
+ # STEP 5 & 6: Error Stack Management and Retry Logic
+ # =========================================================================
+
+ def push_error(self, entry: ErrorStackEntry) -> None:
+ """Push an error onto the stack."""
+ if len(self.error_stack) >= self.MAX_STACK_DEPTH:
+ console.print(f"[red]⚠ Error stack depth limit ({self.MAX_STACK_DEPTH}) reached[/red]")
+ return
+
+ self.error_stack.append(entry)
+ self._print_error_stack()
+
+ def pop_error(self) -> ErrorStackEntry | None:
+ """Pop an error from the stack."""
+ if self.error_stack:
+ return self.error_stack.pop()
+ return None
+
+ def diagnose_and_fix(
+ self,
+ command: str,
+ stderr: str,
+ intent: str,
+ original_query: str,
+ stdout: str = "",
+ ) -> tuple[bool, str]:
+ """
+ Main diagnosis and fix flow.
+
+ Returns:
+ Tuple of (success, message)
+ """
+ console.print(
+ Panel(
+ f"[bold]Starting Diagnosis[/bold]\n"
+ f"Command: [cyan]{command}[/cyan]\n"
+ f"Intent: {intent}",
+ title="🔧 Cortex Diagnosis Engine",
+ border_style="blue",
+ )
+ )
+
+ # Push initial error to stack
+ initial_entry = ErrorStackEntry(
+ original_command=command,
+ intent=intent,
+ error=stderr,
+ category=ErrorCategory.UNKNOWN, # Will be set in Step 1
+ )
+ self.push_error(initial_entry)
+
+ # Process error stack
+ while self.error_stack:
+ entry = self.error_stack[-1] # Peek at top
+
+ if entry.fix_attempts >= self.MAX_FIX_ATTEMPTS:
+ console.print(
+ f"[red]✗ Max fix attempts ({self.MAX_FIX_ATTEMPTS}) reached for command[/red]"
+ )
+ self.pop_error()
+ continue
+
+ entry.fix_attempts += 1
+ console.print(
+ f"\n[bold]Fix Attempt {entry.fix_attempts}/{self.MAX_FIX_ATTEMPTS}[/bold]"
+ )
+
+ # Step 1: Categorize error
+ diagnosis = self.categorize_error(entry.original_command, entry.error)
+ entry.category = diagnosis.category
+
+ # SPECIAL HANDLING: URL-based permission errors need authentication
+ url_auth_categories = [
+ ErrorCategory.PERMISSION_DENIED_URL,
+ ErrorCategory.ACCESS_DENIED_REGISTRY,
+ ErrorCategory.ACCESS_DENIED_REPO,
+ ErrorCategory.ACCESS_DENIED_API,
+ ErrorCategory.LOGIN_REQUIRED,
+ ]
+
+ if diagnosis.category in url_auth_categories:
+ console.print(
+ "[cyan]🌐 URL-based access error detected - handling authentication[/cyan]"
+ )
+
+ auth_success, auth_message = self.handle_url_authentication(
+ entry.original_command, diagnosis
+ )
+
+ if auth_success:
+ # Re-test the original command after login
+ console.print("\n[cyan]📋 Testing original command after login...[/cyan]")
+
+ test_result = subprocess.run(
+ entry.original_command,
+ shell=True,
+ capture_output=True,
+ text=True,
+ timeout=120,
+ )
+
+ if test_result.returncode == 0:
+ console.print("[green]✓ Command succeeded after authentication![/green]")
+ self.pop_error()
+ if not self.error_stack:
+ return True, f"Fixed via authentication: {auth_message}"
+ continue
+ else:
+ # Different error after login
+ entry.error = test_result.stderr.strip()
+ console.print(
+ "[yellow]⚠ New error after login, continuing diagnosis...[/yellow]"
+ )
+ continue
+ else:
+ console.print(f"[yellow]⚠ Authentication failed: {auth_message}[/yellow]")
+ # Continue with normal fix flow
+
+ # Step 2: Generate fix plan
+ fix_plan = self.generate_fix_plan(entry.original_command, entry.intent, diagnosis)
+ entry.fix_plan = fix_plan
+
+ # Step 3: Resolve variables
+ resolved_vars = self.resolve_variables(
+ fix_plan,
+ original_query,
+ entry.original_command,
+ diagnosis,
+ )
+
+ # Check if all variables resolved
+ unresolved = fix_plan.all_variables - set(resolved_vars.keys())
+ if unresolved:
+ console.print(f"[yellow]⚠ Could not resolve all variables: {unresolved}[/yellow]")
+ # Continue anyway with what we have
+
+ # Step 4: Execute fix commands
+ results = self.execute_fix_commands(fix_plan, resolved_vars)
+
+ # Check for errors in fix commands (Step 5)
+ fix_errors = [r for r in results if not r.success]
+ if fix_errors:
+ console.print(f"\n[yellow]⚠ {len(fix_errors)} fix command(s) failed[/yellow]")
+
+ # Push the first error back to stack for diagnosis
+ first_error = fix_errors[0]
+ if first_error.stderr and "Unresolved variables" not in first_error.stderr:
+ new_entry = ErrorStackEntry(
+ original_command=first_error.command,
+ intent=f"Fix command for: {entry.intent}",
+ error=first_error.stderr,
+ category=ErrorCategory.UNKNOWN,
+ )
+ self.push_error(new_entry)
+ continue
+
+ # Step 6: Test original command
+ console.print(f"\n[cyan]📋 Testing original command: {entry.original_command}[/cyan]")
+
+ test_result = subprocess.run(
+ entry.original_command,
+ shell=True,
+ capture_output=True,
+ text=True,
+ timeout=120,
+ )
+
+ if test_result.returncode == 0:
+ console.print("[green]✓ Original command now succeeds![/green]")
+ self.pop_error()
+
+ # Check if stack is empty
+ if not self.error_stack:
+ return True, "All errors resolved successfully"
+ else:
+ new_error = test_result.stderr.strip()
+ console.print("[yellow]⚠ Original command still fails[/yellow]")
+
+ if new_error != entry.error:
+ console.print("[cyan] New error detected, updating...[/cyan]")
+ entry.error = new_error
+ # Loop will continue with same entry
+
+ # Stack empty but we didn't explicitly succeed
+ return False, "Could not resolve all errors"
+
+ # =========================================================================
+ # HELPERS
+ # =========================================================================
+
+ def _call_llm(self, system_prompt: str, user_prompt: str) -> str:
+ """Call the LLM and return response text."""
+ if self.provider == "claude":
+ response = self.client.messages.create(
+ model=self.model,
+ max_tokens=2048,
+ system=system_prompt,
+ messages=[{"role": "user", "content": user_prompt}],
+ )
+ return response.content[0].text
+ elif self.provider == "openai":
+ response = self.client.chat.completions.create(
+ model=self.model,
+ max_tokens=2048,
+ messages=[
+ {"role": "system", "content": system_prompt},
+ {"role": "user", "content": user_prompt},
+ ],
+ )
+ return response.choices[0].message.content
+ else:
+ raise ValueError(f"Unsupported provider: {self.provider}")
+
+ def _log_step(self, step_num: int, description: str) -> None:
+ """Log a diagnosis step."""
+ console.print(f"\n[bold blue]Step {step_num}:[/bold blue] {description}")
+
+ def _print_diagnosis(self, diagnosis: DiagnosisResult, command: str) -> None:
+ """Print diagnosis result."""
+ table = Table(title="Error Diagnosis", show_header=False, border_style="dim")
+ table.add_column("Field", style="bold")
+ table.add_column("Value")
+
+ table.add_row("Category", f"[cyan]{diagnosis.category.value}[/cyan]")
+ table.add_row("Confidence", f"{diagnosis.confidence:.0%}")
+
+ if diagnosis.extracted_info:
+ info_str = ", ".join(f"{k}={v}" for k, v in diagnosis.extracted_info.items() if v)
+ table.add_row("Extracted", info_str)
+
+ table.add_row(
+ "Error",
+ (
+ diagnosis.error_message[:100] + "..."
+ if len(diagnosis.error_message) > 100
+ else diagnosis.error_message
+ ),
+ )
+
+ console.print(table)
+
+ def _print_fix_plan(self, plan: FixPlan) -> None:
+ """Print fix plan."""
+ console.print(f"\n[bold]Fix Plan:[/bold] {plan.reasoning}")
+
+ for i, cmd in enumerate(plan.commands, 1):
+ sudo_tag = "[sudo]" if cmd.requires_sudo else ""
+ vars_tag = f"[vars: {', '.join(cmd.variables)}]" if cmd.variables else ""
+ console.print(f" {i}. [cyan]{cmd.command_template}[/cyan] {sudo_tag} {vars_tag}")
+ console.print(f" [dim]{cmd.purpose}[/dim]")
+
+ def _print_error_stack(self) -> None:
+ """Print current error stack."""
+ if not self.error_stack:
+ console.print("[dim] Error stack: empty[/dim]")
+ return
+
+ tree = Tree("[bold]Error Stack[/bold]")
+ for i, entry in enumerate(reversed(self.error_stack)):
+ branch = tree.add(f"[{'yellow' if i == 0 else 'dim'}]{entry.original_command[:50]}[/]")
+ branch.add(f"[dim]Category: {entry.category.value}[/dim]")
+ branch.add(f"[dim]Attempts: {entry.fix_attempts}[/dim]")
+
+ console.print(tree)
+
+ def get_execution_summary(self) -> dict[str, Any]:
+ """Get summary of all executions."""
+ return {
+ "total_commands": len(self.execution_history),
+ "successful": sum(1 for h in self.execution_history if h.get("success")),
+ "failed": sum(1 for h in self.execution_history if not h.get("success")),
+ "history": self.execution_history[-20:], # Last 20
+ "variables_cached": len(self.variable_cache),
+ }
+
+
+# =============================================================================
+# FACTORY FUNCTION
+# =============================================================================
+
+
+def get_diagnosis_engine(
+ provider: str = "claude",
+ debug: bool = False,
+) -> DiagnosisEngine:
+ """Factory function to create a DiagnosisEngine."""
+    api_key = (
+        os.environ.get("ANTHROPIC_API_KEY")
+        if provider == "claude"
+        else os.environ.get("OPENAI_API_KEY")
+    )
+ return DiagnosisEngine(api_key=api_key, provider=provider, debug=debug)
+
+
+# =============================================================================
+# CLI TEST
+# =============================================================================
+
+if __name__ == "__main__":
+ console.print("[bold]Diagnosis Engine Test[/bold]\n")
+
+ engine = get_diagnosis_engine(debug=True)
+
+ # Test error categorization
+ test_cases = [
+ ("cat /nonexistent/file", "cat: /nonexistent/file: No such file or directory"),
+ ("docker pull ghcr.io/test/image", "Error: Non-null Username Required"),
+ ("apt install fakepackage", "E: Unable to locate package fakepackage"),
+ ("nginx -t", 'nginx: [emerg] unknown directive "invalid" in /etc/nginx/nginx.conf:10'),
+ (
+ "systemctl start myservice",
+ "Failed to start myservice.service: Unit myservice.service not found.",
+ ),
+ ]
+
+ for cmd, error in test_cases:
+ console.print(f"\n[bold]Test:[/bold] {cmd}")
+ console.print(f"[dim]Error: {error}[/dim]")
+
+ diagnosis = engine.categorize_error(cmd, error)
+ console.print(f"[green]Category: {diagnosis.category.value}[/green]")
+ console.print("")
diff --git a/cortex/do_runner/executor.py b/cortex/do_runner/executor.py
new file mode 100644
index 00000000..15769fcd
--- /dev/null
+++ b/cortex/do_runner/executor.py
@@ -0,0 +1,517 @@
+"""Task Tree Executor for advanced command execution with auto-repair."""
+
+import os
+import re
+import subprocess
+import time
+from collections.abc import Callable
+from typing import Any
+
+from rich.console import Console
+from rich.prompt import Confirm
+
+from .models import (
+    CommandStatus,
+    TaskNode,
+    TaskTree,
+)
+from .terminal import TerminalMonitor
+
+console = Console()
+
+
+class TaskTreeExecutor:
+ """
+ Executes a task tree with auto-repair capabilities.
+
+ This handles:
+ - Executing commands in order
+ - Spawning repair sub-tasks when commands fail
+ - Asking for additional permissions when needed
+ - Monitoring terminals during manual intervention
+ - Providing detailed reasoning for failures
+ """
+
+ def __init__(
+ self,
+ user_manager: type,
+ paths_manager: Any,
+ llm_callback: Callable[[str], dict] | None = None,
+ ):
+ self.user_manager = user_manager
+ self.paths_manager = paths_manager
+ self.llm_callback = llm_callback
+ self.tree = TaskTree()
+ self._granted_privileges: list[str] = []
+ self._permission_sets_requested: int = 0
+ self._terminal_monitor: TerminalMonitor | None = None
+
+ self._in_manual_mode = False
+ self._manual_commands_executed: list[dict] = []
+
+ def build_tree_from_commands(
+ self,
+ commands: list[dict[str, str]],
+ ) -> TaskTree:
+ """Build a task tree from a list of commands."""
+ for cmd in commands:
+ self.tree.add_root_task(
+ command=cmd.get("command", ""),
+ purpose=cmd.get("purpose", ""),
+ )
+ return self.tree
+
+ def execute_tree(
+ self,
+ confirm_callback: Callable[[list[TaskNode]], bool] | None = None,
+ notify_callback: Callable[[str, str], None] | None = None,
+ ) -> tuple[bool, str]:
+ """
+ Execute the task tree with auto-repair.
+
+ Returns:
+ Tuple of (success, summary)
+ """
+ total_success = 0
+ total_failed = 0
+ total_repaired = 0
+ repair_details = []
+
+ for root_task in self.tree.root_tasks:
+ success, repaired = self._execute_task_with_repair(
+ root_task,
+ confirm_callback,
+ notify_callback,
+ )
+
+ if success:
+ total_success += 1
+ if repaired:
+ total_repaired += 1
+ else:
+ total_failed += 1
+ if root_task.failure_reason:
+ repair_details.append(
+ f"- {root_task.command[:40]}...: {root_task.failure_reason}"
+ )
+
+ summary_parts = [
+ f"Completed: {total_success}",
+ f"Failed: {total_failed}",
+ ]
+ if total_repaired > 0:
+ summary_parts.append(f"Auto-repaired: {total_repaired}")
+
+ summary = f"Tasks: {' | '.join(summary_parts)}"
+
+ if repair_details:
+ summary += "\n\nFailure reasons:\n" + "\n".join(repair_details)
+
+ return total_failed == 0, summary
+
+ def _execute_task_with_repair(
+ self,
+ task: TaskNode,
+ confirm_callback: Callable[[list[TaskNode]], bool] | None = None,
+ notify_callback: Callable[[str, str], None] | None = None,
+ ) -> tuple[bool, bool]:
+ """Execute a task and attempt repair if it fails."""
+ was_repaired = False
+
+ task.status = CommandStatus.RUNNING
+ success, output, error, duration = self._execute_command(task.command)
+
+ task.output = output
+ task.error = error
+ task.duration_seconds = duration
+
+ if success:
+ task.status = CommandStatus.SUCCESS
+ console.print(f"[green]✓[/green] {task.purpose}")
+ return True, False
+
+ task.status = CommandStatus.NEEDS_REPAIR
+ diagnosis = self._diagnose_error(task.command, error, output)
+ task.failure_reason = diagnosis.get("description", "Unknown error")
+
+ console.print(f"[yellow]⚠[/yellow] {task.purpose} - {diagnosis['error_type']}")
+ console.print(f"[dim] └─ {diagnosis['description']}[/dim]")
+
+ if diagnosis.get("can_auto_fix") and task.repair_attempts < task.max_repair_attempts:
+ task.repair_attempts += 1
+ fix_commands = diagnosis.get("fix_commands", [])
+
+ if fix_commands:
+ console.print(
+ f"[cyan]🔧 Attempting auto-repair ({task.repair_attempts}/{task.max_repair_attempts})...[/cyan]"
+ )
+
+ new_paths = self._identify_paths_needing_privileges(fix_commands)
+ if new_paths and confirm_callback:
+ repair_tasks = []
+ for cmd in fix_commands:
+ repair_task = self.tree.add_repair_task(
+ parent=task,
+ command=cmd,
+ purpose=f"Repair: {diagnosis['description'][:50]}",
+ reasoning=diagnosis.get("reasoning", ""),
+ )
+ repair_tasks.append(repair_task)
+
+ self._permission_sets_requested += 1
+ console.print(
+ f"\n[yellow]🔐 Permission request #{self._permission_sets_requested} for repair commands:[/yellow]"
+ )
+
+ if confirm_callback(repair_tasks):
+ all_repairs_success = True
+ for repair_task in repair_tasks:
+ repair_success, _ = self._execute_task_with_repair(
+ repair_task, confirm_callback, notify_callback
+ )
+ if not repair_success:
+ all_repairs_success = False
+
+ if all_repairs_success:
+ console.print("[cyan]↻ Retrying original command...[/cyan]")
+ success, output, error, duration = self._execute_command(task.command)
+ task.output = output
+ task.error = error
+ task.duration_seconds += duration
+
+ if success:
+ task.status = CommandStatus.SUCCESS
+ task.reasoning = (
+ f"Auto-repaired after {task.repair_attempts} attempt(s)"
+ )
+ console.print(
+ f"[green]✓[/green] {task.purpose} [dim](repaired)[/dim]"
+ )
+ return True, True
+ else:
+ all_repairs_success = True
+ for cmd in fix_commands:
+ repair_task = self.tree.add_repair_task(
+ parent=task,
+ command=cmd,
+ purpose=f"Repair: {diagnosis['description'][:50]}",
+ reasoning=diagnosis.get("reasoning", ""),
+ )
+ repair_success, _ = self._execute_task_with_repair(
+ repair_task, confirm_callback, notify_callback
+ )
+ if not repair_success:
+ all_repairs_success = False
+
+ if all_repairs_success:
+ console.print("[cyan]↻ Retrying original command...[/cyan]")
+ success, output, error, duration = self._execute_command(task.command)
+ task.output = output
+ task.error = error
+ task.duration_seconds += duration
+
+ if success:
+ task.status = CommandStatus.SUCCESS
+ task.reasoning = (
+ f"Auto-repaired after {task.repair_attempts} attempt(s)"
+ )
+ console.print(f"[green]✓[/green] {task.purpose} [dim](repaired)[/dim]")
+ return True, True
+
+ task.status = CommandStatus.FAILED
+ task.reasoning = self._generate_failure_reasoning(task, diagnosis)
+
+ if diagnosis.get("manual_suggestion") and notify_callback:
+ console.print("\n[yellow]📋 Manual intervention suggested:[/yellow]")
+ console.print(f"[dim]{diagnosis['manual_suggestion']}[/dim]")
+
+ if Confirm.ask(
+ "Would you like to run this manually while Cortex monitors?", default=False
+ ):
+ success = self._supervise_manual_intervention(
+ task,
+ diagnosis.get("manual_suggestion", ""),
+ notify_callback,
+ )
+ if success:
+ task.status = CommandStatus.SUCCESS
+ task.reasoning = "Completed via manual intervention with Cortex monitoring"
+ return True, True
+
+ console.print(f"\n[red]✗ Failed:[/red] {task.purpose}")
+ console.print(f"[dim] Reason: {task.reasoning}[/dim]")
+
+ return False, was_repaired
+
+ def _execute_command(self, command: str) -> tuple[bool, str, str, float]:
+ """Execute a command."""
+ start_time = time.time()
+
+ try:
+ needs_sudo = self._needs_sudo(command)
+
+ if needs_sudo and not command.strip().startswith("sudo"):
+ command = f"sudo {command}"
+
+ result = subprocess.run(
+ command,
+ shell=True,
+ capture_output=True,
+ text=True,
+ timeout=300,
+ )
+
+ duration = time.time() - start_time
+ success = result.returncode == 0
+
+ return success, result.stdout, result.stderr, duration
+
+ except subprocess.TimeoutExpired:
+ return False, "", "Command timed out after 300 seconds", time.time() - start_time
+ except Exception as e:
+ return False, "", str(e), time.time() - start_time
+
+ def _needs_sudo(self, command: str) -> bool:
+ """Determine if a command needs sudo."""
+ sudo_keywords = [
+ "systemctl",
+ "service",
+ "apt",
+ "apt-get",
+ "dpkg",
+ "useradd",
+ "usermod",
+ "userdel",
+ "groupadd",
+ "chmod",
+ "chown",
+ "mount",
+ "umount",
+ "fdisk",
+ "iptables",
+ "ufw",
+ "firewall-cmd",
+ ]
+
+ system_paths = ["/etc/", "/var/", "/usr/", "/opt/", "/sys/", "/proc/"]
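+        # Examples (illustrative): "systemctl restart nginx" -> True (sudo keyword);
+        # "echo ok > /etc/motd" -> True (write to a system path); "ls ~" -> False.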
+
+ cmd_parts = command.strip().split()
+ if not cmd_parts:
+ return False
+
+ base_cmd = cmd_parts[0]
+
+ if base_cmd in sudo_keywords:
+ return True
+
+ for part in cmd_parts:
+ for path in system_paths:
+ if path in part:
+ if any(
+ op in command
+ for op in [
+ ">",
+ ">>",
+ "cp ",
+ "mv ",
+ "rm ",
+ "mkdir ",
+ "touch ",
+ "sed ",
+ "tee ",
+ ]
+ ):
+ return True
+
+ return False
+
+ def _diagnose_error(
+ self,
+ command: str,
+ stderr: str,
+ stdout: str,
+ ) -> dict[str, Any]:
+ """Diagnose why a command failed and suggest repairs."""
+ error_lower = stderr.lower()
+ combined = (stderr + stdout).lower()
+
+ if "permission denied" in error_lower:
+ path_match = None
+ path_patterns = [
+ r"cannot (?:create|open|access|stat|remove|modify) (?:regular file |directory )?['\"]?([^'\":\n]+)['\"]?",
+ r"open\(\) ['\"]?([^'\"]+)['\"]? failed",
+ r"['\"]([^'\"]+)['\"]?: [Pp]ermission denied",
+ ]
+ for pattern in path_patterns:
+ match = re.search(pattern, stderr)
+ if match:
+ path_match = match.group(1).strip()
+ break
+
+ return {
+ "error_type": "Permission Denied",
+ "description": f"Insufficient permissions to access: {path_match or 'unknown path'}",
+ "can_auto_fix": True,
+ "fix_commands": (
+ [f"sudo {command}"] if not command.strip().startswith("sudo") else []
+ ),
+ "manual_suggestion": f"Run with sudo: sudo {command}",
+ "reasoning": f"The command tried to access '{path_match or 'a protected resource'}' without sufficient privileges.",
+ }
+
+ if "no such file or directory" in error_lower:
+ path_match = re.search(r"['\"]?([^'\"\n]+)['\"]?: [Nn]o such file", stderr)
+ missing_path = path_match.group(1) if path_match else None
+
+ if missing_path:
+ parent_dir = os.path.dirname(missing_path)
+ if parent_dir:
+ return {
+ "error_type": "File Not Found",
+ "description": f"Path does not exist: {missing_path}",
+ "can_auto_fix": True,
+ "fix_commands": [f"sudo mkdir -p {parent_dir}"],
+ "manual_suggestion": f"Create the directory: sudo mkdir -p {parent_dir}",
+ "reasoning": f"The target path '{missing_path}' doesn't exist.",
+ }
+
+ return {
+ "error_type": "File Not Found",
+ "description": "A required file or directory does not exist",
+ "can_auto_fix": False,
+ "fix_commands": [],
+ "manual_suggestion": "Check the file path and ensure it exists",
+ "reasoning": "The command references a non-existent path.",
+ }
+
+ if "command not found" in error_lower or "not found" in error_lower:
+ cmd_match = re.search(r"(\w+): (?:command )?not found", stderr)
+ missing_cmd = cmd_match.group(1) if cmd_match else None
+
+ return {
+ "error_type": "Command Not Found",
+ "description": f"Command not installed: {missing_cmd or 'unknown'}",
+ "can_auto_fix": bool(missing_cmd),
+ "fix_commands": [f"sudo apt install -y {missing_cmd}"] if missing_cmd else [],
+ "manual_suggestion": (
+ f"Install: sudo apt install {missing_cmd}"
+ if missing_cmd
+ else "Install the required command"
+ ),
+ "reasoning": f"The command '{missing_cmd or 'required'}' is not installed.",
+ }
+
+ return {
+ "error_type": "Unknown Error",
+ "description": stderr[:200] if stderr else "Command failed with no error output",
+ "can_auto_fix": False,
+ "fix_commands": [],
+ "manual_suggestion": f"Review the error and try: {command}",
+ "reasoning": "The command failed with an unexpected error.",
+ }
+
+ def _generate_failure_reasoning(self, task: TaskNode, diagnosis: dict) -> str:
+ """Generate detailed reasoning for why a task failed."""
+ parts = [
+ f"Error type: {diagnosis.get('error_type', 'Unknown')}",
+ f"Description: {diagnosis.get('description', 'No details available')}",
+ ]
+
+ if task.repair_attempts > 0:
+ parts.append(f"Repair attempts: {task.repair_attempts} (all failed)")
+
+ if diagnosis.get("reasoning"):
+ parts.append(f"Analysis: {diagnosis['reasoning']}")
+
+ if diagnosis.get("manual_suggestion"):
+ parts.append(f"Suggestion: {diagnosis['manual_suggestion']}")
+
+ return " | ".join(parts)
+
+ def _identify_paths_needing_privileges(self, commands: list[str]) -> list[str]:
+ """Identify paths in commands that need privilege grants."""
+ paths = []
+ for cmd in commands:
+ parts = cmd.split()
+ for part in parts:
+ if part.startswith("/") and self.paths_manager.is_protected(part):
+ paths.append(part)
+ return paths
+
+ def _supervise_manual_intervention(
+ self,
+ task: TaskNode,
+ instruction: str,
+ notify_callback: Callable[[str, str], None],
+ ) -> bool:
+ """Supervise manual command execution with terminal monitoring."""
+ self._in_manual_mode = True
+
+ console.print("\n[bold cyan]═══ Manual Intervention Mode ═══[/bold cyan]")
+ console.print("\n[yellow]Run this command in another terminal:[/yellow]")
+ console.print(f"[bold]{instruction}[/bold]")
+
+ self._terminal_monitor = TerminalMonitor(
+ notification_callback=lambda title, msg: notify_callback(title, msg)
+ )
+ self._terminal_monitor.start()
+
+ console.print("\n[dim]Cortex is now monitoring your terminal for issues...[/dim]")
+
+ try:
+ while True:
+ choice = Confirm.ask(
+ "\nHave you completed the manual step?",
+ default=True,
+ )
+
+ if choice:
+ success = Confirm.ask("Was it successful?", default=True)
+
+ if success:
+ console.print("[green]✓ Manual step completed successfully[/green]")
+ return True
+ else:
+ console.print("\n[yellow]What went wrong?[/yellow]")
+ console.print("1. Permission denied")
+ console.print("2. File not found")
+ console.print("3. Other error")
+
+ try:
+ error_choice = int(input("Enter choice (1-3): "))
+ except ValueError:
+ error_choice = 3
+
+ if error_choice == 1:
+ console.print(f"[yellow]Try: sudo {instruction}[/yellow]")
+ elif error_choice == 2:
+ console.print("[yellow]Check the file path exists[/yellow]")
+ else:
+ console.print("[yellow]Describe the error and try again[/yellow]")
+
+ continue_trying = Confirm.ask("Continue trying?", default=True)
+ if not continue_trying:
+ return False
+ else:
+ console.print("[dim]Take your time. Cortex is still monitoring...[/dim]")
+
+ finally:
+ self._in_manual_mode = False
+ if self._terminal_monitor:
+ self._terminal_monitor.stop()
+
+ def get_tree_summary(self) -> dict:
+ """Get a summary of the task tree execution."""
+ return {
+ "tree": self.tree.to_dict(),
+ "permission_requests": self._permission_sets_requested,
+ "manual_commands": self._manual_commands_executed,
+ }
diff --git a/cortex/do_runner/handler.py b/cortex/do_runner/handler.py
new file mode 100644
index 00000000..52507041
--- /dev/null
+++ b/cortex/do_runner/handler.py
@@ -0,0 +1,4267 @@
+"""Main DoHandler class for the --do functionality."""
+
+import datetime
+import os
+import shutil
+import signal
+import subprocess
+import sys
+import time
+from collections.abc import Callable
+from pathlib import Path
+from typing import Any
+
+from rich.console import Console
+from rich.panel import Panel
+from rich.prompt import Confirm
+from rich.table import Table
+
+# Dracula-Inspired Theme Colors
+PURPLE = "#bd93f9" # Dracula purple
+PURPLE_LIGHT = "#ff79c6" # Dracula pink
+PURPLE_DARK = "#6272a4" # Dracula comment
+WHITE = "#f8f8f2" # Dracula foreground
+GRAY = "#6272a4" # Dracula comment
+GREEN = "#50fa7b" # Dracula green
+RED = "#ff5555" # Dracula red
+YELLOW = "#f1fa8c" # Dracula yellow
+CYAN = "#8be9fd" # Dracula cyan
+ORANGE = "#ffb86c" # Dracula orange
+
+# Round Icons
+ICON_SUCCESS = "●"
+ICON_ERROR = "●"
+ICON_INFO = "○"
+ICON_PENDING = "◐"
+ICON_ARROW = "→"
+ICON_CMD = "❯"
+
+from .database import DoRunDatabase
+from .diagnosis import AutoFixer, ErrorDiagnoser, LoginHandler
+from .managers import CortexUserManager, ProtectedPathsManager
+from .models import (
+ CommandLog,
+ CommandStatus,
+ DoRun,
+ RunMode,
+ TaskNode,
+ TaskTree,
+)
+from .terminal import TerminalMonitor
+from .verification import (
+ ConflictDetector,
+ FileUsefulnessAnalyzer,
+ VerificationRunner,
+)
+
+console = Console()
+
+
+class DoHandler:
+ """Main handler for the --do functionality."""
+
+ def __init__(self, llm_callback: Callable[[str], dict] | None = None):
+ self.db = DoRunDatabase()
+ self.paths_manager = ProtectedPathsManager()
+ self.user_manager = CortexUserManager
+ self.current_run: DoRun | None = None
+ self._granted_privileges: list[str] = []
+ self.llm_callback = llm_callback
+
+ self._task_tree: TaskTree | None = None
+ self._permission_requests_count = 0
+
+ self._terminal_monitor: TerminalMonitor | None = None
+
+ # Manual intervention tracking
+ self._expected_manual_commands: list[str] = []
+ self._completed_manual_commands: list[str] = []
+
+ # Session tracking
+ self.current_session_id: str | None = None
+
+ # Initialize helper classes
+ self._diagnoser = ErrorDiagnoser()
+ self._auto_fixer = AutoFixer(llm_callback=llm_callback)
+ self._login_handler = LoginHandler()
+ self._conflict_detector = ConflictDetector()
+ self._verification_runner = VerificationRunner()
+ self._file_analyzer = FileUsefulnessAnalyzer()
+
+ # Execution state tracking for interruption handling
+ self._current_process: subprocess.Popen | None = None
+ self._current_command: str | None = None
+ self._executed_commands: list[dict] = []
+ self._interrupted = False
+        # Track which command was interrupted for retry
+        self._interrupted_command: str | None = None
+        # Commands that weren't executed
+        self._remaining_commands: list[tuple[str, str, list[str]]] = []
+        self._original_sigtstp = None
+        self._original_sigint = None
+
+        # Initialize notification manager (optional dependency)
+        try:
+            from cortex.notification_manager import NotificationManager
+
+            self.notifier = NotificationManager()
+        except ImportError:
+            self.notifier = None
+
+ def cleanup(self) -> None:
+ """Clean up any running threads or resources."""
+ if self._terminal_monitor:
+ self._terminal_monitor.stop()
+ self._terminal_monitor = None
+
+ def _is_json_like(self, text: str) -> bool:
+ """Check if text looks like raw JSON that shouldn't be displayed."""
+ if not text:
+ return False
+ text = text.strip()
+ # Check for obvious JSON patterns
+ json_indicators = [
+ text.startswith(("{", "[", "]", "}")),
+ '"response_type"' in text,
+ '"do_commands"' in text,
+ '"command":' in text,
+ '"requires_sudo"' in text,
+ '{"' in text and '":' in text,
+ text.count('"') > 6 and ":" in text, # Multiple quoted keys
+ ]
+ return any(json_indicators)
+
+ def _setup_signal_handlers(self):
+ """Set up signal handlers for Ctrl+Z and Ctrl+C."""
+ self._original_sigtstp = signal.signal(signal.SIGTSTP, self._handle_interrupt)
+ self._original_sigint = signal.signal(signal.SIGINT, self._handle_interrupt)
+
+ def _restore_signal_handlers(self):
+ """Restore original signal handlers."""
+ if self._original_sigtstp is not None:
+ signal.signal(signal.SIGTSTP, self._original_sigtstp)
+ if self._original_sigint is not None:
+ signal.signal(signal.SIGINT, self._original_sigint)
+
+ def _handle_interrupt(self, signum, frame):
+ """Handle Ctrl+Z (SIGTSTP) or Ctrl+C (SIGINT) to stop current command only.
+
+ This does NOT exit the session - it only stops the currently executing command.
+ The session continues so the user can decide what to do next.
+ """
+ self._interrupted = True
+ # Store the interrupted command for potential retry
+ self._interrupted_command = self._current_command
+ signal_name = "Ctrl+Z" if signum == signal.SIGTSTP else "Ctrl+C"
+
+ console.print()
+ console.print(
+ f"[{YELLOW}]⚠ {signal_name} detected - Stopping current command...[/{YELLOW}]"
+ )
+
+ # Kill current subprocess if running
+ if self._current_process and self._current_process.poll() is None:
+ try:
+ self._current_process.terminate()
+ # Give it a moment to terminate gracefully
+ try:
+ self._current_process.wait(timeout=2)
+ except subprocess.TimeoutExpired:
+ self._current_process.kill()
+ console.print(f"[{YELLOW}] Stopped: {self._current_command}[/{YELLOW}]")
+ except Exception as e:
+ console.print(f"[{GRAY}] Error stopping process: {e}[/{GRAY}]")
+
+ # Note: We do NOT raise KeyboardInterrupt here
+ # The session continues - only the current command is stopped
+
+ def _track_command_start(self, command: str, process: subprocess.Popen | None = None):
+ """Track when a command starts executing."""
+ self._current_command = command
+ self._current_process = process
+
+ def _track_command_complete(
+ self, command: str, success: bool, output: str = "", error: str = ""
+ ):
+ """Track when a command completes."""
+ self._executed_commands.append(
+ {
+ "command": command,
+ "success": success,
+ "output": output[:500] if output else "",
+ "error": error[:200] if error else "",
+ "timestamp": datetime.datetime.now().isoformat(),
+ }
+ )
+ self._current_command = None
+ self._current_process = None
+
+ def _reset_execution_state(self):
+ """Reset execution tracking state for a new run."""
+ self._current_process = None
+ self._current_command = None
+ self._executed_commands = []
+ self._interrupted = False
+ self._interrupted_command = None
+ self._remaining_commands = []
+
+ def __del__(self):
+ """Destructor to ensure cleanup."""
+ self.cleanup()
+
+ def _show_expandable_output(self, output: str, command: str) -> None:
+ """Show output with expand/collapse capability."""
+        from rich.text import Text
+
+ lines = output.split("\n")
+ total_lines = len(lines)
+
+ # Always show first 3 lines as preview
+ preview_count = 3
+
+ if total_lines <= preview_count + 2:
+ # Small output - just show it all
+ console.print(
+ Panel(
+ output,
+ title=f"[{GRAY}]Output[/{GRAY}]",
+ title_align="left",
+ border_style=GRAY,
+ padding=(0, 1),
+ )
+ )
+ return
+
+ # Show collapsed preview
+ preview = "\n".join(lines[:preview_count])
+ remaining = total_lines - preview_count
+
+ content = Text()
+ content.append(preview)
+        content.append(f"\n\n─── {remaining} more lines hidden ───", style=GRAY)
+
+ console.print(
+ Panel(
+ content,
+ title=f"[{GRAY}]Output ({total_lines} lines)[/{GRAY}]",
+ subtitle=f"[italic {GRAY}]Press Enter to continue, 'e' to expand[/italic {GRAY}]",
+ subtitle_align="right",
+ title_align="left",
+ border_style=GRAY,
+ padding=(0, 1),
+ )
+ )
+
+ # Quick check if user wants to expand
+ try:
+ response = input().strip().lower()
+ if response == "e":
+ # Show full output
+ console.print(
+ Panel(
+ output,
+ title=f"[{GRAY}]Full Output ({total_lines} lines)[/{GRAY}]",
+ title_align="left",
+ border_style=PURPLE,
+ padding=(0, 1),
+ )
+ )
+ except (EOFError, KeyboardInterrupt):
+ pass
+
+ def _send_notification(self, title: str, message: str, level: str = "normal"):
+ """Send a desktop notification."""
+ if self.notifier:
+ self.notifier.send(title, message, level=level)
+ else:
+ console.print(
+ f"[bold {YELLOW}]🔔 {title}:[/bold {YELLOW}] [{WHITE}]{message}[/{WHITE}]"
+ )
+
+ def setup_cortex_user(self) -> bool:
+ """Ensure the cortex user exists."""
+ if not self.user_manager.user_exists():
+ console.print(f"[{YELLOW}]Setting up cortex user...[/{YELLOW}]")
+ success, message = self.user_manager.create_user()
+ if success:
+ console.print(f"[{GREEN}]{ICON_SUCCESS} {message}[/{GREEN}]")
+ else:
+ console.print(f"[{RED}]{ICON_ERROR} {message}[/{RED}]")
+ return success
+ return True
+
+ def analyze_commands_for_protected_paths(
+ self, commands: list[tuple[str, str]]
+ ) -> list[tuple[str, str, list[str]]]:
+ """Analyze commands and identify protected paths they access."""
+ results = []
+
+ for command, purpose in commands:
+ protected = []
+ parts = command.split()
+ for part in parts:
+ if part.startswith("/") or part.startswith("~"):
+ path = os.path.expanduser(part)
+ if self.paths_manager.is_protected(path):
+ protected.append(path)
+
+ results.append((command, purpose, protected))
+
+ return results
+
+ def request_user_confirmation(
+ self,
+ commands: list[tuple[str, str, list[str]]],
+ ) -> bool:
+ """Show commands to user and request confirmation with improved visual UI."""
+ from rich import box
+ from rich.text import Text
+
+ console.print()
+
+ # Create a table for commands
+ cmd_table = Table(
+ show_header=True,
+ header_style=f"bold {PURPLE_LIGHT}",
+ box=box.ROUNDED,
+ border_style=PURPLE,
+ expand=True,
+ padding=(0, 1),
+ )
+ cmd_table.add_column("#", style=f"bold {PURPLE_LIGHT}", width=3, justify="right")
+ cmd_table.add_column("Command", style=f"bold {WHITE}")
+ cmd_table.add_column("Purpose", style=GRAY)
+
+ all_protected = []
+ for i, (cmd, purpose, protected) in enumerate(commands, 1):
+ # Truncate long commands for display
+ cmd_display = cmd if len(cmd) <= 60 else cmd[:57] + "..."
+ purpose_display = purpose if len(purpose) <= 50 else purpose[:47] + "..."
+
+ # Add protected path indicator
+ if protected:
+ cmd_display = f"{cmd_display} [{YELLOW}]⚠[/{YELLOW}]"
+ all_protected.extend(protected)
+
+ cmd_table.add_row(str(i), cmd_display, purpose_display)
+
+ # Create header
+ header_text = Text()
+ header_text.append("🔐 ", style="bold")
+ header_text.append("Permission Required", style=f"bold {WHITE}")
+ header_text.append(
+ f" ({len(commands)} command{'s' if len(commands) > 1 else ''})", style=GRAY
+ )
+
+ console.print(
+ Panel(
+ cmd_table,
+ title=header_text,
+ title_align="left",
+ border_style=PURPLE,
+ padding=(1, 1),
+ )
+ )
+
+ # Show protected paths if any
+ if all_protected:
+ protected_set = set(all_protected)
+ protected_text = Text()
+ protected_text.append("⚠ Protected paths: ", style=f"bold {YELLOW}")
+ protected_text.append(", ".join(protected_set), style=GRAY)
+ console.print(
+ Panel(
+ protected_text,
+ border_style=PURPLE,
+ padding=(0, 1),
+ expand=False,
+ )
+ )
+
+ console.print()
+ return Confirm.ask("[bold]Proceed?[/bold]", default=False)
+
+ def _needs_sudo(self, cmd: str, protected_paths: list[str]) -> bool:
+ """Determine if a command needs sudo to execute."""
+ sudo_commands = [
+ "systemctl",
+ "service",
+ "apt",
+ "apt-get",
+ "dpkg",
+ "mount",
+ "umount",
+ "fdisk",
+ "mkfs",
+ "chown",
+ "chmod",
+ "useradd",
+ "userdel",
+ "usermod",
+ "groupadd",
+ "groupdel",
+ ]
+
+ cmd_parts = cmd.split()
+ if not cmd_parts:
+ return False
+
+ base_cmd = cmd_parts[0]
+
+ if base_cmd in sudo_commands:
+ return True
+
+ if protected_paths:
+ return True
+
+ if any(p in cmd for p in ["/etc/", "/var/lib/", "/usr/", "/opt/", "/root/"]):
+ return True
+
+ return False
+
+ # Commands that benefit from streaming output (long-running with progress)
+ STREAMING_COMMANDS = [
+ "docker pull",
+ "docker push",
+ "docker build",
+ "apt install",
+ "apt-get install",
+ "apt update",
+ "apt-get update",
+ "apt upgrade",
+ "apt-get upgrade",
+ "pip install",
+ "pip3 install",
+ "pip download",
+ "pip3 download",
+ "npm install",
+ "npm ci",
+ "yarn install",
+ "yarn add",
+ "cargo build",
+ "cargo install",
+ "go build",
+ "go install",
+ "go get",
+ "gem install",
+ "bundle install",
+ "wget",
+ "curl -o",
+ "curl -O",
+ "git clone",
+ "git pull",
+ "git fetch",
+ "make",
+ "cmake",
+ "ninja",
+ "rsync",
+ "scp",
+ ]
+
+ # Interactive commands that need a TTY - cannot be run in background/automated
+ INTERACTIVE_COMMANDS = [
+ "docker exec -it",
+ "docker exec -ti",
+ "docker run -it",
+ "docker run -ti",
+ "docker attach",
+ "ollama run",
+ "ollama chat",
+ "ssh ",
+ "bash -i",
+ "sh -i",
+ "zsh -i",
+ "vi ",
+ "vim ",
+ "nano ",
+ "emacs ",
+ "python -i",
+ "python3 -i",
+ "ipython",
+ "node -i",
+ "mysql -u",
+ "psql -U",
+ "mongo ",
+ "redis-cli",
+ "htop",
+ "top -i",
+ "less ",
+ "more ",
+ ]
+
+ def _should_stream_output(self, cmd: str) -> bool:
+ """Check if command should use streaming output."""
+ cmd_lower = cmd.lower()
+ return any(streaming_cmd in cmd_lower for streaming_cmd in self.STREAMING_COMMANDS)
+
+ def _is_interactive_command(self, cmd: str) -> bool:
+ """Check if command requires interactive TTY and cannot be automated."""
+ cmd_lower = cmd.lower()
+ # Check explicit patterns
+ if any(interactive in cmd_lower for interactive in self.INTERACTIVE_COMMANDS):
+ return True
+ # Check for -it or -ti flags in docker commands
+ if "docker" in cmd_lower and (
+ " -it " in cmd_lower
+ or " -ti " in cmd_lower
+ or cmd_lower.endswith(" -it")
+ or cmd_lower.endswith(" -ti")
+ ):
+ return True
+ return False
+
+ # Timeout settings by command type (in seconds)
+ COMMAND_TIMEOUTS = {
+ "docker pull": 1800, # 30 minutes for large images
+ "docker push": 1800, # 30 minutes for large images
+ "docker build": 3600, # 1 hour for complex builds
+ "apt install": 900, # 15 minutes
+ "apt-get install": 900,
+ "apt update": 300, # 5 minutes
+ "apt-get update": 300,
+ "apt upgrade": 1800, # 30 minutes
+ "apt-get upgrade": 1800,
+ "pip install": 600, # 10 minutes
+ "pip3 install": 600,
+ "npm install": 900, # 15 minutes
+ "yarn install": 900,
+ "git clone": 600, # 10 minutes
+ "make": 1800, # 30 minutes
+ "cargo build": 1800,
+ }
+
+ def _get_command_timeout(self, cmd: str) -> int:
+ """Get appropriate timeout for a command."""
+ cmd_lower = cmd.lower()
+ for cmd_pattern, timeout in self.COMMAND_TIMEOUTS.items():
+ if cmd_pattern in cmd_lower:
+ return timeout
+ return 600 # Default 10 minutes for streaming commands
+
+ def _execute_with_streaming(
+ self,
+ cmd: str,
+ needs_sudo: bool,
+ timeout: int | None = None, # None = auto-detect
+ ) -> tuple[bool, str, str]:
+ """Execute a command with real-time output streaming."""
+        import select
+
+ # Auto-detect timeout if not specified
+ if timeout is None:
+ timeout = self._get_command_timeout(cmd)
+
+ # Show timeout info for long operations
+ if timeout > 300:
+ console.print(
+ f"[dim] ⏱️ Timeout: {timeout // 60} minutes (large operation)[/dim]"
+ )
+
+ stdout_lines = []
+ stderr_lines = []
+
+ try:
+ if needs_sudo:
+ process = subprocess.Popen(
+ ["sudo", "bash", "-c", cmd],
+ stdout=subprocess.PIPE,
+ stderr=subprocess.PIPE,
+ text=True,
+ bufsize=1, # Line buffered
+ )
+ else:
+ process = subprocess.Popen(
+ cmd,
+ shell=True,
+ stdout=subprocess.PIPE,
+ stderr=subprocess.PIPE,
+ text=True,
+ bufsize=1,
+ )
+
+            # Use select for non-blocking reads on both stdout and stderr
+ start_time = time.time()
+
+ while True:
+ # Check timeout
+ if time.time() - start_time > timeout:
+ process.kill()
+ return (
+ False,
+ "\n".join(stdout_lines),
+ f"Command timed out after {timeout} seconds",
+ )
+
+ # Check if process has finished
+ if process.poll() is not None:
+ # Read any remaining output
+ remaining_stdout, remaining_stderr = process.communicate()
+ if remaining_stdout:
+ for line in remaining_stdout.splitlines():
+ stdout_lines.append(line)
+ self._print_progress_line(line, is_stderr=False)
+ if remaining_stderr:
+ for line in remaining_stderr.splitlines():
+ stderr_lines.append(line)
+ self._print_progress_line(line, is_stderr=True)
+ break
+
+ # Try to read from stdout/stderr without blocking
+ try:
+ readable, _, _ = select.select([process.stdout, process.stderr], [], [], 0.1)
+
+ for stream in readable:
+ line = stream.readline()
+ if line:
+ line = line.rstrip()
+ if stream == process.stdout:
+ stdout_lines.append(line)
+ self._print_progress_line(line, is_stderr=False)
+ else:
+ stderr_lines.append(line)
+ self._print_progress_line(line, is_stderr=True)
+ except (ValueError, OSError):
+ # Stream closed
+ break
+
+ return (
+ process.returncode == 0,
+ "\n".join(stdout_lines).strip(),
+ "\n".join(stderr_lines).strip(),
+ )
+
+ except Exception as e:
+ return False, "\n".join(stdout_lines), str(e)
+
+ def _print_progress_line(self, line: str, is_stderr: bool = False) -> None:
+ """Print a progress line with appropriate formatting."""
+ if not line.strip():
+ return
+
+ line = line.strip()
+
+ # Docker pull progress patterns
+ if any(
+ p in line
+ for p in [
+ "Pulling from",
+ "Digest:",
+ "Status:",
+ "Pull complete",
+ "Downloading",
+ "Extracting",
+ ]
+ ):
+ console.print(f"[dim] 📦 {line}[/dim]")
+ # Docker build progress
+ elif line.startswith("Step ") or line.startswith("---> "):
+ console.print(f"[dim] 🔨 {line}[/dim]")
+ # apt progress patterns
+ elif any(
+ p in line
+ for p in [
+ "Get:",
+ "Hit:",
+ "Fetched",
+ "Reading",
+ "Building",
+ "Setting up",
+ "Processing",
+ "Unpacking",
+ ]
+ ):
+ console.print(f"[dim] 📦 {line}[/dim]")
+ # pip progress patterns
+ elif any(p in line for p in ["Collecting", "Downloading", "Installing", "Successfully"]):
+ console.print(f"[dim] 📦 {line}[/dim]")
+ # npm progress patterns
+ elif any(p in line for p in ["npm", "added", "packages", "audited"]):
+ console.print(f"[dim] 📦 {line}[/dim]")
+ # git progress patterns
+ elif any(
+ p in line for p in ["Cloning", "remote:", "Receiving", "Resolving", "Checking out"]
+ ):
+ console.print(f"[dim] 📦 {line}[/dim]")
+ # wget/curl progress
+ elif "%" in line and any(c.isdigit() for c in line):
+ # Progress percentage - update in place
+ console.print(f"[dim] ⬇️ {line[:80]}[/dim]", end="\r")
+ # Error lines
+ elif is_stderr and any(
+ p in line.lower() for p in ["error", "fail", "denied", "cannot", "unable"]
+ ):
+ console.print(f"[{YELLOW}] ⚠ {line}[/{YELLOW}]")
+ # Truncate very long lines
+ elif len(line) > 100:
+ console.print(f"[dim] {line[:100]}...[/dim]")
+
+ def _execute_single_command(
+ self, cmd: str, needs_sudo: bool, timeout: int = 120
+ ) -> tuple[bool, str, str]:
+ """Execute a single command with proper privilege handling and interruption support."""
+ # Check for interactive commands that need a TTY
+ if self._is_interactive_command(cmd):
+ return self._handle_interactive_command(cmd, needs_sudo)
+
+ # Use streaming for long-running commands
+ if self._should_stream_output(cmd):
+ return self._execute_with_streaming(cmd, needs_sudo, timeout=300)
+
+ # Track command start
+ self._track_command_start(cmd)
+
+ try:
+ # Flush output before sudo to handle password prompts cleanly
+ if needs_sudo:
+ sys.stdout.flush()
+ sys.stderr.flush()
+
+ # Use Popen for interruptibility
+ if needs_sudo:
+ process = subprocess.Popen(
+ ["sudo", "bash", "-c", cmd],
+ stdout=subprocess.PIPE,
+ stderr=subprocess.PIPE,
+ text=True,
+ )
+ else:
+ process = subprocess.Popen(
+ cmd,
+ shell=True,
+ stdout=subprocess.PIPE,
+ stderr=subprocess.PIPE,
+ text=True,
+ )
+
+ # Store process for interruption handling
+ self._current_process = process
+
+ try:
+ stdout, stderr = process.communicate(timeout=timeout)
+
+ # Check if interrupted during execution
+ if self._interrupted:
+ self._track_command_complete(
+ cmd, False, stdout or "", "Command interrupted by user"
+ )
+ return False, stdout.strip() if stdout else "", "Command interrupted by user"
+
+ success = process.returncode == 0
+
+ # Track completion
+ self._track_command_complete(cmd, success, stdout, stderr)
+
+ # After sudo, reset console state
+ if needs_sudo:
+ sys.stdout.write("") # Force flush
+ sys.stdout.flush()
+
+ return (success, stdout.strip(), stderr.strip())
+
+ except subprocess.TimeoutExpired:
+ process.kill()
+ stdout, stderr = process.communicate()
+ self._track_command_complete(
+ cmd, False, stdout, f"Command timed out after {timeout} seconds"
+ )
+ return (
+ False,
+ stdout.strip() if stdout else "",
+ f"Command timed out after {timeout} seconds",
+ )
+ except Exception as e:
+ self._track_command_complete(cmd, False, "", str(e))
+ return False, "", str(e)
+
+ def _handle_interactive_command(self, cmd: str, needs_sudo: bool) -> tuple[bool, str, str]:
+ """Handle interactive commands that need a TTY.
+
+        These commands cannot run in the background because they need user interaction.
+        We either:
+        1. Open the command in a new terminal window, or
+        2. Ask the user to run it manually.
+ """
+ console.print()
+ console.print(f"[{YELLOW}]⚡ Interactive command detected[/{YELLOW}]")
+ console.print(f"[{GRAY}] This command requires a terminal for interaction.[/{GRAY}]")
+ console.print()
+
+ full_cmd = f"sudo {cmd}" if needs_sudo else cmd
+
+ # Try to detect if we can open a new terminal
+ terminal_cmds = [
+ (
+ "gnome-terminal",
+ f'gnome-terminal -- bash -c "{full_cmd}; echo; echo Press Enter to close...; read"',
+ ),
+ (
+ "konsole",
+ f'konsole -e bash -c "{full_cmd}; echo; echo Press Enter to close...; read"',
+ ),
+ ("xterm", f'xterm -e bash -c "{full_cmd}; echo; echo Press Enter to close...; read"'),
+ (
+ "x-terminal-emulator",
+ f'x-terminal-emulator -e bash -c "{full_cmd}; echo; echo Press Enter to close...; read"',
+ ),
+ ]
+
+ # Check which terminal is available
+ for term_name, term_cmd in terminal_cmds:
+ if shutil.which(term_name):
+ console.print(
+ f"[{PURPLE_LIGHT}]🖥️ Opening in new terminal window ({term_name})...[/{PURPLE_LIGHT}]"
+ )
+ console.print(f"[{GRAY}] Command: {full_cmd}[/{GRAY}]")
+ console.print()
+
+ try:
+ # Start the terminal in background
+ subprocess.Popen(
+ term_cmd,
+ shell=True,
+ stdout=subprocess.DEVNULL,
+ stderr=subprocess.DEVNULL,
+ )
+ return True, f"Command opened in new {term_name} window", ""
+ except Exception as e:
+ console.print(f"[{YELLOW}] ⚠ Could not open terminal: {e}[/{YELLOW}]")
+ break
+
+ # Fallback: ask user to run manually
+ console.print(
+ f"[bold {PURPLE_LIGHT}]📋 Please run this command manually in another terminal:[/bold {PURPLE_LIGHT}]"
+ )
+ console.print()
+ console.print(f" [{GREEN}]{full_cmd}[/{GREEN}]")
+ console.print()
+ console.print(f"[{GRAY}] This command needs interactive input (TTY).[/{GRAY}]")
+ console.print(f"[{GRAY}] Cortex cannot capture its output automatically.[/{GRAY}]")
+ console.print()
+
+ # Return special status indicating manual run needed
+ return True, "INTERACTIVE_COMMAND_MANUAL", f"Interactive command - run manually: {full_cmd}"
+
+ def execute_commands_as_cortex(
+ self,
+ commands: list[tuple[str, str, list[str]]],
+ user_query: str,
+ ) -> DoRun:
+ """Execute commands with granular error handling and auto-recovery."""
+ run = DoRun(
+ run_id=self.db._generate_run_id(),
+ summary="",
+ mode=RunMode.CORTEX_EXEC,
+ user_query=user_query,
+ started_at=datetime.datetime.now().isoformat(),
+ session_id=self.current_session_id or "",
+ )
+ self.current_run = run
+
+ console.print()
+ console.print(
+ f"[bold {PURPLE_LIGHT}]🚀 Executing commands with conflict detection...[/bold {PURPLE_LIGHT}]"
+ )
+ console.print()
+
+ # Phase 1: Conflict Detection
+ console.print(f"[{GRAY}]Checking for conflicts...[/{GRAY}]")
+
+ cleanup_commands = []
+ for cmd, purpose, protected in commands:
+ conflict = self._conflict_detector.check_for_conflicts(cmd, purpose)
+ if conflict["has_conflict"]:
+ console.print(
+ f"[{YELLOW}] ⚠ {conflict['conflict_type']}: {conflict['suggestion']}[/{YELLOW}]"
+ )
+ if conflict["cleanup_commands"]:
+ cleanup_commands.extend(conflict["cleanup_commands"])
+
+ if cleanup_commands:
+ console.print("[dim]Running cleanup commands...[/dim]")
+ for cleanup_cmd in cleanup_commands:
+ self._execute_single_command(cleanup_cmd, needs_sudo=True)
+
+ console.print()
+
+ all_protected = set()
+ for _, _, protected in commands:
+ all_protected.update(protected)
+
+ if all_protected:
+ console.print(f"[dim]📁 Protected paths involved: {', '.join(all_protected)}[/dim]")
+ console.print()
+
+ # Phase 2: Execute Commands
+        from rich.panel import Panel
+
+ for i, (cmd, purpose, protected) in enumerate(commands, 1):
+            # Render each command in a visually distinct panel
+
+ console.print()
+ console.print(
+ Panel(
+ f"[bold {PURPLE_LIGHT}]{cmd}[/bold {PURPLE_LIGHT}]\n[{GRAY}]└─ {purpose}[/{GRAY}]",
+ title=f"[bold {WHITE}] Command {i}/{len(commands)} [/bold {WHITE}]",
+ title_align="left",
+ border_style=PURPLE,
+ padding=(0, 1),
+ )
+ )
+
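+            # Ask the file analyzer whether files this command touches already exist and are still useful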
+ file_check = self._file_analyzer.check_file_exists_and_usefulness(
+ cmd, purpose, user_query
+ )
+
+ if file_check["recommendations"]:
+ self._file_analyzer.apply_file_recommendations(file_check["recommendations"])
+
+ cmd_log = CommandLog(
+ command=cmd,
+ purpose=purpose,
+ timestamp=datetime.datetime.now().isoformat(),
+ status=CommandStatus.RUNNING,
+ )
+
+ start_time = time.time()
+ needs_sudo = self._needs_sudo(cmd, protected)
+
+ success, stdout, stderr = self._execute_single_command(cmd, needs_sudo)
+
+ if not success:
+ diagnosis = self._diagnoser.diagnose_error(cmd, stderr)
+
+ # Create error panel for visual grouping
+ error_info = (
+ f"[bold {RED}]{ICON_ERROR} {diagnosis['description']}[/bold {RED}]\n"
+ f"[{GRAY}]Type: {diagnosis['error_type']} | Category: {diagnosis.get('category', 'unknown')}[/{GRAY}]"
+ )
+ console.print(
+ Panel(
+ error_info,
+ title=f"[bold {RED}] {ICON_ERROR} Error Detected [/bold {RED}]",
+ title_align="left",
+ border_style=RED,
+ padding=(0, 1),
+ )
+ )
+
+ # Check if this is a login/credential required error
+ if diagnosis.get("category") == "login_required":
+ console.print(
+ Panel(
+ f"[bold {PURPLE_LIGHT}]🔐 Authentication required for this command[/bold {PURPLE_LIGHT}]",
+ border_style=PURPLE,
+ padding=(0, 1),
+ expand=False,
+ )
+ )
+
+ login_success, login_msg = self._login_handler.handle_login(cmd, stderr)
+
+ if login_success:
+ console.print(
+ Panel(
+ f"[bold {GREEN}]{ICON_SUCCESS} {login_msg}[/bold {GREEN}]\n[{GRAY}]Retrying command...[/{GRAY}]",
+ border_style=PURPLE,
+ padding=(0, 1),
+ expand=False,
+ )
+ )
+
+ # Retry the command after successful login
+ success, stdout, stderr = self._execute_single_command(cmd, needs_sudo)
+
+ if success:
+ console.print(
+ Panel(
+ f"[bold {GREEN}]{ICON_SUCCESS} Command succeeded after authentication![/bold {GREEN}]",
+ border_style=PURPLE,
+ padding=(0, 1),
+ expand=False,
+ )
+ )
+ else:
+ console.print(
+ Panel(
+ f"[bold {YELLOW}]Command still failed after login[/bold {YELLOW}]\n[{GRAY}]{stderr[:100]}[/{GRAY}]",
+ border_style=PURPLE,
+ padding=(0, 1),
+ )
+ )
+ else:
+ console.print(f"[{YELLOW}]{login_msg}[/{YELLOW}]")
+ else:
+ # Not a login error, proceed with regular error handling
+ extra_info = []
+ if diagnosis.get("extracted_path"):
+ extra_info.append(f"[{GRAY}]Path:[/{GRAY}] {diagnosis['extracted_path']}")
+ if diagnosis.get("extracted_info"):
+ for key, value in diagnosis["extracted_info"].items():
+ if value:
+ extra_info.append(f"[{GRAY}]{key}:[/{GRAY}] {value}")
+
+ if extra_info:
+ console.print(
+ Panel(
+ "\n".join(extra_info),
+                                title=f"[{GRAY}] Error Details [/{GRAY}]",
+ title_align="left",
+ border_style=GRAY,
+ padding=(0, 1),
+ expand=False,
+ )
+ )
+
+ fixed, fix_message, fix_commands = self._auto_fixer.auto_fix_error(
+ cmd, stderr, diagnosis, max_attempts=3
+ )
+
+ if fixed:
+ success = True
+ console.print(
+ Panel(
+ f"[bold {GREEN}]{ICON_SUCCESS} Auto-fixed:[/bold {GREEN}] [{WHITE}]{fix_message}[/{WHITE}]",
+ title=f"[bold {GREEN}] Fix Successful [/bold {GREEN}]",
+ title_align="left",
+ border_style=PURPLE,
+ padding=(0, 1),
+ expand=False,
+ )
+ )
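+                        # Re-run the original command now that the auto-fix has been applied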
+ _, stdout, stderr = self._execute_single_command(cmd, needs_sudo=True)
+ else:
+ fix_info = []
+ if fix_commands:
+ fix_info.append(
+ f"[{GRAY}]Attempted:[/{GRAY}] {len(fix_commands)} fix command(s)"
+ )
+ fix_info.append(
+ f"[bold {YELLOW}]Result:[/bold {YELLOW}] [{WHITE}]{fix_message}[/{WHITE}]"
+ )
+ console.print(
+ Panel(
+ "\n".join(fix_info),
+ title=f"[bold {YELLOW}] Fix Incomplete [/bold {YELLOW}]",
+ title_align="left",
+ border_style=PURPLE,
+ padding=(0, 1),
+ )
+ )
+
+ cmd_log.duration_seconds = time.time() - start_time
+ cmd_log.output = stdout
+ cmd_log.error = stderr
+ cmd_log.status = CommandStatus.SUCCESS if success else CommandStatus.FAILED
+
+ run.commands.append(cmd_log)
+ run.files_accessed.extend(protected)
+
+ if success:
+ console.print(
+ Panel(
+ f"[bold {GREEN}]{ICON_SUCCESS} Success[/bold {GREEN}] [{GRAY}]({cmd_log.duration_seconds:.2f}s)[/{GRAY}]",
+ border_style=PURPLE,
+ padding=(0, 1),
+ expand=False,
+ )
+ )
+ if stdout:
+ self._show_expandable_output(stdout, cmd)
+ else:
+ console.print(
+ Panel(
+ f"[bold {RED}]{ICON_ERROR} Failed[/bold {RED}]\n[{GRAY}]{stderr[:200]}[/{GRAY}]",
+ border_style=RED,
+ padding=(0, 1),
+ )
+ )
+
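+                # Diagnose the final failure so we can surface suggested manual fixes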
+ final_diagnosis = self._diagnoser.diagnose_error(cmd, stderr)
+ if final_diagnosis["fix_commands"] and not final_diagnosis["can_auto_fix"]:
+ # Create a manual intervention panel
+ manual_content = [
+ f"[bold {YELLOW}]Issue:[/bold {YELLOW}] [{WHITE}]{final_diagnosis['description']}[/{WHITE}]",
+ "",
+ ]
+ manual_content.append(f"[bold {WHITE}]Suggested commands:[/bold {WHITE}]")
+ for fix_cmd in final_diagnosis["fix_commands"]:
+ if not fix_cmd.startswith("#"):
+ manual_content.append(f" [{PURPLE_LIGHT}]$ {fix_cmd}[/{PURPLE_LIGHT}]")
+ else:
+ manual_content.append(f" [{GRAY}]{fix_cmd}[/{GRAY}]")
+
+ console.print(
+ Panel(
+ "\n".join(manual_content),
+ title=f"[bold {YELLOW}] 💡 Manual Intervention Required [/bold {YELLOW}]",
+ title_align="left",
+ border_style=PURPLE,
+ padding=(0, 1),
+ )
+ )
+
+ console.print()
+
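+            # Clear any per-command privilege grants before the next command runs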
+ self._granted_privileges = []
+
+ # Phase 3: Verification Tests
+ console.print()
+ console.print(
+ Panel(
+ f"[bold {WHITE}]Running verification tests...[/bold {WHITE}]",
+ title=f"[bold {PURPLE_LIGHT}] 🧪 Verification Phase [/bold {PURPLE_LIGHT}]",
+ title_align="left",
+ border_style=PURPLE,
+ padding=(0, 1),
+ expand=False,
+ )
+ )
+ all_tests_passed, test_results = self._verification_runner.run_verification_tests(
+ run.commands, user_query
+ )
+
+ # Phase 4: Auto-repair if tests failed
+ if not all_tests_passed:
+ console.print()
+ console.print(
+ Panel(
+ f"[bold {YELLOW}]Attempting to repair test failures...[/bold {YELLOW}]",
+ title=f"[bold {YELLOW}] 🔧 Auto-Repair Phase [/bold {YELLOW}]",
+ title_align="left",
+ border_style=PURPLE,
+ padding=(0, 1),
+ expand=False,
+ )
+ )
+
+ repair_success = self._handle_test_failures(test_results, run)
+
+ if repair_success:
+ console.print(f"[{GRAY}]Re-running verification tests...[/{GRAY}]")
+ all_tests_passed, test_results = self._verification_runner.run_verification_tests(
+ run.commands, user_query
+ )
+
+ run.completed_at = datetime.datetime.now().isoformat()
+ run.summary = self._generate_summary(run)
+
+ if test_results:
+ passed = sum(1 for t in test_results if t["passed"])
+ run.summary += f" | Tests: {passed}/{len(test_results)} passed"
+
+ self.db.save_run(run)
+
+ # Generate LLM summary/answer
+ llm_answer = self._generate_llm_answer(run, user_query)
+
+ # Print condensed execution summary with answer
+ self._print_execution_summary(run, answer=llm_answer)
+
+ console.print()
+ console.print(f"[dim]Run ID: {run.run_id}[/dim]")
+
+ return run
+
+ def _handle_resource_conflict(
+ self,
+ idx: int,
+ cmd: str,
+ conflict: dict,
+ commands_to_skip: set,
+ cleanup_commands: list,
+ ) -> bool:
+ """Handle any resource conflict with user options.
+
+ This is a GENERAL handler for all resource types:
+ - Docker containers
+ - Services
+ - Files/directories
+ - Packages
+ - Ports
+ - Users/groups
+ - Virtual environments
+ - Databases
+ - Cron jobs
+ """
+ resource_type = conflict.get("resource_type", "resource")
+ resource_name = conflict.get("resource_name", "unknown")
+ conflict_type = conflict.get("conflict_type", "unknown")
+ suggestion = conflict.get("suggestion", "")
+ is_active = conflict.get("is_active", True)
+ alternatives = conflict.get("alternative_actions", [])
+
+ # Resource type icons
+ icons = {
+ "container": "🐳",
+ "compose": "🐳",
+ "service": "⚙️",
+ "file": "📄",
+ "directory": "📁",
+ "package": "📦",
+ "pip_package": "🐍",
+ "npm_package": "📦",
+ "port": "🔌",
+ "user": "👤",
+ "group": "👥",
+ "venv": "🐍",
+ "mysql_database": "🗄️",
+ "postgres_database": "🗄️",
+ "cron_job": "⏰",
+ }
+ icon = icons.get(resource_type, "📌")
+
+ # Display the conflict with visual grouping
+ from rich.panel import Panel
+
+ status_text = (
+ f"[bold {PURPLE_LIGHT}]Active[/bold {PURPLE_LIGHT}]"
+ if is_active
+ else f"[{GRAY}]Inactive[/{GRAY}]"
+ )
+ conflict_content = (
+ f"{icon} [bold {WHITE}]{resource_type.replace('_', ' ').title()}:[/bold {WHITE}] '{resource_name}'\n"
+ f"[{GRAY}]Status:[/{GRAY}] {status_text}\n"
+ f"[{GRAY}]{suggestion}[/{GRAY}]"
+ )
+
+ console.print()
+ console.print(
+ Panel(
+ conflict_content,
+ title=f"[bold {YELLOW}] ⚠️ Resource Conflict [/bold {YELLOW}]",
+ title_align="left",
+ border_style=PURPLE,
+ padding=(0, 1),
+ )
+ )
+
+ # If there are alternatives, show them
+ if alternatives:
+ options_content = [f"[bold {WHITE}]What would you like to do?[/bold {WHITE}]", ""]
+ for j, alt in enumerate(alternatives, 1):
+ options_content.append(f" [{WHITE}]{j}. {alt['description']}[/{WHITE}]")
+
+ console.print(
+ Panel(
+ "\n".join(options_content),
+ border_style=GRAY,
+ padding=(0, 1),
+ )
+ )
+
+ from rich.prompt import Prompt
+
+ choice = Prompt.ask(
+ " Choose an option",
+ choices=[str(k) for k in range(1, len(alternatives) + 1)],
+ default="1",
+ )
+
+ selected = alternatives[int(choice) - 1]
+ action = selected["action"]
+ action_commands = selected.get("commands", [])
+
+ # Handle different actions
+ if action in ["use_existing", "use_different"]:
+ console.print(
+ f"[{GREEN}] {ICON_SUCCESS} Using existing {resource_type} '{resource_name}'[/{GREEN}]"
+ )
+ commands_to_skip.add(idx)
+ return True
+
+ elif action == "start_existing":
+ console.print(
+ f"[{PURPLE_LIGHT}] Starting existing {resource_type}...[/{PURPLE_LIGHT}]"
+ )
+ for start_cmd in action_commands:
+ needs_sudo = start_cmd.startswith("sudo")
+ success, _, stderr = self._execute_single_command(
+ start_cmd, needs_sudo=needs_sudo
+ )
+ if success:
+ console.print(f"[{GREEN}] {ICON_SUCCESS} {start_cmd}[/{GREEN}]")
+ else:
+ console.print(f"[{RED}] {ICON_ERROR} {start_cmd}: {stderr[:50]}[/{RED}]")
+ commands_to_skip.add(idx)
+ return True
+
+ elif action in ["restart", "upgrade", "reinstall"]:
+                # Build a readable progress label ("Restarting", "Upgrading", "Reinstalling")
+                action_labels = {"restart": "Restarting", "upgrade": "Upgrading", "reinstall": "Reinstalling"}
+                console.print(
+                    f"[{PURPLE_LIGHT}]   {action_labels[action]} {resource_type}...[/{PURPLE_LIGHT}]"
+                )
+ for action_cmd in action_commands:
+ needs_sudo = action_cmd.startswith("sudo")
+ success, _, stderr = self._execute_single_command(
+ action_cmd, needs_sudo=needs_sudo
+ )
+ if success:
+ console.print(f"[{GREEN}] {ICON_SUCCESS} {action_cmd}[/{GREEN}]")
+ else:
+ console.print(f"[{RED}] {ICON_ERROR} {action_cmd}: {stderr[:50]}[/{RED}]")
+ commands_to_skip.add(idx)
+ return True
+
+ elif action in ["recreate", "backup", "replace", "stop_existing"]:
+ console.print(
+ f"[{PURPLE_LIGHT}] Preparing to {action.replace('_', ' ')}...[/{PURPLE_LIGHT}]"
+ )
+ for action_cmd in action_commands:
+ needs_sudo = action_cmd.startswith("sudo")
+ success, _, stderr = self._execute_single_command(
+ action_cmd, needs_sudo=needs_sudo
+ )
+ if success:
+ console.print(f"[{GREEN}] {ICON_SUCCESS} {action_cmd}[/{GREEN}]")
+ else:
+ console.print(f"[{RED}] {ICON_ERROR} {action_cmd}: {stderr[:50]}[/{RED}]")
+ # Don't skip - let the original command run after cleanup
+ return True
+
+ elif action == "modify":
+ console.print(
+ f"[{PURPLE_LIGHT}] Will modify existing {resource_type}[/{PURPLE_LIGHT}]"
+ )
+ # Don't skip - let the original command run to modify
+ return True
+
+ elif action == "install_first":
+ # Install a missing tool/dependency first
+ console.print(
+ f"[{PURPLE_LIGHT}] Installing required dependency '{resource_name}'...[/{PURPLE_LIGHT}]"
+ )
+ all_success = True
+ for action_cmd in action_commands:
+ needs_sudo = action_cmd.startswith("sudo")
+ success, stdout, stderr = self._execute_single_command(
+ action_cmd, needs_sudo=needs_sudo
+ )
+ if success:
+ console.print(f"[{GREEN}] {ICON_SUCCESS} {action_cmd}[/{GREEN}]")
+ else:
+ console.print(f"[{RED}] {ICON_ERROR} {action_cmd}: {stderr[:50]}[/{RED}]")
+ all_success = False
+
+ if all_success:
+ console.print(
+ f"[{GREEN}] {ICON_SUCCESS} '{resource_name}' installed. Continuing with original command...[/{GREEN}]"
+ )
+ # Don't skip - run the original command now that the tool is installed
+ return True
+ else:
+ console.print(
+ f"[{RED}] {ICON_ERROR} Failed to install '{resource_name}'[/{RED}]"
+ )
+ commands_to_skip.add(idx)
+ return True
+
+ elif action == "use_apt":
+ # User chose to use apt instead of snap
+ console.print(
+ f"[{PURPLE_LIGHT}] Skipping snap command - use apt instead[/{PURPLE_LIGHT}]"
+ )
+ commands_to_skip.add(idx)
+ return True
+
+ elif action == "refresh":
+ # Refresh snap package
+ console.print(f"[{PURPLE_LIGHT}] Refreshing snap package...[/{PURPLE_LIGHT}]")
+ for action_cmd in action_commands:
+ needs_sudo = action_cmd.startswith("sudo")
+ success, _, stderr = self._execute_single_command(
+ action_cmd, needs_sudo=needs_sudo
+ )
+ if success:
+ console.print(f"[{GREEN}] {ICON_SUCCESS} {action_cmd}[/{GREEN}]")
+ else:
+ console.print(f"[{RED}] {ICON_ERROR} {action_cmd}: {stderr[:50]}[/{RED}]")
+ commands_to_skip.add(idx)
+ return True
+
+ # No alternatives - use default behavior (add to cleanup if available)
+ if conflict.get("cleanup_commands"):
+ cleanup_commands.extend(conflict["cleanup_commands"])
+
+ return False
+
+ def _handle_test_failures(
+ self,
+ test_results: list[dict[str, Any]],
+ run: DoRun,
+ ) -> bool:
+ """Handle failed verification tests by attempting auto-repair."""
+ failed_tests = [t for t in test_results if not t["passed"]]
+
+ if not failed_tests:
+ return True
+
+ console.print()
+ console.print(f"[bold {YELLOW}]🔧 Attempting to fix test failures...[/bold {YELLOW}]")
+
+ all_fixed = True
+
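+        # Pick a targeted repair strategy based on which verification command failed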
+ for test in failed_tests:
+ test_name = test["test"]
+ output = test["output"]
+
+ console.print(f"[{GRAY}] Fixing: {test_name}[/{GRAY}]")
+
+ if "nginx -t" in test_name:
+ diagnosis = self._diagnoser.diagnose_error("nginx -t", output)
+ fixed, msg, _ = self._auto_fixer.auto_fix_error(
+ "nginx -t", output, diagnosis, max_attempts=3
+ )
+ if fixed:
+ console.print(f"[{GREEN}] {ICON_SUCCESS} Fixed: {msg}[/{GREEN}]")
+ else:
+ console.print(f"[{RED}] {ICON_ERROR} Could not fix: {msg}[/{RED}]")
+ all_fixed = False
+
+ elif "apache2ctl" in test_name:
+ diagnosis = self._diagnoser.diagnose_error("apache2ctl configtest", output)
+ fixed, msg, _ = self._auto_fixer.auto_fix_error(
+ "apache2ctl configtest", output, diagnosis, max_attempts=3
+ )
+ if fixed:
+ console.print(f"[{GREEN}] {ICON_SUCCESS} Fixed: {msg}[/{GREEN}]")
+ else:
+ all_fixed = False
+
+ elif "systemctl is-active" in test_name:
+ import re
+
+ svc_match = re.search(r"is-active\s+(\S+)", test_name)
+ if svc_match:
+ service = svc_match.group(1)
+ success, _, err = self._execute_single_command(
+ f"sudo systemctl start {service}", needs_sudo=True
+ )
+ if success:
+ console.print(
+ f"[{GREEN}] {ICON_SUCCESS} Started service {service}[/{GREEN}]"
+ )
+ else:
+ console.print(
+ f"[{YELLOW}] ⚠ Could not start {service}: {err[:50]}[/{YELLOW}]"
+ )
+
+ elif "file exists" in test_name:
+ import re
+
+ path_match = re.search(r"file exists: (.+)", test_name)
+ if path_match:
+ path = path_match.group(1)
+ parent = os.path.dirname(path)
+ if parent and not os.path.exists(parent):
+ self._execute_single_command(f"sudo mkdir -p {parent}", needs_sudo=True)
+ console.print(
+ f"[{GREEN}] {ICON_SUCCESS} Created directory {parent}[/{GREEN}]"
+ )
+
+ return all_fixed
+
+ def execute_with_task_tree(
+ self,
+ commands: list[tuple[str, str, list[str]]],
+ user_query: str,
+ ) -> DoRun:
+ """Execute commands using the task tree system with advanced auto-repair."""
+ # Reset execution state for new run
+ self._reset_execution_state()
+
+ run = DoRun(
+ run_id=self.db._generate_run_id(),
+ summary="",
+ mode=RunMode.CORTEX_EXEC,
+ user_query=user_query,
+ started_at=datetime.datetime.now().isoformat(),
+ session_id=self.current_session_id or "",
+ )
+ self.current_run = run
+ self._permission_requests_count = 0
+
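+        # Build a flat task tree: one root task per planned command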
+ self._task_tree = TaskTree()
+ for cmd, purpose, protected in commands:
+ task = self._task_tree.add_root_task(cmd, purpose)
+ task.reasoning = f"Protected paths: {', '.join(protected)}" if protected else ""
+
+ console.print()
+ console.print(
+ Panel(
+ f"[bold {PURPLE_LIGHT}]🌳 Task Tree Execution Mode[/bold {PURPLE_LIGHT}]\n"
+ f"[{GRAY}]Commands will be executed with auto-repair capabilities.[/{GRAY}]\n"
+ f"[{GRAY}]Conflict detection and verification tests enabled.[/{GRAY}]\n"
+ f"[{YELLOW}]Press Ctrl+Z or Ctrl+C to stop execution at any time.[/{YELLOW}]",
+ expand=False,
+ border_style=PURPLE,
+ )
+ )
+ console.print()
+
+ # Set up signal handlers for Ctrl+Z and Ctrl+C
+ self._setup_signal_handlers()
+
+ # Phase 1: Conflict Detection - Claude-like header
+ console.print(
+ f"[bold {PURPLE}]━━━[/bold {PURPLE}] [bold {WHITE}]Checking for Conflicts[/bold {WHITE}]"
+ )
+
+ conflicts_found = []
+ cleanup_commands = []
+ commands_to_skip = set() # Track commands that should be skipped (use existing)
+ commands_to_replace = {} # Track commands that should be replaced
+ resource_decisions = {} # Track user decisions for each resource to avoid duplicate prompts
+
+ for i, (cmd, purpose, protected) in enumerate(commands):
+ conflict = self._conflict_detector.check_for_conflicts(cmd, purpose)
+ if conflict["has_conflict"]:
+ conflicts_found.append((i, cmd, conflict))
+
+ if conflicts_found:
+ # Deduplicate conflicts by resource name
+ unique_resources = {}
+ for idx, cmd, conflict in conflicts_found:
+ resource_name = conflict.get("resource_name", cmd)
+ if resource_name not in unique_resources:
+ unique_resources[resource_name] = []
+ unique_resources[resource_name].append((idx, cmd, conflict))
+
+ console.print(
+ f" [{YELLOW}]{ICON_PENDING}[/{YELLOW}] Found [bold {WHITE}]{len(unique_resources)}[/bold {WHITE}] unique conflict(s)"
+ )
+
+ for resource_name, resource_conflicts in unique_resources.items():
+ # Only ask once per unique resource
+ first_idx, first_cmd, first_conflict = resource_conflicts[0]
+
+ # Handle the first conflict to get user's decision
+ decision = self._handle_resource_conflict(
+ first_idx, first_cmd, first_conflict, commands_to_skip, cleanup_commands
+ )
+ resource_decisions[resource_name] = decision
+
+ # Apply the same decision to all other commands affecting this resource
+                if len(resource_conflicts) > 1 and first_idx in commands_to_skip:
+                    for idx, _cmd, _conflict in resource_conflicts[1:]:
+                        commands_to_skip.add(idx)
+
+ # Run cleanup commands for non-Docker conflicts
+ if cleanup_commands:
+ console.print("[dim] Running cleanup commands...[/dim]")
+ for cleanup_cmd in cleanup_commands:
+ self._execute_single_command(cleanup_cmd, needs_sudo=True)
+ console.print(f"[dim] ✓ {cleanup_cmd}[/dim]")
+
+ # Filter out skipped commands
+ if commands_to_skip:
+ filtered_commands = [
+ (cmd, purpose, protected)
+ for i, (cmd, purpose, protected) in enumerate(commands)
+ if i not in commands_to_skip
+ ]
+ # Update task tree to skip these tasks
+ for task in self._task_tree.root_tasks:
+ task_idx = next(
+ (i for i, (c, p, pr) in enumerate(commands) if c == task.command), None
+ )
+ if task_idx in commands_to_skip:
+ task.status = CommandStatus.SKIPPED
+ task.output = "Using existing resource"
+ commands = filtered_commands
+ else:
+ console.print(f" [{GREEN}]{ICON_SUCCESS}[/{GREEN}] No conflicts detected")
+
+ console.print()
+
+ all_protected = set()
+ for _, _, protected in commands:
+ all_protected.update(protected)
+
+ if all_protected:
+ console.print(f"[{GRAY}]📁 Protected paths: {', '.join(all_protected)}[/{GRAY}]")
+ console.print()
+
+ try:
+ # Phase 2: Execute Commands - Claude-like header
+ console.print()
+ console.print(
+ f"[bold {PURPLE}]━━━[/bold {PURPLE}] [bold {WHITE}]Executing Commands[/bold {WHITE}]"
+ )
+ console.print()
+
+ # Track remaining commands for resume functionality
+ executed_tasks = set()
+ for i, root_task in enumerate(self._task_tree.root_tasks):
+ if self._interrupted:
+ # Store remaining tasks for potential continuation
+ remaining_tasks = self._task_tree.root_tasks[i:]
+ self._remaining_commands = [
+ (t.command, t.purpose, [])
+ for t in remaining_tasks
+ if t.status not in (CommandStatus.SUCCESS, CommandStatus.SKIPPED)
+ ]
+ break
+ self._execute_task_node(root_task, run, commands)
+ executed_tasks.add(root_task.id)
+
+ if not self._interrupted:
+ # Phase 3: Verification Tests - Claude-like header
+ console.print()
+ console.print(
+ f"[bold {PURPLE}]━━━[/bold {PURPLE}] [bold {WHITE}]Verification[/bold {WHITE}]"
+ )
+
+ all_tests_passed, test_results = self._verification_runner.run_verification_tests(
+ run.commands, user_query
+ )
+
+ # Phase 4: Auto-repair if tests failed
+ if not all_tests_passed:
+ console.print()
+ console.print(
+ f"[bold {PURPLE}]━━━[/bold {PURPLE}] [bold {WHITE}]Auto-Repair[/bold {WHITE}]"
+ )
+
+ repair_success = self._handle_test_failures(test_results, run)
+
+ if repair_success:
+ console.print()
+ console.print(f"[{GRAY}] Re-running verification tests...[/{GRAY}]")
+ all_tests_passed, test_results = (
+ self._verification_runner.run_verification_tests(
+ run.commands, user_query
+ )
+ )
+ else:
+ all_tests_passed = False
+ test_results = []
+
+ run.completed_at = datetime.datetime.now().isoformat()
+
+ if self._interrupted:
+ run.summary = f"INTERRUPTED after {len(self._executed_commands)} command(s)"
+ else:
+ run.summary = self._generate_tree_summary(run)
+ if test_results:
+ passed = sum(1 for t in test_results if t["passed"])
+ run.summary += f" | Tests: {passed}/{len(test_results)} passed"
+
+ self.db.save_run(run)
+
+ console.print()
+ console.print("[bold]Task Execution Tree:[/bold]")
+ self._task_tree.print_tree()
+
+ # Generate LLM summary/answer if available
+ llm_answer = None
+ if not self._interrupted:
+ llm_answer = self._generate_llm_answer(run, user_query)
+
+ # Print condensed execution summary with answer
+ self._print_execution_summary(run, answer=llm_answer)
+
+ console.print()
+ if self._interrupted:
+ console.print(f"[dim]Run ID: {run.run_id} (interrupted)[/dim]")
+ elif all_tests_passed:
+ console.print(f"[dim]Run ID: {run.run_id}[/dim]")
+
+ if self._permission_requests_count > 1:
+ console.print(
+ f"[dim]Permission requests made: {self._permission_requests_count}[/dim]"
+ )
+
+ # Reset interrupted flag before interactive session
+ # This allows the user to continue the session even after stopping a command
+ was_interrupted = self._interrupted
+ self._interrupted = False
+
+ # Always go to interactive session - even after interruption
+ # User can decide what to do next (retry, skip, exit)
+ self._interactive_session(run, commands, user_query, was_interrupted=was_interrupted)
+
+ return run
+
+ finally:
+ # Always restore signal handlers
+ self._restore_signal_handlers()
+
+ def _interactive_session(
+ self,
+ run: DoRun,
+ commands: list[tuple[str, str, list[str]]],
+ user_query: str,
+ was_interrupted: bool = False,
+ ) -> None:
+ """Interactive session after task completion - suggest next steps.
+
+ If was_interrupted is True, the previous command execution was stopped
+ by Ctrl+Z/Ctrl+C. We still continue the session so the user can decide
+        what to do next (retry, skip the remaining commands, run a different command, etc.).
+ """
+ import sys
+
+ from rich.prompt import Prompt
+
+ # Flush any pending output to ensure clean display
+ sys.stdout.flush()
+ sys.stderr.flush()
+
+ # Generate context-aware suggestions based on what was done
+ suggestions = self._generate_suggestions(run, commands, user_query)
+
+ # If interrupted, add special suggestions at the beginning
+ if was_interrupted:
+ interrupted_suggestions = [
+ {
+ "label": "🔄 Retry interrupted command",
+ "description": "Try running the interrupted command again",
+ "type": "retry_interrupted",
+ },
+ {
+ "label": "⏭️ Skip and continue",
+ "description": "Skip the interrupted command and continue with remaining tasks",
+ "type": "skip_and_continue",
+ },
+ ]
+ suggestions = interrupted_suggestions + suggestions
+
+ # Track context for natural language processing
+ context = {
+ "original_query": user_query,
+ "executed_commands": [cmd for cmd, _, _ in commands],
+ "session_actions": [],
+ "was_interrupted": was_interrupted,
+ }
+
+ console.print()
+ if was_interrupted:
+ console.print(
+ f"[bold {YELLOW}]━━━[/bold {YELLOW}] [bold {WHITE}]Execution Interrupted - What would you like to do?[/bold {WHITE}]"
+ )
+ else:
+ console.print(
+ f"[bold {PURPLE}]━━━[/bold {PURPLE}] [bold {WHITE}]Next Steps[/bold {WHITE}]"
+ )
+ console.print()
+
+ # Display suggestions
+ self._display_suggestions(suggestions)
+
+ console.print()
+ console.print(f"[{GRAY}]You can type any request in natural language[/{GRAY}]")
+ console.print()
+
+ # Ensure prompt is visible
+ sys.stdout.flush()
+
+ while True:
+ try:
+ response = Prompt.ask(
+ f"[bold {PURPLE_LIGHT}]{ICON_CMD}[/bold {PURPLE_LIGHT}]", default="exit"
+ )
+
+ response_stripped = response.strip()
+ response_lower = response_stripped.lower()
+
+ # Check for exit keywords
+ if response_lower in [
+ "exit",
+ "quit",
+ "done",
+ "no",
+ "n",
+ "bye",
+ "thanks",
+ "nothing",
+ "",
+ ]:
+ console.print(
+ "[dim]👋 Session ended. Run 'cortex do history' to see past runs.[/dim]"
+ )
+ break
+
+ # Try to parse as number (for suggestion selection)
+ try:
+ choice = int(response_stripped)
+ if suggestions and 1 <= choice <= len(suggestions):
+ suggestion = suggestions[choice - 1]
+ self._execute_suggestion(suggestion, run, user_query)
+ context["session_actions"].append(suggestion.get("label", ""))
+
+ # Update last query to the suggestion for context-aware follow-ups
+ suggestion_label = suggestion.get("label", "")
+ context["last_query"] = suggestion_label
+
+ # Continue the session with suggestions based on what was just done
+ console.print()
+ suggestions = self._generate_suggestions_for_query(
+ suggestion_label, context
+ )
+ self._display_suggestions(suggestions)
+ console.print()
+ continue
+ elif suggestions and choice == len(suggestions) + 1:
+ console.print("[dim]👋 Session ended.[/dim]")
+ break
+ except ValueError:
+ pass
+
+ # Handle natural language request
+ handled = self._handle_natural_language_request(
+ response_stripped, suggestions, context, run, commands
+ )
+
+ if handled:
+ context["session_actions"].append(response_stripped)
+ # Update context with the new query for better suggestions
+ context["last_query"] = response_stripped
+
+ # Refresh suggestions based on NEW query (not combined)
+ # This ensures suggestions are relevant to what user just asked
+ console.print()
+ suggestions = self._generate_suggestions_for_query(response_stripped, context)
+ self._display_suggestions(suggestions)
+ console.print()
+
+ except (EOFError, KeyboardInterrupt):
+ console.print("\n[dim]👋 Session ended.[/dim]")
+ break
+
+ # Cleanup: ensure any terminal monitors are stopped
+ if self._terminal_monitor:
+ self._terminal_monitor.stop()
+ self._terminal_monitor = None
+
+ def _generate_suggestions_for_query(self, query: str, context: dict) -> list[dict]:
+ """Generate suggestions based on the current query and context.
+
+ This generates follow-up suggestions relevant to what the user just asked/did,
+ not tied to the original task.
+ """
+ suggestions = []
+ query_lower = query.lower()
+
+ # User management related queries
+ if any(w in query_lower for w in ["user", "locked", "password", "account", "login"]):
+ suggestions.append(
+ {
+ "type": "info",
+ "icon": "👥",
+ "label": "List all users",
+ "description": "Show all system users",
+ "command": "cat /etc/passwd | cut -d: -f1",
+ "purpose": "List all users",
+ }
+ )
+ suggestions.append(
+ {
+ "type": "info",
+ "icon": "🔐",
+ "label": "Check sudo users",
+ "description": "Show users with sudo access",
+ "command": "getent group sudo",
+ "purpose": "List sudo group members",
+ }
+ )
+ suggestions.append(
+ {
+ "type": "action",
+ "icon": "🔓",
+ "label": "Unlock a user",
+ "description": "Unlock a locked user account",
+ "demo_type": "unlock_user",
+ }
+ )
+
+ # Service/process related queries
+ elif any(
+ w in query_lower for w in ["service", "systemctl", "running", "process", "status"]
+ ):
+ suggestions.append(
+ {
+ "type": "info",
+ "icon": "📊",
+ "label": "List running services",
+ "description": "Show all active services",
+ "command": "systemctl list-units --type=service --state=running",
+ "purpose": "List running services",
+ }
+ )
+ suggestions.append(
+ {
+ "type": "info",
+ "icon": "🔍",
+ "label": "Check failed services",
+ "description": "Show services that failed to start",
+ "command": "systemctl list-units --type=service --state=failed",
+ "purpose": "List failed services",
+ }
+ )
+
+ # Disk/storage related queries
+ elif any(w in query_lower for w in ["disk", "storage", "space", "mount", "partition"]):
+ suggestions.append(
+ {
+ "type": "info",
+ "icon": "💾",
+ "label": "Check disk usage",
+ "description": "Show disk space by partition",
+ "command": "df -h",
+ "purpose": "Check disk usage",
+ }
+ )
+ suggestions.append(
+ {
+ "type": "info",
+ "icon": "📁",
+ "label": "Find large files",
+ "description": "Show largest files on disk",
+ "command": "sudo du -ah / 2>/dev/null | sort -rh | head -20",
+ "purpose": "Find large files",
+ }
+ )
+
+ # Network related queries
+ elif any(w in query_lower for w in ["network", "ip", "port", "connection", "firewall"]):
+ suggestions.append(
+ {
+ "type": "info",
+ "icon": "🌐",
+ "label": "Show network interfaces",
+ "description": "Display IP addresses and interfaces",
+ "command": "ip addr show",
+ "purpose": "Show network interfaces",
+ }
+ )
+ suggestions.append(
+ {
+ "type": "info",
+ "icon": "🔌",
+ "label": "List open ports",
+ "description": "Show listening ports",
+ "command": "sudo ss -tlnp",
+ "purpose": "List open ports",
+ }
+ )
+
+ # Security related queries
+ elif any(w in query_lower for w in ["security", "audit", "log", "auth", "fail"]):
+ suggestions.append(
+ {
+ "type": "info",
+ "icon": "🔒",
+ "label": "Check auth logs",
+ "description": "Show recent authentication attempts",
+ "command": "sudo tail -50 /var/log/auth.log",
+ "purpose": "Check auth logs",
+ }
+ )
+ suggestions.append(
+ {
+ "type": "info",
+ "icon": "⚠️",
+ "label": "Check failed logins",
+ "description": "Show failed login attempts",
+ "command": "sudo lastb | head -20",
+ "purpose": "Check failed logins",
+ }
+ )
+
+ # Package/installation related queries
+ elif any(w in query_lower for w in ["install", "package", "apt", "update"]):
+ suggestions.append(
+ {
+ "type": "action",
+ "icon": "📦",
+ "label": "Update system",
+ "description": "Update package lists and upgrade",
+ "command": "sudo apt update && sudo apt upgrade -y",
+ "purpose": "Update system packages",
+ }
+ )
+ suggestions.append(
+ {
+ "type": "info",
+ "icon": "📋",
+ "label": "List installed packages",
+ "description": "Show recently installed packages",
+ "command": "apt list --installed 2>/dev/null | tail -20",
+ "purpose": "List installed packages",
+ }
+ )
+
+ # Default: generic helpful suggestions
+ if not suggestions:
+ suggestions.append(
+ {
+ "type": "info",
+ "icon": "📊",
+ "label": "System overview",
+ "description": "Show system info and resource usage",
+ "command": "uname -a && uptime && free -h",
+ "purpose": "System overview",
+ }
+ )
+ suggestions.append(
+ {
+ "type": "info",
+ "icon": "🔍",
+ "label": "Check system logs",
+ "description": "View recent system messages",
+ "command": "sudo journalctl -n 20 --no-pager",
+ "purpose": "Check system logs",
+ }
+ )
+
+ return suggestions
+
+ def _display_suggestions(self, suggestions: list[dict]) -> None:
+ """Display numbered suggestions."""
+ if not suggestions:
+ console.print(f"[{GRAY}]No specific suggestions available.[/{GRAY}]")
+ return
+
+ for i, suggestion in enumerate(suggestions, 1):
+ icon = suggestion.get("icon", "💡")
+ label = suggestion.get("label", "")
+ desc = suggestion.get("description", "")
+ console.print(
+ f" [{PURPLE_LIGHT}]{i}.[/{PURPLE_LIGHT}] {icon} [{WHITE}]{label}[/{WHITE}]"
+ )
+ if desc:
+ console.print(f" [{GRAY}]{desc}[/{GRAY}]")
+
+ console.print(f" [{PURPLE_LIGHT}]{len(suggestions) + 1}.[/{PURPLE_LIGHT}] 🚪 Exit session")
+
+ def _handle_natural_language_request(
+ self,
+ request: str,
+ suggestions: list[dict],
+ context: dict,
+ run: DoRun,
+ commands: list[tuple[str, str, list[str]]],
+ ) -> bool:
+ """Handle a natural language request from the user.
+
+ Uses LLM if available for full understanding, falls back to pattern matching.
+ Returns True if the request was handled, False otherwise.
+ """
+ request_lower = request.lower()
+
+ # Quick keyword matching for common actions (fast path)
+ keyword_handlers = [
+ (["start", "run", "begin", "launch", "execute"], "start"),
+ (["setup", "configure", "config", "set up"], "setup"),
+ (["demo", "example", "sample", "code"], "demo"),
+ (["test", "verify", "check", "validate"], "test"),
+ ]
+
+ # Check if request is a simple match to existing suggestions
+ for keywords, action_type in keyword_handlers:
+ if any(kw in request_lower for kw in keywords):
+ # Only use quick match if it's a very simple request
+ if len(request.split()) <= 4:
+ for suggestion in suggestions:
+ if suggestion.get("type") == action_type:
+ self._execute_suggestion(suggestion, run, context["original_query"])
+ return True
+
+ # Use LLM for full understanding if available
+ console.print()
+ console.print(f"[{PURPLE_LIGHT}]🤔 Understanding your request...[/{PURPLE_LIGHT}]")
+
+ if self.llm_callback:
+ return self._handle_request_with_llm(request, context, run, commands)
+ else:
+ # Fall back to pattern matching
+ return self._handle_request_with_patterns(request, context, run)
+
+ def _handle_request_with_llm(
+ self,
+ request: str,
+ context: dict,
+ run: DoRun,
+ commands: list[tuple[str, str, list[str]]],
+ ) -> bool:
+ """Handle request using LLM for full understanding."""
+ try:
+ # Call LLM to understand the request
+ llm_response = self.llm_callback(request, context)
+
+            if not llm_response or llm_response.get("response_type") == "error":
+                # Guard against a missing response before reading its error field
+                error_msg = (
+                    llm_response.get("error", "Unknown error") if llm_response else "No response from LLM"
+                )
+                console.print(
+                    f"[{YELLOW}]⚠ Could not process request: {error_msg}[/{YELLOW}]"
+                )
+                return False
+
+ response_type = llm_response.get("response_type")
+
+ # HARD CHECK: Filter out any raw JSON from reasoning field
+ reasoning = llm_response.get("reasoning", "")
+ if reasoning:
+ # Remove any JSON-like content from reasoning
+ import re
+
+ # If reasoning looks like JSON or contains JSON patterns, clean it
+ if (
+ reasoning.strip().startswith(("{", "[", "]", '"response_type"'))
+ or re.search(r'"do_commands"\s*:', reasoning)
+ or re.search(r'"command"\s*:', reasoning)
+ or re.search(r'"requires_sudo"\s*:', reasoning)
+ ):
+ # Extract just the text explanation if possible
+ text_match = re.search(r'"reasoning"\s*:\s*"([^"]+)"', reasoning)
+ if text_match:
+ reasoning = text_match.group(1)
+ else:
+ reasoning = "Processing your request..."
+ llm_response["reasoning"] = reasoning
+
+ # Handle do_commands - execute with confirmation
+ if response_type == "do_commands" and llm_response.get("do_commands"):
+ do_commands = llm_response["do_commands"]
+ reasoning = llm_response.get("reasoning", "")
+
+ # Final safety check: don't print JSON-looking reasoning
+ if reasoning and not self._is_json_like(reasoning):
+ console.print()
+ console.print(f"[{PURPLE_LIGHT}]🤖 {reasoning}[/{PURPLE_LIGHT}]")
+ console.print()
+
+ # Show commands and ask for confirmation
+ console.print(f"[bold {WHITE}]📋 Commands to execute:[/bold {WHITE}]")
+ for i, cmd_info in enumerate(do_commands, 1):
+ cmd = cmd_info.get("command", "")
+ purpose = cmd_info.get("purpose", "")
+ sudo = "🔐 " if cmd_info.get("requires_sudo") else ""
+ console.print(f" {i}. {sudo}[{GREEN}]{cmd}[/{GREEN}]")
+ if purpose:
+ console.print(f" [{GRAY}]{purpose}[/{GRAY}]")
+ console.print()
+
+ if not Confirm.ask("Execute these commands?", default=True):
+ console.print(f"[{GRAY}]Skipped.[/{GRAY}]")
+ return False
+
+ # Execute the commands
+ console.print()
+ from rich.panel import Panel
+
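+                # Track which commands succeeded so later suggestions can build on them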
+ executed_in_session = []
+ for idx, cmd_info in enumerate(do_commands, 1):
+ cmd = cmd_info.get("command", "")
+ purpose = cmd_info.get("purpose", "Execute command")
+ needs_sudo = cmd_info.get("requires_sudo", False) or self._needs_sudo(cmd, [])
+
+ # Create visual grouping for each command
+ console.print()
+ console.print(
+ Panel(
+ f"[bold {PURPLE_LIGHT}]{cmd}[/bold {PURPLE_LIGHT}]\n[{GRAY}]└─ {purpose}[/{GRAY}]",
+ title=f"[bold {WHITE}] Command {idx}/{len(do_commands)} [/bold {WHITE}]",
+ title_align="left",
+ border_style=PURPLE,
+ padding=(0, 1),
+ )
+ )
+
+ success, stdout, stderr = self._execute_single_command(cmd, needs_sudo)
+
+ if success:
+ console.print(
+ Panel(
+ f"[bold {GREEN}]{ICON_SUCCESS} Success[/bold {GREEN}]",
+ border_style=PURPLE,
+ padding=(0, 1),
+ expand=False,
+ )
+ )
+ if stdout:
+ output_preview = stdout[:300] + ("..." if len(stdout) > 300 else "")
+ console.print(f"[{GRAY}]{output_preview}[/{GRAY}]")
+ executed_in_session.append(cmd)
+ else:
+ console.print(
+ Panel(
+ f"[bold {RED}]{ICON_ERROR} Failed[/bold {RED}]\n[{GRAY}]{stderr[:150]}[/{GRAY}]",
+ border_style=RED,
+ padding=(0, 1),
+ )
+ )
+
+ # Offer to diagnose and fix
+ if Confirm.ask("Try to auto-fix?", default=True):
+ diagnosis = self._diagnoser.diagnose_error(cmd, stderr)
+ fixed, msg, _ = self._auto_fixer.auto_fix_error(cmd, stderr, diagnosis)
+ if fixed:
+ console.print(
+ Panel(
+ f"[bold {GREEN}]{ICON_SUCCESS} Fixed:[/bold {GREEN}] [{WHITE}]{msg}[/{WHITE}]",
+ border_style=PURPLE,
+ padding=(0, 1),
+ expand=False,
+ )
+ )
+ executed_in_session.append(cmd)
+
+ # Track executed commands in context for suggestion generation
+ if "executed_commands" not in context:
+ context["executed_commands"] = []
+ context["executed_commands"].extend(executed_in_session)
+
+ return True
+
+ # Handle single command - execute directly
+ elif response_type == "command" and llm_response.get("command"):
+ cmd = llm_response["command"]
+ reasoning = llm_response.get("reasoning", "")
+
+ console.print()
+ console.print(
+ f"[{PURPLE_LIGHT}]📋 Running:[/{PURPLE_LIGHT}] [{GREEN}]{cmd}[/{GREEN}]"
+ )
+ if reasoning:
+ console.print(f" [{GRAY}]{reasoning}[/{GRAY}]")
+
+ needs_sudo = self._needs_sudo(cmd, [])
+ success, stdout, stderr = self._execute_single_command(cmd, needs_sudo)
+
+ if success:
+ console.print(f"[{GREEN}]{ICON_SUCCESS} Success[/{GREEN}]")
+ if stdout:
+ console.print(
+ f"[{GRAY}]{stdout[:500]}{'...' if len(stdout) > 500 else ''}[/{GRAY}]"
+ )
+ else:
+ console.print(f"[{RED}]{ICON_ERROR} Failed: {stderr[:200]}[/{RED}]")
+
+ return True
+
+ # Handle answer - just display it (filter raw JSON)
+ elif response_type == "answer" and llm_response.get("answer"):
+ answer = llm_response["answer"]
+ # Don't print raw JSON or internal processing messages
+ if not (
+ self._is_json_like(answer)
+ or "I'm processing your request" in answer
+ or "I have a plan to execute" in answer
+ ):
+ console.print()
+ console.print(answer)
+ return True
+
+ else:
+ console.print(f"[{YELLOW}]I didn't understand that. Could you rephrase?[/{YELLOW}]")
+ return False
+
+ except Exception as e:
+ console.print(f"[{YELLOW}]⚠ Error processing request: {e}[/{YELLOW}]")
+ # Fall back to pattern matching
+ return self._handle_request_with_patterns(request, context, run)
+
+ def _handle_request_with_patterns(
+ self,
+ request: str,
+ context: dict,
+ run: DoRun,
+ ) -> bool:
+ """Handle request using pattern matching (fallback when LLM not available)."""
+ # Try to generate a command from the natural language request
+ generated = self._generate_command_from_request(request, context)
+
+ if generated:
+ cmd = generated.get("command")
+ purpose = generated.get("purpose", "Execute user request")
+ needs_confirm = generated.get("needs_confirmation", True)
+
+ console.print()
+ console.print(f"[{PURPLE_LIGHT}]📋 I'll run this command:[/{PURPLE_LIGHT}]")
+ console.print(f" [{GREEN}]{cmd}[/{GREEN}]")
+ console.print(f" [{GRAY}]{purpose}[/{GRAY}]")
+ console.print()
+
+ if needs_confirm:
+ if not Confirm.ask("Proceed?", default=True):
+ console.print(f"[{GRAY}]Skipped.[/{GRAY}]")
+ return False
+
+ # Execute the command
+ needs_sudo = self._needs_sudo(cmd, [])
+ success, stdout, stderr = self._execute_single_command(cmd, needs_sudo)
+
+ if success:
+ console.print(f"[{GREEN}]{ICON_SUCCESS} Success[/{GREEN}]")
+ if stdout:
+ output_preview = stdout[:500] + ("..." if len(stdout) > 500 else "")
+ console.print(f"[{GRAY}]{output_preview}[/{GRAY}]")
+ else:
+ console.print(f"[{RED}]{ICON_ERROR} Failed: {stderr[:200]}[/{RED}]")
+
+ # Offer to diagnose the error
+ if Confirm.ask("Would you like me to try to fix this?", default=True):
+ diagnosis = self._diagnoser.diagnose_error(cmd, stderr)
+ fixed, msg, _ = self._auto_fixer.auto_fix_error(cmd, stderr, diagnosis)
+ if fixed:
+ console.print(f"[{GREEN}]{ICON_SUCCESS} Fixed: {msg}[/{GREEN}]")
+
+ return True
+
+ # Couldn't understand the request
+ console.print(
+ f"[{YELLOW}]I'm not sure how to do that. Could you be more specific?[/{YELLOW}]"
+ )
+ console.print(
+ "[dim]Try something like: 'run the container', 'show me the config', or select a number.[/dim]"
+ )
+ return False
+
+ def _generate_command_from_request(
+ self,
+ request: str,
+ context: dict,
+ ) -> dict | None:
+ """Generate a command from a natural language request."""
+ request_lower = request.lower()
+
+ # Pattern matching for common requests
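+        # Each entry maps a regex (matched against the lowercased request) to a generator method below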
+ patterns = [
+ # Docker patterns
+ (r"run.*(?:container|image|docker)(?:.*port\s*(\d+))?", self._gen_docker_run),
+ (r"stop.*(?:container|docker)", self._gen_docker_stop),
+ (r"remove.*(?:container|docker)", self._gen_docker_remove),
+ (r"(?:show|list).*(?:containers?|images?)", self._gen_docker_list),
+ (r"logs?(?:\s+of)?(?:\s+the)?(?:\s+container)?", self._gen_docker_logs),
+ (r"exec.*(?:container|docker)|shell.*(?:container|docker)", self._gen_docker_exec),
+ # Service patterns
+ (
+ r"(?:start|restart).*(?:service|nginx|apache|postgres|mysql|redis)",
+ self._gen_service_start,
+ ),
+ (r"stop.*(?:service|nginx|apache|postgres|mysql|redis)", self._gen_service_stop),
+ (r"status.*(?:service|nginx|apache|postgres|mysql|redis)", self._gen_service_status),
+ # Package patterns
+ (r"install\s+(.+)", self._gen_install_package),
+ (r"update\s+(?:packages?|system)", self._gen_update_packages),
+ # File patterns
+ (
+ r"(?:show|cat|view|read).*(?:config|file|log)(?:.*?([/\w\.\-]+))?",
+ self._gen_show_file,
+ ),
+ (r"edit.*(?:config|file)(?:.*?([/\w\.\-]+))?", self._gen_edit_file),
+ # Info patterns
+ (r"(?:check|show|what).*(?:version|status)", self._gen_check_version),
+ (r"(?:how|where).*(?:connect|access|use)", self._gen_show_connection_info),
+ ]
+
+ import re
+
+ for pattern, handler in patterns:
+ match = re.search(pattern, request_lower)
+ if match:
+ return handler(request, match, context)
+
+ # Use LLM if available to generate command
+ if self.llm_callback:
+ return self._llm_generate_command(request, context)
+
+ return None
+
+ # Command generators
+ def _gen_docker_run(self, request: str, match, context: dict) -> dict:
+ # Find the image from context
+ executed = context.get("executed_commands", [])
+ image = "your-image"
+ for cmd in executed:
+ if "docker pull" in cmd:
+ image = cmd.split("docker pull")[-1].strip()
+ break
+
+ # Check for port in request
+ port = match.group(1) if match.lastindex and match.group(1) else "8080"
+ container_name = image.split("/")[-1].split(":")[0]
+
+ return {
+ "command": f"docker run -d --name {container_name} -p {port}:{port} {image}",
+ "purpose": f"Run {image} container on port {port}",
+ "needs_confirmation": True,
+ }
+
+ def _gen_docker_stop(self, request: str, match, context: dict) -> dict:
+ return {
+ "command": "docker ps -q | xargs -r docker stop",
+ "purpose": "Stop all running containers",
+ "needs_confirmation": True,
+ }
+
+ def _gen_docker_remove(self, request: str, match, context: dict) -> dict:
+ return {
+ "command": "docker ps -aq | xargs -r docker rm",
+ "purpose": "Remove all containers",
+ "needs_confirmation": True,
+ }
+
+ def _gen_docker_list(self, request: str, match, context: dict) -> dict:
+ if "image" in request.lower():
+ return {
+ "command": "docker images",
+ "purpose": "List Docker images",
+ "needs_confirmation": False,
+ }
+ return {
+ "command": "docker ps -a",
+ "purpose": "List all containers",
+ "needs_confirmation": False,
+ }
+
+ def _gen_docker_logs(self, request: str, match, context: dict) -> dict:
+ return {
+ "command": "docker logs $(docker ps -lq) --tail 50",
+ "purpose": "Show logs of the most recent container",
+ "needs_confirmation": False,
+ }
+
+ def _gen_docker_exec(self, request: str, match, context: dict) -> dict:
+ return {
+ "command": "docker exec -it $(docker ps -lq) /bin/sh",
+ "purpose": "Open shell in the most recent container",
+ "needs_confirmation": True,
+ }
+
+ def _gen_service_start(self, request: str, match, context: dict) -> dict:
+ # Extract service name
+ services = ["nginx", "apache2", "postgresql", "mysql", "redis", "docker"]
+ service = "nginx" # default
+ for svc in services:
+ if svc in request.lower():
+ service = svc
+ break
+
+ if "restart" in request.lower():
+ return {
+ "command": f"sudo systemctl restart {service}",
+ "purpose": f"Restart {service}",
+ "needs_confirmation": True,
+ }
+ return {
+ "command": f"sudo systemctl start {service}",
+ "purpose": f"Start {service}",
+ "needs_confirmation": True,
+ }
+
+ def _gen_service_stop(self, request: str, match, context: dict) -> dict:
+ services = ["nginx", "apache2", "postgresql", "mysql", "redis", "docker"]
+ service = "nginx"
+ for svc in services:
+ if svc in request.lower():
+ service = svc
+ break
+ return {
+ "command": f"sudo systemctl stop {service}",
+ "purpose": f"Stop {service}",
+ "needs_confirmation": True,
+ }
+
+ def _gen_service_status(self, request: str, match, context: dict) -> dict:
+ services = ["nginx", "apache2", "postgresql", "mysql", "redis", "docker"]
+ service = "nginx"
+ for svc in services:
+ if svc in request.lower():
+ service = svc
+ break
+ return {
+ "command": f"systemctl status {service}",
+ "purpose": f"Check {service} status",
+ "needs_confirmation": False,
+ }
+
+ def _gen_install_package(self, request: str, match, context: dict) -> dict:
+ package = match.group(1).strip() if match.group(1) else "package-name"
+ # Clean up common words
+ package = package.replace("please", "").replace("the", "").replace("package", "").strip()
+ return {
+ "command": f"sudo apt install -y {package}",
+ "purpose": f"Install {package}",
+ "needs_confirmation": True,
+ }
+
+ def _gen_update_packages(self, request: str, match, context: dict) -> dict:
+ return {
+ "command": "sudo apt update && sudo apt upgrade -y",
+ "purpose": "Update all packages",
+ "needs_confirmation": True,
+ }
+
+ def _gen_show_file(self, request: str, match, context: dict) -> dict:
+ # Try to extract file path or use common config locations
+ file_path = match.group(1) if match.lastindex and match.group(1) else None
+
+ if not file_path:
+ if "nginx" in request.lower():
+ file_path = "/etc/nginx/nginx.conf"
+ elif "apache" in request.lower():
+ file_path = "/etc/apache2/apache2.conf"
+ elif "postgres" in request.lower():
+ file_path = "/etc/postgresql/*/main/postgresql.conf"
+ else:
+ file_path = "/etc/hosts"
+
+ return {
+ "command": f"cat {file_path}",
+ "purpose": f"Show {file_path}",
+ "needs_confirmation": False,
+ }
+
+ def _gen_edit_file(self, request: str, match, context: dict) -> dict:
+ file_path = match.group(1) if match.lastindex and match.group(1) else "/etc/hosts"
+ return {
+ "command": f"sudo nano {file_path}",
+ "purpose": f"Edit {file_path}",
+ "needs_confirmation": True,
+ }
+
+ def _gen_check_version(self, request: str, match, context: dict) -> dict:
+ # Try to determine what to check version of
+ tools = {
+ "docker": "docker --version",
+ "node": "node --version && npm --version",
+ "python": "python3 --version && pip3 --version",
+ "nginx": "nginx -v",
+ "postgres": "psql --version",
+ }
+
+ for tool, cmd in tools.items():
+ if tool in request.lower():
+ return {
+ "command": cmd,
+ "purpose": f"Check {tool} version",
+ "needs_confirmation": False,
+ }
+
+ # Default: show multiple versions
+ return {
+ "command": "docker --version; node --version 2>/dev/null; python3 --version",
+ "purpose": "Check installed tool versions",
+ "needs_confirmation": False,
+ }
+
+ def _gen_show_connection_info(self, request: str, match, context: dict) -> dict:
+ executed = context.get("executed_commands", [])
+
+ # Check what was installed to provide relevant connection info
+ if any("ollama" in cmd for cmd in executed):
+ return {
+ "command": "echo 'Ollama API: http://localhost:11434' && curl -s http://localhost:11434/api/tags 2>/dev/null | head -5",
+ "purpose": "Show Ollama connection info",
+ "needs_confirmation": False,
+ }
+ elif any("postgres" in cmd for cmd in executed):
+ return {
+ "command": "echo 'PostgreSQL: psql -U postgres -h localhost' && sudo -u postgres psql -c '\\conninfo'",
+ "purpose": "Show PostgreSQL connection info",
+ "needs_confirmation": False,
+ }
+ elif any("nginx" in cmd for cmd in executed):
+ return {
+ "command": "echo 'Nginx: http://localhost:80' && curl -I http://localhost 2>/dev/null | head -3",
+ "purpose": "Show Nginx connection info",
+ "needs_confirmation": False,
+ }
+
+ return {
+ "command": "ss -tlnp | head -20",
+ "purpose": "Show listening ports and services",
+ "needs_confirmation": False,
+ }
+
+ def _llm_generate_command(self, request: str, context: dict) -> dict | None:
+ """Use LLM to generate a command from the request."""
+ if not self.llm_callback:
+ return None
+
+ try:
+ prompt = f"""Given this context:
+- User originally asked: {context.get('original_query', 'N/A')}
+- Commands executed: {', '.join(context.get('executed_commands', [])[:5])}
+- Previous session actions: {', '.join(context.get('session_actions', [])[:3])}
+
+The user now asks: "{request}"
+
+Generate a single Linux command to fulfill this request.
+Respond with JSON: {{"command": "...", "purpose": "..."}}
+If you cannot generate a safe command, respond with: {{"error": "reason"}}"""
+
+ result = self.llm_callback(prompt)
+ if result and isinstance(result, dict):
+ if "command" in result:
+ return {
+ "command": result["command"],
+ "purpose": result.get("purpose", "Execute user request"),
+ "needs_confirmation": True,
+ }
+ except Exception:
+ pass
+
+ return None
+
+ def _generate_suggestions(
+ self,
+ run: DoRun,
+ commands: list[tuple[str, str, list[str]]],
+ user_query: str,
+ ) -> list[dict]:
+ """Generate context-aware suggestions based on what was installed/configured."""
+ suggestions = []
+
+ # Analyze what was done
+ executed_cmds = [cmd for cmd, _, _ in commands]
+ cmd_str = " ".join(executed_cmds).lower()
+ query_lower = user_query.lower()
+
+ # Docker-related suggestions
+ if "docker" in cmd_str or "docker" in query_lower:
+ if "pull" in cmd_str:
+ # Suggest running the container
+ for cmd, _, _ in commands:
+ if "docker pull" in cmd:
+ image = cmd.split("docker pull")[-1].strip()
+ suggestions.append(
+ {
+ "type": "start",
+ "icon": "🚀",
+ "label": "Start the container",
+ "description": f"Run {image} in a container",
+ "command": f"docker run -d --name {image.split('/')[-1].split(':')[0]} {image}",
+ "purpose": f"Start {image} container",
+ }
+ )
+ suggestions.append(
+ {
+ "type": "demo",
+ "icon": "📝",
+ "label": "Show demo usage",
+ "description": "Example docker-compose and run commands",
+ "demo_type": "docker",
+ "image": image,
+ }
+ )
+ break
+
+ # Ollama/Model runner suggestions
+ if "ollama" in cmd_str or "ollama" in query_lower or "model" in query_lower:
+ suggestions.append(
+ {
+ "type": "start",
+ "icon": "🚀",
+ "label": "Start Ollama server",
+ "description": "Run Ollama in the background",
+ "command": "docker run -d --name ollama -p 11434:11434 -v ollama:/root/.ollama ollama/ollama",
+ "purpose": "Start Ollama server container",
+ }
+ )
+ suggestions.append(
+ {
+ "type": "setup",
+ "icon": "⚙️",
+ "label": "Pull a model",
+ "description": "Download a model like llama2, mistral, or codellama",
+ "command": "docker exec ollama ollama pull llama2",
+ "purpose": "Download llama2 model",
+ }
+ )
+ suggestions.append(
+ {
+ "type": "demo",
+ "icon": "📝",
+ "label": "Show API demo",
+ "description": "Example curl commands and Python code",
+ "demo_type": "ollama",
+ }
+ )
+ suggestions.append(
+ {
+ "type": "test",
+ "icon": "🧪",
+ "label": "Test the installation",
+ "description": "Verify Ollama is running correctly",
+ "command": "curl http://localhost:11434/api/tags",
+ "purpose": "Check Ollama API",
+ }
+ )
+
+ # Nginx suggestions
+ if "nginx" in cmd_str or "nginx" in query_lower:
+ suggestions.append(
+ {
+ "type": "start",
+ "icon": "🚀",
+ "label": "Start Nginx",
+ "description": "Start the Nginx web server",
+ "command": "sudo systemctl start nginx",
+ "purpose": "Start Nginx service",
+ }
+ )
+ suggestions.append(
+ {
+ "type": "setup",
+ "icon": "⚙️",
+ "label": "Configure a site",
+ "description": "Set up a new virtual host",
+ "demo_type": "nginx_config",
+ }
+ )
+ suggestions.append(
+ {
+ "type": "test",
+ "icon": "🧪",
+ "label": "Test configuration",
+ "description": "Verify Nginx config is valid",
+ "command": "sudo nginx -t",
+ "purpose": "Test Nginx configuration",
+ }
+ )
+
+ # PostgreSQL suggestions
+ if "postgres" in cmd_str or "postgresql" in query_lower:
+ suggestions.append(
+ {
+ "type": "start",
+ "icon": "🚀",
+ "label": "Start PostgreSQL",
+ "description": "Start the database server",
+ "command": "sudo systemctl start postgresql",
+ "purpose": "Start PostgreSQL service",
+ }
+ )
+ suggestions.append(
+ {
+ "type": "setup",
+ "icon": "⚙️",
+ "label": "Create a database",
+ "description": "Create a new database and user",
+ "demo_type": "postgres_setup",
+ }
+ )
+ suggestions.append(
+ {
+ "type": "test",
+ "icon": "🧪",
+ "label": "Test connection",
+ "description": "Verify PostgreSQL is accessible",
+ "command": "sudo -u postgres psql -c '\\l'",
+ "purpose": "List PostgreSQL databases",
+ }
+ )
+
+ # Node.js/npm suggestions
+ if "node" in cmd_str or "npm" in cmd_str or "nodejs" in query_lower:
+ suggestions.append(
+ {
+ "type": "demo",
+ "icon": "📝",
+ "label": "Show starter code",
+ "description": "Example Express.js server",
+ "demo_type": "nodejs",
+ }
+ )
+ suggestions.append(
+ {
+ "type": "test",
+ "icon": "🧪",
+ "label": "Verify installation",
+ "description": "Check Node.js and npm versions",
+ "command": "node --version && npm --version",
+ "purpose": "Check Node.js installation",
+ }
+ )
+
+ # Python/pip suggestions
+ if "python" in cmd_str or "pip" in cmd_str:
+ suggestions.append(
+ {
+ "type": "demo",
+ "icon": "📝",
+ "label": "Show example code",
+ "description": "Example Python usage",
+ "demo_type": "python",
+ }
+ )
+ suggestions.append(
+ {
+ "type": "test",
+ "icon": "🧪",
+ "label": "Test import",
+ "description": "Verify packages are importable",
+ "demo_type": "python_test",
+ }
+ )
+
+ # Generic suggestions if nothing specific matched
+ if not suggestions:
+ # Add a generic test suggestion
+ suggestions.append(
+ {
+ "type": "test",
+ "icon": "🧪",
+ "label": "Run a quick test",
+ "description": "Verify the installation works",
+ "demo_type": "generic_test",
+ }
+ )
+
+ return suggestions[:5] # Limit to 5 suggestions
+
+ def _execute_suggestion(
+ self,
+ suggestion: dict,
+ run: DoRun,
+ user_query: str,
+ ) -> None:
+ """Execute a suggestion."""
+ suggestion_type = suggestion.get("type")
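+        # Dispatch on suggestion type: retry_interrupted, skip_and_continue, demo, test,
+        # a direct "command" to execute, or a list of "manual_commands" to display.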
+
+ if suggestion_type == "retry_interrupted":
+ # Retry the command that was interrupted
+ if self._interrupted_command:
+ console.print()
+ console.print(
+ f"[{PURPLE_LIGHT}]🔄 Retrying:[/{PURPLE_LIGHT}] [{WHITE}]{self._interrupted_command}[/{WHITE}]"
+ )
+ console.print()
+
+ needs_sudo = "sudo" in self._interrupted_command or self._needs_sudo(
+ self._interrupted_command, []
+ )
+ success, stdout, stderr = self._execute_single_command(
+ self._interrupted_command, needs_sudo=needs_sudo
+ )
+
+ if success:
+ console.print(f"[{GREEN}]{ICON_SUCCESS} Success[/{GREEN}]")
+ if stdout:
+ console.print(
+ f"[{GRAY}]{stdout[:500]}{'...' if len(stdout) > 500 else ''}[/{GRAY}]"
+ )
+ self._interrupted_command = None # Clear after successful retry
+ else:
+ console.print(f"[{RED}]{ICON_ERROR} Failed: {stderr[:200]}[/{RED}]")
+ else:
+ console.print(f"[{YELLOW}]No interrupted command to retry.[/{YELLOW}]")
+ elif suggestion_type == "skip_and_continue":
+ # Skip the interrupted command and continue with remaining
+ console.print()
+ console.print(
+ f"[{PURPLE_LIGHT}]⏭️ Skipping interrupted command and continuing...[/{PURPLE_LIGHT}]"
+ )
+ self._interrupted_command = None
+
+ if self._remaining_commands:
+ console.print(f"[dim]Remaining commands: {len(self._remaining_commands)}[/dim]")
+ for cmd, purpose, protected in self._remaining_commands:
+ console.print(f"[dim] • {cmd[:60]}{'...' if len(cmd) > 60 else ''}[/dim]")
+ console.print()
+ console.print(
+ "[dim]Use 'continue all' to execute remaining commands, or type a new request.[/dim]"
+ )
+ else:
+ console.print("[dim]No remaining commands to execute.[/dim]")
+ elif suggestion_type == "demo":
+ self._show_demo(suggestion.get("demo_type", "generic"), suggestion)
+ elif suggestion_type == "test":
+ # Show test commands based on what was installed
+ self._show_test_commands(run, user_query)
+ elif "command" in suggestion:
+ console.print()
+ console.print(
+ f"[{PURPLE_LIGHT}]Executing:[/{PURPLE_LIGHT}] [{WHITE}]{suggestion['command']}[/{WHITE}]"
+ )
+ console.print()
+
+ needs_sudo = "sudo" in suggestion["command"]
+ success, stdout, stderr = self._execute_single_command(
+ suggestion["command"], needs_sudo=needs_sudo
+ )
+
+ if success:
+ console.print(f"[{GREEN}]{ICON_SUCCESS} Success[/{GREEN}]")
+ if stdout:
+ console.print(
+ f"[{GRAY}]{stdout[:500]}{'...' if len(stdout) > 500 else ''}[/{GRAY}]"
+ )
+ else:
+ console.print(f"[{RED}]{ICON_ERROR} Failed: {stderr[:200]}[/{RED}]")
+ elif "manual_commands" in suggestion:
+ # Show manual commands
+ console.print()
+ console.print(f"[bold {PURPLE_LIGHT}]📋 Manual Commands:[/bold {PURPLE_LIGHT}]")
+ for cmd in suggestion["manual_commands"]:
+ console.print(f" [{GREEN}]$ {cmd}[/{GREEN}]")
+ console.print()
+ console.print(f"[{GRAY}]Copy and run these commands in your terminal.[/{GRAY}]")
+ else:
+ console.print(f"[{YELLOW}]No specific action available for this suggestion.[/{YELLOW}]")
+
+ def _show_test_commands(self, run: DoRun, user_query: str) -> None:
+ """Show test commands based on what was installed/configured."""
+ from rich.panel import Panel
+
+ console.print()
+ console.print("[bold cyan]🧪 Quick Test Commands[/bold cyan]")
+ console.print()
+
+ test_commands = []
+ query_lower = user_query.lower()
+
+ # Detect what was installed and suggest appropriate tests
+ executed_cmds = [c.command.lower() for c in run.commands if c.status.value == "success"]
+ all_cmds_str = " ".join(executed_cmds)
+
+ # Web server tests
+ if "apache" in all_cmds_str or "apache2" in query_lower:
+ test_commands.extend(
+ [
+ ("Check Apache status", "systemctl status apache2"),
+ ("Test Apache config", "sudo apache2ctl -t"),
+ ("View in browser", "curl -I http://localhost"),
+ ]
+ )
+
+ if "nginx" in all_cmds_str or "nginx" in query_lower:
+ test_commands.extend(
+ [
+ ("Check Nginx status", "systemctl status nginx"),
+ ("Test Nginx config", "sudo nginx -t"),
+ ("View in browser", "curl -I http://localhost"),
+ ]
+ )
+
+ # Database tests
+ if "mysql" in all_cmds_str or "mysql" in query_lower:
+ test_commands.extend(
+ [
+ ("Check MySQL status", "systemctl status mysql"),
+ ("Test MySQL connection", "sudo mysql -e 'SELECT VERSION();'"),
+ ]
+ )
+
+ if "postgresql" in all_cmds_str or "postgres" in query_lower:
+ test_commands.extend(
+ [
+ ("Check PostgreSQL status", "systemctl status postgresql"),
+ ("Test PostgreSQL", "sudo -u postgres psql -c 'SELECT version();'"),
+ ]
+ )
+
+ # Docker tests
+ if "docker" in all_cmds_str or "docker" in query_lower:
+ test_commands.extend(
+ [
+ ("Check Docker status", "systemctl status docker"),
+ ("List containers", "docker ps -a"),
+ ("Test Docker", "docker run hello-world"),
+ ]
+ )
+
+ # PHP tests
+ if "php" in all_cmds_str or "php" in query_lower or "lamp" in query_lower:
+ test_commands.extend(
+ [
+ ("Check PHP version", "php -v"),
+ ("Test PHP info", "php -i | head -20"),
+ ]
+ )
+
+ # Node.js tests
+ if "node" in all_cmds_str or "nodejs" in query_lower:
+ test_commands.extend(
+ [
+ ("Check Node version", "node -v"),
+ ("Check npm version", "npm -v"),
+ ]
+ )
+
+ # Python tests
+ if "python" in all_cmds_str or "python" in query_lower:
+ test_commands.extend(
+ [
+ ("Check Python version", "python3 --version"),
+ ("Check pip version", "pip3 --version"),
+ ]
+ )
+
+ # Generic service tests
+ if not test_commands:
+ # Try to extract service names from commands
+            import re
+
+            for cmd_log in run.commands:
+                if "systemctl" in cmd_log.command and cmd_log.status.value == "success":
+                    match = re.search(
+ r"systemctl\s+(?:start|enable|restart)\s+(\S+)", cmd_log.command
+ )
+ if match:
+ service = match.group(1)
+ test_commands.append(
+ (f"Check {service} status", f"systemctl status {service}")
+ )
+
+ if not test_commands:
+ test_commands = [
+ ("Check system status", "systemctl --failed"),
+ ("View recent logs", "journalctl -n 20 --no-pager"),
+ ]
+
+ # Display test commands
+ for i, (desc, cmd) in enumerate(test_commands[:6], 1): # Limit to 6
+ console.print(f" [bold {WHITE}]{i}.[/bold {WHITE}] {desc}")
+ console.print(f" [{GREEN}]$ {cmd}[/{GREEN}]")
+ console.print()
+
+ console.print(f"[{GRAY}]Copy and run these commands to verify your installation.[/{GRAY}]")
+ console.print()
+
+ # Offer to run the first test
+ try:
+ response = input(f"[{GRAY}]Run first test? [y/N]: [/{GRAY}]").strip().lower()
+ if response in ["y", "yes"]:
+ if test_commands:
+ desc, cmd = test_commands[0]
+ console.print()
+ console.print(
+ f"[{PURPLE_LIGHT}]Running:[/{PURPLE_LIGHT}] [{WHITE}]{cmd}[/{WHITE}]"
+ )
+ needs_sudo = cmd.strip().startswith("sudo")
+ success, stdout, stderr = self._execute_single_command(
+ cmd, needs_sudo=needs_sudo
+ )
+ if success:
+ console.print(f"[{GREEN}]{ICON_SUCCESS} {desc} - Passed[/{GREEN}]")
+ if stdout:
+ console.print(
+ Panel(
+ stdout[:500],
+ title=f"[{GRAY}]Output[/{GRAY}]",
+ border_style=GRAY,
+ )
+ )
+ else:
+ console.print(f"[{RED}]{ICON_ERROR} {desc} - Failed[/{RED}]")
+ if stderr:
+ console.print(f"[{GRAY}]{stderr[:200]}[/{GRAY}]")
+ except (EOFError, KeyboardInterrupt):
+ pass
+
+ def _show_demo(self, demo_type: str, suggestion: dict) -> None:
+ """Show demo code/commands for a specific type."""
+ console.print()
+
+ if demo_type == "docker":
+ image = suggestion.get("image", "your-image")
+ console.print(f"[bold {PURPLE_LIGHT}]📝 Docker Usage Examples[/bold {PURPLE_LIGHT}]")
+ console.print()
+ console.print(f"[{GRAY}]# Run container in foreground:[/{GRAY}]")
+ console.print(f"[{GREEN}]docker run -it {image}[/{GREEN}]")
+ console.print()
+ console.print(f"[{GRAY}]# Run container in background:[/{GRAY}]")
+ console.print(f"[{GREEN}]docker run -d --name myapp {image}[/{GREEN}]")
+ console.print()
+ console.print(f"[{GRAY}]# Run with port mapping:[/{GRAY}]")
+ console.print(f"[{GREEN}]docker run -d -p 8080:8080 {image}[/{GREEN}]")
+ console.print()
+ console.print(f"[{GRAY}]# Run with volume mount:[/{GRAY}]")
+ console.print(f"[{GREEN}]docker run -d -v /host/path:/container/path {image}[/{GREEN}]")
+
+ elif demo_type == "ollama":
+ console.print(f"[bold {PURPLE_LIGHT}]📝 Ollama API Examples[/bold {PURPLE_LIGHT}]")
+ console.print()
+ console.print(f"[{GRAY}]# List available models:[/{GRAY}]")
+ console.print(f"[{GREEN}]curl http://localhost:11434/api/tags[/{GREEN}]")
+ console.print()
+ console.print(f"[{GRAY}]# Generate text:[/{GRAY}]")
+ console.print(f"""[{GREEN}]curl http://localhost:11434/api/generate -d '{{
+ "model": "llama2",
+ "prompt": "Hello, how are you?"
+}}'[/{GREEN}]""")
+ console.print()
+ console.print(f"[{GRAY}]# Python example:[/{GRAY}]")
+ console.print(f"""[{GREEN}]import requests
+
+response = requests.post('http://localhost:11434/api/generate',
+ json={{
+ 'model': 'llama2',
+ 'prompt': 'Explain quantum computing in simple terms',
+ 'stream': False
+ }})
+print(response.json()['response'])[/{GREEN}]""")
+
+ elif demo_type == "nginx_config":
+ console.print(
+ f"[bold {PURPLE_LIGHT}]📝 Nginx Configuration Example[/bold {PURPLE_LIGHT}]"
+ )
+ console.print()
+ console.print(f"[{GRAY}]# Create a new site config:[/{GRAY}]")
+ console.print(f"[{GREEN}]sudo nano /etc/nginx/sites-available/mysite[/{GREEN}]")
+ console.print()
+ console.print(f"[{GRAY}]# Example config:[/{GRAY}]")
+ console.print(f"""[{GREEN}]server {{
+ listen 80;
+ server_name example.com;
+
+ location / {{
+ proxy_pass http://localhost:3000;
+ proxy_http_version 1.1;
+ proxy_set_header Upgrade $http_upgrade;
+ proxy_set_header Connection 'upgrade';
+ proxy_set_header Host $host;
+ }}
+}}[/{GREEN}]""")
+ console.print()
+ console.print(f"[{GRAY}]# Enable the site:[/{GRAY}]")
+ console.print(
+ f"[{GREEN}]sudo ln -s /etc/nginx/sites-available/mysite /etc/nginx/sites-enabled/[/{GREEN}]"
+ )
+ console.print(f"[{GREEN}]sudo nginx -t && sudo systemctl reload nginx[/{GREEN}]")
+
+ elif demo_type == "postgres_setup":
+ console.print(f"[bold {PURPLE_LIGHT}]📝 PostgreSQL Setup Example[/bold {PURPLE_LIGHT}]")
+ console.print()
+ console.print("[dim]# Create a new user and database:[/dim]")
+ console.print("[{GREEN}]sudo -u postgres createuser --interactive myuser[/{GREEN}]")
+ console.print(f"[{GREEN}]sudo -u postgres createdb mydb -O myuser[/{GREEN}]")
+ console.print()
+ console.print("[dim]# Connect to the database:[/dim]")
+ console.print(f"[{GREEN}]psql -U myuser -d mydb[/{GREEN}]")
+ console.print()
+ console.print("[dim]# Python connection example:[/dim]")
+ console.print(f"""[{GREEN}]import psycopg2
+
+conn = psycopg2.connect(
+ dbname="mydb",
+ user="myuser",
+ password="mypassword",
+ host="localhost"
+)
+cursor = conn.cursor()
+cursor.execute("SELECT version();")
+print(cursor.fetchone())[/{GREEN}]""")
+
+ elif demo_type == "nodejs":
+ console.print(f"[bold {PURPLE_LIGHT}]📝 Node.js Example[/bold {PURPLE_LIGHT}]")
+ console.print()
+ console.print("[dim]# Create a simple Express server:[/dim]")
+ console.print(f"""[{GREEN}]// server.js
+const express = require('express');
+const app = express();
+
+app.get('/', (req, res) => {{
+ res.json({{ message: 'Hello from Node.js!' }});
+}});
+
+app.listen(3000, () => {{
+ console.log('Server running on http://localhost:3000');
+}});[/{GREEN}]""")
+ console.print()
+ console.print("[dim]# Run it:[/dim]")
+ console.print(
+ f"[{GREEN}]npm init -y && npm install express && node server.js[/{GREEN}]"
+ )
+
+ elif demo_type == "python":
+ console.print(f"[bold {PURPLE_LIGHT}]📝 Python Example[/bold {PURPLE_LIGHT}]")
+ console.print()
+ console.print("[dim]# Simple HTTP server:[/dim]")
+ console.print(f"[{GREEN}]python3 -m http.server 8000[/{GREEN}]")
+ console.print()
+ console.print("[dim]# Flask web app:[/dim]")
+ console.print(f"""[{GREEN}]from flask import Flask
+app = Flask(__name__)
+
+@app.route('/')
+def hello():
+ return {{'message': 'Hello from Python!'}}
+
+if __name__ == '__main__':
+ app.run(debug=True)[/{GREEN}]""")
+
+ else:
+ console.print(
+ "[dim]No specific demo available. Check the documentation for usage examples.[/dim]"
+ )
+
+ console.print()
+
+ def _execute_task_node(
+ self,
+ task: TaskNode,
+ run: DoRun,
+ original_commands: list[tuple[str, str, list[str]]],
+ depth: int = 0,
+ ):
+ """Execute a single task node with auto-repair capabilities."""
+ indent = " " * depth
+ task_num = f"[{task.task_type.value.upper()}]"
+
+ # Check if task was marked as skipped (e.g., using existing resource)
+ if task.status == CommandStatus.SKIPPED:
+ # Claude-like skipped output
+ console.print(
+ f"{indent}[{GRAY}]{ICON_INFO}[/{GRAY}] [{PURPLE_LIGHT}]{task.command[:65]}{'...' if len(task.command) > 65 else ''}[/{PURPLE_LIGHT}]"
+ )
+ console.print(
+ f"{indent} [{GRAY}]↳ Skipped: {task.output or 'Using existing resource'}[/{GRAY}]"
+ )
+
+ # Log the skipped command
+ cmd_log = CommandLog(
+ command=task.command,
+ purpose=task.purpose,
+ timestamp=datetime.datetime.now().isoformat(),
+ status=CommandStatus.SKIPPED,
+ output=task.output or "Using existing resource",
+ )
+ run.commands.append(cmd_log)
+ return
+
+ # Claude-like command output
+ console.print(
+ f"{indent}[bold {PURPLE_LIGHT}]{ICON_SUCCESS}[/bold {PURPLE_LIGHT}] [bold {WHITE}]{task.command[:65]}{'...' if len(task.command) > 65 else ''}[/bold {WHITE}]"
+ )
+ console.print(f"{indent} [{GRAY}]↳ {task.purpose}[/{GRAY}]")
+
+ protected_paths = []
+ user_query = run.user_query if run else ""
+ for cmd, _, protected in original_commands:
+ if cmd == task.command:
+ protected_paths = protected
+ break
+
+ file_check = self._file_analyzer.check_file_exists_and_usefulness(
+ task.command, task.purpose, user_query
+ )
+
+ if file_check["recommendations"]:
+ self._file_analyzer.apply_file_recommendations(file_check["recommendations"])
+
+ task.status = CommandStatus.RUNNING
+ start_time = time.time()
+
+ needs_sudo = self._needs_sudo(task.command, protected_paths)
+ success, stdout, stderr = self._execute_single_command(task.command, needs_sudo)
+
+ task.output = stdout
+ task.error = stderr
+ task.duration_seconds = time.time() - start_time
+
+ # Check if command was interrupted by Ctrl+Z/Ctrl+C
+ if self._interrupted:
+ task.status = CommandStatus.INTERRUPTED
+ cmd_log = CommandLog(
+ command=task.command,
+ purpose=task.purpose,
+ timestamp=datetime.datetime.now().isoformat(),
+ status=CommandStatus.INTERRUPTED,
+ output=stdout,
+ error="Command interrupted by user (Ctrl+Z/Ctrl+C)",
+ duration_seconds=task.duration_seconds,
+ )
+ console.print(
+ f"{indent} [{YELLOW}]⚠[/{YELLOW}] [{GRAY}]Interrupted ({task.duration_seconds:.2f}s)[/{GRAY}]"
+ )
+ run.commands.append(cmd_log)
+ return
+
+ cmd_log = CommandLog(
+ command=task.command,
+ purpose=task.purpose,
+ timestamp=datetime.datetime.now().isoformat(),
+ status=CommandStatus.SUCCESS if success else CommandStatus.FAILED,
+ output=stdout,
+ error=stderr,
+ duration_seconds=task.duration_seconds,
+ )
+
+ if success:
+ task.status = CommandStatus.SUCCESS
+ # Claude-like success output
+ console.print(
+ f"{indent} [{GREEN}]{ICON_SUCCESS}[/{GREEN}] [{GRAY}]Done ({task.duration_seconds:.2f}s)[/{GRAY}]"
+ )
+ if stdout:
+ output_preview = stdout[:100] + ("..." if len(stdout) > 100 else "")
+ console.print(f"{indent} [{GRAY}]{output_preview}[/{GRAY}]")
+ console.print()
+ run.commands.append(cmd_log)
+ return
+
+ task.status = CommandStatus.NEEDS_REPAIR
+ diagnosis = self._diagnoser.diagnose_error(task.command, stderr)
+ task.failure_reason = diagnosis.get("description", "Unknown error")
+
+ # Claude-like error output
+ console.print(
+ f"{indent} [{RED}]{ICON_ERROR}[/{RED}] [bold {RED}]{diagnosis['error_type']}[/bold {RED}]"
+ )
+ console.print(
+ f"{indent} [{GRAY}]{diagnosis['description'][:80]}{'...' if len(diagnosis['description']) > 80 else ''}[/{GRAY}]"
+ )
+
+ # Check if this is a login/credential required error
+ if diagnosis.get("category") == "login_required":
+ console.print(f"{indent}[{PURPLE_LIGHT}] 🔐 Authentication required[/{PURPLE_LIGHT}]")
+
+ login_success, login_msg = self._login_handler.handle_login(task.command, stderr)
+
+ if login_success:
+ console.print(f"{indent}[{GREEN}] {ICON_SUCCESS} {login_msg}[/{GREEN}]")
+ console.print(f"{indent}[{PURPLE_LIGHT}] Retrying command...[/{PURPLE_LIGHT}]")
+
+ # Retry the command
+ needs_sudo = self._needs_sudo(task.command, [])
+ success, new_stdout, new_stderr = self._execute_single_command(
+ task.command, needs_sudo
+ )
+
+ if success:
+ task.status = CommandStatus.SUCCESS
+ task.reasoning = "Succeeded after authentication"
+ cmd_log.status = CommandStatus.SUCCESS
+                    cmd_log.output = new_stdout[:500] if new_stdout else ""
+ console.print(
+ f"{indent}[{GREEN}] {ICON_SUCCESS} Command succeeded after authentication![/{GREEN}]"
+ )
+ run.commands.append(cmd_log)
+ return
+ else:
+ # Still failed after login
+ stderr = new_stderr
+ diagnosis = self._diagnoser.diagnose_error(task.command, stderr)
+ console.print(
+ f"{indent}[{YELLOW}] Command still failed: {stderr[:100]}[/{YELLOW}]"
+ )
+ else:
+ console.print(f"{indent}[{YELLOW}] {login_msg}[/{YELLOW}]")
+
+ if diagnosis.get("extracted_path"):
+ console.print(f"{indent}[dim] Path: {diagnosis['extracted_path']}[/dim]")
+
+ # Handle timeout errors specially - don't blindly retry
+ if diagnosis.get("category") == "timeout" or "timed out" in stderr.lower():
+ console.print(f"{indent}[{YELLOW}] ⏱️ This operation timed out[/{YELLOW}]")
+
+ # Check if it's a docker pull - those might still be running
+ if "docker pull" in task.command.lower():
+ console.print(
+ f"{indent}[{PURPLE_LIGHT}] {ICON_INFO} Docker pull may still be downloading in background[/{PURPLE_LIGHT}]"
+ )
+                console.print(
+                    f"{indent}[{GRAY}]     Check with: docker images | grep <image>[/{GRAY}]"
+                )
+                console.print(
+                    f"{indent}[{GRAY}]     Or retry with: docker pull <image>[/{GRAY}]"
+                )
+ elif "apt" in task.command.lower():
+ console.print(
+ f"{indent}[{PURPLE_LIGHT}] {ICON_INFO} Package installation timed out[/{PURPLE_LIGHT}]"
+ )
+ console.print(
+ f"{indent}[{GRAY}] Check apt status: sudo dpkg --configure -a[/{GRAY}]"
+ )
+ console.print(f"{indent}[{GRAY}] Then retry the command[/{GRAY}]")
+ else:
+ console.print(
+ f"{indent}[{PURPLE_LIGHT}] {ICON_INFO} You can retry this command manually[/{PURPLE_LIGHT}]"
+ )
+
+ # Mark as needing manual intervention, not auto-fix
+ task.status = CommandStatus.NEEDS_REPAIR
+ task.failure_reason = "Operation timed out - may need manual retry"
+ cmd_log.status = CommandStatus.FAILED
+ cmd_log.error = stderr
+ run.commands.append(cmd_log)
+ return
+
+ if task.repair_attempts < task.max_repair_attempts:
+ import sys
+
+ task.repair_attempts += 1
+ console.print(
+ f"{indent}[{PURPLE_LIGHT}] 🔧 Auto-fix attempt {task.repair_attempts}/{task.max_repair_attempts}[/{PURPLE_LIGHT}]"
+ )
+
+ # Flush output before auto-fix to ensure clean display after sudo prompts
+ sys.stdout.flush()
+
+ fixed, fix_message, fix_commands = self._auto_fixer.auto_fix_error(
+ task.command, stderr, diagnosis, max_attempts=3
+ )
+
+ for fix_cmd in fix_commands:
+ repair_task = self._task_tree.add_repair_task(
+ parent=task,
+ command=fix_cmd,
+ purpose=f"Auto-fix: {diagnosis['error_type']}",
+ reasoning=fix_message,
+ )
+ repair_task.status = CommandStatus.SUCCESS
+
+ if fixed:
+ task.status = CommandStatus.SUCCESS
+ task.reasoning = f"Auto-fixed: {fix_message}"
+ console.print(f"{indent}[{GREEN}] {ICON_SUCCESS} {fix_message}[/{GREEN}]")
+ cmd_log.status = CommandStatus.SUCCESS
+ run.commands.append(cmd_log)
+ return
+ else:
+ console.print(f"{indent}[{YELLOW}] Auto-fix incomplete: {fix_message}[/{YELLOW}]")
+
+ task.status = CommandStatus.FAILED
+ task.reasoning = self._generate_task_failure_reasoning(task, diagnosis)
+
+ error_type = diagnosis.get("error_type", "unknown")
+
+ # Check if this is a "soft failure" that shouldn't warrant manual intervention
+ # These are cases where a tool/command simply isn't available and that's OK
+ soft_failure_types = {
+ "command_not_found", # Tool not installed
+ "not_found", # File/command doesn't exist
+ "no_such_command",
+ "unable_to_locate_package", # Package doesn't exist in repos
+ }
+
+ # Also check for patterns in the error message that indicate optional tools
+ optional_tool_patterns = [
+ "sensors", # lm-sensors - optional hardware monitoring
+ "snap", # snapd - optional package manager
+ "flatpak", # optional package manager
+ "docker", # optional if not needed
+ "podman", # optional container runtime
+ "nmap", # optional network scanner
+ "htop", # optional system monitor
+ "iotop", # optional I/O monitor
+ "iftop", # optional network monitor
+ ]
+
+ cmd_base = task.command.split()[0] if task.command else ""
+ is_optional_tool = any(pattern in cmd_base.lower() for pattern in optional_tool_patterns)
+ is_soft_failure = error_type in soft_failure_types and is_optional_tool
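+        # Example: "sensors" failing with command_not_found is treated as an optional tool and
+        # skipped, while a failed package install is still reported as a real failure.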
+
+ if is_soft_failure:
+ # Mark as skipped instead of failed - this is an optional tool that's not available
+ task.status = CommandStatus.SKIPPED
+ task.reasoning = f"Tool '{cmd_base}' not available (optional)"
+ console.print(
+ f"{indent}[yellow] ○ Skipped: {cmd_base} not available (optional tool)[/yellow]"
+ )
+ console.print(
+ f"{indent}[dim] This tool provides additional info but isn't required[/dim]"
+ )
+ cmd_log.status = CommandStatus.SKIPPED
+ else:
+ console.print(f"{indent}[red] ✗ Failed: {diagnosis['description'][:100]}[/red]")
+ console.print(f"{indent}[dim] Reasoning: {task.reasoning}[/dim]")
+
+ # Only offer manual intervention for errors that could actually be fixed manually
+ # Don't offer for missing commands/packages that auto-fix couldn't resolve
+ should_offer_manual = (diagnosis.get("fix_commands") or stderr) and error_type not in {
+ "command_not_found",
+ "not_found",
+ "unable_to_locate_package",
+ }
+
+ if should_offer_manual:
+ console.print(f"\n{indent}[yellow]💡 Manual intervention available[/yellow]")
+
+ suggested_cmds = diagnosis.get("fix_commands", [f"sudo {task.command}"])
+ console.print(f"{indent}[dim] Suggested commands:[/dim]")
+ for cmd in suggested_cmds[:3]:
+ console.print(f"{indent}[cyan] $ {cmd}[/cyan]")
+
+ if Confirm.ask(f"{indent}Run manually while Cortex monitors?", default=False):
+ manual_success = self._supervise_manual_intervention_for_task(
+ task, suggested_cmds, run
+ )
+ if manual_success:
+ task.status = CommandStatus.SUCCESS
+ task.reasoning = "Completed via monitored manual intervention"
+ cmd_log.status = CommandStatus.SUCCESS
+
+ cmd_log.status = task.status
+ run.commands.append(cmd_log)
+
+ def _supervise_manual_intervention_for_task(
+ self,
+ task: TaskNode,
+ suggested_commands: list[str],
+ run: DoRun,
+ ) -> bool:
+ """Supervise manual intervention for a specific task with terminal monitoring."""
+ from rich.panel import Panel
+ from rich.prompt import Prompt
+
+ # If no suggested commands provided, use the task command with sudo
+ if not suggested_commands:
+ if task and task.command:
+ # Add sudo if not already present
+ cmd = task.command
+ if not cmd.strip().startswith("sudo"):
+ cmd = f"sudo {cmd}"
+ suggested_commands = [cmd]
+
+ # Claude-like manual intervention UI
+ console.print()
+ console.print("[bold blue]━━━[/bold blue] [bold]Manual Intervention[/bold]")
+ console.print()
+
+ # Show the task context
+ if task and task.purpose:
+ console.print(f"[bold]Task:[/bold] {task.purpose}")
+ console.print()
+
+ console.print("[dim]Run these commands in another terminal:[/dim]")
+ console.print()
+
+ # Show commands in a clear box
+ if suggested_commands:
+ from rich.panel import Panel
+
+ cmd_text = "\n".join(f" {i}. {cmd}" for i, cmd in enumerate(suggested_commands, 1))
+ console.print(
+ Panel(
+ cmd_text,
+ title="[bold cyan]📋 Commands to Run[/bold cyan]",
+ border_style="cyan",
+ padding=(0, 1),
+ )
+ )
+ else:
+ console.print(" [yellow]⚠ No specific commands - check the task above[/yellow]")
+
+ console.print()
+
+ # Track expected commands for matching
+ self._expected_manual_commands = suggested_commands.copy() if suggested_commands else []
+ self._completed_manual_commands: list[str] = []
+
+ # Start terminal monitoring with detailed output
+ self._terminal_monitor = TerminalMonitor(
+ notification_callback=lambda title, msg: self._send_notification(title, msg)
+ )
+ self._terminal_monitor.start(expected_commands=suggested_commands)
+
+ console.print()
+ console.print("[dim]Type 'done' when finished, 'help' for tips, or 'cancel' to abort[/dim]")
+ console.print()
+
+ try:
+ while True:
+ try:
+ user_input = Prompt.ask("[cyan]Status[/cyan]", default="done").strip().lower()
+ except (EOFError, KeyboardInterrupt):
+ console.print("\n[yellow]Manual intervention cancelled[/yellow]")
+ return False
+
+ # Handle natural language responses
+ if user_input in [
+ "done",
+ "finished",
+ "complete",
+ "completed",
+ "success",
+ "worked",
+ "yes",
+ "y",
+ ]:
+ # Show observed commands and check for matches
+ observed = self._terminal_monitor.get_observed_commands()
+ matched_commands = []
+ unmatched_commands = []
+
+ if observed:
+ console.print(f"\n[cyan]📊 Observed {len(observed)} command(s):[/cyan]")
+ for obs in observed[-5:]:
+ obs_cmd = obs["command"]
+ is_matched = False
+
+ # Check if this matches any expected command
+ for expected in self._expected_manual_commands:
+ if self._commands_match(obs_cmd, expected):
+ matched_commands.append(obs_cmd)
+ self._completed_manual_commands.append(expected)
+ console.print(f" • {obs_cmd[:60]}... [green]✓[/green]")
+ is_matched = True
+ break
+
+ if not is_matched:
+ unmatched_commands.append(obs_cmd)
+ console.print(f" • {obs_cmd[:60]}... [yellow]?[/yellow]")
+
+ # Check if expected commands were actually run
+ if self._expected_manual_commands and not matched_commands:
+ console.print()
+ console.print(
+ "[yellow]⚠ None of the expected commands were detected.[/yellow]"
+ )
+ console.print("[dim]Expected:[/dim]")
+ for cmd in self._expected_manual_commands[:3]:
+ console.print(f" [cyan]$ {cmd}[/cyan]")
+ console.print()
+
+ # Send notification with correct commands
+ self._send_notification(
+ "⚠️ Cortex: Expected Commands",
+ f"Run: {self._expected_manual_commands[0][:50]}...",
+ )
+
+ console.print(
+ "[dim]Type 'done' again to confirm, or run the expected commands first.[/dim]"
+ )
+ continue # Don't mark as success yet - let user try again
+
+ # Check if any observed commands had errors (check last few)
+ has_errors = False
+ if observed:
+ for obs in observed[-3:]:
+ if obs.get("has_error") or obs.get("status") == "failed":
+ has_errors = True
+ console.print(
+ "[yellow]⚠ Some commands may have failed. Please verify.[/yellow]"
+ )
+ break
+
+ if has_errors and user_input not in ["yes", "y", "worked", "success"]:
+ console.print("[dim]Type 'success' to confirm it worked anyway.[/dim]")
+ continue
+
+ console.print("[green]✓ Manual step completed successfully[/green]")
+
+ if self._task_tree:
+ verify_task = self._task_tree.add_verify_task(
+ parent=task,
+ command="# Manual verification",
+ purpose="User confirmed manual intervention success",
+ )
+ verify_task.status = CommandStatus.SUCCESS
+
+ # Mark matched commands as completed so they're not re-executed
+ if matched_commands:
+ task.manual_commands_completed = matched_commands
+
+ return True
+
+ elif user_input in ["help", "?", "hint", "tips"]:
+ console.print()
+ console.print("[bold]💡 Manual Intervention Tips:[/bold]")
+ console.print(" • Use [cyan]sudo[/cyan] if you see 'Permission denied'")
+ console.print(" • Use [cyan]sudo su -[/cyan] to become root")
+ console.print(" • Check paths with [cyan]ls -la [/cyan]")
+ console.print(" • Check services: [cyan]systemctl status [/cyan]")
+ console.print(" • View logs: [cyan]journalctl -u -n 50[/cyan]")
+ console.print()
+
+ elif user_input in ["cancel", "abort", "quit", "exit", "no", "n"]:
+ console.print("[yellow]Manual intervention cancelled[/yellow]")
+ return False
+
+ elif user_input in ["failed", "error", "problem", "issue"]:
+ console.print()
+ error_desc = Prompt.ask("[yellow]What error did you encounter?[/yellow]")
+ error_lower = error_desc.lower()
+
+ # Provide contextual help based on error description
+ if "permission" in error_lower or "denied" in error_lower:
+ console.print("\n[cyan]💡 Try running with sudo:[/cyan]")
+ for cmd in suggested_commands[:2]:
+ if not cmd.startswith("sudo"):
+ console.print(f" [green]sudo {cmd}[/green]")
+ elif "not found" in error_lower or "no such" in error_lower:
+ console.print("\n[cyan]💡 Check if path/command exists:[/cyan]")
+ console.print(" [green]which [/green]")
+ console.print(" [green]ls -la [/green]")
+ elif "service" in error_lower or "systemctl" in error_lower:
+ console.print("\n[cyan]💡 Service troubleshooting:[/cyan]")
+ console.print(" [green]sudo systemctl status [/green]")
+ console.print(" [green]sudo journalctl -u -n 50[/green]")
+ else:
+ console.print("\n[cyan]💡 General debugging:[/cyan]")
+ console.print(" • Check the error message carefully")
+ console.print(" • Try running with sudo")
+ console.print(" • Check if all required packages are installed")
+
+ console.print()
+ console.print("[dim]Type 'done' when fixed, or 'cancel' to abort[/dim]")
+
+ else:
+ # Any other input - show status
+ observed = self._terminal_monitor.get_observed_commands()
+ console.print(
+ f"[dim]Still monitoring... ({len(observed)} commands observed)[/dim]"
+ )
+ console.print("[dim]Type 'done' when finished, 'help' for tips[/dim]")
+
+ except KeyboardInterrupt:
+ console.print("\n[yellow]Manual intervention cancelled[/yellow]")
+ return False
+ finally:
+ if self._terminal_monitor:
+ observed = self._terminal_monitor.stop()
+ # Log observed commands to run
+ for obs in observed:
+ run.commands.append(
+ CommandLog(
+ command=obs["command"],
+ purpose=f"Manual execution ({obs['source']})",
+ timestamp=obs["timestamp"],
+ status=CommandStatus.SUCCESS,
+ )
+ )
+ self._terminal_monitor = None
+
+ # Clear tracking
+ self._expected_manual_commands = []
+
+ def _commands_match(self, observed: str, expected: str) -> bool:
+ """Check if an observed command matches an expected command.
+
+ Handles variations like:
+ - With/without sudo
+ - Different whitespace
+ - Same command with different args still counts
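+
+        Examples (illustrative, per the rules below):
+            "sudo systemctl restart nginx" vs "systemctl restart nginx" -> match
+            "systemctl restart nginx" vs "systemctl restart apache2" -> no match (different service)
+            "apt-get install curl" vs "apt-get install wget" -> match (same command and subcommand)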
+ """
+ # Normalize commands
+ obs_normalized = observed.strip().lower()
+ exp_normalized = expected.strip().lower()
+
+ # Remove sudo prefix for comparison
+ if obs_normalized.startswith("sudo "):
+ obs_normalized = obs_normalized[5:].strip()
+ if exp_normalized.startswith("sudo "):
+ exp_normalized = exp_normalized[5:].strip()
+
+ # Exact match
+ if obs_normalized == exp_normalized:
+ return True
+
+ obs_parts = obs_normalized.split()
+ exp_parts = exp_normalized.split()
+
+ # Check for service management commands first (need full match including service name)
+ service_commands = ["systemctl", "service"]
+ for svc_cmd in service_commands:
+ if svc_cmd in obs_normalized and svc_cmd in exp_normalized:
+ # Extract action and service name
+ obs_action = None
+ exp_action = None
+ obs_service = None
+ exp_service = None
+
+ for i, part in enumerate(obs_parts):
+ if part in [
+ "restart",
+ "start",
+ "stop",
+ "reload",
+ "status",
+ "enable",
+ "disable",
+ ]:
+ obs_action = part
+ # Service name is usually the next word
+ if i + 1 < len(obs_parts):
+ obs_service = obs_parts[i + 1]
+ break
+
+ for i, part in enumerate(exp_parts):
+ if part in [
+ "restart",
+ "start",
+ "stop",
+ "reload",
+ "status",
+ "enable",
+ "disable",
+ ]:
+ exp_action = part
+ if i + 1 < len(exp_parts):
+ exp_service = exp_parts[i + 1]
+ break
+
+ if obs_action and exp_action and obs_service and exp_service:
+ if obs_action == exp_action and obs_service == exp_service:
+ return True
+ else:
+ return False # Different action or service
+
+ # For non-service commands, check if first 2-3 words match
+ if len(obs_parts) >= 2 and len(exp_parts) >= 2:
+ # Skip if either is a service command (handled above)
+ if obs_parts[0] not in ["systemctl", "service"] and exp_parts[0] not in [
+ "systemctl",
+ "service",
+ ]:
+ # Compare first two words (command and subcommand)
+ if obs_parts[:2] == exp_parts[:2]:
+ return True
+
+ return False
+
+ def get_completed_manual_commands(self) -> list[str]:
+ """Get list of commands completed during manual intervention."""
+ return getattr(self, "_completed_manual_commands", [])
+
+ def _generate_task_failure_reasoning(
+ self,
+ task: TaskNode,
+ diagnosis: dict,
+ ) -> str:
+ """Generate detailed reasoning for why a task failed."""
+ parts = []
+
+ parts.append(f"Error: {diagnosis.get('error_type', 'unknown')}")
+
+ if task.repair_attempts > 0:
+ parts.append(f"Repair attempts: {task.repair_attempts} (all failed)")
+
+ if diagnosis.get("extracted_path"):
+ parts.append(f"Problem path: {diagnosis['extracted_path']}")
+
+ error_type = diagnosis.get("error_type", "")
+ if "permission" in error_type.lower():
+ parts.append("Root cause: Insufficient file system permissions")
+ elif "not_found" in error_type.lower():
+ parts.append("Root cause: Required file or directory does not exist")
+ elif "service" in error_type.lower():
+ parts.append("Root cause: System service issue")
+
+ if diagnosis.get("fix_commands"):
+ parts.append(f"Suggested fix: {diagnosis['fix_commands'][0][:50]}...")
+
+ return " | ".join(parts)
+
+ def _generate_tree_summary(self, run: DoRun) -> str:
+ """Generate a summary from the task tree execution."""
+ if not self._task_tree:
+ return self._generate_summary(run)
+
+ summary = self._task_tree.get_summary()
+
+ total = sum(summary.values())
+ success = summary.get("success", 0)
+ failed = summary.get("failed", 0)
+ repaired = summary.get("needs_repair", 0)
+
+ parts = [
+ f"Total tasks: {total}",
+ f"Successful: {success}",
+ f"Failed: {failed}",
+ ]
+
+ if repaired > 0:
+ parts.append(f"Repair attempted: {repaired}")
+
+ if self._permission_requests_count > 1:
+ parts.append(f"Permission requests: {self._permission_requests_count}")
+
+ return " | ".join(parts)
+
+ def provide_manual_instructions(
+ self,
+ commands: list[tuple[str, str, list[str]]],
+ user_query: str,
+ ) -> DoRun:
+ """Provide instructions for manual execution and monitor progress."""
+ run = DoRun(
+ run_id=self.db._generate_run_id(),
+ summary="",
+ mode=RunMode.USER_MANUAL,
+ user_query=user_query,
+ started_at=datetime.datetime.now().isoformat(),
+ session_id=self.current_session_id or "",
+ )
+ self.current_run = run
+
+ console.print()
+ console.print(
+ Panel(
+ "[bold cyan]📋 Manual Execution Instructions[/bold cyan]",
+ expand=False,
+ )
+ )
+ console.print()
+
+ cwd = os.getcwd()
+ console.print("[bold]1. Open a new terminal and navigate to:[/bold]")
+ console.print(f" [cyan]cd {cwd}[/cyan]")
+ console.print()
+
+ console.print("[bold]2. Execute the following commands in order:[/bold]")
+ console.print()
+
+ for i, (cmd, purpose, protected) in enumerate(commands, 1):
+ console.print(f" [bold yellow]Step {i}:[/bold yellow] {purpose}")
+ needs_sudo = self._needs_sudo(cmd, protected)
+
+ if protected:
+ console.print(f" [red]⚠️ Accesses protected paths: {', '.join(protected)}[/red]")
+
+ if needs_sudo and not cmd.strip().startswith("sudo"):
+ console.print(f" [cyan]sudo {cmd}[/cyan]")
+ else:
+ console.print(f" [cyan]{cmd}[/cyan]")
+ console.print()
+
+ run.commands.append(
+ CommandLog(
+ command=cmd,
+ purpose=purpose,
+ timestamp=datetime.datetime.now().isoformat(),
+ status=CommandStatus.PENDING,
+ )
+ )
+
+ console.print("[bold]3. Once done, return to this terminal and press Enter.[/bold]")
+ console.print()
+
+ monitor = TerminalMonitor(
+ notification_callback=lambda title, msg: self._send_notification(title, msg, "normal")
+ )
+
+ expected_commands = [cmd for cmd, _, _ in commands]
+ monitor.start_monitoring(expected_commands)
+
+ console.print("[dim]🔍 Monitoring terminal activity... (press Enter when done)[/dim]")
+
+ try:
+ input()
+ except (EOFError, KeyboardInterrupt):
+ pass
+
+ observed = monitor.stop_monitoring()
+
+ # Add observed commands to the run
+ for obs in observed:
+ run.commands.append(
+ CommandLog(
+ command=obs["command"],
+ purpose="User-executed command",
+ timestamp=obs["timestamp"],
+ status=CommandStatus.SUCCESS,
+ )
+ )
+
+ run.completed_at = datetime.datetime.now().isoformat()
+ run.summary = self._generate_summary(run)
+
+ self.db.save_run(run)
+
+ # Generate LLM summary/answer
+ llm_answer = self._generate_llm_answer(run, user_query)
+
+ # Print condensed execution summary with answer
+ self._print_execution_summary(run, answer=llm_answer)
+
+ console.print()
+ console.print(f"[dim]Run ID: {run.run_id}[/dim]")
+
+ return run
+
+ def _generate_summary(self, run: DoRun) -> str:
+ """Generate a summary of what was done in the run."""
+ successful = sum(1 for c in run.commands if c.status == CommandStatus.SUCCESS)
+ failed = sum(1 for c in run.commands if c.status == CommandStatus.FAILED)
+
+ mode_str = "automated" if run.mode == RunMode.CORTEX_EXEC else "manual"
+
+ if failed == 0:
+ return f"Successfully executed {successful} commands ({mode_str}) for: {run.user_query[:50]}"
+ else:
+ return f"Executed {successful} commands with {failed} failures ({mode_str}) for: {run.user_query[:50]}"
+
+ def _generate_llm_answer(self, run: DoRun, user_query: str) -> str | None:
+ """Generate an LLM-based answer/summary after command execution."""
+ if not self.llm_callback:
+ return None
+
+ # Collect command outputs
+ command_results = []
+ for cmd in run.commands:
+ status = (
+ "✓"
+ if cmd.status == CommandStatus.SUCCESS
+ else "✗" if cmd.status == CommandStatus.FAILED else "○"
+ )
+ result = {
+ "command": cmd.command,
+ "purpose": cmd.purpose,
+ "status": status,
+ "output": (cmd.output[:500] if cmd.output else "")[:500], # Limit output size
+ }
+ if cmd.error:
+ result["error"] = cmd.error[:200]
+ command_results.append(result)
+
+ # Build prompt for LLM
+ prompt = f"""The user asked: "{user_query}"
+
+The following commands were executed:
+"""
+ for i, result in enumerate(command_results, 1):
+ prompt += f"\n{i}. [{result['status']}] {result['command']}"
+ prompt += f"\n Purpose: {result['purpose']}"
+ if result.get("output"):
+ # Only include meaningful output, not empty or whitespace-only
+ output_preview = result["output"].strip()[:200]
+ if output_preview:
+ prompt += f"\n Output: {output_preview}"
+ if result.get("error"):
+ prompt += f"\n Error: {result['error']}"
+
+ prompt += """
+
+Based on the above execution results, provide a helpful summary/answer for the user.
+Focus on:
+1. What was accomplished
+2. Any issues encountered and their impact
+3. Key findings or results from the commands
+4. Any recommendations for next steps
+
+Keep the response concise (2-4 paragraphs max). Do NOT include JSON in your response.
+Respond directly with the answer text only."""
+
+ try:
+            # Show a status spinner on the module-level console while the LLM generates the summary.
+            with console.status("[cyan]Generating summary...[/cyan]", spinner="dots"):
+ result = self.llm_callback(prompt)
+
+ if result:
+ # Handle different response formats
+ if isinstance(result, dict):
+ # Extract answer from various possible keys
+ answer = (
+ result.get("answer") or result.get("response") or result.get("text") or ""
+ )
+ if not answer and "reasoning" in result:
+ answer = result.get("reasoning", "")
+ elif isinstance(result, str):
+ answer = result
+ else:
+ return None
+
+ # Clean the answer
+ answer = answer.strip()
+
+ # Filter out JSON-like responses
+ if answer.startswith("{") or answer.startswith("["):
+ return None
+
+ return answer if answer else None
+ except Exception as e:
+ # Silently fail - summary is optional
+ import logging
+
+ logging.debug(f"LLM summary generation failed: {e}")
+ return None
+
+ return None
+
+ def _print_execution_summary(self, run: DoRun, answer: str | None = None):
+ """Print a condensed execution summary with improved visual design."""
+ from rich import box
+ from rich.panel import Panel
+ from rich.text import Text
+
+ # Count statuses
+ successful = [c for c in run.commands if c.status == CommandStatus.SUCCESS]
+ failed = [c for c in run.commands if c.status == CommandStatus.FAILED]
+ skipped = [c for c in run.commands if c.status == CommandStatus.SKIPPED]
+ interrupted = [c for c in run.commands if c.status == CommandStatus.INTERRUPTED]
+
+ total = len(run.commands)
+
+ # Build status header
+ console.print()
+
+ # Create a status bar
+ if total > 0:
+ status_text = Text()
+ status_text.append(" ")
+ if successful:
+ status_text.append(f"✓ {len(successful)} ", style="bold green")
+ if failed:
+ status_text.append(f"✗ {len(failed)} ", style="bold red")
+ if skipped:
+ status_text.append(f"○ {len(skipped)} ", style="bold yellow")
+ if interrupted:
+ status_text.append(f"⚠ {len(interrupted)} ", style="bold yellow")
+
+ # Calculate success rate
+ success_rate = (len(successful) / total * 100) if total > 0 else 0
+ status_text.append(f" ({success_rate:.0f}% success)", style="dim")
+
+ console.print(
+ Panel(
+ status_text,
+ title="[bold white on blue] 📊 Execution Status [/bold white on blue]",
+ title_align="left",
+ border_style="blue",
+ padding=(0, 1),
+ expand=False,
+ )
+ )
+
+ # Create a table for detailed results
+ if successful or failed or skipped:
+ result_table = Table(
+ show_header=True,
+ header_style="bold",
+ box=box.SIMPLE,
+ padding=(0, 1),
+ expand=True,
+ )
+ result_table.add_column("Status", width=8, justify="center")
+ result_table.add_column("Action", style="white")
+
+ # Add successful commands
+ for cmd in successful[:4]:
+ purpose = cmd.purpose[:60] + "..." if len(cmd.purpose) > 60 else cmd.purpose
+ result_table.add_row("[green]✓ Done[/green]", purpose)
+ if len(successful) > 4:
+ result_table.add_row(
+ "[dim]...[/dim]", f"[dim]and {len(successful) - 4} more completed[/dim]"
+ )
+
+ # Add failed commands
+ for cmd in failed[:2]:
+ error_short = (
+ (cmd.error[:40] + "...")
+ if cmd.error and len(cmd.error) > 40
+ else (cmd.error or "Unknown")
+ )
+ result_table.add_row(
+ "[red]✗ Failed[/red]", f"{cmd.command[:30]}... - {error_short}"
+ )
+
+ # Add skipped commands
+ for cmd in skipped[:2]:
+ purpose = cmd.purpose[:50] + "..." if len(cmd.purpose) > 50 else cmd.purpose
+ result_table.add_row("[yellow]○ Skip[/yellow]", purpose)
+
+ console.print(
+ Panel(
+ result_table,
+ title="[bold] 📋 Details [/bold]",
+ title_align="left",
+ border_style="dim",
+ padding=(0, 0),
+ )
+ )
+
+ # Answer section (for questions) - make it prominent
+ if answer:
+ # Clean the answer - remove any JSON-like content that might have leaked
+ clean_answer = answer
+ if clean_answer.startswith("{") or '{"' in clean_answer[:50]:
+ # Looks like JSON leaked through, try to extract readable parts
+ import re
+
+ # Try to extract just the answer field if present
+ answer_match = re.search(r'"answer"\s*:\s*"([^"]*)"', clean_answer)
+ if answer_match:
+ clean_answer = answer_match.group(1)
+
+ # Truncate very long answers
+ if len(clean_answer) > 500:
+ display_answer = clean_answer[:500] + "\n\n[dim]... (truncated)[/dim]"
+ else:
+ display_answer = clean_answer
+
+ console.print(
+ Panel(
+ display_answer,
+ title="[bold white on green] 💡 Answer [/bold white on green]",
+ title_align="left",
+ border_style="green",
+ padding=(1, 2),
+ )
+ )
+
+ def get_run_history(self, limit: int = 20) -> list[DoRun]:
+ """Get recent do run history."""
+ return self.db.get_recent_runs(limit)
+
+ def get_run(self, run_id: str) -> DoRun | None:
+ """Get a specific run by ID."""
+ return self.db.get_run(run_id)
+
+ # Expose diagnosis and auto-fix methods for external use
+ def _diagnose_error(self, cmd: str, stderr: str) -> dict[str, Any]:
+ """Diagnose a command failure."""
+ return self._diagnoser.diagnose_error(cmd, stderr)
+
+ def _auto_fix_error(
+ self,
+ cmd: str,
+ stderr: str,
+ diagnosis: dict[str, Any],
+ max_attempts: int = 5,
+ ) -> tuple[bool, str, list[str]]:
+ """Auto-fix an error."""
+ return self._auto_fixer.auto_fix_error(cmd, stderr, diagnosis, max_attempts)
+
+ def _check_for_conflicts(self, cmd: str, purpose: str) -> dict[str, Any]:
+ """Check for conflicts."""
+ return self._conflict_detector.check_for_conflicts(cmd, purpose)
+
+ def _run_verification_tests(
+ self,
+ commands_executed: list[CommandLog],
+ user_query: str,
+ ) -> tuple[bool, list[dict[str, Any]]]:
+ """Run verification tests."""
+ return self._verification_runner.run_verification_tests(commands_executed, user_query)
+
+ def _check_file_exists_and_usefulness(
+ self,
+ cmd: str,
+ purpose: str,
+ user_query: str,
+ ) -> dict[str, Any]:
+ """Check file existence and usefulness."""
+ return self._file_analyzer.check_file_exists_and_usefulness(cmd, purpose, user_query)
+
+ def _analyze_file_usefulness(
+ self,
+ content: str,
+ purpose: str,
+ user_query: str,
+ ) -> dict[str, Any]:
+ """Analyze file usefulness."""
+ return self._file_analyzer.analyze_file_usefulness(content, purpose, user_query)
+
+
+def setup_cortex_user() -> bool:
+ """Setup the cortex user if it doesn't exist."""
+ handler = DoHandler()
+ return handler.setup_cortex_user()
+
+
+def get_do_handler() -> DoHandler:
+ """Get a DoHandler instance."""
+ return DoHandler()
diff --git a/cortex/do_runner/managers.py b/cortex/do_runner/managers.py
new file mode 100644
index 00000000..412a7d5b
--- /dev/null
+++ b/cortex/do_runner/managers.py
@@ -0,0 +1,293 @@
+"""User and path management for the Do Runner module."""
+
+import json
+import os
+import pwd
+import subprocess
+from pathlib import Path
+
+from rich.console import Console
+
+console = Console()
+
+
+class ProtectedPathsManager:
+ """Manages the list of protected files and folders requiring user authentication."""
+
+ SYSTEM_PROTECTED_PATHS: set[str] = {
+ # System configuration
+ "/etc",
+ "/etc/passwd",
+ "/etc/shadow",
+ "/etc/sudoers",
+ "/etc/sudoers.d",
+ "/etc/ssh",
+ "/etc/ssl",
+ "/etc/pam.d",
+ "/etc/security",
+ "/etc/cron.d",
+ "/etc/cron.daily",
+ "/etc/crontab",
+ "/etc/systemd",
+ "/etc/init.d",
+ # Boot and kernel
+ "/boot",
+ "/boot/grub",
+ # System binaries
+ "/usr/bin",
+ "/usr/sbin",
+ "/sbin",
+ "/bin",
+ # Root directory
+ "/root",
+ # System libraries
+ "/lib",
+ "/lib64",
+ "/usr/lib",
+ # Var system data
+ "/var/log",
+ "/var/lib/apt",
+ "/var/lib/dpkg",
+ # Proc and sys (virtual filesystems)
+ "/proc",
+ "/sys",
+ }
+
+ USER_PROTECTED_PATHS: set[str] = set()
+
+ def __init__(self):
+ self.config_file = Path.home() / ".cortex" / "protected_paths.json"
+ self._ensure_config_dir()
+ self._load_user_paths()
+
+ def _ensure_config_dir(self):
+ """Ensure the config directory exists."""
+ try:
+ self.config_file.parent.mkdir(parents=True, exist_ok=True)
+ except OSError:
+ self.config_file = Path("/tmp") / ".cortex" / "protected_paths.json"
+ self.config_file.parent.mkdir(parents=True, exist_ok=True)
+
+ def _load_user_paths(self):
+ """Load user-configured protected paths."""
+ if self.config_file.exists():
+ try:
+ with open(self.config_file) as f:
+ data = json.load(f)
+ self.USER_PROTECTED_PATHS = set(data.get("paths", []))
+ except (json.JSONDecodeError, OSError):
+ pass
+
+ def _save_user_paths(self):
+ """Save user-configured protected paths."""
+ try:
+ self.config_file.parent.mkdir(parents=True, exist_ok=True)
+ with open(self.config_file, "w") as f:
+ json.dump({"paths": list(self.USER_PROTECTED_PATHS)}, f, indent=2)
+ except OSError as e:
+ console.print(f"[yellow]Warning: Could not save protected paths: {e}[/yellow]")
+
+ def add_protected_path(self, path: str) -> bool:
+ """Add a path to user-protected paths."""
+ self.USER_PROTECTED_PATHS.add(path)
+ self._save_user_paths()
+ return True
+
+ def remove_protected_path(self, path: str) -> bool:
+ """Remove a path from user-protected paths."""
+ if path in self.USER_PROTECTED_PATHS:
+ self.USER_PROTECTED_PATHS.discard(path)
+ self._save_user_paths()
+ return True
+ return False
+
+ def is_protected(self, path: str) -> bool:
+ """Check if a path requires authentication for access."""
+ path = os.path.abspath(path)
+ all_protected = self.SYSTEM_PROTECTED_PATHS | self.USER_PROTECTED_PATHS
+
+ if path in all_protected:
+ return True
+
+ for protected in all_protected:
+ if path.startswith(protected + "/") or path == protected:
+ return True
+
+ return False
+
+ def get_all_protected(self) -> list[str]:
+ """Get all protected paths."""
+ return sorted(self.SYSTEM_PROTECTED_PATHS | self.USER_PROTECTED_PATHS)
+
+
+class CortexUserManager:
+ """Manages the cortex system user for privilege-limited execution."""
+
+ CORTEX_USER = "cortex"
+ CORTEX_GROUP = "cortex"
+
+ @classmethod
+ def user_exists(cls) -> bool:
+ """Check if the cortex user exists."""
+ try:
+ pwd.getpwnam(cls.CORTEX_USER)
+ return True
+ except KeyError:
+ return False
+
+ @classmethod
+ def create_user(cls) -> tuple[bool, str]:
+ """Create the cortex user with basic privileges."""
+ if cls.user_exists():
+ return True, "Cortex user already exists"
+
+ try:
+ subprocess.run(
+ ["sudo", "groupadd", "-f", cls.CORTEX_GROUP],
+ check=True,
+ capture_output=True,
+ )
+
+ subprocess.run(
+ [
+ "sudo",
+ "useradd",
+ "-r",
+ "-g",
+ cls.CORTEX_GROUP,
+ "-d",
+ "/var/lib/cortex",
+ "-s",
+ "/bin/bash",
+ "-m",
+ cls.CORTEX_USER,
+ ],
+ check=True,
+ capture_output=True,
+ )
+
+ subprocess.run(
+ ["sudo", "mkdir", "-p", "/var/lib/cortex/workspace"],
+ check=True,
+ capture_output=True,
+ )
+ subprocess.run(
+ ["sudo", "chown", "-R", f"{cls.CORTEX_USER}:{cls.CORTEX_GROUP}", "/var/lib/cortex"],
+ check=True,
+ capture_output=True,
+ )
+
+ return True, "Cortex user created successfully"
+
+ except subprocess.CalledProcessError as e:
+ return (
+ False,
+ f"Failed to create cortex user: {e.stderr.decode() if e.stderr else str(e)}",
+ )
+
+ @classmethod
+ def grant_privilege(cls, file_path: str, mode: str = "rw") -> tuple[bool, str]:
+ """Grant cortex user privilege to access a specific file."""
+ if not cls.user_exists():
+ return False, "Cortex user does not exist. Run setup first."
+
+ try:
+ acl_mode = ""
+ if "r" in mode:
+ acl_mode += "r"
+ if "w" in mode:
+ acl_mode += "w"
+ if "x" in mode:
+ acl_mode += "x"
+
+ if not acl_mode:
+ acl_mode = "r"
+
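+            # e.g. mode="rw" on /etc/hosts runs: sudo setfacl -m u:cortex:rw /etc/hosts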
+ subprocess.run(
+ ["sudo", "setfacl", "-m", f"u:{cls.CORTEX_USER}:{acl_mode}", file_path],
+ check=True,
+ capture_output=True,
+ )
+
+ return True, f"Granted {acl_mode} access to {file_path}"
+
+ except subprocess.CalledProcessError as e:
+ error_msg = e.stderr.decode() if e.stderr else str(e)
+ if "setfacl" in error_msg or "not found" in error_msg.lower():
+ return cls._grant_privilege_chmod(file_path, mode)
+ return False, f"Failed to grant privilege: {error_msg}"
+
+ @classmethod
+ def _grant_privilege_chmod(cls, file_path: str, mode: str) -> tuple[bool, str]:
+ """Fallback privilege granting using chmod."""
+ try:
+ chmod_mode = ""
+ if "r" in mode:
+ chmod_mode = "o+r"
+ if "w" in mode:
+ chmod_mode = "o+rw" if chmod_mode else "o+w"
+ if "x" in mode:
+ chmod_mode = chmod_mode + "x" if chmod_mode else "o+x"
+
+ subprocess.run(
+ ["sudo", "chmod", chmod_mode, file_path],
+ check=True,
+ capture_output=True,
+ )
+ return True, f"Granted {mode} access to {file_path} (chmod fallback)"
+
+ except subprocess.CalledProcessError as e:
+ return False, f"Failed to grant privilege: {e.stderr.decode() if e.stderr else str(e)}"
+
+ @classmethod
+ def revoke_privilege(cls, file_path: str) -> tuple[bool, str]:
+ """Revoke cortex user's privilege from a specific file."""
+ try:
+ subprocess.run(
+ ["sudo", "setfacl", "-x", f"u:{cls.CORTEX_USER}", file_path],
+ check=True,
+ capture_output=True,
+ )
+ return True, f"Revoked access to {file_path}"
+
+ except subprocess.CalledProcessError as e:
+ error_msg = e.stderr.decode() if e.stderr else str(e)
+ if "setfacl" in error_msg or "not found" in error_msg.lower():
+ return cls._revoke_privilege_chmod(file_path)
+ return False, f"Failed to revoke privilege: {error_msg}"
+
+ @classmethod
+ def _revoke_privilege_chmod(cls, file_path: str) -> tuple[bool, str]:
+ """Fallback privilege revocation using chmod."""
+ try:
+ subprocess.run(
+ ["sudo", "chmod", "o-rwx", file_path],
+ check=True,
+ capture_output=True,
+ )
+ return True, f"Revoked access to {file_path} (chmod fallback)"
+ except subprocess.CalledProcessError as e:
+ return False, f"Failed to revoke privilege: {e.stderr.decode() if e.stderr else str(e)}"
+
+ @classmethod
+ def run_as_cortex(cls, command: str, timeout: int = 60) -> tuple[bool, str, str]:
+ """Execute a command as the cortex user."""
+ if not cls.user_exists():
+ return False, "", "Cortex user does not exist"
+
+ try:
+ result = subprocess.run(
+ ["sudo", "-u", cls.CORTEX_USER, "bash", "-c", command],
+ capture_output=True,
+ text=True,
+ timeout=timeout,
+ )
+ return (
+ result.returncode == 0,
+ result.stdout.strip(),
+ result.stderr.strip(),
+ )
+ except subprocess.TimeoutExpired:
+ return False, "", f"Command timed out after {timeout} seconds"
+ except Exception as e:
+ return False, "", str(e)
diff --git a/cortex/do_runner/models.py b/cortex/do_runner/models.py
new file mode 100644
index 00000000..8bb5fe39
--- /dev/null
+++ b/cortex/do_runner/models.py
@@ -0,0 +1,361 @@
+"""Data models and enums for the Do Runner module."""
+
+from dataclasses import dataclass, field
+from enum import Enum
+from typing import Any
+
+from rich.console import Console
+
+console = Console()
+
+
+class CommandStatus(str, Enum):
+ """Status of a command execution."""
+
+ PENDING = "pending"
+ RUNNING = "running"
+ SUCCESS = "success"
+ FAILED = "failed"
+ SKIPPED = "skipped"
+ NEEDS_REPAIR = "needs_repair"
+ INTERRUPTED = "interrupted" # Command stopped by Ctrl+Z/Ctrl+C
+
+
+class RunMode(str, Enum):
+ """Mode of execution for a do run."""
+
+ CORTEX_EXEC = "cortex_exec"
+ USER_MANUAL = "user_manual"
+
+
+class TaskType(str, Enum):
+ """Type of task in the task tree."""
+
+ COMMAND = "command"
+ DIAGNOSTIC = "diagnostic"
+ REPAIR = "repair"
+ VERIFY = "verify"
+ ALTERNATIVE = "alternative"
+
+
+@dataclass
+class TaskNode:
+ """A node in the task tree representing a command or action."""
+
+ id: str
+ task_type: TaskType
+ command: str
+ purpose: str
+ status: CommandStatus = CommandStatus.PENDING
+
+ # Execution results
+ output: str = ""
+ error: str = ""
+ duration_seconds: float = 0.0
+
+ # Tree structure
+ parent_id: str | None = None
+ children: list["TaskNode"] = field(default_factory=list)
+
+ # Repair context
+ failure_reason: str = ""
+ repair_attempts: int = 0
+ max_repair_attempts: int = 3
+
+ # Reasoning
+ reasoning: str = ""
+
+ def to_dict(self) -> dict[str, Any]:
+ return {
+ "id": self.id,
+ "task_type": self.task_type.value,
+ "command": self.command,
+ "purpose": self.purpose,
+ "status": self.status.value,
+ "output": self.output,
+ "error": self.error,
+ "duration_seconds": self.duration_seconds,
+ "parent_id": self.parent_id,
+ "children": [c.to_dict() for c in self.children],
+ "failure_reason": self.failure_reason,
+ "repair_attempts": self.repair_attempts,
+ "reasoning": self.reasoning,
+ }
+
+    def add_child(self, child: "TaskNode"):
+        """Add a child task."""
+        child.parent_id = self.id
+        # Keep a direct reference so get_depth() can walk up the tree
+        # (parent_id alone cannot be resolved without the owning TaskTree).
+        child._parent = self  # type: ignore[attr-defined]
+        self.children.append(child)
+
+    def get_depth(self) -> int:
+        """Get the depth of this node in the tree (root nodes have depth 0)."""
+        depth = 0
+        node = getattr(self, "_parent", None)
+        while node is not None:
+            depth += 1
+            node = getattr(node, "_parent", None)
+        return depth
+
+
+class TaskTree:
+ """A tree structure for managing commands with auto-repair capabilities."""
+
+ def __init__(self):
+ self.root_tasks: list[TaskNode] = []
+ self._task_counter = 0
+ self._all_tasks: dict[str, TaskNode] = {}
+
+ def _generate_task_id(self, prefix: str = "task") -> str:
+ """Generate a unique task ID."""
+ self._task_counter += 1
+ return f"{prefix}_{self._task_counter}"
+
+ def add_root_task(
+ self,
+ command: str,
+ purpose: str,
+ task_type: TaskType = TaskType.COMMAND,
+ ) -> TaskNode:
+ """Add a root-level task."""
+ task = TaskNode(
+ id=self._generate_task_id(task_type.value),
+ task_type=task_type,
+ command=command,
+ purpose=purpose,
+ )
+ self.root_tasks.append(task)
+ self._all_tasks[task.id] = task
+ return task
+
+ def add_repair_task(
+ self,
+ parent: TaskNode,
+ command: str,
+ purpose: str,
+ reasoning: str = "",
+ ) -> TaskNode:
+ """Add a repair sub-task to a failed task."""
+ task = TaskNode(
+ id=self._generate_task_id("repair"),
+ task_type=TaskType.REPAIR,
+ command=command,
+ purpose=purpose,
+ reasoning=reasoning,
+ )
+ parent.add_child(task)
+ self._all_tasks[task.id] = task
+ return task
+
+ def add_diagnostic_task(
+ self,
+ parent: TaskNode,
+ command: str,
+ purpose: str,
+ ) -> TaskNode:
+ """Add a diagnostic sub-task to investigate a failure."""
+ task = TaskNode(
+ id=self._generate_task_id("diag"),
+ task_type=TaskType.DIAGNOSTIC,
+ command=command,
+ purpose=purpose,
+ )
+ parent.add_child(task)
+ self._all_tasks[task.id] = task
+ return task
+
+ def add_verify_task(
+ self,
+ parent: TaskNode,
+ command: str,
+ purpose: str,
+ ) -> TaskNode:
+ """Add a verification task after a repair."""
+ task = TaskNode(
+ id=self._generate_task_id("verify"),
+ task_type=TaskType.VERIFY,
+ command=command,
+ purpose=purpose,
+ )
+ parent.add_child(task)
+ self._all_tasks[task.id] = task
+ return task
+
+ def add_alternative_task(
+ self,
+ parent: TaskNode,
+ command: str,
+ purpose: str,
+ reasoning: str = "",
+ ) -> TaskNode:
+ """Add an alternative approach when the original fails."""
+ task = TaskNode(
+ id=self._generate_task_id("alt"),
+ task_type=TaskType.ALTERNATIVE,
+ command=command,
+ purpose=purpose,
+ reasoning=reasoning,
+ )
+ parent.add_child(task)
+ self._all_tasks[task.id] = task
+ return task
+
+ def get_task(self, task_id: str) -> TaskNode | None:
+ """Get a task by ID."""
+ return self._all_tasks.get(task_id)
+
+ def get_pending_tasks(self) -> list[TaskNode]:
+ """Get all pending tasks in order."""
+ pending = []
+ for root in self.root_tasks:
+ self._collect_pending(root, pending)
+ return pending
+
+ def _collect_pending(self, node: TaskNode, pending: list[TaskNode]):
+ """Recursively collect pending tasks."""
+ if node.status == CommandStatus.PENDING:
+ pending.append(node)
+ for child in node.children:
+ self._collect_pending(child, pending)
+
+ def get_failed_tasks(self) -> list[TaskNode]:
+ """Get all failed tasks."""
+ return [t for t in self._all_tasks.values() if t.status == CommandStatus.FAILED]
+
+ def get_summary(self) -> dict[str, int]:
+ """Get a summary of task statuses."""
+ summary = {status.value: 0 for status in CommandStatus}
+ for task in self._all_tasks.values():
+ summary[task.status.value] += 1
+ return summary
+
+ def to_dict(self) -> dict[str, Any]:
+ """Convert tree to dictionary."""
+ return {
+ "root_tasks": [t.to_dict() for t in self.root_tasks],
+ "summary": self.get_summary(),
+ }
+
+ def print_tree(self, indent: str = ""):
+ """Print the task tree structure."""
+ for i, root in enumerate(self.root_tasks):
+ is_last = i == len(self.root_tasks) - 1
+ self._print_node(root, indent, is_last)
+
+ def _print_node(self, node: TaskNode, indent: str, is_last: bool):
+ """Print a single node with its children."""
+ status_icons = {
+ CommandStatus.PENDING: "[dim]○[/dim]",
+ CommandStatus.RUNNING: "[cyan]◐[/cyan]",
+ CommandStatus.SUCCESS: "[green]✓[/green]",
+ CommandStatus.FAILED: "[red]✗[/red]",
+ CommandStatus.SKIPPED: "[yellow]○[/yellow]",
+ CommandStatus.NEEDS_REPAIR: "[yellow]⚡[/yellow]",
+            CommandStatus.INTERRUPTED: "[red]◌[/red]",
+        }
+
+ type_colors = {
+ TaskType.COMMAND: "white",
+ TaskType.DIAGNOSTIC: "cyan",
+ TaskType.REPAIR: "yellow",
+ TaskType.VERIFY: "blue",
+ TaskType.ALTERNATIVE: "magenta",
+ }
+
+ icon = status_icons.get(node.status, "?")
+ color = type_colors.get(node.task_type, "white")
+ prefix = "└── " if is_last else "├── "
+
+        cmd_display = node.command[:50] + "..." if len(node.command) > 50 else node.command
+        console.print(
+            f"{indent}{prefix}{icon} [{color}][{node.task_type.value}][/{color}] {cmd_display}"
+        )
+
+ if node.reasoning:
+ console.print(
+ f"{indent}{' ' if is_last else '│ '}[dim]Reason: {node.reasoning}[/dim]"
+ )
+
+ child_indent = indent + (" " if is_last else "│ ")
+ for j, child in enumerate(node.children):
+ self._print_node(child, child_indent, j == len(node.children) - 1)
+
+
+@dataclass
+class CommandLog:
+ """Log entry for a single command execution."""
+
+ command: str
+ purpose: str
+ timestamp: str
+ status: CommandStatus
+ output: str = ""
+ error: str = ""
+ duration_seconds: float = 0.0
+ useful: bool = True
+
+ def to_dict(self) -> dict[str, Any]:
+ return {
+ "command": self.command,
+ "purpose": self.purpose,
+ "timestamp": self.timestamp,
+ "status": self.status.value,
+ "output": self.output,
+ "error": self.error,
+ "duration_seconds": self.duration_seconds,
+ "useful": self.useful,
+ }
+
+ @classmethod
+ def from_dict(cls, data: dict[str, Any]) -> "CommandLog":
+ return cls(
+ command=data["command"],
+ purpose=data["purpose"],
+ timestamp=data["timestamp"],
+ status=CommandStatus(data["status"]),
+ output=data.get("output", ""),
+ error=data.get("error", ""),
+ duration_seconds=data.get("duration_seconds", 0.0),
+ useful=data.get("useful", True),
+ )
+
+
+@dataclass
+class DoRun:
+ """Represents a complete do run session."""
+
+ run_id: str
+ summary: str
+ mode: RunMode
+ commands: list[CommandLog] = field(default_factory=list)
+ started_at: str = ""
+ completed_at: str = ""
+ user_query: str = ""
+ files_accessed: list[str] = field(default_factory=list)
+ privileges_granted: list[str] = field(default_factory=list)
+ session_id: str = ""
+
+ def to_dict(self) -> dict[str, Any]:
+ return {
+ "run_id": self.run_id,
+ "summary": self.summary,
+ "mode": self.mode.value,
+ "commands": [cmd.to_dict() for cmd in self.commands],
+ "started_at": self.started_at,
+ "completed_at": self.completed_at,
+ "user_query": self.user_query,
+ "files_accessed": self.files_accessed,
+ "privileges_granted": self.privileges_granted,
+ "session_id": self.session_id,
+ }
+
+ def get_commands_log_string(self) -> str:
+ """Get all commands as a formatted string for storage."""
+ lines = []
+ for cmd in self.commands:
+ lines.append(f"[{cmd.timestamp}] [{cmd.status.value.upper()}] {cmd.command}")
+ lines.append(f" Purpose: {cmd.purpose}")
+ if cmd.output:
+ lines.append(f" Output: {cmd.output[:500]}...")
+ if cmd.error:
+ lines.append(f" Error: {cmd.error}")
+ lines.append(f" Duration: {cmd.duration_seconds:.2f}s | Useful: {cmd.useful}")
+ lines.append("")
+ return "\n".join(lines)
diff --git a/cortex/do_runner/terminal.py b/cortex/do_runner/terminal.py
new file mode 100644
index 00000000..d7fc9c0d
--- /dev/null
+++ b/cortex/do_runner/terminal.py
@@ -0,0 +1,2573 @@
+"""Terminal monitoring for the manual execution flow."""
+
+import datetime
+import json
+import os
+import re
+import subprocess
+import threading
+import time
+from collections.abc import Callable
+from pathlib import Path
+from typing import Any
+
+from rich.console import Console
+
+console = Console()
+
+# Dracula-Inspired Theme Colors
+PURPLE = "#bd93f9" # Dracula purple
+PURPLE_LIGHT = "#ff79c6" # Dracula pink
+PURPLE_DARK = "#6272a4" # Dracula comment
+WHITE = "#f8f8f2" # Dracula foreground
+GRAY = "#6272a4" # Dracula comment
+GREEN = "#50fa7b" # Dracula green
+RED = "#ff5555" # Dracula red
+YELLOW = "#f1fa8c" # Dracula yellow
+CYAN = "#8be9fd" # Dracula cyan
+ORANGE = "#ffb86c" # Dracula orange
+
+# Round Icons
+ICON_MONITOR = "◉"
+ICON_SUCCESS = "●"
+ICON_ERROR = "●"
+ICON_INFO = "○"
+ICON_PENDING = "◐"
+ICON_ARROW = "→"
+
+
+class ClaudeLLM:
+ """Claude LLM client using the LLMRouter for intelligent error analysis."""
+
+ def __init__(self):
+ self._router = None
+ self._available: bool | None = None
+
+ def _get_router(self):
+ """Lazy initialize the router."""
+ if self._router is None:
+ try:
+ from cortex.llm_router import LLMRouter, TaskType
+
+ self._router = LLMRouter()
+ self._task_type = TaskType
+ except Exception:
+ self._router = False # Mark as failed
+ return self._router if self._router else None
+
+ def is_available(self) -> bool:
+ """Check if Claude API is available."""
+ if self._available is not None:
+ return self._available
+
+ router = self._get_router()
+ self._available = router is not None and router.claude_client is not None
+ return self._available
+
+ def analyze_error(self, command: str, error_output: str, max_tokens: int = 300) -> dict | None:
+ """Analyze an error using Claude and return diagnosis with solution."""
+ router = self._get_router()
+ if not router:
+ return None
+
+ try:
+ messages = [
+ {
+ "role": "system",
+ "content": """You are a Linux system debugging expert. Analyze the command error and provide:
+1. Root cause (1 sentence)
+2. Solution (1-2 specific commands to fix it)
+
+IMPORTANT: Do NOT suggest commands that require sudo/root privileges, as they cannot be auto-executed.
+Only suggest commands that can run as a regular user, such as:
+- Checking status (docker ps, systemctl status --user, etc.)
+- User-level config fixes
+- Environment variable exports
+- File operations in user directories
+
+If the ONLY fix requires sudo, explain what needs to be done but prefix the command with "# MANUAL: "
+
+Be concise. Output format:
+CAUSE: <one-sentence root cause>
+FIX: <command 1>
+FIX: <command 2, if needed>""",
+ },
+ {"role": "user", "content": f"Command: {command}\n\nError:\n{error_output[:500]}"},
+ ]
+
+ response = router.complete(
+ messages=messages,
+ task_type=self._task_type.ERROR_DEBUGGING,
+ max_tokens=max_tokens,
+ temperature=0.3,
+ )
+
+ # Parse response
+ content = response.content
+ result = {"cause": "", "fixes": [], "raw": content}
+
+ for line in content.split("\n"):
+ line = line.strip()
+ if line.upper().startswith("CAUSE:"):
+ result["cause"] = line[6:].strip()
+ elif line.upper().startswith("FIX:"):
+ fix = line[4:].strip()
+ if fix and not fix.startswith("#"):
+ result["fixes"].append(fix)
+
+ return result
+
+ except Exception as e:
+ console.print(f"[{GRAY}]Claude analysis error: {e}[/{GRAY}]")
+ return None
+
+
+class LocalLLM:
+ """Local LLM client using Ollama with Mistral (fallback)."""
+
+ def __init__(self, model: str = "mistral"):
+ self.model = model
+ self._available: bool | None = None
+
+ def is_available(self) -> bool:
+ """Check if Ollama with the model is available."""
+ if self._available is not None:
+ return self._available
+
+ try:
+ result = subprocess.run(["ollama", "list"], capture_output=True, text=True, timeout=5)
+ self._available = result.returncode == 0 and self.model in result.stdout
+ if not self._available:
+ # Try to check if ollama is running at least
+ result = subprocess.run(
+ ["curl", "-s", "http://localhost:11434/api/tags"],
+ capture_output=True,
+ text=True,
+ timeout=5,
+ )
+ if result.returncode == 0:
+ self._available = self.model in result.stdout
+        except Exception:  # covers TimeoutExpired, FileNotFoundError, etc.
+ self._available = False
+
+ return self._available
+
+ def analyze(self, prompt: str, max_tokens: int = 200, timeout: int = 10) -> str | None:
+ """Call the local LLM for analysis."""
+ if not self.is_available():
+ return None
+
+ try:
+ import urllib.error
+ import urllib.request
+
+ # Use Ollama API directly via urllib (faster than curl subprocess)
+ data = json.dumps(
+ {
+ "model": self.model,
+ "prompt": prompt,
+ "stream": False,
+ "options": {
+ "num_predict": max_tokens,
+ "temperature": 0.3,
+ },
+ }
+ ).encode("utf-8")
+
+ req = urllib.request.Request(
+ "http://localhost:11434/api/generate",
+ data=data,
+ headers={"Content-Type": "application/json"},
+ )
+
+ with urllib.request.urlopen(req, timeout=timeout) as response:
+ result = json.loads(response.read().decode("utf-8"))
+ return result.get("response", "").strip()
+
+        except Exception:  # URLError, JSONDecodeError, timeouts, etc.
+ pass
+
+ return None
+
+
+class TerminalMonitor:
+ """
+ Monitors terminal commands for the manual execution flow.
+
+ Monitors ALL terminal sources by default:
+ - Bash history file (~/.bash_history)
+ - Zsh history file (~/.zsh_history)
+ - Fish history file (~/.local/share/fish/fish_history)
+ - ALL Cursor terminal files (all projects)
+ - External terminal output files
+ """
+
+ def __init__(
+ self, notification_callback: Callable[[str, str], None] | None = None, use_llm: bool = True
+ ):
+ self.notification_callback = notification_callback
+ self._monitoring = False
+ self._monitor_thread: threading.Thread | None = None
+ self._commands_observed: list[dict[str, Any]] = []
+ self._lock = threading.Lock()
+ self._cursor_terminals_dirs: list[Path] = []
+ self._expected_commands: list[str] = []
+ self._shell_history_files: list[Path] = []
+ self._output_buffer: list[dict[str, Any]] = [] # Buffer for terminal output
+ self._show_live_output = True # Whether to print live output
+
+ # Claude LLM for intelligent error analysis (primary)
+ self._use_llm = use_llm
+ self._claude: ClaudeLLM | None = None
+ self._llm: LocalLLM | None = None # Fallback
+ if use_llm:
+ self._claude = ClaudeLLM()
+ self._llm = LocalLLM(model="mistral") # Keep as fallback
+
+ # Context for LLM
+ self._session_context: list[str] = [] # Recent commands for context
+
+ # Use existing auto-fix architecture
+ from cortex.do_runner.diagnosis import AutoFixer, ErrorDiagnoser
+
+ self._diagnoser = ErrorDiagnoser()
+ self._auto_fixer = AutoFixer(llm_callback=self._llm_for_autofix if use_llm else None)
+
+ # Notification manager for desktop notifications
+ self.notifier = self._create_notifier()
+
+ # Discover all terminal sources
+ self._discover_terminal_sources()
+
+ def _create_notifier(self):
+ """Create notification manager for desktop notifications."""
+ try:
+ from cortex.notification_manager import NotificationManager
+
+ return NotificationManager()
+ except ImportError:
+ return None
+
+ def _llm_for_autofix(self, prompt: str) -> dict:
+ """LLM callback for the AutoFixer."""
+ if not self._llm or not self._llm.is_available():
+ return {}
+
+ result = self._llm.analyze(prompt, max_tokens=200, timeout=15)
+ if result:
+ return {"response": result, "fix_commands": []}
+ return {}
+
+ def _discover_terminal_sources(self, verbose: bool = False):
+ """Discover all available terminal sources to monitor."""
+ home = Path.home()
+
+ # Reset lists
+ self._shell_history_files = []
+ self._cursor_terminals_dirs = []
+
+ # Shell history files
+ shell_histories = [
+ home / ".bash_history", # Bash
+ home / ".zsh_history", # Zsh
+ home / ".history", # Generic
+ home / ".sh_history", # Sh
+ home / ".local" / "share" / "fish" / "fish_history", # Fish
+ home / ".ksh_history", # Korn shell
+ home / ".tcsh_history", # Tcsh
+ ]
+
+ for hist_file in shell_histories:
+ if hist_file.exists():
+ self._shell_history_files.append(hist_file)
+ if verbose:
+ console.print(f"[{GRAY}]{ICON_INFO} Monitoring: {hist_file}[/{GRAY}]")
+
+ # Find ALL Cursor terminal directories (all projects)
+ cursor_base = home / ".cursor" / "projects"
+ if cursor_base.exists():
+ for project_dir in cursor_base.iterdir():
+ if project_dir.is_dir():
+ terminals_path = project_dir / "terminals"
+ if terminals_path.exists():
+ self._cursor_terminals_dirs.append(terminals_path)
+ if verbose:
+ console.print(
+ f"[{GRAY}]{ICON_INFO} Monitoring Cursor terminals: {terminals_path.parent.name}[/{GRAY}]"
+ )
+
+ # Also check for tmux/screen panes
+ self._tmux_available = self._check_command_exists("tmux")
+ self._screen_available = self._check_command_exists("screen")
+
+ if verbose:
+ if self._tmux_available:
+ console.print(
+ f"[{GRAY}]{ICON_INFO} Tmux detected - will monitor tmux panes[/{GRAY}]"
+ )
+ if self._screen_available:
+ console.print(
+ f"[{GRAY}]{ICON_INFO} Screen detected - will monitor screen sessions[/{GRAY}]"
+ )
+
+ def _check_command_exists(self, cmd: str) -> bool:
+ """Check if a command exists in PATH."""
+ import shutil
+
+ return shutil.which(cmd) is not None
+
+ def start(
+ self,
+ verbose: bool = True,
+ show_live: bool = True,
+ expected_commands: list[str] | None = None,
+ ):
+ """Start monitoring terminal for commands."""
+ self.start_monitoring(
+ expected_commands=expected_commands, verbose=verbose, show_live=show_live
+ )
+
+ def _is_service_running(self) -> bool:
+ """Check if the Cortex Watch systemd service is running."""
+ try:
+ result = subprocess.run(
+ ["systemctl", "--user", "is-active", "cortex-watch.service"],
+ capture_output=True,
+ text=True,
+ timeout=3,
+ )
+ return result.stdout.strip() == "active"
+ except Exception:
+ return False
+
+ def start_monitoring(
+ self,
+ expected_commands: list[str] | None = None,
+ verbose: bool = True,
+ show_live: bool = True,
+ clear_old_logs: bool = True,
+ ):
+ """Start monitoring ALL terminal sources for commands."""
+ self._monitoring = True
+ self._expected_commands = expected_commands or []
+ self._show_live_output = show_live
+ self._output_buffer = []
+ self._session_context = []
+
+ # Mark this terminal as the Cortex terminal so watch hook won't log its commands
+ os.environ["CORTEX_TERMINAL"] = "1"
+
+ # Record the monitoring start time to filter out old commands
+ self._monitoring_start_time = datetime.datetime.now()
+
+ # Always clear old watch log to start fresh - this prevents reading old session commands
+ watch_file = self.get_watch_file_path()
+ if watch_file.exists():
+ # Truncate the file to clear old commands from previous sessions
+ watch_file.write_text("")
+
+ # Also record starting positions for bash/zsh history files
+ self._history_start_positions: dict[str, int] = {}
+ for hist_file in [Path.home() / ".bash_history", Path.home() / ".zsh_history"]:
+ if hist_file.exists():
+ self._history_start_positions[str(hist_file)] = hist_file.stat().st_size
+
+ # Re-discover sources in case new terminals opened
+ self._discover_terminal_sources(verbose=verbose)
+
+        # Check LLM availability (Claude is the primary analyzer, local Mistral the fallback)
+        llm_status = ""
+        if self._use_llm:
+            if self._claude and self._claude.is_available():
+                llm_status = (
+                    f"\n[{GREEN}]{ICON_SUCCESS} AI Analysis: Claude (primary) - Active[/{GREEN}]"
+                )
+            elif self._llm and self._llm.is_available():
+                llm_status = (
+                    f"\n[{GREEN}]{ICON_SUCCESS} AI Analysis: Mistral (local fallback) - Active[/{GREEN}]"
+                )
+            else:
+                llm_status = f"\n[{YELLOW}]{ICON_PENDING} AI Analysis: not available (install a local model with: ollama pull mistral)[/{YELLOW}]"
+
+ if verbose:
+ from rich.panel import Panel
+
+ watch_file = self.get_watch_file_path()
+ source_file = Path.home() / ".cortex" / "watch_hook.sh"
+
+ # Check if systemd service is running (best option)
+ service_running = self._is_service_running()
+
+ # Check if auto-watch is already set up
+ bashrc = Path.home() / ".bashrc"
+ hook_installed = False
+ if bashrc.exists() and "Cortex Terminal Watch Hook" in bashrc.read_text():
+ hook_installed = True
+
+ # If service is running, we don't need the hook
+ if service_running:
+ setup_info = (
+ f"[{GREEN}]{ICON_SUCCESS} Cortex Watch Service is running[/{GREEN}]\n"
+ f"[{GRAY}]All terminal activity is being monitored automatically![/{GRAY}]"
+ )
+ else:
+ # Not using the service, need to set up hooks
+ if not hook_installed:
+ # Auto-install the hook to .bashrc
+ self.setup_auto_watch(permanent=True)
+ hook_installed = True # Now installed
+
+ # Ensure source file exists
+ self.setup_auto_watch(permanent=False)
+
+ # Create a super short activation command
+ short_cmd = f"source {source_file}"
+
+ # Try to copy to clipboard
+ clipboard_copied = False
+ try:
+ # Try xclip first, then xsel
+ for clip_cmd in [
+ ["xclip", "-selection", "clipboard"],
+ ["xsel", "--clipboard", "--input"],
+ ]:
+ try:
+ proc = subprocess.run(
+ clip_cmd, input=short_cmd.encode(), capture_output=True, timeout=2
+ )
+ if proc.returncode == 0:
+ clipboard_copied = True
+ break
+ except (FileNotFoundError, subprocess.TimeoutExpired):
+ continue
+ except Exception:
+ pass
+
+ if hook_installed:
+ clipboard_msg = (
+ f"[{GREEN}]📋 Copied to clipboard![/{GREEN}] " if clipboard_copied else ""
+ )
+ setup_info = (
+ f"[{GREEN}]{ICON_SUCCESS} Terminal watch hook is installed in .bashrc[/{GREEN}]\n"
+ f"[{GRAY}](New terminals will auto-activate)[/{GRAY}]\n\n"
+ f"[bold {YELLOW}]For EXISTING terminals, paste this:[/bold {YELLOW}]\n"
+ f"[bold {PURPLE_LIGHT}]{short_cmd}[/bold {PURPLE_LIGHT}]\n"
+ f"{clipboard_msg}\n"
+ f"[{GRAY}]Or type [/{GRAY}][{GREEN}]cortex watch --install --service[/{GREEN}][{GRAY}] for automatic monitoring![/{GRAY}]"
+ )
+
+ # Send desktop notification with the command
+ try:
+ msg = f"Paste in your OTHER terminal:\n\n{short_cmd}"
+ if clipboard_copied:
+ msg += "\n\n(Already copied to clipboard!)"
+ subprocess.run(
+ [
+ "notify-send",
+ "--urgency=critical",
+ "--icon=dialog-warning",
+ "--expire-time=15000",
+ "⚠️ Cortex: Activate Terminal Watching",
+ msg,
+ ],
+ capture_output=True,
+ timeout=2,
+ )
+ except Exception:
+ pass
+ else:
+ setup_info = (
+ f"[bold {YELLOW}]⚠ For real-time monitoring in OTHER terminals:[/bold {YELLOW}]\n\n"
+ f"[bold {PURPLE_LIGHT}]{short_cmd}[/bold {PURPLE_LIGHT}]\n\n"
+ f"[{GRAY}]Or install the watch service: [/{GRAY}][{GREEN}]cortex watch --install --service[/{GREEN}]"
+ )
+
+ console.print()
+ console.print(
+ Panel(
+ f"[bold {PURPLE_LIGHT}]{ICON_MONITOR} Terminal Monitoring Active[/bold {PURPLE_LIGHT}]\n\n"
+ f"[{WHITE}]Watching {len(self._shell_history_files)} shell history files\n"
+ f"Watching {len(self._cursor_terminals_dirs)} Cursor terminal directories\n"
+ + ("Watching Tmux panes\n" if self._tmux_available else "")
+ + llm_status
+ + "\n\n"
+ + setup_info
+ + f"[/{WHITE}]",
+ title=f"[bold {PURPLE}]Live Terminal Monitor[/bold {PURPLE}]",
+ border_style=PURPLE,
+ )
+ )
+ console.print()
+ console.print(f"[{GRAY}]─" * 60 + f"[/{GRAY}]")
+ console.print(f"[bold {WHITE}]📡 Live Terminal Feed:[/bold {WHITE}]")
+ console.print(f"[{GRAY}]─" * 60 + f"[/{GRAY}]")
+ console.print(f"[{GRAY}]Waiting for commands from other terminals...[/{GRAY}]")
+ console.print()
+
+ self._monitor_thread = threading.Thread(target=self._monitor_loop, daemon=True)
+ self._monitor_thread.start()
+
+ def stop_monitoring(self) -> list[dict[str, Any]]:
+ """Stop monitoring and return observed commands."""
+ self._monitoring = False
+ if self._monitor_thread:
+ self._monitor_thread.join(timeout=2)
+ self._monitor_thread = None
+
+ with self._lock:
+ result = list(self._commands_observed)
+ return result
+
+ def stop(self) -> list[dict[str, Any]]:
+ """Stop monitoring terminal."""
+ return self.stop_monitoring()
+
+ def get_observed_commands(self) -> list[dict[str, Any]]:
+ """Get all observed commands so far."""
+ with self._lock:
+ return list(self._commands_observed)
+
+ def test_monitoring(self):
+ """Test that monitoring is working by showing what files are being watched."""
+ console.print(
+ f"\n[bold {PURPLE_LIGHT}]{ICON_MONITOR} Terminal Monitoring Test[/bold {PURPLE_LIGHT}]\n"
+ )
+
+ # Check shell history files
+ console.print(f"[bold {WHITE}]Shell History Files:[/bold {WHITE}]")
+ for hist_file in self._shell_history_files:
+ exists = hist_file.exists()
+ size = hist_file.stat().st_size if exists else 0
+ status = (
+ f"[{GREEN}]{ICON_SUCCESS}[/{GREEN}]" if exists else f"[{RED}]{ICON_ERROR}[/{RED}]"
+ )
+ console.print(f" {status} [{WHITE}]{hist_file} ({size} bytes)[/{WHITE}]")
+
+ # Check Cursor terminal directories
+ console.print(f"\n[bold {WHITE}]Cursor Terminal Directories:[/bold {WHITE}]")
+ for terminals_dir in self._cursor_terminals_dirs:
+ if terminals_dir.exists():
+ files = list(terminals_dir.glob("*.txt"))
+ console.print(
+ f" [{GREEN}]{ICON_SUCCESS}[/{GREEN}] [{WHITE}]{terminals_dir} ({len(files)} files)[/{WHITE}]"
+ )
+ for f in files[:5]: # Show first 5
+ size = f.stat().st_size
+ console.print(f" [{GRAY}]- {f.name} ({size} bytes)[/{GRAY}]")
+ if len(files) > 5:
+ console.print(f" [{GRAY}]... and {len(files) - 5} more[/{GRAY}]")
+ else:
+ console.print(
+ f" [{RED}]{ICON_ERROR}[/{RED}] [{WHITE}]{terminals_dir} (not found)[/{WHITE}]"
+ )
+
+ # Check tmux
+ console.print(f"\n[bold {WHITE}]Other Sources:[/bold {WHITE}]")
+ console.print(
+ f" [{WHITE}]Tmux: [/{WHITE}]{f'[{GREEN}]{ICON_SUCCESS} available[/{GREEN}]' if self._tmux_available else f'[{GRAY}]not available[/{GRAY}]'}"
+ )
+ console.print(
+ f" [{WHITE}]Screen: [/{WHITE}]{f'[{GREEN}]{ICON_SUCCESS} available[/{GREEN}]' if self._screen_available else f'[{GRAY}]not available[/{GRAY}]'}"
+ )
+
+ console.print(
+ f"\n[{YELLOW}]Tip: For bash history to update in real-time, run in your terminal:[/{YELLOW}]"
+ )
+ console.print(f"[{GREEN}]export PROMPT_COMMAND='history -a'[/{GREEN}]")
+ console.print()
+
+ def inject_test_command(self, command: str, source: str = "test"):
+ """Inject a test command to verify the display is working."""
+ self._process_observed_command(command, source)
+
+ def get_watch_file_path(self) -> Path:
+ """Get the path to the cortex watch file."""
+ return Path.home() / ".cortex" / "terminal_watch.log"
+
+ def setup_terminal_hook(self) -> str:
+ """Generate a bash command to set up real-time terminal watching.
+
+ Returns the command the user should run in their terminal.
+ """
+ watch_file = self.get_watch_file_path()
+ watch_file.parent.mkdir(parents=True, exist_ok=True)
+
+ # Create a bash function that logs commands
+ hook_command = f"""
+# Cortex Terminal Hook - paste this in your terminal:
+export CORTEX_WATCH_FILE="{watch_file}"
+export PROMPT_COMMAND='history -a; echo "$(date +%H:%M:%S) $(history 1 | sed "s/^[ ]*[0-9]*[ ]*//")" >> "$CORTEX_WATCH_FILE"'
+echo "✓ Cortex is now watching this terminal"
+"""
+ return hook_command.strip()
+
+ def print_setup_instructions(self):
+ """Print instructions for setting up real-time terminal watching."""
+ from rich.panel import Panel
+
+ watch_file = self.get_watch_file_path()
+
+ console.print()
+ console.print(
+ Panel(
+ f"[bold {YELLOW}]⚠ For real-time terminal monitoring, run this in your OTHER terminal:[/bold {YELLOW}]\n\n"
+ f'[{GREEN}]export PROMPT_COMMAND=\'history -a; echo "$(date +%H:%M:%S) $(history 1 | sed "s/^[ ]*[0-9]*[ ]*//")" >> {watch_file}\'[/{GREEN}]\n\n'
+ f"[{GRAY}]This makes bash write commands immediately so Cortex can see them.[/{GRAY}]",
+ title=f"[{PURPLE_LIGHT}]Setup Required[/{PURPLE_LIGHT}]",
+ border_style=PURPLE,
+ )
+ )
+ console.print()
+
+ def setup_system_wide_watch(self) -> tuple[bool, str]:
+ """
+ Install the terminal watch hook system-wide in /etc/profile.d/.
+
+ This makes the hook active for ALL users and ALL new terminals automatically.
+ Requires sudo.
+
+ Returns:
+ Tuple of (success, message)
+ """
+ import subprocess
+
+ watch_file = self.get_watch_file_path()
+ profile_script = "/etc/profile.d/cortex-watch.sh"
+
+ # The system-wide hook script
+ hook_content = """#!/bin/bash
+# Cortex Terminal Watch Hook - System Wide
+# Installed by: cortex do watch --system
+# This enables real-time terminal command monitoring for Cortex AI
+
+# Only run in interactive shells
+[[ $- != *i* ]] && return
+
+# Skip if already set up or if this is the Cortex terminal
+[[ -n "$CORTEX_TERMINAL" ]] && return
+[[ -n "$__CORTEX_WATCH_ACTIVE" ]] && return
+export __CORTEX_WATCH_ACTIVE=1
+
+# Watch file location (user-specific)
+CORTEX_WATCH_FILE="$HOME/.cortex/terminal_watch.log"
+mkdir -p "$HOME/.cortex" 2>/dev/null
+
+__cortex_last_histnum=""
+__cortex_log_cmd() {
+ local histnum="$(history 1 2>/dev/null | awk '{print $1}')"
+ [[ "$histnum" == "$__cortex_last_histnum" ]] && return
+ __cortex_last_histnum="$histnum"
+
+ local cmd="$(history 1 2>/dev/null | sed "s/^[ ]*[0-9]*[ ]*//")"
+ [[ -z "${cmd// /}" ]] && return
+ [[ "$cmd" == cortex* ]] && return
+ [[ "$cmd" == *"watch_hook"* ]] && return
+
+ echo "$cmd" >> "$CORTEX_WATCH_FILE" 2>/dev/null
+}
+
+# Add to PROMPT_COMMAND (preserve existing)
+if [[ -z "$PROMPT_COMMAND" ]]; then
+ export PROMPT_COMMAND='history -a; __cortex_log_cmd'
+else
+ export PROMPT_COMMAND="${PROMPT_COMMAND}; __cortex_log_cmd"
+fi
+"""
+
+ try:
+ # Write to a temp file first
+ import tempfile
+
+ with tempfile.NamedTemporaryFile(mode="w", suffix=".sh", delete=False) as f:
+ f.write(hook_content)
+ temp_file = f.name
+
+ # Use sudo to copy to /etc/profile.d/
+ result = subprocess.run(
+ ["sudo", "cp", temp_file, profile_script],
+ capture_output=True,
+ text=True,
+ timeout=30,
+ )
+
+ if result.returncode != 0:
+ return False, f"Failed to install: {result.stderr}"
+
+ # Make it executable
+ subprocess.run(["sudo", "chmod", "+x", profile_script], capture_output=True, timeout=10)
+
+ # Clean up temp file
+ Path(temp_file).unlink(missing_ok=True)
+
+ return (
+ True,
+ f"✓ Installed system-wide to {profile_script}\n"
+ "All NEW terminals will automatically have Cortex watching enabled.\n"
+ "For current terminals, run: source /etc/profile.d/cortex-watch.sh",
+ )
+
+ except subprocess.TimeoutExpired:
+ return False, "Timeout waiting for sudo"
+ except Exception as e:
+ return False, f"Error: {e}"
+
+ def uninstall_system_wide_watch(self) -> tuple[bool, str]:
+ """Remove the system-wide terminal watch hook."""
+ import subprocess
+
+ profile_script = "/etc/profile.d/cortex-watch.sh"
+
+ try:
+ if not Path(profile_script).exists():
+ return True, "System-wide hook not installed"
+
+ result = subprocess.run(
+ ["sudo", "rm", profile_script], capture_output=True, text=True, timeout=30
+ )
+
+ if result.returncode != 0:
+ return False, f"Failed to remove: {result.stderr}"
+
+ return True, f"✓ Removed {profile_script}"
+
+ except Exception as e:
+ return False, f"Error: {e}"
+
+ def is_system_wide_installed(self) -> bool:
+ """Check if system-wide hook is installed."""
+ return Path("/etc/profile.d/cortex-watch.sh").exists()
+
+ def setup_auto_watch(self, permanent: bool = True) -> tuple[bool, str]:
+ """
+ Set up automatic terminal watching for new and existing terminals.
+
+ Args:
+ permanent: If True, adds the hook to ~/.bashrc for future terminals
+
+ Returns:
+ Tuple of (success, message)
+ """
+ watch_file = self.get_watch_file_path()
+ watch_file.parent.mkdir(parents=True, exist_ok=True)
+
+ # The hook command - excludes cortex commands and source commands
+ # Uses a function to filter out Cortex terminal commands
+ # Added: tracks last logged command and history number to avoid duplicates
+ hook_line = f"""
+__cortex_last_histnum=""
+__cortex_log_cmd() {{
+ # Get current history number
+ local histnum="$(history 1 | awk '{{print $1}}')"
+ # Skip if same as last logged (prevents duplicate on terminal init)
+ [[ "$histnum" == "$__cortex_last_histnum" ]] && return
+ __cortex_last_histnum="$histnum"
+
+ local cmd="$(history 1 | sed "s/^[ ]*[0-9]*[ ]*//")"
+ # Skip empty or whitespace-only commands
+ [[ -z "${{cmd// /}}" ]] && return
+ # Skip if this is the cortex terminal or cortex-related commands
+ [[ "$cmd" == cortex* ]] && return
+ [[ "$cmd" == *"source"*".cortex"* ]] && return
+ [[ "$cmd" == *"watch_hook"* ]] && return
+ [[ -n "$CORTEX_TERMINAL" ]] && return
+ # Include terminal ID (TTY) in the log - format: TTY|COMMAND
+ local tty_name="$(tty 2>/dev/null | sed 's|/dev/||' | tr '/' '_')"
+ echo "${{tty_name:-unknown}}|$cmd" >> {watch_file}
+}}
+export PROMPT_COMMAND='history -a; __cortex_log_cmd'
+"""
+ marker = "# Cortex Terminal Watch Hook"
+
+ bashrc = Path.home() / ".bashrc"
+ zshrc = Path.home() / ".zshrc"
+
+ added_to = []
+
+ if permanent:
+ # Add to .bashrc if it exists and doesn't already have the hook
+ if bashrc.exists():
+ content = bashrc.read_text()
+ if marker not in content:
+ # Add hook AND a short alias for easy activation
+ alias_line = f'\nalias cw="source {watch_file.parent}/watch_hook.sh" # Quick Cortex watch activation\n'
+ with open(bashrc, "a") as f:
+ f.write(f"\n{marker}\n{hook_line}\n{alias_line}")
+ added_to.append(".bashrc")
+ else:
+ added_to.append(".bashrc (already configured)")
+
+ # Add to .zshrc if it exists
+ if zshrc.exists():
+ content = zshrc.read_text()
+ if marker not in content:
+ # Zsh uses precmd instead of PROMPT_COMMAND
+ # Added tracking to avoid duplicates
+ zsh_hook = f"""
+{marker}
+typeset -g __cortex_last_cmd=""
+cortex_watch_hook() {{
+ local cmd="$(fc -ln -1 | sed 's/^[[:space:]]*//')"
+ [[ -z "$cmd" ]] && return
+ [[ "$cmd" == "$__cortex_last_cmd" ]] && return
+ __cortex_last_cmd="$cmd"
+ [[ "$cmd" == cortex* ]] && return
+ [[ "$cmd" == *".cortex"* ]] && return
+ [[ -n "$CORTEX_TERMINAL" ]] && return
+ # Include terminal ID (TTY) in the log - format: TTY|COMMAND
+ local tty_name="$(tty 2>/dev/null | sed 's|/dev/||' | tr '/' '_')"
+ echo "${{tty_name:-unknown}}|$cmd" >> {watch_file}
+}}
+precmd_functions+=(cortex_watch_hook)
+"""
+ with open(zshrc, "a") as f:
+ f.write(zsh_hook)
+ added_to.append(".zshrc")
+ else:
+ added_to.append(".zshrc (already configured)")
+
+ # Create a source file for existing terminals
+ source_file = Path.home() / ".cortex" / "watch_hook.sh"
+ source_file.write_text(f"""#!/bin/bash
+{marker}
+{hook_line}
+echo "✓ Cortex is now watching this terminal"
+""")
+ source_file.chmod(0o755)
+
+ if added_to:
+ msg = f"Added to: {', '.join(added_to)}\n"
+ msg += f"For existing terminals, run: source {source_file}"
+ return True, msg
+ else:
+ return True, f"Source file created: {source_file}\nRun: source {source_file}"
+
+ def remove_auto_watch(self) -> tuple[bool, str]:
+ """Remove the automatic terminal watching hook from shell configs."""
+ marker = "# Cortex Terminal Watch Hook"
+ removed_from = []
+
+ for rc_file in [Path.home() / ".bashrc", Path.home() / ".zshrc"]:
+ if rc_file.exists():
+ content = rc_file.read_text()
+ if marker in content:
+ # Remove the hook section
+ lines = content.split("\n")
+ new_lines = []
+ skip_until_blank = False
+
+ for line in lines:
+ if marker in line:
+ skip_until_blank = True
+ continue
+ if skip_until_blank:
+ if (
+ line.strip() == ""
+ or line.startswith("export PROMPT")
+ or line.startswith("cortex_watch")
+ or line.startswith("precmd_functions")
+ ):
+ continue
+ if line.startswith("}"):
+ continue
+ skip_until_blank = False
+ new_lines.append(line)
+
+ rc_file.write_text("\n".join(new_lines))
+ removed_from.append(rc_file.name)
+
+ # Remove source file
+ source_file = Path.home() / ".cortex" / "watch_hook.sh"
+ if source_file.exists():
+ source_file.unlink()
+ removed_from.append("watch_hook.sh")
+
+ if removed_from:
+ return True, f"Removed from: {', '.join(removed_from)}"
+ return True, "No hooks found to remove"
+
+ def broadcast_hook_to_terminals(self) -> int:
+ """
+        Prompt running bash terminals to activate the watch hook.
+
+        Writing to /dev/pts devices only displays the `source` command on each
+        terminal; the user still has to run it manually, so nothing is executed
+        automatically.
+
+        Returns the number of terminals that were reached.
+ """
+ watch_file = self.get_watch_file_path()
+ hook_cmd = f'export PROMPT_COMMAND=\'history -a; echo "$(history 1 | sed "s/^[ ]*[0-9]*[ ]*//")" >> {watch_file}\''
+
+ count = 0
+
+ # Method 1: Write to all pts devices (requires proper permissions)
+ try:
+ pts_dir = Path("/dev/pts")
+ if pts_dir.exists():
+ for pts in pts_dir.iterdir():
+ if pts.name.isdigit():
+ try:
+ # This usually requires the same user
+ with open(pts, "w") as f:
+ f.write("\n# Cortex: Setting up terminal watch...\n")
+ f.write("source ~/.cortex/watch_hook.sh\n")
+ count += 1
+ except (PermissionError, OSError):
+ pass
+ except Exception:
+ pass
+
+ return count
+
+ def _monitor_loop(self):
+ """Monitor loop that watches ALL terminal sources for activity."""
+ file_positions: dict[str, int] = {}
+ last_check_time: dict[str, float] = {}
+
+ # Cortex watch file (real-time if user sets up the hook)
+ watch_file = self.get_watch_file_path()
+
+ # Ensure watch file directory exists
+ watch_file.parent.mkdir(parents=True, exist_ok=True)
+
+ # Initialize positions for all shell history files - start at END to only see NEW commands
+ for hist_file in self._shell_history_files:
+ if hist_file.exists():
+ try:
+ file_positions[str(hist_file)] = hist_file.stat().st_size
+ last_check_time[str(hist_file)] = time.time()
+ except OSError:
+ pass
+
+ # Initialize watch file position - ALWAYS start from END of existing content
+ # This ensures we only see commands written AFTER monitoring starts
+ if watch_file.exists():
+ try:
+ # Start from current end position (skip ALL existing content)
+ file_positions[str(watch_file)] = watch_file.stat().st_size
+ except OSError:
+ file_positions[str(watch_file)] = 0
+ else:
+ # File doesn't exist yet - will be created, start from 0
+ file_positions[str(watch_file)] = 0
+
+ # Initialize positions for all Cursor terminal files
+ for terminals_dir in self._cursor_terminals_dirs:
+ if terminals_dir.exists():
+ for term_file in terminals_dir.glob("*.txt"):
+ try:
+ file_positions[str(term_file)] = term_file.stat().st_size
+ except OSError:
+ pass
+ # Also check for ext-*.txt files (external terminals)
+ for term_file in terminals_dir.glob("ext-*.txt"):
+ try:
+ file_positions[str(term_file)] = term_file.stat().st_size
+ except OSError:
+ pass
+
+ check_count = 0
+ while self._monitoring:
+ time.sleep(0.2) # Check very frequently (5 times per second)
+ check_count += 1
+
+ # Check Cortex watch file FIRST (this is the real-time one)
+ if watch_file.exists():
+ self._check_watch_file(watch_file, file_positions)
+
+ # Check all shell history files
+ for hist_file in self._shell_history_files:
+ if hist_file.exists():
+ shell_name = hist_file.stem.replace("_history", "").replace(".", "")
+ self._check_file_for_new_commands(
+ hist_file, file_positions, source=f"{shell_name}_history"
+ )
+
+ # Check ALL Cursor terminal directories (these update in real-time!)
+ for terminals_dir in self._cursor_terminals_dirs:
+ if terminals_dir.exists():
+ project_name = terminals_dir.parent.name
+
+ # IDE terminals - check ALL txt files
+ for term_file in terminals_dir.glob("*.txt"):
+ if not term_file.name.startswith("ext-"):
+ self._check_file_for_new_commands(
+ term_file,
+ file_positions,
+ source=f"cursor:{project_name}:{term_file.stem}",
+ )
+
+ # External terminals (iTerm, gnome-terminal, etc.)
+ for term_file in terminals_dir.glob("ext-*.txt"):
+ self._check_file_for_new_commands(
+ term_file,
+ file_positions,
+ source=f"external:{project_name}:{term_file.stem}",
+ )
+
+ # Check tmux panes if available (every 5 checks = 1 second)
+ if self._tmux_available and check_count % 5 == 0:
+ self._check_tmux_panes()
+
+ # Periodically show we're still monitoring (every 30 seconds)
+ if check_count % 150 == 0 and self._show_live_output:
+ console.print(
+ f"[{GRAY}]{ICON_PENDING} still monitoring ({len(self._commands_observed)} commands observed so far)[/{GRAY}]"
+ )
+
+ def _is_cortex_terminal_command(self, command: str) -> bool:
+ """Check if a command is from the Cortex terminal itself (should be ignored).
+
+ This should be very conservative - only filter out commands that are
+ DEFINITELY from Cortex's own terminal, not user commands.
+ """
+ cmd_lower = command.lower().strip()
+
+ # Only filter out commands that are clearly from Cortex terminal
+ cortex_patterns = [
+ "cortex ask",
+ "cortex watch",
+ "cortex do ",
+ "cortex info",
+ "source ~/.cortex/watch_hook", # Setting up the watch hook
+ ".cortex/watch_hook",
+ ]
+
+ for pattern in cortex_patterns:
+ if pattern in cmd_lower:
+ return True
+
+ # Check if command starts with "cortex " (the CLI)
+ if cmd_lower.startswith("cortex "):
+ return True
+
+ # Don't filter out general commands - let them through!
+ return False
+
+ def _check_watch_file(self, watch_file: Path, positions: dict[str, int]):
+ """Check the Cortex watch file for new commands (real-time)."""
+ try:
+ current_size = watch_file.stat().st_size
+ key = str(watch_file)
+
+ # Initialize position if not set
+ # Start from 0 because we clear the file when monitoring starts
+ # This ensures we capture all commands written after monitoring begins
+ if key not in positions:
+ positions[key] = 0 # Start from beginning since file was cleared
+
+ # If file is smaller than our position (was truncated), reset
+ if current_size < positions[key]:
+ positions[key] = 0
+
+ if current_size > positions[key]:
+ with open(watch_file) as f:
+ f.seek(positions[key])
+ new_content = f.read()
+
+ # Parse watch file - each line is a command
+ for line in new_content.split("\n"):
+ line = line.strip()
+ if not line:
+ continue
+
+ # Skip very short lines or common noise
+ if len(line) < 2:
+ continue
+
+ # Skip if we've already seen this exact command recently
+ if hasattr(self, "_recent_watch_commands"):
+ if line in self._recent_watch_commands:
+ continue
+ else:
+ self._recent_watch_commands = []
+
+ # Keep track of recent commands to avoid duplicates
+ self._recent_watch_commands.append(line)
+ if len(self._recent_watch_commands) > 20:
+ self._recent_watch_commands.pop(0)
+
+ # Handle format with timestamp: "HH:MM:SS command"
+ if re.match(r"^\d{2}:\d{2}:\d{2}\s+", line):
+ parts = line.split(" ", 1)
+ if len(parts) == 2 and parts[1].strip():
+ self._process_observed_command(parts[1].strip(), "live_terminal")
+ else:
+ # Plain command
+ self._process_observed_command(line, "live_terminal")
+
+ positions[key] = current_size
+
+ except OSError:
+ pass
+
+ def _check_tmux_panes(self):
+ """Check tmux panes for recent commands."""
+ import subprocess
+
+ try:
+ # Get list of tmux sessions
+ result = subprocess.run(
+ ["tmux", "list-panes", "-a", "-F", "#{pane_id}:#{pane_current_command}"],
+ capture_output=True,
+ text=True,
+ timeout=1,
+ )
+ if result.returncode == 0:
+ for line in result.stdout.strip().split("\n"):
+ if ":" in line:
+ pane_id, cmd = line.split(":", 1)
+ if cmd and cmd not in ["bash", "zsh", "fish", "sh"]:
+ self._process_observed_command(cmd, source=f"tmux:{pane_id}")
+ except (subprocess.TimeoutExpired, FileNotFoundError, subprocess.SubprocessError):
+ pass
+
+ def _check_file_for_new_commands(
+ self,
+ file_path: Path,
+ positions: dict[str, int],
+ source: str,
+ ):
+ """Check a file for new commands and process them."""
+ try:
+ current_size = file_path.stat().st_size
+ key = str(file_path)
+
+ if key not in positions:
+ positions[key] = current_size
+ return
+
+ if current_size > positions[key]:
+ with open(file_path) as f:
+ f.seek(positions[key])
+ new_content = f.read()
+
+ # For Cursor terminals, also extract output
+ if "cursor" in source or "external" in source:
+ self._process_terminal_content(new_content, source)
+ else:
+ new_commands = self._extract_commands_from_content(new_content, source)
+ for cmd in new_commands:
+ self._process_observed_command(cmd, source)
+
+ positions[key] = current_size
+
+ except OSError:
+ pass
+
+ def _process_terminal_content(self, content: str, source: str):
+ """Process terminal content including commands and their output."""
+ lines = content.split("\n")
+ current_command = None
+ output_lines = []
+
+ for line in lines:
+ line_stripped = line.strip()
+ if not line_stripped:
+ continue
+
+ # Check if this is a command line (has prompt)
+ is_command = False
+ for pattern in [
+ r"^\$ (.+)$",
+ r"^[a-zA-Z0-9_-]+@[a-zA-Z0-9_-]+:.+\$ (.+)$",
+ r"^[a-zA-Z0-9_-]+@[a-zA-Z0-9_-]+:.+# (.+)$",
+ r"^\(.*\)\s*\$ (.+)$",
+ ]:
+ match = re.match(pattern, line_stripped)
+ if match:
+ # Save previous command with its output
+ if current_command:
+ self._process_observed_command_with_output(
+ current_command, "\n".join(output_lines), source
+ )
+
+ current_command = match.group(1).strip()
+ output_lines = []
+ is_command = True
+ break
+
+ if not is_command and current_command:
+ # This is output from the current command
+ output_lines.append(line_stripped)
+
+ # Process the last command
+ if current_command:
+ self._process_observed_command_with_output(
+ current_command, "\n".join(output_lines), source
+ )
+
+ def _process_observed_command_with_output(self, command: str, output: str, source: str):
+ """Process a command with its output for better feedback."""
+ # First process the command normally
+ self._process_observed_command(command, source)
+
+ if not self._show_live_output:
+ return
+
+ # Then show relevant output if there is any
+ if output and len(output) > 5:
+ # Check for errors in output
+ error_patterns = [
+ (r"error:", "Error detected"),
+ (r"Error:", "Error detected"),
+ (r"ERROR", "Error detected"),
+ (r"failed", "Operation failed"),
+ (r"Failed", "Operation failed"),
+ (r"permission denied", "Permission denied"),
+ (r"Permission denied", "Permission denied"),
+ (r"not found", "Not found"),
+ (r"No such file", "File not found"),
+ (r"command not found", "Command not found"),
+ (r"Cannot connect", "Connection failed"),
+ (r"Connection refused", "Connection refused"),
+ (r"Unable to", "Operation failed"),
+ (r"denied", "Access denied"),
+ (r"Denied", "Access denied"),
+ (r"timed out", "Timeout"),
+ (r"timeout", "Timeout"),
+ (r"fatal:", "Fatal error"),
+ (r"FATAL", "Fatal error"),
+ (r"panic", "Panic"),
+ (r"segfault", "Crash"),
+ (r"Segmentation fault", "Crash"),
+ (r"killed", "Process killed"),
+ (r"Killed", "Process killed"),
+ (r"cannot", "Cannot complete"),
+ (r"Could not", "Could not complete"),
+ (r"Invalid", "Invalid input"),
+ (r"Conflict", "Conflict detected"),
+ (r"\[emerg\]", "Config error"),
+ (r"\[error\]", "Error"),
+ (r"\[crit\]", "Critical error"),
+ (r"\[alert\]", "Alert"),
+ (r"syntax error", "Syntax error"),
+ (r"unknown directive", "Unknown directive"),
+ (r"unexpected", "Unexpected error"),
+ ]
+
+ for pattern, msg in error_patterns:
+ if re.search(pattern, output, re.IGNORECASE):
+ # Show error in bordered panel
+ from rich.panel import Panel
+ from rich.text import Text
+
+ output_preview = output[:200] + "..." if len(output) > 200 else output
+
+ error_text = Text()
+ error_text.append(f"{ICON_ERROR} {msg}\n\n", style=f"bold {RED}")
+ for line in output_preview.split("\n")[:3]:
+ if line.strip():
+ error_text.append(f" {line.strip()[:80]}\n", style=GRAY)
+
+ console.print()
+ console.print(
+ Panel(
+ error_text,
+ title=f"[bold {RED}]Error[/bold {RED}]",
+ border_style=RED,
+ padding=(0, 1),
+ )
+ )
+
+ # Get AI-powered help
+ self._provide_error_help(command, output)
+ break
+ else:
+ # Show success indicator for commands that completed
+ if "✓" in output or "success" in output.lower() or "complete" in output.lower():
+ console.print(
+ f"[{GREEN}] {ICON_SUCCESS} Command completed successfully[/{GREEN}]"
+ )
+ elif len(output.strip()) > 0:
+ # Show a preview of the output
+ output_lines = [l for l in output.split("\n") if l.strip()][:3]
+ if output_lines:
+ console.print(
+ f"[{GRAY}] Output: {output_lines[0][:60]}{'...' if len(output_lines[0]) > 60 else ''}[/{GRAY}]"
+ )
+
+ def _provide_error_help(self, command: str, output: str):
+ """Provide contextual help for errors using Claude LLM and send solutions via notifications."""
+ import subprocess
+
+ from rich.panel import Panel
+ from rich.table import Table
+
+ console.print()
+
+ # First, try Claude for intelligent analysis
+ claude_analysis = None
+ if self._claude and self._use_llm and self._claude.is_available():
+ claude_analysis = self._claude.analyze_error(command, output)
+
+ # Also use the existing ErrorDiagnoser for pattern-based analysis
+ diagnosis = self._diagnoser.diagnose_error(command, output)
+
+ error_type = diagnosis.get("error_type", "unknown")
+ category = diagnosis.get("category", "unknown")
+ description = diagnosis.get("description", output[:200])
+ fix_commands = diagnosis.get("fix_commands", [])
+ can_auto_fix = diagnosis.get("can_auto_fix", False)
+ fix_strategy = diagnosis.get("fix_strategy", "")
+ extracted_info = diagnosis.get("extracted_info", {})
+
+ # If Claude provided analysis, use it to enhance diagnosis
+ if claude_analysis:
+ cause = claude_analysis.get("cause", "")
+ claude_fixes = claude_analysis.get("fixes", [])
+
+ # Show Claude's analysis in bordered panel
+ if cause or claude_fixes:
+ from rich.panel import Panel
+ from rich.text import Text
+
+ analysis_text = Text()
+ if cause:
+ analysis_text.append("Cause: ", style="bold cyan")
+ analysis_text.append(f"{cause}\n\n", style="white")
+ if claude_fixes:
+ analysis_text.append("Solution:\n", style=f"bold {GREEN}")
+ for fix in claude_fixes[:3]:
+ analysis_text.append(f" $ {fix}\n", style=GREEN)
+
+ console.print()
+ console.print(
+ Panel(
+ analysis_text,
+ title=f"[bold {PURPLE_LIGHT}]{ICON_MONITOR} Claude Analysis[/bold {PURPLE_LIGHT}]",
+ border_style=PURPLE,
+ padding=(0, 1),
+ )
+ )
+
+ # Send notification with Claude's solution
+ if cause or claude_fixes:
+ notif_title = f"🔧 Cortex: {error_type if error_type != 'unknown' else 'Error'}"
+ notif_body = cause[:100] if cause else description[:100]
+ if claude_fixes:
+ notif_body += f"\n\nFix: {claude_fixes[0]}"
+ self._send_solution_notification(notif_title, notif_body)
+
+ # Use Claude's fixes if pattern-based analysis didn't find any
+ if not fix_commands and claude_fixes:
+ fix_commands = claude_fixes
+ can_auto_fix = True
+
+ # Show diagnosis in panel (only if no Claude analysis)
+ if not claude_analysis:
+ from rich.panel import Panel
+ from rich.table import Table
+ from rich.text import Text
+
+ diag_table = Table(show_header=False, box=None, padding=(0, 1))
+ diag_table.add_column("Key", style="dim")
+ diag_table.add_column("Value", style="bold")
+
+ diag_table.add_row("Type", error_type)
+ diag_table.add_row("Category", category)
+ if can_auto_fix:
+ diag_table.add_row(
+ "Auto-Fix",
+ (
+ f"[{GREEN}]{ICON_SUCCESS} Yes[/{GREEN}] [{GRAY}]({fix_strategy})[/{GRAY}]"
+ if fix_strategy
+ else f"[{GREEN}]{ICON_SUCCESS} Yes[/{GREEN}]"
+ ),
+ )
+ else:
+ diag_table.add_row("Auto-Fix", f"[{RED}]{ICON_INFO} No[/{RED}]")
+
+ console.print()
+ console.print(
+ Panel(
+ diag_table,
+ title=f"[bold {PURPLE_LIGHT}]Diagnosis[/bold {PURPLE_LIGHT}]",
+ border_style=PURPLE,
+ padding=(0, 1),
+ )
+ )
+
+ # If auto-fix is possible, attempt to run the fix commands
+ if can_auto_fix and fix_commands:
+ actionable_commands = [c for c in fix_commands if not c.startswith("#")]
+
+ if actionable_commands:
+ # Auto-fix with progress bar
+ from rich.panel import Panel
+ from rich.progress import BarColumn, Progress, SpinnerColumn, TextColumn
+
+ console.print()
+ console.print(
+ Panel(
+ f"[bold {WHITE}]Running {len(actionable_commands)} fix command(s)...[/bold {WHITE}]",
+ title=f"[bold {PURPLE_LIGHT}]{ICON_SUCCESS} Auto-Fix[/bold {PURPLE_LIGHT}]",
+ border_style=PURPLE,
+ padding=(0, 1),
+ )
+ )
+
+ # Send notification that we're fixing the command
+ self._notify_fixing_command(command, actionable_commands[0])
+
+ # Run the fix commands
+ fix_success = self._run_auto_fix_commands(actionable_commands, command, error_type)
+
+ if fix_success:
+ # Success in bordered panel
+ from rich.panel import Panel
+
+ console.print()
+ console.print(
+ Panel(
+ f"[{GREEN}]{ICON_SUCCESS}[/{GREEN}] [{WHITE}]Auto-fix completed![/{WHITE}]\n\n[{GRAY}]Retry:[/{GRAY}] [{PURPLE_LIGHT}]{command}[/{PURPLE_LIGHT}]",
+ title=f"[bold {GREEN}]Success[/bold {GREEN}]",
+ border_style=PURPLE,
+ padding=(0, 1),
+ )
+ )
+
+ # Send success notification
+ self._send_fix_success_notification(command, error_type)
+ else:
+ pass # Sudo commands shown separately
+
+ console.print()
+ return
+
+ # Show fix commands in bordered panel if we can't auto-fix
+ if fix_commands and not claude_analysis:
+ from rich.panel import Panel
+ from rich.text import Text
+
+ fix_text = Text()
+ for cmd in fix_commands[:3]:
+ if not cmd.startswith("#"):
+ fix_text.append(f" $ {cmd}\n", style="green")
+
+ console.print()
+ console.print(
+ Panel(
+ fix_text,
+ title="[bold]Manual Fix[/bold]",
+ border_style="blue",
+ padding=(0, 1),
+ )
+ )
+
+ # If error is unknown and no Claude, use local LLM
+ if (
+ error_type == "unknown"
+ and not claude_analysis
+ and self._llm
+ and self._use_llm
+ and self._llm.is_available()
+ ):
+ llm_help = self._llm_analyze_error(command, output)
+ if llm_help:
+ console.print()
+ console.print(f"[{GRAY}]{llm_help}[/{GRAY}]")
+
+ # Try to extract fix command from LLM response
+ llm_fix = self._extract_fix_from_llm(llm_help)
+ if llm_fix:
+ console.print()
+ console.print(
+ f"[bold {GREEN}]{ICON_SUCCESS} AI Suggested Fix:[/bold {GREEN}] [{PURPLE_LIGHT}]{llm_fix}[/{PURPLE_LIGHT}]"
+ )
+
+ # Attempt to run the LLM suggested fix
+ if self._is_safe_fix_command(llm_fix):
+ console.print(f"[{GRAY}]Attempting AI-suggested fix...[/{GRAY}]")
+ self._run_auto_fix_commands([llm_fix], command, "ai_suggested")
+
+ # Build notification message
+ notification_msg = ""
+ if fix_commands:
+ actionable = [c for c in fix_commands if not c.startswith("#")]
+ if actionable:
+ notification_msg = f"Manual fix needed: {actionable[0][:50]}"
+ else:
+ notification_msg = description[:100]
+ else:
+ notification_msg = description[:100]
+
+ # Send desktop notification
+ self._send_error_notification(command, notification_msg, error_type, can_auto_fix)
+
+ console.print()
+
+ def _run_auto_fix_commands(
+ self, commands: list[str], original_command: str, error_type: str
+ ) -> bool:
+ """Run auto-fix commands with progress bar and return True if successful."""
+ import subprocess
+
+ from rich.panel import Panel
+ from rich.progress import BarColumn, Progress, SpinnerColumn, TaskProgressColumn, TextColumn
+ from rich.table import Table
+
+ all_success = True
+ sudo_commands_pending = []
+ results = []
+
+ # Break down && commands into individual commands
+ expanded_commands = []
+ for cmd in commands[:3]:
+ if cmd.startswith("#"):
+ continue
+ # Split by && but preserve the individual commands
+ if " && " in cmd:
+ parts = [p.strip() for p in cmd.split(" && ") if p.strip()]
+ expanded_commands.extend(parts)
+ else:
+ expanded_commands.append(cmd)
+
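+        # e.g. "sudo apt update && sudo apt install -y nginx" expands to two entries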
+ actionable = expanded_commands
+
+ # Show each command being run with Rich Status (no raw ANSI codes)
+ from rich.status import Status
+
+ for i, fix_cmd in enumerate(actionable, 1):
+ # Check if this needs sudo
+ needs_sudo = fix_cmd.strip().startswith("sudo ")
+
+ if needs_sudo:
+ try:
+ check_sudo = subprocess.run(
+ ["sudo", "-n", "true"], capture_output=True, timeout=5
+ )
+
+ if check_sudo.returncode != 0:
+ sudo_commands_pending.append(fix_cmd)
+ results.append((fix_cmd, "sudo", None))
+ console.print(
+ f" [{GRAY}][{i}/{len(actionable)}][/{GRAY}] [{YELLOW}]![/{YELLOW}] [{WHITE}]{fix_cmd[:55]}...[/{WHITE}] [{GRAY}](needs sudo)[/{GRAY}]"
+ )
+ continue
+ except Exception:
+ sudo_commands_pending.append(fix_cmd)
+ results.append((fix_cmd, "sudo", None))
+ console.print(
+ f" [{GRAY}][{i}/{len(actionable)}][/{GRAY}] [{YELLOW}]![/{YELLOW}] [{WHITE}]{fix_cmd[:55]}...[/{WHITE}] [{GRAY}](needs sudo)[/{GRAY}]"
+ )
+ continue
+
+ # Run command with status spinner
+ cmd_display = fix_cmd[:55] + "..." if len(fix_cmd) > 55 else fix_cmd
+
+ try:
+ with Status(
+ f"[{PURPLE_LIGHT}]{cmd_display}[/{PURPLE_LIGHT}]",
+ console=console,
+ spinner="dots",
+ ):
+ result = subprocess.run(
+ fix_cmd, shell=True, capture_output=True, text=True, timeout=60
+ )
+
+ if result.returncode == 0:
+ results.append((fix_cmd, "success", None))
+ console.print(
+ f" [{GRAY}][{i}/{len(actionable)}][/{GRAY}] [{GREEN}]{ICON_SUCCESS}[/{GREEN}] [{WHITE}]{cmd_display}[/{WHITE}]"
+ )
+ else:
+ if (
+ "password" in (result.stderr or "").lower()
+ or "terminal is required" in (result.stderr or "").lower()
+ ):
+ sudo_commands_pending.append(fix_cmd)
+ results.append((fix_cmd, "sudo", None))
+ console.print(
+ f" [{GRAY}][{i}/{len(actionable)}][/{GRAY}] [{YELLOW}]![/{YELLOW}] [{WHITE}]{cmd_display}[/{WHITE}] [{GRAY}](needs sudo)[/{GRAY}]"
+ )
+ else:
+ results.append(
+ (fix_cmd, "failed", result.stderr[:60] if result.stderr else "failed")
+ )
+ all_success = False
+ console.print(
+ f" [{GRAY}][{i}/{len(actionable)}][/{GRAY}] [{RED}]{ICON_ERROR}[/{RED}] [{WHITE}]{cmd_display}[/{WHITE}]"
+ )
+ console.print(
+ f" [{GRAY}]{result.stderr[:80] if result.stderr else 'Command failed'}[/{GRAY}]"
+ )
+ break
+
+ except subprocess.TimeoutExpired:
+ results.append((fix_cmd, "timeout", None))
+ all_success = False
+ console.print(
+ f" [{GRAY}][{i}/{len(actionable)}][/{GRAY}] [{YELLOW}]{ICON_PENDING}[/{YELLOW}] [{WHITE}]{cmd_display}[/{WHITE}] [{GRAY}](timeout)[/{GRAY}]"
+ )
+ break
+ except Exception as e:
+ results.append((fix_cmd, "error", str(e)[:50]))
+ all_success = False
+ console.print(
+ f" [{GRAY}][{i}/{len(actionable)}][/{GRAY}] [{RED}]{ICON_ERROR}[/{RED}] [{WHITE}]{cmd_display}[/{WHITE}]"
+ )
+ break
+
+ # Show summary line
+ success_count = sum(1 for _, s, _ in results if s == "success")
+ if success_count > 0 and success_count == len([r for r in results if r[1] != "sudo"]):
+ console.print(
+ f"\n [{GREEN}]{ICON_SUCCESS} All {success_count} command(s) completed[/{GREEN}]"
+ )
+
+ # Show sudo commands in bordered panel
+ if sudo_commands_pending:
+ from rich.panel import Panel
+ from rich.text import Text
+
+ sudo_text = Text()
+ sudo_text.append("Run these commands manually:\n\n", style=GRAY)
+ for cmd in sudo_commands_pending:
+ sudo_text.append(f" $ {cmd}\n", style=GREEN)
+
+ console.print()
+ console.print(
+ Panel(
+ sudo_text,
+ title=f"[bold {YELLOW}]🔐 Sudo Required[/bold {YELLOW}]",
+ border_style=PURPLE,
+ padding=(0, 1),
+ )
+ )
+
+ # Send notification about pending sudo commands
+ self._send_sudo_pending_notification(sudo_commands_pending)
+
+ # Still consider it a partial success if we need manual sudo
+ return len(sudo_commands_pending) < len([c for c in commands if not c.startswith("#")])
+
+ return all_success
+
+ def _send_sudo_pending_notification(self, commands: list[str]):
+ """Send notification about pending sudo commands."""
+ try:
+ import subprocess
+
+ cmd_preview = commands[0][:40] + "..." if len(commands[0]) > 40 else commands[0]
+
+ subprocess.run(
+ [
+ "notify-send",
+ "--urgency=normal",
+ "--icon=dialog-password",
+ "🔐 Cortex: Sudo required",
+ f"Run in your terminal:\n{cmd_preview}",
+ ],
+ capture_output=True,
+ timeout=2,
+ )
+
+ except Exception:
+ pass
+
+ def _extract_fix_from_llm(self, llm_response: str) -> str | None:
+ """Extract a fix command from LLM response."""
+ import re
+
+ # Look for commands in common formats
+ patterns = [
+ r"`([^`]+)`", # Backtick enclosed
+ r"^\$ (.+)$", # Shell prompt format
+            r"^(sudo .+)$",  # Sudo commands (keep the sudo prefix)
+ r"run[:\s]+([^\n]+)", # "run: command" format
+ r"try[:\s]+([^\n]+)", # "try: command" format
+ ]
+
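+        # Illustrative: "Try running `pip install requests`" -> "pip install requests"
+        # (a match is returned only if _is_safe_fix_command accepts it)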
+ for pattern in patterns:
+ matches = re.findall(pattern, llm_response, re.MULTILINE | re.IGNORECASE)
+ for match in matches:
+ cmd = match.strip()
+ if cmd and len(cmd) > 3 and self._is_safe_fix_command(cmd):
+ return cmd
+
+ return None
+
+ def _is_safe_fix_command(self, command: str) -> bool:
+ """Check if a fix command is safe to run automatically."""
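+        # Illustrative outcomes given the deny/allow lists below:
+        #   "sudo systemctl restart nginx" -> True
+        #   "rm -rf / --no-preserve-root"  -> False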
+ cmd_lower = command.lower().strip()
+
+ # Dangerous commands we should never auto-run
+ dangerous_patterns = [
+ "rm -rf /",
+ "rm -rf ~",
+ "rm -rf *",
+ "> /dev/",
+ "mkfs",
+ "dd if=",
+ "chmod -R 777 /",
+ "chmod 777 /",
+ ":(){:|:&};:", # Fork bomb
+            "| sh",  # pipe-to-shell, e.g. "curl ... | sh"
+            "|sh",
+            "| bash",
+            "|bash",
+ ]
+
+ for pattern in dangerous_patterns:
+ if pattern in cmd_lower:
+ return False
+
+ # Safe fix command patterns
+ safe_patterns = [
+ "sudo systemctl",
+ "sudo service",
+ "sudo apt",
+ "sudo apt-get",
+ "apt-cache",
+ "systemctl status",
+ "sudo nginx -t",
+ "sudo nginx -s reload",
+ "docker start",
+ "docker restart",
+ "pip install",
+ "npm install",
+ "sudo chmod",
+ "sudo chown",
+ "mkdir -p",
+ "touch",
+ ]
+
+ for pattern in safe_patterns:
+ if cmd_lower.startswith(pattern):
+ return True
+
+ # Allow sudo commands for common safe operations
+ if cmd_lower.startswith("sudo "):
+ rest = cmd_lower[5:].strip()
+ safe_sudo = [
+ "systemctl",
+ "service",
+ "apt",
+ "apt-get",
+ "nginx",
+ "chmod",
+ "chown",
+ "mkdir",
+ ]
+ if any(rest.startswith(s) for s in safe_sudo):
+ return True
+
+ return False
+
+ def _send_fix_success_notification(self, command: str, error_type: str):
+ """Send a desktop notification that the fix was successful."""
+ try:
+ import subprocess
+
+ cmd_short = command[:30] + "..." if len(command) > 30 else command
+
+ subprocess.run(
+ [
+ "notify-send",
+ "--urgency=normal",
+ "--icon=dialog-information",
+ f"✅ Cortex: Fixed {error_type}",
+ f"Auto-fix successful! You can now retry:\n{cmd_short}",
+ ],
+ capture_output=True,
+ timeout=2,
+ )
+
+ except Exception:
+ pass
+
+ def _send_solution_notification(self, title: str, body: str):
+ """Send a desktop notification with the solution from Claude."""
+ try:
+ import subprocess
+
+ # Use notify-send with high priority
+ subprocess.run(
+ [
+ "notify-send",
+ "--urgency=critical",
+ "--icon=dialog-information",
+ "--expire-time=15000", # 15 seconds
+ title,
+ body,
+ ],
+ capture_output=True,
+ timeout=2,
+ )
+
+ except Exception:
+ pass
+
+ def _send_error_notification(
+ self, command: str, solution: str, error_type: str = "", can_auto_fix: bool = False
+ ):
+ """Send a desktop notification with the error solution."""
+ try:
+ # Try to use notify-send (standard on Ubuntu)
+ import subprocess
+
+ # Truncate for notification
+ cmd_short = command[:30] + "..." if len(command) > 30 else command
+ solution_short = solution[:150] + "..." if len(solution) > 150 else solution
+
+ # Build title with error type
+ if error_type and error_type != "unknown":
+ title = f"🔧 Cortex: {error_type}"
+ else:
+ title = "🔧 Cortex: Error detected"
+
+ # Add auto-fix indicator
+ if can_auto_fix:
+ body = f"✓ Auto-fixable\n\n{solution_short}"
+ icon = "dialog-information"
+ else:
+ body = solution_short
+ icon = "dialog-warning"
+
+ # Send notification
+ subprocess.run(
+ ["notify-send", "--urgency=normal", f"--icon={icon}", title, body],
+ capture_output=True,
+ timeout=2,
+ )
+
+        except Exception:
+ # notify-send not available or failed, try callback
+ if self.notification_callback:
+ self.notification_callback(f"Error in: {command[:30]}", solution[:100])
+
+ def _llm_analyze_error(self, command: str, error_output: str) -> str | None:
+ """Use local LLM to analyze an error and provide a fix."""
+ if not self._llm:
+ return None
+
+ # Build context from recent commands
+ context = ""
+ if self._session_context:
+ context = "Recent commands:\n" + "\n".join(self._session_context[-5:]) + "\n\n"
+
+ prompt = f"""You are a Linux expert. A user ran a command and got an error.
+Provide a brief, actionable fix (2-3 sentences max).
+
+IMPORTANT: Do NOT suggest sudo commands - they cannot be auto-executed.
+Only suggest non-sudo commands. If sudo is required, say "requires manual sudo" instead.
+
+{context}Command: {command}
+
+Error output:
+{error_output[:500]}
+
+Fix (be specific, give the exact non-sudo command to run):"""
+
+ try:
+ result = self._llm.analyze(prompt, max_tokens=150, timeout=10)
+ if result:
+ return result.strip()
+ except Exception:
+ pass
+
+ return None
+
+ def analyze_session_intent(self) -> str | None:
+ """Use LLM to analyze what the user is trying to accomplish based on their commands."""
+ if not self._llm or not self._llm.is_available():
+ return None
+
+ if len(self._session_context) < 2:
+ return None
+
+ prompt = f"""Based on these terminal commands, what is the user trying to accomplish?
+Give a brief summary (1 sentence max).
+
+Commands:
+{chr(10).join(self._session_context[-5:])}
+
+The user is trying to:"""
+
+ try:
+ result = self._llm.analyze(prompt, max_tokens=50, timeout=15)
+ if result:
+ result = result.strip()
+ # Take only first sentence
+ if ". " in result:
+ result = result.split(". ")[0] + "."
+ return result
+ except Exception:
+ pass
+
+ return None
+
+ def get_next_step_suggestion(self) -> str | None:
+ """Use LLM to suggest the next logical step based on recent commands."""
+ if not self._llm or not self._llm.is_available():
+ return None
+
+ if len(self._session_context) < 1:
+ return None
+
+ prompt = f"""Based on these terminal commands, what single command should the user run next?
+Respond with ONLY the command, nothing else.
+
+Recent commands:
+{chr(10).join(self._session_context[-5:])}
+
+Next command:"""
+
+ try:
+ result = self._llm.analyze(prompt, max_tokens=30, timeout=15)
+ if result:
+ # Clean up - extract just the command
+ result = result.strip()
+ # Remove common prefixes
+ for prefix in ["$", "Run:", "Try:", "Next:", "Command:", "`"]:
+ if result.lower().startswith(prefix.lower()):
+ result = result[len(prefix) :].strip()
+ result = result.rstrip("`")
+ return result.split("\n")[0].strip()
+ except Exception:
+ pass
+
+ return None
+
+ def get_collected_context(self) -> str:
+ """Get a formatted summary of all collected terminal context."""
+ with self._lock:
+ if not self._commands_observed:
+ return "No commands observed yet."
+
+ lines = ["[bold]📋 Collected Terminal Context:[/bold]", ""]
+
+ for i, obs in enumerate(self._commands_observed, 1):
+ timestamp = obs.get("timestamp", "")[:19]
+ source = obs.get("source", "unknown")
+ command = obs.get("command", "")
+
+ lines.append(f"{i}. [{timestamp}] ({source})")
+ lines.append(f" $ {command}")
+ lines.append("")
+
+ return "\n".join(lines)
+
+ def print_collected_context(self):
+ """Print a summary of all collected terminal context with AI analysis."""
+ from rich.panel import Panel
+
+ with self._lock:
+ if not self._commands_observed:
+ console.print(f"[{GRAY}]No commands observed yet.[/{GRAY}]")
+ return
+
+ console.print()
+ console.print(
+ Panel(
+ f"[bold {WHITE}]Collected {len(self._commands_observed)} command(s) from other terminals[/bold {WHITE}]",
+ title=f"[{PURPLE_LIGHT}]📋 Terminal Context Summary[/{PURPLE_LIGHT}]",
+ border_style=PURPLE,
+ )
+ )
+
+ for i, obs in enumerate(self._commands_observed[-10:], 1): # Show last 10
+ timestamp = (
+ obs.get("timestamp", "")[:19].split("T")[-1]
+ if "T" in obs.get("timestamp", "")
+ else obs.get("timestamp", "")[:8]
+ )
+ source = obs.get("source", "unknown")
+ command = obs.get("command", "")
+
+ # Shorten source name
+ if ":" in source:
+ source = source.split(":")[-1]
+
+ console.print(
+ f" [{GRAY}]{timestamp}[/{GRAY}] [{PURPLE_LIGHT}]{source:12}[/{PURPLE_LIGHT}] [{WHITE}]{command[:50]}{'...' if len(command) > 50 else ''}[/{WHITE}]"
+ )
+
+ if len(self._commands_observed) > 10:
+ console.print(
+ f" [{GRAY}]... and {len(self._commands_observed) - 10} more commands[/{GRAY}]"
+ )
+
+ # Add AI analysis if available
+ if (
+ self._llm
+ and self._use_llm
+ and self._llm.is_available()
+ and len(self._session_context) >= 2
+ ):
+ console.print()
+ console.print(
+ f"[bold {PURPLE_LIGHT}]{ICON_MONITOR} AI Analysis:[/bold {PURPLE_LIGHT}]"
+ )
+
+ # Analyze intent
+ intent = self.analyze_session_intent()
+ if intent:
+ console.print(f"[{WHITE}] Intent: {intent}[/{WHITE}]")
+
+ # Suggest next step
+ next_step = self.get_next_step_suggestion()
+ if next_step:
+ console.print(f"[{GREEN}] Suggested next: {next_step}[/{GREEN}]")
+
+ console.print()
+
+ def _extract_commands_from_content(self, content: str, source: str) -> list[str]:
+ """Extract commands from terminal content based on source type."""
+ commands = []
+
+ # Shell history files - each line is a command
+ if "_history" in source or "history" in source:
+ for line in content.strip().split("\n"):
+ line = line.strip()
+ if not line:
+ continue
+ # Skip timestamps in zsh extended history format
+ if line.startswith(":"):
+ # Format: : timestamp:0;command
+ if ";" in line:
+ cmd = line.split(";", 1)[1]
+ if cmd:
+ commands.append(cmd)
+ # Skip fish history format markers
+ elif line.startswith("- cmd:"):
+ cmd = line[6:].strip()
+ if cmd:
+ commands.append(cmd)
+ elif not line.startswith("when:"):
+ commands.append(line)
+ else:
+ # Terminal output - look for command prompts
+ for line in content.split("\n"):
+ line = line.strip()
+ if not line:
+ continue
+
+ # Various prompt patterns
+ prompt_patterns = [
+ r"^\$ (.+)$", # Simple $ prompt
+ r"^[a-zA-Z0-9_-]+@[a-zA-Z0-9_-]+:.+\$ (.+)$", # user@host:path$ cmd
+ r"^[a-zA-Z0-9_-]+@[a-zA-Z0-9_-]+:.+# (.+)$", # root prompt
+ r"^>>> (.+)$", # Python REPL
+ r"^\(.*\)\s*\$ (.+)$", # (venv) $ cmd
+ r"^➜\s+.+\s+(.+)$", # Oh-my-zsh prompt
+ r"^❯ (.+)$", # Starship prompt
+ r"^▶ (.+)$", # Another prompt style
+ r"^\[.*\]\$ (.+)$", # [dir]$ cmd
+ r"^% (.+)$", # % prompt (zsh default)
+ ]
+
+ for pattern in prompt_patterns:
+ match = re.match(pattern, line)
+ if match:
+ cmd = match.group(1).strip()
+ if cmd:
+ commands.append(cmd)
+ break
+
+ return commands
+
+ def _process_observed_command(self, command: str, source: str = "unknown"):
+ """Process an observed command and notify about issues with real-time feedback."""
+ # Skip empty or very short commands
+ if not command or len(command.strip()) < 2:
+ return
+
+ command = command.strip()
+
+ # Skip commands from the Cortex terminal itself
+ if self._is_cortex_terminal_command(command):
+ return
+
+ # Skip common shell built-ins that aren't interesting (only if standalone)
+ skip_commands = ["cd", "ls", "pwd", "clear", "exit", "history", "fg", "bg", "jobs", "alias"]
+ parts = command.split()
+ cmd_base = parts[0] if parts else ""
+
+ # Also handle sudo prefix
+ if cmd_base == "sudo" and len(parts) > 1:
+ cmd_base = parts[1]
+
+ # Only skip if it's JUST the command with no args
+ if cmd_base in skip_commands and len(parts) == 1:
+ return
+
+ # Skip if it looks like a partial command or just an argument
+ if not any(c.isalpha() for c in cmd_base):
+ return
+
+ # Avoid duplicates within short time window
+ with self._lock:
+ recent = [
+ c
+ for c in self._commands_observed
+ if c["command"] == command
+ and (
+ datetime.datetime.now() - datetime.datetime.fromisoformat(c["timestamp"])
+                ).total_seconds()
+ < 5
+ ]
+ if recent:
+ return
+
+ self._commands_observed.append(
+ {
+ "command": command,
+ "timestamp": datetime.datetime.now().isoformat(),
+ "source": source,
+ "has_error": False, # Will be updated if error is detected
+ "status": "pending", # pending, success, failed
+ }
+ )
+
+ # Add to session context for LLM
+ self._session_context.append(f"$ {command}")
+ # Keep only last 10 commands for context
+ if len(self._session_context) > 10:
+ self._session_context = self._session_context[-10:]
+
+ # Real-time feedback with visual emphasis
+ self._show_realtime_feedback(command, source)
+
+ # For live terminal commands, proactively check the result
+ if source == "live_terminal":
+ self._check_command_result(command)
+
+ # Check for issues and provide help
+ issues = self._check_command_issues(command)
+ if issues:
+ from rich.panel import Panel
+
+ console.print(
+ Panel(
+ f"[bold {YELLOW}]⚠ Issue:[/bold {YELLOW}] [{WHITE}]{issues}[/{WHITE}]",
+ border_style=PURPLE,
+ padding=(0, 1),
+ expand=False,
+ )
+ )
+ if self.notification_callback:
+ self.notification_callback("Cortex: Issue detected", issues)
+
+ # Check if command matches expected commands
+ if self._expected_commands:
+ matched = self._check_command_match(command)
+ from rich.panel import Panel
+
+ if matched:
+ console.print(
+ Panel(
+ f"[bold {GREEN}]{ICON_SUCCESS} Matches expected command[/bold {GREEN}]",
+ border_style=PURPLE,
+ padding=(0, 1),
+ expand=False,
+ )
+ )
+ else:
+ # User ran a DIFFERENT command than expected
+ console.print(
+ Panel(
+ f"[bold {YELLOW}]⚠ Not in expected commands[/bold {YELLOW}]",
+ border_style=PURPLE,
+ padding=(0, 1),
+ expand=False,
+ )
+ )
+ # Send notification with the correct command(s)
+ self._notify_wrong_command(command)
+
+ def _check_command_match(self, command: str) -> bool:
+ """Check if a command matches any expected command."""
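+        # e.g. an observed "sudo systemctl restart nginx" matches an expected
+        # "systemctl restart nginx.service" (sudo stripped, prefix/word match)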
+ if not self._expected_commands:
+ return True # No expected commands means anything goes
+
+ cmd_normalized = command.strip().lower()
+ # Remove sudo prefix for comparison
+ if cmd_normalized.startswith("sudo "):
+ cmd_normalized = cmd_normalized[5:].strip()
+
+ for expected in self._expected_commands:
+ exp_normalized = expected.strip().lower()
+ if exp_normalized.startswith("sudo "):
+ exp_normalized = exp_normalized[5:].strip()
+
+ # Check for exact match or if command contains the expected command
+ if cmd_normalized == exp_normalized:
+ return True
+ if exp_normalized in cmd_normalized:
+ return True
+ if cmd_normalized in exp_normalized:
+ return True
+
+ # Check if first words match (e.g., "systemctl restart nginx" vs "systemctl restart nginx.service")
+ cmd_parts = cmd_normalized.split()
+ exp_parts = exp_normalized.split()
+ if len(cmd_parts) >= 2 and len(exp_parts) >= 2:
+ if cmd_parts[0] == exp_parts[0] and cmd_parts[1] == exp_parts[1]:
+ return True
+
+ return False
+
+ def _notify_wrong_command(self, wrong_command: str):
+ """Send desktop notification when user runs wrong command."""
+ if not self._expected_commands:
+ return
+
+ # Find the most relevant expected command
+ correct_cmd = self._expected_commands[0] if self._expected_commands else None
+
+ if correct_cmd:
+ title = "⚠️ Cortex: Wrong Command"
+ body = f"You ran: {wrong_command[:40]}...\n\nExpected: {correct_cmd}"
+
+ try:
+ import subprocess
+
+ subprocess.run(
+ [
+ "notify-send",
+ "--urgency=critical",
+ "--icon=dialog-warning",
+ "--expire-time=10000",
+ title,
+ body,
+ ],
+ capture_output=True,
+ timeout=2,
+ )
+ except Exception:
+ pass
+
+ # Also show in console
+ console.print(
+ f" [bold {YELLOW}]📢 Expected command:[/bold {YELLOW}] [{PURPLE_LIGHT}]{correct_cmd}[/{PURPLE_LIGHT}]"
+ )
+
+ def _notify_fixing_command(self, original_cmd: str, fix_cmd: str):
+ """Send notification that Cortex is fixing a command error."""
+ title = "🔧 Cortex: Fixing Error"
+ body = f"Command failed: {original_cmd[:30]}...\n\nFix: {fix_cmd}"
+
+ try:
+ import subprocess
+
+ subprocess.run(
+ [
+ "notify-send",
+ "--urgency=normal",
+ "--icon=dialog-information",
+ "--expire-time=8000",
+ title,
+ body,
+ ],
+ capture_output=True,
+ timeout=2,
+ )
+ except Exception:
+ pass
+
+ def _check_command_result(self, command: str):
+ """Proactively check if a command succeeded by running verification commands."""
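+        # e.g. after "sudo systemctl restart nginx" this runs
+        # "systemctl status nginx" and scans the output for failure indicators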
+ import subprocess
+ import time
+
+ # Wait a moment for the command to complete
+ time.sleep(0.5)
+
+ cmd_lower = command.lower().strip()
+ check_cmd = None
+ error_output = None
+
+ # Determine what check to run based on the command
+ if "systemctl" in cmd_lower:
+ # Extract service name
+ parts = command.split()
+ service_name = None
+ for i, p in enumerate(parts):
+ if p in ["start", "stop", "restart", "reload", "enable", "disable"]:
+ if i + 1 < len(parts):
+ service_name = parts[i + 1]
+ break
+
+ if service_name:
+ check_cmd = f"systemctl status {service_name} 2>&1 | head -5"
+
+ elif "service" in cmd_lower and "status" not in cmd_lower:
+ # Extract service name for service command
+ parts = command.split()
+ if len(parts) >= 3:
+ service_name = parts[1] if parts[0] != "sudo" else parts[2]
+ check_cmd = f"service {service_name} status 2>&1 | head -5"
+
+ elif "docker" in cmd_lower:
+ if "run" in cmd_lower or "start" in cmd_lower:
+ # Get container name if present
+ parts = command.split()
+ container_name = None
+ for i, p in enumerate(parts):
+ if p == "--name" and i + 1 < len(parts):
+ container_name = parts[i + 1]
+ break
+
+ if container_name:
+ check_cmd = (
+ f"docker ps -f name={container_name} --format '{{{{.Status}}}}' 2>&1"
+ )
+ else:
+ check_cmd = "docker ps -l --format '{{.Status}} {{.Names}}' 2>&1"
+ elif "stop" in cmd_lower or "rm" in cmd_lower:
+ check_cmd = "docker ps -a -l --format '{{.Status}} {{.Names}}' 2>&1"
+
+ elif "nginx" in cmd_lower and "-t" in cmd_lower:
+ check_cmd = "nginx -t 2>&1"
+
+ elif "apt" in cmd_lower or "apt-get" in cmd_lower:
+ # Check for recent apt errors
+ check_cmd = "tail -3 /var/log/apt/term.log 2>/dev/null || echo 'ok'"
+
+ # Run the check command if we have one
+ if check_cmd:
+ try:
+ result = subprocess.run(
+ check_cmd, shell=True, capture_output=True, text=True, timeout=5
+ )
+
+ output = result.stdout + result.stderr
+
+ # Check for error indicators in the output
+ error_indicators = [
+ "failed",
+ "error",
+ "not found",
+ "inactive",
+ "dead",
+ "could not",
+ "unable",
+ "denied",
+ "cannot",
+ "exited",
+ "not running",
+ "not loaded",
+ ]
+
+ has_error = any(ind in output.lower() for ind in error_indicators)
+
+ if has_error or result.returncode != 0:
+ error_output = output
+
+            except Exception:  # includes TimeoutExpired
+ pass
+
+ # If we found an error, mark the command and process it with auto-fix
+ if error_output:
+ console.print(" [dim]checking...[/dim]")
+ # Mark this command as having an error
+ with self._lock:
+ for obs in self._commands_observed:
+ if obs["command"] == command:
+ obs["has_error"] = True
+ obs["status"] = "failed"
+ break
+ self._process_observed_command_with_output(command, error_output, "live_terminal_check")
+ else:
+ # Mark as success if check passed
+ with self._lock:
+ for obs in self._commands_observed:
+ if obs["command"] == command and obs["status"] == "pending":
+ obs["status"] = "success"
+ break
+
+ def _show_realtime_feedback(self, command: str, source: str):
+ """Show real-time visual feedback for detected commands."""
+ if not self._show_live_output:
+ return
+
+ from rich.panel import Panel
+ from rich.text import Text
+
+ # Source icons and labels
+ source_info = {
+ "cursor": ("🖥️", "Cursor IDE", "cyan"),
+ "external": ("🌐", "External Terminal", "blue"),
+ "tmux": ("📺", "Tmux", "magenta"),
+ "bash": ("📝", "Bash", "green"),
+ "zsh": ("📝", "Zsh", "green"),
+ "fish": ("🐟", "Fish", "yellow"),
+ }
+
+ # Determine source type
+ icon, label, color = "📝", "Terminal", "white"
+        for key, (src_icon, src_label, src_color) in source_info.items():
+            if key in source.lower():
+                icon, label, color = src_icon, src_label, src_color
+ break
+
+ # Categorize command
+ cmd_category = self._categorize_command(command)
+ category_icons = {
+ "docker": "🐳",
+ "git": "📦",
+ "apt": "📦",
+ "pip": "🐍",
+ "npm": "📦",
+ "systemctl": "⚙️",
+ "service": "⚙️",
+ "sudo": "🔐",
+ "ssh": "🔗",
+ "curl": "🌐",
+ "wget": "⬇️",
+ "mkdir": "📁",
+ "rm": "🗑️",
+ "cp": "📋",
+ "mv": "📋",
+ "cat": "📄",
+ "vim": "📝",
+ "nano": "📝",
+ "nginx": "🌐",
+ "python": "🐍",
+ "node": "📗",
+ }
+ cmd_icon = category_icons.get(cmd_category, "▶")
+
+ # Format timestamp
+ timestamp = datetime.datetime.now().strftime("%H:%M:%S")
+
+ # Store in buffer for later reference
+ self._output_buffer.append(
+ {
+ "timestamp": timestamp,
+ "source": source,
+ "label": label,
+ "icon": icon,
+ "color": color,
+ "command": command,
+ "cmd_icon": cmd_icon,
+ }
+ )
+
+ # Print real-time feedback with bordered section
+ analysis = self._analyze_command(command)
+
+ # Build command display
+ cmd_text = Text()
+ cmd_text.append(f"{cmd_icon} ", style="bold")
+ cmd_text.append(command, style="bold white")
+ if analysis:
+ cmd_text.append(f"\n {analysis}", style="dim italic")
+
+ console.print()
+ console.print(
+ Panel(
+ cmd_text,
+ title=f"[dim]{timestamp}[/dim]",
+ title_align="right",
+ border_style="blue",
+ padding=(0, 1),
+ )
+ )
+
+ def _categorize_command(self, command: str) -> str:
+ """Categorize a command by its base command."""
+ cmd_parts = command.split()
+ if not cmd_parts:
+ return "unknown"
+
+ base = cmd_parts[0]
+ if base == "sudo" and len(cmd_parts) > 1:
+ base = cmd_parts[1]
+
+ return base.lower()
+
+ def _analyze_command(self, command: str) -> str | None:
+ """Analyze a command and return a brief description using LLM or patterns."""
+ cmd_lower = command.lower()
+
+ # First try pattern matching for speed
+ patterns = [
+ (r"docker run", "Starting a Docker container"),
+ (r"docker pull", "Pulling a Docker image"),
+ (r"docker ps", "Listing Docker containers"),
+ (r"docker exec", "Executing command in container"),
+ (r"docker build", "Building Docker image"),
+ (r"docker stop", "Stopping container"),
+ (r"docker rm", "Removing container"),
+ (r"git clone", "Cloning a repository"),
+ (r"git pull", "Pulling latest changes"),
+ (r"git push", "Pushing changes"),
+ (r"git commit", "Committing changes"),
+ (r"git status", "Checking repository status"),
+ (r"apt install", "Installing package via apt"),
+ (r"apt update", "Updating package list"),
+ (r"pip install", "Installing Python package"),
+ (r"npm install", "Installing Node.js package"),
+ (r"systemctl start", "Starting a service"),
+ (r"systemctl stop", "Stopping a service"),
+ (r"systemctl restart", "Restarting a service"),
+ (r"systemctl status", "Checking service status"),
+ (r"nginx -t", "Testing Nginx configuration"),
+ (r"curl", "Making HTTP request"),
+ (r"wget", "Downloading file"),
+ (r"ssh", "SSH connection"),
+ (r"mkdir", "Creating directory"),
+ (r"rm -rf", "Removing files/directories recursively"),
+ (r"cp ", "Copying files"),
+ (r"mv ", "Moving/renaming files"),
+ (r"chmod", "Changing file permissions"),
+ (r"chown", "Changing file ownership"),
+ ]
+
+ for pattern, description in patterns:
+ if re.search(pattern, cmd_lower):
+ return description
+
+ # Use LLM for unknown commands
+ if self._llm and self._use_llm and self._llm.is_available():
+ return self._llm_analyze_command(command)
+
+ return None
+
+ def _llm_analyze_command(self, command: str) -> str | None:
+ """Use local LLM to analyze a command."""
+ if not self._llm:
+ return None
+
+ prompt = f"""Analyze this Linux command and respond with ONLY a brief description (max 10 words) of what it does:
+
+Command: {command}
+
+Brief description:"""
+
+ try:
+ result = self._llm.analyze(prompt, max_tokens=30, timeout=5)
+ if result:
+ # Clean up the response
+ result = result.strip().strip('"').strip("'")
+ # Take only first line
+ result = result.split("\n")[0].strip()
+ # Limit length
+ if len(result) > 60:
+ result = result[:57] + "..."
+ return result
+ except Exception:
+ pass
+
+ return None
+
+ def _check_command_issues(self, command: str) -> str | None:
+ """Check if a command has potential issues and return a warning."""
+ issues = []
+
+ if any(p in command for p in ["/etc/", "/var/", "/usr/"]):
+ if not command.startswith("sudo") and not command.startswith("cat"):
+ issues.append("May need sudo for system files")
+
+ if "rm -rf /" in command:
+ issues.append("DANGER: Destructive command detected!")
+
+ typo_checks = {
+ "sudp": "sudo",
+ "suod": "sudo",
+ "cta": "cat",
+ "mdir": "mkdir",
+ "mkidr": "mkdir",
+ }
+ for typo, correct in typo_checks.items():
+ if command.startswith(typo + " "):
+ issues.append(f"Typo? Did you mean '{correct}'?")
+
+ return "; ".join(issues) if issues else None
diff --git a/cortex/do_runner/verification.py b/cortex/do_runner/verification.py
new file mode 100644
index 00000000..f179c1ed
--- /dev/null
+++ b/cortex/do_runner/verification.py
@@ -0,0 +1,1262 @@
+"""Verification and conflict detection for the Do Runner module."""
+
+import os
+import re
+import subprocess
+import time
+from typing import Any
+
+from rich.console import Console
+
+from .models import CommandLog
+
+console = Console()
+
+
+class ConflictDetector:
+ """Detects conflicts with existing configurations."""
+
+ def _execute_command(
+ self, cmd: str, needs_sudo: bool = False, timeout: int = 120
+ ) -> tuple[bool, str, str]:
+ """Execute a single command."""
+ try:
+ if needs_sudo and not cmd.strip().startswith("sudo"):
+ cmd = f"sudo {cmd}"
+
+ result = subprocess.run(
+ ["sudo", "bash", "-c", cmd] if needs_sudo else cmd,
+ shell=not needs_sudo,
+ capture_output=True,
+ text=True,
+ timeout=timeout,
+ )
+ return result.returncode == 0, result.stdout.strip(), result.stderr.strip()
+ except subprocess.TimeoutExpired:
+ return False, "", f"Command timed out after {timeout} seconds"
+ except Exception as e:
+ return False, "", str(e)
+
+ def check_for_conflicts(
+ self,
+ cmd: str,
+ purpose: str,
+ ) -> dict[str, Any]:
+ """
+ Check if the command might conflict with existing resources.
+
+ This is a GENERAL conflict detector that works for:
+ - Docker containers
+ - Services (systemd)
+ - Files/directories
+ - Packages
+ - Databases
+ - Users/groups
+ - Ports
+ - Virtual environments
+ - And more...
+
+ Returns:
+ Dict with conflict info, alternatives, and cleanup commands.
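+
+        Example (illustrative; assumes a container named "web" is already running):
+            detector = ConflictDetector()
+            info = detector.check_for_conflicts(
+                "docker run -d --name web nginx", "run a web server"
+            )
+            # info["has_conflict"] -> True, info["resource_name"] -> "web"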
+ """
+ # Check all resource types
+ checkers = [
+ self._check_docker_conflict,
+ self._check_service_conflict,
+ self._check_file_conflict,
+ self._check_package_conflict,
+ self._check_port_conflict,
+ self._check_user_conflict,
+ self._check_venv_conflict,
+ self._check_database_conflict,
+ self._check_cron_conflict,
+ ]
+
+ for checker in checkers:
+ result = checker(cmd, purpose)
+ if result["has_conflict"]:
+ return result
+
+ # Default: no conflict
+ return {
+ "has_conflict": False,
+ "conflict_type": None,
+ "resource_type": None,
+ "resource_name": None,
+ "suggestion": None,
+ "cleanup_commands": [],
+ "alternative_actions": [],
+ }
+
+ def _create_conflict_result(
+ self,
+ resource_type: str,
+ resource_name: str,
+ conflict_type: str,
+ suggestion: str,
+ is_active: bool = True,
+ alternative_actions: list[dict] | None = None,
+ ) -> dict[str, Any]:
+ """Create a standardized conflict result with alternatives."""
+
+ # Generate standard alternative actions based on resource type and state
+ if alternative_actions is None:
+ if is_active:
+ alternative_actions = [
+ {
+ "action": "use_existing",
+ "description": f"Use existing {resource_type} '{resource_name}'",
+ "commands": [],
+ },
+ {
+ "action": "restart",
+ "description": f"Restart {resource_type} '{resource_name}'",
+ "commands": self._get_restart_commands(resource_type, resource_name),
+ },
+ {
+ "action": "recreate",
+ "description": f"Remove and recreate {resource_type} '{resource_name}'",
+ "commands": self._get_remove_commands(resource_type, resource_name),
+ },
+ ]
+ else:
+ alternative_actions = [
+ {
+ "action": "start_existing",
+ "description": f"Start existing {resource_type} '{resource_name}'",
+ "commands": self._get_start_commands(resource_type, resource_name),
+ },
+ {
+ "action": "recreate",
+ "description": f"Remove and recreate {resource_type} '{resource_name}'",
+ "commands": self._get_remove_commands(resource_type, resource_name),
+ },
+ ]
+
+ return {
+ "has_conflict": True,
+ "conflict_type": conflict_type,
+ "resource_type": resource_type,
+ "resource_name": resource_name,
+ "suggestion": suggestion,
+ "is_active": is_active,
+ "alternative_actions": alternative_actions,
+ "cleanup_commands": [],
+ "use_existing": is_active,
+ }
+
+ def _get_restart_commands(self, resource_type: str, name: str) -> list[str]:
+ """Get restart commands for a resource type."""
+ commands = {
+ "container": [f"docker restart {name}"],
+ "service": [f"sudo systemctl restart {name}"],
+ "database": [f"sudo systemctl restart {name}"],
+ "webserver": [f"sudo systemctl restart {name}"],
+ }
+ return commands.get(resource_type, [])
+
+ def _get_start_commands(self, resource_type: str, name: str) -> list[str]:
+ """Get start commands for a resource type."""
+ commands = {
+ "container": [f"docker start {name}"],
+ "service": [f"sudo systemctl start {name}"],
+ "database": [f"sudo systemctl start {name}"],
+ "webserver": [f"sudo systemctl start {name}"],
+ }
+ return commands.get(resource_type, [])
+
+ def _get_remove_commands(self, resource_type: str, name: str) -> list[str]:
+ """Get remove/cleanup commands for a resource type."""
+ commands = {
+ "container": [f"docker rm -f {name}"],
+ "service": [f"sudo systemctl stop {name}"],
+ "file": [f"sudo rm -f {name}"],
+ "directory": [f"sudo rm -rf {name}"],
+ "package": [], # Don't auto-remove packages
+ "user": [], # Don't auto-remove users
+ "venv": [f"rm -rf {name}"],
+ "database": [], # Don't auto-remove databases
+ }
+ return commands.get(resource_type, [])
+
+ def _check_docker_conflict(self, cmd: str, purpose: str) -> dict[str, Any]:
+ """Check for Docker container/compose conflicts."""
+ result = {"has_conflict": False}
+
+ # Docker run with --name
+ if "docker run" in cmd.lower():
+ name_match = re.search(r"--name\s+([^\s]+)", cmd)
+ if name_match:
+ container_name = name_match.group(1)
+
+ # Check if container exists
+ success, container_id, _ = self._execute_command(
+ f"docker ps -aq --filter name=^{container_name}$", needs_sudo=False
+ )
+
+ if success and container_id.strip():
+ # Check if running
+ running_success, running_id, _ = self._execute_command(
+ f"docker ps -q --filter name=^{container_name}$", needs_sudo=False
+ )
+ is_running = running_success and running_id.strip()
+
+ # Get image info
+ _, image_info, _ = self._execute_command(
+ f"docker inspect --format '{{{{.Config.Image}}}}' {container_name}",
+ needs_sudo=False,
+ )
+ image = image_info.strip() if image_info else "unknown"
+
+ status = "running" if is_running else "stopped"
+ return self._create_conflict_result(
+ resource_type="container",
+ resource_name=container_name,
+ conflict_type=f"container_{status}",
+ suggestion=f"Container '{container_name}' already exists ({status}, image: {image})",
+ is_active=is_running,
+ )
+
+ # Docker compose
+ if "docker-compose" in cmd.lower() or "docker compose" in cmd.lower():
+ if "up" in cmd:
+ success, services, _ = self._execute_command(
+ "docker compose ps -q 2>/dev/null", needs_sudo=False
+ )
+ if success and services.strip():
+ return self._create_conflict_result(
+ resource_type="compose",
+ resource_name="docker-compose",
+ conflict_type="compose_running",
+ suggestion="Docker Compose services are already running",
+ is_active=True,
+ alternative_actions=[
+ {
+ "action": "use_existing",
+ "description": "Keep existing services",
+ "commands": [],
+ },
+ {
+ "action": "restart",
+ "description": "Restart services",
+ "commands": ["docker compose restart"],
+ },
+ {
+ "action": "recreate",
+ "description": "Recreate services",
+ "commands": ["docker compose down", "docker compose up -d"],
+ },
+ ],
+ )
+
+ return result
+
+ def _check_service_conflict(self, cmd: str, purpose: str) -> dict[str, Any]:
+ """Check for systemd service conflicts."""
+ result = {"has_conflict": False}
+
+ # systemctl start/enable
+ if "systemctl" in cmd:
+ service_match = re.search(r"systemctl\s+(start|enable|restart)\s+([^\s]+)", cmd)
+ if service_match:
+ action = service_match.group(1)
+ service = service_match.group(2).replace(".service", "")
+
+ success, status, _ = self._execute_command(
+ f"systemctl is-active {service} 2>/dev/null", needs_sudo=False
+ )
+
+ if action in ["start", "enable"] and status.strip() == "active":
+ return self._create_conflict_result(
+ resource_type="service",
+ resource_name=service,
+ conflict_type="service_running",
+ suggestion=f"Service '{service}' is already running",
+ is_active=True,
+ )
+
+ # service command
+ if cmd.startswith("service ") or " service " in cmd:
+ service_match = re.search(r"service\s+(\S+)\s+(start|restart)", cmd)
+ if service_match:
+ service = service_match.group(1)
+ success, status, _ = self._execute_command(
+ f"systemctl is-active {service} 2>/dev/null", needs_sudo=False
+ )
+ if status.strip() == "active":
+ return self._create_conflict_result(
+ resource_type="service",
+ resource_name=service,
+ conflict_type="service_running",
+ suggestion=f"Service '{service}' is already running",
+ is_active=True,
+ )
+
+ return result
+
+ def _check_file_conflict(self, cmd: str, purpose: str) -> dict[str, Any]:
+ """Check for file/directory conflicts."""
+ result = {"has_conflict": False}
+ paths_in_cmd = re.findall(r"(/[^\s>|]+)", cmd)
+
+ for path in paths_in_cmd:
+ # Skip common read paths
+            if path in ("/dev/null", "/etc/os-release") or path.startswith(("/proc/", "/sys/")):
+ continue
+
+ # Check for file creation/modification commands
+ is_write_cmd = any(
+ p in cmd for p in [">", "tee ", "cp ", "mv ", "touch ", "mkdir ", "echo "]
+ )
+
+ if is_write_cmd and os.path.exists(path):
+ is_dir = os.path.isdir(path)
+ resource_type = "directory" if is_dir else "file"
+
+ return self._create_conflict_result(
+ resource_type=resource_type,
+ resource_name=path,
+ conflict_type=f"{resource_type}_exists",
+ suggestion=f"{resource_type.title()} '{path}' already exists",
+ is_active=True,
+ alternative_actions=[
+ {
+ "action": "use_existing",
+ "description": f"Keep existing {resource_type}",
+ "commands": [],
+ },
+ {
+ "action": "backup",
+ "description": "Backup and overwrite",
+ "commands": [f"sudo cp -r {path} {path}.cortex.bak"],
+ },
+ {
+ "action": "recreate",
+ "description": "Remove and recreate",
+ "commands": [f"sudo rm -rf {path}" if is_dir else f"sudo rm -f {path}"],
+ },
+ ],
+ )
+
+ return result
+
+ def _check_package_conflict(self, cmd: str, purpose: str) -> dict[str, Any]:
+ """Check for package installation conflicts."""
+ result = {"has_conflict": False}
+
+ # apt install
+ if "apt install" in cmd or "apt-get install" in cmd:
+ pkg_match = re.search(r"(?:apt|apt-get)\s+install\s+(?:-y\s+)?(\S+)", cmd)
+ if pkg_match:
+ package = pkg_match.group(1)
+ success, _, _ = self._execute_command(
+ f"dpkg -l {package} 2>/dev/null | grep -q '^ii'", needs_sudo=False
+ )
+ if success:
+ # Get version
+ _, version_out, _ = self._execute_command(
+ f"dpkg -l {package} | grep '^ii' | awk '{{print $3}}'", needs_sudo=False
+ )
+ version = version_out.strip() if version_out else "unknown"
+
+ return self._create_conflict_result(
+ resource_type="package",
+ resource_name=package,
+ conflict_type="package_installed",
+ suggestion=f"Package '{package}' is already installed (version: {version})",
+ is_active=True,
+ alternative_actions=[
+ {
+ "action": "use_existing",
+ "description": f"Keep current version ({version})",
+ "commands": [],
+ },
+ {
+ "action": "upgrade",
+ "description": "Upgrade to latest version",
+ "commands": [f"sudo apt install --only-upgrade -y {package}"],
+ },
+ {
+ "action": "reinstall",
+ "description": "Reinstall package",
+ "commands": [f"sudo apt install --reinstall -y {package}"],
+ },
+ ],
+ )
+
+ # pip install
+ if "pip install" in cmd or "pip3 install" in cmd:
+ pkg_match = re.search(r"pip3?\s+install\s+(?:-[^\s]+\s+)*(\S+)", cmd)
+ if pkg_match:
+ package = pkg_match.group(1)
+ success, version_out, _ = self._execute_command(
+ f"pip3 show {package} 2>/dev/null | grep Version", needs_sudo=False
+ )
+ if success and version_out:
+ version = version_out.replace("Version:", "").strip()
+ return self._create_conflict_result(
+ resource_type="pip_package",
+ resource_name=package,
+ conflict_type="pip_package_installed",
+ suggestion=f"Python package '{package}' is already installed (version: {version})",
+ is_active=True,
+ alternative_actions=[
+ {
+ "action": "use_existing",
+ "description": f"Keep current version ({version})",
+ "commands": [],
+ },
+ {
+ "action": "upgrade",
+ "description": "Upgrade to latest",
+ "commands": [f"pip3 install --upgrade {package}"],
+ },
+ {
+ "action": "reinstall",
+ "description": "Reinstall package",
+ "commands": [f"pip3 install --force-reinstall {package}"],
+ },
+ ],
+ )
+
+ # npm install -g
+ if "npm install -g" in cmd or "npm i -g" in cmd:
+ pkg_match = re.search(r"npm\s+(?:install|i)\s+-g\s+(\S+)", cmd)
+ if pkg_match:
+ package = pkg_match.group(1)
+ success, version_out, _ = self._execute_command(
+ f"npm list -g {package} 2>/dev/null | grep {package}", needs_sudo=False
+ )
+ if success and version_out:
+ return self._create_conflict_result(
+ resource_type="npm_package",
+ resource_name=package,
+ conflict_type="npm_package_installed",
+ suggestion=f"npm package '{package}' is already installed globally",
+ is_active=True,
+ alternative_actions=[
+ {
+ "action": "use_existing",
+ "description": "Keep current version",
+ "commands": [],
+ },
+ {
+ "action": "upgrade",
+ "description": "Update to latest",
+ "commands": [f"npm update -g {package}"],
+ },
+ ],
+ )
+
+ # snap install - check if snap is available and package is installed
+ if "snap install" in cmd:
+ # First check if snap is available
+ snap_available = self._check_tool_available("snap")
+ if not snap_available:
+ return self._create_conflict_result(
+ resource_type="tool",
+ resource_name="snap",
+ conflict_type="tool_not_available",
+ suggestion="Snap package manager is not installed. Installing snap first.",
+ is_active=False,
+ alternative_actions=[
+ {
+ "action": "install_first",
+ "description": "Install snapd first",
+ "commands": ["sudo apt update", "sudo apt install -y snapd"],
+ },
+ {
+ "action": "use_apt",
+ "description": "Use apt instead of snap",
+ "commands": [],
+ },
+ ],
+ )
+
+ pkg_match = re.search(r"snap\s+install\s+(\S+)", cmd)
+ if pkg_match:
+ package = pkg_match.group(1)
+ success, version_out, _ = self._execute_command(
+ f"snap list {package} 2>/dev/null | grep {package}", needs_sudo=False
+ )
+ if success and version_out:
+ return self._create_conflict_result(
+ resource_type="snap_package",
+ resource_name=package,
+ conflict_type="snap_package_installed",
+ suggestion=f"Snap package '{package}' is already installed",
+ is_active=True,
+ alternative_actions=[
+ {
+ "action": "use_existing",
+ "description": "Keep current version",
+ "commands": [],
+ },
+ {
+ "action": "refresh",
+ "description": "Refresh to latest",
+ "commands": [f"sudo snap refresh {package}"],
+ },
+ ],
+ )
+
+ # flatpak install - check if flatpak is available and package is installed
+ if "flatpak install" in cmd:
+ # First check if flatpak is available
+ flatpak_available = self._check_tool_available("flatpak")
+ if not flatpak_available:
+ return self._create_conflict_result(
+ resource_type="tool",
+ resource_name="flatpak",
+ conflict_type="tool_not_available",
+ suggestion="Flatpak is not installed. Installing flatpak first.",
+ is_active=False,
+ alternative_actions=[
+ {
+ "action": "install_first",
+ "description": "Install flatpak first",
+ "commands": ["sudo apt update", "sudo apt install -y flatpak"],
+ },
+ {
+ "action": "use_apt",
+ "description": "Use apt instead of flatpak",
+ "commands": [],
+ },
+ ],
+ )
+
+ pkg_match = re.search(r"flatpak\s+install\s+(?:-y\s+)?(\S+)", cmd)
+ if pkg_match:
+ package = pkg_match.group(1)
+ success, version_out, _ = self._execute_command(
+ f"flatpak list | grep -i {package}", needs_sudo=False
+ )
+ if success and version_out:
+ return self._create_conflict_result(
+ resource_type="flatpak_package",
+ resource_name=package,
+ conflict_type="flatpak_package_installed",
+ suggestion=f"Flatpak application '{package}' is already installed",
+ is_active=True,
+ alternative_actions=[
+ {
+ "action": "use_existing",
+ "description": "Keep current version",
+ "commands": [],
+ },
+ {
+ "action": "upgrade",
+ "description": "Update to latest",
+ "commands": [f"flatpak update -y {package}"],
+ },
+ ],
+ )
+
+ return result
+
+ def _check_tool_available(self, tool: str) -> bool:
+ """Check if a command-line tool is available."""
+ success, output, _ = self._execute_command(f"which {tool} 2>/dev/null", needs_sudo=False)
+ return success and bool(output.strip())
+
+ def _check_port_conflict(self, cmd: str, purpose: str) -> dict[str, Any]:
+ """Check for port binding conflicts."""
+ result = {"has_conflict": False}
+
+ # Look for port mappings
+ port_patterns = [
+ r"-p\s+(\d+):\d+", # docker -p 8080:80
+ r"--port[=\s]+(\d+)", # --port 8080
+ r":(\d+)\s", # :8080
+ r"listen\s+(\d+)", # nginx listen 80
+ ]
+
+ for pattern in port_patterns:
+ match = re.search(pattern, cmd)
+ if match:
+ port = match.group(1)
+
+ # Check if port is in use
+ success, output, _ = self._execute_command(
+ f"ss -tlnp | grep ':{port} '", needs_sudo=True
+ )
+ if success and output:
+ # Get process using the port
+ process = "unknown"
+ proc_match = re.search(r'users:\(\("([^"]+)"', output)
+ if proc_match:
+ process = proc_match.group(1)
+
+ return self._create_conflict_result(
+ resource_type="port",
+ resource_name=port,
+ conflict_type="port_in_use",
+ suggestion=f"Port {port} is already in use by '{process}'",
+ is_active=True,
+ alternative_actions=[
+ {
+ "action": "use_different",
+ "description": "Use a different port",
+ "commands": [],
+ },
+ {
+ "action": "stop_existing",
+ "description": f"Stop process using port {port}",
+ "commands": [f"sudo fuser -k {port}/tcp"],
+ },
+ ],
+ )
+
+ return result
+
+ def _check_user_conflict(self, cmd: str, purpose: str) -> dict[str, Any]:
+ """Check for user/group creation conflicts."""
+ result = {"has_conflict": False}
+
+ # useradd / adduser
+ if "useradd" in cmd or "adduser" in cmd:
+ user_match = re.search(r"(?:useradd|adduser)\s+(?:[^\s]+\s+)*(\S+)$", cmd)
+ if user_match:
+ username = user_match.group(1)
+ success, _, _ = self._execute_command(
+ f"id {username} 2>/dev/null", needs_sudo=False
+ )
+ if success:
+ return self._create_conflict_result(
+ resource_type="user",
+ resource_name=username,
+ conflict_type="user_exists",
+ suggestion=f"User '{username}' already exists",
+ is_active=True,
+ alternative_actions=[
+ {
+ "action": "use_existing",
+ "description": f"Use existing user '{username}'",
+ "commands": [],
+ },
+ {
+ "action": "modify",
+ "description": "Modify existing user",
+ "commands": [],
+ },
+ ],
+ )
+
+ # groupadd / addgroup
+ if "groupadd" in cmd or "addgroup" in cmd:
+ group_match = re.search(r"(?:groupadd|addgroup)\s+(\S+)$", cmd)
+ if group_match:
+ groupname = group_match.group(1)
+ success, _, _ = self._execute_command(
+ f"getent group {groupname} 2>/dev/null", needs_sudo=False
+ )
+ if success:
+ return self._create_conflict_result(
+ resource_type="group",
+ resource_name=groupname,
+ conflict_type="group_exists",
+ suggestion=f"Group '{groupname}' already exists",
+ is_active=True,
+ alternative_actions=[
+ {
+ "action": "use_existing",
+ "description": f"Use existing group '{groupname}'",
+ "commands": [],
+ },
+ ],
+ )
+
+ return result
+
+ def _check_venv_conflict(self, cmd: str, purpose: str) -> dict[str, Any]:
+ """Check for virtual environment conflicts."""
+ result = {"has_conflict": False}
+
+ # python -m venv / virtualenv
+ if "python" in cmd and "venv" in cmd:
+ venv_match = re.search(r"(?:venv|virtualenv)\s+(\S+)", cmd)
+ if venv_match:
+ venv_path = venv_match.group(1)
+ if os.path.exists(venv_path) and os.path.exists(
+ os.path.join(venv_path, "bin", "python")
+ ):
+ return self._create_conflict_result(
+ resource_type="venv",
+ resource_name=venv_path,
+ conflict_type="venv_exists",
+ suggestion=f"Virtual environment '{venv_path}' already exists",
+ is_active=True,
+ alternative_actions=[
+ {
+ "action": "use_existing",
+ "description": "Use existing venv",
+ "commands": [],
+ },
+ {
+ "action": "recreate",
+ "description": "Delete and recreate",
+ "commands": [f"rm -rf {venv_path}"],
+ },
+ ],
+ )
+
+ return result
+
+ def _check_database_conflict(self, cmd: str, purpose: str) -> dict[str, Any]:
+ """Check for database creation conflicts."""
+ result = {"has_conflict": False}
+
+ # MySQL/MariaDB create database
+ if "mysql" in cmd.lower() and "create database" in cmd.lower():
+ db_match = re.search(
+ r"create\s+database\s+(?:if\s+not\s+exists\s+)?(\S+)", cmd, re.IGNORECASE
+ )
+ if db_match:
+ dbname = db_match.group(1).strip("`\"'")
+ success, output, _ = self._execute_command(
+ f"mysql -e \"SHOW DATABASES LIKE '{dbname}'\" 2>/dev/null", needs_sudo=False
+ )
+ if success and dbname in output:
+ return self._create_conflict_result(
+ resource_type="mysql_database",
+ resource_name=dbname,
+ conflict_type="database_exists",
+ suggestion=f"MySQL database '{dbname}' already exists",
+ is_active=True,
+ alternative_actions=[
+ {
+ "action": "use_existing",
+ "description": "Use existing database",
+ "commands": [],
+ },
+ {
+ "action": "recreate",
+ "description": "Drop and recreate",
+ "commands": [f"mysql -e 'DROP DATABASE {dbname}'"],
+ },
+ ],
+ )
+
+ # PostgreSQL create database
+ if "createdb" in cmd or ("psql" in cmd and "create database" in cmd.lower()):
+ db_match = re.search(r"(?:createdb|create\s+database)\s+(\S+)", cmd, re.IGNORECASE)
+ if db_match:
+ dbname = db_match.group(1).strip("\"'")
+ success, _, _ = self._execute_command(
+ f"psql -lqt 2>/dev/null | cut -d \\| -f 1 | grep -qw {dbname}", needs_sudo=False
+ )
+ if success:
+ return self._create_conflict_result(
+ resource_type="postgres_database",
+ resource_name=dbname,
+ conflict_type="database_exists",
+ suggestion=f"PostgreSQL database '{dbname}' already exists",
+ is_active=True,
+ alternative_actions=[
+ {
+ "action": "use_existing",
+ "description": "Use existing database",
+ "commands": [],
+ },
+ {
+ "action": "recreate",
+ "description": "Drop and recreate",
+ "commands": [f"dropdb {dbname}"],
+ },
+ ],
+ )
+
+ return result
+
+ def _check_cron_conflict(self, cmd: str, purpose: str) -> dict[str, Any]:
+ """Check for cron job conflicts."""
+ result = {"has_conflict": False}
+
+ # crontab entries
+ if "crontab" in cmd or "/etc/cron" in cmd:
+ # Check if similar cron job exists
+ if "echo" in cmd and ">>" in cmd:
+ # Extract the command being added
+ job_match = re.search(r"echo\s+['\"]([^'\"]+)['\"]", cmd)
+ if job_match:
+ job_content = job_match.group(1)
+ # Check existing crontab
+ success, crontab, _ = self._execute_command(
+ "crontab -l 2>/dev/null", needs_sudo=False
+ )
+ if success and crontab:
+ # Check if similar job exists
+ job_cmd = job_content.split()[-1] if job_content else ""
+ if job_cmd and job_cmd in crontab:
+ return self._create_conflict_result(
+ resource_type="cron_job",
+ resource_name=job_cmd,
+ conflict_type="cron_exists",
+ suggestion=f"Similar cron job for '{job_cmd}' already exists",
+ is_active=True,
+ alternative_actions=[
+ {
+ "action": "use_existing",
+ "description": "Keep existing cron job",
+ "commands": [],
+ },
+ {
+ "action": "replace",
+ "description": "Replace existing job",
+ "commands": [],
+ },
+ ],
+ )
+
+ return result
+
+
+class VerificationRunner:
+ """Runs verification tests after command execution."""
+
+ def _execute_command(
+ self, cmd: str, needs_sudo: bool = False, timeout: int = 120
+ ) -> tuple[bool, str, str]:
+ """Execute a single command."""
+ try:
+ if needs_sudo and not cmd.strip().startswith("sudo"):
+ cmd = f"sudo {cmd}"
+
+ result = subprocess.run(
+ ["sudo", "bash", "-c", cmd] if needs_sudo else cmd,
+ shell=not needs_sudo,
+ capture_output=True,
+ text=True,
+ timeout=timeout,
+ )
+ return result.returncode == 0, result.stdout.strip(), result.stderr.strip()
+ except subprocess.TimeoutExpired:
+ return False, "", f"Command timed out after {timeout} seconds"
+ except Exception as e:
+ return False, "", str(e)
+
+ def run_verification_tests(
+ self,
+ commands_executed: list[CommandLog],
+ user_query: str,
+ ) -> tuple[bool, list[dict[str, Any]]]:
+ """
+ Run verification tests after all commands have been executed.
+
+ Returns:
+ Tuple of (all_passed, test_results)
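+
+        Each entry in test_results is a plain dict, e.g. (illustrative):
+            {"test": "nginx -t", "passed": True, "output": "syntax is ok"}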
+ """
+ console.print()
+ console.print("[bold cyan]🧪 Running verification tests...[/bold cyan]")
+
+ test_results = []
+ services_to_check = set()
+ configs_to_check = set()
+ files_to_check = set()
+
+ for cmd_log in commands_executed:
+ cmd = cmd_log.command.lower()
+
+ if "systemctl" in cmd or "service " in cmd:
+ svc_match = re.search(r"(?:systemctl|service)\s+\w+\s+([^\s]+)", cmd)
+ if svc_match:
+ services_to_check.add(svc_match.group(1).replace(".service", ""))
+
+ if "nginx" in cmd:
+ configs_to_check.add("nginx")
+ if "apache" in cmd or "a2ensite" in cmd:
+ configs_to_check.add("apache")
+
+ paths = re.findall(r"(/[^\s>|&]+)", cmd_log.command)
+ for path in paths:
+ if any(x in path for x in ["/etc/", "/var/", "/opt/"]):
+ files_to_check.add(path)
+
+ all_passed = True
+
+ # Config tests
+ if "nginx" in configs_to_check:
+ console.print("[dim] Testing nginx configuration...[/dim]")
+ success, stdout, stderr = self._execute_command("nginx -t", needs_sudo=True)
+ test_results.append(
+ {
+ "test": "nginx -t",
+ "passed": success,
+ "output": stdout if success else stderr,
+ }
+ )
+ if success:
+ console.print("[green] ✓ Nginx configuration is valid[/green]")
+ else:
+ console.print(f"[red] ✗ Nginx config test failed: {stderr[:100]}[/red]")
+ all_passed = False
+
+ if "apache" in configs_to_check:
+ console.print("[dim] Testing Apache configuration...[/dim]")
+ success, stdout, stderr = self._execute_command(
+ "apache2ctl configtest", needs_sudo=True
+ )
+ test_results.append(
+ {
+ "test": "apache2ctl configtest",
+ "passed": success,
+ "output": stdout if success else stderr,
+ }
+ )
+ if success:
+ console.print("[green] ✓ Apache configuration is valid[/green]")
+ else:
+ console.print(f"[red] ✗ Apache config test failed: {stderr[:100]}[/red]")
+ all_passed = False
+
+ # Service status tests
+ for service in services_to_check:
+ console.print(f"[dim] Checking service {service}...[/dim]")
+ success, stdout, stderr = self._execute_command(
+ f"systemctl is-active {service}", needs_sudo=False
+ )
+ is_active = stdout.strip() == "active"
+ test_results.append(
+ {
+ "test": f"systemctl is-active {service}",
+ "passed": is_active,
+ "output": stdout,
+ }
+ )
+ if is_active:
+ console.print(f"[green] ✓ Service {service} is running[/green]")
+ else:
+ console.print(f"[yellow] ⚠ Service {service} status: {stdout.strip()}[/yellow]")
+
+ # File existence tests
+ for file_path in list(files_to_check)[:5]:
+ if os.path.exists(file_path):
+ success, _, _ = self._execute_command(f"test -r {file_path}", needs_sudo=True)
+ test_results.append(
+ {
+ "test": f"file exists: {file_path}",
+ "passed": True,
+ "output": "File exists and is readable",
+ }
+ )
+ else:
+ test_results.append(
+ {
+ "test": f"file exists: {file_path}",
+ "passed": False,
+ "output": "File does not exist",
+ }
+ )
+ console.print(f"[yellow] ⚠ File not found: {file_path}[/yellow]")
+
+ # Connectivity tests
+ query_lower = user_query.lower()
+ if any(x in query_lower for x in ["proxy", "forward", "port", "listen"]):
+ port_match = re.search(r"port\s*(\d+)|:(\d+)", user_query)
+ if port_match:
+ port = port_match.group(1) or port_match.group(2)
+ console.print(f"[dim] Testing connectivity on port {port}...[/dim]")
+ success, stdout, stderr = self._execute_command(
+ f"curl -s -o /dev/null -w '%{{http_code}}' http://localhost:{port}/ 2>/dev/null || echo 'failed'",
+ needs_sudo=False,
+ )
+ if stdout.strip() not in ["failed", "000", ""]:
+ console.print(
+ f"[green] ✓ Port {port} responding (HTTP {stdout.strip()})[/green]"
+ )
+ test_results.append(
+ {
+ "test": f"curl localhost:{port}",
+ "passed": True,
+ "output": f"HTTP {stdout.strip()}",
+ }
+ )
+ else:
+ console.print(
+ f"[yellow] ⚠ Port {port} not responding (may be expected)[/yellow]"
+ )
+
+ # Summary
+ passed = sum(1 for t in test_results if t["passed"])
+ total = len(test_results)
+
+ console.print()
+ if all_passed:
+ console.print(f"[bold green]✓ All tests passed ({passed}/{total})[/bold green]")
+ else:
+ console.print(
+ f"[bold yellow]⚠ Some tests failed ({passed}/{total} passed)[/bold yellow]"
+ )
+
+ return all_passed, test_results
+
+
+class FileUsefulnessAnalyzer:
+ """Analyzes file content usefulness for modifications."""
+
+ def _execute_command(
+ self, cmd: str, needs_sudo: bool = False, timeout: int = 120
+ ) -> tuple[bool, str, str]:
+ """Execute a single command."""
+ try:
+ # The list form below already runs the command via sudo; strip any leading
+ # "sudo " from the command string so sudo is not invoked twice.
+ if needs_sudo and cmd.strip().startswith("sudo "):
+ cmd = cmd.strip()[len("sudo ") :]
+
+ result = subprocess.run(
+ ["sudo", "bash", "-c", cmd] if needs_sudo else cmd,
+ shell=not needs_sudo,
+ capture_output=True,
+ text=True,
+ timeout=timeout,
+ )
+ return result.returncode == 0, result.stdout.strip(), result.stderr.strip()
+ except subprocess.TimeoutExpired:
+ return False, "", f"Command timed out after {timeout} seconds"
+ except Exception as e:
+ return False, "", str(e)
+
+ def check_file_exists_and_usefulness(
+ self,
+ cmd: str,
+ purpose: str,
+ user_query: str,
+ ) -> dict[str, Any]:
+ """Check if files the command creates already exist and analyze their usefulness."""
+ result = {
+ "files_checked": [],
+ "existing_files": [],
+ "useful_content": {},
+ "recommendations": [],
+ "modified_command": cmd,
+ }
+
+ file_creation_patterns = [
+ (r"(?:echo|printf)\s+.*?>\s*([^\s;|&]+)", "write"),
+ (r"(?:echo|printf)\s+.*?>>\s*([^\s;|&]+)", "append"),
+ (r"tee\s+(?:-a\s+)?([^\s;|&]+)", "write"),
+ (r"cp\s+[^\s]+\s+([^\s;|&]+)", "copy"),
+ (r"touch\s+([^\s;|&]+)", "create"),
+ (r"cat\s+.*?>\s*([^\s;|&]+)", "write"),
+ (r"sed\s+-i[^\s]*\s+.*?\s+([^\s;|&]+)$", "modify"),
+ (r"mv\s+[^\s]+\s+([^\s;|&]+)", "move"),
+ ]
+
+ target_files = []
+ operation_type = None
+
+ for pattern, op_type in file_creation_patterns:
+ matches = re.findall(pattern, cmd)
+ for match in matches:
+ if match.startswith("/") or match.startswith("~"):
+ target_files.append(match)
+ operation_type = op_type
+
+ result["files_checked"] = target_files
+
+ for file_path in target_files:
+ if file_path.startswith("~"):
+ file_path = os.path.expanduser(file_path)
+
+ if os.path.exists(file_path):
+ result["existing_files"].append(file_path)
+ console.print(f"[yellow]📁 File exists: {file_path}[/yellow]")
+
+ success, content, _ = self._execute_command(
+ f"cat '{file_path}' 2>/dev/null", needs_sudo=True
+ )
+
+ if success and content:
+ useful_parts = self.analyze_file_usefulness(content, purpose, user_query)
+
+ if useful_parts["is_useful"]:
+ result["useful_content"][file_path] = useful_parts
+ console.print(
+ f"[cyan] ✓ Contains useful content: {useful_parts['summary']}[/cyan]"
+ )
+
+ if useful_parts["action"] == "merge":
+ result["recommendations"].append(
+ {
+ "file": file_path,
+ "action": "merge",
+ "reason": useful_parts["reason"],
+ "keep_sections": useful_parts.get("keep_sections", []),
+ }
+ )
+ elif useful_parts["action"] == "modify":
+ result["recommendations"].append(
+ {
+ "file": file_path,
+ "action": "modify",
+ "reason": useful_parts["reason"],
+ }
+ )
+ else:
+ result["recommendations"].append(
+ {
+ "file": file_path,
+ "action": "backup_and_replace",
+ "reason": "Existing content not relevant",
+ }
+ )
+ elif operation_type in ["write", "copy", "create"]:
+ parent_dir = os.path.dirname(file_path)
+ if parent_dir and not os.path.exists(parent_dir):
+ console.print(
+ f"[yellow]📁 Parent directory doesn't exist: {parent_dir}[/yellow]"
+ )
+ result["recommendations"].append(
+ {
+ "file": file_path,
+ "action": "create_parent",
+ "reason": f"Need to create {parent_dir} first",
+ }
+ )
+
+ return result
+
+ def analyze_file_usefulness(
+ self,
+ content: str,
+ purpose: str,
+ user_query: str,
+ ) -> dict[str, Any]:
+ """Analyze if file content is useful for the current purpose."""
+ result = {
+ "is_useful": False,
+ "summary": "",
+ "action": "replace",
+ "reason": "",
+ "keep_sections": [],
+ }
+
+ content_lower = content.lower()
+ purpose_lower = purpose.lower()
+ query_lower = user_query.lower()
+
+ # Nginx configuration
+ if any(
+ x in content_lower for x in ["server {", "location", "nginx", "proxy_pass", "listen"]
+ ):
+ result["is_useful"] = True
+
+ has_server_block = "server {" in content_lower or "server{" in content_lower
+ has_location = "location" in content_lower
+ has_proxy = "proxy_pass" in content_lower
+ has_ssl = "ssl" in content_lower or "443" in content
+
+ summary_parts = []
+ if has_server_block:
+ summary_parts.append("server block")
+ if has_location:
+ summary_parts.append("location rules")
+ if has_proxy:
+ summary_parts.append("proxy settings")
+ if has_ssl:
+ summary_parts.append("SSL config")
+
+ result["summary"] = "Has " + ", ".join(summary_parts)
+
+ if "proxy" in query_lower or "forward" in query_lower:
+ if has_proxy:
+ existing_proxy = re.search(r"proxy_pass\s+([^;]+)", content)
+ if existing_proxy:
+ result["action"] = "modify"
+ result["reason"] = f"Existing proxy to {existing_proxy.group(1).strip()}"
+ else:
+ result["action"] = "merge"
+ result["reason"] = "Add proxy to existing server block"
+ result["keep_sections"] = ["server", "ssl", "location"]
+ elif "ssl" in query_lower or "https" in query_lower:
+ if has_ssl:
+ result["action"] = "modify"
+ result["reason"] = "SSL already configured, modify as needed"
+ else:
+ result["action"] = "merge"
+ result["reason"] = "Add SSL to existing config"
+ else:
+ result["action"] = "merge"
+ result["reason"] = "Preserve existing configuration"
+
+ # Apache configuration
+ elif any(x in content_lower for x in ["<virtualhost", "documentroot", "servername"]):
+ result["is_useful"] = True
+ result["summary"] = "Has Apache virtual host configuration"
+ result["action"] = "merge"
+ result["reason"] = "Preserve existing Apache configuration"
+
+ # Generic relevance: keyword overlap between the query and the existing content
+ elif (overlap := set(query_lower.split()) & set(content_lower.split())) and len(overlap) > 2:
+ result["is_useful"] = True
+ result["summary"] = f"Related content ({len(overlap)} keyword matches)"
+ result["action"] = "backup_and_replace"
+ result["reason"] = "Content partially relevant, backing up"
+ return result
+
+ def apply_file_recommendations(
+ self,
+ recommendations: list[dict[str, Any]],
+ ) -> list[str]:
+ """Apply recommendations for existing files."""
+ commands_executed = []
+
+ for rec in recommendations:
+ file_path = rec["file"]
+ action = rec["action"]
+
+ if action == "backup_and_replace":
+ backup_path = f"{file_path}.cortex.bak.{int(time.time())}"
+ backup_cmd = f"sudo cp '{file_path}' '{backup_path}'"
+ success, _, _ = self._execute_command(backup_cmd, needs_sudo=True)
+ if success:
+ console.print(f"[dim] ✓ Backed up to {backup_path}[/dim]")
+ commands_executed.append(backup_cmd)
+
+ elif action == "create_parent":
+ parent = os.path.dirname(file_path)
+ mkdir_cmd = f"sudo mkdir -p '{parent}'"
+ success, _, _ = self._execute_command(mkdir_cmd, needs_sudo=True)
+ if success:
+ console.print(f"[dim] ✓ Created directory {parent}[/dim]")
+ commands_executed.append(mkdir_cmd)
+
+ return commands_executed
diff --git a/cortex/semantic_cache.py b/cortex/semantic_cache.py
index 4dd8d75d..1d01b370 100644
--- a/cortex/semantic_cache.py
+++ b/cortex/semantic_cache.py
@@ -80,10 +80,10 @@ def _ensure_db_directory(self) -> None:
db_dir = Path(self.db_path).parent
try:
db_dir.mkdir(parents=True, exist_ok=True)
- # Also check if we can actually write to this directory
+ # Also check if directory is writable
if not os.access(db_dir, os.W_OK):
- raise PermissionError(f"No write permission to {db_dir}")
- except PermissionError:
+ raise PermissionError(f"Directory {db_dir} is not writable")
+ except (PermissionError, OSError):
user_dir = Path.home() / ".cortex"
user_dir.mkdir(parents=True, exist_ok=True)
self.db_path = str(user_dir / "cache.db")
diff --git a/cortex/system_info_generator.py b/cortex/system_info_generator.py
new file mode 100644
index 00000000..d2dd4b75
--- /dev/null
+++ b/cortex/system_info_generator.py
@@ -0,0 +1,879 @@
+"""
+System Information Command Generator for Cortex.
+
+Generates read-only commands using LLM to retrieve system and application information.
+All commands are validated against the CommandValidator to ensure they only read the system.
+
+Usage:
+ generator = SystemInfoGenerator(api_key="...", provider="claude")
+
+ # Simple info queries
+ result = generator.get_info("What version of Python is installed?")
+
+ # Application-specific queries
+ result = generator.get_app_info("nginx", "What's the current nginx configuration?")
+
+ # Structured info retrieval
+ info = generator.get_structured_info("hardware", ["cpu", "memory", "disk"])
+"""
+
+import json
+import os
+import re
+import subprocess
+from dataclasses import dataclass, field
+from enum import Enum
+from typing import Any
+
+from rich.console import Console
+from rich.panel import Panel
+from rich.table import Table
+
+from cortex.ask import CommandValidator
+
+console = Console()
+
+
+class InfoCategory(str, Enum):
+ """Categories of system information."""
+
+ HARDWARE = "hardware"
+ SOFTWARE = "software"
+ NETWORK = "network"
+ SECURITY = "security"
+ SERVICES = "services"
+ PACKAGES = "packages"
+ PROCESSES = "processes"
+ STORAGE = "storage"
+ PERFORMANCE = "performance"
+ CONFIGURATION = "configuration"
+ LOGS = "logs"
+ USERS = "users"
+ APPLICATION = "application"
+ CUSTOM = "custom"
+
+
+@dataclass
+class InfoCommand:
+ """A single read-only command for gathering information."""
+
+ command: str
+ purpose: str
+ category: InfoCategory = InfoCategory.CUSTOM
+ timeout: int = 30
+
+
+@dataclass
+class InfoResult:
+ """Result of executing an info command."""
+
+ command: str
+ success: bool
+ output: str
+ error: str = ""
+ execution_time: float = 0.0
+
+
+@dataclass
+class SystemInfoResult:
+ """Complete result of a system info query."""
+
+ query: str
+ answer: str
+ commands_executed: list[InfoResult] = field(default_factory=list)
+ raw_data: dict[str, Any] = field(default_factory=dict)
+ category: InfoCategory = InfoCategory.CUSTOM
+
+
+# Common info command templates for quick lookups
+# Note: Commands are simplified to avoid || patterns which are blocked by CommandValidator
+COMMON_INFO_COMMANDS: dict[str, list[InfoCommand]] = {
+ # Hardware Information
+ "cpu": [
+ InfoCommand("lscpu", "Get CPU architecture and details", InfoCategory.HARDWARE),
+ InfoCommand("head -30 /proc/cpuinfo", "Get CPU model and cores", InfoCategory.HARDWARE),
+ InfoCommand("nproc", "Get number of processing units", InfoCategory.HARDWARE),
+ ],
+ "memory": [
+ InfoCommand("free -h", "Get memory usage in human-readable format", InfoCategory.HARDWARE),
+ InfoCommand(
+ "head -20 /proc/meminfo", "Get detailed memory information", InfoCategory.HARDWARE
+ ),
+ ],
+ "disk": [
+ InfoCommand("df -h", "Get disk space usage", InfoCategory.STORAGE),
+ InfoCommand("lsblk", "List block devices", InfoCategory.STORAGE),
+ ],
+ "gpu": [
+ InfoCommand(
+ "nvidia-smi --query-gpu=name,memory.total,driver_version --format=csv,noheader",
+ "Get NVIDIA GPU info",
+ InfoCategory.HARDWARE,
+ ),
+ InfoCommand("lspci", "List PCI devices including VGA", InfoCategory.HARDWARE),
+ ],
+ # OS Information
+ "os": [
+ InfoCommand("cat /etc/os-release", "Get OS release information", InfoCategory.SOFTWARE),
+ InfoCommand("uname -a", "Get kernel and system info", InfoCategory.SOFTWARE),
+ InfoCommand("lsb_release -a", "Get LSB release info", InfoCategory.SOFTWARE),
+ ],
+ "kernel": [
+ InfoCommand("uname -r", "Get kernel version", InfoCategory.SOFTWARE),
+ InfoCommand("cat /proc/version", "Get detailed kernel version", InfoCategory.SOFTWARE),
+ ],
+ # Network Information
+ "network": [
+ InfoCommand("ip addr show", "List network interfaces", InfoCategory.NETWORK),
+ InfoCommand("ip route show", "Show routing table", InfoCategory.NETWORK),
+ InfoCommand("ss -tuln", "List listening ports", InfoCategory.NETWORK),
+ ],
+ "dns": [
+ InfoCommand("cat /etc/resolv.conf", "Get DNS configuration", InfoCategory.NETWORK),
+ InfoCommand("host google.com", "Test DNS resolution", InfoCategory.NETWORK),
+ ],
+ # Services
+ "services": [
+ InfoCommand(
+ "systemctl list-units --type=service --state=running --no-pager",
+ "List running services",
+ InfoCategory.SERVICES,
+ ),
+ InfoCommand(
+ "systemctl list-units --type=service --state=failed --no-pager",
+ "List failed services",
+ InfoCategory.SERVICES,
+ ),
+ ],
+ # Security
+ "security": [
+ InfoCommand("ufw status", "Check firewall status", InfoCategory.SECURITY),
+ InfoCommand("aa-status", "Check AppArmor status", InfoCategory.SECURITY),
+ InfoCommand("wc -l /etc/passwd", "Count system users", InfoCategory.SECURITY),
+ ],
+ # Processes
+ "processes": [
+ InfoCommand(
+ "ps aux --sort=-%mem", "Top memory-consuming processes", InfoCategory.PROCESSES
+ ),
+ InfoCommand("ps aux --sort=-%cpu", "Top CPU-consuming processes", InfoCategory.PROCESSES),
+ ],
+ # Environment
+ "environment": [
+ InfoCommand("env", "List environment variables", InfoCategory.CONFIGURATION),
+ InfoCommand("echo $PATH", "Show PATH", InfoCategory.CONFIGURATION),
+ InfoCommand("echo $SHELL", "Show current shell", InfoCategory.CONFIGURATION),
+ ],
+}
+
+# Application-specific info templates
+# Note: Commands are simplified to avoid || patterns which are blocked by CommandValidator
+APP_INFO_TEMPLATES: dict[str, dict[str, list[InfoCommand]]] = {
+ "nginx": {
+ "status": [
+ InfoCommand(
+ "systemctl status nginx --no-pager",
+ "Check nginx service status",
+ InfoCategory.SERVICES,
+ ),
+ InfoCommand("nginx -v", "Get nginx version", InfoCategory.SOFTWARE),
+ ],
+ "config": [
+ InfoCommand(
+ "cat /etc/nginx/nginx.conf", "Get nginx configuration", InfoCategory.CONFIGURATION
+ ),
+ InfoCommand(
+ "ls -la /etc/nginx/sites-enabled/", "List enabled sites", InfoCategory.CONFIGURATION
+ ),
+ ],
+ "logs": [
+ InfoCommand(
+ "tail -50 /var/log/nginx/access.log", "Recent access logs", InfoCategory.LOGS
+ ),
+ InfoCommand(
+ "tail -50 /var/log/nginx/error.log", "Recent error logs", InfoCategory.LOGS
+ ),
+ ],
+ },
+ "docker": {
+ "status": [
+ InfoCommand("docker --version", "Get Docker version", InfoCategory.SOFTWARE),
+ InfoCommand("docker info", "Get Docker info", InfoCategory.SOFTWARE),
+ ],
+ "containers": [
+ InfoCommand("docker ps -a", "List containers", InfoCategory.APPLICATION),
+ InfoCommand("docker images", "List images", InfoCategory.APPLICATION),
+ ],
+ "resources": [
+ InfoCommand(
+ "docker stats --no-stream", "Container resource usage", InfoCategory.PERFORMANCE
+ ),
+ ],
+ },
+ "postgresql": {
+ "status": [
+ InfoCommand(
+ "systemctl status postgresql --no-pager",
+ "Check PostgreSQL service",
+ InfoCategory.SERVICES,
+ ),
+ InfoCommand("psql --version", "Get PostgreSQL version", InfoCategory.SOFTWARE),
+ ],
+ "config": [
+ InfoCommand(
+ "head -50 /etc/postgresql/14/main/postgresql.conf",
+ "PostgreSQL config",
+ InfoCategory.CONFIGURATION,
+ ),
+ ],
+ },
+ "mysql": {
+ "status": [
+ InfoCommand(
+ "systemctl status mysql --no-pager", "Check MySQL status", InfoCategory.SERVICES
+ ),
+ InfoCommand("mysql --version", "Get MySQL version", InfoCategory.SOFTWARE),
+ ],
+ },
+ "redis": {
+ "status": [
+ InfoCommand(
+ "systemctl status redis-server --no-pager",
+ "Check Redis status",
+ InfoCategory.SERVICES,
+ ),
+ InfoCommand("redis-cli --version", "Get Redis version", InfoCategory.SOFTWARE),
+ ],
+ "info": [
+ InfoCommand("redis-cli info", "Redis server info", InfoCategory.APPLICATION),
+ ],
+ },
+ "python": {
+ "version": [
+ InfoCommand("python3 --version", "Get Python version", InfoCategory.SOFTWARE),
+ InfoCommand("which python3", "Find Python executable", InfoCategory.SOFTWARE),
+ ],
+ "packages": [
+ InfoCommand(
+ "pip3 list --format=freeze", "List installed packages", InfoCategory.PACKAGES
+ ),
+ ],
+ "venv": [
+ InfoCommand(
+ "echo $VIRTUAL_ENV", "Check active virtual environment", InfoCategory.CONFIGURATION
+ ),
+ ],
+ },
+ "nodejs": {
+ "version": [
+ InfoCommand("node --version", "Get Node.js version", InfoCategory.SOFTWARE),
+ InfoCommand("npm --version", "Get npm version", InfoCategory.SOFTWARE),
+ ],
+ "packages": [
+ InfoCommand("npm list -g --depth=0", "List global npm packages", InfoCategory.PACKAGES),
+ ],
+ },
+ "git": {
+ "version": [
+ InfoCommand("git --version", "Get Git version", InfoCategory.SOFTWARE),
+ ],
+ "config": [
+ InfoCommand(
+ "git config --global --list", "Git global config", InfoCategory.CONFIGURATION
+ ),
+ ],
+ },
+ "ssh": {
+ "status": [
+ InfoCommand(
+ "systemctl status ssh --no-pager", "Check SSH service", InfoCategory.SERVICES
+ ),
+ ],
+ "config": [
+ InfoCommand(
+ "head -50 /etc/ssh/sshd_config", "SSH server config", InfoCategory.CONFIGURATION
+ ),
+ ],
+ },
+ "systemd": {
+ "status": [
+ InfoCommand("systemctl --version", "Get systemd version", InfoCategory.SOFTWARE),
+ InfoCommand(
+ "systemctl list-units --state=failed --no-pager",
+ "Failed units",
+ InfoCategory.SERVICES,
+ ),
+ ],
+ "timers": [
+ InfoCommand(
+ "systemctl list-timers --no-pager", "List active timers", InfoCategory.SERVICES
+ ),
+ ],
+ },
+}
+
+
+class SystemInfoGenerator:
+ """
+ Generates read-only commands to retrieve system and application information.
+
+ Uses LLM to generate appropriate commands based on natural language queries,
+ while enforcing read-only access through CommandValidator.
+ """
+
+ MAX_ITERATIONS = 5
+ MAX_OUTPUT_CHARS = 4000
+
+ def __init__(
+ self,
+ api_key: str | None = None,
+ provider: str = "claude",
+ model: str | None = None,
+ debug: bool = False,
+ ):
+ """
+ Initialize the system info generator.
+
+ Args:
+ api_key: API key for LLM provider (defaults to env var)
+ provider: LLM provider ("claude", "openai", "ollama")
+ model: Optional model override
+ debug: Enable debug output
+ """
+ self.api_key = (
+ api_key or os.environ.get("ANTHROPIC_API_KEY") or os.environ.get("OPENAI_API_KEY")
+ )
+ self.provider = provider.lower()
+ self.model = model or self._default_model()
+ self.debug = debug
+
+ self._initialize_client()
+
+ def _default_model(self) -> str:
+ if self.provider == "openai":
+ return "gpt-4o"
+ elif self.provider == "claude":
+ return "claude-sonnet-4-20250514"
+ elif self.provider == "ollama":
+ return "llama3.2"
+ return "gpt-4o"
+
+ def _initialize_client(self):
+ """Initialize the LLM client."""
+ if self.provider == "openai":
+ try:
+ from openai import OpenAI
+
+ self.client = OpenAI(api_key=self.api_key)
+ except ImportError:
+ raise ImportError("OpenAI package not installed. Run: pip install openai")
+ elif self.provider == "claude":
+ try:
+ from anthropic import Anthropic
+
+ self.client = Anthropic(api_key=self.api_key)
+ except ImportError:
+ raise ImportError("Anthropic package not installed. Run: pip install anthropic")
+ elif self.provider == "ollama":
+ self.ollama_url = os.environ.get("OLLAMA_HOST", "http://localhost:11434")
+ self.client = None
+ else:
+ raise ValueError(f"Unsupported provider: {self.provider}")
+
+ def _get_system_prompt(self, context: str = "") -> str:
+ """Get the system prompt for info command generation."""
+ app_list = ", ".join(sorted(APP_INFO_TEMPLATES.keys()))
+ category_list = ", ".join([c.value for c in InfoCategory])
+
+ prompt = f"""You are a Linux system information assistant that generates READ-ONLY shell commands.
+
+Your task is to generate shell commands that gather system information to answer the user's query.
+You can ONLY generate commands that READ information - no modifications allowed.
+
+IMPORTANT RULES:
+- Generate ONLY read-only commands (cat, ls, grep, find, ps, etc.)
+- NEVER generate commands that modify the system (rm, mv, cp, apt install, etc.)
+- NEVER use sudo (commands must work as regular user where possible)
+- NEVER use output redirection (>, >>)
+- NEVER use dangerous command chaining (;, &&, ||) except for fallback patterns
+- Commands should handle missing files/tools gracefully using || echo fallbacks
+
+ALLOWED COMMAND PATTERNS:
+- Reading files: cat, head, tail, less (without writing)
+- Listing: ls, find, locate, which, whereis, type
+- System info: uname, hostname, uptime, whoami, id, lscpu, lsmem, lsblk
+- Process info: ps, top, pgrep, pidof, pstree, free, vmstat
+- Package queries: dpkg-query, dpkg -l, apt-cache, pip list/show/freeze
+- Network info: ip addr, ip route, ss, netstat (read operations)
+- Service status: systemctl status (NOT start/stop/restart)
+- Text processing: grep, awk, sed (for filtering, NOT modifying files)
+
+BLOCKED PATTERNS (NEVER USE):
+- sudo, su
+- apt install/remove, pip install/uninstall
+- rm, mv, cp, mkdir, touch, chmod, chown
+- Output redirection: > or >>
+- systemctl start/stop/restart/enable/disable
+
+RESPONSE FORMAT:
+You must respond with a JSON object in one of these formats:
+
+For generating a command to gather info:
+{{
+ "response_type": "command",
+ "command": "",
+ "category": "<{category_list}>",
+ "reasoning": ""
+}}
+
+For providing the final answer:
+{{
+ "response_type": "answer",
+ "answer": "",
+ "reasoning": ""
+}}
+
+KNOWN APPLICATIONS with pre-defined info commands: {app_list}
+
+{context}"""
+ return prompt
+
+ def _truncate_output(self, output: str) -> str:
+ """Truncate output to avoid context overflow."""
+ if len(output) <= self.MAX_OUTPUT_CHARS:
+ return output
+ half = self.MAX_OUTPUT_CHARS // 2
+ return f"{output[:half]}\n\n... [truncated {len(output) - self.MAX_OUTPUT_CHARS} chars] ...\n\n{output[-half:]}"
+
+ def _execute_command(self, command: str, timeout: int = 30) -> InfoResult:
+ """Execute a validated read-only command."""
+ import time
+
+ start_time = time.time()
+
+ # Validate command first
+ is_valid, error = CommandValidator.validate_command(command)
+ if not is_valid:
+ return InfoResult(
+ command=command,
+ success=False,
+ output="",
+ error=f"Command blocked: {error}",
+ execution_time=time.time() - start_time,
+ )
+
+ try:
+ result = subprocess.run(
+ command,
+ shell=True,
+ capture_output=True,
+ text=True,
+ timeout=timeout,
+ )
+ return InfoResult(
+ command=command,
+ success=result.returncode == 0,
+ output=result.stdout.strip(),
+ error=result.stderr.strip() if result.returncode != 0 else "",
+ execution_time=time.time() - start_time,
+ )
+ except subprocess.TimeoutExpired:
+ return InfoResult(
+ command=command,
+ success=False,
+ output="",
+ error=f"Command timed out after {timeout}s",
+ execution_time=timeout,
+ )
+ except Exception as e:
+ return InfoResult(
+ command=command,
+ success=False,
+ output="",
+ error=str(e),
+ execution_time=time.time() - start_time,
+ )
+
+ def _call_llm(self, system_prompt: str, user_prompt: str) -> dict[str, Any]:
+ """Call the LLM and parse the response."""
+ try:
+ if self.provider == "claude":
+ response = self.client.messages.create(
+ model=self.model,
+ max_tokens=2048,
+ system=system_prompt,
+ messages=[{"role": "user", "content": user_prompt}],
+ )
+ content = response.content[0].text
+ elif self.provider == "openai":
+ response = self.client.chat.completions.create(
+ model=self.model,
+ max_tokens=2048,
+ messages=[
+ {"role": "system", "content": system_prompt},
+ {"role": "user", "content": user_prompt},
+ ],
+ )
+ content = response.choices[0].message.content
+ elif self.provider == "ollama":
+ import httpx
+
+ response = httpx.post(
+ f"{self.ollama_url}/api/chat",
+ json={
+ "model": self.model,
+ "messages": [
+ {"role": "system", "content": system_prompt},
+ {"role": "user", "content": user_prompt},
+ ],
+ "stream": False,
+ },
+ timeout=60.0,
+ )
+ response.raise_for_status()
+ content = response.json()["message"]["content"]
+ else:
+ raise ValueError(f"Unsupported provider: {self.provider}")
+
+ # Parse JSON from response
+ json_match = re.search(r"\{[\s\S]*\}", content)
+ if json_match:
+ return json.loads(json_match.group())
+ raise ValueError("No JSON found in response")
+
+ except json.JSONDecodeError as e:
+ if self.debug:
+ console.print(f"[red]JSON parse error: {e}[/red]")
+ return {
+ "response_type": "answer",
+ "answer": f"Error parsing LLM response: {e}",
+ "reasoning": "",
+ }
+ except Exception as e:
+ if self.debug:
+ console.print(f"[red]LLM error: {e}[/red]")
+ return {"response_type": "answer", "answer": f"Error calling LLM: {e}", "reasoning": ""}
+
+ def get_info(self, query: str, context: str = "") -> SystemInfoResult:
+ """
+ Get system information based on a natural language query.
+
+ Uses an agentic loop to:
+ 1. Generate commands to gather information
+ 2. Execute commands (read-only only)
+ 3. Analyze results
+ 4. Either generate more commands or provide final answer
+
+ Args:
+ query: Natural language question about the system
+ context: Optional additional context for the LLM
+
+ Returns:
+ SystemInfoResult with answer and command execution details
+ """
+ system_prompt = self._get_system_prompt(context)
+ commands_executed: list[InfoResult] = []
+ history: list[dict[str, str]] = []
+
+ user_prompt = f"Query: {query}"
+
+ for iteration in range(self.MAX_ITERATIONS):
+ if self.debug:
+ console.print(f"[dim]Iteration {iteration + 1}/{self.MAX_ITERATIONS}[/dim]")
+
+ # Build prompt with history
+ full_prompt = user_prompt
+ if history:
+ full_prompt += "\n\nPrevious commands and results:\n"
+ for i, entry in enumerate(history, 1):
+ full_prompt += f"\n--- Command {i} ---\n"
+ full_prompt += f"Command: {entry['command']}\n"
+ if entry["success"]:
+ full_prompt += f"Output:\n{self._truncate_output(entry['output'])}\n"
+ else:
+ full_prompt += f"Error: {entry['error']}\n"
+ full_prompt += "\nBased on these results, either run another command or provide the final answer.\n"
+
+ # Call LLM
+ response = self._call_llm(system_prompt, full_prompt)
+
+ if response.get("response_type") == "answer":
+ # Final answer
+ return SystemInfoResult(
+ query=query,
+ answer=response.get("answer", "No answer provided"),
+ commands_executed=commands_executed,
+ raw_data={h["command"]: h["output"] for h in history if h.get("success")},
+ )
+
+ elif response.get("response_type") == "command":
+ command = response.get("command", "")
+ if not command:
+ continue
+
+ if self.debug:
+ console.print(f"[cyan]Executing:[/cyan] {command}")
+
+ result = self._execute_command(command)
+ commands_executed.append(result)
+
+ history.append(
+ {
+ "command": command,
+ "success": result.success,
+ "output": result.output,
+ "error": result.error,
+ }
+ )
+
+ if self.debug:
+ if result.success:
+ console.print("[green]✓ Success[/green]")
+ else:
+ console.print(f"[red]✗ Failed: {result.error}[/red]")
+
+ # Max iterations reached
+ return SystemInfoResult(
+ query=query,
+ answer="Could not complete the query within iteration limit.",
+ commands_executed=commands_executed,
+ raw_data={h["command"]: h["output"] for h in history if h.get("success")},
+ )
+
+ def get_app_info(
+ self,
+ app_name: str,
+ query: str | None = None,
+ aspects: list[str] | None = None,
+ ) -> SystemInfoResult:
+ """
+ Get information about a specific application.
+
+ Args:
+ app_name: Application name (nginx, docker, postgresql, etc.)
+ query: Optional natural language query about the app
+ aspects: Optional list of aspects to check (status, config, logs, etc.)
+
+ Returns:
+ SystemInfoResult with application information
+ """
+ app_lower = app_name.lower()
+ commands_executed: list[InfoResult] = []
+ raw_data: dict[str, Any] = {}
+
+ # Check if we have predefined commands for this app
+ if app_lower in APP_INFO_TEMPLATES:
+ templates = APP_INFO_TEMPLATES[app_lower]
+ aspects_to_check = aspects or list(templates.keys())
+
+ for aspect in aspects_to_check:
+ if aspect in templates:
+ for cmd_info in templates[aspect]:
+ result = self._execute_command(cmd_info.command, cmd_info.timeout)
+ commands_executed.append(result)
+ if result.success and result.output:
+ raw_data[f"{aspect}:{cmd_info.purpose}"] = result.output
+
+ # If there's a specific query, use LLM to analyze
+ if query:
+ context = f"""Application: {app_name}
+Already gathered data:
+{json.dumps(raw_data, indent=2)[:2000]}
+
+Now answer the specific question about this application."""
+
+ result = self.get_info(query, context)
+ result.commands_executed = commands_executed + result.commands_executed
+ result.raw_data.update(raw_data)
+ return result
+
+ # Generate summary answer from raw data
+ answer_parts = [f"**{app_name.title()} Information**\n"]
+ for key, value in raw_data.items():
+ aspect, desc = key.split(":", 1)
+ answer_parts.append(
+ f"\n**{aspect.title()}** ({desc}):\n```\n{value[:500]}{'...' if len(value) > 500 else ''}\n```"
+ )
+
+ return SystemInfoResult(
+ query=query or f"Get information about {app_name}",
+ answer="\n".join(answer_parts) if raw_data else f"No information found for {app_name}",
+ commands_executed=commands_executed,
+ raw_data=raw_data,
+ category=InfoCategory.APPLICATION,
+ )
+
+ def get_structured_info(
+ self,
+ category: str | InfoCategory,
+ aspects: list[str] | None = None,
+ ) -> SystemInfoResult:
+ """
+ Get structured system information for a category.
+
+ Args:
+ category: Info category (hardware, network, services, etc.)
+ aspects: Optional specific aspects (cpu, memory, disk for hardware, etc.)
+
+ Returns:
+ SystemInfoResult with structured information
+ """
+ if isinstance(category, str):
+ category = category.lower()
+ else:
+ category = category.value
+
+ commands_executed: list[InfoResult] = []
+ raw_data: dict[str, Any] = {}
+
+ # Map categories to common commands
+ category_mapping = {
+ "hardware": ["cpu", "memory", "disk", "gpu"],
+ "software": ["os", "kernel"],
+ "network": ["network", "dns"],
+ "services": ["services"],
+ "security": ["security"],
+ "processes": ["processes"],
+ "storage": ["disk"],
+ "performance": ["cpu", "memory", "processes"],
+ "configuration": ["environment"],
+ }
+
+ aspects_to_check = aspects or category_mapping.get(category, [])
+
+ for aspect in aspects_to_check:
+ if aspect in COMMON_INFO_COMMANDS:
+ for cmd_info in COMMON_INFO_COMMANDS[aspect]:
+ result = self._execute_command(cmd_info.command, cmd_info.timeout)
+ commands_executed.append(result)
+ if result.success and result.output:
+ raw_data[f"{aspect}:{cmd_info.purpose}"] = result.output
+
+ # Generate structured answer
+ answer_parts = [f"**{category.title()} Information**\n"]
+ for key, value in raw_data.items():
+ aspect, desc = key.split(":", 1)
+ answer_parts.append(
+ f"\n**{aspect.upper()}** ({desc}):\n```\n{value[:800]}{'...' if len(value) > 800 else ''}\n```"
+ )
+
+ return SystemInfoResult(
+ query=f"Get {category} information",
+ answer="\n".join(answer_parts) if raw_data else f"No {category} information found",
+ commands_executed=commands_executed,
+ raw_data=raw_data,
+ category=(
+ InfoCategory(category)
+ if category in [c.value for c in InfoCategory]
+ else InfoCategory.CUSTOM
+ ),
+ )
+
+ def quick_info(self, info_type: str) -> str:
+ """
+ Quick lookup for common system information.
+
+ Args:
+ info_type: Type of info (cpu, memory, disk, os, network, etc.)
+
+ Returns:
+ String with the requested information
+ """
+ info_lower = info_type.lower()
+
+ if info_lower in COMMON_INFO_COMMANDS:
+ outputs = []
+ for cmd_info in COMMON_INFO_COMMANDS[info_lower]:
+ result = self._execute_command(cmd_info.command)
+ if result.success and result.output:
+ outputs.append(result.output)
+ return "\n\n".join(outputs) if outputs else f"No {info_type} information available"
+
+ # Try as app info
+ if info_lower in APP_INFO_TEMPLATES:
+ result = self.get_app_info(info_lower, aspects=["status", "version"])
+ return result.answer
+
+ return (
+ f"Unknown info type: {info_type}. Available: {', '.join(COMMON_INFO_COMMANDS.keys())}"
+ )
+
+ def list_available_info(self) -> dict[str, list[str]]:
+ """List all available pre-defined info types and applications."""
+ return {
+ "system_info": list(COMMON_INFO_COMMANDS.keys()),
+ "applications": list(APP_INFO_TEMPLATES.keys()),
+ "categories": [c.value for c in InfoCategory],
+ }
+
+
+def get_system_info_generator(
+ provider: str = "claude",
+ debug: bool = False,
+) -> SystemInfoGenerator:
+ """
+ Factory function to create a SystemInfoGenerator with default configuration.
+
+ Args:
+ provider: LLM provider to use
+ debug: Enable debug output
+
+ Returns:
+ Configured SystemInfoGenerator instance
+ """
+ api_key = os.environ.get("ANTHROPIC_API_KEY") or os.environ.get("OPENAI_API_KEY")
+ if not api_key:
+ raise ValueError("No API key found. Set ANTHROPIC_API_KEY or OPENAI_API_KEY")
+
+ return SystemInfoGenerator(api_key=api_key, provider=provider, debug=debug)
+
+
+# CLI helper for quick testing
+if __name__ == "__main__":
+ import sys
+
+ if len(sys.argv) < 2:
+ print("Usage: python system_info_generator.py ")
+ print(" python system_info_generator.py --quick ")
+ print(" python system_info_generator.py --app [query]")
+ print(" python system_info_generator.py --list")
+ sys.exit(1)
+
+ try:
+ generator = get_system_info_generator(debug=True)
+
+ if sys.argv[1] == "--list":
+ available = generator.list_available_info()
+ console.print("\n[bold]Available Information Types:[/bold]")
+ console.print(f"System: {', '.join(available['system_info'])}")
+ console.print(f"Apps: {', '.join(available['applications'])}")
+ console.print(f"Categories: {', '.join(available['categories'])}")
+
+ elif sys.argv[1] == "--quick" and len(sys.argv) > 2:
+ info = generator.quick_info(sys.argv[2])
+ console.print(Panel(info, title=f"{sys.argv[2].title()} Info"))
+
+ elif sys.argv[1] == "--app" and len(sys.argv) > 2:
+ app_name = sys.argv[2]
+ query = " ".join(sys.argv[3:]) if len(sys.argv) > 3 else None
+ result = generator.get_app_info(app_name, query)
+ console.print(Panel(result.answer, title=f"{app_name.title()} Info"))
+
+ else:
+ query = " ".join(sys.argv[1:])
+ result = generator.get_info(query)
+ console.print(Panel(result.answer, title="System Info"))
+
+ if result.commands_executed:
+ table = Table(title="Commands Executed")
+ table.add_column("Command", style="cyan")
+ table.add_column("Status", style="green")
+ table.add_column("Time", style="dim")
+ for cmd in result.commands_executed:
+ status = "✓" if cmd.success else "✗"
+ table.add_row(cmd.command[:60], status, f"{cmd.execution_time:.2f}s")
+ console.print(table)
+
+ except ValueError as e:
+ console.print(f"[red]Error: {e}[/red]")
+ sys.exit(1)
diff --git a/cortex/test.py b/cortex/test.py
new file mode 100644
index 00000000..e69de29b
diff --git a/cortex/watch_service.py b/cortex/watch_service.py
new file mode 100644
index 00000000..899ec183
--- /dev/null
+++ b/cortex/watch_service.py
@@ -0,0 +1,719 @@
+#!/usr/bin/env python3
+"""
+Cortex Watch Service - Background terminal monitoring daemon.
+
+This service runs in the background and monitors all terminal activity,
+logging commands for Cortex to use during manual intervention.
+
+Features:
+- Runs as a systemd user service
+- Auto-starts on login
+- Auto-restarts on crash
+- Assigns unique IDs to each terminal
+- Excludes Cortex's own terminal from logging
+"""
+
+import datetime
+import fcntl
+import hashlib
+import json
+import os
+import signal
+import subprocess
+import sys
+import threading
+import time
+from pathlib import Path
+from typing import Any
+
+
+class CortexWatchDaemon:
+ """Background daemon that monitors terminal activity."""
+
+ def __init__(self):
+ self.running = False
+ self.cortex_dir = Path.home() / ".cortex"
+ self.watch_log = self.cortex_dir / "terminal_watch.log"
+ self.terminals_dir = self.cortex_dir / "terminals"
+ self.pid_file = self.cortex_dir / "watch_service.pid"
+ self.state_file = self.cortex_dir / "watch_state.json"
+
+ # Terminal tracking
+ self.terminals: dict[str, dict[str, Any]] = {}
+ self.terminal_counter = 0
+
+ # Track commands seen from watch_hook to avoid duplicates with bash_history
+ self._watch_hook_commands: set[str] = set()
+ self._recent_commands: list[str] = [] # Last 100 commands for dedup
+
+ # Ensure directories exist
+ self.cortex_dir.mkdir(parents=True, exist_ok=True)
+ self.terminals_dir.mkdir(parents=True, exist_ok=True)
+
+ # Setup signal handlers
+ signal.signal(signal.SIGTERM, self._handle_signal)
+ signal.signal(signal.SIGINT, self._handle_signal)
+ signal.signal(signal.SIGHUP, self._handle_reload)
+
+ def _handle_signal(self, signum, frame):
+ """Handle shutdown signals."""
+ self.log(f"Received signal {signum}, shutting down...")
+ self.running = False
+
+ def _handle_reload(self, signum, frame):
+ """Handle reload signal (SIGHUP)."""
+ self.log("Received SIGHUP, reloading configuration...")
+ self._load_state()
+
+ def log(self, message: str):
+ """Log a message to the service log."""
+ log_file = self.cortex_dir / "watch_service.log"
+ timestamp = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")
+ with open(log_file, "a") as f:
+ f.write(f"[{timestamp}] {message}\n")
+
+ def _load_state(self):
+ """Load saved state from file."""
+ if self.state_file.exists():
+ try:
+ with open(self.state_file) as f:
+ state = json.load(f)
+ self.terminal_counter = state.get("terminal_counter", 0)
+ self.terminals = state.get("terminals", {})
+ except Exception as e:
+ self.log(f"Error loading state: {e}")
+
+ def _save_state(self):
+ """Save current state to file."""
+ try:
+ state = {
+ "terminal_counter": self.terminal_counter,
+ "terminals": self.terminals,
+ "last_update": datetime.datetime.now().isoformat(),
+ }
+ with open(self.state_file, "w") as f:
+ json.dump(state, f, indent=2)
+ except Exception as e:
+ self.log(f"Error saving state: {e}")
+
+ def _get_terminal_id(self, pts: str) -> str:
+ """Generate or retrieve a unique terminal ID."""
+ if pts in self.terminals:
+ return self.terminals[pts]["id"]
+
+ self.terminal_counter += 1
+ terminal_id = f"term_{self.terminal_counter:04d}"
+
+ self.terminals[pts] = {
+ "id": terminal_id,
+ "pts": pts,
+ "created": datetime.datetime.now().isoformat(),
+ "is_cortex": False,
+ "command_count": 0,
+ }
+
+ self._save_state()
+ return terminal_id
+
+ def _is_cortex_terminal(self, pid: int) -> bool:
+ """Check if a process is a Cortex terminal."""
+ try:
+ # Check environment variables
+ environ_file = Path(f"/proc/{pid}/environ")
+ if environ_file.exists():
+ environ = environ_file.read_bytes()
+ if b"CORTEX_TERMINAL=1" in environ:
+ return True
+
+ # Check command line
+ cmdline_file = Path(f"/proc/{pid}/cmdline")
+ if cmdline_file.exists():
+ cmdline = cmdline_file.read_bytes().decode("utf-8", errors="ignore")
+ if "cortex" in cmdline.lower():
+ return True
+ except (PermissionError, FileNotFoundError, ProcessLookupError):
+ pass
+
+ return False
+
+ def _get_active_terminals(self) -> list[dict]:
+ """Get list of active terminal processes."""
+ terminals = []
+
+ try:
+ # Find all pts (pseudo-terminal) devices
+ pts_dir = Path("/dev/pts")
+ if pts_dir.exists():
+ for pts_file in pts_dir.iterdir():
+ if pts_file.name.isdigit():
+ pts_path = str(pts_file)
+
+ # Find process using this pts
+ result = subprocess.run(
+ ["fuser", pts_path], capture_output=True, text=True, timeout=2
+ )
+
+ if result.stdout.strip():
+ pids = result.stdout.strip().split()
+ for pid_str in pids:
+ try:
+ pid = int(pid_str)
+ is_cortex = self._is_cortex_terminal(pid)
+ terminal_id = self._get_terminal_id(pts_path)
+
+ # Update cortex flag
+ if pts_path in self.terminals:
+ self.terminals[pts_path]["is_cortex"] = is_cortex
+
+ terminals.append(
+ {
+ "pts": pts_path,
+ "pid": pid,
+ "id": terminal_id,
+ "is_cortex": is_cortex,
+ }
+ )
+ except ValueError:
+ continue
+
+ except Exception as e:
+ self.log(f"Error getting terminals: {e}")
+
+ return terminals
+
+ def _monitor_bash_history(self):
+ """Monitor bash history for new commands using inotify if available."""
+ history_files = [
+ Path.home() / ".bash_history",
+ Path.home() / ".zsh_history",
+ ]
+
+ positions: dict[str, int] = {}
+ last_commands: dict[str, str] = {} # Track last command per file to avoid duplicates
+
+ # Initialize positions to current end of file
+ for hist_file in history_files:
+ if hist_file.exists():
+ positions[str(hist_file)] = hist_file.stat().st_size
+ # Read last line to track for dedup
+ try:
+ content = hist_file.read_text()
+ lines = content.strip().split("\n")
+ if lines:
+ last_commands[str(hist_file)] = lines[-1].strip()
+ except Exception:
+ pass
+
+ # Try to use inotify for more efficient monitoring
+ try:
+ import ctypes
+ import select
+ import struct
+
+ # Check if inotify is available
+ libc = ctypes.CDLL("libc.so.6")
+ inotify_init = libc.inotify_init
+ inotify_add_watch = libc.inotify_add_watch
+
+ IN_MODIFY = 0x00000002
+ IN_CLOSE_WRITE = 0x00000008
+
+ fd = inotify_init()
+ if fd < 0:
+ raise OSError("Failed to initialize inotify")
+
+ watches = {}
+ for hist_file in history_files:
+ if hist_file.exists():
+ wd = inotify_add_watch(fd, str(hist_file).encode(), IN_MODIFY | IN_CLOSE_WRITE)
+ if wd >= 0:
+ watches[wd] = hist_file
+
+ self.log(f"Using inotify to monitor {len(watches)} history files")
+
+ while self.running:
+ # Wait for inotify event with timeout
+ r, _, _ = select.select([fd], [], [], 1.0)
+ if not r:
+ continue
+
+ data = os.read(fd, 4096)
+ # Process inotify events
+ for hist_file in history_files:
+ key = str(hist_file)
+ if not hist_file.exists():
+ continue
+
+ try:
+ current_size = hist_file.stat().st_size
+
+ if key not in positions:
+ positions[key] = current_size
+ continue
+
+ if current_size < positions[key]:
+ positions[key] = current_size
+ continue
+
+ if current_size > positions[key]:
+ with open(hist_file) as f:
+ f.seek(positions[key])
+ new_content = f.read()
+
+ for line in new_content.split("\n"):
+ line = line.strip()
+ # Skip empty, short, or duplicate commands
+ if line and len(line) > 1:
+ if last_commands.get(key) != line:
+ self._log_command(line, "history")
+ last_commands[key] = line
+
+ positions[key] = current_size
+ except Exception as e:
+ self.log(f"Error reading {hist_file}: {e}")
+
+ os.close(fd)
+ return
+
+ except Exception as e:
+ self.log(f"Inotify not available, using polling: {e}")
+
+ # Fallback to polling
+ while self.running:
+ for hist_file in history_files:
+ if not hist_file.exists():
+ continue
+
+ key = str(hist_file)
+ try:
+ current_size = hist_file.stat().st_size
+
+ if key not in positions:
+ positions[key] = current_size
+ continue
+
+ if current_size < positions[key]:
+ # File was truncated
+ positions[key] = current_size
+ continue
+
+ if current_size > positions[key]:
+ with open(hist_file) as f:
+ f.seek(positions[key])
+ new_content = f.read()
+
+ for line in new_content.split("\n"):
+ line = line.strip()
+ if line and len(line) > 1:
+ if last_commands.get(key) != line:
+ self._log_command(line, "history")
+ last_commands[key] = line
+
+ positions[key] = current_size
+
+ except Exception as e:
+ self.log(f"Error reading {hist_file}: {e}")
+
+ time.sleep(0.3)
+
+ def _monitor_watch_hook(self):
+ """Monitor the watch hook log file and sync to terminal_commands.json."""
+ position = 0
+
+ while self.running:
+ try:
+ if not self.watch_log.exists():
+ time.sleep(0.5)
+ continue
+
+ current_size = self.watch_log.stat().st_size
+
+ if current_size < position:
+ position = 0
+
+ if current_size > position:
+ with open(self.watch_log) as f:
+ f.seek(position)
+ new_content = f.read()
+
+ for line in new_content.split("\n"):
+ line = line.strip()
+ if not line or len(line) < 2:
+ continue
+
+ # Parse format: TTY|COMMAND (new format from updated hook)
+ # Skip lines that don't have the TTY| prefix or have "shared|"
+ if "|" not in line:
+ continue
+
+ parts = line.split("|", 1)
+ terminal_id = parts[0]
+
+ # Skip "shared" entries (those come from bash_history monitor)
+ if terminal_id == "shared":
+ continue
+
+ # Must have valid TTY format (pts_X, tty_X, etc.)
+ if not terminal_id or terminal_id == "unknown":
+ continue
+
+ command = parts[1] if len(parts) > 1 else ""
+ if not command:
+ continue
+
+ # Skip duplicates
+ if self._is_duplicate(command):
+ continue
+
+ # Mark this command as seen from watch_hook
+ self._watch_hook_commands.add(command)
+
+ # Log to terminal_commands.json only
+ self._log_to_json(command, "watch_hook", terminal_id)
+
+ position = current_size
+
+ except Exception as e:
+ self.log(f"Error monitoring watch hook: {e}")
+
+ time.sleep(0.2)
+
+ def _log_to_json(self, command: str, source: str, terminal_id: str):
+ """Log a command only to terminal_commands.json."""
+ try:
+ detailed_log = self.cortex_dir / "terminal_commands.json"
+ entry = {
+ "timestamp": datetime.datetime.now().isoformat(),
+ "command": command,
+ "source": source,
+ "terminal_id": terminal_id,
+ }
+
+ with open(detailed_log, "a") as f:
+ f.write(json.dumps(entry) + "\n")
+ except Exception as e:
+ self.log(f"Error logging to JSON: {e}")
+
+ def _is_duplicate(self, command: str) -> bool:
+ """Check if command was recently logged to avoid duplicates."""
+ if command in self._recent_commands:
+ return True
+
+ # Keep last 100 commands
+ self._recent_commands.append(command)
+ if len(self._recent_commands) > 100:
+ self._recent_commands.pop(0)
+
+ return False
+
+ def _log_command(self, command: str, source: str = "unknown", terminal_id: str | None = None):
+ """Log a command from bash_history (watch_hook uses _log_to_json directly)."""
+ # Skip cortex commands
+ if command.lower().startswith("cortex "):
+ return
+ if "watch_hook" in command:
+ return
+ if command.startswith("source ") and ".cortex" in command:
+ return
+
+ # Skip if this command was already logged by watch_hook
+ if command in self._watch_hook_commands:
+ self._watch_hook_commands.discard(command) # Clear it for next time
+ return
+
+ # Skip duplicates
+ if self._is_duplicate(command):
+ return
+
+ # For bash_history source, we can't know which terminal - use "shared"
+ if terminal_id is None:
+ terminal_id = "shared"
+
+ try:
+ # Write to watch_log with format TTY|COMMAND
+ with open(self.watch_log, "a") as f:
+ f.write(f"{terminal_id}|{command}\n")
+
+ # Log to JSON
+ self._log_to_json(command, source, terminal_id)
+
+ except Exception as e:
+ self.log(f"Error logging command: {e}")
+
+ def _cleanup_stale_terminals(self):
+ """Remove stale terminal entries."""
+ while self.running:
+ try:
+ active_pts = set()
+ pts_dir = Path("/dev/pts")
+ if pts_dir.exists():
+ for pts_file in pts_dir.iterdir():
+ if pts_file.name.isdigit():
+ active_pts.add(str(pts_file))
+
+ # Remove stale entries
+ stale = [pts for pts in self.terminals if pts not in active_pts]
+ for pts in stale:
+ del self.terminals[pts]
+
+ if stale:
+ self._save_state()
+
+ except Exception as e:
+ self.log(f"Error cleaning up terminals: {e}")
+
+ time.sleep(30) # Check every 30 seconds
+
+ def start(self):
+ """Start the watch daemon."""
+ # Check if already running
+ if self.pid_file.exists():
+ try:
+ pid = int(self.pid_file.read_text().strip())
+ os.kill(pid, 0) # Check if process exists
+ self.log(f"Daemon already running with PID {pid}")
+ return False
+ except (ProcessLookupError, ValueError):
+ # Stale PID file
+ self.pid_file.unlink()
+
+ # Write PID file
+ self.pid_file.write_text(str(os.getpid()))
+
+ self.running = True
+ self._load_state()
+
+ self.log("Cortex Watch Service starting...")
+
+ # Start monitor threads
+ threads = [
+ threading.Thread(target=self._monitor_bash_history, daemon=True),
+ threading.Thread(target=self._monitor_watch_hook, daemon=True),
+ threading.Thread(target=self._cleanup_stale_terminals, daemon=True),
+ ]
+
+ for t in threads:
+ t.start()
+
+ self.log(f"Cortex Watch Service started (PID: {os.getpid()})")
+
+ # Main loop - just keep alive and handle signals
+ try:
+ while self.running:
+ time.sleep(1)
+ finally:
+ self._shutdown()
+
+ return True
+
+ def _shutdown(self):
+ """Clean shutdown."""
+ self.log("Shutting down...")
+ self._save_state()
+
+ if self.pid_file.exists():
+ self.pid_file.unlink()
+
+ self.log("Cortex Watch Service stopped")
+
+ def stop(self):
+ """Stop the running daemon."""
+ if not self.pid_file.exists():
+ return False, "Service not running"
+
+ try:
+ pid = int(self.pid_file.read_text().strip())
+ os.kill(pid, signal.SIGTERM)
+
+ # Wait for process to exit
+ for _ in range(10):
+ try:
+ os.kill(pid, 0)
+ time.sleep(0.5)
+ except ProcessLookupError:
+ break
+
+ return True, f"Service stopped (PID: {pid})"
+
+ except ProcessLookupError:
+ self.pid_file.unlink()
+ return True, "Service was not running"
+ except Exception as e:
+ return False, f"Error stopping service: {e}"
+
+ def status(self) -> dict:
+ """Get service status."""
+ status = {
+ "running": False,
+ "pid": None,
+ "terminals": 0,
+ "commands_logged": 0,
+ }
+
+ if self.pid_file.exists():
+ try:
+ pid = int(self.pid_file.read_text().strip())
+ os.kill(pid, 0)
+ status["running"] = True
+ status["pid"] = pid
+ except (ProcessLookupError, ValueError):
+ pass
+
+ if self.watch_log.exists():
+ try:
+ content = self.watch_log.read_text()
+ status["commands_logged"] = len([l for l in content.split("\n") if l.strip()])
+ except Exception:
+ pass
+
+ self._load_state()
+ status["terminals"] = len(self.terminals)
+
+ return status
+
+
+def get_systemd_service_content() -> str:
+ """Generate systemd service file content."""
+ python_path = sys.executable
+ service_script = Path(__file__).resolve()
+
+ return f"""[Unit]
+Description=Cortex Terminal Watch Service
+Documentation=https://github.com/cortexlinux/cortex
+After=default.target
+
+[Service]
+Type=simple
+ExecStart={python_path} {service_script} --daemon
+ExecStop={python_path} {service_script} --stop
+ExecReload=/bin/kill -HUP $MAINPID
+Restart=always
+RestartSec=5
+StandardOutput=journal
+StandardError=journal
+
+# Security
+NoNewPrivileges=true
+PrivateTmp=true
+
+[Install]
+WantedBy=default.target
+"""
+
+
+def install_service() -> tuple[bool, str]:
+ """Install the systemd user service."""
+ service_dir = Path.home() / ".config" / "systemd" / "user"
+ service_file = service_dir / "cortex-watch.service"
+
+ try:
+ # Create directory
+ service_dir.mkdir(parents=True, exist_ok=True)
+
+ # Write service file
+ service_file.write_text(get_systemd_service_content())
+
+ # Reload systemd
+ subprocess.run(["systemctl", "--user", "daemon-reload"], check=True)
+
+ # Enable and start service
+ subprocess.run(["systemctl", "--user", "enable", "cortex-watch.service"], check=True)
+ subprocess.run(["systemctl", "--user", "start", "cortex-watch.service"], check=True)
+
+ # Enable lingering so service runs even when not logged in
+ subprocess.run(["loginctl", "enable-linger", os.getenv("USER", "")], capture_output=True)
+
+ return (
+ True,
+ f"""✓ Cortex Watch Service installed and started!
+
+Service file: {service_file}
+
+The service will:
+ • Start automatically on login
+ • Restart automatically if it crashes
+ • Monitor all terminal activity
+
+Commands:
+ systemctl --user status cortex-watch # Check status
+ systemctl --user restart cortex-watch # Restart
+ systemctl --user stop cortex-watch # Stop
+ journalctl --user -u cortex-watch # View logs
+""",
+ )
+ except subprocess.CalledProcessError as e:
+ return False, f"Failed to install service: {e}"
+ except Exception as e:
+ return False, f"Error: {e}"
+
+
+def uninstall_service() -> tuple[bool, str]:
+ """Uninstall the systemd user service."""
+ service_file = Path.home() / ".config" / "systemd" / "user" / "cortex-watch.service"
+
+ try:
+ # Stop and disable service
+ subprocess.run(["systemctl", "--user", "stop", "cortex-watch.service"], capture_output=True)
+ subprocess.run(
+ ["systemctl", "--user", "disable", "cortex-watch.service"], capture_output=True
+ )
+
+ # Remove service file
+ if service_file.exists():
+ service_file.unlink()
+
+ # Reload systemd
+ subprocess.run(["systemctl", "--user", "daemon-reload"], check=True)
+
+ return True, "✓ Cortex Watch Service uninstalled"
+ except Exception as e:
+ return False, f"Error: {e}"
+
+
+def main():
+ """Main entry point."""
+ import argparse
+
+ parser = argparse.ArgumentParser(description="Cortex Watch Service")
+ parser.add_argument("--daemon", action="store_true", help="Run as daemon")
+ parser.add_argument("--stop", action="store_true", help="Stop the daemon")
+ parser.add_argument("--status", action="store_true", help="Show status")
+ parser.add_argument("--install", action="store_true", help="Install systemd service")
+ parser.add_argument("--uninstall", action="store_true", help="Uninstall systemd service")
+
+ args = parser.parse_args()
+
+ daemon = CortexWatchDaemon()
+
+ if args.install:
+ success, msg = install_service()
+ print(msg)
+ sys.exit(0 if success else 1)
+
+ if args.uninstall:
+ success, msg = uninstall_service()
+ print(msg)
+ sys.exit(0 if success else 1)
+
+ if args.status:
+ status = daemon.status()
+ print(f"Running: {status['running']}")
+ if status["pid"]:
+ print(f"PID: {status['pid']}")
+ print(f"Terminals tracked: {status['terminals']}")
+ print(f"Commands logged: {status['commands_logged']}")
+ sys.exit(0)
+
+ if args.stop:
+ success, msg = daemon.stop()
+ print(msg)
+ sys.exit(0 if success else 1)
+
+ if args.daemon:
+ daemon.start()
+ else:
+ parser.print_help()
+
+
+if __name__ == "__main__":
+ main()
diff --git a/docs/ASK_DO_ARCHITECTURE.md b/docs/ASK_DO_ARCHITECTURE.md
new file mode 100644
index 00000000..3b426123
--- /dev/null
+++ b/docs/ASK_DO_ARCHITECTURE.md
@@ -0,0 +1,741 @@
+# Cortex `ask --do` Architecture
+
+> AI-powered command execution with intelligent error handling, auto-repair, and real-time terminal monitoring.
+
+## Table of Contents
+
+- [Overview](#overview)
+- [Architecture Diagram](#architecture-diagram)
+- [Core Components](#core-components)
+- [Execution Flow](#execution-flow)
+- [Terminal Monitoring](#terminal-monitoring)
+- [Error Handling & Auto-Fix](#error-handling--auto-fix)
+- [Session Management](#session-management)
+- [Key Files](#key-files)
+- [Data Flow](#data-flow)
+
+---
+
+## Overview
+
+`cortex ask --do` is an interactive AI assistant that can execute commands on your Linux system. Unlike simple command execution, it features:
+
+- **Natural Language Understanding** - Describe what you want in plain English
+- **Conflict Detection** - Detects existing resources (Docker containers, services, files) before execution
+- **Task Tree Execution** - Structured command execution with dependencies
+- **Auto-Repair** - Automatically diagnoses and fixes failed commands
+- **Terminal Monitoring** - Watches your other terminals for real-time feedback
+- **Session Persistence** - Tracks history across multiple interactions
+
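+A typical turn through the stack looks roughly like the sketch below. This is a conceptual outline only — `AskHandler` and `DoHandler` are real classes described later in this document, but the method names and response shape shown here are simplifying assumptions, not the exact API.
+
+```python
+# Conceptual sketch of a single `ask --do` turn (simplified; not the actual cli.py code).
+handler = AskHandler(provider="claude")
+do_handler = DoHandler()
+
+query = input("What would you like to do? ")
+response = handler.ask(query)  # LLM replies with: command / do_commands / answer
+
+if response["response_type"] == "do_commands":
+    # Commands are shown to the user and only run after explicit approval.
+    do_handler.execute_with_task_tree(response["commands"])
+elif response["response_type"] == "answer":
+    print(response["answer"])
+```
+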
+---
+
+## Architecture Diagram
+
+```
+┌─────────────────────────────────────────────────────────────────────────────┐
+│ USER INPUT │
+│ "install nginx and configure it" │
+└─────────────────────────────────────────────────────────────────────────────┘
+ │
+ ▼
+┌─────────────────────────────────────────────────────────────────────────────┐
+│ CLI Layer │
+│ (cli.py) │
+│ ┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐ │
+│ │ Signal Handlers │ │ Session Manager │ │ Interactive │ │
+│ │ (Ctrl+Z/C) │ │ (session_id) │ │ Prompt │ │
+│ └─────────────────┘ └─────────────────┘ └─────────────────┘ │
+└─────────────────────────────────────────────────────────────────────────────┘
+ │
+ ▼
+┌─────────────────────────────────────────────────────────────────────────────┐
+│ AskHandler │
+│ (ask.py) │
+│ ┌─────────────────────────────────────────────────────────────────────┐ │
+│ │ LLM Integration │ │
+│ │ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │ │
+│ │ │ Claude │ │ Kimi K2 │ │ Ollama │ │ │
+│ │ │ (Primary) │ │ (Fallback) │ │ (Local) │ │ │
+│ │ └─────────────┘ └─────────────┘ └─────────────┘ │ │
+│ └─────────────────────────────────────────────────────────────────────┘ │
+│ │
+│ Response Types: │
+│ ├── "command" → Read-only info gathering │
+│ ├── "do_commands" → Commands to execute (requires approval) │
+│ └── "answer" → Final response to user │
+└─────────────────────────────────────────────────────────────────────────────┘
+ │
+ ▼
+┌─────────────────────────────────────────────────────────────────────────────┐
+│ DoHandler │
+│ (do_runner/handler.py) │
+│ │
+│ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ │
+│ │ Conflict │ │ Task Tree │ │ Auto │ │ Terminal │ │
+│ │ Detection │ │ Execution │ │ Repair │ │ Monitor │ │
+│ └──────────────┘ └──────────────┘ └──────────────┘ └──────────────┘ │
+│ │
+│ Execution Modes: │
+│ ├── Automatic → Commands run with user approval │
+│ └── Manual → User runs commands, Cortex monitors │
+└─────────────────────────────────────────────────────────────────────────────┘
+ │
+ ┌───────────────┴───────────────┐
+ ▼ ▼
+┌─────────────────────────────┐ ┌─────────────────────────────────────────┐
+│ Automatic Execution │ │ Manual Intervention │
+│ │ │ │
+│ ┌────────────────────────┐ │ │ ┌────────────────────────────────────┐ │
+│ │ ConflictDetector │ │ │ │ TerminalMonitor │ │
+│ │ (verification.py) │ │ │ │ (terminal.py) │ │
+│ │ │ │ │ │ │ │
+│ │ Checks for: │ │ │ │ Monitors: │ │
+│ │ • Docker containers │ │ │ │ • ~/.bash_history │ │
+│ │ • Running services │ │ │ │ • ~/.zsh_history │ │
+│ │ • Existing files │ │ │ │ • terminal_watch.log │ │
+│ │ • Port conflicts │ │ │ │ • Cursor IDE terminals │ │
+│ │ • Package conflicts │ │ │ │ │ │
+│ └────────────────────────┘ │ │ │ Features: │ │
+│ │ │ │ • Real-time command detection │ │
+│ ┌────────────────────────┐ │ │ │ • Error detection & auto-fix │ │
+│ │ CommandExecutor │ │ │ │ • Desktop notifications │ │
+│ │ (executor.py) │ │ │ │ • Terminal ID tracking │ │
+│ │ │ │ │ └────────────────────────────────────┘ │
+│ │ • Subprocess mgmt │ │ │ │
+│ │ • Timeout handling │ │ │ ┌────────────────────────────────────┐ │
+│ │ • Output capture │ │ │ │ Watch Service (Daemon) │ │
+│ │ • Sudo handling │ │ │ │ (watch_service.py) │ │
+│ └────────────────────────┘ │ │ │ │ │
+│ │ │ │ • Runs as systemd user service │ │
+│ ┌────────────────────────┐ │ │ │ • Auto-starts on login │ │
+│ │ ErrorDiagnoser │ │ │ │ • Uses inotify for efficiency │ │
+│ │ (diagnosis.py) │ │ │ │ • Logs to terminal_commands.json │ │
+│ │ │ │ │ └────────────────────────────────────┘ │
+│ │ • Pattern matching │ │ │ │
+│ │ • LLM-powered diag │ │ └─────────────────────────────────────────┘
+│ │ • Fix suggestions │ │
+│ └────────────────────────┘ │
+│ │
+│ ┌────────────────────────┐ │
+│ │ AutoFixer │ │
+│ │ (diagnosis.py) │ │
+│ │ │ │
+│ │ • Automatic repairs │ │
+│ │ • Retry strategies │ │
+│ │ • Verification tests │ │
+│ └────────────────────────┘ │
+└─────────────────────────────┘
+ │
+ ▼
+┌─────────────────────────────────────────────────────────────────────────────┐
+│ Persistence Layer │
+│ │
+│ ┌─────────────────────────────┐ ┌─────────────────────────────────────┐ │
+│ │ DoRunDatabase │ │ Log Files │ │
+│ │ (~/.cortex/do_runs.db) │ │ │ │
+│ │ │ │ • terminal_watch.log │ │
+│ │ Tables: │ │ • terminal_commands.json │ │
+│ │ • do_runs │ │ • watch_service.log │ │
+│ │ • do_sessions │ │ │ │
+│ └─────────────────────────────┘ └─────────────────────────────────────┘ │
+└─────────────────────────────────────────────────────────────────────────────┘
+```
+
+---
+
+## Core Components
+
+### 1. CLI Layer (`cli.py`)
+
+The entry point for `cortex ask --do`. Handles:
+
+- **Signal Handlers**: Ctrl+Z stops the current command (not the session); Ctrl+C exits the session
+- **Session Management**: Creates/tracks session IDs for history grouping
+- **Interactive Loop**: "What would you like to do?" prompt with suggestions
+- **Error Handling**: Graceful error display without exposing internal details
+
+```python
+# Key functions
+_run_interactive_do_session(handler) # Main interactive loop
+handle_session_interrupt() # Ctrl+Z handler
+```
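+
+The interrupt behavior can be pictured with a small sketch; the handler names and the flag below are illustrative, not the actual `cli.py` internals:
+
+```python
+import signal
+
+# Illustrative only: the real CLI manages the session object directly.
+interrupt_current_command = False
+
+def _on_sigtstp(signum, frame):
+    # Ctrl+Z: abort the running command but keep the interactive session alive
+    global interrupt_current_command
+    interrupt_current_command = True
+
+def _on_sigint(signum, frame):
+    # Ctrl+C: leave the interactive session entirely
+    raise KeyboardInterrupt
+
+signal.signal(signal.SIGTSTP, _on_sigtstp)
+signal.signal(signal.SIGINT, _on_sigint)
+```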
+
+### 2. AskHandler (`ask.py`)
+
+Manages LLM communication and response parsing:
+
+- **Multi-LLM Support**: Claude (primary), Kimi K2, Ollama (local)
+- **Response Types**:
+ - `command` - Read-only info gathering (ls, cat, systemctl status)
+ - `do_commands` - Commands requiring execution (apt install, systemctl restart)
+ - `answer` - Final response to user
+- **Guardrails**: Rejects non-Linux/technical queries
+- **Chained Command Handling**: Splits `&&` chains into individual commands
+
+```python
+# Key methods
+_get_do_mode_system_prompt() # LLM system prompt
+_handle_do_commands() # Process do_commands response
+_call_llm() # Make LLM API call with interrupt support
+```
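+
+As a rough illustration of the chained-command handling, a naive splitter might look like this (a simplified sketch, not the actual `ask.py` code, which also has to respect quoting):
+
+```python
+def split_chained_commands(command: str) -> list[str]:
+    """Split 'apt update && apt install -y nginx' into separate commands."""
+    return [part.strip() for part in command.split("&&") if part.strip()]
+
+print(split_chained_commands("apt update && apt install -y nginx"))
+# -> ['apt update', 'apt install -y nginx']
+```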
+
+### 3. DoHandler (`do_runner/handler.py`)
+
+The execution engine. Core responsibilities:
+
+- **Conflict Detection**: Checks for existing resources before execution
+- **Task Tree Building**: Creates structured execution plan
+- **Command Execution**: Runs commands with approval workflow
+- **Auto-Repair**: Handles failures with diagnostic commands
+- **Manual Intervention**: Coordinates with TerminalMonitor
+
+```python
+# Key methods
+execute_with_task_tree() # Main execution method
+_handle_resource_conflict() # User prompts for conflicts
+_execute_task_node() # Execute single task
+_interactive_session() # Post-execution suggestions
+```
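+
+The approve-then-execute loop at the heart of this can be sketched as follows (a hypothetical helper with no task-tree bookkeeping or repair logic; the 120-second timeout matches the documented `CORTEX_DO_TIMEOUT` default):
+
+```python
+import subprocess
+
+def run_plan(commands: list[str]) -> bool:
+    """Show the planned commands, ask for approval, then run them one by one."""
+    print("Planned commands:")
+    for cmd in commands:
+        print(f"  $ {cmd}")
+    if input("Proceed? [y/N]: ").strip().lower() != "y":
+        return False
+    for cmd in commands:
+        result = subprocess.run(cmd, shell=True, capture_output=True, text=True, timeout=120)
+        if result.returncode != 0:
+            # In Cortex this is where diagnosis and auto-repair would kick in
+            print(f"Command failed: {cmd}\n{result.stderr}")
+            return False
+    return True
+```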
+
+### 4. ConflictDetector (`verification.py`)
+
+Pre-flight checks before command execution:
+
+| Resource Type | Check Method |
+|--------------|--------------|
+| Docker containers | `docker ps -a --filter name=X` |
+| Systemd services | `systemctl is-active X` |
+| Files/directories | `os.path.exists()` |
+| Ports | `ss -tlnp \| grep :PORT` |
+| Packages (apt) | `dpkg -l \| grep X` |
+| Packages (pip) | `pip show X` |
+| Users/groups | `getent passwd/group` |
+| Databases | `mysql -e "SHOW DATABASES"` / `psql -l` |
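+
+Each row reduces to a cheap shell probe; for example, the Docker-container check could look roughly like this (an illustrative helper, not the exact `verification.py` code):
+
+```python
+import subprocess
+
+def docker_container_exists(name: str) -> bool:
+    """Check for an existing container with this name, as the table above describes."""
+    result = subprocess.run(
+        ["docker", "ps", "-a", "--filter", f"name={name}", "--format", "{{.Names}}"],
+        capture_output=True, text=True,
+    )
+    # The name filter is a substring match, so compare exact names from the output.
+    return name in result.stdout.split()
+```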
+
+### 5. TerminalMonitor (`terminal.py`)
+
+Real-time monitoring for manual intervention mode; a minimal log-tailing sketch follows the lists below:
+
+- **Sources Monitored**:
+ - `~/.bash_history` and `~/.zsh_history`
+ - `~/.cortex/terminal_watch.log` (from shell hooks)
+ - Cursor IDE terminal files
+ - tmux panes
+
+- **Features**:
+ - Command detection with terminal ID tracking
+ - Error detection in command output
+ - LLM-powered error analysis
+ - Desktop notifications for errors/fixes
+ - Auto-fix execution (non-sudo only)
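+
+A minimal polling sketch for consuming the `terminal_watch.log` source (illustrative only; the real `terminal.py` also tracks history files, IDE terminals, and tmux panes):
+
+```python
+import time
+from pathlib import Path
+
+WATCH_LOG = Path.home() / ".cortex" / "terminal_watch.log"
+
+def tail_watch_log():
+    """Yield (terminal_id, command) pairs as new lines are appended."""
+    with WATCH_LOG.open() as f:
+        f.seek(0, 2)  # start at end of file; only report new commands
+        while True:
+            line = f.readline()
+            if not line:
+                time.sleep(0.5)
+                continue
+            terminal_id, _, command = line.strip().partition("|")
+            if command:
+                yield terminal_id, command
+```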
+
+### 6. Watch Service (`watch_service.py`)
+
+Background daemon for persistent terminal monitoring:
+
+```bash
+# Install and manage
+cortex watch --install --service # Install systemd service
+cortex watch --status # Check status
+cortex watch --uninstall --service
+```
+
+- Runs as systemd user service
+- Uses inotify for efficient file watching (see the sketch after this list)
+- Auto-starts on login, auto-restarts on crash
+- Logs to `~/.cortex/terminal_commands.json`
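+
+As a rough stand-in for the daemon's inotify loop, the third-party `watchdog` package can watch the same directory (an approximation, not the actual `watch_service.py` implementation):
+
+```python
+import time
+from pathlib import Path
+
+from watchdog.events import FileSystemEventHandler
+from watchdog.observers import Observer
+
+CORTEX_DIR = Path.home() / ".cortex"
+
+class WatchLogHandler(FileSystemEventHandler):
+    def on_modified(self, event):
+        # The real service parses the new TTY|COMMAND lines here and appends
+        # one JSON object per command to ~/.cortex/terminal_commands.json.
+        if event.src_path.endswith("terminal_watch.log"):
+            print("terminal_watch.log changed:", event.src_path)
+
+observer = Observer()
+observer.schedule(WatchLogHandler(), str(CORTEX_DIR))
+observer.start()
+try:
+    while True:
+        time.sleep(1)
+finally:
+    observer.stop()
+    observer.join()
+```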
+
+---
+
+## Execution Flow
+
+### Flow 1: Automatic Execution
+
+```
+User: "install nginx"
+ │
+ ▼
+ ┌─────────────────┐
+ │ LLM Analysis │ ──→ Gathers system info (OS, existing packages)
+ └─────────────────┘
+ │
+ ▼
+ ┌─────────────────┐
+ │ Conflict Check │ ──→ Is nginx already installed?
+ └─────────────────┘
+ │
+ ┌────┴────┐
+ │ │
+ ▼ ▼
+ Conflict No Conflict
+ │ │
+ ▼ │
+┌─────────────────┐ │
+│ User Choice: │ │
+│ 1. Use existing │ │
+│ 2. Restart │ │
+│ 3. Recreate │ │
+└─────────────────┘ │
+ │ │
+ └──────┬──────┘
+ │
+ ▼
+ ┌─────────────────┐
+ │ Show Commands │ ──→ Display planned commands for approval
+ └─────────────────┘
+ │
+ ▼
+ ┌─────────────────┐
+ │ User Approval? │
+ └─────────────────┘
+ │
+ ┌────┴────┐
+ │ │
+ ▼ ▼
+ Yes No ──→ Cancel
+ │
+ ▼
+ ┌─────────────────┐
+ │ Execute Tasks │ ──→ Run commands one by one
+ └─────────────────┘
+ │
+ ┌────┴────┐
+ │ │
+ ▼ ▼
+ Success Failure
+ │ │
+ │ ▼
+ │ ┌─────────────────┐
+ │ │ Error Diagnosis │ ──→ Pattern matching + LLM analysis
+ │ └─────────────────┘
+ │ │
+ │ ▼
+ │ ┌─────────────────┐
+ │ │ Auto-Repair │ ──→ Execute fix commands
+ │ └─────────────────┘
+ │ │
+ │ ▼
+ │ ┌─────────────────┐
+ │ │ Verify Fix │
+ │ └─────────────────┘
+ │ │
+ └────┬────┘
+ │
+ ▼
+ ┌─────────────────┐
+ │ Verification │ ──→ Run tests to confirm success
+ └─────────────────┘
+ │
+ ▼
+ ┌─────────────────┐
+ │ Interactive │ ──→ "What would you like to do next?"
+ │ Session │
+ └─────────────────┘
+```
+
+### Flow 2: Manual Intervention
+
+```
+User requests sudo commands OR chooses manual execution
+ │
+ ▼
+ ┌─────────────────────────────────────────────────────────┐
+ │ Manual Intervention Mode │
+ │ │
+ │ ┌────────────────────────────────────────────────────┐ │
+ │ │ Cortex Terminal │ │
+ │ │ Shows: │ │
+ │ │ • Commands to run │ │
+ │ │ • Live terminal feed │ │
+ │ │ • Real-time feedback │ │
+ │ └────────────────────────────────────────────────────┘ │
+ │ ▲ │
+ │ │ monitors │
+ │ │ │
+ │ ┌────────────────────────────────────────────────────┐ │
+ │ │ Other Terminal(s) │ │
+ │ │ User runs: │ │
+ │ │ $ sudo systemctl restart nginx │ │
+ │ │ $ sudo apt install package │ │
+ │ └────────────────────────────────────────────────────┘ │
+ └─────────────────────────────────────────────────────────┘
+ │
+ ▼
+ ┌─────────────────┐
+ │ Command Match? │
+ └─────────────────┘
+ │
+ ┌────┴────────────┐
+ │ │ │
+ ▼ ▼ ▼
+ Correct Wrong Error in
+ Command Command Output
+ │ │ │
+ │ ▼ ▼
+ │ Notification Notification
+ │ "Expected: "Fixing error..."
+ │ " + Auto-fix
+ │ │ │
+ └────┬────┴───────┘
+ │
+ ▼
+ User presses Enter when done
+ │
+ ▼
+ ┌─────────────────┐
+ │ Validate │ ──→ Check if expected commands were run
+ └─────────────────┘
+ │
+ ▼
+ ┌─────────────────┐
+ │ Continue or │
+ │ Show Next Steps │
+ └─────────────────┘
+```
+
+---
+
+## Terminal Monitoring
+
+### Watch Hook Flow
+
+```
+┌──────────────────────────────────────────────────────────────────────────┐
+│ Terminal with Hook Active │
+│ │
+│ $ sudo systemctl restart nginx │
+│ │ │
+│ ▼ │
+│ PROMPT_COMMAND triggers __cortex_log_cmd() │
+│ │ │
+│ ▼ │
+│ Writes to ~/.cortex/terminal_watch.log │
+│ Format: pts_1|sudo systemctl restart nginx │
+└──────────────────────────────────────────────────────────────────────────┘
+ │
+ ▼
+┌──────────────────────────────────────────────────────────────────────────┐
+│ Watch Service (Daemon) │
+│ │
+│ Monitors with inotify: │
+│ • ~/.cortex/terminal_watch.log │
+│ • ~/.bash_history │
+│ • ~/.zsh_history │
+│ │ │
+│ ▼ │
+│ Parses: TTY|COMMAND │
+│ │ │
+│ ▼ │
+│ Writes to ~/.cortex/terminal_commands.json │
+│ {"timestamp": "...", "command": "...", "source": "watch_hook", │
+│ "terminal_id": "pts_1"} │
+└──────────────────────────────────────────────────────────────────────────┘
+ │
+ ▼
+┌──────────────────────────────────────────────────────────────────────────┐
+│ TerminalMonitor (In Cortex) │
+│ │
+│ During manual intervention: │
+│ 1. Reads terminal_watch.log │
+│ 2. Detects new commands │
+│ 3. Shows in "Live Terminal Feed" │
+│ 4. Checks if command matches expected │
+│ 5. Detects errors in output │
+│ 6. Triggers auto-fix if needed │
+└──────────────────────────────────────────────────────────────────────────┘
+```
+
+### Log File Formats
+
+**`~/.cortex/terminal_watch.log`** (Simple):
+```
+pts_1|docker ps
+pts_1|sudo systemctl restart nginx
+pts_2|ls -la
+shared|cd /home/user
+```
+
+**`~/.cortex/terminal_commands.json`** (Detailed):
+```json
+{"timestamp": "2026-01-16T14:15:00.123", "command": "docker ps", "source": "watch_hook", "terminal_id": "pts_1"}
+{"timestamp": "2026-01-16T14:15:05.456", "command": "sudo systemctl restart nginx", "source": "watch_hook", "terminal_id": "pts_1"}
+{"timestamp": "2026-01-16T14:15:10.789", "command": "cd /home/user", "source": "history", "terminal_id": "shared"}
+```
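+
+Both files are line-oriented, so reading them back is straightforward; for example, the JSON log parses as one object per line (a small sketch):
+
+```python
+import json
+from pathlib import Path
+
+commands_file = Path.home() / ".cortex" / "terminal_commands.json"
+
+# Each line is an independent JSON object, so parse line by line.
+for line in commands_file.read_text().splitlines():
+    if line.strip():
+        entry = json.loads(line)
+        print(entry["terminal_id"], entry["command"])
+```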
+
+---
+
+## Error Handling & Auto-Fix
+
+### Error Diagnosis Pipeline
+
+```
+Command fails with error
+ │
+ ▼
+┌─────────────────────────────────────────────────────────────┐
+│ Pattern Matching │
+│ │
+│ COMMAND_SHELL_ERRORS = { │
+│ "Permission denied": "permission_error", │
+│ "command not found": "missing_package", │
+│ "Connection refused": "service_not_running", │
+│ "No space left": "disk_full", │
+│ ... │
+│ } │
+└─────────────────────────────────────────────────────────────┘
+ │
+ ▼
+┌─────────────────────────────────────────────────────────────┐
+│ LLM Analysis (Claude) │
+│ │
+│ Prompt: "Analyze this error and suggest a fix" │
+│ Response: │
+│ CAUSE: Service not running │
+│ FIX: sudo systemctl start nginx │
+└─────────────────────────────────────────────────────────────┘
+ │
+ ▼
+┌─────────────────────────────────────────────────────────────┐
+│ AutoFixer Execution │
+│ │
+│ 1. Check if fix requires sudo │
+│ - Yes → Show manual instructions + notification │
+│ - No → Execute automatically │
+│ 2. Verify fix worked │
+│ 3. Retry original command if fixed │
+└─────────────────────────────────────────────────────────────┘
+```
+
+### Auto-Fix Strategies
+
+| Error Type | Strategy | Actions |
+|------------|----------|---------|
+| `permission_error` | `fix_permissions` | `chmod`, `chown`, or manual sudo |
+| `missing_package` | `install_package` | `apt install`, `pip install` |
+| `service_not_running` | `start_service` | `systemctl start`, check logs |
+| `port_in_use` | `kill_port_user` | Find and stop conflicting process |
+| `disk_full` | `free_disk_space` | `apt clean`, suggest cleanup |
+| `config_error` | `fix_config` | Backup + LLM-suggested fix |
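+
+A simplified version of the pattern-matching stage, mirroring the `COMMAND_SHELL_ERRORS` excerpt above (the function name is illustrative):
+
+```python
+ERROR_PATTERNS = {
+    "Permission denied": "permission_error",
+    "command not found": "missing_package",
+    "Connection refused": "service_not_running",
+    "No space left": "disk_full",
+}
+
+def classify_error(stderr: str) -> str:
+    """Map raw command output to an error type; unknown errors go to LLM diagnosis."""
+    for pattern, error_type in ERROR_PATTERNS.items():
+        if pattern.lower() in stderr.lower():
+            return error_type
+    return "unknown"
+
+print(classify_error("bash: htop: command not found"))  # -> missing_package
+```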
+
+---
+
+## Session Management
+
+### Session Structure
+
+```
+Session (session_id: sess_20260116_141500)
+│
+├── Run 1 (run_id: do_20260116_141500_abc123)
+│ ├── Query: "install nginx"
+│ ├── Commands:
+│ │ ├── apt update
+│ │ ├── apt install -y nginx
+│ │ └── systemctl start nginx
+│ └── Status: SUCCESS
+│
+├── Run 2 (run_id: do_20260116_141600_def456)
+│ ├── Query: "configure nginx for my domain"
+│ ├── Commands:
+│ │ ├── cat /etc/nginx/sites-available/default
+│ │ └── [manual: edit config]
+│ └── Status: SUCCESS
+│
+└── Run 3 (run_id: do_20260116_141700_ghi789)
+ ├── Query: "test nginx"
+ ├── Commands:
+ │ └── curl localhost
+ └── Status: SUCCESS
+```
+
+### Database Schema
+
+```sql
+-- Sessions table
+CREATE TABLE do_sessions (
+ session_id TEXT PRIMARY KEY,
+ started_at TEXT,
+ ended_at TEXT,
+ total_runs INTEGER DEFAULT 0
+);
+
+-- Runs table
+CREATE TABLE do_runs (
+ run_id TEXT PRIMARY KEY,
+ session_id TEXT,
+ summary TEXT,
+ mode TEXT,
+ commands TEXT, -- JSON array
+ started_at TEXT,
+ completed_at TEXT,
+ user_query TEXT,
+ FOREIGN KEY (session_id) REFERENCES do_sessions(session_id)
+);
+```
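+
+Recording a run with the standard library maps directly onto this schema; a sketch assuming the tables above already exist (the real `database.py` wraps this in `DoRunDatabase`):
+
+```python
+import json
+import sqlite3
+from datetime import datetime
+from pathlib import Path
+
+db = sqlite3.connect(Path.home() / ".cortex" / "do_runs.db")
+db.execute(
+    "INSERT INTO do_runs (run_id, session_id, summary, mode, commands, started_at, user_query) "
+    "VALUES (?, ?, ?, ?, ?, ?, ?)",
+    (
+        "do_20260116_141500_abc123",
+        "sess_20260116_141500",
+        "install nginx",
+        "automatic",
+        json.dumps(["apt update", "apt install -y nginx"]),  # stored as a JSON array
+        datetime.now().isoformat(),
+        "install nginx",
+    ),
+)
+db.commit()
+```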
+
+---
+
+## Key Files
+
+| File | Purpose |
+|------|---------|
+| `cortex/cli.py` | CLI entry point, signal handlers, interactive loop |
+| `cortex/ask.py` | LLM communication, response parsing, command validation |
+| `cortex/do_runner/handler.py` | Main execution engine, conflict handling, task tree |
+| `cortex/do_runner/executor.py` | Subprocess management, timeout handling |
+| `cortex/do_runner/verification.py` | Conflict detection, verification tests |
+| `cortex/do_runner/diagnosis.py` | Error patterns, diagnosis, auto-fix strategies |
+| `cortex/do_runner/terminal.py` | Terminal monitoring, shell hooks |
+| `cortex/do_runner/models.py` | Data models (TaskNode, DoRun, CommandStatus) |
+| `cortex/do_runner/database.py` | SQLite persistence for runs/sessions |
+| `cortex/watch_service.py` | Background daemon for terminal monitoring |
+| `cortex/llm_router.py` | Multi-LLM routing (Claude, Kimi, Ollama) |
+
+---
+
+## Data Flow
+
+```
+┌─────────────────────────────────────────────────────────────────────────┐
+│ Data Flow │
+│ │
+│ User Query ──→ AskHandler ──→ LLM ──→ Response │
+│ │ │ │ │ │
+│ │ │ │ ▼ │
+│ │ │ │ ┌─────────┐ │
+│ │ │ │ │ command │ ──→ Execute read-only │
+│ │ │ │ └─────────┘ │ │
+│ │ │ │ │ │ │
+│ │ │ │ ▼ │ │
+│ │ │ │ Output added │ │
+│ │ │ │ to history ─────────┘ │
+│ │ │ │ │ │
+│ │ │ │ ▼ │
+│ │ │ │ Loop back to LLM │
+│ │ │ │ │ │
+│ │ │ ▼ │ │
+│ │ │ ┌──────────────┐│ │
+│ │ │ │ do_commands ││ │
+│ │ │ └──────────────┘│ │
+│ │ │ │ │ │
+│ │ │ ▼ │ │
+│ │ │ DoHandler │ │
+│ │ │ │ │ │
+│ │ │ ▼ │ │
+│ │ │ Task Tree ──────┘ │
+│ │ │ │ │
+│ │ │ ▼ │
+│ │ │ Execute ──→ Success ──→ Verify ──→ Done │
+│ │ │ │ │
+│ │ │ ▼ │
+│ │ │ Failure ──→ Diagnose ──→ Fix ──→ Retry │
+│ │ │ │
+│ │ ▼ │
+│ │ ┌────────────┐ │
+│ │ │ answer │ ──→ Display to user │
+│ │ └────────────┘ │
+│ │ │ │
+│ ▼ ▼ │
+│ ┌─────────────────────┐ │
+│ │ Session Database │ │
+│ │ ~/.cortex/do_runs.db │
+│ └─────────────────────┘ │
+└─────────────────────────────────────────────────────────────────────────┘
+```
+
+---
+
+## Usage Examples
+
+### Basic Usage
+
+```bash
+# Start interactive session
+cortex ask --do
+
+# One-shot command
+cortex ask --do "install docker and run hello-world"
+```
+
+### With Terminal Monitoring
+
+```bash
+# Terminal 1: Start Cortex
+cortex ask --do
+> install nginx with ssl
+
+# Terminal 2: Run sudo commands shown by Cortex
+$ sudo apt install nginx
+$ sudo systemctl start nginx
+```
+
+### Check History
+
+```bash
+# View do history
+cortex do history
+
+# Shows:
+# Session: sess_20260116_141500 (3 runs)
+# Run 1: install nginx - SUCCESS
+# Run 2: configure nginx - SUCCESS
+# Run 3: test nginx - SUCCESS
+```
+
+---
+
+## Configuration
+
+### Environment Variables
+
+| Variable | Purpose | Default |
+|----------|---------|---------|
+| `ANTHROPIC_API_KEY` | Claude API key | Required |
+| `CORTEX_TERMINAL` | Marks Cortex's own terminal | Set automatically |
+| `CORTEX_DO_TIMEOUT` | Command timeout (seconds) | 120 |
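+
+These presumably resolve to plain environment lookups; for example, the timeout with its documented default (an assumption about how it is read, not a quote from the code):
+
+```python
+import os
+
+# Fall back to the documented default of 120 seconds when unset.
+timeout = int(os.environ.get("CORTEX_DO_TIMEOUT", "120"))
+```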
+
+### Watch Service
+
+```bash
+# Install (recommended)
+cortex watch --install --service
+
+# Check status
+cortex watch --status
+
+# View logs
+journalctl --user -u cortex-watch
+cat ~/.cortex/watch_service.log
+```
+
+---
+
+## Troubleshooting
+
+### Terminal monitoring not working
+
+1. Check if service is running: `cortex watch --status`
+2. Check hook is in .bashrc: `grep "Cortex Terminal Watch" ~/.bashrc`
+3. For existing terminals, run: `source ~/.cortex/watch_hook.sh`
+
+### Commands not being detected
+
+1. Check watch log: `cat ~/.cortex/terminal_watch.log`
+2. Ensure format is `TTY|COMMAND` (e.g., `pts_1|ls -la`)
+3. Restart service: `systemctl --user restart cortex-watch`
+
+### Auto-fix not working
+
+1. Check if command requires sudo (auto-fix can't run sudo)
+2. Check error diagnosis: Look for `⚠ Fix requires manual execution`
+3. Run suggested commands manually in another terminal
+
+---
+
+## See Also
+
+- [LLM Integration](./LLM_INTEGRATION.md)
+- [Error Handling](./modules/README_ERROR_PARSER.md)
+- [Verification System](./modules/README_VERIFICATION.md)
+- [Troubleshooting Guide](./TROUBLESHOOTING.md)
+
diff --git a/scripts/setup_ask_do.py b/scripts/setup_ask_do.py
new file mode 100755
index 00000000..dd40807c
--- /dev/null
+++ b/scripts/setup_ask_do.py
@@ -0,0 +1,635 @@
+#!/usr/bin/env python3
+"""
+Setup script for Cortex `ask --do` command.
+
+This script sets up everything needed for the AI-powered command execution:
+1. Installs required Python dependencies
+2. Sets up Ollama Docker container with a small model
+3. Installs and starts the Cortex Watch service
+4. Configures shell hooks for terminal monitoring
+
+Usage:
+    python scripts/setup_ask_do.py [--no-docker] [--model MODEL] [--skip-watch] [--uninstall]
+
+Options:
+ --no-docker Skip Docker/Ollama setup (use cloud LLM only)
+ --model MODEL Ollama model to install (default: mistral)
+ --skip-watch Skip watch service installation
+ --uninstall Remove all ask --do components
+"""
+
+import argparse
+import os
+import shutil
+import subprocess
+import sys
+import time
+from pathlib import Path
+
+
+# ANSI colors
+class Colors:
+ HEADER = "\033[95m"
+ BLUE = "\033[94m"
+ CYAN = "\033[96m"
+ GREEN = "\033[92m"
+ YELLOW = "\033[93m"
+ RED = "\033[91m"
+ BOLD = "\033[1m"
+ DIM = "\033[2m"
+ END = "\033[0m"
+
+
+def print_header(text: str):
+ """Print a section header."""
+ print(f"\n{Colors.BOLD}{Colors.CYAN}{'═' * 60}{Colors.END}")
+ print(f"{Colors.BOLD}{Colors.CYAN} {text}{Colors.END}")
+ print(f"{Colors.BOLD}{Colors.CYAN}{'═' * 60}{Colors.END}\n")
+
+
+def print_step(text: str):
+ """Print a step."""
+ print(f"{Colors.BLUE}▶{Colors.END} {text}")
+
+
+def print_success(text: str):
+ """Print success message."""
+ print(f"{Colors.GREEN}✓{Colors.END} {text}")
+
+
+def print_warning(text: str):
+ """Print warning message."""
+ print(f"{Colors.YELLOW}⚠{Colors.END} {text}")
+
+
+def print_error(text: str):
+ """Print error message."""
+ print(f"{Colors.RED}✗{Colors.END} {text}")
+
+
+def run_cmd(
+ cmd: list[str], check: bool = True, capture: bool = False, timeout: int = 300
+) -> subprocess.CompletedProcess:
+ """Run a command and return the result."""
+ try:
+ result = subprocess.run(
+ cmd, check=check, capture_output=capture, text=True, timeout=timeout
+ )
+ return result
+ except subprocess.CalledProcessError as e:
+ if capture:
+ print_error(f"Command failed: {' '.join(cmd)}")
+ if e.stderr:
+ print(f" {Colors.DIM}{e.stderr[:200]}{Colors.END}")
+ raise
+ except subprocess.TimeoutExpired:
+ print_error(f"Command timed out: {' '.join(cmd)}")
+ raise
+
+
+def check_docker() -> bool:
+ """Check if Docker is installed and running."""
+ try:
+ result = run_cmd(["docker", "info"], capture=True, check=False)
+ return result.returncode == 0
+ except FileNotFoundError:
+ return False
+
+
+def check_ollama_container() -> tuple[bool, bool]:
+ """Check if Ollama container exists and is running.
+
+ Returns: (exists, running)
+ """
+ try:
+ result = run_cmd(
+ ["docker", "ps", "-a", "--filter", "name=ollama", "--format", "{{.Status}}"],
+ capture=True,
+ check=False,
+ )
+ if result.returncode != 0 or not result.stdout.strip():
+ return False, False
+
+ status = result.stdout.strip().lower()
+ running = "up" in status
+ return True, running
+ except Exception:
+ return False, False
+
+
+def setup_ollama(model: str = "mistral") -> bool:
+ """Set up Ollama Docker container and pull a model."""
+ print_header("Setting up Ollama (Local LLM)")
+
+ # Check Docker
+ print_step("Checking Docker...")
+ if not check_docker():
+ print_error("Docker is not installed or not running")
+ print(f" {Colors.DIM}Install Docker: https://docs.docker.com/get-docker/{Colors.END}")
+ print(f" {Colors.DIM}Then run: sudo systemctl start docker{Colors.END}")
+ return False
+ print_success("Docker is available")
+
+ # Check existing container
+ exists, running = check_ollama_container()
+
+ if exists and running:
+ print_success("Ollama container is already running")
+ elif exists and not running:
+ print_step("Starting existing Ollama container...")
+ run_cmd(["docker", "start", "ollama"])
+ print_success("Ollama container started")
+ else:
+ # Pull and run Ollama
+ print_step("Pulling Ollama Docker image...")
+ run_cmd(["docker", "pull", "ollama/ollama"])
+ print_success("Ollama image pulled")
+
+ print_step("Starting Ollama container...")
+ run_cmd(
+ [
+ "docker",
+ "run",
+ "-d",
+ "--name",
+ "ollama",
+ "-p",
+ "11434:11434",
+ "-v",
+ "ollama:/root/.ollama",
+ "--restart",
+ "unless-stopped",
+ "ollama/ollama",
+ ]
+ )
+ print_success("Ollama container started")
+
+ # Wait for container to be ready
+ print_step("Waiting for Ollama to initialize...")
+ time.sleep(5)
+
+ # Check if model exists
+ print_step(f"Checking for {model} model...")
+ try:
+ result = run_cmd(["docker", "exec", "ollama", "ollama", "list"], capture=True, check=False)
+ if model in result.stdout:
+ print_success(f"Model {model} is already installed")
+ return True
+ except Exception:
+ pass
+
+ # Pull model
+ print_step(f"Pulling {model} model (this may take a few minutes)...")
+ print(f" {Colors.DIM}Model size: ~4GB for mistral, ~2GB for phi{Colors.END}")
+
+ try:
+ # Use subprocess directly for streaming output
+ process = subprocess.Popen(
+ ["docker", "exec", "ollama", "ollama", "pull", model],
+ stdout=subprocess.PIPE,
+ stderr=subprocess.STDOUT,
+ text=True,
+ )
+
+ for line in process.stdout:
+ line = line.strip()
+ if line:
+ # Show progress
+ if "pulling" in line.lower() or "%" in line:
+ print(f"\r {Colors.DIM}{line[:70]}{Colors.END}", end="", flush=True)
+
+ process.wait()
+ print() # New line after progress
+
+ if process.returncode == 0:
+ print_success(f"Model {model} installed successfully")
+ return True
+ else:
+ print_error(f"Failed to pull model {model}")
+ return False
+
+ except Exception as e:
+ print_error(f"Error pulling model: {e}")
+ return False
+
+
+def setup_watch_service() -> bool:
+ """Install and start the Cortex Watch service."""
+ print_header("Setting up Cortex Watch Service")
+
+ # Check if service is already installed
+ service_file = Path.home() / ".config" / "systemd" / "user" / "cortex-watch.service"
+
+ if service_file.exists():
+ print_step("Watch service is already installed, checking status...")
+ result = run_cmd(
+ ["systemctl", "--user", "is-active", "cortex-watch.service"], capture=True, check=False
+ )
+ if result.stdout.strip() == "active":
+ print_success("Cortex Watch service is running")
+ return True
+ else:
+ print_step("Starting watch service...")
+ run_cmd(["systemctl", "--user", "start", "cortex-watch.service"], check=False)
+ else:
+ # Install the service
+ print_step("Installing Cortex Watch service...")
+
+ try:
+ # Import and run the installation
+ from cortex.watch_service import install_service
+
+ success, msg = install_service()
+
+ if success:
+ print_success("Watch service installed and started")
+ print(
+ f" {Colors.DIM}{msg[:200]}...{Colors.END}"
+ if len(msg) > 200
+ else f" {Colors.DIM}{msg}{Colors.END}"
+ )
+ else:
+ print_error(f"Failed to install watch service: {msg}")
+ return False
+
+ except ImportError:
+ print_warning("Could not import watch_service module")
+ print_step("Installing via CLI...")
+
+ result = run_cmd(
+ ["cortex", "watch", "--install", "--service"], capture=True, check=False
+ )
+ if result.returncode == 0:
+ print_success("Watch service installed via CLI")
+ else:
+ print_error("Failed to install watch service")
+ return False
+
+ # Verify service is running
+ result = run_cmd(
+ ["systemctl", "--user", "is-active", "cortex-watch.service"], capture=True, check=False
+ )
+ if result.stdout.strip() == "active":
+ print_success("Watch service is active and monitoring terminals")
+ return True
+ else:
+ print_warning("Watch service installed but not running")
+ return True # Still return True as installation succeeded
+
+
+def setup_shell_hooks() -> bool:
+ """Set up shell hooks for terminal monitoring."""
+ print_header("Setting up Shell Hooks")
+
+ cortex_dir = Path.home() / ".cortex"
+ cortex_dir.mkdir(parents=True, exist_ok=True)
+
+ # Create watch hook script
+ hook_file = cortex_dir / "watch_hook.sh"
+ hook_content = """#!/bin/bash
+# Cortex Terminal Watch Hook
+# This hook logs commands for Cortex to monitor during manual intervention
+
+__cortex_last_histnum=""
+__cortex_log_cmd() {
+ local histnum="$(history 1 | awk '{print $1}')"
+ [[ "$histnum" == "$__cortex_last_histnum" ]] && return
+ __cortex_last_histnum="$histnum"
+
+ local cmd="$(history 1 | sed "s/^[ ]*[0-9]*[ ]*//")"
+ [[ -z "${cmd// /}" ]] && return
+ [[ "$cmd" == cortex* ]] && return
+ [[ "$cmd" == *"source"*".cortex"* ]] && return
+ [[ "$cmd" == *"watch_hook"* ]] && return
+ [[ -n "$CORTEX_TERMINAL" ]] && return
+
+ # Include terminal ID (TTY) in the log
+ local tty_name="$(tty 2>/dev/null | sed 's|/dev/||' | tr '/' '_')"
+ echo "${tty_name:-unknown}|$cmd" >> ~/.cortex/terminal_watch.log
+}
+export PROMPT_COMMAND='history -a; __cortex_log_cmd'
+echo "✓ Cortex is now watching this terminal"
+"""
+
+ print_step("Creating watch hook script...")
+ hook_file.write_text(hook_content)
+ hook_file.chmod(0o755)
+ print_success(f"Created {hook_file}")
+
+ # Add to .bashrc if not already present
+ bashrc = Path.home() / ".bashrc"
+ marker = "# Cortex Terminal Watch Hook"
+
+ if bashrc.exists():
+ content = bashrc.read_text()
+ if marker not in content:
+ print_step("Adding hook to .bashrc...")
+
+ bashrc_addition = f"""
+{marker}
+__cortex_last_histnum=""
+__cortex_log_cmd() {{
+ local histnum="$(history 1 | awk '{{print $1}}')"
+ [[ "$histnum" == "$__cortex_last_histnum" ]] && return
+ __cortex_last_histnum="$histnum"
+
+ local cmd="$(history 1 | sed "s/^[ ]*[0-9]*[ ]*//")"
+ [[ -z "${{cmd// /}}" ]] && return
+ [[ "$cmd" == cortex* ]] && return
+ [[ "$cmd" == *"source"*".cortex"* ]] && return
+ [[ "$cmd" == *"watch_hook"* ]] && return
+ [[ -n "$CORTEX_TERMINAL" ]] && return
+
+ local tty_name="$(tty 2>/dev/null | sed 's|/dev/||' | tr '/' '_')"
+ echo "${{tty_name:-unknown}}|$cmd" >> ~/.cortex/terminal_watch.log
+}}
+export PROMPT_COMMAND='history -a; __cortex_log_cmd'
+
+alias cw="source ~/.cortex/watch_hook.sh"
+"""
+ with open(bashrc, "a") as f:
+ f.write(bashrc_addition)
+ print_success("Hook added to .bashrc")
+ else:
+ print_success("Hook already in .bashrc")
+
+ # Add to .zshrc if it exists
+ zshrc = Path.home() / ".zshrc"
+ if zshrc.exists():
+ content = zshrc.read_text()
+ if marker not in content:
+ print_step("Adding hook to .zshrc...")
+
+ zshrc_addition = f"""
+{marker}
+typeset -g __cortex_last_cmd=""
+cortex_watch_hook() {{
+ local cmd="$(fc -ln -1 | sed 's/^[[:space:]]*//')"
+ [[ -z "$cmd" ]] && return
+ [[ "$cmd" == "$__cortex_last_cmd" ]] && return
+ __cortex_last_cmd="$cmd"
+ [[ "$cmd" == cortex* ]] && return
+ [[ "$cmd" == *".cortex"* ]] && return
+ [[ -n "$CORTEX_TERMINAL" ]] && return
+ local tty_name="$(tty 2>/dev/null | sed 's|/dev/||' | tr '/' '_')"
+ echo "${{tty_name:-unknown}}|$cmd" >> ~/.cortex/terminal_watch.log
+}}
+precmd_functions+=(cortex_watch_hook)
+"""
+ with open(zshrc, "a") as f:
+ f.write(zshrc_addition)
+ print_success("Hook added to .zshrc")
+ else:
+ print_success("Hook already in .zshrc")
+
+ return True
+
+
+def check_api_keys() -> dict[str, bool]:
+ """Check for available API keys."""
+ print_header("Checking API Keys")
+
+ keys = {
+ "ANTHROPIC_API_KEY": False,
+ "OPENAI_API_KEY": False,
+ }
+
+ # Check environment variables
+ for key in keys:
+ if os.environ.get(key):
+ keys[key] = True
+ print_success(f"{key} found in environment")
+
+ # Check .env file
+ env_file = Path.cwd() / ".env"
+ if env_file.exists():
+ content = env_file.read_text()
+ for key in keys:
+ if key in content and not keys[key]:
+ keys[key] = True
+ print_success(f"{key} found in .env file")
+
+ # Report missing keys
+ if not any(keys.values()):
+ print_warning("No API keys found")
+ print(f" {Colors.DIM}For cloud LLM, set ANTHROPIC_API_KEY or OPENAI_API_KEY{Colors.END}")
+ print(f" {Colors.DIM}Or use local Ollama (--no-docker to skip){Colors.END}")
+
+ return keys
+
+
+def verify_installation() -> bool:
+ """Verify the installation is working."""
+ print_header("Verifying Installation")
+
+ all_good = True
+
+ # Check cortex command
+ print_step("Checking cortex command...")
+ result = run_cmd(["cortex", "--version"], capture=True, check=False)
+ if result.returncode == 0:
+ print_success(f"Cortex installed: {result.stdout.strip()}")
+ else:
+ print_error("Cortex command not found")
+ all_good = False
+
+ # Check watch service
+ print_step("Checking watch service...")
+ result = run_cmd(
+ ["systemctl", "--user", "is-active", "cortex-watch.service"], capture=True, check=False
+ )
+ if result.stdout.strip() == "active":
+ print_success("Watch service is running")
+ else:
+ print_warning("Watch service is not running")
+
+ # Check Ollama
+ print_step("Checking Ollama...")
+ exists, running = check_ollama_container()
+ if running:
+ print_success("Ollama container is running")
+
+ # Check if model is available
+ result = run_cmd(["docker", "exec", "ollama", "ollama", "list"], capture=True, check=False)
+ if result.returncode == 0 and result.stdout.strip():
+ models = [
+ line.split()[0] for line in result.stdout.strip().split("\n")[1:] if line.strip()
+ ]
+ if models:
+ print_success(f"Models available: {', '.join(models[:3])}")
+ elif exists:
+ print_warning("Ollama container exists but not running")
+ else:
+ print_warning("Ollama not installed (will use cloud LLM)")
+
+ # Check API keys
+ api_keys = check_api_keys()
+ has_llm = any(api_keys.values()) or running
+
+ if not has_llm:
+ print_error("No LLM available (need API key or Ollama)")
+ all_good = False
+
+ return all_good
+
+
+def uninstall() -> bool:
+ """Remove all ask --do components."""
+ print_header("Uninstalling Cortex ask --do Components")
+
+ # Stop and remove watch service
+ print_step("Removing watch service...")
+ run_cmd(["systemctl", "--user", "stop", "cortex-watch.service"], check=False)
+ run_cmd(["systemctl", "--user", "disable", "cortex-watch.service"], check=False)
+
+ service_file = Path.home() / ".config" / "systemd" / "user" / "cortex-watch.service"
+ if service_file.exists():
+ service_file.unlink()
+ print_success("Watch service removed")
+
+ # Remove shell hooks from .bashrc and .zshrc
+ marker = "# Cortex Terminal Watch Hook"
+ for rc_file in [Path.home() / ".bashrc", Path.home() / ".zshrc"]:
+ if rc_file.exists():
+ content = rc_file.read_text()
+ if marker in content:
+ print_step(f"Removing hook from {rc_file.name}...")
+ lines = content.split("\n")
+ new_lines = []
+ skip = False
+                for line in lines:
+                    if marker in line:
+                        skip = True
+                        continue
+                    if skip:
+                        # The appended hook block contains blank lines, so skip
+                        # until its closing line (the alias for bash, the
+                        # precmd_functions registration for zsh) is consumed.
+                        if line.startswith("alias cw=") or line.startswith("precmd_functions+="):
+                            skip = False
+                        continue
+                    new_lines.append(line)
+ rc_file.write_text("\n".join(new_lines))
+ print_success(f"Hook removed from {rc_file.name}")
+
+ # Remove cortex directory files (but keep config)
+ cortex_dir = Path.home() / ".cortex"
+ files_to_remove = [
+ "watch_hook.sh",
+ "terminal_watch.log",
+ "terminal_commands.json",
+ "watch_service.log",
+ "watch_service.pid",
+ "watch_state.json",
+ ]
+ for filename in files_to_remove:
+ filepath = cortex_dir / filename
+ if filepath.exists():
+ filepath.unlink()
+ print_success("Cortex watch files removed")
+
+ # Optionally remove Ollama container
+ exists, _ = check_ollama_container()
+ if exists:
+ print_step("Ollama container found")
+ response = input(" Remove Ollama container and data? [y/N]: ").strip().lower()
+ if response == "y":
+ run_cmd(["docker", "stop", "ollama"], check=False)
+ run_cmd(["docker", "rm", "ollama"], check=False)
+ run_cmd(["docker", "volume", "rm", "ollama"], check=False)
+ print_success("Ollama container and data removed")
+ else:
+ print(f" {Colors.DIM}Keeping Ollama container{Colors.END}")
+
+ print_success("Uninstallation complete")
+ return True
+
+
+def main():
+ parser = argparse.ArgumentParser(
+ description="Setup script for Cortex ask --do command",
+ formatter_class=argparse.RawDescriptionHelpFormatter,
+ epilog="""
+Examples:
+ python scripts/setup_ask_do.py # Full setup with Ollama
+ python scripts/setup_ask_do.py --no-docker # Skip Docker/Ollama setup
+ python scripts/setup_ask_do.py --model phi # Use smaller phi model
+ python scripts/setup_ask_do.py --uninstall # Remove all components
+""",
+ )
+ parser.add_argument("--no-docker", action="store_true", help="Skip Docker/Ollama setup")
+ parser.add_argument(
+ "--model", default="mistral", help="Ollama model to install (default: mistral)"
+ )
+ parser.add_argument("--skip-watch", action="store_true", help="Skip watch service installation")
+ parser.add_argument("--uninstall", action="store_true", help="Remove all ask --do components")
+
+ args = parser.parse_args()
+
+ print(f"\n{Colors.BOLD}{Colors.CYAN}")
+ print(" ██████╗ ██████╗ ██████╗ ████████╗███████╗██╗ ██╗")
+ print(" ██╔════╝██╔═══██╗██╔══██╗╚══██╔══╝██╔════╝╚██╗██╔╝")
+ print(" ██║ ██║ ██║██████╔╝ ██║ █████╗ ╚███╔╝ ")
+ print(" ██║ ██║ ██║██╔══██╗ ██║ ██╔══╝ ██╔██╗ ")
+ print(" ╚██████╗╚██████╔╝██║ ██║ ██║ ███████╗██╔╝ ██╗")
+ print(" ╚═════╝ ╚═════╝ ╚═╝ ╚═╝ ╚═╝ ╚══════╝╚═╝ ╚═╝")
+ print(f"{Colors.END}")
+ print(f" {Colors.DIM}ask --do Setup Wizard{Colors.END}\n")
+
+ if args.uninstall:
+ return 0 if uninstall() else 1
+
+ success = True
+
+ # Step 1: Check API keys
+ api_keys = check_api_keys()
+
+ # Step 2: Setup Ollama (unless skipped)
+ if not args.no_docker:
+ if not setup_ollama(args.model):
+ if not any(api_keys.values()):
+ print_error("No LLM available - need either Ollama or API key")
+ success = False
+ else:
+ print_warning("Skipping Docker/Ollama setup (--no-docker)")
+ if not any(api_keys.values()):
+ print_warning("No API keys found - you'll need to set one up")
+
+ # Step 3: Setup watch service
+ if not args.skip_watch:
+ if not setup_watch_service():
+ print_warning("Watch service setup had issues")
+ else:
+ print_warning("Skipping watch service (--skip-watch)")
+
+ # Step 4: Setup shell hooks
+ setup_shell_hooks()
+
+ # Step 5: Verify installation
+ if verify_installation():
+ print_header("Setup Complete! 🎉")
+ print(f"""
+{Colors.GREEN}Everything is ready!{Colors.END}
+
+{Colors.BOLD}To use Cortex ask --do:{Colors.END}
+ cortex ask --do
+
+{Colors.BOLD}To start an interactive session:{Colors.END}
+ cortex ask --do "install nginx and configure it"
+
+{Colors.BOLD}For terminal monitoring in existing terminals:{Colors.END}
+ source ~/.cortex/watch_hook.sh
+ {Colors.DIM}(or just type 'cw' after opening a new terminal){Colors.END}
+
+{Colors.BOLD}To check status:{Colors.END}
+ cortex watch --status
+""")
+ return 0
+ else:
+ print_header("Setup Completed with Warnings")
+ print(f"""
+{Colors.YELLOW}Some components may need attention.{Colors.END}
+
+Run {Colors.CYAN}cortex watch --status{Colors.END} to check the current state.
+""")
+ return 1
+
+
+if __name__ == "__main__":
+ sys.exit(main())
diff --git a/scripts/setup_ask_do.sh b/scripts/setup_ask_do.sh
new file mode 100755
index 00000000..0d1fa40b
--- /dev/null
+++ b/scripts/setup_ask_do.sh
@@ -0,0 +1,435 @@
+#!/bin/bash
+#
+# Cortex ask --do Setup Script
+#
+# This script sets up everything needed for the AI-powered command execution:
+# - Ollama Docker container with a local LLM
+# - Cortex Watch service for terminal monitoring
+# - Shell hooks for command logging
+#
+# Usage:
+# ./scripts/setup_ask_do.sh [options]
+#
+# Options:
+# --no-docker Skip Docker/Ollama setup
+# --model MODEL Ollama model (default: mistral, alternatives: phi, llama2)
+# --skip-watch Skip watch service installation
+# --uninstall Remove all components
+#
+
+set -e
+
+# Colors
+RED='\033[0;31m'
+GREEN='\033[0;32m'
+YELLOW='\033[1;33m'
+BLUE='\033[0;34m'
+CYAN='\033[0;36m'
+BOLD='\033[1m'
+DIM='\033[2m'
+NC='\033[0m' # No Color
+
+# Defaults
+MODEL="mistral"
+NO_DOCKER=false
+SKIP_WATCH=false
+UNINSTALL=false
+
+# Parse arguments
+while [[ $# -gt 0 ]]; do
+ case $1 in
+ --no-docker)
+ NO_DOCKER=true
+ shift
+ ;;
+ --model)
+ MODEL="$2"
+ shift 2
+ ;;
+ --skip-watch)
+ SKIP_WATCH=true
+ shift
+ ;;
+ --uninstall)
+ UNINSTALL=true
+ shift
+ ;;
+ -h|--help)
+ echo "Usage: $0 [options]"
+ echo ""
+ echo "Options:"
+ echo " --no-docker Skip Docker/Ollama setup"
+ echo " --model MODEL Ollama model (default: mistral)"
+ echo " --skip-watch Skip watch service installation"
+ echo " --uninstall Remove all components"
+ exit 0
+ ;;
+ *)
+ echo -e "${RED}Unknown option: $1${NC}"
+ exit 1
+ ;;
+ esac
+done
+
+print_header() {
+ echo -e "\n${BOLD}${CYAN}════════════════════════════════════════════════════════════${NC}"
+ echo -e "${BOLD}${CYAN} $1${NC}"
+ echo -e "${BOLD}${CYAN}════════════════════════════════════════════════════════════${NC}\n"
+}
+
+print_step() {
+ echo -e "${BLUE}▶${NC} $1"
+}
+
+print_success() {
+ echo -e "${GREEN}✓${NC} $1"
+}
+
+print_warning() {
+ echo -e "${YELLOW}⚠${NC} $1"
+}
+
+print_error() {
+ echo -e "${RED}✗${NC} $1"
+}
+
+# Banner
+echo -e "\n${BOLD}${CYAN}"
+echo " ██████╗ ██████╗ ██████╗ ████████╗███████╗██╗ ██╗"
+echo " ██╔════╝██╔═══██╗██╔══██╗╚══██╔══╝██╔════╝╚██╗██╔╝"
+echo " ██║ ██║ ██║██████╔╝ ██║ █████╗ ╚███╔╝ "
+echo " ██║ ██║ ██║██╔══██╗ ██║ ██╔══╝ ██╔██╗ "
+echo " ╚██████╗╚██████╔╝██║ ██║ ██║ ███████╗██╔╝ ██╗"
+echo " ╚═════╝ ╚═════╝ ╚═╝ ╚═╝ ╚═╝ ╚══════╝╚═╝ ╚═╝"
+echo -e "${NC}"
+echo -e " ${DIM}ask --do Setup Wizard${NC}\n"
+
+# Uninstall
+if [ "$UNINSTALL" = true ]; then
+ print_header "Uninstalling Cortex ask --do Components"
+
+ # Stop watch service
+ print_step "Stopping watch service..."
+ systemctl --user stop cortex-watch.service 2>/dev/null || true
+ systemctl --user disable cortex-watch.service 2>/dev/null || true
+ rm -f ~/.config/systemd/user/cortex-watch.service
+ systemctl --user daemon-reload
+ print_success "Watch service removed"
+
+ # Remove shell hooks
+ print_step "Removing shell hooks..."
+    if [ -f ~/.bashrc ]; then
+        # Delete the whole appended block: it contains blank lines, so a
+        # blank-line terminator would leave a dangling fragment behind.
+        sed -i '/# Cortex Terminal Watch Hook/,/^alias cw=/d' ~/.bashrc
+        sed -i '/alias cw=/d' ~/.bashrc
+    fi
+    if [ -f ~/.zshrc ]; then
+        sed -i '/# Cortex Terminal Watch Hook/,/^precmd_functions+=/d' ~/.zshrc
+    fi
+ print_success "Shell hooks removed"
+
+ # Remove cortex files
+ print_step "Removing cortex watch files..."
+ rm -f ~/.cortex/watch_hook.sh
+ rm -f ~/.cortex/terminal_watch.log
+ rm -f ~/.cortex/terminal_commands.json
+ rm -f ~/.cortex/watch_service.log
+ rm -f ~/.cortex/watch_service.pid
+ rm -f ~/.cortex/watch_state.json
+ print_success "Watch files removed"
+
+ # Ask about Ollama
+ if docker ps -a --format '{{.Names}}' | grep -q '^ollama$'; then
+ print_step "Ollama container found"
+ read -p " Remove Ollama container and data? [y/N]: " -n 1 -r
+ echo
+ if [[ $REPLY =~ ^[Yy]$ ]]; then
+ docker stop ollama 2>/dev/null || true
+ docker rm ollama 2>/dev/null || true
+ docker volume rm ollama 2>/dev/null || true
+ print_success "Ollama removed"
+ fi
+ fi
+
+ print_success "Uninstallation complete"
+ exit 0
+fi
+
+# Check Python environment
+print_header "Checking Environment"
+
+print_step "Checking Python..."
+if command -v python3 &> /dev/null; then
+ PYTHON_VERSION=$(python3 --version 2>&1)
+ print_success "Python installed: $PYTHON_VERSION"
+else
+ print_error "Python 3 not found"
+ exit 1
+fi
+
+# Check if in virtual environment
+if [ -z "$VIRTUAL_ENV" ]; then
+ print_warning "Not in a virtual environment"
+ if [ -f "venv/bin/activate" ]; then
+ print_step "Activating venv..."
+ source venv/bin/activate
+ print_success "Activated venv"
+ else
+ print_warning "Consider running: python3 -m venv venv && source venv/bin/activate"
+ fi
+else
+ print_success "Virtual environment active: $VIRTUAL_ENV"
+fi
+
+# Check cortex installation
+print_step "Checking Cortex installation..."
+if command -v cortex &> /dev/null; then
+ print_success "Cortex is installed"
+else
+ print_warning "Cortex not found in PATH, installing..."
+ pip install -e . -q
+ print_success "Cortex installed"
+fi
+
+# Setup Ollama
+if [ "$NO_DOCKER" = false ]; then
+ print_header "Setting up Ollama (Local LLM)"
+
+ print_step "Checking Docker..."
+ if ! command -v docker &> /dev/null; then
+ print_error "Docker is not installed"
+ echo -e " ${DIM}Install Docker: https://docs.docker.com/get-docker/${NC}"
+ NO_DOCKER=true
+ elif ! docker info &> /dev/null; then
+ print_error "Docker daemon is not running"
+ echo -e " ${DIM}Run: sudo systemctl start docker${NC}"
+ NO_DOCKER=true
+ else
+ print_success "Docker is available"
+
+ # Check Ollama container
+ if docker ps --format '{{.Names}}' | grep -q '^ollama$'; then
+ print_success "Ollama container is running"
+ elif docker ps -a --format '{{.Names}}' | grep -q '^ollama$'; then
+ print_step "Starting Ollama container..."
+ docker start ollama
+ print_success "Ollama started"
+ else
+ print_step "Pulling Ollama image..."
+ docker pull ollama/ollama
+ print_success "Ollama image pulled"
+
+ print_step "Starting Ollama container..."
+ docker run -d \
+ --name ollama \
+ -p 11434:11434 \
+ -v ollama:/root/.ollama \
+ --restart unless-stopped \
+ ollama/ollama
+ print_success "Ollama container started"
+
+ sleep 3
+ fi
+
+ # Check model
+ print_step "Checking for $MODEL model..."
+ if docker exec ollama ollama list 2>/dev/null | grep -q "$MODEL"; then
+ print_success "Model $MODEL is installed"
+ else
+ print_step "Pulling $MODEL model (this may take a few minutes)..."
+ echo -e " ${DIM}Model size: ~4GB for mistral, ~2GB for phi${NC}"
+ docker exec ollama ollama pull "$MODEL"
+ print_success "Model $MODEL installed"
+ fi
+ fi
+else
+ print_warning "Skipping Docker/Ollama setup (--no-docker)"
+fi
+
+# Setup Watch Service
+if [ "$SKIP_WATCH" = false ]; then
+ print_header "Setting up Cortex Watch Service"
+
+ print_step "Installing watch service..."
+ cortex watch --install --service 2>/dev/null || {
+ # Manual installation if CLI fails
+ mkdir -p ~/.config/systemd/user
+
+ # Get Python path
+ PYTHON_PATH=$(which python3)
+ CORTEX_PATH=$(which cortex 2>/dev/null || echo "$HOME/.local/bin/cortex")
+
+ cat > ~/.config/systemd/user/cortex-watch.service << EOF
+[Unit]
+Description=Cortex Terminal Watch Service
+After=default.target
+
+[Service]
+Type=simple
+ExecStart=$PYTHON_PATH -m cortex.watch_service
+Restart=always
+RestartSec=5
+Environment=PATH=$HOME/.local/bin:/usr/local/bin:/usr/bin:/bin
+WorkingDirectory=$HOME
+
+[Install]
+WantedBy=default.target
+EOF
+
+ systemctl --user daemon-reload
+ systemctl --user enable cortex-watch.service
+ systemctl --user start cortex-watch.service
+ }
+
+ sleep 2
+
+ if systemctl --user is-active cortex-watch.service &> /dev/null; then
+ print_success "Watch service is running"
+ else
+ print_warning "Watch service installed but may need attention"
+ echo -e " ${DIM}Check with: systemctl --user status cortex-watch.service${NC}"
+ fi
+else
+ print_warning "Skipping watch service (--skip-watch)"
+fi
+
+# Setup Shell Hooks
+print_header "Setting up Shell Hooks"
+
+CORTEX_DIR="$HOME/.cortex"
+mkdir -p "$CORTEX_DIR"
+
+# Create watch hook
+print_step "Creating watch hook script..."
+cat > "$CORTEX_DIR/watch_hook.sh" << 'EOF'
+#!/bin/bash
+# Cortex Terminal Watch Hook
+
+__cortex_last_histnum=""
+__cortex_log_cmd() {
+ local histnum="$(history 1 | awk '{print $1}')"
+ [[ "$histnum" == "$__cortex_last_histnum" ]] && return
+ __cortex_last_histnum="$histnum"
+
+ local cmd="$(history 1 | sed "s/^[ ]*[0-9]*[ ]*//")"
+ [[ -z "${cmd// /}" ]] && return
+ [[ "$cmd" == cortex* ]] && return
+ [[ "$cmd" == *"source"*".cortex"* ]] && return
+ [[ "$cmd" == *"watch_hook"* ]] && return
+ [[ -n "$CORTEX_TERMINAL" ]] && return
+
+ local tty_name="$(tty 2>/dev/null | sed 's|/dev/||' | tr '/' '_')"
+ echo "${tty_name:-unknown}|$cmd" >> ~/.cortex/terminal_watch.log
+}
+export PROMPT_COMMAND='history -a; __cortex_log_cmd'
+echo "✓ Cortex is now watching this terminal"
+EOF
+chmod +x "$CORTEX_DIR/watch_hook.sh"
+print_success "Created watch hook script"
+
+# Add to .bashrc
+MARKER="# Cortex Terminal Watch Hook"
+if [ -f ~/.bashrc ]; then
+ if ! grep -q "$MARKER" ~/.bashrc; then
+ print_step "Adding hook to .bashrc..."
+ cat >> ~/.bashrc << 'EOF'
+
+# Cortex Terminal Watch Hook
+__cortex_last_histnum=""
+__cortex_log_cmd() {
+ local histnum="$(history 1 | awk '{print $1}')"
+ [[ "$histnum" == "$__cortex_last_histnum" ]] && return
+ __cortex_last_histnum="$histnum"
+
+ local cmd="$(history 1 | sed "s/^[ ]*[0-9]*[ ]*//")"
+ [[ -z "${cmd// /}" ]] && return
+ [[ "$cmd" == cortex* ]] && return
+ [[ "$cmd" == *"source"*".cortex"* ]] && return
+ [[ "$cmd" == *"watch_hook"* ]] && return
+ [[ -n "$CORTEX_TERMINAL" ]] && return
+
+ local tty_name="$(tty 2>/dev/null | sed 's|/dev/||' | tr '/' '_')"
+ echo "${tty_name:-unknown}|$cmd" >> ~/.cortex/terminal_watch.log
+}
+export PROMPT_COMMAND='history -a; __cortex_log_cmd'
+
+alias cw="source ~/.cortex/watch_hook.sh"
+EOF
+ print_success "Hook added to .bashrc"
+ else
+ print_success "Hook already in .bashrc"
+ fi
+fi
+
+# Check API keys
+print_header "Checking API Keys"
+
+HAS_API_KEY=false
+if [ -n "$ANTHROPIC_API_KEY" ]; then
+ print_success "ANTHROPIC_API_KEY found in environment"
+ HAS_API_KEY=true
+fi
+if [ -n "$OPENAI_API_KEY" ]; then
+ print_success "OPENAI_API_KEY found in environment"
+ HAS_API_KEY=true
+fi
+if [ -f ".env" ]; then
+ if grep -q "ANTHROPIC_API_KEY" .env || grep -q "OPENAI_API_KEY" .env; then
+ print_success "API key(s) found in .env file"
+ HAS_API_KEY=true
+ fi
+fi
+
+if [ "$HAS_API_KEY" = false ] && [ "$NO_DOCKER" = true ]; then
+ print_warning "No API keys found and Ollama not set up"
+ echo -e " ${DIM}Set ANTHROPIC_API_KEY or OPENAI_API_KEY for cloud LLM${NC}"
+fi
+
+# Verify
+print_header "Verification"
+
+print_step "Checking cortex command..."
+if cortex --version &> /dev/null; then
+ print_success "Cortex: $(cortex --version 2>&1)"
+else
+ print_error "Cortex command not working"
+fi
+
+print_step "Checking watch service..."
+if systemctl --user is-active cortex-watch.service &> /dev/null; then
+ print_success "Watch service: running"
+else
+ print_warning "Watch service: not running"
+fi
+
+if [ "$NO_DOCKER" = false ]; then
+ print_step "Checking Ollama..."
+ if docker ps --format '{{.Names}}' | grep -q '^ollama$'; then
+ print_success "Ollama: running"
+ MODELS=$(docker exec ollama ollama list 2>/dev/null | tail -n +2 | awk '{print $1}' | tr '\n' ', ' | sed 's/,$//')
+ if [ -n "$MODELS" ]; then
+ print_success "Models: $MODELS"
+ fi
+ else
+ print_warning "Ollama: not running"
+ fi
+fi
+
+# Final message
+print_header "Setup Complete! 🎉"
+
+echo -e "${GREEN}Everything is ready!${NC}"
+echo ""
+echo -e "${BOLD}To use Cortex ask --do:${NC}"
+echo " cortex ask --do"
+echo ""
+echo -e "${BOLD}To start an interactive session:${NC}"
+echo " cortex ask --do \"install nginx and configure it\""
+echo ""
+echo -e "${BOLD}For terminal monitoring in existing terminals:${NC}"
+echo " source ~/.cortex/watch_hook.sh"
+echo -e " ${DIM}(or just type 'cw' after opening a new terminal)${NC}"
+echo ""
+echo -e "${BOLD}To check status:${NC}"
+echo " cortex watch --status"
+echo ""
+