Getting Started

Prerequisites

  • .NET 10.0 SDK or later
  • At least one AI provider configured (see below)

Installation

Install JD.AI as a global .NET tool:

dotnet tool install --global JD.AI

To update to the latest version:

dotnet tool update --global JD.AI

First run

Launch JD.AI in any project directory:

cd /path/to/your/project
jdai

On startup, JD.AI:

  1. Checks for available AI providers
  2. Displays detected providers and models
  3. Selects the best available provider
  4. Loads project instructions (JDAI.md, CLAUDE.md, etc.)
  5. Shows the welcome banner
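
Project instruction files give the model persistent context about your codebase. The exact format and location are not specified here, so the following is a hypothetical sketch of what a JDAI.md kept in the project directory might contain:

```markdown
<!-- JDAI.md — hypothetical example of project instructions -->
# Project instructions

- This is a C# solution; prefer idiomatic .NET patterns.
- Run `dotnet test` before proposing code changes.
- Keep answers concise and reference file paths explicitly.
```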

[Screenshot: JD.AI startup showing provider detection]

Provider setup

You need at least one AI provider. JD.AI auto-detects all available providers.

Claude Code

  1. Install the Claude Code CLI:
    npm install -g @anthropic-ai/claude-code
    
  2. Authenticate:
    claude auth login
    
  3. JD.AI detects the session automatically on next launch.

GitHub Copilot

  1. Authenticate via GitHub CLI:
    gh auth login --scopes copilot
    
    Or sign in through the VS Code GitHub Copilot extension.
  2. JD.AI detects available Copilot models automatically.

OpenAI Codex

  1. Install the Codex CLI:
    npm install -g @openai/codex
    
  2. Authenticate:
    codex auth login
    
    Or set the OPENAI_API_KEY environment variable directly.
  3. JD.AI detects the session automatically on next launch.
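
If you prefer the environment-variable route, export the key in your shell before launching JD.AI. The value below is a placeholder, not a real key:

```shell
# Hypothetical placeholder — substitute your actual OpenAI API key.
export OPENAI_API_KEY="sk-your-key-here"
```

Add the same line to your shell profile (e.g. ~/.bashrc or ~/.zshrc) if you want it to persist across sessions.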

Ollama (local, free)

  1. Install Ollama from ollama.com
  2. Start the server:
    ollama serve
    
  3. Pull a chat model:
    ollama pull llama3.2
    
  4. Optionally pull an embedding model for semantic memory:
    ollama pull all-minilm
    

Local models (fully standalone)

No external service is needed: JD.AI runs GGUF models directly in-process via LLamaSharp:

  1. Place .gguf model files in ~/.jdai/models/ (or any directory).
  2. JD.AI detects them automatically on startup.
  3. Or use the interactive commands to search and download:
    /local search llama 7b
    /local download TheBloke/TinyLlama-1.1B-Chat-v1.0-GGUF
    

See Local Models for the full guide.
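
To prepare for local models ahead of time, you can create the default directory that JD.AI scans on startup and check what is already there. This is a minimal sketch assuming the ~/.jdai/models/ path from the steps above:

```shell
# Create the default model directory JD.AI scans on startup.
mkdir -p ~/.jdai/models

# List any GGUF files already present (prints nothing if the directory is empty).
ls ~/.jdai/models/ | grep '\.gguf$' || true
```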

Switching providers and models

/providers     # List all detected providers with status
/provider      # Show current provider and model
/models        # List all available models across providers
/model <name>  # Switch to a specific model
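
A typical switching workflow inside the session might look like the following. The model name is illustrative (it reuses the llama3.2 model pulled in the Ollama steps); use whatever /models actually lists on your machine:

```
/providers          # see which providers were detected
/models             # browse the model names they expose
/model llama3.2     # switch to one of the listed names
```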

CLI options

Flag                             Description
--resume <id>                    Resume a previous session by ID
--new                            Start a fresh session
--force-update-check             Force NuGet update check
--dangerously-skip-permissions   Skip all tool confirmations
--gateway                        Start in gateway mode
--gateway-port <port>            Port for gateway API (default: 5100)
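
Combining the flags above, a few illustrative invocations (the session ID is a placeholder you would take from your own session history):

```
jdai --new                          # always start a fresh session
jdai --resume <id>                  # pick up where a previous session left off
jdai --gateway --gateway-port 8080  # serve the gateway API on port 8080 instead of 5100
```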

What's next