AI Providers
Critiq supports Claude, ChatGPT, Gemini, GitHub Copilot, Custom (OpenAI-compatible), and Local models.
AI-powered features like triage and review are only visible when a provider other than None is selected in Settings → AI Provider.
AI Review
Critiq's AI Review analyzes your code changes with full context awareness. Instead of reviewing diffs in isolation, the AI understands the surrounding codebase—including callers, dependencies, and related tests—to provide more accurate and actionable feedback.
When you request an AI review, Critiq builds rich context for each changed symbol using a combination of static analysis and language intelligence:
- Symbol boundaries - Tree-sitter parsing identifies exact function, class, and method boundaries so reviews focus on complete logical units rather than arbitrary line ranges.
- Caller discovery - LSP integration finds references to changed code across your codebase, helping the AI understand the blast radius of modifications.
- Test coverage - Critiq locates test files that exercise changed code, giving the AI visibility into existing test coverage.
This context is compiled once and cached per comparison, so triage and detailed review share the same analysis without redundant computation.
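Conceptually, the cached per-symbol context can be pictured as a simple record keyed by symbol, compiled on first access and reused by both triage and detailed review. The sketch below is illustrative only — the field and function names are hypothetical, not Critiq's internal data model:

```python
from dataclasses import dataclass, field

@dataclass
class SymbolContext:
    """Illustrative shape of the context compiled per changed symbol."""
    symbol: str                                           # name from Tree-sitter parsing
    start_line: int                                       # exact symbol boundary,
    end_line: int                                         # not an arbitrary diff hunk
    callers: list[str] = field(default_factory=list)      # references found via LSP
    test_files: list[str] = field(default_factory=list)   # tests exercising the symbol

# A per-comparison cache lets triage and detailed review share one analysis.
_context_cache: dict[str, SymbolContext] = {}

def get_context(symbol: str) -> SymbolContext:
    """Compile context once per symbol, then reuse it (memoization)."""
    if symbol not in _context_cache:
        # Placeholder "compile" step; real analysis would fill in boundaries,
        # callers, and test files here.
        _context_cache[symbol] = SymbolContext(symbol, 0, 0)
    return _context_cache[symbol]
```

The cache is scoped to a single comparison, so re-running triage or requesting a detailed review does not repeat the static analysis.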
Risk Levels
Before detailed review, AI triage classifies each change by risk level to help you prioritize review effort:
- Critical - Security-sensitive changes, authentication modifications, or high-impact business logic affecting many callers.
- Needs Review - Significant logic changes, API modifications, or code affecting external integrations.
- Minor - Small improvements, refactoring, or changes with limited scope.
- Low Impact - Documentation, formatting, or trivial modifications.
Triage decisions factor in caller count and test coverage—changes affecting many callers or lacking test coverage trend toward higher priority.
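As a rough illustration of how those two signals could feed into a classification (a hedged sketch only — the thresholds and scoring are invented for this example, not Critiq's actual triage logic):

```python
def triage(caller_count: int, has_tests: bool, security_sensitive: bool) -> str:
    """Toy risk classifier mirroring the factors described above."""
    if security_sensitive:
        return "Critical"            # security-sensitive changes always surface first
    score = caller_count
    if not has_tests:
        score += 5                   # missing test coverage raises priority
    if score >= 10:
        return "Needs Review"        # wide blast radius or untested logic
    if score >= 3:
        return "Minor"
    return "Low Impact"
```

A change with a dozen callers lands in Needs Review even when tests exist, while a small, well-tested change stays Low Impact.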
Configuration
After selecting an AI provider in Settings → AI Provider, choose how review should run:
- Manual triage - Right-click any file and choose Triage File to classify changes on demand.
- Full review - Request detailed AI review from the file context menu or review panel for in-depth analysis with inline comments.
- Auto-triage - See Auto-triage files below for the automatic classification workflow and defaults.
Auto-triage files
In Settings → Additional → Code Review, enable Auto-triage files to automatically analyze all files when loading a comparison or PR. Results appear in the triage panel as they complete. This requires an AI provider to be configured.
Auto-triage is disabled by default. This setting only controls automatic runs; manual triage from Triage File remains available.
Custom Provider (OpenAI-Compatible APIs)
The Custom provider works with any API that accepts OpenAI-compatible chat completion requests. This includes services like AWS Bedrock, Azure OpenAI, Ollama, LM Studio, and self-hosted endpoints.
How It Works
When you select the Custom provider, Critiq sends requests to your specified endpoint using
standard OpenAI chat completion format with Bearer {apiKey} authentication.
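A minimal sketch of such a request, built with the standard library (the endpoint, key, and model values are placeholders; the exact payload Critiq sends may include additional fields):

```python
import json
from urllib import request

def build_request(endpoint: str, api_key: str, model: str, prompt: str) -> request.Request:
    """Build an OpenAI-compatible chat completion request with Bearer auth."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return request.Request(
        endpoint,
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",  # Bearer {apiKey} authentication
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request(
    "http://localhost:11434/v1/chat/completions",  # e.g. a local Ollama endpoint
    "sk-placeholder",
    "llama3",
    "Review this diff.",
)
```

Any service that accepts this request shape — and returns the matching chat completion response — can be used as a Custom provider.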
Configuration
- Go to Settings → AI Provider
- Select Custom as the provider
- Enter your API Key (used as Bearer token)
- Enter the Custom Endpoint URL (the chat completions endpoint)
- Enter your Model name as expected by the API
Example: AWS Bedrock
AWS Bedrock supports an OpenAI-compatible API. To use it with Critiq:
- API Key: Your Amazon Bedrock API key (generate in AWS Console → Bedrock → API keys)
- Endpoint: `https://bedrock-runtime.{region}.amazonaws.com/openai/v1/chat/completions`
- Model: `anthropic.claude-3-5-sonnet-20241022-v2:0` (or another Bedrock model ID)
Replace `{region}` with your AWS region (e.g., `us-east-1`, `us-west-2`).
Example: Local Models (Ollama, LM Studio)
- API Key: Leave empty or use any placeholder
- Endpoint: `http://localhost:11434/v1/chat/completions` (Ollama) or `http://localhost:1234/v1/chat/completions` (LM Studio)
- Model: Your local model name (e.g., `llama3`, `codellama`)
Example: Azure OpenAI
- API Key: Your Azure OpenAI API key
- Endpoint: `https://{resource}.openai.azure.com/openai/deployments/{deployment}/chat/completions?api-version=2024-02-01`
- Model: Your deployment name
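To make the placeholder substitution concrete, the snippet below fills in the endpoint templates from the examples above (the Azure resource and deployment names are hypothetical; substitute your own):

```python
def bedrock_endpoint(region: str) -> str:
    """Bedrock's OpenAI-compatible endpoint: only the region varies."""
    return f"https://bedrock-runtime.{region}.amazonaws.com/openai/v1/chat/completions"

def azure_endpoint(resource: str, deployment: str, api_version: str = "2024-02-01") -> str:
    """Azure OpenAI routes by resource and deployment, with an api-version query param."""
    return (
        f"https://{resource}.openai.azure.com/openai/deployments/"
        f"{deployment}/chat/completions?api-version={api_version}"
    )

print(bedrock_endpoint("us-east-1"))
print(azure_endpoint("my-resource", "my-deployment"))  # hypothetical names
```

Whichever endpoint you build, it goes into the Custom Endpoint URL field, and for Azure the Model field carries the deployment name rather than an underlying model ID.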