Knob Registry (Raw)

This is the complete configuration surface area: every tunable knob exposed by the Pydantic config models, with tooltips layered in where they exist.

Use search to jump to a setting, then copy the dot-path into your config file or the UI. Where a setting also has an environment-variable form, the matching env key is shown.
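
Dot-paths address nested fields in the config models. As a minimal sketch (the helper name is illustrative, not part of the actual API), resolving one against a nested dict looks like:

```python
from functools import reduce

def get_by_dot_path(config: dict, dot_path: str):
    # Walk one level of nesting per dot-separated segment.
    return reduce(lambda node, key: node[key], dot_path.split("."), config)

# Hypothetical nested config mirroring the registry's structure.
config = {"chat": {"recall_gate": {"light_top_k": 3, "enabled": True}}}
value = get_by_dot_path(config, "chat.recall_gate.light_top_k")  # 3
```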

chat (78)

dot-path → env key (if available) → type → default
  • chat.benchmark.enabled
    env: —
    boolean
    default: True

    No description in OpenAPI yet.

  • chat.benchmark.include_cost_tracking
    env: —
    boolean
    default: True

    No description in OpenAPI yet.

  • chat.benchmark.include_timing_breakdown
    env: —
    boolean
    default: True

    No description in OpenAPI yet.

  • chat.benchmark.max_concurrent_models
    env: —
    number
    default: 4

    No description in OpenAPI yet.

  • chat.benchmark.results_path
    env: —
    string
    default: "data/benchmarks/"

    No description in OpenAPI yet.

  • chat.benchmark.save_results
    env: —
    boolean
    default: True

    No description in OpenAPI yet.

  • chat.default_corpus_ids
    env: —
    string[]
    default: ["epstein-files-1"]

    Default checked user-facing corpus IDs for new conversations.

  • chat.image_gen.comfyui_api_endpoint
    env: —
    string
    default: ""

    No description in OpenAPI yet.

  • chat.image_gen.default_resolution
    env: —
    string
    default: "1024x1024"

    No description in OpenAPI yet.

  • chat.image_gen.default_steps
    env: —
    number
    default: 8

    No description in OpenAPI yet.

  • chat.image_gen.enabled
    env: —
    boolean
    default: False

    No description in OpenAPI yet.

  • chat.image_gen.local_command
    env: —
    string
    default: "python -m qwen_image.generate"

    CLI command. Receives --prompt, --output, --steps, --width, --height.
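
As a hedged sketch of how a frontend might shell out to this command (the helper is hypothetical; only the flags listed above come from the config), building the argv could look like:

```python
import shlex

def build_image_gen_argv(local_command: str, prompt: str, output: str,
                         steps: int, width: int, height: int) -> list[str]:
    # Append the documented flags to the configured base command.
    return shlex.split(local_command) + [
        "--prompt", prompt,
        "--output", output,
        "--steps", str(steps),
        "--width", str(width),
        "--height", str(height),
    ]

# Defaults from this registry: 8 steps, 1024x1024.
argv = build_image_gen_argv("python -m qwen_image.generate",
                            "a lighthouse at dusk", "out.png", 8, 1024, 1024)
```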

  • chat.image_gen.local_model_path
    env: —
    string
    default: ""

    No description in OpenAPI yet.

  • chat.image_gen.provider
    env: —
    string
    default: "local"

    No description in OpenAPI yet.

  • chat.image_gen.replicate_model
    env: —
    string
    default: ""

    No description in OpenAPI yet.

  • chat.image_gen.use_lightning_lora
    env: —
    boolean
    default: True

    No description in OpenAPI yet.

  • chat.local_models.auto_detect
    env: —
    boolean
    default: True

    No description in OpenAPI yet.

  • chat.local_models.default_chat_model
    env: —
    string
    default: "qwen3:8b"

    No description in OpenAPI yet.

  • chat.local_models.default_embedding_model
    env: —
    string
    default: "nomic-embed-text"

    No description in OpenAPI yet.

  • chat.local_models.default_vision_model
    env: —
    string
    default: "qwen3-vl:8b"

    No description in OpenAPI yet.

  • chat.local_models.fallback_to_cloud
    env: —
    boolean
    default: True

    No description in OpenAPI yet.

  • chat.local_models.gpu_memory_limit_gb
    env: —
    number
    default: 0

    No description in OpenAPI yet.

  • chat.local_models.health_check_interval
    env: —
    number
    default: 30

    No description in OpenAPI yet.

  • chat.local_models.providers[].base_url
    env: —
    string
    default: —

    Provider API endpoint

  • chat.local_models.providers[].enabled
    env: —
    boolean
    default: True

    No description in OpenAPI yet.

  • chat.local_models.providers[].name
    env: —
    string
    default: —

    Display name

  • chat.local_models.providers[].priority
    env: —
    number
    default: 0

    Lower = higher priority when multiple have same model.
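
A minimal sketch of the tie-break this implies (the `models` field here is an assumption for illustration; the registry only documents `base_url`, `enabled`, `name`, `priority`, and `provider_type`):

```python
def pick_provider(providers: list[dict], model: str):
    # Among enabled providers serving the model, the lowest priority value wins.
    candidates = [p for p in providers if p["enabled"] and model in p["models"]]
    return min(candidates, key=lambda p: p["priority"]) if candidates else None

providers = [
    {"name": "Ollama",    "enabled": True, "priority": 1, "models": {"qwen3:8b"}},
    {"name": "LM Studio", "enabled": True, "priority": 0, "models": {"qwen3:8b"}},
]
chosen = pick_provider(providers, "qwen3:8b")  # LM Studio: priority 0 beats 1
```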

  • chat.local_models.providers[].provider_type
    env: —
    string
    default: —

    No description in OpenAPI yet.

  • chat.max_tokens
    env: —
    number
    default: 4096

    No description in OpenAPI yet.

  • chat.multimodal.image_detail
    env: —
    string
    default: "auto"

    OpenAI vision detail level.

  • chat.multimodal.max_image_size_mb
    env: —
    number
    default: 20

    No description in OpenAPI yet.

  • chat.multimodal.max_images_per_message
    env: —
    number
    default: 5

    No description in OpenAPI yet.

  • chat.multimodal.supported_formats
    env: —
    string[]
    default: ["png", "jpg", "jpeg", "gif", "webp"]

    No description in OpenAPI yet.

  • chat.multimodal.vision_enabled
    env: —
    boolean
    default: True

    No description in OpenAPI yet.

  • chat.multimodal.vision_model_override
    env: —
    string
    default: ""

    Force model for vision. Empty=use chat model if it supports vision.

  • chat.openai_protocol
    env: —
    "auto" | "responses" | "chat_completions"
    default: "auto"

    Protocol for OpenAI cloud_direct calls. 'auto' routes codex-only models to Responses.

  • chat.openrouter.api_key
    env: —
    string
    default: ""

    No description in OpenAPI yet.

  • chat.openrouter.base_url
    env: —
    string
    default: "https://openrouter.ai/api/v1"

    No description in OpenAPI yet.

  • chat.openrouter.default_model
    env: —
    string
    default: "anthropic/claude-sonnet-4"

    No description in OpenAPI yet.

  • chat.openrouter.enabled
    env: —
    boolean
    default: False

    No description in OpenAPI yet.

  • chat.openrouter.fallback_models
    env: —
    string[]
    default: ["openai/gpt-4o", "google/gemini-2.0-flash"]

    No description in OpenAPI yet.

  • chat.openrouter.site_name
    env: —
    string
    default: "TriBridRAG"

    No description in OpenAPI yet.

  • chat.recall_gate.deep_on_explicit_reference
    env: —
    boolean
    default: True

    Trigger deep when message explicitly references past conversation.

  • chat.recall_gate.deep_recency_weight
    env: —
    number
    default: 0.5

    recency_weight for deep (higher when user explicitly asks about the past).

  • chat.recall_gate.deep_top_k
    env: —
    number
    default: 10

    top_k when intensity=deep.

  • chat.recall_gate.default_intensity
    env: —
    RecallIntensity
    default: "standard"

    Fallback when classifier is uncertain.

  • chat.recall_gate.enabled
    env: —
    boolean
    default: True

    Enable smart gating. False=always query Recall when checked.

  • chat.recall_gate.light_for_short_questions
    env: —
    boolean
    default: True

    Use sparse-only for short questions (< 10 tokens) without explicit recall triggers.

  • chat.recall_gate.light_top_k
    env: —
    number
    default: 3

    top_k when intensity=light.

  • chat.recall_gate.show_gate_decision
    env: —
    boolean
    default: True

    Show gate decision (intensity, reason) in status bar.

  • chat.recall_gate.show_signals
    env: —
    boolean
    default: False
    internal/basics

    Show raw RecallSignals in debug footer (dev mode).

  • chat.recall_gate.skip_greetings
    env: —
    boolean
    default: True

    Skip Recall for greetings, farewells, acknowledgments.

  • chat.recall_gate.skip_max_tokens
    env: —
    number
    default: 4

    Messages with ≤ this many tokens are skip candidates (only if they match a skip pattern).

  • chat.recall_gate.skip_standalone_questions
    env: —
    boolean
    default: True

    Skip Recall for questions that don't reference past context. 'How does auth work?' doesn't need chat history.

  • chat.recall_gate.skip_when_rag_active
    env: —
    boolean
    default: False

    Skip Recall when RAG corpora are checked. Assumes user wants code context, not chat history. Default False — let both contribute.

  • chat.recall_gate.standard_recency_weight
    env: —
    number
    default: 0.3

    Default recency weight for Recall (recent messages often more relevant).

  • chat.recall_gate.standard_top_k
    env: —
    number
    default: 5

    top_k for standard Recall queries.
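
Taken together, the gate knobs above describe a decision ladder: skip trivial messages, go deep on explicit references to the past, go light on short standalone questions, otherwise standard. A rough sketch under those defaults (the greeting and trigger lists are illustrative placeholders, not the real classifier):

```python
GREETING_PATTERNS = {"hi", "hello", "thanks", "bye"}            # illustrative
PAST_TRIGGERS = ("earlier", "last time", "you said", "before")  # illustrative

def gate(message: str, skip_max_tokens=4, light_top_k=3,
         standard_top_k=5, deep_top_k=10):
    tokens = message.lower().split()
    text = " ".join(tokens)
    # Skip: short message that matches a skip pattern.
    if len(tokens) <= skip_max_tokens and tokens and tokens[0].strip(".,!") in GREETING_PATTERNS:
        return ("skip", 0)
    # Deep: explicit reference to past conversation.
    if any(trigger in text for trigger in PAST_TRIGGERS):
        return ("deep", deep_top_k)
    # Light: short question with no recall trigger.
    if len(tokens) < 10:
        return ("light", light_top_k)
    return ("standard", standard_top_k)
```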

  • chat.recall.auto_index
    env: —
    boolean
    default: True

    No description in OpenAPI yet.

  • chat.recall.chunk_max_tokens
    env: —
    number
    default: 256

    Chat chunks should be smaller than code chunks.

  • chat.recall.chunking_strategy
    env: —
    string
    default: "sentence"

    'turn'=one chunk per message, 'sentence'=split by sentence.

  • chat.recall.default_corpus_id
    env: —
    string
    default: "recall_default"
    internal/basics

    Auto-created at first launch. Users never touch this.

  • chat.recall.embedding_model
    env: —
    string
    default: ""

    Override embedding model. Empty=use global config.

  • chat.recall.enabled
    env: —
    boolean
    default: True

    Enable Recall. ON by default.

  • chat.recall.graph_enabled
    env: —
    boolean
    default: False
    internal/basics

    Enable Recall graph indexing + retrieval (experimental).

  • chat.recall.index_delay_seconds
    env: —
    number
    default: 5

    No description in OpenAPI yet.

  • chat.recall.max_history_tokens
    env: —
    number
    default: 4096

    No description in OpenAPI yet.

  • chat.recall.vector_backend
    env: —
    string
    default: "pgvector"

    pgvector recommended (already running).

  • chat.send_shortcut
    env: —
    string
    default: "ctrl+enter"

    No description in OpenAPI yet.

  • chat.show_source_dropdown
    env: —
    boolean
    default: True

    No description in OpenAPI yet.

  • chat.system_prompt_base
    env: —
    string
    default: "You are a helpful assistant."

    No description in OpenAPI yet.

  • chat.system_prompt_direct
    env: —
    string
    default: "You are a helpful agentic RAG database assistan..."

    State 1: No context. Nothing checked or retrieval returned empty.

  • chat.system_prompt_rag
    env: —
    string
    default: "You are a database assistant powered by TriBrid..."

    State 2: RAG only. Code corpora returned results; Recall did not.

  • chat.system_prompt_rag_and_recall
    env: —
    string
    default: "You are an agentic RAG database assistant power..."

    State 4: Both. RAG and Recall both returned results.

  • chat.system_prompt_rag_suffix
    env: —
    string
    default: " Answer questions using the provided database i..."

    No description in OpenAPI yet.

  • chat.system_prompt_recall
    env: —
    string
    default: "You are an agentic RAG database assistant power..."

    State 3: Recall only. Recall returned results; no RAG corpora active.

  • chat.system_prompt_recall_suffix
    env: —
    string
    default: " You have access to conversation history. Refer..."

    No description in OpenAPI yet.

  • chat.temperature
    env: —
    number
    default: 0.3

    No description in OpenAPI yet.

  • chat.temperature_no_retrieval
    env: —
    number
    default: 0.7

    Temperature when nothing is checked (direct chat = more creative)
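
The four `system_prompt_*` states above reduce to a two-bit decision on which retrieval paths returned results. A minimal sketch (the function and prompt keys are illustrative, not the project's API):

```python
def pick_system_prompt(rag_hit: bool, recall_hit: bool, prompts: dict) -> str:
    # The four documented states, keyed on what retrieval returned.
    if rag_hit and recall_hit:
        return prompts["rag_and_recall"]  # State 4: both contributed
    if rag_hit:
        return prompts["rag"]             # State 2: RAG only
    if recall_hit:
        return prompts["recall"]          # State 3: Recall only
    return prompts["direct"]              # State 1: no context

prompts = {"direct": "D", "rag": "G", "recall": "R", "rag_and_recall": "B"}
```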

chunk_summaries (8)

dot-path → env key (if available) → type → default
  • chunk_summaries.code_snippet_length
    env: CHUNK_SUMMARIES_CODE_SNIPPET_LENGTH
    number
    default: 2000

    Max code snippet length in semantic chunk_summaries

  • chunk_summaries.exclude_dirs
    env: —
    string[]
    default: —

    Directories to skip when building chunk_summaries

  • chunk_summaries.exclude_keywords
    env: —
    string[]
    default: —

    Keywords that, when present in code, skip the chunk

  • chunk_summaries.exclude_patterns
    env: —
    string[]
    default: —

    File patterns/extensions to skip

  • chunk_summaries.max_routes
    env: CHUNK_SUMMARIES_MAX_ROUTES
    number
    default: 5

    Max API routes to include per chunk_summary

  • chunk_summaries.max_symbols
    env: CHUNK_SUMMARIES_MAX_SYMBOLS
    number
    default: 5

    Max symbols to include per chunk_summary

  • chunk_summaries.purpose_max_length
    env: CHUNK_SUMMARIES_PURPOSE_MAX_LENGTH
    number
    default: 240

    Max length for purpose field in chunk_summaries

  • chunk_summaries.quick_tips
    env: —
    string[]
    default: —

    Quick tips shown in chunk_summaries builder UI

chunking (18)

dot-path → env key (if available) → type → default
  • chunking.ast_overlap_lines
    env: AST_OVERLAP_LINES
    number
    default: 20

    Overlap lines for AST chunking

  • chunking.chunk_overlap
    env: CHUNK_OVERLAP
    number
    default: 200

    Overlap between chunks

  • chunking.chunk_size
    env: CHUNK_SIZE
    number
    default: 1000

    Target chunk size (non-whitespace chars)

  • chunking.chunking_strategy
    env: CHUNKING_STRATEGY
    string
    default: "ast"

    Chunking strategy (document + code)

  • chunking.emit_chunk_ordinal
    env: —
    boolean
    default: True

    Emit chunk ordinal metadata for neighbor-window retrieval.

  • chunking.emit_parent_doc_id
    env: —
    boolean
    default: True

    Emit parent document id metadata for neighbor-window retrieval.

  • chunking.greedy_fallback_target
    env: GREEDY_FALLBACK_TARGET
    number
    default: 800

    Target size for greedy chunking

  • chunking.markdown_include_code_fences
    env: —
    boolean
    default: True

    Whether to include fenced code blocks in markdown sections.

  • chunking.markdown_max_heading_level
    env: —
    number
    default: 4

    Max heading level to split on for markdown chunking.

  • chunking.max_chunk_tokens
    env: MAX_CHUNK_TOKENS
    number
    default: 8000

    Maximum tokens per chunk - chunks exceeding this are split recursively

  • chunking.max_indexable_file_size
    env: MAX_INDEXABLE_FILE_SIZE
    number
    default: 250000000

    Max file size to index (bytes) - files larger than this are skipped

  • chunking.min_chunk_chars
    env: MIN_CHUNK_CHARS
    number
    default: 50

    Minimum chunk size

  • chunking.overlap_tokens
    env: —
    number
    default: 64

    Token overlap between chunks (token-based strategies)

  • chunking.preserve_imports
    env: PRESERVE_IMPORTS
    number
    default: 1

    Include imports in chunks

  • chunking.recursive_max_depth
    env: —
    number
    default: 10

    Max recursion depth for recursive chunking.

  • chunking.separator_keep
    env: —
    "none" | "prefix" | "suffix"
    default: "suffix"

    Whether to keep separators when splitting (recursive strategy).

  • chunking.separators
    env: —
    string[]
    default: —

    Separators for recursive chunking, in priority order.

  • chunking.target_tokens
    env: —
    number
    default: 512

    Target tokens per chunk (token-based strategies)
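
The recursive strategy knobs above (`separators` in priority order, `separator_keep`, a target size) can be sketched roughly as follows; this toy version measures characters rather than tokens and is not the project's splitter:

```python
def split_keep_suffix(text: str, sep: str) -> list[str]:
    # separator_keep='suffix': the separator stays attached to the piece before it.
    parts = text.split(sep)
    return [p + sep for p in parts[:-1]] + ([parts[-1]] if parts[-1] else [])

def recursive_chunk(text: str, separators: list[str], target: int) -> list[str]:
    # Try separators in priority order; recurse with the next separator
    # whenever a buffered piece is still over the target size.
    if len(text) <= target or not separators:
        return [text]
    sep, rest = separators[0], separators[1:]
    chunks, buf = [], ""
    for piece in split_keep_suffix(text, sep):
        if buf and len(buf) + len(piece) > target:
            chunks.extend(recursive_chunk(buf, rest, target))
            buf = ""
        buf += piece
    if buf:
        chunks.extend(recursive_chunk(buf, rest, target))
    return chunks
```

Note the suffix-keeping split means concatenating the chunks reproduces the original text exactly.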

docker (11)

dot-path → env key (if available) → type → default
  • docker.dev_backend_port
    env: DEV_BACKEND_PORT
    number
    default: 8012

    Port for dev backend (Uvicorn)

  • docker.dev_frontend_port
    env: DEV_FRONTEND_PORT
    number
    default: 5173

    Port for dev frontend (Vite)

  • docker.dev_stack_restart_timeout
    env: DEV_STACK_RESTART_TIMEOUT
    number
    default: 30

    Timeout for dev stack restart operations (seconds)

  • docker.docker_container_action_timeout
    env: DOCKER_CONTAINER_ACTION_TIMEOUT
    number
    default: 30

    Timeout for Docker container actions (start/stop/restart)

  • docker.docker_container_list_timeout
    env: DOCKER_CONTAINER_LIST_TIMEOUT
    number
    default: 10

    Timeout for Docker container list (seconds)

  • docker.docker_host
    env: DOCKER_HOST
    string
    default: ""

    Docker socket URL (e.g., unix:///var/run/docker.sock). Leave empty for auto-detection.

  • docker.docker_infra_down_timeout
    env: DOCKER_INFRA_DOWN_TIMEOUT
    number
    default: 30

    Timeout for Docker infrastructure down command (seconds)

  • docker.docker_infra_up_timeout
    env: DOCKER_INFRA_UP_TIMEOUT
    number
    default: 60

    Timeout for Docker infrastructure up command (seconds)

  • docker.docker_logs_tail
    env: DOCKER_LOGS_TAIL
    number
    default: 100
    internal/basics

    Default number of log lines to tail from containers

  • docker.docker_logs_timestamps
    env: DOCKER_LOGS_TIMESTAMPS
    number
    default: 1

    Include timestamps in Docker logs (1=yes, 0=no)

  • docker.docker_status_timeout
    env: DOCKER_STATUS_TIMEOUT
    number
    default: 5

    Timeout for Docker status check (seconds)

embedding (18)

dot-path → env key (if available) → type → default
  • embedding.auto_set_dimensions
    env: —
    boolean
    default: True

    When true, the UI auto-syncs embedding_dim from data/models.json when model changes.

  • embedding.contextual_chunk_embeddings
    env: —
    "off" | "prepend_context" | "late_chunking_local_only"
    default: "off"

    Contextual chunk embedding mode. 'late_chunking_local_only' requires local/HF provider backend.

  • embedding.embed_text_prefix
    env: —
    string
    default: ""

    Prefix added before chunk text prior to embedding (stable document context).

  • embedding.embed_text_suffix
    env: —
    string
    default: ""

    Suffix added after chunk text prior to embedding.

  • embedding.embedding_backend
    env: —
    "deterministic" | "provider"
    default: "deterministic"

    Embedding execution backend. 'deterministic' is offline/test-friendly; 'provider' calls real providers.

  • embedding.embedding_batch_size
    env: EMBEDDING_BATCH_SIZE
    number
    default: 64

    Batch size for embedding generation

  • embedding.embedding_cache_enabled
    env: EMBEDDING_CACHE_ENABLED
    number
    default: 1

    Enable embedding cache

  • embedding.embedding_dim
    env: EMBEDDING_DIM
    number
    default: 3072

    Embedding dimensions

  • embedding.embedding_max_tokens
    env: EMBEDDING_MAX_TOKENS
    number
    default: 8000

    Max tokens per embedding chunk

  • embedding.embedding_model
    env: EMBEDDING_MODEL
    string
    default: "text-embedding-3-large"

    OpenAI embedding model

  • embedding.embedding_model_local
    env: EMBEDDING_MODEL_LOCAL
    string
    default: "all-MiniLM-L6-v2"

    Local SentenceTransformer model

  • embedding.embedding_model_mlx
    env: —
    string
    default: "mlx-community/all-MiniLM-L6-v2-4bit"

    MLX-optimized embedding model (used when embedding_type=mlx)

  • embedding.embedding_retry_max
    env: EMBEDDING_RETRY_MAX
    number
    default: 3

    Max retries for embedding API

  • embedding.embedding_timeout
    env: EMBEDDING_TIMEOUT
    number
    default: 30

    Embedding API timeout (seconds)

  • embedding.embedding_type
    env: EMBEDDING_TYPE
    string
    default: "openai"

    Embedding provider (dynamic - validated against models.json at runtime)

  • embedding.input_truncation
    env: —
    "error" | "truncate_end" | "truncate_middle"
    default: "truncate_end"

    What to do when text exceeds embedding/token limits.

  • embedding.late_chunking_max_doc_tokens
    env: —
    number
    default: 8192

    Max tokens per document segment for local late chunking.

  • embedding.voyage_model
    env: VOYAGE_MODEL
    string
    default: "voyage-code-3"

    Voyage embedding model
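
The `embedding.input_truncation` modes above can be sketched on a pre-tokenized list (a toy stand-in; real token limits and tokenization come from the provider):

```python
def apply_truncation(tokens: list, max_tokens: int, mode: str = "truncate_end"):
    # Toy sketch of the three documented input_truncation modes.
    if len(tokens) <= max_tokens:
        return tokens
    if mode == "error":
        raise ValueError(f"{len(tokens)} tokens exceeds limit of {max_tokens}")
    if mode == "truncate_end":
        return tokens[:max_tokens]
    if mode == "truncate_middle":
        head = max_tokens // 2      # keep the start...
        tail = max_tokens - head    # ...and the end, drop the middle
        return tokens[:head] + tokens[-tail:]
    raise ValueError(f"unknown mode: {mode}")
```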

enrichment (6)

dot-path → env key (if available) → type → default
  • enrichment.chunk_summaries_enrich_default
    env: CHUNK_SUMMARIES_ENRICH_DEFAULT
    number
    default: 1

    Enable chunk_summary enrichment by default

  • enrichment.chunk_summaries_max
    env: CHUNK_SUMMARIES_MAX
    number
    default: 100

    Max chunk_summaries to generate

  • enrichment.enrich_code_chunks
    env: ENRICH_CODE_CHUNKS
    number
    default: 1

    Enable chunk enrichment

  • enrichment.enrich_max_chars
    env: ENRICH_MAX_CHARS
    number
    default: 1000

    Max chars for enrichment prompt

  • enrichment.enrich_min_chars
    env: ENRICH_MIN_CHARS
    number
    default: 50

    Min chars for enrichment

  • enrichment.enrich_timeout
    env: ENRICH_TIMEOUT
    number
    default: 30

    Enrichment timeout (seconds)

evaluation (8)

dot-path → env key (if available) → type → default
  • evaluation.baseline_path
    env: BASELINE_PATH
    string
    default: "data/evals/eval_baseline.json"

    Baseline results path

  • evaluation.eval_dataset_path
    env: EVAL_DATASET_PATH
    string
    default: "data/evaluation_dataset.json"

    Evaluation dataset path

  • evaluation.eval_multi_m
    env: EVAL_MULTI_M
    number
    default: 10

    Multi-query variants for evaluation

  • evaluation.ndcg_at_10_k
    env: —
    number
    default: 10

    K used for ndcg_at_10 metric (default 10).

  • evaluation.precision_at_5_k
    env: —
    number
    default: 5

    K used for precision_at_5 metric (default 5).

  • evaluation.recall_at_10_k
    env: —
    number
    default: 10

    K used for recall_at_10 metric (default 10).

  • evaluation.recall_at_20_k
    env: —
    number
    default: 20

    K used for recall_at_20 metric (default 20).

  • evaluation.recall_at_5_k
    env: —
    number
    default: 5

    K used for recall_at_5 metric (default 5).
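
The `*_at_k` knobs above only set the cutoff K; the metrics themselves are standard. A small reference sketch with binary relevance:

```python
import math

def recall_at_k(ranked: list, relevant: set, k: int) -> float:
    # Fraction of relevant documents found in the top-k results.
    if not relevant:
        return 0.0
    return sum(1 for doc in ranked[:k] if doc in relevant) / len(relevant)

def ndcg_at_k(ranked: list, relevant: set, k: int) -> float:
    # Binary-relevance nDCG: DCG of the ranking over the ideal DCG.
    dcg = sum(1.0 / math.log2(i + 2)
              for i, doc in enumerate(ranked[:k]) if doc in relevant)
    ideal = sum(1.0 / math.log2(i + 2) for i in range(min(len(relevant), k)))
    return dcg / ideal if ideal else 0.0
```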

fusion (6)

dot-path → env key (if available) → type → default
  • fusion.graph_weight
    env: FUSION_GRAPH_WEIGHT
    number
    default: 0.3

    Weight for graph search results (Neo4j)

  • fusion.method
    env: FUSION_METHOD
    "rrf" | "weighted"
    default: "rrf"

    Fusion method: 'rrf' (Reciprocal Rank Fusion) or 'weighted' (score-based)

  • fusion.normalize_scores
    env: FUSION_NORMALIZE_SCORES
    boolean
    default: True

    Normalize scores to [0,1] before fusion

  • fusion.rrf_k
    env: FUSION_RRF_K
    number
    default: 60

RRF smoothing constant (higher values flatten the weighting, reducing the advantage of top ranks)

  • fusion.sparse_weight
    env: FUSION_SPARSE_WEIGHT
    number
    default: 0.3

    Weight for sparse BM25/FTS search results

  • fusion.vector_weight
    env: FUSION_VECTOR_WEIGHT
    number
    default: 0.4

    Weight for vector search results (pgvector)
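
For reference, `rrf` combines per-backend rankings without looking at raw scores; only `rrf_k` matters. A compact sketch:

```python
from collections import defaultdict

def rrf_fuse(rankings: list[list[str]], k: int = 60) -> list[str]:
    # Reciprocal Rank Fusion: score(d) = sum over lists of 1 / (k + rank).
    # A larger k flattens the curve, shrinking the advantage of top ranks.
    scores = defaultdict(float)
    for ranked in rankings:
        for rank, doc in enumerate(ranked, start=1):
            scores[doc] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# 'b' is ranked highly by all three backends, so it wins after fusion.
fused = rrf_fuse([["a", "b", "c"], ["b", "a"], ["b", "d"]])
```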

generation (20)

dot-path → env key (if available) → type → default
  • generation.enrich_backend
    env: ENRICH_BACKEND
    string
    default: "openai"

    Enrichment backend

  • generation.enrich_disabled
    env: ENRICH_DISABLED
    number
    default: 0

    Disable code enrichment

  • generation.enrich_model
    env: ENRICH_MODEL
    string
    default: "gpt-4o-mini"

    Model for code enrichment

  • generation.enrich_model_ollama
    env: ENRICH_MODEL_OLLAMA
    string
    default: ""

    Ollama enrichment model

  • generation.gen_backend
    env: —
    string
    default: "openai"

    Provider backend for gen_model and channel overrides

  • generation.gen_max_tokens
    env: GEN_MAX_TOKENS
    number
    default: 2048

    Max tokens for generation

  • generation.gen_model
    env: GEN_MODEL
    string
    default: "gpt-4o-mini"

    Primary generation model

  • generation.gen_model_cli
    env: GEN_MODEL_CLI
    string
    default: "qwen3-coder:14b"
    internal/basics

    CLI generation model

  • generation.gen_model_http
    env: GEN_MODEL_HTTP
    string
    default: ""
    internal/basics

    HTTP transport generation model override

  • generation.gen_model_mcp
    env: GEN_MODEL_MCP
    string
    default: ""

    MCP transport generation model override

  • generation.gen_model_ollama
    env: GEN_MODEL_OLLAMA
    string
    default: "qwen3-coder:30b"

    Ollama generation model

  • generation.gen_retry_max
    env: GEN_RETRY_MAX
    number
    default: 2

    Max retries for generation

  • generation.gen_temperature
    env: GEN_TEMPERATURE
    number
    default: 0.0
    internal/basics

    Generation temperature

  • generation.gen_timeout
    env: GEN_TIMEOUT
    number
    default: 60

    Generation timeout (seconds)

  • generation.gen_top_p
    env: GEN_TOP_P
    number
    default: 1.0

    Nucleus sampling threshold

  • generation.ollama_num_ctx
    env: OLLAMA_NUM_CTX
    number
    default: 8192

    Context window for Ollama

  • generation.ollama_request_timeout
    env: OLLAMA_REQUEST_TIMEOUT
    number
    default: 300

    Maximum total time to wait for a local (Ollama) generation request to complete (seconds)

  • generation.ollama_stream_idle_timeout
    env: OLLAMA_STREAM_IDLE_TIMEOUT
    number
    default: 60

    Maximum idle time allowed between streamed chunks from local (Ollama) during generation (seconds)

  • generation.ollama_url
    env: OLLAMA_URL
    string
    default: "http://127.0.0.1:11434/api"
    internal/basics

    Ollama API URL

  • generation.openai_base_url
    env: OPENAI_BASE_URL
    string
    default: ""
    internal/basics

    OpenAI API base URL override (for proxies)

graph_indexing (26)

dot-path → env key (if available) → type → default
  • graph_indexing.ast_calls_weight
    env: —
    number
    default: 1.0

    Edge weight for AST call relationships (function->callee).

  • graph_indexing.ast_contains_weight
    env: —
    number
    default: 1.0

    Edge weight for AST containment relationships (module->class/function, class->method).

  • graph_indexing.ast_imports_weight
    env: —
    number
    default: 1.0

    Edge weight for AST import relationships (module->imported_module).

  • graph_indexing.ast_inherits_weight
    env: —
    number
    default: 1.0

    Edge weight for AST inheritance relationships (class->base).

  • graph_indexing.build_lexical_graph
    env: —
    boolean
    default: True

    Build lexical graph (Document/Chunk nodes + NEXT_CHUNK relationships)

  • graph_indexing.chunk_embedding_property
    env: —
    string
    default: "embedding"

    Chunk node property that stores the embedding vector

  • graph_indexing.chunk_vector_index_name
    env: —
    string
    default: "tribrid_chunk_embeddings"

    Neo4j vector index name for Chunk embeddings (mode='chunk')

  • graph_indexing.enabled
    env: —
    boolean
    default: True

    Enable graph building during indexing (Neo4j)

  • graph_indexing.semantic_kg_allowed_entity_types
    env: —
    ("person" | "org" | "location" | "event" | "concept")[]
    default: ["concept"]

    Allowed semantic KG entity types produced by extraction.

  • graph_indexing.semantic_kg_enabled
    env: —
    boolean
    default: False

    Build semantic knowledge graph (concept entities + relations) linked to chunks during indexing

  • graph_indexing.semantic_kg_llm_model
    env: —
    string
    default: ""

    LLM model name for semantic KG extraction when semantic_kg_mode='llm' (empty = use generation.enrich_model)

  • graph_indexing.semantic_kg_llm_timeout_s
    env: —
    number
    default: 30

    Timeout (seconds) for semantic KG LLM extraction per chunk

  • graph_indexing.semantic_kg_max_chunks
    env: —
    number
    default: 200

    Maximum chunks to process for semantic KG extraction per indexing run (0 = disabled)

  • graph_indexing.semantic_kg_max_concepts_per_chunk
    env: —
    number
    default: 8

    Maximum semantic concepts to extract per chunk

  • graph_indexing.semantic_kg_max_relations_per_chunk
    env: —
    number
    default: 12

    Maximum semantic relations to create per chunk (heuristic mode)

  • graph_indexing.semantic_kg_min_concept_len
    env: —
    number
    default: 4

    Minimum length for semantic concept tokens

  • graph_indexing.semantic_kg_mode
    env: —
    "heuristic" | "llm"
    default: "heuristic"

    Semantic KG extraction mode. 'heuristic' is deterministic and test-friendly; 'llm' uses an LLM to extract entities + relations.
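
Heuristic mode is described as deterministic; a toy sketch of a frequency-based extractor respecting `semantic_kg_min_concept_len` and `semantic_kg_max_concepts_per_chunk` (the stopword list and scoring are placeholders, not the project's extractor):

```python
import re
from collections import Counter

def extract_concepts(text: str, min_concept_len: int = 4,
                     max_concepts: int = 8) -> list[str]:
    # Tokens of at least min_concept_len letters, ranked by frequency,
    # capped at max_concepts_per_chunk.
    pattern = r"[A-Za-z_]{%d,}" % min_concept_len
    stopwords = {"this", "that", "with", "from", "when", "then", "have"}
    counts = Counter(w for w in re.findall(pattern, text.lower())
                     if w not in stopwords)
    return [word for word, _ in counts.most_common(max_concepts)]
```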

  • graph_indexing.semantic_kg_reasoning_effort
    env: —
    "minimal" | "low" | "medium" | "high" | "xhigh"
    default: "medium"

    Reasoning effort for semantic KG extraction when using OpenAI Responses-compatible models.

  • graph_indexing.semantic_kg_relation_weight_heuristic
    env: —
    number
    default: 0.5

    Edge weight for semantic concept relations in heuristic fallback mode.

  • graph_indexing.semantic_kg_relation_weight_llm
    env: —
    number
    default: 0.7

    Edge weight for semantic concept relations in LLM mode.

  • graph_indexing.semantic_kg_require_llm_success
    env: —
    boolean
    default: False

    When true in LLM mode, fail semantic KG extraction for a chunk if LLM extraction fails instead of falling back.

  • graph_indexing.semantic_kg_typed_entities_enabled
    env: —
    boolean
    default: False

    When true, semantic KG extraction preserves/uses typed entities (person, org, location, event, concept).

  • graph_indexing.store_chunk_embeddings
    env: —
    boolean
    default: True

    Store chunk embeddings on Chunk nodes for Neo4j vector search (requires dense embeddings)

  • graph_indexing.vector_index_online_timeout_s
    env: —
    number
    default: 60.0

    Timeout waiting for Neo4j vector index ONLINE (seconds)

  • graph_indexing.vector_similarity_function
    env: —
    "cosine" | "euclidean"
    default: "cosine"

    Neo4j vector similarity function

  • graph_indexing.wait_vector_index_online
    env: —
    boolean
    default: True

    Wait for the Neo4j vector index to come ONLINE after (re)creating it

graph_storage (14)

dot-path → env key (if available) → type → default
  • graph_storage.community_algorithm
    env: GRAPH_COMMUNITY_ALGORITHM
    "louvain" | "label_propagation"
    default: "louvain"

    Community detection algorithm

  • graph_storage.entity_types
    env: —
    string[]
    default: ["function", "class", "module", "variable", "im...

    Entity types to extract and store in graph

  • graph_storage.graph_search_top_k
    env: GRAPH_SEARCH_TOP_K
    number
    default: 30

    Number of results from graph traversal

  • graph_storage.include_communities
    env: GRAPH_INCLUDE_COMMUNITIES
    boolean
    default: True

    Include community detection in graph analysis

  • graph_storage.max_hops
    env: GRAPH_MAX_HOPS
    number
    default: 2

    Maximum traversal hops for graph search

  • graph_storage.neo4j_auto_create_databases
    env: —
    boolean
    default: True

    Automatically create per-corpus Neo4j databases when missing (Enterprise).

  • graph_storage.neo4j_database
    env: NEO4J_DATABASE
    string
    default: "neo4j"

    Neo4j database name

  • graph_storage.neo4j_database_mode
    env: —
    "shared" | "per_corpus"
    default: "shared"

    Database isolation mode: 'shared' uses a single Neo4j database (Community-compatible), 'per_corpus' uses a separate Neo4j database per corpus (Enterprise multi-database).

  • graph_storage.neo4j_database_prefix
    env: —
    string
    default: "tribrid_"

    Prefix for per-corpus Neo4j database names when neo4j_database_mode='per_corpus'.
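
As a sketch of how a per-corpus database name could be derived from this prefix — the helper name and the sanitization rules below are assumptions, not the shipped code:

```python
import re

def corpus_db_name(corpus_id: str, prefix: str = "tribrid_") -> str:
    """Derive a per-corpus Neo4j database name (hypothetical helper).

    Strips characters that Neo4j database names may reject; the real
    sanitization rules could differ.
    """
    safe = re.sub(r"[^a-z0-9]", "", corpus_id.lower())
    return prefix + safe

# corpus_db_name("epstein-files-1") -> "tribrid_epsteinfiles1"
```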

  • graph_storage.neo4j_password
    env: NEO4J_PASSWORD
    string
    default: —

    Neo4j password (defaults to NEO4J_PASSWORD env var when unset)

  • graph_storage.neo4j_uri
    env: NEO4J_URI
    string
    default: "bolt://localhost:7687"

    Neo4j connection URI (bolt:// or neo4j://)

  • graph_storage.neo4j_user
    env: NEO4J_USER
    string
    default: "neo4j"

    Neo4j username

  • graph_storage.neo4j_vector_query_mode
    env: —
    "auto" | "procedure" | "search"
    default: "auto"

    Neo4j chunk-vector query mode. 'auto' prefers runtime-safe defaults and only uses SEARCH where supported.

  • graph_storage.relationship_types
    env: —
    string[]
    default: ["calls", "imports", "inherits", "contains", "r...

    Relationship types to extract

hydration (2)

dot-path → type → default → env key (if available)
  • hydration.hydration_max_chars
    env: HYDRATION_MAX_CHARS
    number
    default: 2000

    Max characters to hydrate

  • hydration.hydration_mode
    env: HYDRATION_MODE
    string
    default: "lazy"

    Context hydration mode

indexing (16)

dot-path → type → default → env key (if available)
  • indexing.bm25_stemmer_lang
    env: BM25_STEMMER_LANG
    string
    default: "english"

    Stemmer language

  • indexing.bm25_tokenizer
    env: BM25_TOKENIZER
    string
    default: "stemmer"

    BM25 tokenizer type

  • indexing.estimated_tokens_per_second_local
    env: —
    number | null
    default: None

    Optional local embedding throughput override for index-time estimates (tokens/sec).

  • indexing.index_excluded_exts
    env: INDEX_EXCLUDED_EXTS
    string
    default: ".png,.jpg,.gif,.ico,.svg,.woff,.ttf"

    Excluded file extensions (comma-separated)
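
Parsing this comma-separated list into a filter can be sketched as follows; `should_index` is an illustrative helper, not the actual indexer API:

```python
from pathlib import Path

EXCLUDED_DEFAULT = ".png,.jpg,.gif,.ico,.svg,.woff,.ttf"  # registry default

def should_index(path: str, excluded_exts: str = EXCLUDED_DEFAULT) -> bool:
    """Skip files whose suffix appears on the comma-separated exclude list."""
    excluded = {e.strip().lower() for e in excluded_exts.split(",") if e.strip()}
    return Path(path).suffix.lower() not in excluded
```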

  • indexing.index_max_file_size_mb
    env: INDEX_MAX_FILE_SIZE_MB
    number
    default: 250

    Max file size to index (MB)

  • indexing.indexing_batch_size
    env: INDEXING_BATCH_SIZE
    number
    default: 100

    Batch size for indexing

  • indexing.indexing_workers
    env: INDEXING_WORKERS
    number
    default: 4

    Parallel workers for indexing

  • indexing.large_file_mode
    env: —
    "read_all" | "stream"
    default: "stream"

    How to ingest very large text files. 'stream' avoids loading entire files into memory.

  • indexing.large_file_stream_chunk_chars
    env: —
    number
    default: 2000000

    When large_file_mode='stream', read text files in bounded char blocks (best-effort).

  • indexing.parquet_extract_include_column_names
    env: —
    number
    default: 1

    Include column headers when extracting Parquet text

  • indexing.parquet_extract_max_cell_chars
    env: —
    number
    default: 20000

    Max characters per extracted Parquet cell (best-effort)

  • indexing.parquet_extract_max_chars
    env: —
    number
    default: 2000000

    Max characters to extract from a single Parquet file during indexing (best-effort)

  • indexing.parquet_extract_max_rows
    env: —
    number
    default: 5000

    Max rows to extract from a single Parquet file during indexing (best-effort)

  • indexing.parquet_extract_text_columns_only
    env: —
    number
    default: 1

    Extract only text/string-like columns from Parquet files when possible

  • indexing.postgres_url
    env: POSTGRES_URL
    string
    default: "postgresql://postgres:postgres@localhost:5432/t..."

    PostgreSQL connection string (DSN) for pgvector + FTS storage

  • indexing.skip_dense
    env: SKIP_DENSE
    number
    default: 0

    Skip dense vector indexing

keywords (5)

dot-path → type → default → env key (if available)
  • keywords.keywords_auto_generate
    env: KEYWORDS_AUTO_GENERATE
    number
    default: 1

    Auto-generate keywords

  • keywords.keywords_boost
    env: KEYWORDS_BOOST
    number
    default: 1.3

    Score boost for keyword matches

  • keywords.keywords_max_per_repo
    env: KEYWORDS_MAX_PER_REPO
    number
    default: 50

    Max discriminative keywords per repo

  • keywords.keywords_min_freq
    env: KEYWORDS_MIN_FREQ
    number
    default: 3

    Min frequency for keyword

  • keywords.keywords_refresh_hours
    env: KEYWORDS_REFRESH_HOURS
    number
    default: 24

    Hours between keyword refresh

layer_bonus (6)

dot-path → type → default → env key (if available)
  • layer_bonus.freshness_bonus
    env: FRESHNESS_BONUS
    number
    default: 0.05

    Bonus for recently modified files

  • layer_bonus.gui
    env: LAYER_BONUS_GUI
    number
    default: 0.15
    internal/basics

    Bonus for GUI/front-end layers

  • layer_bonus.indexer
    env: LAYER_BONUS_INDEXER
    number
    default: 0.15

    Bonus for indexing/ingestion layers

  • layer_bonus.intent_matrix
    env: LAYER_INTENT_MATRIX
    Record<string, Record<string, number>>
    default: —

    Intent-to-layer bonus matrix. Keys are query intents, values are layer->multiplier maps.
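
A minimal sketch of applying the matrix as a score multiplier lookup; the helper below is hypothetical (the description calls the values multipliers, so unknown intents or layers fall back to a neutral 1.0 here):

```python
def apply_intent_bonus(score: float, intent: str, layer: str,
                       matrix: dict[str, dict[str, float]]) -> float:
    """Multiply a result score by the intent->layer multiplier,
    defaulting to 1.0 when the intent or layer is not in the matrix."""
    return score * matrix.get(intent, {}).get(layer, 1.0)
```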

  • layer_bonus.retrieval
    env: LAYER_BONUS_RETRIEVAL
    number
    default: 0.15

    Bonus for retrieval/API layers

  • layer_bonus.vendor_penalty
    env: VENDOR_PENALTY
    number
    default: -0.1
    internal/basics

    Penalty for vendor/third-party code (negative values apply a penalty)

mcp (10)

dot-path → type → default → env key (if available)
  • mcp.allowed_hosts
    env: —
    string[]
    default: —

    Allowed Host header values for MCP HTTP (supports wildcard ':*').

  • mcp.allowed_origins
    env: —
    string[]
    default: —

    Allowed Origin header values for MCP HTTP (supports wildcard ':*').

  • mcp.default_mode
    env: MCP_DEFAULT_MODE
    "tribrid" | "dense_only" | "sparse_only" | "graph_only"
    default: "tribrid"

    Default retrieval mode for MCP search/answer tools when not provided.

  • mcp.default_top_k
    env: MCP_DEFAULT_TOP_K
    number
    default: 20

    Default top_k for MCP search/answer tools when not provided.

  • mcp.enable_dns_rebinding_protection
    env: MCP_HTTP_DNS_REBIND_PROTECTION
    boolean
    default: True

    Enable DNS rebinding protection for MCP HTTP (recommended).

  • mcp.enabled
    env: MCP_HTTP_ENABLED
    boolean
    default: True

    Enable the embedded MCP Streamable HTTP server.

  • mcp.json_response
    env: MCP_HTTP_JSON_RESPONSE
    boolean
    default: True

    Prefer JSON responses for MCP Streamable HTTP (recommended).

  • mcp.mount_path
    env: MCP_HTTP_PATH
    string
    default: "/mcp"

    Mount path for the MCP Streamable HTTP endpoint (e.g. /mcp).

  • mcp.require_api_key
    env: MCP_REQUIRE_API_KEY
    boolean
    default: False

    Require `Authorization: Bearer $MCP_API_KEY` for MCP HTTP access.

  • mcp.stateless_http
    env: MCP_HTTP_STATELESS
    boolean
    default: True

    Run MCP Streamable HTTP in stateless mode (recommended).

reranking (12)

dot-path → type → default → env key (if available)
  • reranking.rerank_input_snippet_chars
    env: RERANK_INPUT_SNIPPET_CHARS
    number
    default: 700

    Snippet chars for reranking input

  • reranking.reranker_cloud_model
    env: RERANKER_CLOUD_MODEL
    string
    default: "rerank-v3.5"

    Cloud reranker model name when mode=cloud (Cohere: rerank-v3.5)

  • reranking.reranker_cloud_provider
    env: RERANKER_CLOUD_PROVIDER
    string
    default: "cohere"

    Cloud reranker provider when mode=cloud (cohere, voyage, jina)

  • reranking.reranker_cloud_top_n
    env: RERANKER_CLOUD_TOP_N
    number
    default: 50

    Number of candidates to rerank (cloud mode)

  • reranking.reranker_mode
    env: RERANKER_MODE
    string
    default: "none"

    Reranker mode: 'cloud' (Cohere/Voyage/Jina API), 'learning' (MLX Qwen3 LoRA learning reranker), 'none' (disabled). Legacy values 'local'/'hf' normalize to 'learning'.

  • reranking.reranker_timeout
    env: RERANKER_TIMEOUT
    number
    default: 10

    Reranker API timeout (seconds)

  • reranking.tribrid_reranker_alpha
    env: TRIBRID_RERANKER_ALPHA
    number
    default: 0.7

    Blend weight for reranker scores

  • reranking.tribrid_reranker_batch
    env: TRIBRID_RERANKER_BATCH
    number
    default: 16

    Reranker batch size

  • reranking.tribrid_reranker_maxlen
    env: TRIBRID_RERANKER_MAXLEN
    number
    default: 512

    Max token length for reranker

  • reranking.tribrid_reranker_reload_on_change
    env: TRIBRID_RERANKER_RELOAD_ON_CHANGE
    number
    default: 0

    Hot-reload on model change

  • reranking.tribrid_reranker_reload_period_sec
    env: TRIBRID_RERANKER_RELOAD_PERIOD_SEC
    number
    default: 60

    Reload check period (seconds)

  • reranking.tribrid_reranker_topn
    env: TRIBRID_RERANKER_TOPN
    number
    default: 50

    Number of candidates to rerank (learning mode)

retrieval (32)

dot-path → type → default → env key (if available)
  • retrieval.bm25_b
    env: BM25_B
    number
    default: 0.4

    BM25 length normalization (0=no penalty, 1=full penalty, 0.3-0.5 recommended for code)

  • retrieval.bm25_k1
    env: BM25_K1
    number
    default: 1.2

    BM25 term frequency saturation parameter (higher = more weight to term frequency)
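
Together with bm25_b above, this knob plugs into the standard per-term BM25 formula; a reference sketch:

```python
def bm25_term_score(tf: float, doc_len: float, avg_doc_len: float,
                    idf: float, k1: float = 1.2, b: float = 0.4) -> float:
    """Per-term BM25 contribution. k1 saturates term frequency; b scales
    the document-length penalty (the 0.4 default is softer than the
    textbook 0.75, which suits corpora where long files are common)."""
    length_norm = 1.0 - b + b * (doc_len / avg_doc_len)
    return idf * (tf * (k1 + 1.0)) / (tf + k1 * length_norm)
```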

  • retrieval.bm25_weight
    env: BM25_WEIGHT
    number
    default: 0.3

    Weight for BM25 in hybrid search

  • retrieval.chunk_summary_search_enabled
    env: CHUNK_SUMMARY_SEARCH_ENABLED
    number
    default: 1

    Enable chunk_summary-based retrieval

  • retrieval.conf_any
    env: CONF_ANY
    number
    default: 0.55

    Minimum confidence threshold

  • retrieval.conf_avg5
    env: CONF_AVG5
    number
    default: 0.55

    Confidence threshold for avg top-5

  • retrieval.conf_top1
    env: CONF_TOP1
    number
    default: 0.62

    Confidence threshold for top-1
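
The three thresholds (conf_any, conf_avg5, conf_top1) can be combined into a gate like the one below; the exact boolean combination used by the pipeline is an assumption here:

```python
def is_confident(scores: list[float],
                 conf_top1: float = 0.62,
                 conf_avg5: float = 0.55,
                 conf_any: float = 0.55) -> bool:
    """Hypothetical gate: pass when the best hit, the top-5 average,
    or any single hit clears its threshold."""
    if not scores:
        return False
    ranked = sorted(scores, reverse=True)
    top5 = ranked[:5]
    return (ranked[0] >= conf_top1
            or sum(top5) / len(top5) >= conf_avg5
            or max(scores) >= conf_any)
```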

  • retrieval.dedup_by
    env: —
    "chunk_id" | "file_path"
    default: "chunk_id"

    Dedup key for final results.

  • retrieval.enable_mmr
    env: —
    boolean
    default: False

    Enable MMR diversification when embeddings are available.

  • retrieval.eval_final_k
    env: EVAL_FINAL_K
    number
    default: 5

    Top-k for evaluation runs

  • retrieval.eval_multi
    env: EVAL_MULTI
    number
    default: 1

    Enable multi-query in eval

  • retrieval.fallback_confidence
    env: FALLBACK_CONFIDENCE
    number
    default: 0.55

    Confidence threshold for fallback retrieval strategies

  • retrieval.final_k
    env: FINAL_K
    number
    default: 10

    Default top-k for search results

  • retrieval.hydration_max_chars
    env: —
    number
    default: 2000

    Max characters for result hydration

  • retrieval.hydration_mode
    env: —
    string
    default: "lazy"

    Result hydration mode

  • retrieval.langgraph_final_k
    env: LANGGRAPH_FINAL_K
    number
    default: 20

    Number of final results to return in LangGraph pipeline

  • retrieval.langgraph_max_query_rewrites
    env: LANGGRAPH_MAX_QUERY_REWRITES
    number
    default: 2

    Maximum number of query rewrites for LangGraph pipeline

  • retrieval.max_chunks_per_file
    env: —
    number
    default: 3

    Max chunks to return per file_path (document-aware result shaping).
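
Document-aware shaping can be sketched as a per-file counter over rank-ordered results (illustrative, not the shipped code):

```python
def shape_results(results: list[dict], max_chunks_per_file: int = 3) -> list[dict]:
    """Keep at most N chunks per file_path, preserving rank order."""
    per_file: dict[str, int] = {}
    shaped: list[dict] = []
    for r in results:
        path = r["file_path"]
        if per_file.get(path, 0) < max_chunks_per_file:
            shaped.append(r)
            per_file[path] = per_file.get(path, 0) + 1
    return shaped
```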

  • retrieval.max_query_rewrites
    env: MAX_QUERY_REWRITES, MQ_REWRITES
    number
    default: 2

    Maximum number of query rewrites for multi-query expansion

  • retrieval.min_score_graph
    env: —
    number
    default: 0.0

    Minimum score threshold for graph leg results (0 disables).

  • retrieval.min_score_sparse
    env: —
    number
    default: 0.0

    Minimum score threshold for sparse leg results (0 disables). Note: sparse scores are engine-dependent (FTS vs BM25).

  • retrieval.min_score_vector
    env: —
    number
    default: 0.0

    Minimum score threshold for vector leg results (0 disables).

  • retrieval.mmr_lambda
    env: —
    number
    default: 0.7

    MMR lambda (1=query relevance only, 0=diversity only).
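
Greedy MMR with this lambda can be sketched as follows; `query_sims` (query-result similarities) and `pairwise` (result-result similarities) are assumed precomputed:

```python
def mmr_select(query_sims: list[float], pairwise: list[list[float]],
               k: int, lam: float = 0.7) -> list[int]:
    """Greedy Maximal Marginal Relevance: trade query relevance against
    redundancy with already-selected results (lam=1 -> relevance only)."""
    selected: list[int] = []
    candidates = list(range(len(query_sims)))
    while candidates and len(selected) < k:
        def mmr(i: int) -> float:
            redundancy = max((pairwise[i][j] for j in selected), default=0.0)
            return lam * query_sims[i] - (1.0 - lam) * redundancy
        best = max(candidates, key=mmr)
        selected.append(best)
        candidates.remove(best)
    return selected
```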

  • retrieval.multi_query_m
    env: MULTI_QUERY_M
    number
    default: 4

    Query variants for multi-query

  • retrieval.neighbor_window
    env: —
    number
    default: 1

    Include adjacent chunks by ordinal for coherence (requires chunk_ordinal metadata).

  • retrieval.query_expansion_enabled
    env: QUERY_EXPANSION_ENABLED
    number
    default: 1

    Enable synonym expansion

  • retrieval.rrf_k_div
    env: RRF_K_DIV
    number
    default: 60

    RRF rank smoothing constant; each hit contributes 1/(k + rank), so lower values weight top ranks more heavily and higher values flatten the contribution
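
Reciprocal Rank Fusion with this constant, as a reference sketch:

```python
def rrf_fuse(rankings: list[list[str]], k_div: int = 60) -> list[str]:
    """Fuse several ranked id lists: each id scores the sum of
    1/(k_div + rank) across legs, then ids are sorted by fused score."""
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k_div + rank)
    return sorted(scores, key=scores.get, reverse=True)
```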

  • retrieval.topk_dense
    env: TOPK_DENSE
    number
    default: 75

    Top-K for dense vector search

  • retrieval.topk_sparse
    env: TOPK_SPARSE
    number
    default: 75

    Top-K for sparse BM25 search

  • retrieval.tribrid_synonyms_path
    env: TRIBRID_SYNONYMS_PATH
    string
    default: ""

    Custom path to semantic_synonyms.json (default: data/semantic_synonyms.json)

  • retrieval.use_semantic_synonyms
    env: USE_SEMANTIC_SYNONYMS
    number
    default: 1

    Enable semantic synonym expansion

  • retrieval.vector_weight
    env: VECTOR_WEIGHT
    number
    default: 0.7

    Weight for vector search

scoring (5)

dot-path → type → default → env key (if available)
  • scoring.chunk_summary_bonus
    env: CHUNK_SUMMARY_BONUS
    number
    default: 0.08

    Bonus score for chunks matched via chunk_summary-based retrieval

  • scoring.filename_boost_exact
    env: FILENAME_BOOST_EXACT
    number
    default: 1.5

    Score multiplier when filename exactly matches query terms

  • scoring.filename_boost_partial
    env: FILENAME_BOOST_PARTIAL
    number
    default: 1.2

    Score multiplier when path components match query terms

  • scoring.path_boosts
    env: PATH_BOOSTS
    string
    default: "/gui,/server,/indexer,/retrieval"

    Comma-separated path prefixes to boost

  • scoring.vendor_mode
    env: VENDOR_MODE
    string
    default: "prefer_first_party"
    internal/basics

    Vendor code preference

semantic_cache (13)

dot-path → type → default → env key (if available)
  • semantic_cache.bypass_if_images
    env: —
    number
    default: 1

    Bypass chat generation cache when images are attached.

  • semantic_cache.chat_history_window
    env: —
    number
    default: 6

    Number of prior conversation turns included in chat cache fingerprint.

  • semantic_cache.enabled
    env: —
    number
    default: 0

    Enable semantic cache reads/writes (0=off, 1=on).

  • semantic_cache.max_entries
    env: —
    number
    default: 5000

    Maximum cache rows to retain per scope/endpoint.

  • semantic_cache.max_temperature_for_write
    env: —
    number
    default: 0.5

    Skip generation-cache writes when temperature exceeds this value.

  • semantic_cache.min_query_chars
    env: —
    number
    default: 3

    Minimum query length before cache is eligible.

  • semantic_cache.mode
    env: —
    "read_write" | "read_only" | "write_only"
    default: "read_write"

    Cache mode when enabled.

  • semantic_cache.similarity_threshold_answer
    env: —
    number
    default: 0.93

    Minimum cosine similarity for semantic answer cache hits.

  • semantic_cache.similarity_threshold_chat
    env: —
    number
    default: 0.95

    Minimum cosine similarity for semantic chat cache hits.

  • semantic_cache.similarity_threshold_search
    env: —
    number
    default: 0.9

    Minimum cosine similarity for semantic search cache hits.
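
The per-endpoint thresholds (0.9 search, 0.93 answer, 0.95 chat) gate cache reads on embedding similarity; a plain cosine-similarity sketch (the shipped cache logic is richer, e.g. it also fingerprints history and scope):

```python
def cache_hit(query_emb: list[float], cached_emb: list[float],
              threshold: float = 0.9) -> bool:
    """Accept a cached entry only when cosine similarity between the new
    query embedding and the cached one clears the threshold."""
    dot = sum(a * b for a, b in zip(query_emb, cached_emb))
    na = sum(a * a for a in query_emb) ** 0.5
    nb = sum(b * b for b in cached_emb) ** 0.5
    if na == 0 or nb == 0:
        return False
    return dot / (na * nb) >= threshold
```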

  • semantic_cache.ttl_seconds_answer
    env: —
    number
    default: 1800

    TTL in seconds for answer cache entries.

  • semantic_cache.ttl_seconds_chat
    env: —
    number
    default: 600

    TTL in seconds for chat cache entries.

  • semantic_cache.ttl_seconds_search
    env: —
    number
    default: 900

    TTL in seconds for search cache entries.

system_prompts (9)

dot-path → type → default → env key (if available)
  • system_prompts.code_enrichment
    env: PROMPT_CODE_ENRICHMENT
    string
    default: "Analyze this database and return a JSON object ..."

    Extract metadata from code chunks during indexing

  • system_prompts.eval_analysis
    env: PROMPT_EVAL_ANALYSIS
    string
    default: "You are an expert RAG (Retrieval-Augmented Gene..."

    Analyze eval regressions with a skeptical approach; avoid false explanations

  • system_prompts.lightweight_chunk_summaries
    env: PROMPT_LIGHTWEIGHT_CARDS
    string
    default: "Extract key information from this database: sym..."

    Lightweight chunk_summary generation prompt for faster indexing

  • system_prompts.main_rag_chat
    env: PROMPT_MAIN_RAG_CHAT
    string
    default: "You are a helpful agentic RAG database assistan..."

    Main conversational AI system prompt for answering database questions

  • system_prompts.query_expansion
    env: PROMPT_QUERY_EXPANSION
    string
    default: "You are a database search query expander. Given..."

    Generate query variants for better recall in hybrid search

  • system_prompts.query_rewrite
    env: PROMPT_QUERY_REWRITE
    string
    default: "You rewrite developer questions into search-opt..."

    Optimize the user query for code search: expand CamelCase, include API nouns

  • system_prompts.semantic_chunk_summaries
    env: PROMPT_SEMANTIC_CARDS
    string
    default: "Analyze this database chunk and create a compre..."

    Generate JSON summaries for code chunks during indexing

  • system_prompts.semantic_kg_extraction
    env: —
    string
    default: "You are a semantic knowledge graph extractor.\n..."

    Prompt for LLM-assisted semantic KG extraction (typed entities + relations)

  • system_prompts.synthetic_judge
    env: —
    string
    default: "You are a strict evaluator for synthetic retrie..."

    Judge prompt for synthetic eval row curation and quality filtering

tokenization (7)

dot-path → type → default → env key (if available)
  • tokenization.estimate_only
    env: —
    boolean
    default: False

    If true, use fast approximate token counting.

  • tokenization.hf_tokenizer_name
    env: —
    string
    default: "gpt2"

    HuggingFace tokenizer name (strategy='huggingface').

  • tokenization.lowercase
    env: —
    boolean
    default: False

    Lowercase before tokenization.

  • tokenization.max_tokens_per_chunk_hard
    env: —
    number
    default: 8192

    Absolute hard limit for tokens per chunk (safety ceiling).

  • tokenization.normalize_unicode
    env: —
    boolean
    default: True

    Normalize unicode (NFKC) before tokenization for stability.

  • tokenization.strategy
    env: —
    "whitespace" | "tiktoken" | "huggingface"
    default: "tiktoken"

    Tokenization strategy used for chunking/budgeting.

  • tokenization.tiktoken_encoding
    env: —
    string
    default: "o200k_base"

    tiktoken encoding name (strategy='tiktoken').
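
The strategy/encoding pair can be sketched as a dispatcher; the fallback-to-whitespace behavior when tiktoken is unavailable is an assumption, and the 'huggingface' branch is omitted for brevity:

```python
def count_tokens(text: str, strategy: str = "tiktoken",
                 tiktoken_encoding: str = "o200k_base") -> int:
    """Count tokens per tokenization.strategy (a sketch, not the shipped
    implementation)."""
    if strategy == "tiktoken":
        try:
            import tiktoken
            return len(tiktoken.get_encoding(tiktoken_encoding).encode(text))
        except ImportError:
            strategy = "whitespace"  # estimate-only fallback
    if strategy == "whitespace":
        return len(text.split())
    raise ValueError(f"unsupported strategy: {strategy}")
```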

tracing (17)

dot-path → type → default → env key (if available)
  • tracing.alert_include_resolved
    env: ALERT_INCLUDE_RESOLVED
    number
    default: 1

    Include resolved alerts

  • tracing.alert_notify_severities
    env: ALERT_NOTIFY_SEVERITIES
    string
    default: "critical,warning"

    Alert severities to notify

  • tracing.alert_webhook_timeout
    env: ALERT_WEBHOOK_TIMEOUT
    number
    default: 5

    Alert webhook timeout (seconds)

  • tracing.langchain_endpoint
    env: LANGCHAIN_ENDPOINT
    string
    default: "https://api.smith.langchain.com"

    LangChain/LangSmith API endpoint

  • tracing.langchain_project
    env: LANGCHAIN_PROJECT
    string
    default: "tribrid"

    LangChain project name

  • tracing.langchain_tracing_v2
    env: LANGCHAIN_TRACING_V2
    number
    default: 0
    internal/basics

    Enable LangChain v2 tracing

  • tracing.langtrace_api_host
    env: LANGTRACE_API_HOST
    string
    default: ""

    LangTrace API host

  • tracing.langtrace_project_id
    env: LANGTRACE_PROJECT_ID
    string
    default: ""

    LangTrace project ID

  • tracing.log_level
    env: LOG_LEVEL
    string
    default: "INFO"
    internal/basics

    Logging level

  • tracing.metrics_enabled
    env: METRICS_ENABLED
    number
    default: 1

    Enable metrics collection

  • tracing.prometheus_port
    env: PROMETHEUS_PORT
    number
    default: 9090

    Prometheus metrics port

  • tracing.trace_auto_ls
    env: TRACE_AUTO_LS
    number
    default: 1
    internal/basics

    Auto-enable LangSmith tracing

  • tracing.trace_retention
    env: TRACE_RETENTION
    number
    default: 50

    Number of traces to retain

  • tracing.trace_sampling_rate
    env: TRACE_SAMPLING_RATE
    number
    default: 1.0

    Trace sampling rate (0.0-1.0)

  • tracing.tracing_enabled
    env: TRACING_ENABLED
    number
    default: 1
    internal/basics

    Enable distributed tracing

  • tracing.tracing_mode
    env: TRACING_MODE
    string
    default: "langsmith"

    Tracing backend mode

  • tracing.tribrid_log_path
    env: TRIBRID_LOG_PATH
    string
    default: "data/logs/queries.jsonl"

    Query log file path

training (36)

dot-path → type → default → env key (if available)
  • training.learning_reranker_backend
    env: —
    "auto" | "mlx_qwen3"
    default: "auto"

    Learning reranker backend: auto (prefer MLX Qwen3 on Apple Silicon), mlx_qwen3 (force). Legacy values 'transformers'/'hf' normalize to 'auto'.

  • training.learning_reranker_base_model
    env: —
    string
    default: "Qwen/Qwen3-Reranker-0.6B"

    Base model to fine-tune for MLX Qwen3 learning reranker

  • training.learning_reranker_grad_accum_steps
    env: —
    number
    default: 8

    Gradient accumulation steps per optimizer update for MLX Qwen3 learning reranker training

  • training.learning_reranker_lora_alpha
    env: —
    number
    default: 32.0

    LoRA alpha for MLX Qwen3 learning reranker

  • training.learning_reranker_lora_dropout
    env: —
    number
    default: 0.05

    LoRA dropout for MLX Qwen3 learning reranker

  • training.learning_reranker_lora_rank
    env: —
    number
    default: 16

    LoRA rank for MLX Qwen3 learning reranker

  • training.learning_reranker_lora_target_modules
    env: —
    string[]
    default: —

    Module name suffixes to apply LoRA to (MLX Qwen3)

  • training.learning_reranker_negative_ratio
    env: —
    number
    default: 5

    Negative pairs per positive during learning reranker training

  • training.learning_reranker_promote_epsilon
    env: —
    number
    default: 0.0

    Minimum improvement required to auto-promote (primary metric delta)

  • training.learning_reranker_promote_if_improves
    env: —
    number
    default: 1

    Promote trained learning artifact to active path only if primary metric improves

  • training.learning_reranker_telemetry_interval_steps
    env: —
    number
    default: 2

    Emit trainer telemetry every N optimizer steps (plus first/final)

  • training.learning_reranker_unload_after_sec
    env: —
    number
    default: 0

    Unload MLX learning reranker model after idle seconds (0 = never)

  • training.ragweld_agent_backend
    env: —
    string
    default: "mlx_qwen3"

    Ragweld agent backend (in-process chat model). Currently: mlx_qwen3

  • training.ragweld_agent_base_model
    env: —
    string
    default: "mlx-community/Qwen3-1.7B-4bit"

    Shipped base model for the ragweld agent (MLX).

  • training.ragweld_agent_grad_accum_steps
    env: —
    number
    default: 8

    Gradient accumulation steps per optimizer update for ragweld agent training.

  • training.ragweld_agent_lora_alpha
    env: —
    number
    default: 32.0

    LoRA alpha for ragweld agent MLX fine-tuning.

  • training.ragweld_agent_lora_dropout
    env: —
    number
    default: 0.05

    LoRA dropout for ragweld agent MLX fine-tuning.

  • training.ragweld_agent_lora_rank
    env: —
    number
    default: 16

    LoRA rank for ragweld agent MLX fine-tuning.

  • training.ragweld_agent_lora_target_modules
    env: —
    string[]
    default: —

    Module name suffixes to apply LoRA to (ragweld agent; MLX Qwen3).

  • training.ragweld_agent_model_path
    env: —
    string
    default: "models/learning-agent-epstein-files-1"

    Active ragweld agent adapter artifact path (directory containing adapter.npz + adapter_config.json).

  • training.ragweld_agent_promote_epsilon
    env: —
    number
    default: 0.0

    Minimum eval_loss improvement required to auto-promote (baseline_loss - new_loss >= epsilon).

  • training.ragweld_agent_promote_if_improves
    env: —
    number
    default: 1

    Auto-promote trained ragweld agent adapter only if eval_loss improves.

  • training.ragweld_agent_reload_period_sec
    env: —
    number
    default: 60

    Adapter reload check period (seconds). 0 = check every request.

  • training.ragweld_agent_telemetry_interval_steps
    env: —
    number
    default: 2

    Emit ragweld agent trainer telemetry every N optimizer steps (plus first/final).

  • training.ragweld_agent_train_dataset_path
    env: —
    string
    default: ""

    Training dataset path for the ragweld agent (empty = use evaluation.eval_dataset_path).

  • training.ragweld_agent_unload_after_sec
    env: —
    number
    default: 0

    Unload ragweld agent model after idle seconds (0 = never).

  • training.reranker_train_batch
    env: RERANKER_TRAIN_BATCH
    number
    default: 16

    Training batch size

  • training.reranker_train_epochs
    env: RERANKER_TRAIN_EPOCHS
    number
    default: 2

    Training epochs for reranker

  • training.reranker_train_lr
    env: RERANKER_TRAIN_LR
    number
    default: 2e-05

    Learning rate

  • training.reranker_warmup_ratio
    env: RERANKER_WARMUP_RATIO
    number
    default: 0.1

    Warmup steps ratio

  • training.tribrid_reranker_mine_mode
    env: TRIBRID_RERANKER_MINE_MODE
    string
    default: "replace"

    Triplet mining mode

  • training.tribrid_reranker_mine_reset
    env: TRIBRID_RERANKER_MINE_RESET
    number
    default: 0

    Reset triplets file before mining

  • training.tribrid_reranker_model_path
    env: TRIBRID_RERANKER_MODEL_PATH
    string
    default: "models/learning-reranker-epstein-files-1"

    Active learning reranker artifact path (MLX adapter directory).

  • training.tribrid_triplets_path
    env: TRIBRID_TRIPLETS_PATH
    string
    default: "data/training/triplets__epstein-files-1.jsonl"

    Training triplets file path

  • training.triplets_min_count
    env: TRIPLETS_MIN_COUNT
    number
    default: 100

    Min triplets for training

  • training.triplets_mine_mode
    env: TRIPLETS_MINE_MODE
    string
    default: "replace"

    Triplet mining mode

ui (45)

dot-path → type → default → env key (if available)
  • ui.chat_default_model
    env: CHAT_DEFAULT_MODEL
    string
    default: "gpt-4o-mini"

    Default model for chat if not specified in request

  • ui.chat_history_max
    env: CHAT_HISTORY_MAX
    number
    default: 50

    Max chat history messages

  • ui.chat_show_citations
    env: CHAT_SHOW_CITATIONS
    number
    default: 1
    internal/basics

    Show citations list on chat answers

  • ui.chat_show_confidence
    env: CHAT_SHOW_CONFIDENCE
    number
    default: 0

    Show confidence badge on chat answers

  • ui.chat_show_debug_footer
    env: CHAT_SHOW_DEBUG_FOOTER
    number
    default: 1
    internal/basics

    Show dev/debug footer under chat answers

  • ui.chat_show_trace
    env: CHAT_SHOW_TRACE
    number
    default: 1

    Show routing trace panel by default

  • ui.chat_stream_include_thinking
    env: CHAT_STREAM_INCLUDE_THINKING
    number
    default: 1
    internal/basics

    Include reasoning/thinking in streamed responses when supported by model

  • ui.chat_stream_timeout
    env: CHAT_STREAM_TIMEOUT
    number
    default: 120

    Streaming response timeout in seconds

  • ui.chat_streaming_enabled
    env: CHAT_STREAMING_ENABLED
    number
    default: 1

    Enable streaming responses

  • ui.chat_thinking_budget_tokens
    env: CHAT_THINKING_BUDGET_TOKENS
    number
    default: 10000

    Max thinking tokens for Anthropic extended thinking

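The streaming-related knobs above work as a group; a minimal env sketch using the documented defaults (illustrative only):

```shell
# Chat streaming behavior — defaults from the registry above.
CHAT_STREAMING_ENABLED=1           # 1 = stream responses
CHAT_STREAM_TIMEOUT=120            # seconds before a streaming response times out
CHAT_STREAM_INCLUDE_THINKING=1     # include reasoning/thinking when the model supports it
CHAT_THINKING_BUDGET_TOKENS=10000  # cap on Anthropic extended-thinking tokens
```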
  • ui.editor_bind
    env: EDITOR_BIND
    string
    default: "local"

    Editor bind mode

  • ui.editor_embed_enabled
    env: EDITOR_EMBED_ENABLED
    number
    default: 1

    Enable editor embedding

  • ui.editor_enabled
    env: EDITOR_ENABLED
    number
    default: 1

    Enable embedded editor

  • ui.editor_image
    env: EDITOR_IMAGE
    string
    default: "codercom/code-server:latest"

    Editor Docker image

  • ui.editor_port
    env: EDITOR_PORT
    number
    default: 4440

    Embedded editor port

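The embedded-editor knobs above can likewise be set as env keys; a sketch with the documented defaults (illustrative, not a recommended setup):

```shell
# Embedded editor (code-server) — defaults from the registry above.
EDITOR_ENABLED=1                          # enable the embedded editor
EDITOR_EMBED_ENABLED=1                    # allow embedding it in the UI
EDITOR_BIND=local                         # editor bind mode
EDITOR_IMAGE=codercom/code-server:latest  # Docker image to run
EDITOR_PORT=4440                          # port the editor listens on
```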
  • ui.grafana_auth_mode
    env: GRAFANA_AUTH_MODE
    string
    default: "anonymous"
    internal/basics

    Grafana authentication mode

  • ui.grafana_base_url
    env: GRAFANA_BASE_URL
    string
    default: "http://127.0.0.1:3001"

    Grafana base URL

  • ui.grafana_dashboard_slug
    env: GRAFANA_DASHBOARD_SLUG
    string
    default: "tribrid-overview"

    Grafana dashboard slug

  • ui.grafana_dashboard_uid
    env: GRAFANA_DASHBOARD_UID
    string
    default: "tribrid-overview"

    Default Grafana dashboard UID

  • ui.grafana_embed_enabled
    env: GRAFANA_EMBED_ENABLED
    number
    default: 1

    Enable Grafana embedding

  • ui.grafana_kiosk
    env: GRAFANA_KIOSK
    string
    default: "tv"

    Grafana kiosk mode

  • ui.grafana_org_id
    env: GRAFANA_ORG_ID
    number
    default: 1

    Grafana organization ID

  • ui.grafana_refresh
    env: GRAFANA_REFRESH
    string
    default: "10s"

    Grafana refresh interval
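The Grafana knobs above configure the embedded dashboard as a group. A hedged env sketch with the documented defaults; the assumption (not stated in the registry) is that the kiosk/refresh/org values end up as Grafana URL parameters such as `?orgId=1&refresh=10s&kiosk=tv`:

```shell
# Grafana embedding — defaults from the registry above.
GRAFANA_EMBED_ENABLED=1
GRAFANA_BASE_URL=http://127.0.0.1:3001
GRAFANA_DASHBOARD_UID=tribrid-overview
GRAFANA_DASHBOARD_SLUG=tribrid-overview
GRAFANA_ORG_ID=1
GRAFANA_KIOSK=tv             # "tv" hides most Grafana chrome but keeps the dashboard header
GRAFANA_REFRESH=10s          # dashboard auto-refresh interval
GRAFANA_AUTH_MODE=anonymous  # Grafana authentication mode
```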

  • ui.learning_reranker_default_preset
    env: —
    "balanced" | "focus_viz" | "focus_logs" | "focus_inspector"
    default: "balanced"

    Default pane layout preset applied when opening Learning Reranker Studio

  • ui.learning_reranker_dockview_layout_json
    env: —
    string
    default: ""

    Serialized Dockview layout JSON for Learning Reranker Studio pane persistence

  • ui.learning_reranker_layout_engine
    env: —
    "dockview" | "panels"
    default: "dockview"

    Learning Reranker Studio layout engine selection

  • ui.learning_reranker_logs_renderer
    env: —
    "json" | "xterm"
    default: "xterm"

    Preferred logs renderer for Learning Reranker Studio

  • ui.learning_reranker_show_setup_row
    env: —
    number
    default: 0

    Show setup summary row above studio dock layout (1=show, 0=collapsed)

  • ui.learning_reranker_studio_bottom_panel_pct
    env: —
    number
    default: 28

    Default bottom dock height percentage for Learning Reranker Studio

  • ui.learning_reranker_studio_immersive
    env: —
    number
    default: 1

    Use immersive full-height studio mode for Learning Reranker

  • ui.learning_reranker_studio_left_panel_pct
    env: —
    number
    default: 20

    Default left dock width percentage for Learning Reranker Studio

  • ui.learning_reranker_studio_right_panel_pct
    env: —
    number
    default: 30

    Default right dock width percentage for Learning Reranker Studio

  • ui.learning_reranker_studio_v2_enabled
    env: —
    number
    default: 1

    Enable Learning Reranker Studio V2 layout and controls

  • ui.learning_reranker_visualizer_color_mode
    env: —
    "absolute" | "delta"
    default: "absolute"

    Neural Visualizer trajectory coloring mode (absolute loss vs delta loss)

  • ui.learning_reranker_visualizer_max_points
    env: —
    number
    default: 10000

    Maximum telemetry points retained for Neural Visualizer

  • ui.learning_reranker_visualizer_motion_intensity
    env: —
    number
    default: 1.0

    Global motion intensity multiplier for Neural Visualizer effects

  • ui.learning_reranker_visualizer_quality
    env: —
    "balanced" | "cinematic" | "ultra"
    default: "cinematic"

    Neural Visualizer quality tier

  • ui.learning_reranker_visualizer_reduce_motion
    env: —
    number
    default: 0

    Reduce Neural Visualizer motion for accessibility/performance

  • ui.learning_reranker_visualizer_renderer
    env: —
    "auto" | "webgpu" | "webgl2" | "canvas2d"
    default: "auto"

    Preferred renderer for Neural Visualizer

  • ui.learning_reranker_visualizer_show_vector_field
    env: —
    number
    default: 1

    Render animated vector field accents in Neural Visualizer

  • ui.learning_reranker_visualizer_tail_seconds
    env: —
    number
    default: 8.0

    Temporal tail length in seconds for visualizer trajectory effects

  • ui.learning_reranker_visualizer_target_fps
    env: —
    number
    default: 60

    Target FPS for Neural Visualizer animation loop

  • ui.open_browser
    env: OPEN_BROWSER
    number
    default: 1

    Auto-open browser on start

  • ui.runtime_mode
    env: RUNTIME_MODE
    "development" | "production"
    default: "development"

    Runtime environment mode (development uses localhost, production uses deployed URLs)

  • ui.theme_mode
    env: THEME_MODE
    string
    default: "dark"
    internal/basics

    UI theme mode
