
Zenii Configuration Reference

Note: This document was generated with AI assistance and may contain inaccuracies. If you find errors, please report an issue.

File Location

Zenii uses the `directories` crate with the reverse-domain identifier `com.sprklai.zenii` to determine platform-correct paths:

| Platform | Config File Path |
| --- | --- |
| Linux | `~/.config/zenii/config.toml` |
| macOS | `~/Library/Application Support/com.sprklai.zenii/config.toml` |
| Windows | `%APPDATA%\sprklai\zenii\config\config.toml` |

Data files (databases, identity, skills) default to:

| Platform | Data Directory |
| --- | --- |
| Linux | `~/.local/share/zenii/` |
| macOS | `~/Library/Application Support/com.sprklai.zenii/` |
| Windows | `%APPDATA%\sprklai\zenii\data\` |

If the config file does not exist on startup, Zenii uses all default values.


Configuration Sections

All fields use `serde(default)`, so any field can be omitted to use its default value. The config file format is TOML.
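Because every field falls back to a default, a valid config file can contain only the values you actually want to change. As a minimal sketch (the port value here is illustrative, not the default):

```toml
# Minimal config: every omitted field uses its default value
gateway_port = 19000
```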

Gateway

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `gateway_host` | `String` | `"127.0.0.1"` | IP address the gateway listens on |
| `gateway_port` | `u16` | `18981` | Port the gateway listens on |
| `gateway_auth_token` | `Option<String>` | `null` | Bearer token for API authentication. If unset, auth is disabled |
| `gateway_cors_origins` | `Vec<String>` | `["http://localhost:18971"]` | Allowed CORS origins. `["*"]` enables permissive CORS |
| `ws_max_connections` | `usize` | `32` | Maximum concurrent WebSocket connections |

```toml
gateway_host = "127.0.0.1"
gateway_port = 18981
gateway_auth_token = "my-secret-token"
gateway_cors_origins = ["http://localhost:18971"]
ws_max_connections = 32
```

Database

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `data_dir` | `Option<String>` | Platform default (see above) | Root directory for all data files |
| `db_path` | `Option<String>` | `{data_dir}/zenii.db` | Path to the main SQLite database (app + FTS5) |
| `memory_db_path` | `Option<String>` | `{data_dir}/memory_vec.db` | Path to the vector memory SQLite database (sqlite-vec) |

```toml
data_dir = "/home/user/.zenii"
db_path = "/home/user/.zenii/zenii.db"
memory_db_path = "/home/user/.zenii/memory_vec.db"
```

Memory

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `memory_fts_weight` | `f32` | `0.4` | Weight for FTS5 full-text search scoring (0.0-1.0) |
| `memory_vector_weight` | `f32` | `0.6` | Weight for vector similarity scoring (0.0-1.0) |
| `memory_default_limit` | `usize` | `10` | Default number of results for memory recall queries |
| `embedding_dim` | `usize` | `384` | Dimensionality of embedding vectors |
| `embedding_cache_size` | `usize` | `1000` | Number of embeddings to cache in memory |

```toml
memory_fts_weight = 0.4
memory_vector_weight = 0.6
memory_default_limit = 10
embedding_dim = 384
embedding_cache_size = 1000
```
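Memory recall blends the two scores, so shifting weight toward FTS5 favors exact keyword matches over semantic similarity. A sketch of a keyword-leaning setup (the weights below are illustrative, not defaults):

```toml
# Favor exact keyword matches over semantic similarity (illustrative weights)
memory_fts_weight = 0.7
memory_vector_weight = 0.3
```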

Security

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `security_autonomy_level` | `String` | `"supervised"` | Agent autonomy level (`supervised`, `semi-autonomous`, `autonomous`) |
| `max_tool_retries` | `u32` | `3` | Maximum retry attempts for failed tool executions |
| `security_rate_limit_max` | `u32` | `60` | Maximum requests per rate-limit window |
| `security_rate_limit_window_secs` | `u64` | `60` | Rate-limit window duration in seconds |
| `security_audit_log_capacity` | `usize` | `1000` | Maximum number of audit-log entries kept in memory |

```toml
security_autonomy_level = "supervised"
max_tool_retries = 3
security_rate_limit_max = 60
security_rate_limit_window_secs = 60
security_audit_log_capacity = 1000
```

AI Agent

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `provider_name` | `String` | `"openai"` | Default AI provider name. Alias: `default_provider` |
| `provider_type` | `String` | `"openai"` | Provider type (used for API compatibility) |
| `provider_base_url` | `Option<String>` | `null` | Custom base URL for the provider API |
| `provider_model_id` | `String` | `"gpt-4o"` | Default model ID. Alias: `default_model` |
| `provider_api_key_env` | `Option<String>` | `null` | Environment variable name for the API key |
| `agent_max_turns` | `usize` | `4` | Maximum agent turns (tool-call loops) per request |
| `agent_max_tokens` | `usize` | `4096` | Maximum tokens for agent responses |
| `agent_system_prompt` | `Option<String>` | `null` | Additional system prompt appended to the identity (never replaces it) |

```toml
provider_name = "openai"
provider_type = "openai"
provider_base_url = "https://api.openai.com/v1"
provider_model_id = "gpt-4o"
provider_api_key_env = "OPENAI_API_KEY"
agent_max_turns = 4
agent_max_tokens = 4096
agent_system_prompt = "Always respond concisely."
```
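Since `provider_type` controls API compatibility and `provider_base_url` can point anywhere, the same fields can target any OpenAI-compatible endpoint, such as a self-hosted server. A sketch under that assumption (the URL, model ID, and env var name below are illustrative, not defaults):

```toml
# Targeting a self-hosted OpenAI-compatible server (all values illustrative)
provider_name = "local"
provider_type = "openai"                        # reuse the OpenAI wire format
provider_base_url = "http://localhost:8080/v1"  # assumed local endpoint
provider_model_id = "llama-3.1-8b-instruct"     # assumed model ID
provider_api_key_env = "LOCAL_API_KEY"          # assumed env var name
```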

Identity

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `identity_name` | `String` | `"Zenii"` | Display name of the AI assistant |
| `identity_description` | `String` | `"AI-powered assistant"` | Short description of the assistant |
| `identity_dir` | `Option<String>` | `{data_dir}/identity/` | Directory containing identity/persona markdown files |

```toml
identity_name = "Zenii"
identity_description = "AI-powered assistant"
identity_dir = "/home/user/.zenii/identity"
```

Skills

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `skills_dir` | `Option<String>` | `{data_dir}/skills/` | Directory containing skill definition files |
| `skill_max_content_size` | `usize` | `100000` | Maximum size in bytes for a skill's content |
| `skill_proposal_expiry_days` | `u32` | `7` | Days before pending skill proposals expire |

```toml
skills_dir = "/home/user/.zenii/skills"
skill_max_content_size = 100000
skill_proposal_expiry_days = 7
```

User Learning

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `learning_enabled` | `bool` | `true` | Whether the user learning system is active |
| `learning_denied_categories` | `Vec<String>` | `[]` | Categories of observations the system must not learn |
| `learning_max_observations` | `usize` | `10000` | Maximum number of stored user observations |
| `learning_observation_ttl_days` | `u32` | `365` | Days before observations expire |
| `learning_min_confidence` | `f32` | `0.5` | Minimum confidence threshold to store an observation |

```toml
learning_enabled = true
learning_denied_categories = ["medical", "financial"]
learning_max_observations = 10000
learning_observation_ttl_days = 365
learning_min_confidence = 0.5
```

Tools

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `tool_shell_timeout_secs` | `u64` | `30` | Timeout in seconds for shell command execution |
| `tool_file_read_max_lines` | `usize` | `10000` | Maximum lines to read from a file |
| `tool_file_search_max_results` | `usize` | `100` | Maximum results for file search operations |
| `tool_process_list_limit` | `usize` | `200` | Maximum number of processes to list |

```toml
tool_shell_timeout_secs = 30
tool_file_read_max_lines = 10000
tool_file_search_max_results = 100
tool_process_list_limit = 200
```

Web Search

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `web_search_timeout_secs` | `u64` | `30` | Timeout for web search requests |
| `web_search_max_results` | `usize` | `20` | Maximum number of web search results |

```toml
web_search_timeout_secs = 30
web_search_max_results = 20
```

Context Injection

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `context_injection_enabled` | `bool` | `true` | Whether context injection into agent prompts is active |
| `context_summary_model_id` | `String` | `"gpt-4o-mini"` | Model used for generating conversation summaries |
| `context_summary_provider_id` | `String` | `"openai"` | Provider used for summary generation |
| `context_reinject_gap_minutes` | `u32` | `30` | Minutes of inactivity before reinjecting full context |
| `context_reinject_message_count` | `u32` | `20` | Number of messages before triggering context reinjection |

```toml
context_injection_enabled = true
context_summary_model_id = "gpt-4o-mini"
context_summary_provider_id = "openai"
context_reinject_gap_minutes = 30
context_reinject_message_count = 20
```

Prompt Strategy

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `prompt_compact_identity` | `bool` | `true` | Use a compact axiom-based preamble instead of verbose prose. Reduces token usage by ~60-80% while maintaining response quality |
| `prompt_max_preamble_tokens` | `usize` | `1500` | Token budget for the system preamble. Overflow trims the lowest-priority dynamic context |

```toml
prompt_compact_identity = true
prompt_max_preamble_tokens = 1500
```

When `prompt_compact_identity` is true (the default), Zenii uses a 4-layer compact format:

  • Layer 0: Core identity (~80 tokens) -- name, version, location, OS, capabilities
  • Layer 1: Runtime state (~60 tokens) -- date, model, session, compact reasoning axioms
  • Layer 2: Dynamic context (variable) -- memories, user observations, skills, domain-specific details
  • Layer 3: Overrides -- custom system prompt, conversation summary

When false, the legacy verbose prose mode is used (`PromptComposer` + `ContextEngine`).

The token budget (`prompt_max_preamble_tokens`) acts as overflow protection. When the assembled preamble exceeds the budget, the lowest-priority dynamic context fragments are trimmed first.
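To opt back into the legacy preamble, or to give dynamic context more headroom before trimming kicks in, both knobs can be overridden together; a sketch (the budget value below is illustrative, not the default):

```toml
# Fall back to the legacy verbose prose preamble
prompt_compact_identity = false
# Raise the preamble budget so less dynamic context gets trimmed (illustrative)
prompt_max_preamble_tokens = 3000
```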

Context Management

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `context_strategy` | `String` | `"balanced"` | Context assembly strategy (`minimal`, `balanced`, `full`) |
| `context_max_history_messages` | `usize` | `20` | Maximum conversation history messages to include in context |
| `context_max_memory_results` | `usize` | `5` | Maximum memory recall results to include in context |
| `context_auto_extract` | `bool` | `true` | Whether to automatically extract key facts from conversations |
| `context_extract_interval` | `usize` | `3` | Extract facts every N messages |
| `context_summary_model` | `String` | `""` | Override model for context summarization (empty uses the default) |

```toml
context_strategy = "balanced"
context_max_history_messages = 20
context_max_memory_results = 5
context_auto_extract = true
context_extract_interval = 3
context_summary_model = ""
```

Embeddings

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `embedding_provider` | `String` | `"none"` | Embedding provider type: `none` (FTS5 only), `openai`, or `local` (FastEmbed) |
| `embedding_model` | `String` | `"BAAI/bge-small-en-v1.5"` | Model ID for embedding generation |
| `embedding_download_dir` | `Option<String>` | `null` | Directory for local embedding model downloads (defaults to the data dir) |

```toml
embedding_provider = "local"
embedding_model = "BAAI/bge-small-en-v1.5"
# embedding_download_dir = "/custom/path/models"
```
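Conversely, vector search can be switched off entirely; with the provider set to `none` (the default), memory recall relies on FTS5 keyword search alone and no embedding model is downloaded:

```toml
# FTS5-only memory: no embedding model is loaded or downloaded
embedding_provider = "none"
```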

Reasoning

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `agent_max_continuations` | `usize` | `1` | Maximum autonomous continuation turns for the reasoning engine |
| `tool_dedup_enabled` | `bool` | `true` | Deduplicate identical tool calls within a single request. Uses a per-request cache keyed by `hash(tool_name + args)` |
| `agent_reasoning_guidance` | `Option<String>` | `null` | Custom reasoning instructions appended to the agent system prompt |

```toml
agent_max_continuations = 1
tool_dedup_enabled = true
agent_reasoning_guidance = "Think step by step before taking actions."
```

Plugins

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `plugins_dir` | `Option<String>` | `{data_dir}/plugins/` | Directory containing installed plugins |
| `plugin_idle_timeout_secs` | `u64` | `300` | Seconds before idle plugin processes are stopped |
| `plugin_max_restart_attempts` | `u32` | `3` | Maximum restart attempts for crashed plugin processes |
| `plugin_execute_timeout_secs` | `u64` | `60` | Timeout for plugin tool execution |
| `plugin_auto_update` | `bool` | `false` | Whether to auto-update plugins on boot |

```toml
# plugins_dir = "/custom/path/plugins"
plugin_idle_timeout_secs = 300
plugin_max_restart_attempts = 3
plugin_execute_timeout_secs = 60
plugin_auto_update = false
```

Tool Permissions

Risk-based, per-surface tool permission system. See Architecture: Tool Permission System for details.

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `tool_permissions.low_risk_default` | `String` | `"allowed"` | Default permission for low-risk tools |
| `tool_permissions.medium_risk_default` | `String` | `"allowed"` | Default permission for medium-risk tools |
| `tool_permissions.high_risk_default` | `String` | `"denied"` | Default permission for high-risk tools |
| `tool_permissions.overrides` | `HashMap` | desktop/cli/tui: all high-risk allowed | Per-surface, per-tool overrides |

```toml
[tool_permissions]
low_risk_default = "allowed"
medium_risk_default = "allowed"
high_risk_default = "denied"

[tool_permissions.overrides.telegram]
memory = "denied"
web_search = "allowed"

[tool_permissions.overrides.desktop]
shell = "allowed"
file_read = "allowed"
file_write = "allowed"
```

Permission states: `allowed`, `denied`, `ask_once` (future), `ask_always` (future).

Channels

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `channels_enabled` | `Vec<String>` | `[]` | List of channel names to enable on startup |
| `channel_tool_policy` | `HashMap<String, Vec<String>>` | `{}` | Legacy per-channel tool allowlists (superseded by `tool_permissions`) |
| `telegram_polling_timeout_secs` | `u32` | `30` | Telegram long-polling timeout |
| `telegram_dm_policy` | `String` | `"allowlist"` | Telegram DM policy (`allowlist`, `open`, `deny`) |
| `telegram_retry_min_ms` | `u64` | `1000` | Minimum retry delay for Telegram API errors (milliseconds) |
| `telegram_retry_max_ms` | `u64` | `60000` | Maximum retry delay for Telegram API errors (milliseconds) |
| `telegram_require_group_mention` | `bool` | `true` | Whether the bot must be @mentioned in group chats to respond |

```toml
channels_enabled = ["telegram", "slack"]

# Tool permissions for channels are now managed via [tool_permissions]
# See the Tool Permissions section above

telegram_polling_timeout_secs = 30
telegram_dm_policy = "allowlist"
telegram_retry_min_ms = 1000
telegram_retry_max_ms = 60000
telegram_require_group_mention = true
```
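Putting the Telegram fields together, a sketch of a Telegram-only setup that answers in groups without an @mention and retries failed API calls on a tighter schedule (the values below are illustrative, not defaults):

```toml
# Telegram only; respond in groups without requiring an @mention (illustrative)
channels_enabled = ["telegram"]
telegram_require_group_mention = false
telegram_retry_min_ms = 500
telegram_retry_max_ms = 30000
```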

Scheduler

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `scheduler_tick_interval_secs` | `u64` | `1` | How often the scheduler checks for due jobs (seconds) |
| `scheduler_stuck_threshold_secs` | `u64` | `120` | Seconds before a running job is considered stuck |
| `scheduler_error_backoff_secs` | `Vec<u64>` | `[30, 60, 300, 900, 3600]` | Exponential backoff delays for failed jobs (seconds) |
| `scheduler_max_history_per_job` | `usize` | `100` | Maximum execution history entries per job |
| `scheduler_agent_turn_timeout_secs` | `u64` | `120` | Timeout for agent turns within scheduled jobs |
| `scheduler_heartbeat_file` | `Option<String>` | `null` | Path to heartbeat file (updated each tick for external monitoring) |

```toml
scheduler_tick_interval_secs = 1
scheduler_stuck_threshold_secs = 120
scheduler_error_backoff_secs = [30, 60, 300, 900, 3600]
scheduler_max_history_per_job = 100
scheduler_agent_turn_timeout_secs = 120
scheduler_heartbeat_file = "/tmp/zenii-heartbeat"
```

Credentials

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `keyring_service_id` | `String` | `"com.sprklai.zenii"` | OS keyring service identifier for credential storage |

```toml
keyring_service_id = "com.sprklai.zenii"
```

Self-Evolution

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `self_evolution_enabled` | `bool` | `true` | Whether the self-evolution system (skill proposals) is active |
| `learning_archive_threshold` | `f64` | `0.3` | Confidence threshold below which observations are archived |
| `learning_archive_after_days` | `u32` | `30` | Days before low-confidence observations are archived |

```toml
self_evolution_enabled = true
learning_archive_threshold = 0.3
learning_archive_after_days = 30
```

User Profile

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `user_name` | `Option<String>` | `null` | User's display name (e.g., `"John"`). Used in greetings and personalization |
| `user_timezone` | `Option<String>` | `null` | IANA timezone (e.g., `"America/New_York"`). Auto-detected on first run |
| `user_location` | `Option<String>` | `null` | Location/region description (e.g., `"New York, US"`). Used for context injection |

```toml
user_name = "John"
user_timezone = "America/New_York"
user_location = "New York, US"
```

Logging

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `log_level` | `String` | `"info"` | Log level for the tracing framework (`trace`, `debug`, `info`, `warn`, `error`) |

```toml
log_level = "info"
```

Environment Variable Overrides

| Variable | Description | Maps To |
| --- | --- | --- |
| `ZENII_TOKEN` | Gateway authentication token | `gateway_auth_token` |
| `ZENII_GATEWAY_URL` | Gateway URL override (used by the CLI and desktop app to connect to an external daemon instead of starting an embedded one) | N/A (runtime override, not a config field) |

Environment variables take precedence over config file values when supported.


Feature Flag Impact

Some configuration fields are only relevant when specific feature flags are enabled at compile time:

| Feature Flag | Relevant Config Fields |
| --- | --- |
| `local-embeddings` | `embedding_provider` (when set to `"local"`), `embedding_model`, `embedding_download_dir` |
| `channels` | `channels_enabled`, `tool_permissions` (channel surface overrides) |
| `channels-telegram` | `telegram_polling_timeout_secs`, `telegram_dm_policy`, `telegram_retry_min_ms`, `telegram_retry_max_ms`, `telegram_require_group_mention` |
| `channels-slack` | (uses `tool_permissions` for Slack surface overrides) |
| `channels-discord` | (uses `tool_permissions` for Discord surface overrides) |
| `scheduler` | `scheduler_tick_interval_secs`, `scheduler_stuck_threshold_secs`, `scheduler_error_backoff_secs`, `scheduler_max_history_per_job`, `scheduler_agent_turn_timeout_secs`, `scheduler_heartbeat_file` |

Fields can always be set in the config file regardless of feature flags -- they are simply ignored at runtime if the corresponding feature is not compiled in.


Example Full Config

```toml
# Gateway
gateway_host = "127.0.0.1"
gateway_port = 18981
gateway_auth_token = "my-secret-token"
gateway_cors_origins = ["http://localhost:18971"]
ws_max_connections = 32

# Logging
log_level = "info"

# Database
# data_dir = "~/.local/share/zenii" # uses platform default if unset

# AI Agent
provider_name = "openai"
provider_model_id = "gpt-4o"
agent_max_turns = 4
agent_max_tokens = 4096

# Identity
identity_name = "Zenii"
identity_description = "AI-powered assistant"

# Memory
memory_fts_weight = 0.4
memory_vector_weight = 0.6
memory_default_limit = 10
embedding_dim = 384

# Security
security_autonomy_level = "supervised"
max_tool_retries = 3
security_rate_limit_max = 60
security_rate_limit_window_secs = 60

# Tools
tool_shell_timeout_secs = 30
tool_file_read_max_lines = 10000

# Web Search
web_search_timeout_secs = 30
web_search_max_results = 20

# Embeddings
embedding_provider = "none"
embedding_model = "BAAI/bge-small-en-v1.5"

# Reasoning
agent_max_continuations = 1
tool_dedup_enabled = true

# Context
context_injection_enabled = true
context_strategy = "balanced"
context_max_history_messages = 20
context_auto_extract = true

# User Learning
learning_enabled = true
learning_max_observations = 10000
learning_min_confidence = 0.5

# User Profile
# user_name = "John"
# user_timezone = "America/New_York"
# user_location = "New York, US"

# Tool Permissions
[tool_permissions]
low_risk_default = "allowed"
medium_risk_default = "allowed"
high_risk_default = "denied"

# Channels (requires --features channels)
channels_enabled = []
telegram_dm_policy = "allowlist"

# Scheduler (requires --features scheduler)
scheduler_tick_interval_secs = 1

# Credentials
keyring_service_id = "com.sprklai.zenii"

# Self-Evolution
self_evolution_enabled = true
skill_proposal_expiry_days = 7
```