LLM Configuration

oatbar-llm is configured via ~/.config/oatbar-llm/config.toml.

Structure

The configuration file consists of three main sections:

  1. [llm]: Global LLM provider settings.
  2. [[command]]: External commands to gather context.
  3. [[variable]]: Variables to extract or generate using the LLM.
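
A minimal sketch of how the three sections fit together (the provider and model name are taken from the examples in this page; the command and variable shown here are illustrative placeholders):

[llm]
provider="ollama"
name="llama3"

[[command]]
name="date"
command="date" # this command's output is added to the prompt context

[[variable]]
name="greeting"
question="Write a short greeting for the current time of day."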

[llm] Section

Configures the LLM provider and global behavior.

Field                | Type     | Default               | Description
provider             | string   | Required              | The LLM provider. Supported: google, openai, anthropic, mistral, xai, ollama.
name                 | string   | Required              | The model name (e.g., gemini-2.5-flash, gpt-4o).
role                 | string   | Default system prompt | Custom system prompt to define the AI’s persona and goal.
temperature          | float    | 0.6                   | Controls randomness (0.0 = deterministic, 1.0 = creative).
max_tokens           | int      | 3000                  | Maximum number of tokens in the response.
url                  | string   | None                  | Custom API URL (useful for local LLMs or proxies).
knowledge_base       | path     | None                  | Path to a text file containing static context/preferences to include in the prompt. Must be an absolute path (no ~).
output_format_prompt | string   | None                  | Custom instruction for the output format (required when using the custom output mode).
retries              | int      | 5                     | Number of retries for failed API calls.
back_off             | duration | 1s                    | Initial backoff duration for retries.
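
As a hedged sketch, an [llm] section using these fields might look like the following; the role text and knowledge_base path are placeholders, and it assumes durations such as back_off are written as quoted strings like "1s":

[llm]
provider="google"
name="gemini-2.5-flash"
role="You are a terse status-bar assistant." # illustrative persona
temperature=0.6
max_tokens=3000
knowledge_base="/home/user/.config/oatbar-llm/preferences.txt" # placeholder; absolute path, no ~
retries=5
back_off="1s" # assumed string form for the duration type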

[[command]] Section

Defines shell commands to run. Their output is fed to the LLM as context.

Field   | Type   | Default        | Description
command | string | Required       | The shell command to execute.
name    | string | command string | A unique name to refer to this command’s output in the prompt context.
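
A sketch of a [[command]] entry; the command and name here are illustrative:

[[command]]
name="load" # referenced by this name in the prompt context
command="uptime"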

[[variable]] Section

Defines the questions to ask the LLM and how to handle the answers.

Field           | Type   | Default  | Description
name            | string | Required | The key for the variable in the output JSON.
question        | string | Required | The prompt/question the LLM answers to populate this variable.
type            | string | string   | The expected data type: string, number, boolean.
allowed_answers | list   | None     | A list of valid string values (an enum) to restrict the output.
max_length      | int    | None     | Maximum length of the string response.
write_to        | path   | None     | If set, the variable’s value will be written to this file. Must be an absolute path (no ~).
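
A sketch of a [[variable]] entry combining the constraint fields; the name, question, allowed answers, and file path are all illustrative:

[[variable]]
name="mood"
question="Summarize the current system load in one word."
type="string"
allowed_answers=["calm", "busy", "overloaded"]
max_length=12
write_to="/tmp/oatbar-llm-mood.txt" # placeholder; absolute path, no ~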

Output Modes

oatbar-llm supports different output modes via the --mode CLI flag:

  • json (default): Outputs a JSON object suitable for oatbar (i3bar format).
  • debug: Prints the full prompt and raw response for debugging.
  • custom: Outputs raw text based on output_format_prompt. Useful for generating reports or files.
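
The custom mode relies on output_format_prompt from the [llm] section; a hedged sketch, where the prompt wording is illustrative:

[llm]
provider="openai"
name="gpt-4o"
output_format_prompt="Write a plain-text report with one line per variable."

The mode itself is still selected on the command line, e.g. with --mode custom.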

Configuring Keys

API keys are not stored in the configuration file. Instead, oatbar-llm reads them from specific files in the configuration directory (~/.config/oatbar-llm/).

Provider  | Key File Path
Google    | ~/.config/oatbar-llm/google_api_key
OpenAI    | ~/.config/oatbar-llm/openai_api_key
Anthropic | ~/.config/oatbar-llm/anthropic_api_key
Mistral   | ~/.config/oatbar-llm/mistral_api_key
xAI       | ~/.config/oatbar-llm/xai_api_key
Ollama    | Not required

Ensure each file contains only the API key. Surrounding whitespace is trimmed, but avoid extra newlines or spaces.

Ollama Configuration

Ollama does not require an API key. However, you may need to specify the URL if it’s not running on the default port.

[llm]
provider="ollama"
name="llama3"
url="http://localhost:11434" # Optional, defaults to this value

CLI Options

  • --config <FILE>: Path to a custom config file (default: ~/.config/oatbar-llm/config.toml).
  • --mode <MODE>: Output mode (json, debug, custom).