# LLM

oatbar ships with `oatbar-llm`, a utility that uses LLMs to process data and return it in i3bar format. It can summarize logs, explain errors, fetch news, or simply generate interesting content for your bar.
## How it works

`oatbar-llm` runs the configured `[[command]]`s before invoking the LLM. The output of these commands is then fed into the LLM prompt as context. This means the LLM does not execute tools or commands itself: you have full control over what data is sent to the model for processing.

See LLM Configuration for the full reference.
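A minimal configuration illustrating this flow (a sketch; the field names follow the examples below, and you should adjust the provider and model to your setup):

```toml
# ~/.config/oatbar-llm/config.toml
[llm]
provider = "google"       # LLM provider
name = "gemini-2.5-flash" # model name

# Runs first; its output becomes the LLM prompt context.
[[command]]
name = "uptime"
command = "uptime"

# Each variable is one question asked about that context.
[[variable]]
name = "load_summary"
type = "string"
question = "Summarize the system load in 3 words."
```

`oatbar-llm` answers each `[[variable]]` question against the collected command output and prints the results in i3bar format for oatbar to consume.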
## Examples
> **Note:** These examples are illustrative. Because LLM output is non-deterministic, you may need to tune the prompts (questions) to get the exact output format or content you want for your specific model and use case.
### System Health Check

Use conky to generate a detailed, one-shot system report and have the LLM analyze it for potential bottlenecks or issues.
1. Create `~/.config/oatbar-llm/conky.conf`:

```lua
conky.config = {
    out_to_console = true,
    out_to_x = false,
    background = false,
    update_interval = 1,
    total_run_times = 1,
    use_spacer = 'none',
}

conky.text = [[
System Health Report
--------------------
Uptime: ${uptime}
Load Avg: ${loadavg}
Processes: ${running_processes} running of ${processes} total
CPU: ${cpu}%
RAM: ${memperc}% (${mem}/${memmax})
Swap: ${swapperc}% (${swap}/${swapmax})
Disk Usage:
  /     : ${fs_used /}/${fs_size /} (${fs_used_perc /}%)
  /home : ${fs_used /home}/${fs_size /home} (${fs_used_perc /home}%)
Network (eth0):
  Up:   ${upspeed eth0} (Total: ${totalup eth0})
  Down: ${downspeed eth0} (Total: ${totaldown eth0})
Top CPU:
  1. ${top name 1}: ${top cpu 1}%
  2. ${top name 2}: ${top cpu 2}%
  3. ${top name 3}: ${top cpu 3}%
  4. ${top name 4}: ${top cpu 4}%
  5. ${top name 5}: ${top cpu 5}%
Top Mem:
  1. ${top_mem name 1}: ${top_mem mem 1}%
  2. ${top_mem name 2}: ${top_mem mem 2}%
  3. ${top_mem name 3}: ${top_mem mem 3}%
  4. ${top_mem name 4}: ${top_mem mem 4}%
  5. ${top_mem name 5}: ${top_mem mem 5}%
]]
```
2. Configure `~/.config/oatbar-llm/config.toml`:

```toml
[[command]]
name = "conky_report"
command = "conky -c ~/.config/oatbar-llm/conky.conf"
interval = 1800

[[variable]]
name = "health_check"
type = "string"
question = "Analyze this system report and summarize the health status. Highlight any resource hogs. Up to 5 words."

[[variable]]
name = "health_report"
type = "string"
question = "Generate a detailed bulleted report of the system health based on the data."
write_to = "/tmp/health_report.md"
```
3. Configure oatbar:

```toml
[[command]]
name = "conky_ai"
command = "oatbar-llm"
interval = 1800

[[block]]
name = "health"
type = "text"
value = "${conky_ai:health_check.value}"
on_mouse_left = "xdg-open /tmp/health_report.md"
```
### Git Repository Status

Summarize uncommitted changes in your current project to keep you focused.
1. Configure `~/.config/oatbar-llm/config.toml`:

```toml
[llm]
provider = "google"
name = "gemini-2.5-flash"

[[command]]
name = "git_status"
command = "cd ~/Projects/my-project && git status -s && git diff --stat"

[[variable]]
name = "git_summary"
type = "string"
question = "Summarize the uncommitted changes in 3-5 words. If clean, say 'Clean'."
```
2. Configure oatbar:

```toml
[[command]]
name = "git_ai"
command = "oatbar-llm"
interval = 600

[[block]]
name = "git_status"
type = "text"
value = "Git: ${git_ai:git_summary.value}"
```
### Security Monitor

Monitor open ports and recent authentication failures for a quick security overview.
1. Configure `~/.config/oatbar-llm/config.toml`:

```toml
[llm]
provider = "google"
name = "gemini-2.5-flash"

[[command]]
name = "ports"
command = "ss -tuln"

[[command]]
name = "auth_logs"
command = "journalctl -u sshd -n 20 --no-pager"

[[variable]]
name = "security_alert"
type = "string"
question = "Analyze open ports and sshd logs. Is there any suspicious activity?"
allowed_answers = ["Safe", "Suspicious"]
```
2. Configure oatbar:

```toml
[[command]]
name = "security_ai"
command = "oatbar-llm"
interval = 3600

[[block]]
name = "security"
type = "text"
value = "Sec: ${security_ai:security_alert.value}"
```
### Weather & Outfit Advisor

Get clothing suggestions based on the current weather.
1. Configure `~/.config/oatbar-llm/config.toml`:

```toml
[llm]
provider = "google"
name = "gemini-2.5-flash"

[[command]]
name = "weather"
command = "curl -s 'https://api.open-meteo.com/v1/forecast?latitude=40.76&longitude=-73.99&current=temperature_2m,weather_code'"

[[variable]]
name = "outfit"
type = "string"
question = "Based on this weather JSON (temperature in Celsius), suggest a simple outfit (e.g., 'T-shirt & Shorts', 'Coat & Scarf'). Keep it under 5 words."
```
2. Configure oatbar:

```toml
[[command]]
name = "outfit_ai"
command = "oatbar-llm"
interval = 7200

[[block]]
name = "outfit"
type = "text"
value = "Wear: ${outfit_ai:outfit.value}"
```
### Standup Meeting Helper

Summarize your work from the last 24 hours to prepare for your daily standup meeting.
1. Configure `~/.config/oatbar-llm/config.toml`:

```toml
[llm]
provider = "google"
name = "gemini-2.5-flash"

[[command]]
name = "my_commits"
command = "cd ~/Projects/my-project && git log --author='My Name' --since='24 hours ago' --oneline"

[[variable]]
name = "standup_notes"
type = "string"
question = "Create a bulleted list of my completed tasks for a standup meeting."
write_to = "/tmp/standup_notes.md"
```
2. Configure oatbar:

```toml
[[command]]
name = "standup_ai"
command = "oatbar-llm"
interval = 3600

[[block]]
name = "standup"
type = "text"
value = "Standup Prep"
on_mouse_left = "xdg-open /tmp/standup_notes.md"
```
## Knowledge Base Examples

These examples demonstrate how to use the `knowledge_base` feature to provide static context to the LLM, allowing it to act as a specialized assistant.
### Code Review Helper (Style Guide Enforcer)

Check your code against your team's style guide.
1. Create `~/.config/oatbar-llm/style_guide.md`:

```markdown
# Team Style Guide

- Prefer `unwrap_or_else` over `unwrap`.
- Use `tracing` for logging, not `println!`.
- All public functions must have documentation.
- Variable names should be descriptive (no `x`, `y`, `temp`).
```
2. Configure `~/.config/oatbar-llm/config.toml`:

```toml
[llm]
provider = "google"
name = "gemini-2.5-flash"
knowledge_base = "/home/user/.config/oatbar-llm/style_guide.md"

[[command]]
name = "git_diff"
command = "cd ~/Projects/my-project && git diff --cached"

[[variable]]
name = "style_review"
type = "string"
question = "Review the git diff against the style guide. Point out any violations concisely."
```
### Focus & Schedule Assistant

Get reminders based on your personal schedule and priorities.
1. Create `~/.config/oatbar-llm/schedule.md`:

```markdown
# My Schedule & Priorities

- **Mornings (8am-12pm):** Deep work (Coding, Writing). No meetings.
- **Lunch:** 12pm-1pm.
- **Afternoons (1pm-5pm):** Meetings, Emails, Code Reviews.
- **Evenings:** Learning Rust, Gym.

**Current Focus:** Shipping the LLM module for Oatbar.
```
2. Configure `~/.config/oatbar-llm/config.toml`:

```toml
[llm]
provider = "google"
name = "gemini-2.5-flash"
knowledge_base = "/home/user/.config/oatbar-llm/schedule.md"

[[command]]
name = "current_time"
command = "date +%H:%M"

[[variable]]
name = "focus_tip"
type = "string"
question = "Based on the current time and my schedule, what should I be focusing on right now? Keep it short."
```
### Error Log Analyzer

Suggest next steps when system errors occur, based on a runbook.
1. Create `~/.config/oatbar-llm/runbook.md`:

```markdown
# Incident Runbook

- **High CPU:** Check `top`, identify process. If `cargo`, ignore. If unknown, kill.
- **Disk Full:** Clean `/tmp` and `~/.cache`. Check `docker system df`.
- **SSH Failures:** Check `auth.log` for repeated IPs. Ban with `fail2ban`.
- **OOM:** Check kernel logs. Restart service.
```
2. Configure `~/.config/oatbar-llm/config.toml`:

```toml
[llm]
provider = "google"
name = "gemini-2.5-flash"
knowledge_base = "/home/user/.config/oatbar-llm/runbook.md"

[[command]]
name = "sys_errors"
command = "journalctl -p err -n 10 --no-pager"

[[variable]]
name = "incident_action"
type = "string"
question = "Analyze the recent system errors. Based on the runbook, what is the recommended action?"
```
### Hacker News RSS Summary

Fetch the latest news and get a concise summary on your bar.
1. Configure `~/.config/oatbar-llm/config.toml`:

```toml
[llm]
provider = "google"
name = "gemini-2.5-flash"
knowledge_base = "/home/user/.config/oatbar-llm/hn_preferences.md"

[[command]]
name = "hn_rss"
command = "curl -s https://news.ycombinator.com/rss"

[[variable]]
name = "top_stories"
type = "string"
question = "Extract the top 3 most interesting headlines from this RSS feed and combine them into a single, short sentence separated by pipes."
```

Create `~/.config/oatbar-llm/hn_preferences.md` so the model knows which stories matter to you:

```markdown
I am interested in:

- Rust, Go, C++
- System programming, Linux, Kernel
- AI, LLMs, Machine Learning
- Security, Cryptography

I am NOT interested in:

- Web frameworks (React, Vue, etc.)
```
2. Configure oatbar:

```toml
[[command]]
name = "news"
command = "oatbar-llm"
interval = 10800 # Every 3 hours

[[block]]
name = "news_feed"
type = "text"
value = "HN: ${news:top_stories.value}"
```
## Tips & Best Practices
### Debugging Prompts

Before connecting `oatbar-llm` to oatbar, run it manually in your terminal to verify the output. Use `oatbar-llm --mode=debug` to see the raw response from the LLM, which is helpful for troubleshooting prompts.
### Prompt Engineering

LLMs are sensitive to how you ask questions.

- **Be specific:** Instead of "What's the status?", ask "Summarize the system status in 3 words based on these logs."
- **Define output:** Explicitly state the desired format (e.g., "Format: ...").
- **Iterate:** Use the debug mode to tweak your prompt until you get consistent results.
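The difference between a vague and a constrained question can be expressed directly in the `[[variable]]` definition. A sketch (the variable names are illustrative; `allowed_answers` is used the same way in the Security Monitor example):

```toml
# Too open-ended: output length and wording will drift between runs.
[[variable]]
name = "status_loose"
type = "string"
question = "What's the status?"

# Constrained: fixed vocabulary, so the bar output stays stable.
[[variable]]
name = "status_strict"
type = "string"
question = "Based on these logs, is the system healthy? Answer with exactly one word."
allowed_answers = ["OK", "Degraded", "Down"]
```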
### Quota Management

LLM API calls can be expensive or rate-limited.

- **Watch your usage:** Monitor your provider's dashboard.
- **Increase intervals:** For non-critical data (like weather or news), set the `interval` in oatbar to a higher value (e.g., `3600` for 1 hour, `10800` for 3 hours).
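For instance, a weather or outfit block rarely needs to refresh more than a few times a day. A sketch using the same oatbar `[[command]]` shape as the examples above:

```toml
[[command]]
name = "outfit_ai"
command = "oatbar-llm"
interval = 21600 # every 6 hours, i.e. at most 4 runs per day
```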
### Consolidating Queries

To save on API calls and context-window usage, combine related tasks into a single `oatbar-llm` configuration. Instead of one config for "CPU" and another for "Memory", fetch both metrics in the `[[command]]` section and ask for a combined summary populating multiple `[[variable]]`s.
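A sketch of such a consolidated configuration (the command names and questions are illustrative): both commands run first, and each `[[variable]]` question is answered from the same gathered context, so one run of `oatbar-llm` populates both bar blocks.

```toml
[llm]
provider = "google"
name = "gemini-2.5-flash"

# Both outputs are collected before the LLM is invoked.
[[command]]
name = "cpu"
command = "top -bn1 | head -20"

[[command]]
name = "mem"
command = "free -h"

# Multiple variables answered from the combined output.
[[variable]]
name = "cpu_summary"
type = "string"
question = "Summarize CPU usage in up to 4 words."

[[variable]]
name = "mem_summary"
type = "string"
question = "Summarize memory pressure in up to 4 words."
```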