10.5. Headless Mode¶
"Headless Mode" refers to running Backend.AI GO primarily as a background service or server, without relying on the graphical user interface (GUI) for daily interactions. This is particularly useful for setting up a dedicated inference server on a spare machine or managing the application remotely.
Concept¶
Although Backend.AI GO ships as a desktop app, its core runtime is shared:
- Shared Rust runtime: Handles model inference, process orchestration, the Management API, and the Continuum Router.
- Desktop transport: Tauri IPC from the embedded WebView.
- Headless transport: REST and SSE exposed by aigo-server, plus the WebUI served over HTTP.
This means headless mode is no longer a reduced "server-only" path. The goal is for headless WebUI and desktop UI to execute the same runtime logic unless a feature truly depends on desktop integration.
Operation¶
System Tray¶
The simplest form of "headless-like" operation is closing the main window.
- By default, closing the window minimizes Backend.AI GO to the System Tray (Menu Bar on macOS).
- The API server and model inference continue running in the background.
CLI Control¶
You can use the bundled aigo CLI to manage the application without opening the window.
# List loaded models
aigo model list
# Load a model
aigo model load --name "llama-3-8b-instruct"
# Check system stats
aigo system info
See the CLI Reference for full documentation.
Dedicated Headless Server (aigo-server)¶
Backend.AI GO provides a standalone headless binary, aigo-server.
In this mode:
- The tauri crate is not part of the aigo-server dependency graph.
- The Management API becomes the primary control plane.
- The WebUI connects over HTTP/SSE instead of Tauri IPC.
- Model pools, router management, scheduling, agents, memory, and provider/runtime coordination reuse the same shared runtime managers used by the desktop app.
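A typical session looks something like the sketch below. Note that the --port flag and the /api/health path are illustrative assumptions, not documented options; only the aigo-server binary name and the default port 8080 come from this guide, so check aigo-server --help and the Management API reference for the real flags and routes.

```shell
# Start the standalone headless server in the background.
# (--port is an assumed flag shown for illustration only.)
aigo-server --port 8080 &

# The WebUI and any HTTP client now talk to the same Management API
# over REST/SSE instead of Tauri IPC.
# (/api/health is a hypothetical endpoint used as an example.)
curl http://localhost:8080/api/health
```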
Remote Access (Server Mode)¶
To turn your local machine into a headless node for others:
- Go to Settings > Advanced.
- Enable Remote Access (Allow external connections).
- Set the API Port (default: 8080).
- (Optional) Set up a firewall rule to allow traffic on that port.
Now, other instances of Backend.AI GO, the WebUI, or curl/Python scripts can connect to your machine's IP address as if it were a server.
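A quick way to confirm the node is reachable is a plain curl from another machine. The IP below is a placeholder, and the request path is an assumption for illustration; substitute your server's address and a real Management API route.

```shell
# From a different machine on the network.
# Replace 192.0.2.10 with your headless node's IP address;
# 8080 is the default API Port from Settings > Advanced.
# A connection timeout here usually means the firewall rule is missing.
curl --connect-timeout 5 http://192.0.2.10:8080/
```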
Troubleshooting¶
Navigation Broken After OOBE Completion¶
If you complete the initial setup wizard in headless mode and the application then shows an endless stream of errors or cannot navigate to the main interface, you are hitting a known issue in earlier versions.
The root causes were:
- The monitoring store made direct Tauri IPC calls that fail in web/headless mode.
- The SPA fallback served index.html for unmatched /api/* routes, causing the frontend to receive HTML instead of JSON.
Both issues are fixed in current versions of Backend.AI GO. If you encounter this on an older version, upgrading resolves it.
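You can verify which behavior your build exhibits with a single request, based on the SPA-fallback symptom described above. The /api/does-not-exist path is just a deliberately bogus route chosen for this check, not a real endpoint.

```shell
# Request an /api/* route that cannot match anything.
# -i prints the response headers so the Content-Type is visible.
curl -i http://localhost:8080/api/does-not-exist

# A fixed build returns a JSON error response (application/json);
# an affected build falls through to the SPA and returns
# Content-Type: text/html with the contents of index.html.
```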
API Keys Lost After Restarting the Server¶
In headless mode, Backend.AI GO uses an encrypted file (encrypted_keys.json) to store API keys instead of the OS keychain, which requires a GUI. Earlier versions had a bug where:
- Adding a new key corrupted existing encrypted entries by re-saving them as plaintext, causing them to fail decryption on the next restart.
- On startup, force_encrypted_file_backend() unconditionally cleared the key registry (key_ids), so all keys were forgotten even if the encrypted file was intact.
Both issues are fixed in the current version. If you experience key loss after upgrading from an older version, re-enter your API keys once and they will persist correctly across restarts.