Quickstart Guide
Get Backend.AI GO up and running and start your first local AI chat in 5 minutes.
Step 1: Install Backend.AI GO
Download the installer for your platform from the official website or the GitHub Releases page.
- macOS: Open the `.dmg` file and drag Backend.AI GO to your Applications folder.
- Windows: Run the `.exe` or `.msi` installer and follow the prompts.
- Linux: Install the `.deb` package or use the `.flatpak` bundle.
For more details, see the Installation Guide.
Step 2: Download Your First Model
When you first open Backend.AI GO, your model library will be empty. You need to download a model before you can chat.
- Click on the Search (Hugging Face) icon in the sidebar.
- Type in a popular model name like `Gemma3-4B`, `Qwen3-4B`, or `gpt-oss-20B`.
- Look for models tagged as GGUF (most common) or MLX (if you are on macOS).
- Click the Download button next to a model variant (`Q4_K_M` is usually a good balance of speed and quality).
- Wait for the download to complete in the Downloads tab.
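If you are unsure which GGUF variant to pick, the usual quality-versus-size trade-off can be sketched as a simple preference ranking. The filenames and the preference order below are illustrative assumptions about the common `<model>-<quant>.gguf` naming scheme, not part of Backend.AI GO itself:

```python
# Sketch: pick a GGUF file by a simple quantization preference order.
# The order below is an assumed rule of thumb (balance of quality vs. size),
# not an official recommendation.
PREFERENCE = ["Q4_K_M", "Q5_K_M", "Q4_K_S", "Q8_0", "Q3_K_M", "Q2_K"]

def pick_variant(filenames):
    """Return the available file whose quant tag ranks highest."""
    def rank(name):
        for i, tag in enumerate(PREFERENCE):
            if tag.lower() in name.lower():
                return i
        return len(PREFERENCE)  # unknown quants sort last
    return min(filenames, key=rank)

files = [
    "gemma3-4b-Q2_K.gguf",
    "gemma3-4b-Q4_K_M.gguf",
    "gemma3-4b-Q8_0.gguf",
]
print(pick_variant(files))  # → gemma3-4b-Q4_K_M.gguf
```

Lower-bit variants (`Q2_K`, `Q3_K_M`) are smaller but noticeably lossier; `Q8_0` is near-lossless but roughly twice the size of `Q4_K_M`.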
Step 3: Load the Model
Once downloaded, your model will appear in the Local Models library.
- Go to the Models tab.
- Find your downloaded model and click the Load button.
- The status bar at the bottom will show the progress. Once it says "Ready," the model is active in your system's memory.
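Whether a model fits in memory can be estimated before you load it. The formula below is a rough rule of thumb (the ~20% overhead factor is an assumption for KV cache and runtime buffers), not a figure reported by Backend.AI GO:

```python
def approx_model_ram_gb(params_billion, bits_per_weight, overhead=1.2):
    """Rough RAM estimate for a quantized model.

    params_billion: parameter count in billions (e.g. 4 for a 4B model)
    bits_per_weight: ~4.5 for Q4_K_M, 8 for Q8_0, 16 for FP16
    overhead: assumed multiplier for KV cache and runtime buffers
    """
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total * overhead / 1e9

# A 4B model at ~4.5 bits per weight needs roughly 2.7 GB of RAM:
print(round(approx_model_ram_gb(4, 4.5), 1))  # → 2.7
```

By the same estimate, a 20B model at 4-bit quantization wants well over 10 GB, which is why smaller 4B models are the safer first download.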
Step 4: Start Chatting!
- Click on the Chat icon in the sidebar.
- Type a message in the text box at the bottom (e.g., "Hello! Can you explain quantum physics in simple terms?").
- Press Enter and watch your local AI respond!
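Many local model runtimes also expose an OpenAI-compatible HTTP API for use from scripts. Whether and where Backend.AI GO serves one is an assumption here; the `localhost` URL below is a placeholder, and the request shape is the generic OpenAI chat-completions format:

```python
import json
import urllib.request

def build_chat_payload(model, message):
    """Build an OpenAI-style chat-completion request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": message}],
    }

def send_chat(url, model, message):
    """POST the payload to an OpenAI-compatible endpoint (placeholder URL)."""
    body = json.dumps(build_chat_payload(model, message)).encode()
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Example usage (placeholder endpoint -- not executed here):
# reply = send_chat("http://localhost:8000/v1/chat/completions",
#                   "Gemma3-4B", "Hello! Explain quantum physics simply.")
# print(reply["choices"][0]["message"]["content"])
```

The in-app chat needs none of this; the sketch is only for readers who want to script against a local endpoint later.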
Next Steps
Now that you've completed your first chat, explore more advanced features:
- Using Agent Mode to perform complex tasks.
- Connecting Cloud Providers to combine local and cloud AI.
- Benchmarking to see how fast your machine really is.