Quickstart Guide

Get Backend.AI GO up and running and start your first local AI chat in 5 minutes.

Step 1: Install Backend.AI GO

Download the installer for your platform from the official website or the GitHub Releases page.

  • macOS: Open the .dmg file and drag Backend.AI GO to your Applications folder.
  • Windows: Run the .exe or .msi installer and follow the prompts.
  • Linux: Install the .deb package or use the .flatpak bundle (example commands below).
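
On Linux, the exact commands depend on your distribution and on the file name of the release you downloaded. As a rough sketch for a Debian-based system (the file names below are placeholders, not actual release names):

    # Install the downloaded .deb package (resolves dependencies automatically)
    sudo apt install ./backend-ai-go_<version>_amd64.deb

    # Or install the Flatpak bundle
    flatpak install ./backend-ai-go_<version>.flatpak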

For more details, see the Installation Guide.

Step 2: Download Your First Model

When you first open Backend.AI GO, the model library is empty, so you need to download a model before you can start chatting.

  1. Click on the Search (Hugging Face) icon in the sidebar.
  2. Type in a popular model name like Gemma3-4B, Qwen3-4B, or gpt-oss-20B.
  3. Look for models tagged as GGUF (most common) or MLX (if you are on macOS).
  4. Click the Download button next to a model variant (Q4_K_M is usually a good balance of speed and quality; see the sizing note after this list).
  5. Wait for the download to complete in the Downloads tab.
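
If you are unsure which variant will fit on your machine, a rough estimate helps: Q4_K_M quantization stores weights at roughly 4.5–5 bits each, so a 4-billion-parameter model works out to about 4 × 10⁹ parameters × ~4.8 bits ÷ 8 ≈ 2.4 GB on disk, and loading it needs at least that much free RAM (or VRAM), plus some headroom for the context window. Treat these numbers as approximations rather than exact file sizes.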

Step 3: Load the Model

Once downloaded, your model will appear in the Local Models library.

  1. Go to the Models tab.
  2. Find your downloaded model and click the Load button.
  3. The status bar at the bottom shows loading progress. Once it reads "Ready," the model is loaded into your system's memory and active.

Step 4: Start Chatting!

  1. Click on the Chat icon in the sidebar.
  2. Type a message in the text box at the bottom (e.g., "Hello! Can you explain quantum physics in simple terms?").
  3. Press Enter and watch your local AI respond!

Next Steps

Now that you've completed your first chat, explore more advanced features: