Setup Configurator
Build your oqto.setup.toml configuration file. This drives the
non-interactive installer so you can deploy oqto on a fresh server without
answering prompts. This configurator runs entirely in your browser; nothing is uploaded.
Deployment
How oqto runs on your server. These choices affect isolation, security, and resource usage.
Single User runs everything as your login user. Good for personal
machines and development. All agents share your filesystem and permissions.
Multi User creates isolated Linux users (oqto_username-xxxx) for each
platform user. Each gets their own home directory, systemd services, and file
permissions. Required for shared servers and production.
Local spawns agent processes directly on the host. Lower overhead,
easier debugging, and full access to local tools (compilers, git, etc.).
Container mode (Docker/Podman isolation) is planned but not yet available.
Auto-detect picks whichever container runtime is installed. Podman runs rootless by default, which is better for security; Docker requires the Docker daemon to be running.
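The choices above would map to a deployment section in the generated file. A minimal sketch, assuming hypothetical key names ([deployment], mode, and runtime are illustrative, not oqto's actual schema):

```toml
# Hypothetical sketch -- key names are assumptions, not oqto's real schema.
[deployment]
mode = "multi_user"   # "single_user" runs everything as your login user
runtime = "local"     # "container" (Docker/Podman) is planned but not yet available
```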
Network
HTTPS, domain, and logging. If you are deploying to a server with a public domain, enable Caddy for automatic TLS.
The public domain pointing to your server (A record). Caddy uses this to request a TLS certificate from Let's Encrypt. Ports 80 and 443 must be reachable from the internet for the ACME challenge.
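As a sketch, the network settings might look like this in the generated file (key names are illustrative assumptions; the domain is a placeholder):

```toml
# Hypothetical sketch -- key names are assumptions.
[network]
domain = "oqto.example.com"  # must have an A record pointing at this server
https = true                 # enable Caddy for automatic Let's Encrypt TLS
```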
Admin User
The first user account created during setup. This account has full admin privileges: managing users, viewing system status, and configuring providers. The password is never stored in this file; you will be prompted for it securely during installation.
Used for login and notifications. Can be changed later in the admin panel.
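A sketch of the admin section, assuming hypothetical key names (the username and email are placeholders; note the password is never written to this file):

```toml
# Hypothetical sketch -- key names are assumptions; the password is never stored here.
[admin]
username = "alice"
email = "alice@example.com"  # used for login and notifications
```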
LLM Providers
Select which LLM providers to configure. oqto uses eavs as an LLM proxy that sits
between agents and provider APIs. This gives you a single endpoint for all providers,
per-user virtual API keys, usage tracking, and rate limit management.
API keys are not stored in this file. During setup, the installer will check your
environment for existing keys (e.g. $OPENAI_API_KEY) and prompt for any
missing ones. You can add or change providers later by editing the eavs config.
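The provider selection above might be encoded along these lines (key and provider names are illustrative assumptions, not the actual schema):

```toml
# Hypothetical sketch -- key names are assumptions.
[providers]
enabled = ["openai", "anthropic"]  # API keys are read from the environment
                                   # (e.g. $OPENAI_API_KEY) at install time
```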
For Azure AI Foundry, pick the API type (OpenAI Chat, OpenAI Responses, or Anthropic)
and set the base URL for your Foundry resource (e.g. https://
~/.config/eavs/config.toml
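A provider entry in the eavs config referenced above might look like the following; all field names and the endpoint are assumptions for illustration, not eavs's documented schema:

```toml
# Hypothetical fragment of ~/.config/eavs/config.toml -- field names are assumptions.
[providers.azure]
api_type = "openai_chat"                      # or "openai_responses" / "anthropic"
base_url = "https://my-resource.example.com"  # your Foundry resource endpoint
```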
Agent Tools
CLI tools available to AI agents in their workspace. These extend what agents can
do beyond code editing: searching the web, managing memory, generating documents, and more.
All tools are installed to /usr/local/bin and available to all users.
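A sketch of how the tool selection might appear in the config; the key name and the tool names other than sx are illustrative assumptions:

```toml
# Hypothetical sketch -- key and tool names (except sx) are assumptions.
[tools]
enabled = ["sx", "memory", "docgen"]  # installed to /usr/local/bin for all users
```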
SearXNG
A self-hosted metasearch engine that aggregates results from multiple search engines
without tracking. Required for the sx tool. Runs on localhost:8888 with
Valkey for result caching. Uses ~100MB RAM.
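The SearXNG settings described above could be sketched as follows, assuming hypothetical key names:

```toml
# Hypothetical sketch -- key names are assumptions.
[tools.searxng]
enabled = true
port = 8888       # served on localhost only
cache = "valkey"  # Valkey caches search results (~100 MB RAM total)
```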
Server Hardening
Security measures for production servers exposed to the internet. These are standard Linux hardening practices and are strongly recommended for any public-facing deployment. Skip for local development.
Changing from the default 22 reduces automated brute-force attempts. Common alternatives: 2222, 2200, or any high port. Make sure your cloud provider's firewall allows the new port before changing.
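For example, the hardening choices might be recorded like this (key names are illustrative assumptions):

```toml
# Hypothetical sketch -- key names are assumptions.
[hardening]
ssh_port = 2222  # allow this port in your cloud provider's firewall before applying
```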
Deploy
Copy this command and paste it into an SSH session on your target server. It clones the repo, writes your config, and runs setup; you will be prompted interactively only for passwords and API keys.
The URL fragment contains your config as base64. setup.sh decodes it
locally -- nothing is sent to any server.
Bookmark this link or share it with your team. Opening it pre-fills the form with your exact settings. The config is encoded in the URL fragment (#) which is never sent to any server.