
Build Faster with OmniSync API

Everything you need to submit AI workloads, run GPU nodes, and integrate OmniSync into your existing ML pipeline — in under 10 minutes.

⚡ Quick Start

Get your first AI job running on OmniSync's decentralized GPU network in three steps.

1. Install the CLI

```bash
# macOS / Linux
curl -sSL https://install.omnisync.io | bash

# Windows (PowerShell)
irm https://install.omnisync.io/win | iex

# Verify installation
omnisync --version
# → omnisync v1.0.0-beta
```
2. Authenticate with your API key

```bash
omnisync auth login --key omni_sk_your_api_key_here
# → Authenticated as user@example.com
# → Wallet: 7xKp...3mNq (14.2 $OMNI balance)
```
3. Submit your first job

```bash
omnisync job run \
  --image pytorch/pytorch:2.1.0-cuda11.8 \
  --gpu RTX4090 \
  --vram 24GB \
  --script ./train.py

# → Job ID: job_a9f3b2c1
# → Nodes allocated: 3 (DE, SG, US)
# → Estimated cost: 0.08 $OMNI/hr
# → Status: RUNNING ✓
```

Installation

System Requirements

| Component | Minimum | Recommended |
| --- | --- | --- |
| OS | Ubuntu 20.04 / macOS 12 / Win 10 | Ubuntu 22.04 LTS |
| CPU | 4 cores | 8+ cores |
| RAM | 8 GB | 16 GB+ |
| GPU (node only) | RTX 2070 / 8 GB VRAM | RTX 4090 / A100 |
| Network | 100 Mbps | 1 Gbps+ |
| Disk | 50 GB SSD | 500 GB NVMe |

Authentication

All API requests require an API key passed in the Authorization header.

```http
Authorization: Bearer omni_sk_your_api_key_here
```

Where to find your API key: Dashboard → Settings → API Keys → Create New Key. Keys are prefixed with omni_sk_.
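Outside the CLI, the same header works with any HTTP client. A minimal sketch using only Python's standard library; the request is built but not sent, so the snippet runs offline:

```python
import urllib.request

API_KEY = "omni_sk_your_api_key_here"

# Build an authenticated request against the jobs endpoint.
req = urllib.request.Request(
    "https://api.omnisync.io/v1/jobs",
    headers={"Authorization": f"Bearer {API_KEY}"},
)

# Sending it is one call, commented out to keep the snippet offline:
#   with urllib.request.urlopen(req) as resp:
#       body = resp.read()
print(req.get_header("Authorization"))  # → Bearer omni_sk_your_api_key_here
```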

Node Setup — Become a Provider

Turn your idle GPU into a revenue-generating node. Providers earn $OMNI for every hour their hardware processes jobs on the network.

```bash
# Register your node (requires a 500 $OMNI stake).
# --min-price is the minimum $OMNI you will accept per GPU-hour.
omnisync node register \
  --wallet YOUR_SOLANA_WALLET_ADDRESS \
  --min-price 0.04 \
  --gpu-tier auto-detect

# → Node ID: OMNI-NODE-7f2a9b4e
# → Hardware detected: RTX 4090 · 24GB VRAM · 82.6 TFLOPS
# → Stake locked: 500 $OMNI
# → Status: ACTIVE — now earning
```

Node Configuration

The node config file lives at ~/.omnisync/node.toml:

```toml
# ~/.omnisync/node.toml

[node]
wallet      = "YOUR_SOLANA_WALLET"
min_price   = 0.04    # $OMNI per GPU-hour
max_jobs    = 3       # concurrent jobs
auto_accept = true

[hardware]
gpu_ids         = [0]  # which GPUs to expose
vram_reserve_gb = 2    # keep 2 GB for the system

[network]
bandwidth_limit_mbps = 500
region = "EU-WEST"
```

REST API — Jobs

Base URL: https://api.omnisync.io/v1

| Method | Endpoint | Description |
| --- | --- | --- |
| POST | /jobs | Submit a new compute job |
| GET | /jobs/{job_id} | Get job status and results |
| GET | /jobs | List all jobs for the account |
| DELETE | /jobs/{job_id} | Cancel a running job |

POST /jobs — Parameters

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| image | string | required | Docker image to run |
| gpu_type | string | optional | Preferred GPU model (e.g. RTX4090) |
| min_vram_gb | integer | optional | Minimum VRAM in GB |
| script | string | required | Path or URL to entrypoint script |
| env | object | optional | Environment variables |
| max_price | float | optional | Max $OMNI per GPU-hour |
| region | string | optional | Preferred region (e.g. EU, US, APAC) |
Example response:

```json
{
  "job_id": "job_a9f3b2c1",
  "status": "running",
  "nodes": ["OMNI-NODE-DE-4821", "OMNI-NODE-SG-0042"],
  "cost_per_hr": 0.08,
  "started_at": "2025-05-11T04:25:00Z",
  "proof_hash": "0xf3a9...b2c1"
}
```
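Submitting a job over raw HTTP can be sketched with only Python's standard library. The payload field names come from the parameter table above; the values are illustrative, and the urlopen call is commented out so nothing is actually sent:

```python
import json
import urllib.request

payload = {
    "image": "pytorch/pytorch:2.1.0-cuda11.8",
    "script": "./train.py",
    "min_vram_gb": 16,
    "max_price": 0.10,          # illustrative ceiling in $OMNI per GPU-hour
    "env": {"EPOCHS": "10"},
}

req = urllib.request.Request(
    "https://api.omnisync.io/v1/jobs",
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": "Bearer omni_sk_...",
        "Content-Type": "application/json",
    },
    method="POST",
)

# To actually submit:
#   with urllib.request.urlopen(req) as resp:
#       job = json.load(resp)   # e.g. {"job_id": "...", "status": "..."}
print(req.get_method(), req.full_url)  # → POST https://api.omnisync.io/v1/jobs
```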

Python SDK

```bash
pip install omnisync
```

```python
from omnisync import OmniSync

client = OmniSync(api_key="omni_sk_...")

# Submit a training job
job = client.jobs.create(
    image="pytorch/pytorch:2.1.0-cuda11.8",
    script="./train.py",
    min_vram_gb=16,
    env={"EPOCHS": "10", "BATCH_SIZE": "32"},
)

print(job.id)       # job_a9f3b2c1
print(job.status)   # running

# Wait for completion
result = job.wait(timeout=3600)
print(result.output_url)  # download results
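If you prefer explicit polling over job.wait(), a loop along these lines works. Here fetch_status is a stand-in for whatever status call you use (for example, re-reading the job through the client), since the exact SDK surface may differ:

```python
import time

TERMINAL = {"completed", "failed", "cancelled"}

def poll_until_done(fetch_status, interval_s=1.0, timeout_s=3600.0):
    """Poll fetch_status() until it returns a terminal state or time runs out."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        status = fetch_status()
        if status in TERMINAL:
            return status
        time.sleep(interval_s)
    raise TimeoutError("job did not finish in time")

# Usage with a fake status source standing in for an SDK call:
states = iter(["queued", "running", "completed"])
print(poll_until_done(lambda: next(states), interval_s=0.0))  # → completed
```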

PyTorch Integration

OmniSync is fully compatible with standard PyTorch training scripts. No code changes required — just point OmniSync at your existing train.py.

Data Privacy: Your training data and model weights are never accessible to node operators. All workloads run in encrypted Docker containers with ZK-verified computation.

Error Codes

| Code | Meaning | Resolution |
| --- | --- | --- |
| 4001 | Insufficient $OMNI balance | Top up your wallet before submitting jobs |
| 4002 | No matching nodes available | Relax your GPU or VRAM requirements |
| 4003 | Job timed out | Increase the timeout or split the work into smaller jobs |
| 4004 | Proof of Computation failed | The job is redistributed automatically; contact dev support if the error persists |
| 5001 | Network congestion | Retry with exponential backoff |
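The backoff suggested for 5001 can be sketched as follows. Both submit and CongestionError are placeholders for whatever call raised the error and the real exception type in your client:

```python
import random
import time

class CongestionError(Exception):
    """Stand-in for the error raised on code 5001."""

def with_backoff(submit, retries=5, base_s=1.0, cap_s=30.0):
    """Retry submit() with capped exponential backoff plus full jitter."""
    for attempt in range(retries):
        try:
            return submit()
        except CongestionError:
            if attempt == retries - 1:
                raise  # out of attempts; surface the error
            delay = min(cap_s, base_s * 2 ** attempt)
            time.sleep(delay * random.random())  # full jitter

# Usage: a fake submit that fails twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise CongestionError
    return "job_a9f3b2c1"

print(with_backoff(flaky, base_s=0.01))  # → job_a9f3b2c1
```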

Changelog

v1.0.0-beta — May 2025