
What is WorldLand Cloud

WorldLand Cloud is a decentralized GPU cloud service that connects GPU providers with customers who need compute resources for AI/ML workloads.

Service Overview

┌─────────────────────────────────────────────────────────────────┐
│                    WorldLand Cloud Service                      │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│   Customer                                Provider              │
│   ────────                                ────────              │
│   • Browse available GPUs                 • Register GPU nodes  │
│   • Create GPU containers                 • Earn WL tokens      │
│   • SSH access to containers              • Set pricing         │
│   • Pay-as-you-go billing                 • Monitor usage       │
│                                                                 │
│                       ┌─────────────┐                           │
│                       │  WorldLand  │                           │
│                       │   Platform  │                           │
│                       └─────────────┘                           │
│                              │                                  │
│         ┌────────────────────┼────────────────────┐             │
│         ▼                    ▼                    ▼             │
│    ┌─────────┐          ┌─────────┐          ┌─────────┐        │
│    │ API     │          │  K8s    │          │ Smart   │        │
│    │ Server  │          │ Cluster │          │Contract │        │
│    └─────────┘          └─────────┘          └─────────┘        │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘

Core Features

1. GPU Container Rental

Customers can rent GPU-enabled containers with the following resource options (a sample create request follows the table):

| Resource | Options |
|----------|---------|
| GPU | NVIDIA GPUs (RTX 4090, RTX 3090, Tesla T4, A100, etc.) |
| CPU | 2-64 cores |
| Memory | 8GB - 256GB |
| Storage | 20GB - 500GB |
| Duration | 1 hour - 30 days |
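
The exact request schema is not spelled out in this overview, but a rental request against the POST /api/v1/jobs endpoint (documented under API Endpoints below) might look roughly like the sketch below. The host name, auth header, and body field names are illustrative assumptions, not the confirmed API contract.

```bash
# Hypothetical sketch only: create a GPU container via the jobs API.
# Host, auth header, and body field names are assumptions, not the real schema.
curl -X POST "https://api.worldland.example/api/v1/jobs" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <your-token>" \
  -d '{
        "gpu_model": "RTX 4090",
        "gpu_count": 1,
        "cpu_cores": 8,
        "memory_gb": 32,
        "storage_gb": 100,
        "duration_hours": 24,
        "image": "pytorch/pytorch:latest"
      }'
```

A successful response would presumably return the job ID along with the SSH connection details (provider IP and NodePort) used in the next feature.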

2. Instant SSH Access

Every container comes with:

  • Root SSH access
  • Pre-installed CUDA drivers
  • Configurable password
  • Public IP + NodePort

```bash
# Connect to your GPU container
ssh root@<provider-ip> -p <nodeport>

# Example
ssh root@123.45.67.89 -p 30001
```

3. Pre-configured Images

| Image | Use Case |
|-------|----------|
| nvidia/cuda:12.0.0-devel-ubuntu22.04 | General CUDA development |
| pytorch/pytorch:latest | PyTorch training/inference |
| tensorflow/tensorflow:latest-gpu | TensorFlow workloads |
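
Once connected over SSH, it is worth confirming that the chosen image actually sees the GPU. A minimal check, assuming the pytorch/pytorch:latest image, might look like this:

```bash
# Run inside the rented container: confirm the NVIDIA driver and GPU are visible.
nvidia-smi

# Assuming the pytorch/pytorch:latest image: check that PyTorch can reach CUDA.
python -c "import torch; print('CUDA available:', torch.cuda.is_available())"
```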

4. Provider Selection

Choose providers based on the following criteria (an example search query follows the list):

  • GPU model and count
  • Geographic location
  • Price per hour
  • Availability
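
These criteria map naturally onto the provider search endpoint listed under API Endpoints below. A search might look like the following sketch; the host and query parameter names are assumptions for illustration only.

```bash
# Hypothetical sketch: filter providers by GPU model, location, and hourly price.
# The endpoint path is documented below; the host and query parameters are assumptions.
curl "https://api.worldland.example/api/v1/providers/search?gpu_model=RTX%204090&region=ap-northeast&max_price_per_hour=1.5"
```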

Technical Architecture

┌──────────────────────────────────────────────────────────────────┐
│                         FRONTEND (Next.js)                       │
│          Dashboard / Job Management / Provider Console           │
└──────────────────────────────┬───────────────────────────────────┘
                               │ REST API
┌──────────────────────────────▼───────────────────────────────────┐
│                    K8S-PROXY-SERVER (Go/Gin)                     │
│  ┌─────────────┐  ┌─────────────┐  ┌─────────────────────────┐   │
│  │ Job Handler │  │ Provider    │  │ Wallet Auth Handler     │   │
│  │ (GPU Jobs)  │  │ Handler     │  │ (EIP-712 Signature)     │   │
│  └──────┬──────┘  └──────┬──────┘  └────────────┬────────────┘   │
│         │                │                      │                │
│  ┌──────▼──────┐  ┌──────▼──────────────────────▼──────┐         │
│  │ Job Manager │  │          Orchestrator              │         │
│  │             │  │  - Provider Registration           │         │
│  │             │  │  - Node Management                 │         │
│  │             │  │  - Resource Allocation             │         │
│  └──────┬──────┘  └────────────────────────────────────┘         │
└─────────┼────────────────────────────────────────────────────────┘

    ┌─────▼─────────────────────────────────────────────┐
    │           Kubernetes Cluster                      │
    │  ┌─────────────────────────────────────────────┐  │
    │  │ Worker Nodes (GPU Providers)                │  │
    │  │ ┌─────────────┐  ┌─────────────┐            │  │
    │  │ │ GPU Pod     │  │ GPU Pod     │            │  │
    │  │ │ (SSH + GPU) │  │ (SSH + GPU) │            │  │
    │  │ └─────────────┘  └─────────────┘            │  │
    │  └─────────────────────────────────────────────┘  │
    └───────────────────────────────────────────────────┘

Key Components

| Component | Technology | Function |
|-----------|------------|----------|
| Frontend | Next.js | User dashboard and management |
| API Server | Go/Gin | REST API for job and provider management |
| Job Manager | Go | GPU container lifecycle management |
| Orchestrator | Go | Provider registration and resource allocation |
| Kubernetes | K8s | Container orchestration |
| Smart Contract | Solidity | Payment (GPUVault) |

API Endpoints

Job Management

| Method | Endpoint | Description |
|--------|----------|-------------|
| POST | /api/v1/jobs | Create GPU container |
| GET | /api/v1/jobs | List my jobs |
| GET | /api/v1/jobs/:id | Get job status |
| DELETE | /api/v1/jobs/:id | Delete job |
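
A rough walkthrough of the container lifecycle using these endpoints is sketched below; the host and auth header are placeholder assumptions.

```bash
# Hypothetical usage of the job endpoints; host and auth header are placeholders.
API="https://api.worldland.example"
AUTH="Authorization: Bearer <your-token>"

# List my jobs
curl -H "$AUTH" "$API/api/v1/jobs"

# Check the status of one job
curl -H "$AUTH" "$API/api/v1/jobs/<job-id>"

# Delete the job when finished
curl -X DELETE -H "$AUTH" "$API/api/v1/jobs/<job-id>"
```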

Provider Management

| Method | Endpoint | Description |
|--------|----------|-------------|
| GET | /api/v1/providers | List providers |
| GET | /api/v1/providers/search | Search providers |
| GET | /api/v1/providers/gpu-availability | Real-time GPU availability |
| GET | /api/v1/providers/:id | Get provider details |
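
For a quick look at what hardware is currently rentable, the read-only provider endpoints can be called directly; the host is again a placeholder.

```bash
# Hypothetical usage: browse providers and check real-time GPU availability.
API="https://api.worldland.example"

curl "$API/api/v1/providers"
curl "$API/api/v1/providers/gpu-availability"
curl "$API/api/v1/providers/<provider-id>"
```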
