Production-ready decentralized AI inference on Tensora L2
A complete AI compute marketplace where:
- Subnets post AI inference jobs (ONNX models, LLMs)
- Miners execute real AI workloads and earn TORA
- Validators verify results and ensure quality
- Smart contracts handle payments and disputes on BSC
┌─────────────── BSC Mainnet (L1) ───────────────┐
│ │
│ SubnetJobs (0x...) ← Job marketplace │
│ SubnetRegistry ← Subnet tracking │
│ ValidatorRewards ← Reward distribution │
│ TORA Token ← Payment currency │
│ │
└─────────────────────────────────────────────────┘
▼
┌──────────── Tensora L2 (44444444) ─────────────┐
│ │
│ RPC: https://rpc.tensora.sh │
│ Events & transactions │
│ │
└─────────────────────────────────────────────────┘
▼
┌───────────── Off-Chain Services ───────────────┐
│ │
│ ┌────────────┐ ┌──────────────┐ ┌─────────┐│
│ │Miner Node │ │Validator Node│ │Coord ││
│ │ │ │ │ │ ││
│ │ONNX Runtime│ │Verify Results│ │Matchmake││
│ │vLLM Engine │ │Pay Rewards │ │Track ││
│ │ │ │ │ │ ││
│ └────────────┘ └──────────────┘ └─────────┘│
│ │
└─────────────────────────────────────────────────┘
▼
┌──────────── IPFS Storage ─────────────────────┐
│ │
│ Models (ONNX, GGUF) │
│ Input Data (images, text, embeddings) │
│ Output Results (verifiable hashes) │
│ │
└─────────────────────────────────────────────────┘
```bash
# Install
npm install -g @tensora/cli

# Start mining
tensora miner start --engine onnx

# Or for LLMs
tensora miner start --engine vllm --model llama2
```

```bash
# Post a job
tensora subnet jobs post \
  --subnet 1 \
  --fee 25 \
  --model ipfs://QmModel \
  --input ipfs://QmInput \
  --spec "4cpu,8ram,8vram,gpu"
```

```bash
# Start validator
tensora validator start

# Check pending verifications
tensora validator pending

# Claim rewards
tensora validator claim
```

- Subnet owner approves TORA
- Calls `SubnetJobs.postJob(spec)` on BSC
- Job broadcast to miners via events
- Miner node polls for jobs matching their hardware
- Downloads model from IPFS
- Executes inference (ONNX or vLLM)
- Generates Proof of Compute: `PoC = sign(resultHash + inputHash + envHash + timestamps)`
- Submits to chain: `submitResult(jobId, resultHash, proof)`
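The Proof of Compute step above can be sketched in Python. The payload layout and the use of an HMAC with a stand-in signing key are illustrative assumptions; the real miner node would sign with its wallet key rather than a shared secret.

```python
import hashlib
import hmac
import json
import time

def build_proof_of_compute(result_hash: str, input_hash: str,
                           env_hash: str, signing_key: bytes) -> dict:
    """Sketch of PoC = sign(resultHash + inputHash + envHash + timestamps).

    HMAC stands in for the wallet signature so the example stays
    self-contained; field names are illustrative.
    """
    payload = json.dumps({
        "resultHash": result_hash,
        "inputHash": input_hash,
        "envHash": env_hash,
        "timestamp": int(time.time()),
    }, sort_keys=True).encode()
    return {
        "digest": hashlib.sha256(payload).hexdigest(),
        "signature": hmac.new(signing_key, payload, hashlib.sha256).hexdigest(),
    }
```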
- Validator node monitors submitted jobs
- Re-executes 10% of jobs deterministically
- Challenges if result doesn't match
- After challenge window: calls `finalize(jobId)`
- 80% → Miner (instant transfer)
- 20% → ValidatorRewards contract (for validators)
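The verification sampling and the fee split above can be illustrated with a small sketch. `should_reverify` and `split_fee` are hypothetical helper names, and hashing the job ID is just one way to get deterministic ~10% sampling so every validator picks the same jobs.

```python
import hashlib

SAMPLE_RATE = 10   # re-execute roughly 10% of jobs
MINER_SHARE = 80   # 80% -> miner, 20% -> ValidatorRewards

def should_reverify(job_id: int) -> bool:
    """Deterministic sampling: all validators agree on which jobs to re-run."""
    digest = hashlib.sha256(str(job_id).encode()).digest()
    return digest[0] % SAMPLE_RATE == 0

def split_fee(fee_tora: int) -> tuple[int, int]:
    """Split a job fee into (miner_share, validator_pool_share)."""
    miner = fee_tora * MINER_SHARE // 100
    return miner, fee_tora - miner
```

For the 25 TORA fee from the CLI example, the split is 20 TORA to the miner and 5 TORA to the ValidatorRewards pool.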
✅ SubnetRegistry (0x3419dfa79a415a4599b2142d30d73c49692829c6)
- Jobs are per-subnet
- Subnet owners control parameters
✅ ValidatorRewards (0x404F245E672AE2832851fB0f1F3A3d8a07BaF34D)
- Validators earn from job verification
- Existing claiming mechanism works
✅ @tensora/cli (v1.0.1)
- Extends with `tensora miner` commands
- Uses existing wallet/keystore
✅ @tensora/subnet-sdk (v1.0.0)
- Add job management methods
- TypeScript SDK for developers
- `engine/onnx_runtime.py` - ONNX model execution
- `engine/vllm_runtime.py` - LLM inference (vLLM)
- `job_queue.py` - SQLite-backed job queue
- `job_worker.py` - Main execution loop
- `proof_submitter.ts` - Submit to BSC via viem
- `validator_engine.py` - Deterministic re-execution
- `verifier.py` - Result comparison logic
- `payout_agent.ts` - Call `finalize()` on BSC
- Job matchmaking
- Miner registry (hardware specs)
- Health monitoring
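Given the `--spec "4cpu,8ram,8vram,gpu"` format from the CLI example, the coordinator's matchmaking could be sketched as follows; `parse_spec` and `matches` are illustrative names, not part of the coordinator's actual API.

```python
def parse_spec(spec: str) -> dict:
    """Parse a job spec like "4cpu,8ram,8vram,gpu" into requirements."""
    req = {"cpu": 0, "ram": 0, "vram": 0, "gpu": False}
    for part in spec.split(","):
        if part == "gpu":
            req["gpu"] = True
            continue
        # Check "vram" before "ram" since "8vram" also ends with "ram"
        for key in ("cpu", "vram", "ram"):
            if part.endswith(key):
                req[key] = int(part[:-len(key)])
                break
    return req

def matches(miner: dict, spec: str) -> bool:
    """True if a miner's registered hardware satisfies the job spec."""
    req = parse_spec(spec)
    return (miner["cpu"] >= req["cpu"]
            and miner["ram"] >= req["ram"]
            and miner["vram"] >= req["vram"]
            and (miner["gpu"] or not req["gpu"]))
```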
```python
import onnxruntime as ort

def run_onnx(model_path, input_data):
    # Single-session inference over an ONNX model from IPFS
    session = ort.InferenceSession(model_path)
    outputs = session.run(None, input_data)
    return outputs
```

```python
from vllm import LLM, SamplingParams

def run_vllm(model_name, prompts):
    # seed=0 and temperature=0 keep generation deterministic for verification
    llm = LLM(model=model_name, seed=0)
    params = SamplingParams(temperature=0, max_tokens=100)
    outputs = llm.generate(prompts, params)
    return [out.outputs[0].text for out in outputs]
```
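Since validators compare re-executed results by hash, the outputs above need a canonical serialization before hashing. A minimal sketch, assuming JSON-serializable outputs (the exact on-chain hashing scheme is not specified here):

```python
import hashlib
import json

def result_hash(outputs) -> str:
    """Canonically serialize inference outputs before hashing so a
    validator's re-execution yields a byte-identical digest."""
    canonical = json.dumps(outputs, sort_keys=True,
                           separators=(",", ":")).encode()
    return "0x" + hashlib.sha256(canonical).hexdigest()
```

Sorted keys and fixed separators matter: two nodes serializing the same outputs differently would produce different hashes and trigger spurious challenges.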