Validator Node

Validators query miners, score their outputs, and participate in Yuma consensus. This guide covers hardware requirements, scoring implementation, commit-reveal submission, and monitoring.

Overview

A validator node:

  • Queries Miners: Sends inference tasks to all active miners in a subnet

  • Scores Outputs: Evaluates responses based on accuracy, latency, consistency

  • Commits Scores: Submits hash(scores + salt) on-chain

  • Reveals Scores: Reveals plaintext scores and salt

  • Earns Rewards: Receives 41% of subnet emissions (trust-weighted)

Software Setup

Prerequisites


# Ubuntu 22.04
sudo apt update && sudo apt upgrade -y
sudo apt install -y docker.io docker-compose git curl

# Add user to docker group
sudo usermod -aG docker $USER
newgrp docker

Install Tensora Validator Client


# Clone validator client
git clone https://github.com/tensora-labs/validator-client.git
cd validator-client

# Copy environment template
cp .env.example .env

Example .env:


# RPC
TENSORA_RPC=https://rpc.tensora.org

# Validator identity
VALIDATOR_PRIVATE_KEY=0xabc123...

# Subnet configuration
SUBNET_ID=1
SUBNET_TYPE=Linguista

# Contracts
CONSENSUS_MODULE=0x...
MINER_REGISTRY=0x...
VALIDATOR_REGISTRY=0x...

# Scoring parameters
TASK_COUNT=10
TIMEOUT_SECONDS=120

Docker Compose

docker-compose.yml:


version: "3.8"
services:
  validator:
    image: tensoralabs/validator:latest
    env_file: .env
    restart: unless-stopped
    volumes:
      - ./logs:/app/logs
      - ./data:/app/data
    ports:
      - "9090:9090"  # Prometheus metrics
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"

Start:


docker-compose up -d

Validator Logic

Pseudocode


while True:
    # Wait for epoch start
    current_epoch = get_current_epoch(subnet_id)
    
    # Phase 1: Query miners (2 hours)
    miners = get_active_miners(subnet_id)
    tasks = generate_tasks(subnet_id, count=10)
    
    scores = {}
    for miner in miners:
        responses = []
        for task in tasks:
            try:
                response = query_miner(miner.endpoint, task, timeout=120)
                responses.append(response)
            except Timeout:
                responses.append(None)
        
        # Score miner (each task carries its own ground truth)
        accuracy = compute_accuracy(responses, tasks)
        latency = compute_avg_latency(responses)
        consistency = compute_consistency(responses)
        
        scores[miner.id] = combine_scores(accuracy, latency, consistency)
    
    # Phase 2: Commit scores (1 hour)
    salt = random_bytes(32)
    commitment = keccak256(abi.encode(scores, salt))
    tx = consensus_module.commitScores(subnet_id, current_epoch, commitment)
    wait_for_confirmation(tx)
    
    # Wait for reveal phase
    wait_until_reveal_phase(current_epoch)
    
    # Phase 3: Reveal scores (30 min)
    tx = consensus_module.revealScores(subnet_id, current_epoch, scores, salt)
    wait_for_confirmation(tx)
    
    # Wait for next epoch
    wait_until_next_epoch()

Task Generation

Tasks should be:

  • Objective: Ground truth verifiable (e.g., known translations, labeled images)

  • Diverse: Cover model capabilities

  • Representative: Match real-world usage

Example (Linguista NLP):


def generate_translation_tasks(count=10):
    # Use test sets with known translations
    dataset = [
        {"en": "Hello world", "es": "Hola mundo"},
        {"en": "Good morning", "es": "Buenos días"},
        # ... more pairs
    ]
    
    tasks = []
    for i in range(count):
        pair = random.choice(dataset)
        tasks.append({
            "task_id": uuid.uuid4().hex,
            "type": "translation",
            "input": {
                "text": pair["en"],
                "source_lang": "en",
                "target_lang": "es"
            },
            "ground_truth": pair["es"]
        })
    
    return tasks

Scoring Metrics

Linguista (NLP)

BLEU Score (translation quality):


from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def compute_bleu(reference: str, candidate: str) -> float:
    reference_tokens = reference.split()
    candidate_tokens = candidate.split()
    # Smoothing avoids zero scores on short sentences that have no
    # higher-order n-gram overlap with the reference
    return sentence_bleu(
        [reference_tokens],
        candidate_tokens,
        smoothing_function=SmoothingFunction().method1,
    )

Combined Score:


from statistics import pstdev

def score_miner(responses, tasks):
    bleu_scores = []
    latencies = []

    for response, task in zip(responses, tasks):
        if response is None:
            bleu_scores.append(0.0)
            latencies.append(999_999)  # Timeout penalty (ms)
        else:
            bleu = compute_bleu(task["ground_truth"], response["result"]["text"])
            bleu_scores.append(bleu)
            latencies.append(response["processing_time_ms"])

    avg_bleu = sum(bleu_scores) / len(bleu_scores)
    avg_latency = sum(latencies) / len(latencies)

    # Normalize latency (lower is better, map to 0-1 scale): 10 s = 0, 0 s = 1
    latency_score = max(0, 1 - (avg_latency / 10_000))

    # High variance across tasks = low consistency; clamp to [0, 1]
    consistency_score = max(0.0, 1 - pstdev(bleu_scores))

    # Combined: 70% BLEU, 20% latency, 10% consistency
    final_score = (avg_bleu * 0.7 + latency_score * 0.2 + consistency_score * 0.1) * 100
    return int(final_score)

Visiona (Vision)

CLIP Score (image-text alignment):


import requests
import torch
from PIL import Image
from transformers import CLIPProcessor, CLIPModel

# Load once at startup, not per call
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def compute_clip_score(prompt: str, image_url: str) -> float:
    image = Image.open(requests.get(image_url, stream=True).raw)
    inputs = processor(text=[prompt], images=image, return_tensors="pt", padding=True)

    with torch.no_grad():
        outputs = model(**inputs)

    # With a single prompt, softmax over texts is always 1.0; use the raw
    # image-text similarity logit (temperature-scaled cosine similarity)
    return outputs.logits_per_image[0, 0].item()

Predictia (Forecasting)

RMSE (prediction accuracy):


import numpy as np

def compute_rmse(predictions: list, actuals: list) -> float:
    return np.sqrt(np.mean((np.array(predictions) - np.array(actuals))**2))
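
RMSE is unbounded and lower-is-better, so it must be mapped onto the bounded score submitted on-chain. The exact normalization is subnet-specific; one possible mapping (the 1/(1 + x) decay and the `scale` parameter are illustrative, not a canonical formula):

```python
def rmse_to_score(rmse: float, scale: float = 1.0) -> int:
    # Map RMSE (lower is better, unbounded) onto a 0-100 score.
    # `scale` sets how quickly the score decays and should match the
    # typical error magnitude of the prediction target.
    return int(100 / (1 + rmse / scale))
```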

On-Chain Registration

Stake TORA


export STAKING_HUB="0x..."
export TORA_ADDRESS="0x..."
export SUBNET_ID="1"
export STAKE_AMOUNT="15000000000000000000000"  # 15,000 TORA
export LOCK_DURATION="2592000"  # 30 days

# Approve
cast send $TORA_ADDRESS "approve(address,uint256)" $STAKING_HUB $STAKE_AMOUNT \
  --rpc-url https://rpc.tensora.org \
  --private-key $VALIDATOR_PRIVATE_KEY

# Stake
cast send $STAKING_HUB "stake(uint256,uint256,uint256)" \
  $SUBNET_ID $STAKE_AMOUNT $LOCK_DURATION \
  --rpc-url https://rpc.tensora.org \
  --private-key $VALIDATOR_PRIVATE_KEY

Register as Validator


export VALIDATOR_REGISTRY="0x..."

cast send $VALIDATOR_REGISTRY "registerValidator(uint256,uint256)" \
  $SUBNET_ID $STAKE_AMOUNT \
  --rpc-url https://rpc.tensora.org \
  --private-key $VALIDATOR_PRIVATE_KEY

Verify Registration


cast call $VALIDATOR_REGISTRY \
  "validators(uint256,address)(address,uint256,uint256,uint256,uint256,bool)" \
  $SUBNET_ID $YOUR_VALIDATOR_ADDRESS \
  --rpc-url https://rpc.tensora.org

The call should return your stake amount and active = true.

Commit-Reveal Implementation

Commit Phase (Off-chain Node.js example)


const fs = require("fs");
const ethers = require("ethers");

// Compute commitment
const scores = [85, 70, 90, 65];  // Scores for miners 0-3
const salt = ethers.randomBytes(32);
const commitment = ethers.keccak256(
    ethers.AbiCoder.defaultAbiCoder().encode(
        ["uint256[]", "bytes32"],
        [scores, salt]
    )
);

// Submit commitment
const consensusModule = new ethers.Contract(CONSENSUS_MODULE_ADDRESS, ABI, wallet);
const tx = await consensusModule.commitScores(subnetId, epoch, commitment);
await tx.wait();

// Store salt securely for reveal phase
fs.writeFileSync(`./data/epoch_${epoch}_salt.txt`, ethers.hexlify(salt));

Reveal Phase


// Load salt (stored as a hex string at commit time, so read it back as text)
const salt = fs.readFileSync(`./data/epoch_${epoch}_salt.txt`, "utf8").trim();

// Reveal scores
const tx = await consensusModule.revealScores(subnetId, epoch, scores, salt);
await tx.wait();

Timing

The validator client should:

  • Commit within blocks 0–1000 of epoch

  • Reveal within blocks 1001–1100 of epoch

Cron Schedule (6-hour epochs):


# Commit at epoch start + 3h
0 3,9,15,21 * * * /app/commit.sh

# Reveal at epoch start + 4h
0 4,10,16,22 * * * /app/reveal.sh
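
A helper for locating the current phase from a block number might look like the following sketch. The epoch length is an assumption (6-hour epochs at roughly 3-second blocks); the commit and reveal boundaries are the block offsets listed above:

```python
# Assumed epoch length: 6-hour epochs at ~3-second blocks
EPOCH_BLOCKS = 7200
COMMIT_WINDOW = (0, 1000)     # commit within blocks 0-1000
REVEAL_WINDOW = (1001, 1100)  # reveal within blocks 1001-1100

def block_in_epoch(block_number: int) -> int:
    # Offset of a chain block within the current epoch
    return block_number % EPOCH_BLOCKS

def phase_for_block(offset: int) -> str:
    # Map an offset within the epoch to the consensus phase
    if COMMIT_WINDOW[0] <= offset <= COMMIT_WINDOW[1]:
        return "commit"
    if REVEAL_WINDOW[0] <= offset <= REVEAL_WINDOW[1]:
        return "reveal"
    return "idle"
```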

Monitoring

Metrics to Track

| Metric | Description | Alert Threshold |
| --- | --- | --- |
| Uptime | % of epochs participated | <95% |
| Trust Score | Consensus alignment | <0.5 |
| Commit Success Rate | % commits confirmed | <98% |
| Reveal Success Rate | % reveals confirmed | <98% |
| Gas Cost per Epoch | BNB or TORA spent | >0.01 BNB |
| Earnings per Epoch | TORA received | Decreasing trend |
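
A point-in-time check against these thresholds can feed your alerting (a sketch; the metric names are hypothetical, and the earnings trend check is omitted since it needs historical data):

```python
# Thresholds mirroring the table above; each entry returns True
# when the metric is outside its healthy range
THRESHOLDS = {
    "uptime":              lambda v: v < 0.95,
    "trust_score":         lambda v: v < 0.5,
    "commit_success_rate": lambda v: v < 0.98,
    "reveal_success_rate": lambda v: v < 0.98,
    "gas_cost_bnb":        lambda v: v > 0.01,
}

def breached_alerts(metrics: dict) -> list:
    # Names of all supplied metrics that breach their threshold
    return [name for name, bad in THRESHOLDS.items()
            if name in metrics and bad(metrics[name])]
```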

Prometheus Exporter

Add to validator client:


from prometheus_client import Gauge, Counter, start_http_server

trust_score_gauge = Gauge('validator_trust_score', 'Current trust score')
earnings_gauge = Gauge('validator_earnings', 'Earnings this epoch')
commit_success_counter = Counter('validator_commits_success', 'Successful commits')
commit_fail_counter = Counter('validator_commits_failed', 'Failed commits')

# Expose /metrics on the port mapped in docker-compose.yml
start_http_server(9090)

# Update metrics each epoch
trust_score_gauge.set(get_trust_score())
earnings_gauge.set(get_epoch_earnings())

Grafana Dashboard

Queries:

  • Trust score over time: validator_trust_score

  • Commit success rate: rate(validator_commits_success[1h]) / (rate(validator_commits_success[1h]) + rate(validator_commits_failed[1h]))

  • APY estimate: (validator_earnings * 76 * 100) / stake_amount

Alerts

Discord Webhook:


import os

import requests

def alert_discord(message):
    webhook_url = os.getenv("DISCORD_WEBHOOK")
    requests.post(webhook_url, json={"content": f"🚨 Validator Alert: {message}"})

# Example usage
if trust_score < 0.5:
    alert_discord(f"Trust score dropped to {trust_score}")

Common Pitfalls

Late Reveal

Symptom: Reveal transaction reverts, 3% slashed.

Solution: Submit reveals early in the window (blocks 1001–1010, not 1090–1100) to allow for network delays.
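
One way to make early submission robust is a small retry wrapper around the reveal call (a sketch; `send_tx` stands for any zero-argument callable that submits the transaction and waits for confirmation):

```python
import time

def submit_with_retry(send_tx, attempts: int = 3, delay_s: float = 10.0):
    # Retry transient RPC failures; re-raise the final failure so the
    # caller can alert on it. Submitting early in the window leaves
    # room for these retries before the window closes.
    for attempt in range(attempts):
        try:
            return send_tx()
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(delay_s)
```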

Salt Mismatch

Symptom: Reveal fails, 5% slashed.

Cause: Salt not stored correctly, or different salt used in commit.

Solution: Use deterministic salt generation:


salt = keccak256(epoch + validator_private_key)
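
Python's standard library has no keccak256, so here is a sketch of the same idea using HMAC-SHA256 as the derivation function. Any stable 32-byte derivation works, because the on-chain commitment hashes the scores and salt together; swap in keccak256 via eth-utils if you want to match the pseudocode exactly:

```python
import hashlib
import hmac

def derive_salt(private_key_hex: str, epoch: int) -> bytes:
    # HMAC-SHA256 keyed by the validator key over the epoch number:
    # deterministic, 32 bytes, and reproducible at reveal time without
    # storing anything on disk
    key = bytes.fromhex(private_key_hex.removeprefix("0x"))
    return hmac.new(key, epoch.to_bytes(8, "big"), hashlib.sha256).digest()
```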

Scoring All Identical

Symptom: All miners get same score (e.g., 80).

Cause: Scoring logic not differentiated.

Solution: Use objective metrics with natural variance (BLEU, CLIP, RMSE).

Low Trust Score

Symptom: Trust score drops to 0.3, earnings drop 50%.

Cause: Scores diverge from consensus.

Solution:

  • Verify ground truth accuracy

  • Check if other validators use different metrics

  • Adjust scoring weights to align with consensus

Insufficient Gas

Symptom: Commit/reveal transactions fail with "insufficient funds".

Solution: Maintain a buffer of 0.1 BNB (or equivalent TORA for the Paymaster).

Multi-Subnet Validation

Run multiple validator instances for different subnets:


# docker-compose.yml
version: "3.8"
services:
  validator-linguista:
    image: tensoralabs/validator:latest
    environment:
      - SUBNET_ID=1
      - SUBNET_TYPE=Linguista
      - VALIDATOR_PRIVATE_KEY=${VALIDATOR_KEY}
  
  validator-visiona:
    image: tensoralabs/validator:latest
    environment:
      - SUBNET_ID=2
      - SUBNET_TYPE=Visiona
      - VALIDATOR_PRIVATE_KEY=${VALIDATOR_KEY}

Stake and register separately for each subnet.

Earnings Estimation

Assumptions:

  • Subnet: Linguista

  • Stake: 15,000 TORA (30% of total subnet stake)

  • Trust score: 0.75

  • Validator pool: 172.2 TORA per epoch

  • Effective weight: (0.30 × 0.75) / (sum of all effective weights)

Simplified:

  • Your share: ~25% (example)

  • Reward per epoch: 172.2 × 0.25 = 43 TORA

Annual:

  • Epochs per year: 76

  • Annual reward: 43 × 76 = 3,268 TORA

  • APY: (3,268 / 15,000) × 100 = 21.8%

Plus delegator commission (if you accept delegations):

  • Delegator pool for your share: 172.2 × 0.25 = 43 TORA

  • Commission (10%): 4.3 TORA per epoch

  • Annual commission: 4.3 × 76 = 327 TORA

Total annual: 3,268 + 327 = 3,595 TORA (23.97% APY)
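
As a sanity check, the arithmetic above (using the rounded 43 TORA per-epoch reward and 4.3 TORA commission) can be reproduced in a few lines:

```python
def validator_apy(stake: float, reward_per_epoch: float,
                  commission_per_epoch: float, epochs_per_year: int = 76):
    # Annualized TORA and APY from the per-epoch validator reward plus
    # delegator commission (epochs_per_year matches the figure above)
    annual = (reward_per_epoch + commission_per_epoch) * epochs_per_year
    return annual, round(annual / stake * 100, 2)

annual, apy = validator_apy(stake=15_000, reward_per_epoch=43,
                            commission_per_epoch=4.3)
# annual ≈ 3,595 TORA, apy = 23.97 — matching the figures above
```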

Security Best Practices

Private Key Security

  • Store in encrypted keystore, not plaintext .env

  • Use hardware wallet (Ledger) for high-stake validators

  • Separate hot wallet (small amount) for commits/reveals

Server Security


# Firewall: Allow only necessary ports
sudo ufw allow 22/tcp    # SSH
sudo ufw allow 9090/tcp  # Prometheus (internal only)
sudo ufw enable

# Fail2ban for SSH brute-force protection
sudo apt install fail2ban

# Auto-updates
sudo apt install unattended-upgrades
sudo dpkg-reconfigure --priority=low unattended-upgrades

Backup


# Backup salt files and config
crontab -e

# Add: 0 2 * * * rsync -av /home/user/validator-client/data/ /backup/validator-data/
