Core Concepts

Tensor Commits Protocol

The security foundation of Theseus: publicly verifiable, tamper-evident computation with under 1% overhead.

  • <1% proof generation overhead
  • <0.1% verification time
  • ~2ms check time per proof

Overview

Tensor-commit protocols enable verifiable ML by proving a model was executed correctly. Traditional verification via recomputation is prohibitively expensive for large models.

Theseus' Tensor Commits provide batch verification and reduce opening costs through a novel application of KZG commitment schemes extended to multi-dimensional tensor structures.
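To make the KZG mechanics concrete, here is a minimal pedagogical sketch of a polynomial commitment over a flattened tensor. Real KZG hides the secret evaluation point behind elliptic-curve pairings; this toy version hands the verifier the secret directly, so it is illustrative only and not secure. All names (`commit`, `open_at`, `verify`) and the field modulus are assumptions for illustration, not the Theseus API.

```python
# Toy KZG-style polynomial commitment (INSECURE: the secret point s is
# used directly; real KZG replaces the final check with a pairing).
Q = 2**61 - 1  # illustrative prime field modulus

def poly_eval(coeffs, x):
    """Horner evaluation of p(x) mod Q, coefficients in ascending order."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % Q
    return acc

def commit(tensor_flat, s):
    """Interpret the flattened tensor as coefficients of p(x); C = p(s)."""
    return poly_eval(tensor_flat, s)

def open_at(coeffs, z, s):
    """Opening at z: value v = p(z) and witness w = q(s),
    where q(x) = (p(x) - v) / (x - z) (synthetic division)."""
    n = len(coeffs)
    quot = [0] * (n - 1)
    quot[n - 2] = coeffs[n - 1]
    for k in range(n - 2, 0, -1):
        quot[k - 1] = (coeffs[k] + z * quot[k]) % Q
    v = (coeffs[0] + z * quot[0]) % Q
    return v, poly_eval(quot, s)

def verify(C, z, v, w, s):
    """Check p(s) - v == q(s) * (s - z); real KZG does this in a pairing."""
    return (C - v) % Q == (w * (s - z)) % Q
```

Because the whole tensor collapses into one field element, the commitment is constant-size, and an opening proves a single evaluation without revealing the remaining coefficients.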

How Tensor Commits Work

Figure: Tensor Commits Verification Process

Figure Explanation

A sovereign agent's token stream is fed auto-regressively into an LM Transformer whose weight tensors and intermediate activations are all bound by a single tensor commitment. Each block's commitment lives in a Terkle ("tensor-commit") tree, so a small set of Merkle-path openings plus KZG pairing checks suffices to verify every attention score, residual update, non-linearity, and final logit computation.

Key Achievements

<1% Proof Generation

Minimal impact on inference performance. Practical for production workloads.

<0.1% Verification Time

Verifiers check proofs in milliseconds. Thousands can audit simultaneously.

Efficient & Scalable

  • O(log n) verification complexity
  • Proof size <1MB for frontier models
  • 1000+ simultaneous verifiers
  • Sublinear scaling with model size

Terkle Trees

A Terkle tree (tensor Merkle tree) has leaves that are sub-tensors and internal nodes that carry tensor commitments instead of hash values.

Structure

  • Each dimension j has mⱼ blocks
  • Each leaf cℓ is a commitment of sub-tensor Tℓ
  • Parent nodes commit to the concatenation of their children's tensors
  • The root cᵣₒₒₜ is the global model fingerprint
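The structure above can be sketched with ordinary hashing standing in for the leaf tensor commitments (the real scheme commits leaves with the KZG-style commitment, not a hash; everything here, including the power-of-two leaf count, is an illustrative assumption):

```python
import hashlib

def h(*parts):
    m = hashlib.sha256()
    for p in parts:
        m.update(p)
    return m.digest()

def leaf_commit(sub_tensor):
    # Stand-in for a real tensor commitment of sub-tensor T_l
    return h(b"leaf", repr(sub_tensor).encode())

def build_tree(leaves):
    """Build a Terkle-style tree; assumes a power-of-two leaf count."""
    levels = [leaves]
    while len(levels[-1]) > 1:
        cur = levels[-1]
        levels.append([h(b"node", cur[i], cur[i + 1])
                       for i in range(0, len(cur), 2)])
    return levels  # levels[-1][0] is the root fingerprint

def merkle_path(levels, idx):
    """Siblings from leaf to root: the selective-opening proof."""
    path = []
    for level in levels[:-1]:
        path.append((level[idx ^ 1], idx & 1))  # (sibling, am-I-right-child)
        idx //= 2
    return path

def verify_path(leaf, path, root):
    node = leaf
    for sibling, is_right in path:
        node = h(b"node", sibling, node) if is_right else h(b"node", node, sibling)
    return node == root
```

The path has logarithmic length, which is where the "efficient proofs" and "selective opening" properties come from: one leaf can be opened against the root without touching the other sub-tensors.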

Benefits

  • Batch verification: Multiple ops in one proof
  • Selective opening: Without revealing full model
  • Efficient proofs: Logarithmic proof size
  • Hierarchical: Natural fit for NN layers

Verification Process

1. Model Registration

The prover uploads weights with a Tensor Commit; the commitment is stored on-chain as the canonical fingerprint.

2. Inference Execution

The prover runs the forward pass and emits a proof containing the opening, input embeddings, layer outputs, and Merkle path.

3. Verification

Every verifier checks every inference: ~2ms check time per proof, results are gossiped once, and 2/3 BFT agreement finalizes the result.
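The three steps can be simulated end to end with hashes standing in for the real commitments and proofs (the model id, the linear "forward pass", and the 2/3 tally are all illustrative assumptions; the real proof carries openings and a Merkle path, not a bare hash):

```python
import hashlib

def fingerprint(weights):
    # Stand-in for the on-chain Tensor Commit of the weights
    return hashlib.sha256(repr(weights).encode()).hexdigest()

# Step 1: registration — the commitment is stored "on-chain"
chain = {"model-1": fingerprint([0.1, -0.3, 0.7])}

# Step 2: inference — the prover runs a (toy) forward pass and emits a
# proof binding the registered fingerprint to the (input, output) pair
def prove(weights, x):
    y = sum(w * x for w in weights)
    proof = hashlib.sha256(
        (fingerprint(weights) + repr((x, y))).encode()).hexdigest()
    return y, proof

# Step 3: verification — each verifier recomputes the binding and votes;
# a 2/3 supermajority of votes finalizes the inference
def verify_claim(model_id, x, y, proof):
    expected = hashlib.sha256(
        (chain[model_id] + repr((x, y))).encode()).hexdigest()
    return proof == expected

y, proof = prove([0.1, -0.3, 0.7], 2.0)
votes = sum(verify_claim("model-1", 2.0, y, proof) for _ in range(9))
accepted = votes * 3 >= 2 * 9  # 2/3 BFT threshold
```

Note that verification never re-runs the model: a verifier only checks that the claimed output is bound to the registered fingerprint.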

Performance Comparison

Operation          Latency   Proof Size   Gas Cost
TMATMUL 512x512    4.1 ms    230 KB       18K
TSTREAM 4x512      8.6 ms    400 KB       27K
TCOMMIT 70B        22 ms     470 KB       120K

* Gas costs based on base-load multiplier m = 1.0

LLM-Specific Optimizations

Token Embeddings

Committed polynomially with positional encoding using homomorphic properties

Layer Normalization

Mean/variance via polynomial commitments, inverse sqrt via polynomial approximation
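One standard way to keep the inverse square root polynomial-friendly is Newton iteration, since each step uses only additions and multiplications; whether Theseus uses exactly this approximation is an assumption here, and `rsqrt_poly` / `layer_norm` are hypothetical names for illustration:

```python
def rsqrt_poly(x, y0=1.0, iters=6):
    """Approximate 1/sqrt(x) with Newton steps y <- y * (1.5 - 0.5*x*y*y).
    Each step is a polynomial in x and y, so a prover can commit to it.
    Converges for y0 in (0, sqrt(3/x)); y0=1.0 assumes x < 3."""
    y = y0
    for _ in range(iters):
        y = y * (1.5 - 0.5 * x * y * y)
    return y

def layer_norm(vec, eps=1e-5):
    """LayerNorm with the division replaced by the polynomial rsqrt."""
    n = len(vec)
    mean = sum(vec) / n
    var = sum((v - mean) ** 2 for v in vec) / n
    inv_std = rsqrt_poly(var + eps)
    return [(v - mean) * inv_std for v in vec]
```

The point is not numerical novelty but proof shape: a fixed number of add/multiply steps fits a polynomial commitment, whereas a native division or square root does not.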

Multi-Head Attention

Q, K, V matrices committed individually, attention scores polynomially approximated

Residual Connections

Handled via commitment homomorphism, layers reuse prior commitments
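The homomorphism is easy to see with the toy evaluation-style commitment: committing is linear, so the commitment to `x + f(x)` is just the sum of the two existing commitments. The fixed point `S` and the modulus are illustrative (and insecure, since `S` is public here):

```python
Q = 2**61 - 1       # illustrative prime field modulus
S = 1234567891      # toy public evaluation point (real schemes hide this)

def commit(tensor):
    """Additively homomorphic toy commitment: evaluate the tensor's
    coefficient polynomial at S. commit(a) + commit(b) == commit(a + b)."""
    acc, power = 0, 1
    for t in tensor:
        acc = (acc + t * power) % Q
        power = (power * S) % Q
    return acc

x = [3, 1, 4, 1]                       # residual-stream input
fx = [2, 7, 1, 8]                      # pretend sub-layer output f(x)
residual = [a + b for a, b in zip(x, fx)]

# No recommitment of x is needed: reuse the two existing commitments
assert commit(residual) == (commit(x) + commit(fx)) % Q
```

This is why each Transformer layer can reuse the prior layer's commitments instead of recommitting the whole residual stream.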

Mixture-of-Experts

Sparse expert activations committed efficiently, only activated experts contribute

Why This Matters

  • No recomputation: verifiers don't re-run the entire model
  • Hardware independence: proofs are valid regardless of hardware
  • Privacy preserving: weights remain private while computation stays verifiable
  • Scalable verification: thousands of validators can check simultaneously