v0.5.5

Decentralized AI Inference
for Everyone

A peer-to-peer protocol for efficient, ethical, and decentralized AI inference. Run 1-bit quantized models on any CPU with 99.6% energy savings.

196 Tests Passing
MIT License
8,000+ Lines of Code

Why ARIA?

AI inference without expensive hardware, excessive energy, or centralized control.

CPU-Efficient

89 t/s

1-bit ternary weights (-1, 0, +1) replace expensive multiplications with simple additions. Runs on any consumer CPU — no GPU required.
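
The arithmetic savings are easy to see in a toy sketch (plain Python, not ARIA's actual lookup-table kernel): when every weight is -1, 0, or +1, a dot product reduces to additions and subtractions, and zero weights cost nothing at all.

```python
# Illustrative sketch of ternary arithmetic -- NOT the ARIA/BitNet LUT kernel.
def ternary_dot(weights, activations):
    """weights: list of -1/0/+1; activations: list of floats."""
    acc = 0.0
    for w, x in zip(weights, activations):
        if w == 1:
            acc += x      # +1: add the activation
        elif w == -1:
            acc -= x      # -1: subtract it
        # w == 0: skip entirely -- sparsity is free
    return acc

# 0.5 - 1.5 - 0.25 = -1.25, with zero multiplications performed
result = ternary_dot([1, 0, -1, 1], [0.5, 2.0, 1.5, -0.25])
```

Production kernels go further by packing ternary weights into lookup tables, but the core idea is the same: no multiply units needed.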

Energy-Conscious

11 mJ/token

99.6% energy reduction compared to cloud APIs. A single node uses ~241 kWh/year vs ~25,550 kWh/year for cloud solutions.
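
As a sanity check, ~241 kWh/year corresponds to a node averaging roughly 27.5 W of continuous draw; the average-power figure here is our assumption for illustration, not a number from the benchmark report.

```python
# Back-of-the-envelope check of the ~241 kWh/year claim.
# ASSUMPTION: a node averages ~27.5 W of draw (illustrative, not measured).
AVG_POWER_W = 27.5
HOURS_PER_YEAR = 24 * 365                      # 8,760 h
kwh_per_year = AVG_POWER_W * HOURS_PER_YEAR / 1000
print(round(kwh_per_year))                     # -> 241
```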

Truly Decentralized

P2P

WebSocket-based peer-to-peer networking with pipeline parallelism. No central server, no single point of failure. Your data stays yours.

Extensible Ecosystem

8+ orgs

At least 8 independent organizations produce 1-bit models. No single vendor dependency. Falcon-Edge outperforms Microsoft BitNet (53.17% vs 51.54%).

Real-World Benchmarks

Tested on AMD Ryzen 9 7845HX — 8 threads, reproducible results.

Model               Parameters   Throughput   Energy*        Latency (p50)   Memory
BitNet-b1.58-large  0.7B         89.65 t/s    ~11 mJ/token   88 ms           ~400 MB
BitNet-b1.58-2B-4T  2.4B         36.94 t/s    ~28 mJ/token   504 ms          ~1.3 GB
Llama3-8B-1.58      8.0B         15.03 t/s    ~66 mJ/token   1,031 ms        ~4.2 GB

*Energy is estimated via CPU-time × TDP. See the benchmark report for the full methodology.
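
The estimation method can be sketched in a few lines; the CPU time, TDP, and token count below are hypothetical values chosen for illustration, not measurements from the report:

```python
def energy_per_token_mj(cpu_time_s, tdp_w, n_tokens):
    """Estimate per-token energy as CPU-time x TDP (the report's stated method).
    Returns millijoules per token."""
    joules = cpu_time_s * tdp_w          # 1 W sustained for 1 s = 1 J
    return joules * 1000 / n_tokens      # J -> mJ, divided across tokens

# Hypothetical run: 2.0 s of CPU time at a 45 W TDP for 8,000 tokens
estimate = energy_per_token_mj(2.0, 45, 8000)   # -> 11.25 mJ/token
```

Note this upper-bounds energy by the chip's TDP; actual draw during memory-bound 1-bit inference may be lower.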

Coming soon: Falcon3 1.58-bit (1B–10B), Falcon-Edge, and PT-BitNet conversions

3-Year Total Cost of Ownership

Solution                Hardware   Running Costs   Total      vs ARIA
Cloud APIs (frontier)   $0         $164,250        $164,250   2,161x
Llama API               $0         $32,850         $32,850    432x
RTX 4090 (local)        $2,000     $6,533          $8,533     112x
ARIA Protocol           $0         $76             $76        1x

Assumptions: 10M tokens/day, existing CPU hardware, electricity at $0.25/kWh.
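
The API rows follow directly from the stated token volume; the per-million-token rates below ($15 frontier, $3 Llama) are inferred from the table totals for illustration, not quoted vendor prices:

```python
# Reproducing the 3-year API-cost rows from the stated assumption of
# 10M tokens/day. ASSUMPTION: rates of $15/M (frontier) and $3/M (Llama),
# inferred from the totals rather than quoted from any vendor.
TOKENS_PER_DAY = 10_000_000
DAYS = 3 * 365
total_mtok = TOKENS_PER_DAY * DAYS / 1_000_000   # 10,950 M tokens over 3 years

frontier_cost = total_mtok * 15    # -> $164,250
llama_cost    = total_mtok * 3     # -> $32,850
```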

🧵
Thread Scaling

Optimal at 8 threads. 1-bit LUT kernels are memory-bound, not compute-bound.

🔀
Parallel Inference

3 concurrent streams yield only +11% throughput — validating the P2P distribution approach.

📏
Context Length

Stable performance: only 7% degradation from 32 to 1,024 tokens.

🧠
Consensus Inference

Multiple 7B models with orchestrated debate reach 92.85% accuracy (Nature 2025, SLM-MATRIX).

💾
Extended Context

KV-Cache NVMe paging targets 500K+ tokens on 8GB RAM via sparse attention + 2-bit quantization.

🌐
1-Bit Ecosystem

8+ independent organizations. Falcon-Edge outperforms Microsoft BitNet: 53.17% vs 51.54% avg benchmark.

Architecture

A 3-layer distributed system designed for resilience and efficiency.

Layer 3: Service
OpenAI-Compatible API Web Dashboard CLI Interface Desktop App (Tauri 2.0 + Electron)
Layer 2: Consensus
Provenance Ledger Proof of Useful Work Proof of Sobriety Consent Contracts
Layer 1: Compute
P2P Network (→ Kademlia DHT in v0.6.0) 1-bit Inference Engine Model Sharding Consent-based Routing

Security & Trust

Five independent defense layers protect every inference. No single point of failure.

5

Privacy & Consent

Consent contracts · Local-first inference · Data minimization

Implemented
4

Economic Security

Staking · Slashing · Time-locked rewards · Reputation

Designed
3

Consensus Security

Proof of Useful Work · Proof of Sobriety · Provenance Ledger

Implemented
2

Protocol Security

Message authentication · Replay protection · Anti-downgrade

Implemented
1

Transport Security

TLS 1.3 · Certificate validation · Perfect forward secrecy

Implemented
Provenance Ledger — Every inference recorded immutably
🛡️

Anti-Sybil

Proof of Useful Work requires real computation. Staking creates economic cost. Rate limiting caps fake node creation.

Anti-Fraud

Output hashes + timing analysis detect falsified results. Energy claims cross-referenced with hardware TDP profiles.
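
One way output-hash checking could work (a hypothetical sketch, not ARIA's actual scheme): each node publishes a digest binding the model, input, and output together, so any peer re-running the same deterministic inference can verify the result without seeing extra data.

```python
import hashlib

def output_digest(prompt, output, model_id):
    """Hypothetical commitment scheme: hash (model, prompt, output) so peers
    can cross-check results. Field layout is illustrative only."""
    payload = f"{model_id}\x00{prompt}\x00{output}".encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

# Two honest nodes running the same deterministic inference agree...
a = output_digest("2+2=", "4", "bitnet-b1.58-large")
b = output_digest("2+2=", "4", "bitnet-b1.58-large")
# ...while a falsified output produces a mismatching digest.
c = output_digest("2+2=", "5", "bitnet-b1.58-large")
```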

🔒

Privacy-First

Inference runs locally. Only cryptographic hashes transit the network. Consent contracts enforce resource limits.

📋

Full Audit Trail

Every inference recorded on provenance ledger: timestamp, I/O hashes, nodes, energy consumed. Fully auditable.
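
A hash-chained ledger record might look like the following sketch; the field names and chaining scheme are illustrative assumptions, not ARIA's actual on-ledger format:

```python
import hashlib
import json

def ledger_entry(prev_hash, input_hash, output_hash, node_id, energy_mj, ts):
    """Hypothetical provenance record: each entry commits to the previous
    entry's hash, making retroactive edits detectable."""
    record = {
        "prev": prev_hash,          # hash of the preceding entry
        "ts": ts,                   # timestamp of the inference
        "input": input_hash,        # hash of the prompt, not the prompt itself
        "output": output_hash,      # hash of the generated output
        "node": node_id,            # which node performed the work
        "energy_mj": energy_mj,     # claimed energy, auditable against TDP
    }
    digest = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode("utf-8")
    ).hexdigest()
    record["hash"] = digest
    return record

genesis = ledger_entry("0" * 64, "in-1", "out-1", "node-a", 11.0, ts=0)
second  = ledger_entry(genesis["hash"], "in-2", "out-2", "node-b", 28.0, ts=1)
```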

"Nodes do not trust each other — they verify."

Desktop Application

A beautiful, native desktop experience for ARIA Protocol.

Dashboard

Real-time node monitoring and network stats with live updates.

Model Manager

Download and manage BitNet models directly from HuggingFace.

AI Chat

Local AI chat interface with typewriter effects and streaming.

Energy Dashboard

Track energy savings, CO2 avoided, and unlock achievements.

Settings

12 languages, consent controls, and system preferences.

Cross-platform: Windows, macOS, Linux
Lightweight: Tauri 2.0 (~15 MB) + Electron fallback (~150 MB)
12 Languages: EN, FR, ES, DE, PT, IT, JA, KO, ZH, RU, AR, HI
Premium design: Dark mode with glassmorphism effects
One-click setup: Perfect for non-developers

Coming in v0.6.0+: Infinite Context Mode, Conversation Memory Manager, Consensus Inference Panel, Knowledge Network Browser

Get Started in 3 Commands

Terminal
# Install ARIA Protocol
$ pip install aria-protocol

# Start a node
$ aria node start --port 8765 --model aria-2b-1bit

# Start the API server
$ aria api start --port 3000
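
Once the API server is up, any OpenAI-style client should be able to talk to it. Below is a minimal request-building sketch in Python; the `/v1/chat/completions` path and the `aria-2b-1bit` model name are assumptions based on the "OpenAI-Compatible API" claim, not confirmed endpoints.

```python
import json

def build_chat_request(prompt, model="aria-2b-1bit",
                       host="http://localhost:3000"):
    """Build an OpenAI-style chat completion request for a local ARIA node.
    ASSUMPTIONS: endpoint path and model name are illustrative."""
    url = f"{host}/v1/chat/completions"
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": True,            # stream tokens as they are generated
    }
    return url, json.dumps(payload)

url, body = build_chat_request("Explain 1-bit quantization in one sentence.")
# POST `body` to `url` with urllib.request, or point the official openai
# client at base_url="http://localhost:3000/v1".
```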

Roadmap v3.0

9 Versions · From testnet to production

v0.1–v0.5.2 Genesis → Desktop
v0.6.0 Testnet Alpha
v0.7.0 Smart Layer
v0.7.5 R&D + Docs
v0.8.0 Extended Context
v0.9.0 ARIA-LM
v1.0.0 Production
v1.1.0+ Beyond
54 Tasks
44 New
9 Versions
7 Corrections
View Full Roadmap →

Join the Decentralized AI Movement

ARIA is open-source, MIT licensed, and ready for contributors.