v0.5.5
A peer-to-peer protocol for efficient, ethical, and decentralized AI inference. Run 1-bit quantized models on any CPU with 99.6% energy savings.
AI inference without expensive hardware, excessive energy, or centralized control.
Ternary 1.58-bit weights (-1, 0, +1) replace expensive multiplications with simple additions and subtractions. Runs on any consumer CPU — no GPU required.
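The multiplication-free trick above can be sketched in a few lines. This is a toy illustration of the principle only, not the packed lookup-table kernels a real 1-bit runtime uses:

```python
# Toy sketch: with weights in {-1, 0, +1}, w*x becomes +x, -x, or skip,
# so a dot product needs no multiplications at all.

def ternary_dot(weights, activations):
    """Multiplication-free dot product for ternary weights."""
    acc = 0.0
    for w, x in zip(weights, activations):
        if w == 1:
            acc += x      # +1: add the activation
        elif w == -1:
            acc -= x      # -1: subtract it
        # 0: skip entirely (sparsity for free)
    return acc

# Same result as a regular dot product, without a single multiply:
print(ternary_dot([1, -1, 0, 1], [0.5, 2.0, 3.0, 1.5]))  # 0.5 - 2.0 + 1.5 = 0.0
```

Production kernels additionally pack several ternary weights per byte and resolve groups of them through lookup tables, which is where the memory-bandwidth savings come from.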
99.6% energy reduction compared to cloud APIs. A single node uses ~241 kWh/year vs ~25,550 kWh/year for cloud solutions.
WebSocket-based peer-to-peer networking with pipeline parallelism. No central server, no single point of failure. Your data stays yours.
At least 8 independent organizations produce 1-bit models. No single vendor dependency. Falcon-Edge outperforms Microsoft BitNet (53.17% vs 51.54% average benchmark score).
Tested on AMD Ryzen 9 7845HX — 8 threads, reproducible results.
| Model | Parameters | Throughput | Energy* | Latency (p50) | Memory |
|---|---|---|---|---|---|
| BitNet-b1.58-large | 0.7B | 89.65 t/s | ~11 mJ/token | 88 ms | ~400 MB |
| BitNet-b1.58-2B-4T | 2.4B | 36.94 t/s | ~28 mJ/token | 504 ms | ~1.3 GB |
| Llama3-8B-1.58 | 8.0B | 15.03 t/s | ~66 mJ/token | 1,031 ms | ~4.2 GB |
*Energy estimated as CPU time × TDP. See the benchmark report for full methodology.
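The footnote's estimate is a single product: busy CPU seconds times rated TDP, spread over the tokens produced. A minimal sketch of that arithmetic; the inputs below are illustrative assumptions, not the benchmark report's actual measurements:

```python
# Sketch of the footnote's energy model: joules = CPU time (busy seconds)
# x TDP (watts). Inputs below are illustrative assumptions only.

def energy_per_token_mj(cpu_seconds: float, tdp_watts: float, tokens: int) -> float:
    """Estimated energy per token in millijoules."""
    joules = cpu_seconds * tdp_watts   # energy over the whole run
    return joules / tokens * 1000.0    # spread over tokens, J -> mJ

# e.g. 0.024 s of CPU time at a 45 W TDP over 100 tokens:
print(round(energy_per_token_mj(0.024, 45, 100), 1))  # 10.8
```

Note that whether `cpu_seconds` is process CPU time or wall-clock time, and whether TDP reflects actual package draw, changes the result substantially — hence "estimated".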
Coming soon: Falcon3 1.58-bit (1B–10B), Falcon-Edge (outperforms Microsoft BitNet), PT-BitNet conversions
| Solution | Hardware | Annual Running Cost | Year-1 Total | vs ARIA |
|---|---|---|---|---|
| Cloud APIs (frontier) | $0 | $164,250 | $164,250 | 2,161x |
| Llama API | $0 | $32,850 | $32,850 | 432x |
| RTX 4090 (local) | $2,000 | $6,533 | $8,533 | 112x |
| ARIA Protocol | $0 | $76 | $76 | 1x |
Assumptions: 10M tokens/day sustained over one year, existing CPU hardware, electricity at $0.25/kWh.
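Under those assumptions, the energy portion of the cost is one line of arithmetic. A hedged sketch: this computes the electricity-only floor from a per-token energy figure, so it will come in below the table's totals, which may fold in overheads beyond raw inference energy:

```python
# Electricity-only cost under the stated assumptions (10M tokens/day,
# $0.25/kWh). The mJ/token input comes from the benchmark table; treating
# cost as energy x price is a simplifying assumption.

KWH_PER_J = 1 / 3_600_000  # joules per kilowatt-hour

def annual_energy_cost(tokens_per_day: int, mj_per_token: float,
                       usd_per_kwh: float = 0.25) -> float:
    joules_per_year = tokens_per_day * 365 * (mj_per_token / 1000)
    return joules_per_year * KWH_PER_J * usd_per_kwh

# Example: 10M tokens/day at ~66 mJ/token (the 8B model above)
print(round(annual_energy_cost(10_000_000, 66), 2))  # 16.73
```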
Throughput peaks at 8 threads: 1-bit LUT kernels are memory-bound, not compute-bound, so extra threads add little.
Three concurrent streams on one machine yield only +11% aggregate throughput; memory bandwidth is already saturated, which supports distributing work across independent nodes instead.
Performance stays stable as context grows: only a 7% throughput drop from 32 to 1,024 tokens.
Multiple 7B models with orchestrated debate reach 92.85% accuracy (Nature 2025, SLM-MATRIX).
KV-Cache NVMe paging targets 500K+ tokens on 8GB RAM via sparse attention + 2-bit quantization.
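The 2-bit quantization mentioned above can be sketched simply: each group of cache values maps to four levels plus a per-group scale and offset, roughly an 8× reduction versus fp16. Group layout and rounding scheme here are illustrative assumptions, not ARIA's actual format:

```python
# Sketch of 2-bit KV-cache quantization: 4 levels per value plus one
# (scale, offset) pair per group. Grouping and rounding are illustrative
# assumptions only.

def quantize_2bit(group):
    """Map a group of floats to 2-bit codes 0..3 with affine dequant params."""
    lo, hi = min(group), max(group)
    scale = (hi - lo) / 3 or 1.0           # avoid div-by-zero on flat groups
    codes = [round((v - lo) / scale) for v in group]
    return codes, scale, lo

def dequantize_2bit(codes, scale, lo):
    return [lo + c * scale for c in codes]

codes, scale, lo = quantize_2bit([0.1, -0.4, 0.8, 0.3])
print(codes)                                # each code fits in 2 bits
print(dequantize_2bit(codes, scale, lo))    # coarse reconstruction
```

Paging then writes only the packed codes and per-group parameters to NVMe, which is what makes 500K+ token contexts plausible on 8 GB of RAM.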
8+ independent organizations. Falcon-Edge outperforms Microsoft BitNet: 53.17% vs 51.54% avg benchmark.
A 3-layer distributed system designed for resilience and efficiency.
Five independent defense layers protect every inference. No single point of failure.
Consent contracts · Local-first inference · Data minimization
Staking · Slashing · Time-locked rewards · Reputation
Proof of Useful Work · Proof of Sobriety · Provenance Ledger
Message authentication · Replay protection · Anti-downgrade
TLS 1.3 · Certificate validation · Perfect forward secrecy
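The message-authentication and replay-protection checks named above can be sketched with Python's standard-library `hmac`. Key handling and wire format here are illustrative assumptions, not ARIA's actual protocol:

```python
# Sketch: HMAC message authentication plus nonce-based replay rejection.
# A shared symmetric key and in-memory nonce set are simplifying
# assumptions for illustration.
import hashlib
import hmac
import os

class AuthChannel:
    def __init__(self, shared_key: bytes):
        self.key = shared_key
        self.seen_nonces = set()  # a real node would expire old nonces

    def seal(self, payload: bytes):
        nonce = os.urandom(16)
        tag = hmac.new(self.key, nonce + payload, hashlib.sha256).digest()
        return payload, nonce, tag

    def open(self, payload: bytes, nonce: bytes, tag: bytes) -> bool:
        if nonce in self.seen_nonces:
            return False  # replayed message
        expected = hmac.new(self.key, nonce + payload, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expected):
            return False  # forged or tampered
        self.seen_nonces.add(nonce)
        return True

ch = AuthChannel(b"demo-key")
msg, nonce, tag = ch.seal(b"inference request")
assert ch.open(msg, nonce, tag)        # first delivery accepted
assert not ch.open(msg, nonce, tag)    # replay rejected
```

`hmac.compare_digest` is used instead of `==` to avoid timing side channels when comparing tags.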
Proof of Useful Work requires real computation. Staking creates economic cost. Rate limiting caps fake node creation.
Output hashes + timing analysis detect falsified results. Energy claims cross-referenced with hardware TDP profiles.
Inference runs locally. Only cryptographic hashes transit the network. Consent contracts enforce resource limits.
Every inference recorded on provenance ledger: timestamp, I/O hashes, nodes, energy consumed. Fully auditable.
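A ledger entry as described above can be sketched as follows: only SHA-256 hashes of the input and output leave the node, never the data itself. Field names are illustrative assumptions, not ARIA's actual schema:

```python
# Sketch of a provenance-ledger entry: auditable metadata plus content
# hashes, with no raw prompt or output. Schema is an illustrative
# assumption.
import hashlib
import json
import time

def ledger_entry(prompt: str, output: str, node_id: str, energy_mj: float) -> dict:
    """Build an auditable record that never contains the raw data."""
    return {
        "timestamp": time.time(),
        "input_hash": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_hash": hashlib.sha256(output.encode()).hexdigest(),
        "node": node_id,
        "energy_mj": energy_mj,
    }

entry = ledger_entry("what is 2+2?", "4", "node-a1", 28.0)
print(json.dumps(entry, indent=2))  # auditable, yet reveals no content
```

Anyone holding the original prompt can recompute its hash and verify the record, while the ledger alone discloses nothing about the conversation.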
"Nodes do not trust each other — they verify."
A beautiful, native desktop experience for ARIA Protocol.
Real-time node monitoring and network stats with live updates.
Download and manage BitNet models directly from HuggingFace.
Local AI chat interface with typewriter effects and streaming.
Track energy savings, CO2 avoided, and unlock achievements.
12 languages, consent controls, and system preferences.
Coming in v0.6.0+: Infinite Context Mode, Conversation Memory Manager, Consensus Inference Panel, Knowledge Network Browser
```bash
# Install ARIA Protocol
pip install aria-protocol

# Start a node
aria node start --port 8765 --model aria-2b-1bit

# Start the API server
aria api start --port 3000
```
9 Versions · From testnet to production
ARIA is open-source, MIT licensed, and ready for contributors.