Mesh Network
Connect machines into a distributed inference network.
How it works
Every TARX node advertises its capabilities to the mesh. When a request arrives, the mesh routes it to the best available peer.
Nodes find each other automatically on the local network or via relay.
Requests route to the peer with the best latency, capacity, and model match.
All peer communication is encrypted. No data leaves your network.
SuperComputer service
The SuperComputer service runs on port 11436. It is a standalone Rust binary that manages peer connections, job routing, and credit accounting.
# Check mesh status
curl http://localhost:11436/mesh/health
# List connected peers
curl http://localhost:11436/mesh/peers
# Check credits balance
curl http://localhost:11436/mesh/credits
Distributed inference
When local inference is busy or a peer has a better model, the mesh transparently routes requests to available nodes. The API is identical — your application code doesn't change.
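The fallback decision can be sketched as follows. The queue-depth rule and the record fields are assumptions made for illustration; the actual routing criteria live inside the SuperComputer service.

```python
# Sketch of transparent fallback: the same request either runs locally or
# is forwarded to a capable peer, and the caller never changes its code.
# The routing rule and field names are illustrative assumptions.

def route(request: dict, local: dict, peers: list[dict]) -> str:
    model = request["model"]
    local_ok = model in local["models"] and local["queue_depth"] < local["max_queue"]
    if local_ok:
        return "local"
    for peer in peers:
        if model in peer["models"]:
            return peer["id"]  # forward to the first capable peer
    return "rejected"

local = {"models": {"llama3"}, "queue_depth": 8, "max_queue": 8}  # saturated
peers = [{"id": "gpu-box", "models": {"llama3", "qwen"}}]
print(route({"model": "llama3", "prompt": "hi"}, local, peers))  # "gpu-box"
```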
MCP tools
Nine MCP tools for mesh management:
Check SuperComputer service status
Active jobs and network info
List connected peers
Run distributed inference query
Models available on the network
Credit balance and earnings
Hardware specs of peers
Local device compute score
Node reputation and trust
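An MCP client invokes any of these tools with a standard JSON-RPC 2.0 `tools/call` request. The tool name `tarx_mesh_peers` below is a hypothetical example; the real names are whatever the TARX server advertises via `tools/list`.

```python
import json

# Build a JSON-RPC 2.0 "tools/call" request, the standard MCP shape for
# invoking a server-side tool. The tool name here is hypothetical.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "tarx_mesh_peers", "arguments": {}},
}
print(json.dumps(request))
```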
Autonomous research agent
Run autonomous experiment loops distributed across mesh nodes. One machine runs a hundred experiments overnight. Ten machines run a thousand.
tarx_research({
  experiment_file: "train.py",
  program_file: "instructions.md",
  metric: "val_loss",
  budget_minutes: 300,
  parallel_nodes: 4
})
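One way such a loop could fan trials out over `parallel_nodes` machines is a simple round-robin assignment. This is an illustrative sketch of the scheduling idea, not the actual `tarx_research` internals.

```python
# Illustrative round-robin distribution of experiment trials across
# parallel mesh nodes; scheduling details are assumptions for the sketch.

def assign_trials(num_trials: int, parallel_nodes: int) -> dict[int, list[int]]:
    schedule = {node: [] for node in range(parallel_nodes)}
    for trial in range(num_trials):
        schedule[trial % parallel_nodes].append(trial)
    return schedule

schedule = assign_trials(10, 4)
print({node: len(trials) for node, trials in schedule.items()})  # {0: 3, 1: 3, 2: 2, 3: 2}
```

With the trial budget spread evenly this way, total throughput scales roughly linearly with node count, which is the effect the paragraph above describes.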