GET STARTED

Architecture

How the four services work together on your machine.

System overview

┌──────────────────────────────────────────────────────┐
│  Your Machine                                        │
│                                                      │
│  ┌───────────┐ ┌───────────┐ ┌──────┐ ┌───────────┐  │
│  │ Inference │ │ Embedding │ │ Mesh │ │ Cognitive │  │
│  │  :11435   │ │  :11437   │ │:11436│ │  :11438   │  │
│  └───────────┘ └───────────┘ └──────┘ └───────────┘  │
│        │             │            │            │      │
│  ┌────────────────────────────────────────────────┐  │
│  │                MCP Server Layer                │  │
│  │         tarx-core (29) + tarx-ops (55)         │  │
│  │         + tarx-ui (172) + 3 standalone         │  │
│  └────────────────────────────────────────────────┘  │
│        │                                             │
│  ┌──────────┐                                        │
│  │  SQLite  │  memory.db                             │
│  └──────────┘                                        │
└──────────────────────────────────────────────────────┘

Four services

Each service runs as a separate process on your machine.

:11435
Inference

llama.cpp server running Qwen 2.5 7B. OpenAI-compatible REST API.

:11436
Mesh

Rust binary for peer-to-peer distributed inference across machines.

:11437
Embeddings

llama-server running nomic-embed-text-v1.5. 768-dim vectors.

:11438
Cognitive

Higher-order reasoning and planning services.
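Because the inference service exposes an OpenAI-compatible REST API, any standard OpenAI-style client can talk to it on localhost:11435. A minimal sketch, assuming the llama.cpp server's standard `/v1/chat/completions` route and a `qwen2.5-7b` model identifier (both are assumptions, not confirmed by this page):

```python
import json
import urllib.request

# Assumed route: llama.cpp's server normally exposes /v1/chat/completions.
INFERENCE_URL = "http://localhost:11435/v1/chat/completions"

def build_chat_request(prompt: str) -> dict:
    """Build a minimal OpenAI-style chat payload for the local runtime."""
    return {
        "model": "qwen2.5-7b",  # assumed model identifier, not from the docs
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
    }

def chat(prompt: str) -> str:
    """POST to the local inference service (requires tarxd to be running)."""
    req = urllib.request.Request(
        INFERENCE_URL,
        data=json.dumps(build_chat_request(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

The same pattern applies to the embedding service on :11437, which would accept an OpenAI-style embeddings request.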

MCP servers

Six servers provide 307 tools total, accessible from any MCP-compatible client.

29 tools
tarx-core

Memory, knowledge, project context, system health

55 tools
tarx-ops

Admin, orchestration, automation, audit

172 tools
tarx-ui

End-to-end UI testing across 18 categories

9 tools
tarx-mesh

Mesh network health, peers, credits

18 tools
tarx-martech

GTM automation, email, campaigns

24 tools
tarx-verify

Identity verification, evidence, scoring

Storage

All persistent state lives in SQLite — no external databases required.

memory.db

Memories, files, embeddings (chunk_embeddings + knowledge_embeddings)

files/

Uploaded documents stored on disk

models/

Downloaded model weights

audit.jsonl

Admin operation log

Path: ~/Library/Application Support/tarx/
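Since everything persistent is plain SQLite, memory.db can be inspected with any SQLite client. A minimal sketch of writing and counting embedding rows — the real memory.db schema is not documented on this page, so the table layout below is an illustrative assumption (only the table name `knowledge_embeddings` and the 768-dim vectors come from the docs):

```python
import sqlite3

# Stand-in for ~/Library/Application Support/tarx/memory.db; :memory: keeps
# this sketch side-effect-free. The column layout is assumed, not documented.
conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE IF NOT EXISTS knowledge_embeddings (
           id     INTEGER PRIMARY KEY,
           source TEXT NOT NULL,
           vector BLOB NOT NULL
       )"""
)
# A 768-dim float32 vector serializes to 768 * 4 = 3072 bytes.
conn.execute(
    "INSERT INTO knowledge_embeddings (source, vector) VALUES (?, ?)",
    ("notes.md", bytes(768 * 4)),
)
conn.commit()
count = conn.execute("SELECT COUNT(*) FROM knowledge_embeddings").fetchone()[0]
```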

Install

Local-first model

TARX installs as a 2.9MB binary (tarxd) via a single curl command. TARX Local is the foundation — it serves inference, manages models, and exposes the MCP endpoint. Everything else builds on top of it.

1
Install TARX Local
curl -fsSL tarx.com/install | sh — installs tarxd, registers it as a background service
2
Model download
tarxd auto-downloads the 4.7GB model on first run. Menu bar icon shows progress.
3
CLI works immediately
tarx chat, tarx mcp add — all CLI commands talk to TARX Local over localhost:11435
4
Workbench lazy-loads
Desktop app (TARX Workbench) is optional. It connects to the same local runtime — no separate install.

Knowledge

RAG pipeline

1
Ingest
File upload or directory scan
2
Dedup
SHA256 hash check skips already-indexed content
3
Chunk
512-character chunks with 128-character overlap
4
Embed
nomic-embed-text-v1.5 generates 768-dim vectors
5
Store
Vectors saved in knowledge_embeddings table
6
Search
Cosine similarity via tarx_search_knowledge
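The pipeline steps above can be sketched in a few lines of Python. The helper names here are ours, not the TARX API — only `tarx_search_knowledge` is a real tool name — but the 512/128 chunking, SHA256 dedup, and cosine scoring follow the steps listed:

```python
import hashlib
import math

def chunk(text: str, size: int = 512, overlap: int = 128) -> list[str]:
    """Step 3: split into 512-char chunks with 128-char overlap."""
    step = size - overlap  # 384-char stride between chunk starts
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

def content_hash(text: str) -> str:
    """Step 2: SHA256 fingerprint used to skip already-indexed content."""
    return hashlib.sha256(text.encode()).hexdigest()

def cosine(a: list[float], b: list[float]) -> float:
    """Step 6: cosine similarity between two embedding vectors (e.g. 768-dim)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)
```

Steps 4 and 5 (embedding via nomic-embed-text-v1.5 and storage in `knowledge_embeddings`) require the running services, so they are omitted from this sketch.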