
Quickstart

Install and run your first inference in 5 minutes.

1. Install TARX Local

curl -fsSL tarx.com/install | sh

This installs the 2.9 MB tarxd binary and registers it as a background service. The 4.7 GB model downloads automatically; the menu bar icon shows download progress.

2. Verify TARX Local is running

curl http://localhost:11435/health
# → {"status":"ok","model":"tarx-qwen2.5-7b"}
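Since the model keeps downloading in the background after install, scripts may want to wait for the health check to pass before sending requests. A minimal polling sketch, using only the standard library; `is_ready` and `wait_for_ready` are hypothetical helper names, and the response shape is taken from the example above:

```python
import json
import time
import urllib.request

HEALTH_URL = "http://localhost:11435/health"


def is_ready(payload: dict) -> bool:
    # /health returns {"status": "ok", "model": "..."} once the model
    # is downloaded and loaded (shape from the example response above).
    return payload.get("status") == "ok"


def wait_for_ready(url: str = HEALTH_URL, timeout: float = 300.0) -> dict:
    # Poll /health until tarxd reports ready, or raise after `timeout` seconds.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                payload = json.load(resp)
            if is_ready(payload):
                return payload
        except OSError:
            pass  # server not up yet; keep polling
        time.sleep(1)
    raise TimeoutError(f"tarxd not ready after {timeout}s")


# Usage (with tarxd running):
#   info = wait_for_ready()
#   print(info["model"])
```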

3. First request

Choose your interface:

Python (curl and Node.js variants are also available):

from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11435/v1",
    api_key="none"
)

response = client.chat.completions.create(
    model="tarx-qwen2.5-7b",
    messages=[{"role": "user", "content": "Hello"}]
)

print(response.choices[0].message.content)
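The curl variant of the same request hits the standard OpenAI-style /v1/chat/completions endpoint. A sketch, assuming tarxd is listening on the default port; the health-check guard is just a courtesy so the command fails cleanly when the server is down:

```shell
# Request body for the chat completions endpoint.
BODY='{
  "model": "tarx-qwen2.5-7b",
  "messages": [{"role": "user", "content": "Hello"}]
}'

# Only send the request if tarxd is up.
if curl -sf http://localhost:11435/health >/dev/null 2>&1; then
  curl -s http://localhost:11435/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d "$BODY"
else
  echo "tarxd is not running; install and start TARX Local first" >&2
fi
```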

4. Migrate from OpenAI

Already using OpenAI? One line.

# Before
client = OpenAI(api_key="sk-...")

# After
client = OpenAI(base_url="http://localhost:11435/v1", api_key="none")

All endpoints, parameters, and response shapes are identical to OpenAI's API.
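Because only the constructor arguments change, switching between cloud and local can live behind a single flag. A sketch; `client_kwargs` is a hypothetical helper, not part of any SDK:

```python
def client_kwargs(use_local: bool) -> dict:
    # The only difference between cloud OpenAI and TARX Local is where
    # the client points and which key it sends.
    if use_local:
        return {"base_url": "http://localhost:11435/v1", "api_key": "none"}
    return {"api_key": "sk-..."}  # your real OpenAI key


# client = OpenAI(**client_kwargs(use_local=True))
```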

5. Connect MCP

Give your AI access to 310 local tools:

CLI (recommended; Manual and Cloud no-install options are also available):

tarx mcp add claude        # Claude Desktop
tarx mcp add claude-code   # Claude Code
tarx mcp add cursor        # Cursor
tarx mcp add vscode        # VS Code

Each command writes the correct config for that client, validates the output, and backs up your existing settings.
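For the Manual tab, the Claude Desktop entry would land in `claude_desktop_config.json` under the standard `mcpServers` key. The server name and the `tarx mcp serve` arguments below are assumptions, not documented flags; this is a sketch of the shape only:

```json
{
  "mcpServers": {
    "tarx": {
      "command": "tarx",
      "args": ["mcp", "serve"]
    }
  }
}
```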