v2.13  ·  Apache 2.0  ·  Grok-Native

grok-install

The universal standard for Grok-native agents

One YAML file. One command. Your agent is live on X.

Install in 60 seconds · Browse agents
Spec v2.13 · Apache 2.0 · xAI SDK Native
02 · The Problem

Stop wiring SDKs by hand.

Every Grok-native agent reinvents the same plumbing. grok-install replaces 250 lines of Python with 30 lines of YAML — and ships the rest.

Without grok-install
reply_bot.py
# reply_bot.py — wire xAI SDK by hand
import os, time, json, logging
from xai_sdk import Client
from xai_sdk.chat import user, system
from tweepy import Client as XClient
from tenacity import retry, stop_after_attempt
from ratelimit import limits, sleep_and_retry

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("reply-bot")

# --- env wiring ---
GROK_KEY = os.environ.get("GROK_API_KEY")
X_BEARER = os.environ.get("X_BEARER_TOKEN")
if not GROK_KEY or not X_BEARER:
    raise RuntimeError("missing GROK_API_KEY or X_BEARER_TOKEN")

grok = Client(api_key=GROK_KEY)
x    = XClient(bearer_token=X_BEARER)

# --- declare the tool by hand ---
REPLY_TOOL = {
    "type": "function",
    "function": {
        "name": "reply_to_mention",
        "description": "Reply to an X mention",
        "parameters": {
            "type": "object",
            "properties": {
                "mention_id": {"type": "string"},
                "reply_text": {"type": "string", "maxLength": 280},
            },
            "required": ["mention_id", "reply_text"],
        },
    },
}

# --- rate limit by hand ---
@sleep_and_retry
@limits(calls=200, period=86400)
@retry(stop=stop_after_attempt(3))
def post_reply(mention_id, text):
    return x.create_tweet(text=text, in_reply_to_tweet_id=mention_id)

# --- cost guard by hand ---
SPENT_TODAY = 0.0
DAILY_USD   = 3.0

def guard(usage):
    global SPENT_TODAY
    cost = usage.prompt_tokens * 3e-6 + usage.completion_tokens * 15e-6
    SPENT_TODAY += cost
    if SPENT_TODAY > DAILY_USD:
        raise RuntimeError("daily cost cap exceeded")

# --- safety scan by hand (just the start...) ---
BLOCKED = ["deepfake", "mass dm", "spam"]
def safety_check(text):
    low = text.lower()
    return not any(b in low for b in BLOCKED)

# --- main loop ---
def main():
    me = x.get_me().data  # the mentions endpoint needs a numeric user id
    while True:
        mentions = x.get_users_mentions(id=me.id).data or []
        for m in mentions:
            chat = grok.chat.create(model="grok-4", tools=[REPLY_TOOL])
            chat.append(system("You reply to X mentions."))
            chat.append(user(m.text))
            r = chat.sample()
            guard(r.usage)
            for tc in r.tool_calls or []:
                args = json.loads(tc.function.arguments)
                if safety_check(args["reply_text"]):
                    post_reply(args["mention_id"], args["reply_text"])
        time.sleep(60)

if __name__ == "__main__":
    main()

# ...and you still need: deploy config, telemetry, structured
# output validation, Dockerfile, requirements.txt, env loader,
# pre-install scan, log rotation, parallel tool dispatch...

~250 lines of Python + Dockerfile · requirements · CI · safety
With grok-install
grok-install.yaml
---
version: "2.13"
name: "My Reply Bot"
description: "Replies to X mentions using Grok"

llm:
  provider: "xai"
  model:    "grok-4"
  api_key_env: "GROK_API_KEY"

tools:
  - name: "reply_to_mention"
    description: "Reply to a specific X mention"
    parameters:
      type: "object"
      properties:
        mention_id: { type: "string" }
        reply_text: { type: "string", maxLength: 280 }
      required: ["mention_id", "reply_text"]

rate_limits:
  reply_to_mention:
    qps:        0.5
    daily_cap:  200

cost_limits:
  daily_usd:  3.00
  on_limit:   "block"

safety:
  pre_install_scan:    true
  minimum_keys_only:   true
30 lines of YAML · runtime · safety · limits · all included
That's an 88% reduction in code — and you didn't write a single line of safety, retry, or rate-limit logic.
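To make the manifest side concrete, here is a minimal sketch of what loading such a file looks like, assuming PyYAML. The key names mirror the example above; `load_manifest` and its required-key list are illustrative, not the official runtime's internals.

```python
# Minimal sketch of loading a grok-install manifest with PyYAML.
# Key names mirror the example above; this is NOT the official loader.
import yaml

MANIFEST = """
version: "2.13"
name: "My Reply Bot"
llm:
  provider: "xai"
  model: "grok-4"
  api_key_env: "GROK_API_KEY"
cost_limits:
  daily_usd: 3.00
  on_limit: "block"
"""

REQUIRED_TOP_LEVEL = ("version", "name", "llm")

def load_manifest(text: str) -> dict:
    """Parse the YAML and fail fast on missing required sections."""
    data = yaml.safe_load(text)
    missing = [key for key in REQUIRED_TOP_LEVEL if key not in data]
    if missing:
        raise ValueError(f"manifest missing keys: {missing}")
    return data

manifest = load_manifest(MANIFEST)
```

Everything past this point — rate limiting, cost guards, safety scans — is what the runtime layers on top of the parsed dict.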
03 · 60-Second Quickstart

Four commands. One agent.

From pip install to live on X. No Docker, no config files, no guesswork.

12s avg install · 1 YAML file · 0 Docker steps · 3 deploy targets
05 · The Ecosystem

One spec. Five repos.

grok-install is the kernel; the other four repos build on it.

grok-install spec (v2.13) · CLI runtime · Templates (10+ agents) · .grok/ extensions · Docs & guides

Works with: xAI SDK (native) · LiteLLM · Semantic Kernel · OpenAI-compatible clients

06 · Live Validator

Edit YAML. See errors. Ship.

Validated against the official v2.13 schema in your browser. Nothing leaves this page.
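The validator's checks can be pictured as structural rules over the parsed manifest. A rough sketch follows — the function name `validate` and the specific rules are illustrative; the real v2.13 schema is richer.

```python
# Illustrative structural checks, NOT the official v2.13 schema.
def validate(manifest: dict) -> list[str]:
    """Return a list of human-readable errors; empty means valid."""
    errors = []
    if manifest.get("version") != "2.13":
        errors.append('version must be "2.13"')
    if not manifest.get("name"):
        errors.append("name is required")
    for tool in manifest.get("tools", []):
        params = tool.get("parameters", {})
        props = params.get("properties", {})
        # every "required" key must actually be declared in "properties"
        for req in params.get("required", []):
            if req not in props:
                errors.append(
                    f'tool {tool.get("name")!r}: required key {req!r} missing from properties'
                )
    return errors
```

Returning a list of errors (rather than raising on the first one) is what lets an in-browser editor show every problem at once.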


Safety First. Always.

Every agent installed via grok-install goes through an automated scan before any key is requested.

Pre-install Scan

Every file in the repo is scanned before Grok asks for any secrets.

Approval Gates

X-posting tools require explicit user sign-off by default. No surprise tweets.

Rate Limits

Declarative QPS caps and daily hard stops. Runaway posting is impossible.

Minimal Permissions

minimum_keys_only: true — Only the secrets this agent actually needs.

✓  Only agents that pass all checks earn the Grok-Native Certified badge.

Why not just write the SDK directly?

Here's the honest breakdown.

Feature                      Writing xAI SDK manually      With grok-install
Lines of code (reply bot)    ~250 Python                   30 YAML
Time to first deploy         2–4 hours                     Under 5 minutes
Multi-agent orchestration    DIY from scratch              Built-in
Safety scanning              You build it                  Automated pre-install
Rate limiting                DIY                           One declarative block
Cost controls                DIY                           Hard USD stops
Tool schema validation       Write JSON Schema manually    Declared in YAML
Passive growth engine        None                          Auto-welcome + weekly highlights
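To make the tool-schema row concrete: the YAML tool entry from earlier carries exactly the fields the hand-written REPLY_TOOL dict spells out, so the expansion is mechanical. A hypothetical mapping — the output shape is assumed, not the documented behavior of the official compiler:

```python
# Hypothetical expansion of a YAML tool entry into a function-calling
# dict like REPLY_TOOL from the manual example. Output shape assumed.
def to_function_tool(entry: dict) -> dict:
    return {
        "type": "function",
        "function": {
            "name": entry["name"],
            "description": entry["description"],
            "parameters": entry["parameters"],
        },
    }

# The tools[0] entry from grok-install.yaml, as a parsed dict:
yaml_entry = {
    "name": "reply_to_mention",
    "description": "Reply to a specific X mention",
    "parameters": {
        "type": "object",
        "properties": {
            "mention_id": {"type": "string"},
            "reply_text": {"type": "string", "maxLength": 280},
        },
        "required": ["mention_id", "reply_text"],
    },
}

tool_dict = to_function_tool(yaml_entry)
```

The YAML is strictly less to write because the boilerplate wrapper (`"type": "function"`, the nesting) is the same for every tool.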

Works with your entire stack

Native xAI SDK support. Interoperable with everything else.

xAI SDK
Grok-4
LiteLLM
Semantic Kernel
OpenAI-compatible
Railway
Vercel
Docker
GitHub Actions

What builders are saying

Coming soon — reach out on X to share your story.

Built something with grok-install? Share your experience →

Ready to ship your agent?

Join the builders who are making AI agents on X actually accessible.

Install now — pip install grok-install · Read the spec →

Get spec updates and new agents in your inbox. No spam. Unsubscribe anytime.
