gl1tch

Shell does the work.
LLM does the thinking.
You own the workflow.

$ brew install 8op-org/tap/glitch
;; this is a workflow

(workflow "pr-review"

  (step "fetch"
    (run "gh pr view {{.input}} --json title,body"))

  (step "diff"
    (run "gh pr diff {{.input}}"))

  (step "review"
    (llm
      :prompt ```
        PR: {{step "fetch"}}
        Diff: {{step "diff"}}
        Review as a senior engineer.
        ```)))

01 S-expressions, not YAML

Workflows are parenthesized lists. Every construct composes — retry wraps timeout wraps step. No indentation wars. No anchor hacks.

(retry 2
  (timeout "30s"
    (step "fetch"
      (run "curl -sf ..."))))

02 Phase gates

Verification built into the language. Gates must pass before the phase completes. If they fail, the phase retries.

(phase "verify" :retries 1
  (gate "check"
    (run "python3 verify.py")))

03 Tiered escalation

Start on your local model for free. gl1tch self-evaluates the output and escalates to the cloud only when quality demands it.

;; tier 0: lm-studio (free)
;; tier 1: copilot
;; tier 2: claude
(llm :tier 0 :format "json" :prompt "...")

04 Plugins are directories

A plugin is a folder of .glitch files. Each file is a subcommand. Args become flags. No compilation. No release pipeline.

$ glitch plugin github prs --since week
["Fix flaky test", "Add retry logic"]
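
A plugin could look like this; a sketch, where the directory, filenames, and the flag-to-template mapping are illustrative assumptions, not shipped files:

github/
  prs.glitch     ;; glitch plugin github prs
  issues.glitch  ;; glitch plugin github issues

(workflow "prs"
  ;; hypothetical: the --since flag is available as {{.since}}
  (step "list"
    (run "gh pr list --json title")))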

05 Knowledge index

glitch index ingests your repos into Elasticsearch. glitch observe queries that index in natural language. Workflows use it as memory.

$ glitch observe "PRs that failed CI this week"
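
A sketch of using the index as workflow memory (the step name and query are illustrative): a shell step calls glitch observe, and later steps reference its output.

(step "context"
  (run "glitch observe 'what broke CI recently'"))

(step "triage"
  (llm :prompt ```
    Context: {{step "context"}}
    Suggest a fix.
    ```))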

06 Batch comparison

Run the same workflow across Ollama, Claude, and Copilot. A neutral local model grades the outputs. Same inputs, different brains.

$ for v in local claude copilot; do
    glitch workflow run "issue-to-pr-$v"
  done
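
The grading pass can itself be an llm step on tier 0, so the judge stays local and free; a sketch, assuming a wrapper workflow where the step names "local" and "claude" are illustrative:

(step "grade"
  (llm :tier 0 :prompt ```
    A: {{step "local"}}
    B: {{step "claude"}}
    Which output is better, and why?
    ```))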

One command. gl1tch figures out the rest.

01 ask
$ glitch ask "review PR #42"
→ routes to pr-review workflow

Smart routing matches your question to the right workflow via local LLM. Nothing leaves your machine.

02 shell gathers data
(step "diff"
  (run "gh pr diff {{.input}}"))

Shell steps call gh, git, curl, jq — free and deterministic.
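
For example, a step can pipe through jq like any other shell pipeline (the step name is illustrative):

(step "titles"
  (run "gh pr list --json title | jq -r '.[].title'"))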

03 LLM reasons about it
(step "review"
  (llm :prompt ```
    Review: {{step "diff"}}
    ```))

Swap providers per step. Ollama, Claude, Copilot, LM Studio.
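
A sketch of swapping per step, reusing the tier numbering from above (step names and prompts are illustrative): draft cheaply on the local tier, escalate the final pass.

(step "draft"
  (llm :tier 0 :prompt "..."))  ;; lm-studio, free

(step "polish"
  (llm :tier 2 :prompt "..."))  ;; claude, when quality matters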

This site builds itself.

Every page was created by a gl1tch workflow. AI writes the content from real repo context. Shell gates verify no hallucinations before anything ships.

$ glitch workflow run site-create-page --set topic="batch comparison runs"

>> site-create-page
  > existing-docs
  > examples
  > repo-structure
  > existing-pages
  > generate               lm-studio
    tier 0: lm-studio accepted
   generate               3965 tok · 52.6s
  > save-stub
  > rebuild-json

>>> Phase "verify"
   gate no-hallucinations  PASS
   gate stub-coverage      PASS
   gate structure-and-tone PASS
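
Those gates are ordinary phase gates; a sketch of what the verify phase could look like, assuming a phase can hold several gates and with illustrative script paths:

(phase "verify" :retries 1
  (gate "no-hallucinations"
    (run "python3 gates/no_hallucinations.py"))
  (gate "stub-coverage"
    (run "python3 gates/stub_coverage.py"))
  (gate "structure-and-tone"
    (run "python3 gates/structure_and_tone.py")))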

site-create-page --set topic="..."
  AI generates a page from repo context; gates verify it
site-update-page --set page=plugins
  AI rewrites a page with your instructions
site-update
  Regenerate all pages, verify, build
site-dev
  Dev server with hot reload

Give your agents gl1tch.

Claude Code
$ npx @anthropic-ai/superpower install gl1tch
Cursor / Copilot
$ curl -sLo .cursorrules https://raw.githubusercontent.com/8op-org/gl1tch/main/skills/cursor/.cursorrules
Any agent
$ curl -sLo AGENTS.md https://raw.githubusercontent.com/8op-org/gl1tch/main/skills/generic/AGENTS.md