gl1tch
Shell does the work.
LLM does the thinking.
You own the workflow.
;; this is a workflow
(workflow "pr-review"
  (step "fetch" (run "gh pr view {{.input}} --json title,body"))
  (step "diff" (run "gh pr diff {{.input}}"))
  (step "review" (llm :prompt ```
    PR: {{step "fetch"}}
    Diff: {{step "diff"}}
    Review as a senior engineer.
  ```)))
S-expressions, not YAML
Workflows are parenthesized lists. Every construct composes — retry wraps timeout wraps step. No indentation wars. No anchor hacks.
(retry 2 (timeout "30s" (step "fetch" (run "curl -sf ..."))))
Phase gates
Verification built into the language. Gates must pass before the phase completes. If they fail, the phase retries.
(phase "verify" :retries 1 (gate "check" (run "python3 verify.py")))
Tiered escalation
Start on your local model for free. gl1tch self-evaluates the output. Escalates to cloud only when quality demands it.
;; tier 0: lm-studio (free)
;; tier 1: copilot
;; tier 2: claude
(llm :tier 0
     :format "json"
     :prompt "...")
Plugins are directories
A plugin is a folder of .glitch files. Each file is a subcommand. Args become flags. No compilation. No release pipeline.
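A minimal sketch of the layout this implies (directory and file names here are hypothetical, not from the gl1tch repo):

```
plugins/
  github/
    prs.glitch      ;; -> glitch plugin github prs
    issues.glitch   ;; -> glitch plugin github issues
```

Each .glitch file surfaces as a subcommand under the plugin's name; dropping a new file into the folder is the whole release process.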
$ glitch plugin github prs --since week
["Fix flaky test", "Add retry logic"]
Knowledge index
glitch index ingests your repos into Elasticsearch. glitch observe queries them in natural language. Workflows use it as memory.
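Using the index as workflow memory can be sketched with the constructs shown elsewhere on this page (step names and prompt text are illustrative assumptions):

```
;; sketch: feed an observe query into a later LLM step
(step "recall" (run "glitch observe 'similar failures in this repo'"))
(step "fix" (llm :prompt ```
  Context: {{step "recall"}}
  Propose a fix.
```))
```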
$ glitch observe "PRs that failed CI this week"
Batch comparison
Run the same workflow across Ollama, Claude, and Copilot. A neutral local model grades the outputs. Same inputs, different brains.
$ for v in local claude copilot; do
glitch workflow run "issue-to-pr-$v"
done
One command. gl1tch figures out the rest.
$ glitch ask "review PR #42" → routes to pr-review workflow
Smart routing matches your question to the right workflow via local LLM. Nothing leaves your machine.
(step "diff" (run "gh pr diff {{.input}}"))
Shell steps call gh, git, curl, jq — free and deterministic.
(step "review" (llm :prompt ``` Review: {{step "diff"}} ```))
Swap providers per step. Ollama, Claude, Copilot, LM Studio.
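With the :tier keyword shown above, a per-step split might look like this (workflow and step names are hypothetical; the tier-to-provider mapping is whatever your install configures):

```
(workflow "triage"
  (step "summarize" (llm :tier 0 :prompt "Summarize: {{.input}}")) ;; local, free
  (step "review" (llm :tier 2 :prompt "{{step "summarize"}}")))    ;; cloud, on demand
```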
This site builds itself.
Every page was created by a gl1tch workflow. AI writes the content from real repo context. Shell gates verify no hallucinations before anything ships.
$ glitch workflow run site-create-page --set topic="batch comparison runs"
>> site-create-page
 > existing-docs
 > examples
 > repo-structure
 > existing-pages
 > generate
   lm-studio tier 0: lm-studio accepted ✓
   generate 3965 tok · 52.6s
 > save-stub
 > rebuild-json
>>> Phase "verify"
 ✓ gate no-hallucinations PASS
 ✓ gate stub-coverage PASS
 ✓ gate structure-and-tone PASS
Give your agents gl1tch.
$ npx @anthropic-ai/superpower install gl1tch
$ curl -sLo .cursorrules https://raw.githubusercontent.com/8op-org/gl1tch/main/skills/cursor/.cursorrules
$ curl -sLo AGENTS.md https://raw.githubusercontent.com/8op-org/gl1tch/main/skills/generic/AGENTS.md