14 open source tools compared. Sorted by stars — scroll down for our analysis.
| Tool | Stars | Velocity | Language | License | Score |
|---|---|---|---|---|---|
| NemoClaw — Run OpenClaw more securely inside NVIDIA OpenShell with managed inference | 16.6k | +4919/wk | JavaScript | Apache License 2.0 | 94 |
| Crucix — Your personal intelligence agent. Watches the world from multiple data sources and pings you when something changes. | 6.8k | +2928/wk | JavaScript | GNU Affero General Public License v3.0 | 68 |
| Understand-Anything — Claude Code skills that turn any codebase into an interactive knowledge graph you can explore, search, and ask questions about (other platforms, e.g. Codex, are supported). | 6.1k | +4949/wk | TypeScript | MIT License | 78 |
| gsd-2 — A meta-prompting, context engineering, and spec-driven development system that lets agents work autonomously for long periods without losing track of the big picture | 3.1k | +1146/wk | TypeScript | MIT License | 78 |
| ppt-master — AI generates editable, beautifully designed PPTX from any document, no design skills needed (15 examples, 229 pages) | 2.9k | +819/wk | Python | MIT License | 70 |
| openclaw-control-center — Turn OpenClaw from a black box into a local control center you can see, trust, and control. | 2.7k | +612/wk | TypeScript | MIT License | 72 |
| chrome-cdp-skill — Give your AI agent access to your live Chrome session; works out of the box, connects to tabs you already have open | 2.7k | +497/wk | JavaScript | MIT License | 74 |
| awesome-codex-subagents — A collection of 130+ specialized Codex subagents covering a wide range of development use cases. | 2.6k | +2122/wk | — | MIT License | 78 |
| prompt-master — A Claude skill that writes accurate prompts for any AI tool. Zero tokens or credits wasted. Full context and memory retention | 2.3k | +1729/wk | — | MIT License | 74 |
| engram — Persistent memory system for AI coding agents: agent-agnostic Go binary with SQLite + FTS5, MCP server, HTTP API, and CLI. | 1.9k | +311/wk | Go | MIT License | 65 |
| MiroFish-Offline — Offline multi-agent simulation & prediction engine. English fork of MiroFish with Neo4j + Ollama local stack. | 1.3k | +703/wk | Python | GNU Affero General Public License v3.0 | 57 |
| OpenSquirrel — For people who get distracted by agents. A native Rust/GPUI control plane for running Claude Code, Codex, Cursor, and OpenCode side by side. | 1.3k | +263/wk | Rust | MIT License | 71 |
| claude-peers-mcp — Allow all your Claude Code sessions to message each other ad hoc | 1.2k | +1169/wk | TypeScript | — | 57 |
| clui-cc — Clui CC, a command-line user interface for Claude Code | 1.1k | +473/wk | TypeScript | MIT License | 70 |
NemoClaw is NVIDIA saying "we'll make your AI agent less dangerous." It wraps OpenClaw in NVIDIA's OpenShell runtime with sandboxed execution, declarative network policies, and managed inference using Nemotron models locally. Every file access, network request, and inference call goes through policy enforcement. One CLI orchestrates the full stack. If you're deploying OpenClaw in any environment where security matters — corporate, regulated, multi-user — NemoClaw adds the guardrails OpenClaw doesn't ship by default. The declarative YAML policy (deny-all-except-allowlist for network egress) is the right security model. Running Nemotron locally keeps your data off external APIs. Alternatives: nono provides kernel-level sandboxing. Docker/Firecracker give you container isolation but require manual setup. No other vendor bundles sandbox + inference + policy for OpenClaw specifically. The catch: it's an early preview as of March 2026, and NVIDIA itself calls it "not production-ready." You need NVIDIA hardware to run Nemotron locally. The tight coupling to OpenShell means you're in NVIDIA's ecosystem, and their open-source track record is mixed. The Apache 2.0 license is good, but the full stack has proprietary dependencies.
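A deny-all-except-allowlist egress policy of the kind described above might look something like this. This is a purely illustrative YAML sketch; every field name and value below is an assumption, not NemoClaw's actual schema:

```yaml
# Hypothetical policy sketch -- field names are illustrative,
# not NemoClaw's real configuration format.
network:
  egress:
    default: deny            # nothing leaves unless explicitly allowed
    allow:
      - host: api.internal.example.com
        ports: [443]
      - host: registry.npmjs.org
        ports: [443]
filesystem:
  default: read-only
  writable:
    - /workspace             # the only path the agent may modify
```

The value of the declarative form is that the allowlist is auditable: a reviewer can see every permitted destination without reading agent code.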
Crucix is your personal intelligence analyst running 24/7 on your own hardware. It pulls from 27 OSINT feeds every 15 minutes — satellite fire detection, flight tracking, radiation monitoring, economic indicators, conflict data, sanctions lists, social sentiment — and renders everything on a single Jarvis-style dashboard. Hook up an LLM and it pushes tiered alerts to Telegram. If you're a trader, journalist, or security researcher who needs real-time cross-domain awareness without trusting a cloud service, Crucix is purpose-built. The three-tier alert system (FLASH, PRIORITY, ROUTINE) with delta computation means you're notified about changes, not noise. Nothing else does this self-hosted — commercial OSINT platforms like Recorded Future or Palantir cost six figures. The catch: under the AGPL, if you run a modified version as a network service you must publish your changes. Twenty-seven data sources means twenty-seven potential points of failure (though Promise.allSettled handles individual crashes gracefully). The real limitation is analysis quality — the dashboard shows you everything, but making sense of cross-domain signals still requires human judgment. It's a firehose with filters, not an analyst.
Understand-Anything turns your codebase into an interactive knowledge graph. A multi-agent pipeline scans every file, function, class, and dependency, then visualizes it as a color-coded dashboard you can explore, search, and query in natural language. This is onboarding on steroids. New to a repo? Run /understand-anything and get architectural layers (API, Service, Data, UI, Utility) mapped automatically, 12 programming patterns identified in context, and plain-English explanations of any node. Supports incremental updates — only re-analyzes changed files. Compared to CodeSee (commercial, shuttered) or manual architecture docs, this is automated and always current. Use this when joining a complex codebase or doing architectural review. Skip this for small projects where you can read every file in an afternoon. The catch: the analysis is only as good as the LLM interpreting your code. Unconventional patterns or deeply nested abstractions may be mislabeled. And running a multi-agent pipeline on a large repo costs real tokens.
GSD-2 is what happens when someone takes "context rot" seriously. Most AI coding agents lose the plot after 30 minutes of work — GSD fixes that with spec-driven development, structured context injection, and automatic context clearing between tasks. Built on the Pi SDK, it can use different models for different phases: Opus for planning, Sonnet for execution. If you're building anything that takes more than a single session to complete, GSD keeps your agent on the rails. The meta-prompting approach means your agent gets exactly the right files at dispatch time instead of stuffing everything into context. Trusted by engineers at Amazon, Google, and Shopify. Alternatives: raw CLAUDE.md files work for small projects. Aider has its own context management but less structure. Cursor's composer is the commercial equivalent. The catch: GSD adds ceremony. You need specs, milestones, and structured planning before writing code. If you're hacking a weekend project, that overhead kills the vibe. Best for teams or serious solo builders shipping real products.
ppt-master turns any document — PDF, DOCX, URL, Markdown — into editable PowerPoint presentations with AI. Drop in a file, get beautifully designed slides in 16:9, social media cards, or marketing posters. Each page exports as SVG, so you can convert to editable shapes in PowerPoint. This is the tool every indie hacker needs when an investor asks for a deck and you don't have a designer. Claude Opus produces the best results, but even Kimi 2.5 and MiniMax 2.7 work decently. The skill-based architecture keeps token costs low. Compared to Slidespilot (commercial), this is free and open source. Compared to Gamma (web app), this runs locally. Use this when you need presentation-quality slides from existing content and don't want to wrestle with Figma or Canva. Skip this if you need pixel-perfect brand compliance — AI-generated layouts are good, not perfect. The catch: SVG-to-shape conversion in PowerPoint is a one-way trip — once converted, you can't go back. And the "beautifully designed" claim depends heavily on which LLM you use. Budget models produce budget slides.
openclaw-control-center turns OpenClaw from a black box into a dashboard you can actually read. Token attribution shows which jobs consume credits. A staff page shows who's working, who's idle. A collaboration page maps parent-child agent relays. A security page shows your risk posture and version gaps. If you run multiple OpenClaw agents and have no idea where your money goes, this answers that question. The token attribution alone justifies installation. Compared to tenacitOS (mission control vibes) and tugcantopaloglu's openclaw-dashboard (auth + TOTP focus), this one is the most operationally complete. Use this when you're running OpenClaw at scale and need visibility into cost, security, and agent coordination. Skip this if you run one agent on one project — it's overkill. The catch: tied to the OpenClaw ecosystem. If you're using Claude Code or Codex directly, this won't help. And "control center" implies control — it's mostly observation, not intervention.
Chrome CDP Skill connects your AI agent directly to the Chrome browser you already have open. No Puppeteer, no headless browser, no separate login — your agent sees your Gmail, your GitHub, your internal dashboards, exactly as you do. One toggle in chrome://inspect and you're live. If you're building agent workflows that need to interact with authenticated web apps, this is the path of least resistance. Take screenshots, fill forms, evaluate JavaScript, navigate pages — all through Chrome DevTools Protocol. Playwright requires separate browser sessions and re-authentication. Browser-use is heavier. The official Claude browser plugin works but has connection reliability issues. The catch: you're giving an AI agent full access to your live browser session. Every logged-in account, every open tab, every cookie. There's no permission scoping — the agent can do anything you can do in that browser. For personal development machines, that's a calculated risk. For shared or production environments, that's a security incident waiting to happen. Use it knowingly.
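The connection flow is standard CDP: Chrome started with `--remote-debugging-port=9222` exposes an HTTP endpoint listing open tabs, and each tab accepts JSON-RPC commands over a websocket. A minimal Python sketch of building a `Runtime.evaluate` command (the helper function is ours; the `/json` endpoint and `Runtime.evaluate` method are part of the protocol):

```python
# Sketch of composing a Chrome DevTools Protocol command.
# Assumes Chrome was launched with --remote-debugging-port=9222;
# GET http://localhost:9222/json then lists tabs and their websocket URLs.
import json

def evaluate_message(msg_id: int, expression: str) -> str:
    """Build a CDP Runtime.evaluate command to send over a tab's websocket."""
    return json.dumps({
        "id": msg_id,                     # correlates the async response
        "method": "Runtime.evaluate",
        "params": {"expression": expression, "returnByValue": True},
    })

msg = evaluate_message(1, "document.title")
```

Everything the skill does — screenshots, form fills, navigation — is variations on this pattern with different CDP methods, which is also why there is no permission scoping: the protocol itself has none.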
awesome-codex-subagents is a cookbook of 130+ pre-built AI agents for OpenAI's Codex, organized across 10 domains — core development, language specialists, infrastructure, security, AI/ML, data, docs, and more. Each subagent comes with a tuned model profile, sandbox permissions, and focused instructions. Think of it as a hiring agency for your AI workforce. Instead of writing custom prompts for every task, you grab a specialist — a Rust reviewer, a Terraform auditor, a Python type-checker. The independent context windows prevent cross-contamination between tasks. VoltAgent also maintains awesome-claude-code-subagents (100+ for Claude) and awesome-agent-skills (700+). Use this when you're on Codex and want domain-specific agents without writing all the prompts yourself. Skip this if you're not on Codex — these are TOML configs for Codex's agent directory. The catch: Codex doesn't auto-spawn custom subagents — you must delegate explicitly in prompts. And 130+ agents sounds comprehensive until you realize quality varies significantly across entries. Test before trusting.
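Each entry bundles a model profile, sandbox permissions, and focused instructions. A hypothetical TOML sketch of what such an entry could look like; every field name and value here is illustrative, not the repo's actual schema:

```toml
# Hypothetical subagent entry -- field names and values are
# placeholders, not the collection's real format.
name = "rust-reviewer"
description = "Reviews Rust code for safety, idioms, and error handling"
model = "model-id-here"      # placeholder; each entry pins a tuned profile
sandbox = "read-only"        # reviewer agents shouldn't need write access

instructions = """
Focus on unsafe blocks, unwrap() calls, and error propagation.
Return findings as a prioritized list with file:line references.
"""
```

The read-only sandbox default for review-type agents is the detail worth copying even if you never use the collection itself.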
Prompt Master writes optimized prompts for 20+ AI tools — Claude, ChatGPT, Gemini, Cursor, Midjourney, DALL-E, ElevenLabs, and more. It extracts 9 dimensions of intent (task, constraints, success criteria, audience) and generates a prompt tuned for the target tool's specific quirks. Think of it as a translator between what you want and what each AI actually needs to hear. The Universal Fingerprint (4 questions) handles tools it's never seen. Safety-conscious: explicitly excludes techniques known to cause hallucinations like Tree of Thought. Compared to writing prompts yourself, this is faster. Compared to prompt libraries (awesome-chatgpt-prompts), this is dynamic and tool-aware. Use this when you use multiple AI tools and waste time reformatting prompts between them. Skip this if you only use one tool and already know its prompting patterns well. The catch: it's a Claude skill, so you need Claude Code to run it. And optimizing prompts for 20+ tools means optimizing deeply for none — specialist knowledge of any single tool will beat generic optimization.
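The core move, extract structured intent once and then render it through per-tool templates, can be sketched in a few lines. The dimension names and templates below are illustrative assumptions, not Prompt Master's internals:

```python
# Sketch of "extract intent, render per tool". Dimension names and
# templates are assumptions for illustration only.

INTENT = {
    "task": "summarize a quarterly report",
    "constraints": "under 200 words, no jargon",
    "success_criteria": "an exec can act on it in one read",
    "audience": "non-technical leadership",
}

TEMPLATES = {
    # Each target tool gets phrasing tuned to its quirks.
    "claude": ("Task: {task}\nConstraints: {constraints}\n"
               "Audience: {audience}\nSuccess looks like: {success_criteria}"),
    "image-tool": "{task}, {constraints}",   # terse, comma-separated style
}

def render(tool: str, intent: dict) -> str:
    """Fill the target tool's template from the shared intent."""
    return TEMPLATES[tool].format(**intent)

prompt = render("claude", INTENT)
```

The intent dict is written once; only the rendering varies, which is what makes reuse across 20+ tools cheap.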
engram is persistent memory for AI coding agents — a single Go binary with SQLite and full-text search that works with any MCP-compatible agent. Claude Code, Codex, Cursor, Gemini CLI, whatever. No Node.js, no Python, no Docker. One binary, one SQLite file, runs on a Raspberry Pi. The agent decides what's worth remembering (not a firehose of raw tool calls) and calls mem_save with structured What/Why/Where/Learned content. FTS5 indexing makes recall sub-millisecond. Compared to CLAUDE.md (manual, no search), engram is automated and queryable. Compared to Mem0 (cloud-based), engram is fully local. Compared to custom SQLite hacks, engram is production-ready. Use this when your agents keep forgetting decisions and patterns across sessions. Skip this if you work on one small project where context fits in a single session. The catch: the agent must proactively call mem_save — if it forgets to save, the memory system is empty. Quality depends entirely on the LLM's judgment about what's worth persisting. MIT license, no concerns there.
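The save/recall loop engram describes can be approximated with SQLite's FTS5 in a few lines. This is a Python sketch, not engram's Go implementation; the table layout and function names are ours:

```python
# Sketch of structured agent memory over SQLite FTS5 (Python stand-in
# for engram's Go binary; schema and names are illustrative).
import sqlite3

db = sqlite3.connect(":memory:")
db.execute(
    "CREATE VIRTUAL TABLE memories USING fts5(what, why, where_, learned)"
)

def mem_save(what: str, why: str, where_: str, learned: str) -> None:
    """Agent calls this only for decisions worth keeping, not raw tool calls."""
    db.execute("INSERT INTO memories VALUES (?, ?, ?, ?)",
               (what, why, where_, learned))

def mem_recall(query: str) -> list[tuple]:
    """Full-text recall, ranked by FTS5 relevance."""
    return db.execute(
        "SELECT what, learned FROM memories WHERE memories MATCH ? "
        "ORDER BY rank", (query,)).fetchall()

mem_save("Switched auth to JWT", "session store didn't scale",
         "api/auth.py", "rotate signing keys via env var")
hits = mem_recall("JWT")
```

The What/Why/Where/Learned structure is the real design choice: searchable fields beat a transcript dump when a future session asks "why did we do it this way?"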
MiroFish-Offline builds a miniature society and watches what happens. Instead of feeding data into a statistical model, it creates hundreds of distinct agent-personas and simulates hour-by-hour public reaction to any document — press release, policy draft, financial report. It's prediction through simulation, not regression. The fully local stack (Ollama + Neo4j) means no API calls and no data leaving your machine. Hybrid retrieval (0.7 vector + 0.3 BM25) with 768-d embeddings is a solid architecture choice. Nothing else does this offline — most prediction tools need cloud LLMs. The original MiroFish is cloud-based with crypto/blockchain ties. Use this when you need to predict public sentiment before publishing something sensitive. Skip this unless you have a beefy local machine — Neo4j + Ollama + Flask needs real resources. The catch: the AGPL-3.0 license requires sharing your modifications if you distribute the software or run it as a network service. And simulated personas are only as good as the LLM driving them — Ollama's local models are less capable than GPT-4 or Claude, which limits prediction quality.
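The 0.7/0.3 score fusion is easy to picture. A minimal sketch, assuming min-max normalization to make the two signals comparable (the repo may normalize differently; document IDs here are made up):

```python
# Sketch of hybrid retrieval scoring: 0.7 * vector + 0.3 * BM25.
# Min-max normalization is an assumption about how the signals are scaled.

def normalize(scores: dict[str, float]) -> dict[str, float]:
    """Min-max scale into [0, 1] so cosine and BM25 scores are comparable."""
    lo, hi = min(scores.values()), max(scores.values())
    if hi == lo:
        return {k: 1.0 for k in scores}
    return {k: (v - lo) / (hi - lo) for k, v in scores.items()}

def hybrid(vector: dict[str, float], bm25: dict[str, float],
           w_vec: float = 0.7, w_bm25: float = 0.3) -> list[str]:
    """Rank documents by the weighted sum of both normalized signals."""
    v, b = normalize(vector), normalize(bm25)
    docs = set(v) | set(b)
    scored = {d: w_vec * v.get(d, 0.0) + w_bm25 * b.get(d, 0.0) for d in docs}
    return sorted(scored, key=scored.get, reverse=True)

ranking = hybrid({"a": 0.9, "b": 0.2, "c": 0.5},   # cosine similarities
                 {"a": 1.0, "b": 8.0, "c": 2.0})   # raw BM25 scores
```

Normalization is the step that matters: raw BM25 scores are unbounded, so without it the 0.3 weight would be meaningless.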
OpenSquirrel is a GPU-rendered control plane for running Claude Code, Codex, Cursor, and OpenCode side by side — built in Rust with GPUI (same engine as Zed editor). If you're juggling multiple agents, this gives you a native tiled layout instead of four terminal tabs. The coordinator/worker delegation is the standout feature: a primary Opus agent can spawn focused sub-agents, with workers returning condensed results. Sessions persist through restarts. Remote machine targeting via SSH + tmux means your local Mac can orchestrate cloud GPUs. No Electron, no web views — pure Metal rendering. Use this when you're a multi-agent power user who wants visual control without Electron overhead. Skip this on Linux or Windows — macOS-only (Metal GPU required). The catch: very early stage from a solo developer (Elliot Arledge). The GPUI dependency ties it to Zed's rendering stack, and macOS-only limits the audience significantly. Cool demo, uncertain maintenance trajectory.
claude-peers-mcp lets your Claude Code sessions talk to each other. A local broker daemon on localhost:7899 with SQLite handles discovery and messaging — each session registers, polls every second, and receives messages instantly via the claude/channel protocol. This solves a real problem: you're running Claude Code on your API project and your frontend project simultaneously, and they can't share context. Now they can. The broker auto-launches, auto-cleans dead peers, and everything stays on localhost. Nothing else does peer-to-peer Claude session communication — it's a novel idea. Use this when you're running multiple Claude Code sessions that need to coordinate — like a backend agent that needs to tell the frontend agent about schema changes. Skip this if you work on one project at a time. The catch: no license specified on the repo, which is a legal red flag for commercial use. And inter-session messaging is powerful but dangerous — one confused agent can spam another with bad context.
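The broker pattern at the heart of this is simple: register, send, poll. A toy in-memory sketch in Python (the real broker is TypeScript with SQLite persistence on localhost:7899; the class and method names here are illustrative):

```python
# Toy in-memory sketch of a register/send/poll message broker,
# approximating what the claude-peers daemon does with SQLite.
from collections import defaultdict

class Broker:
    def __init__(self) -> None:
        self.inboxes: dict[str, list] = defaultdict(list)

    def register(self, peer: str) -> list[str]:
        self.inboxes[peer]            # create the inbox on first contact
        return list(self.inboxes)     # discovery: who else is online

    def send(self, to: str, sender: str, body: str) -> None:
        self.inboxes[to].append({"from": sender, "body": body})

    def poll(self, peer: str) -> list:
        """Drain and return this peer's inbox (sessions poll every second)."""
        msgs, self.inboxes[peer] = self.inboxes[peer], []
        return msgs

broker = Broker()
broker.register("backend")
broker.register("frontend")
broker.send("frontend", "backend", "users table: schema changed")
msgs = broker.poll("frontend")
```

Polling a local broker is less elegant than push, but it keeps each Claude Code session a plain client with no open ports of its own.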
Clui CC wraps Claude Code in a floating macOS overlay with multi-tab sessions, voice input via Whisper, and a permission approval UI. It's what Claude Code would feel like if it had a proper desktop app instead of a terminal. The transparent pill interface, persistent conversation history, and skills marketplace make context-switching between projects seamless. No telemetry, no analytics, fully offline operation. Compared to Claude Code's native terminal, you get tabs and visual permission management. Claudia (claudia.so) is a similar concept but more polished. The official VS Code extension takes a different approach entirely. Use this when you live in Claude Code all day and want a better desktop experience on macOS. Skip this on Windows or Linux — it's macOS-only, no workarounds. The catch: it's a UI wrapper, not a new capability. If Claude Code's CLI works fine for you, Clui CC adds convenience, not power. And macOS-only means teams can't standardize on it.