An AI agent that lives in every messaging app you use, plus 3 more open source standouts
AI agents stopped being a single product. We have agents in our terminals, our chat apps, our IDEs, our browsers, and now we have launchers to switch between them. Three of the four tools in this issue are agents. The fourth uses one as a building block.

Hermes is the most interesting one to me. Nous Research built it as a self-improving agent that runs in your terminal but also plugs into Telegram, Discord, Slack, WhatsApp, Signal, and email. Point it at any LLM provider you want, and it remembers what you taught it across sessions.

opencode is the same energy for code: terminal-first, model-agnostic, MIT licensed. cc-switch exists because if you've installed Claude Code, Codex, OpenCode, openclaw, and Gemini CLI on the same machine, you need something to keep them straight.

The sneaky one is llm_wiki. Drop a folder of PDFs and Word files into it and it builds a structured wiki with cross-references and a knowledge graph, using whatever LLM you point at it. Not an agent in the chat sense, but the same pattern applied to documents instead of conversations. Persistent state, built up incrementally, owned by you.

The open source version of the AI tools we used to subscribe to is starting to feel like a coherent stack instead of a collection.
The agent that grows with you
The Lens
Hermes is Nous Research's open-source autonomous agent. It builds skills from experience, remembers them across sessions, and connects to Telegram, Discord, Slack, WhatsApp, Signal, and email out of the box. Works with 200+ models through OpenRouter, OpenAI, Anthropic, or Hugging Face endpoints. Install is one curl command on Linux, macOS, WSL2, or Termux. After `hermes setup`, point it at any provider; switching models is a single CLI flag with no code changes. Runs on a $5 VPS or a GPU cluster. Pick this if you want one agent deployed across messaging apps without rebuilding for each. The closed learning loop (skills accumulated from prior runs) is a real differentiator vs framework-first kits like LangChain or AutoGen. Solo and small teams pay only their model bill. Large teams running their own RL stack already have this layer. The catch: it's research-y. Nous is an AI research lab, not a SaaS company. Docs are dense, support is community-driven, and "self-improving" claims always come with caveats. Treat it as an experiment, not a production-grade agent.
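That closed learning loop is easier to picture as a persistent skill store the agent reads at session start and appends to after a successful run. A minimal sketch in Python; the file name, JSON schema, and function names here are hypothetical illustrations of the pattern, not Hermes's actual internals:

```python
import json
import tempfile
from pathlib import Path

# Hypothetical on-disk skill store that survives between sessions.
STORE = Path(tempfile.gettempdir()) / "skills_demo.json"

def load_skills() -> dict:
    """At session start, load whatever the agent learned previously."""
    return json.loads(STORE.read_text()) if STORE.exists() else {}

def record_skill(name: str, steps: list[str]) -> None:
    """After a successful run, persist the recipe for future sessions."""
    skills = load_skills()
    skills[name] = steps
    STORE.write_text(json.dumps(skills, indent=2))

# Session 1: the agent works out a deploy procedure, then saves it.
record_skill("deploy", ["run tests", "build image", "push to registry"])

# Session 2 (a new process, later): the recipe is already on disk.
assert "deploy" in load_skills()
```

The point of the pattern is that the expensive part (figuring out the steps) happens once, and every later session starts from the accumulated recipes instead of from scratch.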
A cross-platform desktop All-in-One assistant tool for Claude Code, Codex, OpenCode, openclaw & Gemini CLI.
The Lens
cc-switch wraps those CLIs into a single Tauri-based GUI. Cross-platform, open source, and free. Think of it as a launcher: you switch between agents without juggling terminals. Setup is straightforward: download the app, configure your API keys, and pick which agents you want active. It doesn't add intelligence on top of the agents themselves; it's a convenience layer. The value is entirely in the unified interface and the ability to compare agent outputs side by side. Solo developers who already use multiple coding agents will get the most out of this. Teams probably don't need it, since most teams standardize on one agent, and if you only use one coding agent, there's nothing here for you. The catch: it's a wrapper, not a product. If the underlying agents change their CLI interfaces (which they do, frequently), cc-switch breaks until someone updates the integration. You're adding a dependency on a third-party GUI for tools that already work fine in a terminal.
The open source coding agent.
The Lens
opencode is an open-source coding agent that runs in your terminal as a TUI. Built by the terminal.shop team, it ships with two agents (build for full access, plan for read-only) and works with Claude, OpenAI, Google, or local models. MIT licensed, no provider lock-in. Setup is a one-line install via bash, brew, or your package manager of choice. A client/server split lets you run the agent on a remote box and connect from any machine. LSP support is built in. The desktop app is still in beta if you'd rather not live in the terminal. Solo developers and small teams get the best deal. You bring your own model API key, you keep the data local, and you can switch providers without changing tools. Teams already paying for Cursor or Copilot don't need this. Use it if you want one open agent across every model you touch. The catch: you manage your own model accounts and bills. No integrated subscription, no SSO, no team policy yet. If finance wants one invoice and security wants one audit log, this isn't there.
LLM Wiki is a cross-platform desktop application that turns your documents into an organized, interlinked knowledge base, automatically. Instead of traditional RAG (retrieve and answer from scratch every time), the LLM incrementally builds and maintains a persistent wiki from your sources.
The Lens
LLM Wiki turns your documents into a structured, interlinked knowledge base using any LLM you want. Drop in PDFs, Word files, or web clips and the app runs a two-step chain-of-thought process: analyze the content, then generate wiki pages with source traceability and automatic cross-references. The knowledge graph visualization surfaces connections you didn't know existed. Built with Tauri and React, it runs as a native desktop app on macOS, Windows, and Linux. Pre-built binaries mean no build step. It works with OpenAI, Anthropic, Google, Ollama (for zero API cost), or any OpenAI-compatible endpoint. Optional vector search via embedded LanceDB adds semantic lookup. The wiki directory is Obsidian-compatible, so you can open it directly in Obsidian for manual editing. Solo researchers and writers get an AI-powered second brain that keeps all data local. Small teams sharing a knowledge base get automatic entity extraction and gap detection. The Louvain community detection algorithm finds knowledge clusters automatically. The catch: every document you ingest costs LLM tokens. Large libraries add up fast, especially with the two-step analysis. The Deep Research feature requires a paid Tavily API key. And the quality of generated pages depends entirely on which LLM you're using.
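The difference from per-query RAG is worth making concrete: each ingested document is analyzed once, then merged into persistent pages that cross-link to pages already in the wiki. A toy sketch of that incremental pattern, with a stubbed-out LLM call; none of these names come from llm_wiki itself, and the real app's two-step chain-of-thought is far more involved:

```python
def llm(prompt: str) -> str:
    """Stand-in for any LLM call (OpenAI, Anthropic, Ollama, ...).
    Here it just returns the first line as a mock 'summary'."""
    return prompt.splitlines()[0]

# Page title -> page body. In the real app this persists on disk
# (as an Obsidian-compatible directory); here it's just a dict.
wiki: dict[str, str] = {}

def ingest(title: str, document: str) -> None:
    # Step 1: analyze the new document.
    summary = llm(document)
    # Step 2: generate the page, cross-linking existing pages whose
    # titles appear in the new text (wiki-style [[links]]).
    links = [f"[[{t}]]" for t in wiki if t.lower() in document.lower()]
    wiki[title] = summary + ("\nSee also: " + ", ".join(links) if links else "")

ingest("Tauri", "Tauri is a desktop app framework.")
ingest("LLM Wiki", "LLM Wiki is built with Tauri and React.")
# The second page now links back to the first: the wiki grows
# incrementally instead of being recomputed per query.
```

Each new document pays its token cost once, and the output is a structure you own and can edit by hand, which is exactly the "persistent state, built up incrementally" pattern from the intro.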
Get the next issue in your inbox
Free. No spam. Unsubscribe anytime.