24 open source tools compared. Sorted by stars. Scroll down for our analysis.
| Tool | Stars | Velocity | Score |
|---|---|---|---|
| Axios: Promise-based HTTP client | 109.0k | - | 86 |
| graphify: AI coding assistant skill (Claude Code, Codex, OpenCode, OpenClaw) that turns any folder of code, docs, papers, or images into a queryable knowledge graph | 45.7k | +3706/wk | 83 |
| postiz-app: 📨 The ultimate social media scheduling tool, with a bunch of AI 🤖 | 30.2k | +268/wk | 73 |
| Maple Font: Coding font with ligatures and CJK support | 25.7k | +107/wk | 81 |
| chromium: The official GitHub mirror of the Chromium source | 23.6k | +51/wk | 83 |
| ky: Tiny elegant HTTP client based on Fetch | 16.8k | +20/wk | 79 |
| react-pdf: Create PDF files using React | 16.6k | +7/wk | 85 |
| got: Human-friendly HTTP request library for Node.js | 14.9k | +3/wk | 83 |
| SponsorBlock: Skip YouTube video sponsors (browser extension) | 13.2k | +32/wk | 72 |
| winget-pkgs: The Microsoft community Windows Package Manager manifest repository | 10.6k | +20/wk | 83 |
| oapi-codegen: Generate Go client and server boilerplate from OpenAPI 3 specifications | 8.3k | +15/wk | 81 |
| codeburn: See where your AI coding tokens go. Interactive TUI dashboard for Claude Code, Codex, and Cursor cost observability | 6.0k | +788/wk | 79 |
| Extism: Cross-language WebAssembly plugin system | 5.6k | +9/wk | 75 |
| boneyard: Auto-generated skeleton loading framework | 5.5k | +82/wk | 75 |
| PureMac: Free, open-source macOS cleaner. CleanMyMac alternative with zero telemetry, native SwiftUI, scheduled auto-cleaning, Xcode/Homebrew/system cache cleanup. MIT licensed | 4.0k | +237/wk | 73 |
| nix: Rust friendly bindings to *nix APIs | 3.0k | - | 76 |
| renode: Antmicro's open source simulation and virtual development framework for complex embedded systems | 2.5k | +13/wk | 74 |
| whatcable: macOS menu bar app that tells you, in plain English, what each USB-C cable plugged into your Mac can actually do | 2.3k | +781/wk | 68 |
| Sling: Go HTTP client library for creating and sending API requests | 1.7k | - | 67 |
| clawsweeper: Scans all issues and PRs and suggests what can be closed, and why. Runs on every PR/issue once a week | 1.6k | +95/wk | 67 |
| requests-cache: Persistent HTTP cache for Python requests | 1.5k | +1/wk | 69 |
| claude-usage: A local dashboard for tracking your Claude Code token usage, costs, and session history | 1.5k | +51/wk | 67 |
| codex-plusplus: Codex++ tweak system for the Codex desktop app | 1.3k | +458/wk | 64 |
| dev3000: Captures your web app's complete development timeline (server logs, browser events, console messages, network requests, automatic screenshots) in a unified, timestamped feed for AI debugging | 1.3k | +119/wk | 66 |
Axios wraps the messy parts of XMLHttpRequest (browser) and http (Node.js) into a clean, promise-based API that works the same in both environments. MIT license. You `npm install axios`, call `axios.get` or `axios.post`, and get back a promise with your data. It handles JSON parsing, request/response interceptors, timeouts, cancellation, and automatic transforms. The API is clean and the docs are solid. Everything is free. It's an npm package with no paid tier. The catch: the `fetch` API is now available everywhere (browsers and Node.js 18+). For simple requests, native fetch does what Axios does without adding a dependency. Axios still wins on interceptors, automatic retries, request cancellation (AbortController works but is clunkier), and upload progress tracking. But the gap is shrinking. For new projects, consider whether you actually need Axios or if fetch with a thin wrapper (like ky or ofetch) is enough. Axios isn't going anywhere, but you should know the alternative is already built into your runtime.
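The "fetch with a thin wrapper" option is easy to underestimate. As a rough sketch (not Axios's implementation; `getJson`, `GetJsonOptions`, and `fetchImpl` are made-up names for illustration), this is most of what a minimal wrapper needs: reject on non-2xx, parse JSON, and enforce a timeout via AbortController:

```typescript
// Hypothetical minimal wrapper over native fetch (Node 18+ / browsers).
// The fetch implementation is injectable so it can be exercised offline.
type FetchLike = (url: string, init?: RequestInit) => Promise<Response>;

interface GetJsonOptions {
  timeoutMs?: number;
  fetchImpl?: FetchLike;
}

async function getJson<T>(url: string, opts: GetJsonOptions = {}): Promise<T> {
  const { timeoutMs = 5000, fetchImpl = fetch } = opts;
  const ctrl = new AbortController();
  const timer = setTimeout(() => ctrl.abort(), timeoutMs);
  try {
    const res = await fetchImpl(url, { signal: ctrl.signal });
    // Axios rejects on non-2xx status by default; plain fetch does not,
    // so a wrapper has to do it explicitly.
    if (!res.ok) throw new Error(`HTTP ${res.status} for ${url}`);
    return (await res.json()) as T;
  } finally {
    clearTimeout(timer);
  }
}
```

Interceptors, retries, and upload progress are where this stops fitting on one screen and Axios (or ky) starts earning its place in your dependency tree.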
Graphify reads your entire codebase, docs, PDFs, and even screenshots, then builds a knowledge graph you can actually navigate. It parses 19 languages via tree-sitter for code and uses an LLM for everything else. The result is an interactive HTML visualization showing how your architecture, concepts, and files connect. The first extraction pass costs real API tokens (Claude or GPT), proportional to your corpus size. After that, incremental updates via SHA256 caching mean re-runs only process changed files. The 71x token compression claim is real for subsequent queries, not the initial scan. Runs as a /graphify slash command inside Claude Code, Codex, or OpenCode. For developers onboarding to large or unfamiliar codebases: this is genuinely useful. Architecture reviews, cross-referencing code with design docs, understanding how a monorepo fits together. Exports to Neo4j, Obsidian vaults, or standalone wikis. The catch: it is a plugin, not a standalone tool. You need Claude Code or a compatible AI assistant as the runtime. Quality of inferred relationships depends on the underlying LLM, and the initial scan of a large repo is not cheap.
Postiz schedules social media posts as an open source app you can self-host. It supports Twitter/X, LinkedIn, Instagram, Facebook, TikTok, and more. The AI features generate post variations and suggest optimal posting times. You get a content calendar, team collaboration, and analytics. Self-hosting is free under AGPL-3.0. The cloud version at postiz.com has a free tier and paid plans; details vary but expect typical SaaS pricing for social scheduling ($15-30/mo range for individuals, more for teams). The catch: AGPL means if you modify the code and offer it as a service, you must open source your changes. The growth is impressive but the project is young, so expect rough edges and breaking changes. Self-hosting requires Docker, PostgreSQL, Redis, and an LLM API key for the AI features (OpenAI, etc.), which adds cost. And honestly, if you're one person managing one account, Buffer's free tier is less work to set up.
Maple Font is a coding font with ligatures, designed to make code easier to read. If you stare at a terminal or editor all day and care about how your code looks (arrow operators lining up, distinct shapes for similar glyphs: 0 vs O, 1 vs l vs I), it's worth trying. What sets it apart: it's one of the few coding fonts that nails CJK (Chinese, Japanese, Korean) character support alongside Latin characters. If you work in a multilingual codebase or just want consistent rendering across languages, most coding fonts fall apart here. Maple doesn't. One of the fastest-growing font projects on GitHub. The community clearly wanted this. Fully free for the base version. There's a "Maple Font Plus" with extra features, but the standard Maple Mono covers everything most developers need: ligatures, Nerd Font variants, variable weight support. The catch: font preference is deeply personal. You might love it or hate it in 30 seconds. The ligatures are opinionated. If you don't like `!=` rendering as a single glyph, you'll need to disable them per-editor. Also, being newer means less battle-testing across every terminal and editor combo. Try it in your actual workflow before committing.
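Disabling ligatures is usually a one-line editor setting. In VS Code, for example (assuming the font installs under the family name "Maple Mono"), settings.json would look like:

```jsonc
{
  // Keep the font, but render != and => as individual characters
  "editor.fontFamily": "Maple Mono",
  "editor.fontLigatures": false
}
```

Most other editors and terminals have an equivalent toggle, so you can evaluate the glyphs and the ligatures independently.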
This is the open source browser engine that powers Google Chrome, Microsoft Edge, Brave, Opera, and most other browsers you use daily. If you're building a browser, an embedded web view, or anything that needs to render web pages, Chromium is the engine; this GitHub repo is a BSD-3-licensed mirror of Google's internal repository. To be clear: this is not a tool you install from npm. This is one of the largest open source projects in existence, millions of lines of C++. Fully free and open source. Google funds most development. You can build Chromium from source and ship your own browser. The catch: unless you're building a browser or doing engine-level development, you don't interact with this repo directly. Most developers use Chromium through Electron, Puppeteer, or Playwright. Building from source takes hours on a powerful machine and requires specific toolchains. The star velocity reflects interest, not usability. This is infrastructure that 3 billion people use daily but almost no one builds from source.
Ky wraps Fetch in a tiny, elegant API that smooths over its rough edges. Same author as Got, but built on Fetch so it works in browsers, Deno, Bun, and Node 18+. MIT licensed. It's intentionally small, about 5KB. You get retries on failure, timeout support, hooks (beforeRequest, afterResponse), and JSON shortcuts: `ky.get(url).json()` instead of `const res = await fetch(url); if (!res.ok) throw new Error(res.statusText); return res.json();`. Fully free. npm package, no service, no paid tier. The catch: Ky is minimal by design. If you need advanced features like request cancellation with progress tracking, HTTP/2, or streaming uploads, Got or Axios have more batteries included. And since Ky is built on Fetch, it inherits Fetch's limitations: no built-in cookie jar, no proxy support in Node without extra config. For pure API calls where you want a thin wrapper over Fetch, Ky is perfect. For complex HTTP needs, it might not be enough.
React-pdf lets you build PDFs using JSX components. Instead of wrestling with a PDF library's API, you write `<Document><Page><View><Text>` just like you'd write a React component. The mental model is the same, the output is a PDF. Everything is free under MIT. No paid tier, no premium features. The library handles layout (flexbox-based), fonts, images, SVG, links, and page breaks. It works in Node.js (server-side generation) and in the browser. There's nothing to host for the library itself. `npm install @react-pdf/renderer` and start building. If you need server-side PDF generation at scale, you'll run a Node.js service, but that's your infrastructure choice. Solo developers: perfect for adding PDF export to a React app. Invoices, reports, anything you'd otherwise build with a Python PDF library. Small teams: great for any app that needs branded PDF output. The component model makes templates maintainable. Growing teams: it scales, but complex layouts with many pages can be slow to render. The catch: the flexbox layout engine is close to CSS flexbox but not identical. Some properties behave slightly differently, and you'll spend time debugging layout issues that "should work." Also, rendering speed. Complex multi-page documents can take seconds to generate. For high-volume PDF generation, you might want a dedicated service.
Got is a feature-rich HTTP client with a clean API; it's what `axios` wishes it was for server-side Node. MIT licensed, maintained by Sindre Sorhus (who maintains half the npm ecosystem). Got is specifically designed for Node.js, not the browser. It gives you automatic retries, request cancellation, HTTP/2 support, progress events, and RFC-compliant caching out of the box. Fully free. It's an npm package. No paid tier, no service, no account. The catch: Got is Node.js only. If you need a client that works in both browser and Node, use Axios or Ky. Got also doesn't support the Fetch API; it's its own thing. With Fetch now built into Node 18+, some teams are moving toward lighter Fetch wrappers instead. Got is feature-rich but it's also 2.2MB installed; if bundle size matters for your serverless functions, consider Ky or native Fetch.
SponsorBlock automatically skips sponsor segments, intros, outros, and other non-content sections in YouTube videos. It's a browser extension powered by a crowdsourced database: users mark sponsor segments, and everyone else's player skips them automatically. GPL v3. Covers more than just sponsors: intros, outros, "subscribe" reminders, non-music sections of music videos, filler, and previews. The community database has hundreds of millions of submitted segments. Works on YouTube in Chrome, Firefox, and Safari, plus third-party integrations in apps like NewPipe and Invidious. Fully free. No paid tier. Community-driven and donation-supported. Install the extension, watch a video, sponsor segments skip automatically. That's it. You can also submit segments yourself when they're missing. The catch: it depends on the community submitting segments. Popular videos get covered quickly. Obscure videos might not have segments marked yet. YouTube could also break the extension with player changes (it's happened before and been fixed quickly). And some creators argue this hurts their sponsorship revenue. Fair point, and you should decide where you stand on that. The tool works. Whether you should use it is a personal call.
winget-pkgs is the community-maintained package repository behind Windows Package Manager (winget). It is to Windows what Homebrew's formulae repo is to macOS: the catalog of installable software. You run "winget install firefox" and this repo is where that manifest lives. Contributing a package means submitting a YAML manifest via pull request. Microsoft validates the installer (MSI, MSIX, EXE formats), runs automated checks, and merges it. The repo has thousands of packages and grows daily. If you maintain Windows software, getting your app into winget-pkgs is the easiest distribution channel Microsoft offers. For Windows developers, winget is now the default. It ships with Windows 11 and Windows 10 (recent builds). Chocolatey and Scoop still have larger catalogs and more flexibility for power users, but winget has the advantage of being built into the OS. Most teams will end up using winget for standard installs and Chocolatey or Scoop for the long tail. The catch: this is a manifest repo, not a tool you run. The value is indirect. And winget still lags behind Homebrew and apt in package count and community tooling. If you are on macOS or Linux, this is irrelevant to you.
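Contributing looks less intimidating once you see a manifest. A sketch of the singleton format, the simplest of the manifest layouts (every value below is invented for illustration; real submissions need a valid installer URL and SHA256, and larger packages use the multi-file format):

```yaml
PackageIdentifier: Contoso.ExampleTool
PackageVersion: 1.2.3
PackageLocale: en-US
PackageName: Example Tool
Publisher: Contoso
License: MIT
ShortDescription: Placeholder description for illustration
Installers:
  - Architecture: x64
    InstallerType: msi
    InstallerUrl: https://example.com/exampletool-1.2.3.msi
    InstallerSha256: 0000000000000000000000000000000000000000000000000000000000000000
ManifestType: singleton
ManifestVersion: 1.6.0
```

The automated PR checks validate the installer URL and hash, so placeholder values like these would fail CI immediately.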
oapi-codegen turns OpenAPI 3.0 specs into Go code. Generate server boilerplate for Chi, Echo, Fiber, Gin, gorilla/mux, Iris, or net/http; generate API clients with custom request editors; or just generate types from your schemas. Apache 2.0, free, idiomatic Go output. No service to run; it's a code generator you invoke from your build. The strict server mode handles request and response marshaling automatically. Recent versions added import mapping so you can split a giant spec across packages, which actually matters when your API surface gets big. This is the standard for OpenAPI-first Go services. If you have a spec, you should be using this. Solo, small, large, all the same answer: free, run it as part of your codegen pipeline, save yourself days of boilerplate. The catch: OpenAPI 3.1 support is experimental, the release cadence is loose (pinning to commits is recommended for unreleased features), and the generated code aims for simplicity over refactoring, so you'll see some duplication. Implicit additionalProperties are also ignored by default. Read the docs before you assume your spec round-trips perfectly.
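Current versions are driven by a YAML config file rather than a long flag list. A minimal sketch (file name, package name, and output path are placeholders; check the project docs for the exact keys your pinned version supports):

```yaml
# cfg.yaml, invoked as: oapi-codegen -config cfg.yaml openapi.yaml
package: api
output: internal/api/api.gen.go
generate:
  models: true
  chi-server: true
  strict-server: true  # strict mode generates the request/response marshaling glue
```

Commit the config next to your spec and run the generator via `go generate` so the output stays reproducible across the team.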
Codeburn shows you where your AI coding tokens go. Install it, run it, and get a TUI dashboard breaking down cost by project, model, and session across Claude Code, Codex, Cursor, Copilot, and others. No API keys needed: it reads the session files these tools already store on your machine. Setup is one command: `npx codeburn`. Pricing data pulls from LiteLLM and caches for 24 hours. Beyond raw cost, it classifies your sessions into task types (debugging, testing, coding) and calculates one-shot success rates so you can see which tasks burn tokens on retries. Anyone paying for AI coding tools should run this once just to see the numbers. The macOS menubar app gives passive cost awareness without opening a terminal. The optimize command flags waste patterns like repeated failures on the same task. The catch: it only knows about tools that store session data locally. If your AI tool does not write to disk (or you have not used it on this machine), codeburn cannot see it.
Extism is a WebAssembly (Wasm) plugin system. It creates a sandbox where third-party code runs safely, can't access your filesystem or network unless you explicitly allow it. It's essentially a bouncer for third-party code. You define what the plugin can do, compile it to Wasm, and Extism handles the execution boundary. Works from Go, Rust, Python, Node.js, Ruby, and more. BSD-3 licensed. Fully free and open source. The team behind it (Dylibso) offers consulting but the framework itself has no paid tier. The catch: WebAssembly plugins are powerful but the developer experience is still rough. Writing plugins means compiling to Wasm, which limits your language choices and debugging tools. The ecosystem is young. You won't find a marketplace of pre-built Extism plugins. This is for teams building platforms where extensibility is a core feature, not for adding a quick plugin system to a side project.
Boneyard generates skeleton loading screens by snapshotting your actual UI. Instead of hand-coding placeholder layouts that drift from your real components, you wrap them in a Skeleton component, run the CLI, and it captures the exact layout at multiple breakpoints. The skeletons match because they are derived from the real thing. Works with React, Svelte 5, and React Native. The CLI opens a headless browser, finds your Skeleton wrappers, and outputs static JSON bone files. React Native support uses fiber tree walking instead of a browser, which is clever. Three animation styles (pulse, shimmer, solid), dark mode support, and incremental caching so rebuilds skip unchanged components. Frontend developers tired of manually measuring skeleton states: this saves real time. Especially valuable on data-heavy apps where perceived load time matters. Teams get loading states that stay in sync as designs evolve. The catch: the CLI needs a running dev server to snapshot from. Dynamic layouts that change shape based on data will not produce perfect skeletons. And if you change a component, you need to re-run the build.
PureMac cleans junk off your Mac: caches, logs, Xcode leftovers, Homebrew cruft, mail attachments. Native SwiftUI, zero telemetry, completely free. It's the CleanMyMac alternative that doesn't cost $40/year and doesn't phone home. There's nothing to self-host. Download, install, run. It scans your system and shows exactly what it wants to delete before touching anything. Scheduled auto-cleaning is built in, so you can set it and forget it. macOS 13+ required. This is a personal productivity tool, not an enterprise play. Developers with a cluttered Mac who don't want to pay for CleanMyMac and don't trust closed-source cleaners with full disk access have an obvious choice here. The code is MIT-licensed and publicly auditable. The catch: no malware scanning, no app uninstaller, no smart file deduplication. It cleans known junk paths well but doesn't try to be a full system utility suite.
The nix crate gives you safe, idiomatic Rust wrappers instead of raw unsafe libc calls. That's the whole pitch. It covers POSIX APIs across Linux, macOS, FreeBSD, and more. You get typed enums instead of integer constants, Result types instead of checking errno, and zero-cost abstractions over things like mmap, ioctl, and ptrace. It's been around since 2015 and is a dependency in hundreds of Rust projects. Everything is free. MIT licensed, no paid tier, no cloud service. You add it to your Cargo.toml and go. Solo developers building anything systems-level in Rust should already have this in their toolkit. Teams don't need to coordinate around it. It's a library, not infrastructure. The catch: it's Unix-only. Windows developers need something else entirely. And the API surface is massive. Not everything is equally well-documented. You'll occasionally hit a function that sends you straight to the man pages.
Renode is a virtual hardware lab: you run your actual firmware binary against simulated chips. It's built by Antmicro, an embedded systems consultancy. Supports ARM Cortex-M/A, RISC-V, Xtensa, and other architectures. You define your hardware in a configuration file: CPU, memory, UART, SPI, I2C, GPIOs, and Renode simulates it cycle-accurately enough to run real firmware. Fully free to use. The license is listed as 'Other': it's the MIT license for most components. Antmicro provides commercial support, custom platform models, and integration services for enterprise customers, but the tool itself is free. Solo embedded developers: run firmware tests without buying dev boards. Small teams: CI/CD integration, testing firmware on simulated hardware in your pipeline. Medium to large: simulate multi-device networks and test inter-device communication without a hardware lab. The catch: simulation is never perfect. Timing-sensitive firmware may behave differently on real hardware. Not every peripheral is modeled; you may need to write custom peripheral models for uncommon chips. And the documentation, while improving, assumes you already know embedded development. If you're not writing firmware, this tool has no use case for you.
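To give a flavor of the workflow (everything here is illustrative: the addresses and peripheral choices are made up, and a real platform also needs a CPU and interrupt controller defined), you describe hardware in a `.repl` platform file, for example:

```
flash: Memory.MappedMemory @ sysbus 0x00000000
    size: 0x100000

uart0: UART.PL011 @ sysbus 0x4000C000
```

and then drive the emulator from a `.resc` script that loads the platform and your firmware:

```
mach create "demo"
machine LoadPlatformDescription @example.repl
sysbus LoadELF @firmware.elf
showAnalyzer uart0
start
```

The same script runs interactively or headless, which is what makes the CI use case practical.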
whatcable answers a question macOS hides: what can this USB-C cable actually do? Plug a charger or peripheral into a Mac, and the menu bar tells you the cable's real specs (USB 2.0 vs 5/10/20/40/80 Gbps), its power rating (3A or 5A up to 60W/100W/240W), and why charging might be slower than expected. MIT licensed, free. The data already exists in macOS via IOKit. whatcable surfaces it. No private APIs, no kernel extensions, no daemons. Install via Homebrew (`brew install --cask whatcable`) or grab the signed and notarized .app from GitHub Releases. Apple Silicon only, macOS 14 or later. The "why is my Mac charging slowly" diagnostic is the killer feature. It tells you whether the cable, the charger, or the Mac itself is the bottleneck. This is a niche utility. If you have a drawer of identical-looking USB-C cables and one of them is silently a USB 2.0 charge-only cable, you have already needed this. Free, no paid tier, single-developer maintenance. The catch: Apple Silicon only. Intel Macs use older Thunderbolt controllers that do not expose the PD state and cable e-marker data the app reads.
Sling is a small Go library that makes building and sending HTTP requests less tedious. Instead of manually constructing http.Request objects, setting headers, encoding query parameters, and parsing responses, Sling gives you a chainable builder pattern. A Go equivalent of Python's requests library, but lighter. MIT license. The API is clean: `sling.New().Base(url).Get(path).QueryStruct(params).ReceiveSuccess(response)`. Supports JSON encoding/decoding, form data, custom headers, and base URL composition. No external dependencies beyond the standard library. Fully free. It's a library. Install it with `go get`, use it in your code. No service, no hosting, nothing to pay for. The catch: this is a mature-but-quiet project. The Go standard library's `net/http` is already good. Sling saves typing but doesn't add capabilities. If your team has strong opinions about minimizing dependencies, the standard library does everything Sling does with more code. And for complex API clients, you might want a full SDK generator like OpenAPI instead of a request builder.
ClawSweeper is a GitHub maintenance bot that uses an LLM to identify stale issues and PRs and propose closing them with reasoning attached. It also has a commit sweeper that flags potential issues in code. MIT licensed, free, runs on GitHub Actions or as a GitHub App. Schedule-driven: hourly for hot items, daily for items under 30 days, weekly for older ones. Or trigger it on demand against a SHA range. The pitch vs stale-action and probot/stale is that it actually reads the issue, the PR, and main, then produces evidence ("implemented in commit X" or "duplicates issue Y") instead of closing on timeouts. Pick this if you maintain an OSS repo with hundreds of stale issues and not enough time to review them. Solo maintainers: huge time saver, you only review suggestions. Small teams: same. Large engineering orgs probably build this internally; the public version is geared toward OSS maintenance patterns. The catch: LLM-assisted bots make mistakes confidently. ClawSweeper proposes closures, it does not auto-close, and that's the right default. If you wire it to auto-close, you will lose real bug reports.
requests-cache adds persistent caching to Python's `requests` library. Pip install it, wrap your session, and identical HTTP calls return cached responses instead of hitting the network. Backends include SQLite (default), Redis, MongoDB, DynamoDB, and flat files. Install and a one-liner to enable caching on your session. No service to run. The default SQLite backend lives in `~/.cache/`. Switch backends with a parameter when you need persistence across machines or processes. Pick this for code that hits the same API repeatedly: scrapers, data pipelines, ETL jobs, integration tests against rate-limited services. Solo and small teams: drop it in, pays for itself the first time you stop hammering an API. Large teams at scale probably want a dedicated cache layer (Redis, Varnish, CDN), but this is still useful for the long tail. The catch: cache expiration is your problem. By default cached responses live forever; you set `expire_after` or rely on `Cache-Control` headers. Forget that and your scraper happily serves six-month-old data.
Claude Usage reads the JSONL session logs that Claude Code writes to your machine and turns them into charts. Per-model token breakdowns, cache hit rates, cost estimates, and session history, all in a local browser dashboard. Anthropic's own UI gives you a progress bar. This gives you the full picture. Zero dependencies. Standard library Python only, no pip install. Clone the repo, run the dashboard command, and it serves a Chart.js UI at localhost:8080 that auto-refreshes every 30 seconds. A SQLite database at ~/.claude/usage.db caches the parsed data for fast incremental re-scans. CLI commands cover scan, today, stats, and dashboard. Pro and Max subscribers who want to understand their actual token consumption per session and per model need this. The cost estimates use current API pricing, which is useful even for subscription users as a proxy for understanding usage patterns. The catch: only captures local Claude Code sessions. Cowork sessions (server-side) are not included. Cost estimates reflect API pricing, not what you actually pay on a Pro/Max subscription. Single-maintainer project, so pricing tables need manual updates when Anthropic changes models.
Codex++ is a tweak system for OpenAI's Codex desktop app. Inject custom features, fix UI bugs, and run a tweak manager without rebuilding the app. It's userscripts for your AI coding assistant. MIT licensed, available on macOS, Linux, and Windows. Install via Bun, a bash script, or PowerShell. The installer patches your local Codex app, backs it up, manages signatures, and installs a watcher that re-applies tweaks after Codex updates. Hot-reload means save a tweak file and it's live. Two default tweaks ship: custom keyboard shortcuts and UI improvements. Pick this if you use Codex daily and have a list of "I wish it just did X" complaints. Solo: free, the watcher handles updates. Small teams: works fine but each user installs separately. Large orgs running managed Codex deployments: don't, this voids code signatures and your security team will not love it. The catch: it's unofficial. Modifying Codex.app voids code signatures. Updates from the official app overwrite patches; the watcher re-applies them, but a structural change by OpenAI breaks your tweaks until the maintainer catches up. This is a power-user toy, not production tooling.
Dev3000 is a CLI tool that captures your entire development session: server logs, browser console output, network requests, screenshots, and user interactions in one unified timeline. From Vercel Labs. Free, MIT-licensed. Install via npm or bun globally, run d3k in your project, and it hooks into your Node.js server and browser. AI agents (Claude Code, Cursor, Windsurf) can read the live timeline for context. No persistent server required. The cloud collaboration features use Vercel Sandbox if you need to share sessions. If you use AI coding assistants and spend time context-switching between terminal logs and browser DevTools, this is directly useful. The CLI is free with no limits. Solo developers working with agentic tools are the target audience here. The catch: Vercel Labs means experimental. Headless capture mode has rough edges. Deeper cloud features will likely require deeper Vercel platform integration over time.