
torchrec
PyTorch domain library for recommendation systems
The Lens
TorchRec is Meta's PyTorch library for training recommendation systems at scale. It is the engine behind the kind of "you might also like" model that runs on a feed of billions of items, packaged as a library you can use too. Free, BSD-3 licensed, used in production at Meta, Twitter, and Databricks.
Running this means GPU clusters, sharded embedding tables, and pipelined training. It gives you the parallelism primitives (model sharding, communication kernels, a planner that figures out the layout), but you bring the recommender model and the data pipeline. Single-GPU experiments work fine; the value shows up when your embedding table no longer fits in memory.
Solo: skip unless you are researching recsys at scale. Small teams: probably overkill, simpler libraries cover most product needs. Large teams shipping recommendations against tens of millions of items: this is what Meta built for itself, which is the strongest signal you will find.
The catch is that this is infrastructure, not a recommender. You still need to know what model you are training, how you are sharding it, and how to feed it. TorchRec makes the hard parts possible, not easy.
Free vs Self-Hosted vs Paid
**Free tier:** Full library under BSD-3. No paid version, no hosted variant.
**Self-hosted:** All you have. Runs on your own GPUs. Production deployments at Meta scale use multi-node clusters with NVIDIA hardware and a real ML infra team behind it.
**Paid:** None from the project itself. The cost lives in the hardware and the team running it.
Free and BSD-3 licensed. Cost is the GPU infrastructure and the people who know how to use it.
License: BSD 3-Clause "New" or "Revised" License
Use freely. No endorsement clause.
Commercial use: ✓ Yes
About
- Owner: Meta PyTorch (Organization)
- Stars: 2,535
- Forks: 642
Explore Further
More tools in the directory
TensorRT-LLM
TensorRT LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and supports state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs. TensorRT LLM also contains components to create Python and C++ runtimes that orchestrate the inference execution in a performant way.
13.6k ★
OpenMythos
A theoretical reconstruction of the Claude Mythos architecture, built from first principles using the available research literature.
11.7k ★
pygraphistry
PyGraphistry is a Python library to quickly load, shape, embed, and explore big graphs with the GPU-accelerated Graphistry visual graph analyzer
2.5k ★