Nabla Engine
Nabla brings JIT-accelerated Automatic Differentiation (AD), a technique central to gradient-based optimization and physics simulation, to the Mojo programming language 🔥. Nabla currently runs on CPU only; full GPU integration is planned for Q3 2025.
Explore Usage
Currently, Nabla executes all programs lazily. Combined with Mojo's unique memory management capabilities, this enables two quite different programming styles within one framework: functional programming with JAX-like transformations (e.g. vmap, grad, jit) and PyTorch-like imperative programming.
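For readers new to the functional style, the snippet below is a minimal sketch of the composable-transform pattern as it looks in plain JAX, which Nabla's functional API mirrors. It is shown for familiarity only; Nabla's own function names and signatures in Mojo may differ.

```python
# Plain JAX, shown only to illustrate the composable-transform style
# that Nabla mirrors; this is not Nabla code.
import jax
import jax.numpy as jnp

def loss(w, x, y):
    # Mean squared error of a simple linear model.
    pred = jnp.dot(x, w)
    return jnp.mean((pred - y) ** 2)

# grad differentiates w.r.t. the first argument; jit compiles the result.
grad_fn = jax.jit(jax.grad(loss))

# vmap composes with grad to yield per-example gradients in one call.
per_example_grads = jax.jit(jax.vmap(jax.grad(loss), in_axes=(None, 0, 0)))

w = jnp.zeros(3)
x = jnp.ones((8, 3))
y = jnp.ones(8)
print(grad_fn(w, x, y).shape)            # (3,)  gradient over the batch
print(per_example_grads(w, x, y).shape)  # (8, 3) one gradient per example
```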
Nabla leverages Mojo's combination of performance and type/memory safety to address limitations inherent in Python-based scientific computing libraries. Built on top of MAX (Modular's hardware-agnostic, high-performance graph compiler), Nabla aims to help researchers tackle complex challenges in scientific simulation, mechanistic interpretability, and large-scale training.
Unlike frameworks that retrofit JIT onto eager systems (such as PyTorch's Dynamo), Nabla takes the opposite path: we first built a dynamic compilation system on top of MAX (initially for CPU targets), then added full AD support (forward and reverse modes), and are now integrating eager execution. This order avoids architectural dead ends and yields a modular, performant system; GPU support for Nabla in Mojo via MAX is actively under development on this foundation.
Train models with a familiar PyTorch-like style, leveraging Nabla's dynamic compilation for high performance (see the training-loop sketch after this list).
Use composable transforms (vmap, jit, grad) – powerful tools familiar from frameworks like JAX.
Integrate specialized differentiable CPU kernels, gaining fine-grained control that is difficult to achieve in Python (GPU kernel integration via MAX is planned).
Benefit from seamless integration with Mojo's growing high-performance computing ecosystem.
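As a reference point for the training workflow mentioned in the first bullet, the following is the standard PyTorch idiom that Nabla's imperative style is modeled on. It is ordinary PyTorch, not Nabla code; the equivalent Nabla API in Mojo may look different.

```python
# Ordinary PyTorch training loop, shown only as the familiar imperative
# idiom that Nabla's eager style is modeled on; this is not Nabla code.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(3, 16), nn.ReLU(), nn.Linear(16, 1))
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

x = torch.randn(64, 3)
y = torch.randn(64, 1)

for step in range(100):
    optimizer.zero_grad()          # clear accumulated gradients
    loss = loss_fn(model(x), y)    # forward pass
    loss.backward()                # reverse-mode AD
    optimizer.step()               # gradient-based parameter update
```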
Connect with researchers and developers. Discuss features, share use cases, report issues, and contribute to the future of high-performance scientific computing.
Discussions on GitHub