Repositories list (530 repositories)
- TensorRT Model Optimizer is a unified library of state-of-the-art model optimization techniques such as quantization, pruning, and distillation. It compresses deep learning models for downstream deployment frameworks like TensorRT-LLM or TensorRT to optimize inference speed on NVIDIA GPUs (see the quantization sketch after this list).
- CUDA Core Compute Libraries
- BioNeMo Framework: For building and adapting AI models in drug discovery at scale
- C++ and Python support for the CUDA Quantum programming model for heterogeneous quantum-classical workflows
- A scalable generative AI framework built for researchers and developers working on Large Language Models, Multimodal models, and Speech AI (Automatic Speech Recognition and Text-to-Speech)
- Scalable data preprocessing and curation toolkit for LLMs
- TensorRT-LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and build TensorRT engines that contain state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs. TensorRT-LLM also contains components to create Python and C++ runtimes that execute those TensorRT engines. A minimal usage sketch follows this list.
- AIStore: scalable storage for AI applications
- Framework providing Pythonic APIs, algorithms, and utilities to be used with Modulus core to physics-inform model training, as well as higher-level abstractions for domain experts
- Open-source deep-learning framework for building, training, and fine-tuning deep learning models using state-of-the-art Physics-ML methods
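A minimal sketch of the kind of workflow TensorRT Model Optimizer supports: post-training INT8 quantization of a PyTorch model through the `modelopt` package. The toy model and random calibration data here are placeholders, and `mtq.INT8_DEFAULT_CFG` is one of the library's stock configurations; treat this as an illustration under those assumptions, not a verified recipe.

```python
import torch
import torch.nn as nn
import modelopt.torch.quantization as mtq

# Toy model standing in for a real network (placeholder for illustration).
model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10))

def forward_loop(m):
    # Calibration pass: run representative data through the model so the
    # inserted quantizers can collect activation statistics.
    for _ in range(8):
        m(torch.randn(4, 128))

# Insert quantizers and calibrate with a stock INT8 configuration; the
# quantized model can then be exported to a deployment framework such as
# TensorRT or TensorRT-LLM.
model = mtq.quantize(model, mtq.INT8_DEFAULT_CFG, forward_loop)
```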
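And a minimal sketch of TensorRT-LLM's high-level `LLM` Python API: constructing an `LLM` from a model checkpoint triggers the engine build, and `generate` runs inference on the resulting engine. The Hugging Face model id and sampling values are placeholders, and exact parameter names can vary between releases.

```python
from tensorrt_llm import LLM, SamplingParams

# Building the LLM object compiles an optimized TensorRT engine for the
# given model (placeholder Hugging Face model id).
llm = LLM(model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")

params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# Run inference on the built engine and print the generated text.
for output in llm.generate(["What is TensorRT?"], params):
    print(output.outputs[0].text)
```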