
NVIDIA Corporation

Pinned repositories

  1. cuopt (Public)

    GPU accelerated decision optimization

    CUDA · 631 stars · 104 forks
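To make "decision optimization" concrete, here is a toy CPU-side sketch of the kind of problem cuOpt accelerates: a tiny traveling-salesman instance solved by brute force. The distance matrix is made-up example data, and this is not cuOpt's API; cuOpt solves much larger routing and optimization problems on GPUs.

```python
# Toy decision-optimization problem: brute-force traveling salesman.
# Illustrative only; distances are made up and this is NOT cuOpt's API.
from itertools import permutations

def best_tour(dist):
    """Exhaustively find the cheapest round trip starting and ending at node 0."""
    n = len(dist)
    best_cost, best_order = float("inf"), None
    for order in permutations(range(1, n)):
        tour = (0, *order, 0)
        cost = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        if cost < best_cost:
            best_cost, best_order = cost, tour
    return best_cost, best_order

if __name__ == "__main__":
    # Symmetric distance matrix for 4 locations (made-up values).
    dist = [
        [0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 8],
        [10, 4, 8, 0],
    ]
    cost, tour = best_tour(dist)
    print(cost, tour)
```

Brute force is O(n!) and only feasible for toy sizes, which is exactly why GPU-accelerated heuristic solvers like cuOpt exist for real fleets.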

  2. cuopt-examples (Public)

    NVIDIA cuOpt examples for decision optimization

    Jupyter Notebook · 391 stars · 62 forks

  3. open-gpu-kernel-modules (Public)

    NVIDIA Linux open GPU kernel module source

    C · 16.5k stars · 1.6k forks

  4. aistore (Public)

    AIStore: scalable storage for AI applications

    Go · 1.7k stars · 231 forks

  5. nvidia-container-toolkit (Public)

    Build and run containers leveraging NVIDIA GPUs

    Go · 4k stars · 456 forks

  6. GenerativeAIExamples (Public)

    Generative AI reference workflows optimized for accelerated infrastructure and microservice architecture.

    Jupyter Notebook · 3.7k stars · 944 forks

Repositories

Showing 10 of 645 repositories
  • NVFlare (Public)

    NVIDIA Federated Learning Application Runtime Environment

    Python · 852 stars · Apache-2.0 · 226 forks · 15 open issues · 16 open PRs · Updated Dec 28, 2025
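To illustrate what a federated-learning runtime like NVFlare coordinates, here is a minimal pure-Python sketch of the aggregation step in federated averaging (FedAvg), the canonical federated-learning algorithm. The client weights and sample counts are made-up values; this is not NVFlare's API.

```python
# Toy FedAvg aggregation: average client model weights, weighted by each
# client's sample count. Illustrative values only; NOT NVFlare's API.
def fed_avg(client_updates):
    """client_updates: list of (weight_vector, num_samples) per client."""
    total = sum(n for _, n in client_updates)
    dim = len(client_updates[0][0])
    return [
        sum(w[i] * n for w, n in client_updates) / total
        for i in range(dim)
    ]

if __name__ == "__main__":
    # Two hypothetical clients; the second has 3x the data, so it
    # pulls the averaged model toward its weights.
    updates = [([1.0, 2.0], 10), ([3.0, 4.0], 30)]
    print(fed_avg(updates))
```

In a real deployment the runtime handles the hard parts this sketch omits: secure transport, client scheduling, and privacy-preserving aggregation.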
  • NV-Kernels (Public)

    Ubuntu kernels which are optimized for NVIDIA server systems

    C · 74 stars · 49 forks · 0 open issues · 7 open PRs · Updated Dec 28, 2025
  • JAX-Toolbox (Public)

    JAX-Toolbox

    Python · 370 stars · Apache-2.0 · 68 forks · 80 open issues · 40 open PRs · Updated Dec 28, 2025
  • cccl (Public)

    CUDA Core Compute Libraries

    C++ · 2,096 stars · 310 forks · 1,134 open issues (5 need help) · 201 open PRs · Updated Dec 28, 2025
  • edk2-platforms (Public)

    NVIDIA fork of tianocore/edk2-platforms

    C · 10 stars · 4 forks · 0 open issues · 0 open PRs · Updated Dec 28, 2025
  • OSMO (Public)

    The developer-first platform for scaling complex Physical AI workloads across heterogeneous compute, unifying training GPUs, simulation clusters, and edge devices through a simple YAML interface

    Python · 61 stars · Apache-2.0 · 6 forks · 23 open issues · 13 open PRs · Updated Dec 28, 2025
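The OSMO description mentions expressing workloads in YAML. The fragment below is a purely hypothetical sketch of what such a spec could look like: every field name is invented for illustration and is not OSMO's actual schema.

```yaml
# Hypothetical workload spec, invented for illustration only;
# NOT OSMO's actual schema.
workload:
  name: policy-training        # hypothetical workload identifier
  stages:
    - name: train
      resources:
        gpus: 8                # training on GPU nodes
    - name: simulate
      cluster: sim-cluster     # run rollouts on a simulation cluster
    - name: deploy
      target: edge-devices     # push the result to edge hardware
```

The appeal of such declarative specs is that one file can describe a pipeline that spans training GPUs, simulation clusters, and edge devices, which is the claim the repository description makes.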
  • TensorRT-LLM (Public)

    TensorRT-LLM provides an easy-to-use Python API to define Large Language Models (LLMs) and supports state-of-the-art optimizations for efficient inference on NVIDIA GPUs. It also includes components for building Python and C++ runtimes that orchestrate inference execution performantly.

    Python · 12,488 stars · 1,981 forks · 522 open issues · 470 open PRs · Updated Dec 28, 2025
  • gpu-operator (Public)

    NVIDIA GPU Operator creates, configures, and manages GPUs in Kubernetes

    Go · 2,465 stars · Apache-2.0 · 431 forks · 94 open issues · 67 open PRs · Updated Dec 28, 2025
  • aistore (Public)

    AIStore: scalable storage for AI applications

    Go · 1,711 stars · MIT · 231 forks · 0 open issues · 0 open PRs · Updated Dec 28, 2025
  • Model-Optimizer (Public)

    A unified library of state-of-the-art model optimization techniques such as quantization, pruning, distillation, and speculative decoding. It compresses deep learning models for downstream deployment frameworks such as TensorRT-LLM, TensorRT, and vLLM, optimizing inference speed.

    Python · 1,735 stars · Apache-2.0 · 227 forks · 56 open issues · 58 open PRs · Updated Dec 28, 2025
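As a concrete illustration of one technique Model-Optimizer's description lists, here is a toy symmetric int8 post-training quantization of a weight vector in pure Python, using made-up weights. It shows the core idea of compressing floats to 8-bit integers with a scale factor; it is not Model-Optimizer's API.

```python
# Toy symmetric int8 quantization: map floats into [-127, 127] with a single
# per-tensor scale factor. Made-up weights; NOT Model-Optimizer's API.
def quantize_int8(weights):
    """Return (int8 values, scale) such that w ~= q * scale."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Reconstruct approximate floats from the quantized values."""
    return [v * scale for v in q]

if __name__ == "__main__":
    w = [0.1, -0.6, 0.25, 1.0]           # hypothetical weight values
    q, s = quantize_int8(w)
    approx = dequantize(q, s)
    print(q)                             # 8-bit integer codes
    print([round(a, 4) for a in approx]) # reconstruction, within one scale step
```

Storing int8 codes plus one scale uses roughly a quarter of float32's memory, which is the compression-for-deployment trade the description refers to; real libraries add calibration, per-channel scales, and framework export on top.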