# cusparselt

2 public repositories match this topic.
Custom CUDA kernels for accelerating 1.58-bit ternary LLM inference with 2:4 structured sparsity on consumer Ampere GPUs. Exploits both ternary arithmetic (no multiplies) and hardware sparse tensor cores to maximize throughput on RTX 3060. Based on the Sparse-BitNet paper (Zhang et al., 2026).
sparsity gpu cuda transformer cpp17 cuda-kernels quantization ampere structured-sparsity bitnet ternary-weights cusparselt llm-inference
Updated Mar 11, 2026 · Cuda
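The 2:4 structured sparsity the description refers to is a hardware constraint: Ampere's sparse tensor cores (and the cuSPARSELt library that drives them) require that at most 2 of every 4 consecutive weights are nonzero. A minimal host-side sketch of pruning ternary weights to that pattern is shown below; the function name `prune_2_4` is hypothetical and not taken from the repository, and real pipelines would do this at quantization time before handing the compressed matrix to cuSPARSELt.

```cpp
#include <algorithm>
#include <array>
#include <cassert>
#include <cmath>
#include <cstdint>
#include <vector>

// Hypothetical helper: prune a row of ternary weights {-1, 0, +1} to the
// 2:4 structured-sparsity pattern required by Ampere sparse tensor cores
// (at most 2 nonzeros in every group of 4 consecutive elements).
std::vector<int8_t> prune_2_4(const std::vector<int8_t>& w) {
    std::vector<int8_t> out(w);
    for (size_t g = 0; g + 4 <= w.size(); g += 4) {
        std::array<size_t, 4> idx = {g, g + 1, g + 2, g + 3};
        // Stable sort so the smallest-magnitude entries come first;
        // ties are broken by original position, making pruning deterministic.
        std::stable_sort(idx.begin(), idx.end(), [&](size_t a, size_t b) {
            return std::abs(static_cast<int>(w[a])) <
                   std::abs(static_cast<int>(w[b]));
        });
        // Zero the two smallest-magnitude entries, leaving <= 2 nonzeros.
        out[idx[0]] = 0;
        out[idx[1]] = 0;
    }
    return out;
}
```

A group that already satisfies the constraint (e.g. `{0, 1, 0, -1}`) is returned unchanged, since the two zeroed entries were already zero; a dense ternary group loses its two earliest entries, which is why 2:4 pruning of ternary models is normally done with a sparsity-aware training objective rather than post hoc.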