🎯 #pragma unroll
DefTruth/README.md

cuda-learn-notes

Pinned

  1. lite.ai.toolkit (Public)

     🛠 A lite C++ toolkit of 100+ awesome AI models, supporting ORT, MNN, NCNN, TNN, and TensorRT. 🎉🎉

     C++ · 3.7k stars · 706 forks

  2. vllm-project/vllm (Public)

     A high-throughput and memory-efficient inference and serving engine for LLMs.

     Python · 33.2k stars · 5.1k forks

  3. Awesome-LLM-Inference (Public)

     📖 A curated list of awesome LLM/VLM inference papers with code, covering FlashAttention, PagedAttention, parallelism, etc. 🎉🎉

     3.1k stars · 211 forks

  4. CUDA-Learn-Notes (Public)

     📚 150+ Tensor/CUDA Cores kernels: ⚡️flash-attn-mma and ⚡️hgemm with WMMA, MMA, and CuTe (98%~100% of cuBLAS/FA2 TFLOPS 🎉🎉).

     Cuda · 1.9k stars · 198 forks

  5. Awesome-Diffusion-Inference (Public)

     📖 A curated list of awesome diffusion inference papers with code, covering sampling, caching, multi-GPU inference, etc. 🎉🎉

     140 stars · 8 forks

  6. faster-prefill-attention (Public)

     📚 [WIP] FFPA: yet another faster flash prefill attention, with O(1) 🎉 GPU SRAM complexity for headdim > 256; ~1.5x 🎉 faster than SDPA EA.

     Cuda · 20 stars · 1 fork