Ecosyste.ms: Timeline
Browse the timeline of events for every public repo on GitHub. Data updated hourly from GH Archive.
DefTruth published a release on DefTruth/CUDA-Learn-Notes
HGEMM Up to 113 TFLOPS on L20
## What's Changed
* [Mat][Trans] Add f32/f32x4 row/col first kernel by @bear-zd in https://github.com/DefTruth/CUDA-Learn-Notes/pull/89
* [Docs][Contribute] Add How to contribute Notes by @DefTru...

DefTruth created a tag on DefTruth/CUDA-Learn-Notes
v2.4.13 - 🎉 Modern CUDA Learn Notes with PyTorch: fp32/tf32, fp16/bf16, fp8/int8, flash_attn, rope, sgemm, sgemv, warp/block reduce, dot, elementwise, softmax, layernorm, rmsnorm.
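The release notes above mention a new f32/f32x4 row/col-first (transpose) kernel. As a rough illustration only, here is a host-side C++ sketch of the two loop orders; the function names are hypothetical, and the repo's actual kernels are CUDA, where the "f32x4" variant refers to float4-vectorized loads rather than anything shown here.

```cpp
#include <cassert>
#include <vector>

// Row-first transpose: the outer loop walks rows of the source,
// so reads are contiguous and writes are strided.
// (Hypothetical host-side sketch, not the repo's CUDA code.)
std::vector<float> transpose_row_first(const std::vector<float>& a,
                                       int rows, int cols) {
    std::vector<float> out(a.size());
    for (int r = 0; r < rows; ++r)
        for (int c = 0; c < cols; ++c)
            out[c * rows + r] = a[r * cols + c];
    return out;
}

// Col-first transpose: the outer loop walks columns of the source,
// so writes are contiguous and reads are strided.
std::vector<float> transpose_col_first(const std::vector<float>& a,
                                       int rows, int cols) {
    std::vector<float> out(a.size());
    for (int c = 0; c < cols; ++c)
        for (int r = 0; r < rows; ++r)
            out[c * rows + r] = a[r * cols + c];
    return out;
}
```

Both orders compute the same result; on a GPU the choice decides which side of the copy gets coalesced memory accesses, which is why kernels for both orders are worth benchmarking.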
DefTruth created a branch on DefTruth/CUDA-Learn-Notes
opt-hgemm-mma
DefTruth pushed 1 commit to main on DefTruth/CUDA-Learn-Notes
- [HGEMM] Update HGEMM WMMA Benchmark (#97)
  * Update README.md
  * Update hgemm.py
  * Update README.md
  * Update ...
  0aeb450
DefTruth pushed 1 commit to main on DefTruth/CUDA-Learn-Notes
- [HGEMM] Refactor HGEMM WMMA 161616 kernels (#96)
  * update hgemm benchmark option
  * update hgemm benchmark option
  ...
  2e9e997
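For context on the naming in these events: HGEMM is half-precision matrix multiplication, and "WMMA 161616" refers to CUDA's warp-level matrix operations on 16x16x16 tiles (m16n16k16). As an illustration of the computation only, here is a scalar C++ reference with a 16-wide tile loop mirroring that fragment shape; this is a hypothetical sketch, not the repo's tensor-core kernel, and it accumulates in fp32 as fp16 GEMM kernels commonly do.

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

constexpr int TILE = 16; // mirrors the WMMA m16n16k16 fragment shape

// C (MxN) = A (MxK) * B (KxN), all row-major, fp32 accumulation.
std::vector<float> gemm_tiled(const std::vector<float>& A,
                              const std::vector<float>& B,
                              int M, int N, int K) {
    std::vector<float> C(static_cast<size_t>(M) * N, 0.0f);
    for (int i0 = 0; i0 < M; i0 += TILE)
        for (int j0 = 0; j0 < N; j0 += TILE)
            for (int k0 = 0; k0 < K; k0 += TILE)
                // One tile-level multiply-accumulate; on a GPU this
                // inner block maps to a single wmma::mma_sync call.
                for (int i = i0; i < std::min(i0 + TILE, M); ++i)
                    for (int j = j0; j < std::min(j0 + TILE, N); ++j) {
                        float acc = 0.0f;
                        for (int k = k0; k < std::min(k0 + TILE, K); ++k)
                            acc += A[i * K + k] * B[k * N + j];
                        C[i * N + j] += acc;
                    }
    return C;
}
```

The benchmark updates in the commits above compare kernels built around exactly this tile decomposition, with the per-tile work done by tensor-core fragments instead of scalar loops.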