Ecosyste.ms: Timeline

Browse the timeline of events for every public repo on GitHub. Data updated hourly from GH Archive.

YangWang92

YangWang92 starred October2001/Awesome-KV-Cache-Compression
YangWang92 starred microsoft/LLM2CLIP
YangWang92 starred NVIDIA/Star-Attention
YangWang92 starred Nullus157/async-compression
YangWang92 starred NVlabs/Sana
YangWang92 starred google-research/vmoe
YangWang92 starred BICLab/Spike-Driven-Transformer-V3
YangWang92 starred InternLM/InternLM
YangWang92 starred PrimeIntellect-ai/OpenDiloco
YangWang92 starred GraySwanAI/circuit-breakers
YangWang92 starred yangchris11/samurai
YangWang92 starred sangwoohwang/SpikedAttention
YangWang92 starred NVIDIA/ngpt
YangWang92 created a comment on a pull request on huggingface/transformers
> Thanks for the PR! Excited to see more VPTQ models on the Hub. Just a few nits. Thanks also for the nice tests and the documentation. Thank you, @SunMarc, for your review! We are actively developing ne...

YangWang92 starred WangXuan95/LLMA
YangWang92 created a comment on an issue on UbiquitousLearning/mllm
What I mean is that my colleagues and I can add VPTQ support to mllm, but we're not sure how to integrate it most easily. I hope you can help us identify and avoid some potential pitfalls. We will contribute ...

YangWang92 starred SqueezeAILab/SqueezedAttention
YangWang92 starred allenai/open-instruct
YangWang92 starred hitcslj/Awesome-AIGC-3D
YangWang92 starred 2-fly-4-ai/V0-system-prompt
YangWang92 pushed 1 commit to m300 VPTQ/hessian_collector

YangWang92 starred kizniche/ttgo-tbeam-ttn-tracker
YangWang92 starred luliyucoordinate/cute-flash-attention
YangWang92 created a comment on an issue on mlc-ai/mlc-llm
+1

YangWang92 starred QwenLM/Qwen2.5
YangWang92 starred gordicaleksa/pytorch-original-transformer
YangWang92 starred PaulPauls/llama3_interpretability_sae
YangWang92 opened an issue on UbiquitousLearning/mllm
Can VPTQ (an extreme-low bit quantization method) Be Adapted to MLLM?
Hi all, We are currently working on an extreme low-bit LLM compression project called [VPTQ](https://github.com/microsoft/VPTQ). My colleagues and I are exploring ways to deploy larger LLMs/VLMs...