Ecosyste.ms: Timeline

Browse the timeline of events for every public repo on GitHub. Data updated hourly from GH Archive.

cyzero-kim

cyzero-kim created a comment on an issue on ggerganov/llama.cpp
It’s a slightly different model, but it works well with MobileVLM, which uses CLIP. It doesn’t seem to be an issue with CLIP itself. `C:\work\llm\cyzero\llama.cpp\build\bin\Release> .\llama-llav...

cyzero-kim created the branch outetts on cyzero-kim/llama.cpp (LLM inference in C/C++)

cyzero-kim pushed 439 commits to master on cyzero-kim/llama.cpp
  • metal : single allocation of encode_async block (#9747) * Single allocation of encode_async block with non-ARC captu... 96b6912
  • ggml : add metal backend registry / device (#9713) * ggml : add metal backend registry / device ggml-ci * meta... d5ac8cf
  • flake.lock: Update (#9753) Flake lock file updates: • Updated input 'flake-parts': 'github:hercules-ci/flake... 6279dac
  • Update building for Android (#9672) * docs : clarify building Android on Termux * docs : update building Android ... f1af42f
  • ggml : add backend registry / device interfaces to BLAS backend (#9752) * ggml : add backend registry / device inter... 6374743
  • scripts : fix spelling typo in messages and comments (#9782) Signed-off-by: Masanari Iida <[email protected]> fa42aa6
  • server : better security control for public deployments (#9776) * server : more explicit endpoint access settings ... 458367a
  • ggml : fix BLAS with unsupported types (#9775) * ggml : do not use BLAS with types without to_float * ggml : retu... dca1d4b
  • examples : remove llama.vim An updated version will be added in #9787 3dc48fe
  • perplexity : fix integer overflow (#9783) * perplexity : fix integer overflow ggml-ci * perplexity : keep n_vo... e702206
  • cmake : do not build common library by default when standalone (#9804) c81f3bb
  • examples : do not use common library in simple example (#9803) * examples : do not use common library in simple exam... c7499c5
  • musa: add docker image support (#9685) * mtgpu: add docker image support Signed-off-by: Xiaodong Ye <xiaodong.ye@... cf8e0a3
  • rpc : add backend registry / device interfaces (#9812) * rpc : add backend registry / device interfaces * llama :... 0e9f760
  • common : use common_ prefix for common library functions (#9805) * common : use common_ prefix for common library fu... 7eee341
  • ggml : move more prints to the ggml log system (#9839) * ggml : move more prints to the ggml log system * show BL... 9677640
  • musa : update doc (#9856) Signed-off-by: Xiaodong Ye <[email protected]> 943d20b
  • llama : improve infill support and special token detection (#9798) * llama : improve infill support ggml-ci * ... 11ac980
  • server : remove legacy system_prompt feature (#9857) * server : remove legacy system_prompt feature ggml-ci * ... 95c76e8
  • server : remove self-extend features (#9860) * server : remove self-extend ggml-ci * server : fix context limi... 1bde94d
  • and 419 more ...

cyzero-kim opened an issue on THUDM/GLM-Edge
Qualcomm measurement data.
### System Info N/A ### Who can help? N/A ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Reproduction ...
cyzero-kim created a comment on a pull request on ggerganov/llama.cpp
@0cc4m Hello, I solved the problem. It was the wrong order of parameters. Please review again. PS C:\work\llm\cyzero\llama.cpp\build\bin\Release> ./test-backend-ops -o POOL_2D ggml_vulkan: F...

cyzero-kim pushed 92 commits to mobile_vlm on cyzero-kim/llama.cpp
  • 20 commits identical to those listed in the push to master above
  • and 72 more ...

cyzero-kim pushed 1 commit to mobile_vlm on cyzero-kim/llama.cpp
  • [fix] Correct the incorrect order of the parameters. fix casting to int. Signed-off-by: Changyeon Kim <cyzero.kim@s... 3932fd5
