Ecosyste.ms: Timeline
Browse the timeline of events for every public repo on GitHub. Data updated hourly from GH Archive.
sixsixcoder created a review comment on a pull request on THUDM/GLM-4
Yes, I tested it on vllm=0.6.3.post2, and these parameters have been removed
sixsixcoder opened a pull request on THUDM/GLM-4
Support for GLM-4-9B-Chat-hf and GLM-4v-9B models on vLLM >= 0.6.3 and transformers >= 4.46.0
Support for GLM-4-9B-Chat-hf and GLM-4v-9B models on vLLM >= 0.6.3 and transformers >= 4.46.0; the major changes are in `GLM-4/basic_demo`.
sixsixcoder created a comment on an issue on THUDM/GLM-4
Please try to install gradio==4.44.1
sixsixcoder pushed 7 commits to main sixsixcoder/GLM-4
- Update finetune.py The previous commit added the freezeV option to the config file, but finetune.py did not support it. https://github.com/THUDM/GLM-4/commit/1c2676415c213... f6ee34e
- Merge pull request #585 from sixsixcoder/main Add GLM-4v-9B model support for vllm framework 5142bdb
- Merge pull request #563 from huolongguo1O/patch-1 Fix the error in finetune.py when loading the config file 4e9b473
- Update README.md 9cd635a
- transformers>=4.46 support c2c28bc
- update for transformers 4.46 94776fb
- remove wrong path 6bf9f85
sixsixcoder created a comment on an issue on THUDM/GLM-4
What version of gradio are you using?
sixsixcoder created a comment on an issue on THUDM/GLM-4
This model file can be used to embed images. https://huggingface.co/THUDM/glm-4v-9b/blob/main/visual.py
sixsixcoder closed an issue on THUDM/GLM-4
Severe overfitting when LoRA fine-tuning glm4 with LlamaFactory
### System Info / 系統信息 3090 x4 CentOS ### Who can help? / 谁可以帮助到您? _No response_ ### Information / 问题信息 - [ ] The official example scripts / 官方的示例脚本 - [X] My own modified scripts / 我自己修改的脚本和任务...
sixsixcoder closed an issue on THUDM/GLM-4
Does glm-4v-9b support deployment on the 910B?
### System Info / 系統信息 Does glm-4v-9b support deployment on the 910B? ### Who can help? / 谁可以帮助到您? @Z ### Information / 问题信息 - [X] The official example scripts / 官方的示例脚本 - [ ] My own modified scripts / 我自己修改的脚本和任务 ###...
sixsixcoder closed an issue on THUDM/GLM-4
Need chat template when using vllm as an openai api compatible server with glm-4v-9b
### System Info / 系統信息 PyTorch version: 2.4.0+cu121 Is debug build: False CUDA used to build PyTorch: 12.1 ROCM used to build PyTorch: N/A OS: Ubuntu 22.04.3 LTS (x86_64) GCC version: (Ub...
sixsixcoder created a comment on an issue on THUDM/GLM-4
The comment above is correct. > @Linlp Remove this line from glm_server.py: ` "use_beam_search": False,` As I recall, a few other parameters may also need to be removed; after removing them, try running it a few more times. > > Also, if you are on a newer vLLM version, `inputs=inputs` on this line may need to be changed to `prompt=inputs`. `async for output in engine...
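The workaround described above can be sketched as a small helper that filters out sampling options newer vLLM releases no longer accept before they are passed to `SamplingParams`. This is a minimal sketch, not code from glm_server.py; the names `DEPRECATED_KEYS` and `filter_sampling_kwargs` are illustrative, and the keyword rename (`inputs=` to `prompt=`) would be applied separately at the `engine.generate(...)` call site.

```python
# Hypothetical helper: drop sampling options that newer vLLM versions
# removed (e.g. "use_beam_search"); the commenter recalls other
# parameters may also need the same treatment.
DEPRECATED_KEYS = {"use_beam_search"}

def filter_sampling_kwargs(kwargs: dict) -> dict:
    """Return a copy of kwargs without parameters vLLM no longer accepts."""
    return {k: v for k, v in kwargs.items() if k not in DEPRECATED_KEYS}

# Options as they might appear in an older glm_server.py config.
old_kwargs = {"temperature": 0.8, "top_p": 0.8, "use_beam_search": False}
new_kwargs = filter_sampling_kwargs(old_kwargs)
print(new_kwargs)  # {'temperature': 0.8, 'top_p': 0.8}
```

Filtering the dict once keeps the rest of the server code unchanged while staying compatible with both old and new vLLM versions.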
sixsixcoder created a comment on an issue on THUDM/GLM-4
I suggest you first download the `glm-4v-9b` model files locally, then replace "THUDM/glm-4v-9b" with the local model path.
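The suggestion above can be sketched as a small path-resolution helper: prefer a local copy of the model when it exists, otherwise fall back to the Hub repo id. This is an illustrative sketch, not part of the GLM-4 demos; `resolve_model_path` is a hypothetical name.

```python
from pathlib import Path

def resolve_model_path(local_dir: str, repo_id: str = "THUDM/glm-4v-9b") -> str:
    """Prefer a downloaded local model directory; fall back to the Hub repo id."""
    path = Path(local_dir)
    # A downloaded Hugging Face model directory normally contains config.json.
    if path.is_dir() and (path / "config.json").exists():
        return str(path)
    return repo_id

# Usage sketch (assumed call pattern, matching the demos' trust_remote_code usage):
#   AutoModel.from_pretrained(resolve_model_path("/data/models/glm-4v-9b"),
#                             trust_remote_code=True)
print(resolve_model_path("/nonexistent/dir"))  # falls back to THUDM/glm-4v-9b
```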
sixsixcoder opened a pull request on THUDM/GLM-4-Voice
Support loading and launching web programs from vllm=0.6.3
Support loading and launching web programs from vllm=0.6.3
sixsixcoder pushed 1 commit to vllm063 sixsixcoder/GLM-4-Voice
- Support loading and starting web from vllm=0.6.3 bf1bb7d