Issues: InternLM/lmdeploy
[Bug] gemma2 model reports an error when doing evaluation (#3048, opened Jan 17, 2025 by zhulinJulia24)
Is there a convenient way to perform model conversion? (#3043, opened Jan 16, 2025 by Lanbai-eleven, label: awaiting response)
[Bug] history tokens is not correct with /v1/chat/interactive. (#3032, opened Jan 15, 2025 by zhulinJulia24)
[Bug] Tokenizer Parallelism Leads to (500 : Internal Server Error) (#3025, opened Jan 14, 2025 by Mr-Loevan)
[Bug] On a T4, inference with the AWQ 4-bit qwen2-14B model is far slower than the same model on vLLM. Is a parameter misconfigured? The same configuration does improve performance on an A800. (#3012, opened Jan 12, 2025 by sundayKK)
[Bug] CUDA error: an illegal memory access was encountered. Vicuna results wrong (#3004, opened Jan 9, 2025 by AllentDan)
[Bug] After deploying llava-hf/llava-v1.6-vicuna-13b-hf with lmdeploy 0.6.4, calls through the OpenAI API return an empty string (#3003, opened Jan 9, 2025 by bang123-box)
[Bug] RuntimeError: Triton Error [CUDA]: an illegal memory access was encountered (#2999, opened Jan 8, 2025 by YSShannon, label: awaiting response)
[Bug] internvl2_8b, 4 3090 cards, CUDA OOM error (#2993, opened Jan 7, 2025 by zhaowenZhou)
[Bug] generation profile hangs on Mixtral-8x7B-Instruct-v0.1 with pytorch backend (#2948, opened Dec 24, 2024 by zhulinJulia24)