Description

I am trying to use DeepSeek R1 as the backend LLM service for the MM AI copilot, but it fails when calling the backend API. The server logs show:

message: invalid character '<' looking for beginning of value

As we know, the DeepSeek R1 response starts with a `<think>` part, and some AI client tools display that 'think' part as a collapsible section in the UI.

My question is: are there plans to fix this, or to fully support reasoning models in the future?
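For context, that message is the standard error Go's encoding/json returns when the body it is asked to parse starts with a non-JSON character. A minimal sketch (hypothetical field names, not the copilot's actual code) of stripping a leading `<think>...</think>` block before parsing:

```go
package main

import (
	"encoding/json"
	"fmt"
	"regexp"
)

// thinkRe matches a leading <think>...</think> block that some
// reasoning-model deployments emit before the actual payload.
var thinkRe = regexp.MustCompile(`(?s)^\s*<think>.*?</think>\s*`)

func main() {
	raw := "<think>chain of thought...</think>{\"content\":\"final answer\"}"

	// Strip the reasoning prefix so the remainder is valid JSON.
	cleaned := thinkRe.ReplaceAllString(raw, "")

	var out struct {
		Content string `json:"content"`
	}
	if err := json.Unmarshal([]byte(cleaned), &out); err != nil {
		// Without the stripping step above, this fails with:
		// "invalid character '<' looking for beginning of value"
		fmt.Println("parse error:", err)
		return
	}
	fmt.Println(out.Content)
}
```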
Steps to reproduce
Configure the DeepSeek R1 model as the backend AI service and post a message to the copilot.
My mistake: the standard DeepSeek API doesn't return the think part; the output I saw came from a Qwen deployment.
After I switched to the latest version of vLLM and re-ran the DeepSeek R1 model, the copilot is now able to accept the LLM's output properly. However, the reasoning part still causes lag in the UI, and the reasoning process is not displayed. Will this feature be implemented in the UI in the future?
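For what it's worth, recent vLLM versions can split the reasoning from the final answer when started with a reasoning parser (e.g. `vllm serve deepseek-ai/DeepSeek-R1 --reasoning-parser deepseek_r1`); the OpenAI-compatible response then carries a separate `reasoning_content` field alongside `content`, so nothing non-JSON ever reaches the client. A minimal Go sketch of reading both fields (the endpoint URL and model name are placeholders for a local vLLM server):

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// Hypothetical local vLLM endpoint serving DeepSeek R1.
	url := "http://localhost:8000/v1/chat/completions"

	reqBody, _ := json.Marshal(map[string]any{
		"model": "deepseek-ai/DeepSeek-R1",
		"messages": []map[string]string{
			{"role": "user", "content": "Hello"},
		},
	})

	resp, err := http.Post(url, "application/json", bytes.NewReader(reqBody))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var out struct {
		Choices []struct {
			Message struct {
				Content          string `json:"content"`
				ReasoningContent string `json:"reasoning_content"`
			} `json:"message"`
		} `json:"choices"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		panic(err)
	}

	msg := out.Choices[0].Message
	// The reasoning arrives as its own field, so a UI could render it
	// as a collapsible section instead of breaking JSON parsing.
	fmt.Println("reasoning:", msg.ReasoningContent)
	fmt.Println("answer:", msg.Content)
}
```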