Issue with Non-Streaming LLM Responses (Observed with o1 Model) #959
Comments
@sunner can you help confirm? Thanks.
@PeterDaveHello You are right. The entire project assumes that the LLM is working in streaming mode. I tested #958 by changing …
@sunner o1 itself doesn't support streaming yet; that's the reason why. And I don't want to add a preview model at this point, now that the GA version has been released.
I don't know why … Use https://platform.openai.com/settings/organization/limits or https://api.openai.com/v1/models to see if the `o1` model is available to your account.
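For example, here's a minimal Node.js sketch of such a check (it assumes a valid key in the `OPENAI_API_KEY` environment variable and Node 18+ for the built-in `fetch`; run it as an ES module for top-level `await`):

```javascript
// List the models visible to this API key and check whether "o1" is among them.
const res = await fetch("https://api.openai.com/v1/models", {
  headers: { Authorization: `Bearer ${process.env.OPENAI_API_KEY}` },
});
const { data } = await res.json();
console.log(
  data.some((m) => m.id === "o1")
    ? "o1 is available to this account"
    : "o1 is NOT available to this account"
);
```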
Describe the bug / 描述问题
I've encountered an issue when using the OpenAI `o1` model (#958). While the model returns a response, it doesn't seem to be processed and displayed correctly within the application. Referring to the `LangChainBot.js` code, the implementation appears to rely heavily on callbacks like `handleLLMNewToken`, which suggests an expectation of streaming responses from the underlying LLM. Could this be the reason for the issue observed with the `o1` model? Does the current design primarily target streaming scenarios, and is the lack of proper handling for non-streaming responses causing this problem?
To Reproduce / 复现步骤
Check out the code from #958, build and run the app, use the `o1` model, and see the empty result; a valid response can be observed in the dev tools.
Expected behavior / 期望行为
LLMs should work without `streaming: true`; one possible fallback is sketched below.
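A rough illustration of that fallback, not ChatALL's actual code: branch on whether the model supports streaming, and when it doesn't, deliver the whole reply in a single update. This assumes langchain's `ChatOpenAI` class; `modelSupportsStreaming` and `onUpdate` are hypothetical names standing in for the app's own plumbing:

```javascript
import { ChatOpenAI } from "@langchain/openai";

// modelSupportsStreaming and onUpdate are hypothetical -- not part of
// ChatALL or langchain; they stand in for the app's own plumbing.
async function sendPrompt(modelName, messages, onUpdate) {
  if (modelSupportsStreaming(modelName)) {
    // Streaming path: accumulate tokens as they arrive, as the current
    // handleLLMNewToken-based design does.
    const model = new ChatOpenAI({ modelName, streaming: true });
    let text = "";
    await model.invoke(messages, {
      callbacks: [
        {
          handleLLMNewToken(token) {
            text += token;
            onUpdate(text, false); // partial update
          },
          handleLLMEnd() {
            onUpdate(text, true); // final update
          },
        },
      ],
    });
  } else {
    // Non-streaming path (e.g. o1): one request, one complete response.
    const model = new ChatOpenAI({ modelName, streaming: false });
    const result = await model.invoke(messages);
    onUpdate(result.content, true); // deliver the whole reply at once
  }
}
```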
Screenshots / 截图
No response
Devtools Info / 开发者工具信息
N/A
OS and version / 操作系统版本
Ubuntu
ChatALL version / ChatALL 版本
main branch
Network / 网络
N/A
Additional context / 其它相关信息
No response