
Ollama App1 Request Fails, ResponseError: model is required (status code: 400) #195

Open
Wayneless opened this issue Feb 25, 2025 · 0 comments

When using the genai-stack-bot-1 container for conversation, the Streamlit application throws an error with the following traceback:
```
ResponseError: model is required (status code: 400)

File "/usr/local/lib/python3.11/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 542, in _run_script
    exec(code, module.__dict__)
File "/app/bot.py", line 182, in <module>
    chat_input()
File "/app/bot.py", line 95, in chat_input
    result = output_function(
             ^^^^^^^^^^^^^^^^
File "/app/chains.py", line 120, in generate_llm_output
    answer = chain.invoke(
             ^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 3024, in invoke
    input = context.run(step.invoke, input, config)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 284, in invoke
    self.generate_prompt(
File "/usr/local/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 860, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 690, in generate
    self._generate_with_cache(
File "/usr/local/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 925, in _generate_with_cache
    result = self._generate(
             ^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/langchain_ollama/chat_models.py", line 644, in _generate
    final_chunk = self._chat_stream_with_aggregation(
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/langchain_ollama/chat_models.py", line 545, in _chat_stream_with_aggregation
    for stream_resp in self._create_chat_stream(messages, stop, **kwargs):
File "/usr/local/lib/python3.11/site-packages/langchain_ollama/chat_models.py", line 527, in _create_chat_stream
    yield from self._client.chat(
File "/usr/local/lib/python3.11/site-packages/ollama/_client.py", line 168, in inner
    raise ResponseError(e.response.text, e.response.status_code) from None
```
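The 400 comes straight from the Ollama server: the chat request arrives with an empty model name. Below is a minimal sketch of how that can happen, assuming the bot builds its `ChatOllama` instance from configuration (for example an `LLM` environment variable) and that value ends up blank; the `base_url` shown is a hypothetical in-compose endpoint, not taken from this report:

```python
from langchain_ollama import ChatOllama

# Hypothetical reproduction: if the configured model name resolves to an empty
# string, the Ollama server rejects the chat request with
# "model is required (status code: 400)".
llm = ChatOllama(
    model="",  # blank model name -> Ollama returns HTTP 400
    base_url="http://llm:11434",  # assumed Ollama endpoint inside the compose network
)
llm.invoke("hello")  # raises ollama ResponseError: model is required

# Supplying a model that has actually been pulled into Ollama avoids the error:
llm = ChatOllama(model="llama2", base_url="http://llm:11434")
print(llm.invoke("hello").content)
```

If the bot does read its model name from the stack's `.env` (an `LLM` entry is the usual place), verifying that it is set to a model that `ollama list` reports on the server would be the first thing to check.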
