Running out of request quota is not handled; save the state instead. #713

Closed
logunovFGP opened this issue Sep 17, 2023 · 4 comments
Labels
bug Something isn't working

Comments

@logunovFGP

Policy and info

  • Maintainers will close issues that have been stale for 14 days if they contain relevant answers.
  • Adding the label "sweep" will automatically turn the issue into a coded pull request. Works best for mechanical tasks. More info/syntax at: https://docs.sweep.dev/

Expected Behavior

gpt-engineer should not crash; it should save the state and propose to continue later.

Current Behavior

gpt-engineer gets stuck in an endless retry loop attempting the request and then crashes. The conversation is not cached, so I have to re-enter everything from scratch and waste credits re-submitting the same information.

Failure Information

GPT-4 LLM.

Steps to Reproduce

Just give a large input when gpt-engineer asks lots of clarifying questions; I spent 30 minutes answering them.

Failure Logs

Nothing more to clarify.
INFO:openai:error_code=rate_limit_exceeded error_message='Rate limit reached for 10KTPM-200RPM in organization org-8JuGfI1rBZ1rbbaeM3CSLawd on tokens per min. Limit: 10000 / min. Please try again in 6ms. Contact us through our help center at help.openai.com if you continue to have issues.' error_param=None error_type=tokens message='OpenAI API error received' stream_error=False
WARNING:langchain.llms.base:Retrying langchain.chat_models.openai.ChatOpenAI.completion_with_retry.._completion_with_retry in 4.0 seconds as it raised RateLimitError: Rate limit reached for 10KTPM-200RPM in organization org-8JuGfI1rBZ1rbbaeM3CSLawd on tokens per min. Limit: 10000 / min. Please try again in 6ms. Contact us through our help center at help.openai.com if you continue to have issues..
INFO:openai:error_code=rate_limit_exceeded error_message='Rate limit reached for 10KTPM-200RPM in organization org-8JuGfI1rBZ1rbbaeM3CSLawd on tokens per min. Limit: 10000 / min. Please try again in 6ms. Contact us through our help center at help.openai.com if you continue to have issues.' error_param=None error_type=tokens message='OpenAI API error received' stream_error=False
WARNING:langchain.llms.base:Retrying langchain.chat_models.openai.ChatOpenAI.completion_with_retry.._completion_with_retry in 4.0 seconds as it raised RateLimitError: Rate limit reached for 10KTPM-200RPM in organization org-8JuGfI1rBZ1rbbaeM3CSLawd on tokens per min. Limit: 10000 / min. Please try again in 6ms. Contact us through our help center at help.openai.com if you continue to have issues..
INFO:openai:error_code=rate_limit_exceeded error_message='Rate limit reached for 10KTPM-200RPM in organization org-8JuGfI1rBZ1rbbaeM3CSLawd on tokens per min. Limit: 10000 / min. Please try again in 6ms. Contact us through our help center at help.openai.com if you continue to have issues.' error_param=None error_type=tokens message='OpenAI API error received' stream_error=False
WARNING:langchain.llms.base:Retrying langchain.chat_models.openai.ChatOpenAI.completion_with_retry.._completion_with_retry in 4.0 seconds as it raised RateLimitError: Rate limit reached for 10KTPM-200RPM in organization org-8JuGfI1rBZ1rbbaeM3CSLawd on tokens per min. Limit: 10000 / min. Please try again in 6ms. Contact us through our help center at help.openai.com if you continue to have issues..
INFO:openai:error_code=rate_limit_exceeded error_message='Rate limit reached for 10KTPM-200RPM in organization org-8JuGfI1rBZ1rbbaeM3CSLawd on tokens per min. Limit: 10000 / min. Please try again in 6ms. Contact us through our help center at help.openai.com if you continue to have issues.' error_param=None error_type=tokens message='OpenAI API error received' stream_error=False
WARNING:langchain.llms.base:Retrying langchain.chat_models.openai.ChatOpenAI.completion_with_retry.._completion_with_retry in 8.0 seconds as it raised RateLimitError: Rate limit reached for 10KTPM-200RPM in organization org-8JuGfI1rBZ1rbbaeM3CSLawd on tokens per min. Limit: 10000 / min. Please try again in 6ms. Contact us through our help center at help.openai.com if you continue to have issues..
INFO:openai:error_code=rate_limit_exceeded error_message='Rate limit reached for 10KTPM-200RPM in organization org-8JuGfI1rBZ1rbbaeM3CSLawd on tokens per min. Limit: 10000 / min. Please try again in 6ms. Contact us through our help center at help.openai.com if you continue to have issues.' error_param=None error_type=tokens message='OpenAI API error received' stream_error=False
WARNING:langchain.llms.base:Retrying langchain.chat_models.openai.ChatOpenAI.completion_with_retry.._completion_with_retry in 10.0 seconds as it raised RateLimitError: Rate limit reached for 10KTPM-200RPM in organization org-8JuGfI1rBZ1rbbaeM3CSLawd on tokens per min. Limit: 10000 / min. Please try again in 6ms. Contact us through our help center at help.openai.com if you continue to have issues..
INFO:openai:error_code=rate_limit_exceeded error_message='Rate limit reached for 10KTPM-200RPM in organization org-8JuGfI1rBZ1rbbaeM3CSLawd on tokens per min. Limit: 10000 / min. Please try again in 6ms. Contact us through our help center at help.openai.com if you continue to have issues.' error_param=None error_type=tokens message='OpenAI API error received' stream_error=False
Traceback (most recent call last):

File "<frozen runpy>", line 198, in _run_module_as_main

File "<frozen runpy>", line 88, in _run_code

File "G:\REPOS\PROJECTS\aviator-game-test.venv\Scripts\gpt-engineer.exe\__main__.py", line 7, in <module>
sys.exit(app())
^^^^^

File "G:\REPOS\PROJECTS\aviator-game-test.venv\Lib\site-packages\gpt_engineer\main.py", line 96, in main
messages = step(ai, dbs)
^^^^^^^^^^^^^

File "G:\REPOS\PROJECTS\aviator-game-test.venv\Lib\site-packages\gpt_engineer\steps.py", line 199, in gen_clarified_code
messages = ai.next(messages, dbs.preprompts["generate"], step_name=curr_fn())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

File "G:\REPOS\PROJECTS\aviator-game-test.venv\Lib\site-packages\gpt_engineer\ai.py", line 173, in next
response = self.llm(messages, callbacks=callsbacks) # type: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

File "G:\REPOS\PROJECTS\aviator-game-test.venv\Lib\site-packages\langchain\chat_models\base.py", line 551, in __call__
generation = self.generate(
^^^^^^^^^^^^^^

File "G:\REPOS\PROJECTS\aviator-game-test.venv\Lib\site-packages\langchain\chat_models\base.py", line 309, in generate
raise e

File "G:\REPOS\PROJECTS\aviator-game-test.venv\Lib\site-packages\langchain\chat_models\base.py", line 299, in generate
self._generate_with_cache(

File "G:\REPOS\PROJECTS\aviator-game-test.venv\Lib\site-packages\langchain\chat_models\base.py", line 446, in _generate_with_cache
return self._generate(
^^^^^^^^^^^^^^^

File "G:\REPOS\PROJECTS\aviator-game-test.venv\Lib\site-packages\langchain\chat_models\openai.py", line 334, in _generate
for chunk in self._stream(

File "G:\REPOS\PROJECTS\aviator-game-test.venv\Lib\site-packages\langchain\chat_models\openai.py", line 305, in _stream
for chunk in self.completion_with_retry(
^^^^^^^^^^^^^^^^^^^^^^^^^^^

File "G:\REPOS\PROJECTS\aviator-game-test.venv\Lib\site-packages\langchain\chat_models\openai.py", line 278, in completion_with_retry
return _completion_with_retry(**kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

File "G:\REPOS\PROJECTS\aviator-game-test.venv\Lib\site-packages\tenacity\__init__.py", line 289, in wrapped_f
return self(f, *args, **kw)
^^^^^^^^^^^^^^^^^^^^

File "G:\REPOS\PROJECTS\aviator-game-test.venv\Lib\site-packages\tenacity\__init__.py", line 379, in __call__
do = self.iter(retry_state=retry_state)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

File "G:\REPOS\PROJECTS\aviator-game-test.venv\Lib\site-packages\tenacity\__init__.py", line 325, in iter
raise retry_exc.reraise()
^^^^^^^^^^^^^^^^^^^

File "G:\REPOS\PROJECTS\aviator-game-test.venv\Lib\site-packages\tenacity\__init__.py", line 158, in reraise
raise self.last_attempt.result()
^^^^^^^^^^^^^^^^^^^^^^^^^^

File "C:\Users\Logun\AppData\Local\Programs\Python\Python311\Lib\concurrent\futures\_base.py", line 449, in result
return self.__get_result()
^^^^^^^^^^^^^^^^^^^

File "C:\Users\Logun\AppData\Local\Programs\Python\Python311\Lib\concurrent\futures\_base.py", line 401, in __get_result
raise self._exception

File "G:\REPOS\PROJECTS\aviator-game-test.venv\Lib\site-packages\tenacity\__init__.py", line 382, in __call__
result = fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^

File "G:\REPOS\PROJECTS\aviator-game-test.venv\Lib\site-packages\langchain\chat_models\openai.py", line 276, in _completion_with_retry
return self.client.create(**kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

File "G:\REPOS\PROJECTS\aviator-game-test.venv\Lib\site-packages\openai\api_resources\chat_completion.py", line 25, in create
return super().create(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

File "G:\REPOS\PROJECTS\aviator-game-test.venv\Lib\site-packages\openai\api_resources\abstract\engine_api_resource.py", line 153, in create
response, _, api_key = requestor.request(
^^^^^^^^^^^^^^^^^^

File "G:\REPOS\PROJECTS\aviator-game-test.venv\Lib\site-packages\openai\api_requestor.py", line 298, in request
resp, got_stream = self._interpret_response(result, stream)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

File "G:\REPOS\PROJECTS\aviator-game-test.venv\Lib\site-packages\openai\api_requestor.py", line 700, in _interpret_response
self._interpret_response_line(

File "G:\REPOS\PROJECTS\aviator-game-test.venv\Lib\site-packages\openai\api_requestor.py", line 763, in _interpret_response_line
raise self.handle_error_response(

openai.error.RateLimitError: Rate limit reached for 10KTPM-200RPM in organization org-8JuGfI1rBZ1rbbaeM3CSLawd on tokens per min. Limit: 10000 / min. Please try again in 6ms. Contact us through our help center at help.openai.com if you continue to have issues.

@logunovFGP logunovFGP added bug Something isn't working triage Interesting but stale issue. Will be close if inactive for 3 more days after label added. labels Sep 17, 2023
@logunovFGP
Author

logunovFGP commented Sep 17, 2023

To be clear, the main issue is that I had already answered the questions, but an unhandled exception meant the context was not saved, so when I started the project again it began from the same questions. This means:

  • double the money spent on the API
  • double the time spent answering questions (less if the answers were saved and you expect the crash)
  • no guarantee it won't crash again after filling in the data.

As a workaround, would it be possible to save each answer to memory immediately instead of waiting for all the answers?
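A minimal sketch of this workaround idea (the file location, function names, and JSON layout here are hypothetical illustrations, not gpt-engineer's actual API): each question/answer pair is flushed to disk the moment it is given, so a later crash loses nothing.

```python
import json
from pathlib import Path

# Hypothetical checkpoint location; gpt-engineer keeps project state
# under a "memory" directory, but this exact file is an assumption.
STATE_FILE = Path("memory/clarify_state.json")

def load_answers() -> list[dict]:
    """Restore previously given answers, if any, so a restart can skip them."""
    if STATE_FILE.exists():
        return json.loads(STATE_FILE.read_text())
    return []

def save_answer(question: str, answer: str) -> None:
    """Append one question/answer pair and write it to disk immediately."""
    answers = load_answers()
    answers.append({"question": question, "answer": answer})
    STATE_FILE.parent.mkdir(parents=True, exist_ok=True)
    STATE_FILE.write_text(json.dumps(answers, indent=2))
```

On restart, the clarification step could call `load_answers()` first and only ask the questions that are not yet on disk.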

@ATheorell
Collaborator

Thanks for the report! First of all:

@lukaspetersson (as reviewer of ai.py) do we need to add a catch somewhere for RateLimitError?

Generally: we are actively working on making gpt-engineer simpler, with shorter prompts etc.; see for example PR #733 @logunovFGP

@ATheorell ATheorell removed the triage Interesting but stale issue. Will be close if inactive for 3 more days after label added. label Sep 25, 2023
@lukaspetersson
Contributor

@lukaspetersson (as reviewer of ai.py) do we need to add a catch somewhere for RateLimitError?

#741

@ATheorell
Collaborator

Closing with the merging of #741. Let us know if the error persists @logunovFGP
