Question Validation

Question

Hello,

I am going through the tutorial on using the Llama 3 model (https://docs.llamaindex.ai/en/stable/examples/cookbooks/llama3_cookbook/#react-agent-with-rag-queryengine-tools). Why am I getting an error related to OpenAI when using a Llama 3 model? The error is raised when building the indexes:
from llama_index.core import VectorStoreIndex

lyft_index = VectorStoreIndex.from_documents(lyft_docs)
uber_index = VectorStoreIndex.from_documents(uber_docs)
RateLimitError: Error code: 429 - {'error': {'message': 'You exceeded your current quota, please check your plan and billing details. For more information on this error, read the docs: https://platform.openai.com/docs/guides/error-codes/api-errors.', 'type': 'insufficient_quota', 'param': None, 'code': 'insufficient_quota'}}
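If it helps narrow this down: as far as I understand, VectorStoreIndex.from_documents embeds the document chunks with the globally configured embedding model (Settings.embed_model), and that setting falls back to OpenAI's embedding API when nothing else is configured, regardless of which LLM is in use. Below is a minimal sketch of what I believe would point the embedding step at a local model instead. It assumes a recent llama-index (0.10+) with the llama-index-embeddings-huggingface package installed; the model name "BAAI/bge-small-en-v1.5" is only an example, not something taken from the cookbook.

from llama_index.core import Settings, VectorStoreIndex
from llama_index.embeddings.huggingface import HuggingFaceEmbedding

# Use a local embedding model so that building the index never calls the OpenAI API.
# The model name here is illustrative; any local embedding model should work.
Settings.embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")

# lyft_docs / uber_docs are the documents loaded earlier in the cookbook.
lyft_index = VectorStoreIndex.from_documents(lyft_docs)
uber_index = VectorStoreIndex.from_documents(uber_docs)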
The following ran smoothly and gave an output:
response = llm.complete("Who is Paul Graham?")
print(response)
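For completeness, llm above is the local Llama 3 model from the cookbook, loaded through the HuggingFace integration. The sketch below is roughly how it is set up; the model and tokenizer names and the generation limits are illustrative rather than copied from my exact run.

from llama_index.core import Settings
from llama_index.llms.huggingface import HuggingFaceLLM

# Local Llama 3 served via transformers; on this machine it runs on CPU only.
llm = HuggingFaceLLM(
    model_name="meta-llama/Meta-Llama-3-8B-Instruct",
    tokenizer_name="meta-llama/Meta-Llama-3-8B-Instruct",
    context_window=8192,
    max_new_tokens=256,
)
Settings.llm = llm  # make it the default LLM for query engines and agents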
I do not know how to resolve this. I am running this with CPU only, on a single node with 4 cores and 32 GB of memory per core.
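In case it is relevant, the step in the cookbook I am trying to reach is the ReAct agent over the two query engines, roughly as below; the tool names and descriptions are paraphrased, not copied verbatim from the cookbook.

from llama_index.core.agent import ReActAgent
from llama_index.core.tools import QueryEngineTool, ToolMetadata

# One RAG query engine per index, wrapped as a tool the agent can call.
query_engine_tools = [
    QueryEngineTool(
        query_engine=lyft_index.as_query_engine(similarity_top_k=3),
        metadata=ToolMetadata(name="lyft_10k", description="Information from Lyft's 10-K filing."),
    ),
    QueryEngineTool(
        query_engine=uber_index.as_query_engine(similarity_top_k=3),
        metadata=ToolMetadata(name="uber_10k", description="Information from Uber's 10-K filing."),
    ),
]

agent = ReActAgent.from_tools(query_engine_tools, llm=llm, verbose=True)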