Langchain integration #466
Conversation
…sed for debugging
Not 100% sure we want langchain in here :)
gpt_engineer/ai.py (outdated):

        logger.debug(f"Chat completion finished: {messages}")
        return messages

    def lastMessageContent(self, messages):
Python uses snake case.
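For reference, a minimal sketch of the renames this review comment implies (the bodies here are illustrative only; the real methods live in `gpt_engineer/ai.py`):

```python
import json

# Sketch of the PEP 8 renames suggested in review: snake_case, not camelCase.
def last_message_content(messages):  # was: lastMessageContent
    return messages[-1].content

def serialize_messages(messages):    # was: serializeMessages
    return json.dumps([m.content for m in messages])
```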
gpt_engineer/ai.py (outdated):

        if messages and isinstance(messages, list) and len(messages) > 0:
            r = json.dumps(messages_to_dict(messages))
    except Exception as e:
        logging.warn("Exception serializing messages, returning empty array", e)
This exception handling should not exist imo?
@@ -97,15 +98,19 @@ def logs_to_string(steps: List[Step], logs: DB):
    chunks = []
    for step in steps:
        chunks.append(f"--- {step.__name__} ---\n")
        messages = json.loads(logs[step.__name__])
        chunks.append(format_messages(messages))
        # messages = json.loads(logs[step.__name__])
comments?
Thanks @hrobbins for contributing to improving the repository! I can see the case for being able to use other language models, but I think we can do that without langchain, and we want to keep it simple. Are there other advantages? We only merge PRs that follow Python PEP 8, don't leave comments hanging around, etc. Since this is not the focus right now, I will close the PR, but feel free to reopen it if you are committed to fixing everything and can argue for why we want langchain today.
        self.chat = ChatOpenAI(model=self.modelid, temperature=temperature)
    except Exception as e:
        # ? try a different model
        logging.warning(e)
Should not catch this error – it should raise!
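What the reviewer suggests, as a runnable sketch (the `ChatModel` class here is an invented stand-in for LangChain's `ChatOpenAI`, so the example has no dependency): construct the model with no try/except, so a bad configuration fails loudly at startup instead of leaving `self.chat` in a broken state.

```python
class ChatModel:
    """Stand-in for langchain's ChatOpenAI, so the sketch runs stand-alone."""
    def __init__(self, model, temperature):
        if not model:
            raise ValueError("a model id is required")
        self.model = model
        self.temperature = temperature

class AI:
    def __init__(self, model_id, temperature=0.1):
        # No try/except: if the model can't be constructed, raising here is
        # better than logging a warning and continuing with a broken object.
        self.chat = ChatModel(model=model_id, temperature=temperature)
```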
        logger.debug(f"Chat completion finished: {messages}")
        return messages

    def lastMessageContent(self, messages):
        m = messages[-1].content
        if m:
Doesn't need if statement here
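In other words (sketch): the guard collapses to a plain return. An empty string is still valid content, and a truly missing message is better surfaced as an `IndexError` than hidden behind an implicit `None`.

```python
def last_message_content(messages):
    # Before:
    #     m = messages[-1].content
    #     if m:
    #         return m
    # After: return unconditionally; falsy content ("") is still content.
    return messages[-1].content
```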
gpt_engineer/ai.py (outdated):

        return m

    def serializeMessages(messages):
        # dicts = messages_to_dict(history.messages)
?
gpt_engineer/ai.py (outdated):

        # dicts = messages_to_dict(history.messages)
        r = "[]"
        try:
            if messages and isinstance(messages, list) and len(messages) > 0:
Why check?
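The check is indeed redundant: serializing an empty list already yields `"[]"` on its own, so no emptiness guard (and no try/except fallback to `"[]"`) is needed. A sketch, with a stand-in for LangChain's `messages_to_dict` so it runs without dependencies:

```python
import json

def messages_to_dict(messages):
    """Stand-in for langchain.schema.messages_to_dict."""
    return [{"content": m.content} for m in messages]

def serialize_messages(messages):
    # No emptiness check and no try/except: [] serializes to "[]" anyway,
    # and a genuine serialization failure should raise, not be swallowed.
    return json.dumps(messages_to_dict(messages))
```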
There is a lot of good stuff here, so happy to reopen.
@hrobbins I thought more about it and I like many things here. We would need to address the comments and make sure streaming is still working before we merge, though. Have you given up on this, or are you up for the challenge?
@hrobbins if you don't have the time to do it, let me know; I'd be happy to do it! I need this to finish #431 (cc @AntonOsika)
Can probably polish it up today.
Already made most of the recommended changes, looks like it just needs a
callback handler for the streaming.
For other reasons, I started working on breaking down the processes into
more isolated requests to the LLM so each one is individually a smaller
response anyway. Streaming is less important when we're not asking it to
do everything at once with each call. But I can go ahead and re-add it.
-H
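For reference, the streaming hook being discussed follows LangChain's callback pattern: a handler exposing `on_llm_new_token` is passed to the chat model, which invokes it once per streamed token. A dependency-free sketch of that shape (the `FakeChatModel` below is invented purely to exercise the handler; the real model would be constructed roughly as `ChatOpenAI(streaming=True, callbacks=[handler])`):

```python
import sys

class StreamingStdOutHandler:
    """Minimal analogue of LangChain's StreamingStdOutCallbackHandler:
    the model calls on_llm_new_token once per streamed token."""
    def __init__(self):
        self.tokens = []

    def on_llm_new_token(self, token, **kwargs):
        self.tokens.append(token)
        sys.stdout.write(token)  # show output as it arrives

class FakeChatModel:
    """Invented stand-in for a streaming chat model, so the sketch runs
    without network access or the langchain package."""
    def __init__(self, callbacks):
        self.callbacks = callbacks

    def generate(self, tokens):
        for t in tokens:
            for cb in self.callbacks:
                cb.on_llm_new_token(t)
        return "".join(tokens)

handler = StreamingStdOutHandler()
model = FakeChatModel(callbacks=[handler])
result = model.generate(["Hello", ", ", "world"])
```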
Hey @hrobbins! Sorry, I was too eager 🙈 I had already implemented it before I saw your response. Here's the branch: https://github.com/UmerHA/gpt-engineer/commits/langchain-integration. IIUC, the scope of this PR is to make it work for (i) LangChain models, (ii) HuggingFace models, and (iii) local models. I've done (i), so you could still do (ii) and (iii). If you want, I can merge my code into yours? Will do what you decide.
No worries, go for it.
-H
Reviewed what you did. Nicely done!
Trying to keep things simple for this initial conversion to LangChain. The only difference from the original functionality is that this version doesn't stream the output from OpenAI, so there is a bit of a delay while waiting for the result.
This branch will enable using any LLM: Hugging Face models, local models, any LangChain-supported LLM service such as Anthropic/Claude, or a custom LLM.
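That flexibility can be sketched as a tiny provider factory: every LangChain chat model exposes the same message-based interface, so swapping providers becomes a construction-time choice. The class names below mirror real LangChain classes (`ChatOpenAI`, `ChatAnthropic`) but are stubbed here so the sketch runs stand-alone:

```python
class ChatOpenAI:
    """Stub standing in for langchain.chat_models.ChatOpenAI."""
    def __init__(self, **kwargs):
        self.kwargs = kwargs

class ChatAnthropic:
    """Stub standing in for langchain.chat_models.ChatAnthropic."""
    def __init__(self, **kwargs):
        self.kwargs = kwargs

PROVIDERS = {"openai": ChatOpenAI, "anthropic": ChatAnthropic}

def make_chat_model(provider, **kwargs):
    # Because all LangChain chat models share one interface, the rest of
    # the program doesn't need to know which provider this returns.
    if provider not in PROVIDERS:
        raise ValueError(f"unknown provider: {provider}")
    return PROVIDERS[provider](**kwargs)
```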