Sometimes it hallucinates despite fetching accurate data! #2
My current prompt looks like this: `datasette_chatgpt_plugin/__init__.py`, lines 5 to 12 at commit 7686433.
Ben Hammersley suggested looking at how Wolfram Alpha does this. Found their prompt in https://www.wolframalpha.com/.well-known/ai-plugin.json Here's their prompt:
So maybe I could tell it "Any time you select from a string column use …"
https://platform.openai.com/docs/plugins/getting-started/plugin-manifest
The fact that the limit is going to decrease over time is worrying: I could add code now which returns an error if the response would be longer than that... but it won't help if the limit decreases again in the future without me realizing.
For the moment I'm going to add code that detects if the response would be longer than that 100,000 character limit, and returns an error message (with the table schema bundled in as a useful reminder) if the limit is exceeded.
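That check could be sketched roughly like this. This is a minimal illustration, not the plugin's actual code: `MAX_RESPONSE_CHARS`, `build_response` and the `schema` argument are names invented here for the sake of the example.

```python
import json

# Assumed current ChatGPT plugin response limit, in characters.
MAX_RESPONSE_CHARS = 100_000

def build_response(rows, schema):
    """Serialize rows to JSON, or return an error payload if the
    serialized body would exceed the response length limit."""
    body = json.dumps(rows)
    if len(body) <= MAX_RESPONSE_CHARS:
        return body
    # Too long: return an error plus the table schema as a reminder,
    # so the model can retry with a narrower query.
    return json.dumps({
        "error": "Response would be {} characters; the limit is {}".format(
            len(body), MAX_RESPONSE_CHARS
        ),
        "schema": schema,
    })
```

Bundling the schema into the error payload means the model gets a hint about which columns exist, rather than just a bare failure.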
The thing that would really help here is if ChatGPT could indicate to me what that length limit was in the requests it makes. Since that limit may change over time, the ideal way to do this would be as a custom incoming HTTP request header - maybe like this:
I think returning the limit as both chars and tokens would be good here - with the tokens value being the "true" limit and the chars value being an estimate. That way developers who are willing to put in the extra work to use something like …
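The dual-limit idea might look something like this on the plugin side. To be clear, the header names below are invented purely for illustration - ChatGPT does not currently send any such headers:

```python
def parse_length_limits(headers, default_chars=100_000):
    """Read hypothetical limit headers from an incoming request,
    treating tokens as the 'true' limit and chars as an estimate
    for plugins that don't want to run a tokenizer.

    headers: dict mapping lowercased header names to string values.
    """
    limits = {"chars": default_chars, "tokens": None}
    chars = headers.get("chatgpt-response-limit-chars")
    tokens = headers.get("chatgpt-response-limit-tokens")
    if chars and chars.isdigit():
        limits["chars"] = int(chars)
    if tokens and tokens.isdigit():
        limits["tokens"] = int(tokens)
    return limits
```

A plugin could then enforce the chars value cheaply, while more sophisticated plugins could count tokens against the tokens value instead.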
This is a really bad bug. Can I improve this with some more prompt engineering?