Replies: 4 comments 4 replies
-
From a meetup suggestion: Give Perplexity a whirl in some of these tests.
-
Some of these tests (particularly the dev ones) should probably also be run with GitHub Copilot (or similar IDE integrations).
-
I gather from the presentation (good job!) that LLM performance when drafting SQL queries is greatly improved by a textual explanation of the reason and purpose behind the tables and columns of interest, and of how and why they're linked. Should there be a standard for creating and storing those narratives in the Postgres comments for each table, and perhaps even at the column level, and then have Superset read and use those comments? This could lead to more consistent, better results and lessen the demand on users to create such descriptions or know how to use them.
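To make the idea concrete: Postgres already supports exactly this kind of narrative via its standard `COMMENT ON` syntax, and the comments land in the `pg_description` catalog where any tool could read them back. Below is a minimal, hypothetical Python sketch (not an existing Superset feature) that just builds the relevant SQL strings; the table/column names are made up for illustration.

```python
def comment_on_table(table: str, narrative: str) -> str:
    """Build a COMMENT ON TABLE statement storing a narrative for a table."""
    escaped = narrative.replace("'", "''")  # escape single quotes for a SQL string literal
    return f"COMMENT ON TABLE {table} IS '{escaped}';"

def comment_on_column(table: str, column: str, narrative: str) -> str:
    """Build a COMMENT ON COLUMN statement for column-level context."""
    escaped = narrative.replace("'", "''")
    return f"COMMENT ON COLUMN {table}.{column} IS '{escaped}';"

# A query a tool could run to collect stored narratives (from Postgres's
# pg_description catalog) and feed them into an LLM prompt as schema context.
READ_COMMENTS_SQL = """
SELECT c.relname AS table_name,
       a.attname AS column_name,
       d.description
FROM pg_description d
JOIN pg_class c ON c.oid = d.objoid
LEFT JOIN pg_attribute a
       ON a.attrelid = d.objoid AND a.attnum = d.objsubid
WHERE c.relkind = 'r';
"""

print(comment_on_table(
    "orders",
    "One row per customer order; links to customers via customer_id."))
```

A tool ingesting these comments would simply concatenate the `description` rows into the prompt alongside the DDL, giving the model the "why" behind each table and join.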
-
I mentioned in a WeChat group that I hope to use an LLM to understand users' naturally expressed intentions and quickly render charts with Superset.
-
As discussed in a recent blog post, and in the corresponding meetup, there are NUMEROUS ways the new wave of Large Language Models (LLMs, such as GPT, Bard, and others) can open up new opportunities for Superset users. These may take the shape of new workflows, integrations, or features. We'd love to hear your feedback on the things we've tried so far, the ideas you think are worth exploring more deeply, and anything we might not yet have thought of. Hopefully we can turn this thread into future experiments/content, and product features if all goes well! We welcome your input, and we encourage you to upvote the ideas in this thread that you want to support!
(Thread will remain locked until the meetup ends)