
Add an anti-hallucination node in agentic workflow #282

Open
1 task done
RamiAwar opened this issue Jul 31, 2024 · 1 comment
Comments

@RamiAwar
Owner

Privileged issue

  • I'm @RamiAwar or he asked me directly to create an issue here.

Issue Content

Sometimes, the LLM hallucinates data in the response, even when data security is enabled.

This is problematic because:

  1. It confuses users and makes them think security is compromised when it's not.
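
One lightweight shape for such an anti-hallucination node is a post-generation check that compares the values the LLM emitted against the data the agent actually retrieved, and flags anything unsupported. The sketch below is only an illustration under assumed state keys (`llm_answer`, `query_results`) and is not tied to this project's actual workflow:

```python
import re

def anti_hallucination_node(state: dict) -> dict:
    """Flag numeric values in the LLM answer that never appear in the
    retrieved query results (all key names here are assumptions)."""
    answer = state["llm_answer"]      # assumed key: the model's final text
    results = state["query_results"]  # assumed key: list of result rows

    number_pattern = r"\d+(?:\.\d+)?"

    # Numbers the model put in its answer.
    answer_numbers = set(re.findall(number_pattern, answer))

    # Numbers actually present in the data the agent retrieved.
    grounded_numbers = {
        token
        for row in results
        for cell in row
        for token in re.findall(number_pattern, str(cell))
    }

    unsupported = answer_numbers - grounded_numbers
    if unsupported:
        # The workflow could route back for a regeneration or surface a
        # warning to the user; here we just record it on the state.
        state["warnings"] = [
            f"Values not found in query results: {sorted(unsupported)}"
        ]
    return state


# Example: the total of 1200 is not supported by the retrieved rows.
state = {
    "llm_answer": "The total revenue was 1200 across 3 orders.",
    "query_results": [("order_1", 400), ("order_2", 350), ("order_3", 300)],
}
print(anti_hallucination_node(state).get("warnings"))
```

A node like this could either attach a warning to the response or send the answer back for regeneration; the right handling depends on the rest of the workflow.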
RamiAwar self-assigned this Jul 31, 2024
@shyamdurai

Hey! Did you solve this at all?

Here is what we built - it will be open-sourced in early Jan 2025:

https://provably.ai/blog/introducing-proving-a-technique-to-rapidly-verify-and-trust-ai-answers

You are welcome to try it - I'd love your feedback. Just fill out the form or email me at shyam at provably dot ai.
