
Poor results when practicing RAG with LlamaIndex + a locally deployed InternLM. How can this be optimized? #2858

Open
jamesbondzhou opened this issue Dec 27, 2024 · 0 comments

Comments


jamesbondzhou commented Dec 27, 2024

I followed the sample code in the tutorial "LlamaIndex+本地部署InternLM实践" (LlamaIndex + locally deployed InternLM in practice), using a single PDF file as the knowledge base. I also tried swapping the model for qwen7b and chatglm6b, and switched the embedding model to bge-large-zh, but the question answering over the knowledge base still gives rather poor results. I would like to know where the problem lies and how to optimize it.
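
For reference, below is a minimal sketch of the kind of pipeline described above, assuming the HuggingFace-based LlamaIndex integrations (llama-index >= 0.10 style imports). The model repo ids, the PDF path, and the chunk_size / chunk_overlap / similarity_top_k values are illustrative placeholders to experiment with, not values taken from the tutorial:

```python
# Minimal RAG sketch with LlamaIndex; paths and tuning values are placeholders.
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader, Settings
from llama_index.core.node_parser import SentenceSplitter
from llama_index.embeddings.huggingface import HuggingFaceEmbedding
from llama_index.llms.huggingface import HuggingFaceLLM

# Chinese-oriented embedding model, as mentioned in the issue (repo id may differ).
Settings.embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-large-zh-v1.5")

# Smaller chunks with some overlap often retrieve more precise passages from a PDF.
Settings.node_parser = SentenceSplitter(chunk_size=512, chunk_overlap=64)

# Locally deployed InternLM chat model (placeholder path).
Settings.llm = HuggingFaceLLM(
    model_name="internlm/internlm2-chat-7b",
    tokenizer_name="internlm/internlm2-chat-7b",
    model_kwargs={"trust_remote_code": True},
    tokenizer_kwargs={"trust_remote_code": True},
)

# Load the single-PDF knowledge base and build a vector index over it.
documents = SimpleDirectoryReader(input_files=["./knowledge_base.pdf"]).load_data()
index = VectorStoreIndex.from_documents(documents)

# Retrieve more candidate chunks than the default so the LLM sees richer context.
query_engine = index.as_query_engine(similarity_top_k=5)
print(query_engine.query("Your question about the PDF content here"))
```

The usual knobs to tune in a setup like this are the chunk size and overlap, the number of retrieved chunks (similarity_top_k), whether the embedding model matches the document language, and whether the chat model's prompt template is applied correctly; poor PDF text extraction is also a common cause of weak answers.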
