
[Feature]: Support for LM Studio/Ollama Local LLMs #349

Open · 1 task done
Armandeus66 opened this issue Aug 26, 2024 · 1 comment
Labels: Feature (this topic is related to a new feature)

Please make sure this feature request hasn't been suggested before.

  • I searched previous Issues and didn't find any similar feature requests.

Feature description

Please allow the user the option of using a local LLM.

Solution

Being able to input a local URL or similar, pointing to a local LLM installation (LM Studio or Ollama) in lieu of an OpenAI API key, would be ideal, and much cheaper too.
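
Both LM Studio and Ollama expose OpenAI-compatible HTTP endpoints, so in principle the only change needed is a configurable base URL. A minimal TypeScript sketch (the ports below are the documented defaults; the model name is a placeholder for whatever is loaded locally):

```ts
// Sketch: point an OpenAI-style chat call at a local server instead of OpenAI.
// Ollama's OpenAI-compatible endpoint defaults to http://localhost:11434/v1,
// LM Studio's to http://localhost:1234/v1.
const BASE_URL = "http://localhost:11434/v1"; // or "http://localhost:1234/v1"

async function chat(prompt: string): Promise<string> {
  const res = await fetch(`${BASE_URL}/chat/completions`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama3.1", // placeholder: any locally loaded model
      messages: [{ role: "user", content: prompt }],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content; // same response shape as OpenAI
}
```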

Alternatives

There may be other local LLMs I am not aware of.

Additional Information

Thank you.

Armandeus66 added the Feature label on Aug 26, 2024
@TheRealJoci commented Aug 30, 2024

Hey,

I'm not a maintainer, but from what I can see, you're suggesting a common interface for AI chatbots other than ChatGPT?

I've looked into Ollama, and connecting to it is as straightforward as it gets: Ollama exposes a REST API for querying whatever models you prefer to run, the same way you connect to ChatGPT.
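
For illustration, Ollama's native chat endpoint is a single POST. A rough sketch, assuming the default port 11434 and a model you've already pulled (the model name is a placeholder):

```ts
// Sketch of a call to Ollama's native /api/chat endpoint.
// "llama3.1" is a placeholder for any model pulled via `ollama pull`.
async function askOllama(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama3.1",
      messages: [{ role: "user", content: prompt }],
      stream: false, // one JSON response instead of a token stream
    }),
  });
  const data = await res.json();
  return data.message.content; // assistant reply text
}
```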

From what I see, the modification on the RPG Manager side would be to make a generalized version of the ChatGPTService, which is the "hard" part, since the current ChatGPT integration is already very specific.
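
To make that concrete, the generalization could look something like this; `ChatGPTService` is the existing class mentioned above, while the interface and `LocalLLMService` are hypothetical names, purely for illustration:

```ts
// Hypothetical design sketch; only ChatGPTService exists in RPG Manager
// today, the interface and the local service are illustrative only.
type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

interface ChatCompletionService {
  complete(messages: ChatMessage[]): Promise<string>;
}

// Existing OpenAI-specific behavior, wrapped behind the common interface.
class ChatGPTService implements ChatCompletionService {
  async complete(messages: ChatMessage[]): Promise<string> {
    // current ChatGPT-specific logic would live here
    throw new Error("not implemented in this sketch");
  }
}

// Local backend: same interface, different base URL.
class LocalLLMService implements ChatCompletionService {
  constructor(private baseUrl: string) {}

  async complete(messages: ChatMessage[]): Promise<string> {
    const res = await fetch(`${this.baseUrl}/v1/chat/completions`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ model: "local-model", messages }),
    });
    return (await res.json()).choices[0].message.content;
  }
}
// The rest of the plugin would depend only on ChatCompletionService,
// keeping it agnostic about which backend is configured.
```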

I'm sure the maintainers would welcome a more detailed proposal for this feature from a design perspective; maybe provide a prototype on a separate branch?

Personally, I don't see many people having the means to run local LLMs to a satisfying degree, since it requires a capable local machine. A paid version that uses cloud resources you connect to over a REST API sounds more reasonable, but in my opinion it would ultimately fall short of the OpenAI options.

A way to improve the current solution would be to generalize how the OpenAI services are used, e.g. use newer models, fine-tune models, or build assistants with RAG capabilities that use tools to query vector databases (all hosted by OpenAI, all paid for by the people working on this feature). Mind you, it's not a small task, and more importantly it's not free (not too expensive, but it would certainly require funding).

@carlonicora what do you think of this proposition?
