Pass API key to Ollama requests #1167
Replies: 2 comments
-
The best scenario is to have Ollama enable API key protection. Obsidian Copilot is a client-side application and it covers all sorts of chat clients for different providers. That said, when you set the API key with a custom model, is it really ignored?
-
Just stumbled onto this. I'm doing something similar, using Open WebUI's reverse-proxied Ollama API endpoint. A 403 is thrown if the user isn't authenticated with Open WebUI; authentication can happen either through a standard user login or by passing the API key (following the OpenAI API key standard linked above). I can enter my Open WebUI API key in Add Custom Chat Model, but I never see it passed in the POST over to Ollama. I do see it passed if I use something like OpenAI Format as my provider. It would be amazingly awesome if there were a way to pass the key. Thanks!
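For reference, a minimal sketch of the kind of authenticated request being asked for here. The URL path (Open WebUI's Ollama proxy route), model name, and key are placeholder assumptions, not Obsidian Copilot's actual implementation:

```typescript
// Sketch only: an OpenAI-style bearer-authenticated call to a
// reverse-proxied Ollama chat endpoint. URL and key are placeholders.
const OLLAMA_URL = "https://webui.example.com/ollama/api/chat"; // hypothetical proxy route
const API_KEY = "sk-..."; // Open WebUI API key (placeholder)

async function chat(prompt: string): Promise<string> {
  const res = await fetch(OLLAMA_URL, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      // Without this header, the proxy responds 403 as described above.
      Authorization: `Bearer ${API_KEY}`,
    },
    body: JSON.stringify({
      model: "llama3",
      messages: [{ role: "user", content: prompt }],
      stream: false,
    }),
  });
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  const data = await res.json();
  // Ollama's non-streaming chat response carries the reply in message.content.
  return data.message?.content ?? "";
}
```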
-
Is your feature request related to a problem? Please describe.
No
Describe the solution you'd like
I have a home lab where I run an Ollama backend as a Docker container. I expose it to the internet with Nginx Proxy Manager, making it accessible via https://ollama.myaddress.com (fictional address). In Obsidian Copilot, I can add my Ollama models as custom models by setting this address as the base URL. This works perfectly, and I manage to use my self-hosted AIs over the internet with Obsidian Copilot.

However, there is a major security risk: anyone with this URL can use my self-hosted Ollama. Ideally, Ollama would only be accessible over the internet with an API key. As far as I know, Ollama does not currently support setting API keys to limit access, but I can configure Nginx Proxy Manager to allow access only if the HTTP request contains an `Authorization` header with a pre-set API key. This works great when I manually send requests to my server, but I can't make it work with Obsidian Copilot: even if I add the API key in the `Add Custom Model` configuration, it seems the key is simply ignored when the provider is set to Ollama. I think this could be solved by having Ollama requests follow the OpenAI API key authentication standard, i.e. setting an HTTP header as:
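```
Authorization: Bearer <API_KEY>
```

On the proxy side, a minimal sketch of the header check described above, e.g. as a custom config snippet in Nginx Proxy Manager (the upstream address and key value are placeholders, and the exact setup will depend on how the proxy host is configured):

```nginx
location / {
    # Reject any request that doesn't carry the pre-set key.
    if ($http_authorization != "Bearer my-pre-set-key") {
        return 401;
    }
    proxy_pass http://ollama:11434;
}
```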
Describe alternatives you've considered
I could also limit access with an IP whitelist for my devices in Nginx Proxy Manager, but this is not optimal as device IPs change.
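For comparison, a sketch of what that whitelist could look like in plain Nginx config (addresses are placeholders); this is what makes it brittle, since the allow list must be edited whenever a device's IP changes:

```nginx
location / {
    # Only these client IPs may reach Ollama; everyone else is rejected.
    allow 203.0.113.7;
    deny all;
    proxy_pass http://ollama:11434;
}
```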
Additional context
None