Added support for locally-hosted ollama LLM instead of chatgpt #353

Draft · wants to merge 4 commits into master
16 changes: 16 additions & 0 deletions README.md
@@ -724,6 +724,22 @@ To activate ChatGPT in RPG Manager, you will need an OpenAI API key. Follow thes
2. **Generate API Key:** Once you have created your account and logged in, navigate to the 'API' section in your OpenAI account dashboard. Here, you will find the option to generate a new API key. Follow the on-screen instructions to create your key.
3. **Enter API Key in RPG Manager:** Open RPG Manager and navigate to the settings section. Input your newly generated OpenAI API key into the designated field.

### 6.5. Alternatives to ChatGPT

Ollama is an open-source alternative to ChatGPT. If you have a sufficiently powerful computer with a modern GPU, you can run a ChatGPT-like experience on your own machine, entirely privately and for no more than the cost of your own electricity.

To use Ollama, you must run an Ollama server either on the computer running Obsidian or on another machine you can reach over the network.

To set it up locally:

1. **Install the software:** Go to [ollama.com](https://ollama.com) and download Ollama for your platform.
2. **Download a model:** Run `ollama run llama3.1` on the command line. This downloads the model and starts an Ollama server on your computer.
3. **Set the RPG Manager config:** Set the `ollamaUrl` and `ollamaModel` variables in the plugin settings:
   - **REQUIRED** `ollamaUrl`: the default Ollama setup serves on `http://localhost:11434`.
   - **OPTIONAL** `ollamaModel`: the default model is `llama3.1`. You can explore other models depending on your memory and GPU capabilities.
4. **Exclusivity note:** Ollama is mutually exclusive with ChatGPT. If `ollamaUrl` is set, Ollama is used and ChatGPT will **not** be used, regardless of any API key.
5. **Shutting down:** When you are finished, shut down Ollama (on Windows, it sits in the system tray), as it holds a large amount of memory. Run `ollama run llama3.1` again the next time you want to use it.
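The exclusivity rule above can be sketched as a small selection helper. This is only an illustrative sketch of the behavior this PR implements in `ChatGptService`: the settings interface mirrors the diff, but `OPENAI_ENDPOINT`, `OPENAI_MODEL`, and `resolveBackend` are hypothetical names, not the plugin's actual API.

```typescript
// Sketch: pick the LLM backend the way this PR does — Ollama wins whenever
// ollamaUrl is set, otherwise fall back to OpenAI/ChatGPT.
interface LLMSettings {
  chatGptKey?: string;
  ollamaUrl?: string;
  ollamaModel: string;
}

// Placeholder values; the real endpoint/model live in ChatGptService.
const OPENAI_ENDPOINT = "https://api.openai.com/v1/chat/completions";
const OPENAI_MODEL = "gpt-4";

function resolveBackend(settings: LLMSettings): { endpoint: string; model: string } {
  if (settings.ollamaUrl) {
    // Ollama exposes an OpenAI-compatible chat-completions route under /v1.
    return {
      endpoint: settings.ollamaUrl + "/v1/chat/completions",
      model: settings.ollamaModel,
    };
  }
  return { endpoint: OPENAI_ENDPOINT, model: OPENAI_MODEL };
}
```

Because Ollama serves an OpenAI-compatible endpoint, only the URL and model name need to change; the request body stays the same.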

## 7. Contributing

RPG Manager is an open-source project, and we welcome contributions of all kinds - from code contributions, bug reports, to documentation and any other help you can provide.
2 changes: 1 addition & 1 deletion manifest.json
@@ -7,4 +7,4 @@
"author": "Carlo Nicora",
"authorUrl": "https://carlonicora.com",
"isDesktopOnly": false
}
}
@@ -51,7 +51,7 @@ function EditComponent({

let chatGpt: ChatGptNonPlayerCharacterModel | undefined = undefined;

if (api.settings.chatGptKey !== undefined && api.settings.chatGptKey !== "") {
if (api.settings.hasLLM) {
chatGpt = new ChatGptNonPlayerCharacterModel(
api,
element.type === ElementType.Campaign ? element : element.campaign,
@@ -218,7 +218,7 @@ export default function NonPlayerCharacterWizardComponent({
chatGpt.current.weaknesses = weaknesses.join(", ");
};

if (api.settings.chatGptKey !== undefined && api.settings.chatGptKey !== "" && !chatGpt.current) {
if (api.settings.hasLLM && !chatGpt.current) {
chatGpt.current = new ChatGptNonPlayerCharacterModel(api, element?.campaign ?? campaign, element?.name ?? name);
}

@@ -259,7 +259,7 @@ export default function NonPlayerCharacterWizardComponent({
const [step, setStep] = React.useState(1);

const updateStep = (newStep: number) => {
if (api.settings.chatGptKey !== undefined && api.settings.chatGptKey !== "") {
if (api.settings.hasLLM) {
setCurrentChatGPT();
}

@@ -334,7 +334,7 @@ export default function NonPlayerCharacterWizardComponent({
<button className="rpgm-danger pl-3 pr-3 mr-6" onClick={() => close()}>
{t("buttons.cancel")}
</button>
{api.settings.chatGptKey !== undefined && api.settings.chatGptKey !== "" && chatGpt.current && step > 2 && (
{api.settings.hasLLM && chatGpt.current && step > 2 && (
<button className="rpgm-secondary pl-3 pr-3 mr-6" onClick={() => createAutomatically()}>
{t("wizards.npc.create")}
</button>
21 changes: 15 additions & 6 deletions src/services/ChatGptService/ChatGptService.ts
@@ -51,10 +51,12 @@ Each option should be qualitative, not a short sentence and will allow the story
// new ChatGptMessage("system", 'Format your response as: {"responses":[{"response":"YOUR RESPONSE"}]}')
// );
try {
const endpoint = this._api.settings.ollamaUrl ? this._api.settings.ollamaUrl + '/v1/chat/completions' : this._endpoint;
const model = this._api.settings.ollamaUrl ? this._api.settings.ollamaModel : this._model;
const response = await axios.post(
this._endpoint,
endpoint,
{
model: this._model,
model: model,
messages: messages,
},
{
@@ -74,11 +76,18 @@
}

private _processLatestMessage(latestMessage: string): ChatGptResponse[] {
const splitResponses = latestMessage.trim().split("\n");

const results: ChatGptResponse[] = splitResponses
.filter((resp) => resp.trim() !== "")
.map((resp) => ({ response: resp.replace(/^\d+\.\s*/, "").trim() }));
let results: ChatGptResponse[] = [];
if (this._api.settings.ollamaUrl) {
	// Formatting rules aren't Ollama's strong suit, so return the full message and let the user edit
	results.push({ response: latestMessage });
} else {
	const splitResponses = latestMessage.trim().split("\n");
	results = splitResponses
		.filter((resp) => resp.trim() !== "")
		.map((resp) => ({ response: resp.replace(/^\d+\.\s*/, "").trim() }));
}

return results;
}
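The ChatGPT branch of `_processLatestMessage` splits the reply on newlines, drops blanks, and strips numbered-list prefixes. A minimal standalone sketch of that parsing (the regex is taken from the diff; `parseNumberedResponses` and the simplified interface are illustrative names, not the plugin's API):

```typescript
// Sketch of the numbered-list parsing used for ChatGPT replies:
// "1. First idea\n2. Second idea" -> [{response: "First idea"}, {response: "Second idea"}]
interface ChatGptResponse {
  response: string;
}

function parseNumberedResponses(latestMessage: string): ChatGptResponse[] {
  return latestMessage
    .trim()
    .split("\n")
    .filter((resp) => resp.trim() !== "")
    .map((resp) => ({ response: resp.replace(/^\d+\.\s*/, "").trim() }));
}
```

For Ollama, the PR deliberately skips this parsing and returns the whole message, since smaller local models follow formatting instructions less reliably.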
3 changes: 3 additions & 0 deletions src/services/UpdaterService/components/UpdaterComponent.tsx
@@ -32,6 +32,9 @@ function Upgrading(): React.ReactElement {
updater.updateVault().then(() => {
const settings: RpgManagerSettingsInterface = {
chatGptKey: undefined,
ollamaUrl: undefined,
ollamaModel: "llama3.1",
hasLLM: false,
templatesFolder: (api.settings as any).templateFolder,
assetsFolder: (api.settings as any).imagesFolder,
automaticMove: false,
53 changes: 48 additions & 5 deletions src/settings/RpgManagerSettings.ts
@@ -4,6 +4,9 @@ import { RpgManagerInterface } from "src/RpgManagerInterface";

export interface RpgManagerSettingsInterface {
chatGptKey: string | undefined;
ollamaUrl: string | undefined;
ollamaModel: string;
hasLLM: boolean;
templatesFolder: string | undefined;
assetsFolder: string | undefined;
automaticMove: boolean;
@@ -17,6 +20,9 @@ export type PartialSettings = Partial<RpgManagerSettingsInterface>;

export const rpgManagerDefaultSettings: RpgManagerSettingsInterface = {
chatGptKey: undefined,
ollamaUrl: undefined,
ollamaModel: "llama3.1",
hasLLM: false,
templatesFolder: undefined,
assetsFolder: undefined,
automaticMove: false,
@@ -128,9 +134,9 @@ export class RpgManagerSettings extends PluginSettingTab {
});
});

containerEl.createEl("h3", { text: "ChatGPT", cls: "mt-3" });
const ChatGPT = containerEl.createEl("p");
ChatGPT.appendText("Set up all the add-ons for the plugin. ");
containerEl.createEl("h3", { text: "AI Plugins", cls: "mt-3" });
const AIPlugins = containerEl.createEl("p");
AIPlugins.appendText("Set up all the add-ons for the plugin. ");
const ChatGPTWarning = containerEl.createEl("p");
ChatGPTWarning.appendChild(
createEl("span", {
@@ -140,14 +146,51 @@
);

new Setting(containerEl)
.setName("OpenAI Key")
.setName("OpenAI/ChatGPT Key")
.setDesc("Insert your OpenAI key here.")
.addText((text) =>
text
.setPlaceholder("")
.setValue(this._plugin.settings.chatGptKey)
.onChange(async (value: string) => {
await this.saveSettings({ chatGptKey: value });
await this.saveSettings({
	chatGptKey: value,
	// Don't leave hasLLM true when the key is cleared and no Ollama URL is configured.
	hasLLM: value !== "" || !!this._plugin.settings.ollamaUrl,
});
})
);

const OllamaWarning = containerEl.createEl("p");
OllamaWarning.appendChild(
createEl("span", {
text: "Please note: Ollama is an open-source LLM you can run on your own machine(s). See ollama.com for more information",
cls: "text-[--text-warning]",
})
);

new Setting(containerEl)
.setName("Ollama URL")
.setDesc("Insert the root URL of your local Ollama server (e.g. http://localhost:11434).")
.addText((text) =>
text
.setPlaceholder("http://localhost:11434")
.setValue(this._plugin.settings.ollamaUrl)
.onChange(async (value: string) => {
await this.saveSettings({
	ollamaUrl: value,
	// Don't leave hasLLM true when the URL is cleared and no ChatGPT key is configured.
	hasLLM: value !== "" || !!this._plugin.settings.chatGptKey,
});
})
);
new Setting(containerEl)
.setName("Ollama Model")
.setDesc("Enter the Ollama model name (e.g. llama3.1).")
.addText((text) =>
text
.setPlaceholder("")
.setValue(this._plugin.settings.ollamaModel)
.onChange(async (value: string) => {
await this.saveSettings({ ollamaModel: value });
})
);
}