feat: Integrate Livepeer LLM provider (elizaOS#2154)
* add livepeer on index.ts as llm provider

* updated livepeer models

* add livepeer as llm provider

* add retry logic on livepeer img gen

* add handleLivepeer

* update test

* add livepeer model keys to .env.example

* Merge pull request #2 from Titan-Node/livepeer-doc-updates

Updated docs for Livepeer LLM integration

* add endpoint for livepeer in models.ts

* edit livepeer model config in models.ts

* Add Livepeer to image gen plugin environments

Fixes this error (see the sketch after this change list):
```
Error handling message: Error: Image generation configuration validation failed:
: At least one of ANTHROPIC_API_KEY, NINETEEN_AI_API_KEY, TOGETHER_API_KEY, HEURIST_API_KEY, FAL_API_KEY, OPENAI_API_KEY or VENICE_API_KEY is required
    at validateImageGenConfig (file:///root/eliza-test/eliza-livepeer-integration/packages/plugin-image-generation/dist/index.js:38:19)
```

* add comments on livepeer model sizes

* remove retry logic from livepeer generate text and img

* Fixed .env naming convention and a mismatch bug within the code

* add bearer on livepeer calls

* change parsing to accommodate the new Livepeer update

* add nineteen api key to the message
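The image-generation fix listed above works by accepting a configured Livepeer gateway alongside the existing API keys. A minimal sketch of that kind of check, with the function name taken from the stack trace and the body purely illustrative (the actual plugin code may differ):

```typescript
// Illustrative sketch only — not the plugin's actual implementation.
// Accept LIVEPEER_GATEWAY_URL alongside the API keys named in the quoted error.
function validateImageGenConfig(env: Record<string, string | undefined>): void {
    const acceptedKeys = [
        "ANTHROPIC_API_KEY",
        "NINETEEN_AI_API_KEY",
        "TOGETHER_API_KEY",
        "HEURIST_API_KEY",
        "FAL_API_KEY",
        "OPENAI_API_KEY",
        "VENICE_API_KEY",
        "LIVEPEER_GATEWAY_URL", // the environment this commit adds
    ];
    if (!acceptedKeys.some((key) => Boolean(env[key]))) {
        throw new Error(
            "Image generation configuration validation failed: " +
                `at least one of ${acceptedKeys.join(", ")} is required`,
        );
    }
}
```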

---------

Co-authored-by: Titan Node <[email protected]>
UD1sto and Titan-Node authored Jan 17, 2025
1 parent f70c1cd commit d55a3c7
Showing 9 changed files with 218 additions and 30 deletions.
8 changes: 6 additions & 2 deletions .env.example
@@ -141,8 +141,12 @@ MEDIUM_AKASH_CHAT_API_MODEL= # Default: Meta-Llama-3-3-70B-Instruct
LARGE_AKASH_CHAT_API_MODEL= # Default: Meta-Llama-3-1-405B-Instruct-FP8

# Livepeer configuration
-LIVEPEER_GATEWAY_URL= # Free inference gateways and docs: https://livepeer-eliza.com/
-LIVEPEER_IMAGE_MODEL= # Default: ByteDance/SDXL-Lightning
+
+LIVEPEER_GATEWAY_URL=https://dream-gateway.livepeer.cloud # Free inference gateways and docs: https://livepeer-eliza.com/
+IMAGE_LIVEPEER_MODEL= # Default: ByteDance/SDXL-Lightning
+SMALL_LIVEPEER_MODEL= # Default: meta-llama/Meta-Llama-3.1-8B-Instruct
+MEDIUM_LIVEPEER_MODEL= # Default: meta-llama/Meta-Llama-3.1-8B-Instruct
+LARGE_LIVEPEER_MODEL= # Default: meta-llama/Meta-Llama-3.1-8B-Instruct

# Speech Synthesis
ELEVENLABS_XI_API_KEY= # API key from elevenlabs
5 changes: 5 additions & 0 deletions agent/src/index.ts
@@ -512,6 +512,11 @@ export function getTokenForProvider(
                character.settings?.secrets?.DEEPSEEK_API_KEY ||
                settings.DEEPSEEK_API_KEY
            );
+        case ModelProviderName.LIVEPEER:
+            return (
+                character.settings?.secrets?.LIVEPEER_GATEWAY_URL ||
+                settings.LIVEPEER_GATEWAY_URL
+            );
        default:
            const errorMessage = `Failed to get token - unsupported model provider: ${provider}`;
            elizaLogger.error(errorMessage);
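Note that for Livepeer the "token" returned here is the gateway URL itself rather than a secret; `handleLivepeer` in packages/core (see generation.ts below) reuses that same value as the base URL of an OpenAI-compatible client. A small sketch of the resulting flow, assuming the `@ai-sdk/openai` import already used by the core package:

```typescript
import { createOpenAI } from "@ai-sdk/openai";

// Illustrative: getTokenForProvider hands back the gateway URL for Livepeer,
// and the object-generation path then treats that value as both the
// credential slot and the endpoint.
const gatewayUrl =
    process.env.LIVEPEER_GATEWAY_URL ?? "https://dream-gateway.livepeer.cloud";

const livepeerClient = createOpenAI({
    apiKey: gatewayUrl, // the "token" slot carries the URL in this design
    baseURL: gatewayUrl, // the actual OpenAI-compatible endpoint
});
```

Keeping the URL in the token slot appears to avoid adding a second per-provider field, at the cost of a slightly misleading variable name.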
58 changes: 40 additions & 18 deletions docs/docs/advanced/fine-tuning.md
@@ -26,6 +26,7 @@ enum ModelProviderName {
    REDPILL,
    OPENROUTER,
    HEURIST,
+    LIVEPEER,
}
```

@@ -272,24 +273,45 @@ const llamaLocalSettings = {

```typescript
const heuristSettings = {
    settings: {
        stop: [],
        maxInputTokens: 32768,
        maxOutputTokens: 8192,
        repetition_penalty: 0.0,
        temperature: 0.7,
    },
    imageSettings: {
        steps: 20,
    },
    endpoint: "https://llm-gateway.heurist.xyz",
    model: {
        [ModelClass.SMALL]: "hermes-3-llama3.1-8b",
        [ModelClass.MEDIUM]: "mistralai/mixtral-8x7b-instruct",
        [ModelClass.LARGE]: "nvidia/llama-3.1-nemotron-70b-instruct",
        [ModelClass.EMBEDDING]: "", // Add later
        [ModelClass.IMAGE]: "FLUX.1-dev",
    },
};
```

+### Livepeer Provider
+
+```typescript
+const livepeerSettings = {
+    settings: {
+        stop: [],
+        maxInputTokens: 128000,
+        maxOutputTokens: 8192,
+        repetition_penalty: 0.4,
+        temperature: 0.7,
+    },
+    endpoint: "https://dream-gateway.livepeer.cloud",
+    model: {
+        [ModelClass.SMALL]: "meta-llama/Meta-Llama-3.1-8B-Instruct",
+        [ModelClass.MEDIUM]: "meta-llama/Meta-Llama-3.1-8B-Instruct",
+        [ModelClass.LARGE]: "meta-llama/Llama-3.3-70B-Instruct",
+        [ModelClass.IMAGE]: "ByteDance/SDXL-Lightning",
+    },
+};
+```
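To make the lookup concrete, a self-contained sketch of resolving a model name by class from a settings object shaped like `livepeerSettings` above (the enum here is a local stand-in for the core `ModelClass`):

```typescript
// Local stand-in for the core ModelClass enum, for illustration only.
enum ModelClass {
    SMALL = "small",
    MEDIUM = "medium",
    LARGE = "large",
    IMAGE = "image",
}

const livepeerSettings = {
    endpoint: "https://dream-gateway.livepeer.cloud",
    model: {
        [ModelClass.SMALL]: "meta-llama/Meta-Llama-3.1-8B-Instruct",
        [ModelClass.MEDIUM]: "meta-llama/Meta-Llama-3.1-8B-Instruct",
        [ModelClass.LARGE]: "meta-llama/Llama-3.3-70B-Instruct",
        [ModelClass.IMAGE]: "ByteDance/SDXL-Lightning",
    },
};

// Resolve the model for a class, falling back to SMALL when unmapped.
function resolveModel(cls: ModelClass): string {
    return livepeerSettings.model[cls] ?? livepeerSettings.model[ModelClass.SMALL];
}

console.log(resolveModel(ModelClass.LARGE)); // "meta-llama/Llama-3.3-70B-Instruct"
```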
2 changes: 1 addition & 1 deletion docs/docs/core/characterfile.md
@@ -92,7 +92,7 @@ The character's display name for identification and in conversations.

#### `modelProvider` (required)

-Specifies the AI model provider. Supported options from [ModelProviderName](/api/enumerations/modelprovidername) include `anthropic`, `llama_local`, `openai`, and others.
+Specifies the AI model provider. Supported options from [ModelProviderName](/api/enumerations/modelprovidername) include `anthropic`, `llama_local`, `openai`, `livepeer`, and others.

#### `clients` (required)
7 changes: 7 additions & 0 deletions docs/docs/quickstart.md
@@ -92,10 +92,17 @@ Eliza supports multiple AI models:

- **Heurist**: Set `modelProvider: "heurist"` in your character file. Most models are uncensored.
  - LLM: Select available LLMs [here](https://docs.heurist.ai/dev-guide/supported-models#large-language-models-llms) and configure `SMALL_HEURIST_MODEL`, `MEDIUM_HEURIST_MODEL`, `LARGE_HEURIST_MODEL`
  - Image Generation: Select available Stable Diffusion or Flux models [here](https://docs.heurist.ai/dev-guide/supported-models#image-generation-models) and configure `HEURIST_IMAGE_MODEL` (default is FLUX.1-dev)
<<<<<<< HEAD
- **Llama**: Set `OLLAMA_MODEL` to your chosen model
- **Grok**: Set `GROK_API_KEY` to your Grok API key and set `modelProvider: "grok"` in your character file
- **OpenAI**: Set `OPENAI_API_KEY` to your OpenAI API key and set `modelProvider: "openai"` in your character file
- **Livepeer**: Set `LIVEPEER_IMAGE_MODEL` to your chosen Livepeer image model, available models [here](https://livepeer-eliza.com/)
=======
- **Llama**: Set `XAI_MODEL=meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo`
- **Grok**: Set `XAI_MODEL=grok-beta`
- **OpenAI**: Set `XAI_MODEL=gpt-4o-mini` or `gpt-4o`
- **Livepeer**: Set `SMALL_LIVEPEER_MODEL`, `MEDIUM_LIVEPEER_MODEL`, `LARGE_LIVEPEER_MODEL` and `IMAGE_LIVEPEER_MODEL` to your desired models listed [here](https://livepeer-eliza.com/).
>>>>>>> 95f56e6b4 (Merge pull request #2 from Titan-Node/livepeer-doc-updates)

You set which model to use inside the character JSON file.
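Putting the quickstart options together, a minimal character-file fragment that selects Livepeer — written here as a TypeScript object for annotation; the real character file is plain JSON, and the secrets block is just one way to supply the gateway URL:

```typescript
// Illustrative character config selecting the Livepeer provider.
const character = {
    name: "Eliza",
    modelProvider: "livepeer", // must match the ModelProviderName entry
    clients: [],
    settings: {
        secrets: {
            // Alternative to setting LIVEPEER_GATEWAY_URL in .env
            LIVEPEER_GATEWAY_URL: "https://dream-gateway.livepeer.cloud",
        },
    },
};

export default character;
```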
35 changes: 35 additions & 0 deletions packages/core/__tests__/models.test.ts
@@ -18,6 +18,8 @@ vi.mock("../settings", () => {
            LLAMACLOUD_MODEL_LARGE: "mock-llama-large",
            TOGETHER_MODEL_SMALL: "mock-together-small",
            TOGETHER_MODEL_LARGE: "mock-together-large",
+            LIVEPEER_GATEWAY_URL: "http://gateway.test-gateway",
+            IMAGE_LIVEPEER_MODEL: "ByteDance/SDXL-Lightning",
        },
        loadEnv: vi.fn(),
    };

@@ -125,6 +127,26 @@ describe("Model Provider Configuration", () => {
            );
        });
    });
+    describe("Livepeer Provider", () => {
+        test("should have correct endpoint configuration", () => {
+            expect(models[ModelProviderName.LIVEPEER].endpoint).toBe("http://gateway.test-gateway");
+        });
+
+        test("should have correct model mappings", () => {
+            const livepeerModels = models[ModelProviderName.LIVEPEER].model;
+            expect(livepeerModels[ModelClass.SMALL]).toBe("meta-llama/Meta-Llama-3.1-8B-Instruct");
+            expect(livepeerModels[ModelClass.MEDIUM]).toBe("meta-llama/Meta-Llama-3.1-8B-Instruct");
+            expect(livepeerModels[ModelClass.LARGE]).toBe("meta-llama/Meta-Llama-3.1-8B-Instruct");
+            expect(livepeerModels[ModelClass.IMAGE]).toBe("ByteDance/SDXL-Lightning");
+        });
+
+        test("should have correct settings configuration", () => {
+            const settings = models[ModelProviderName.LIVEPEER].settings;
+            expect(settings.maxInputTokens).toBe(128000);
+            expect(settings.maxOutputTokens).toBe(8192);
+            expect(settings.temperature).toBe(0);
+        });
+    });
});

describe("Model Retrieval Functions", () => {

@@ -224,3 +246,16 @@ describe("Environment Variable Integration", () => {
        );
    });
});
+
+describe("Generation with Livepeer", () => {
+    test("should have correct image generation settings", () => {
+        const livepeerConfig = models[ModelProviderName.LIVEPEER];
+        expect(livepeerConfig.model[ModelClass.IMAGE]).toBe("ByteDance/SDXL-Lightning");
+        expect(livepeerConfig.settings.temperature).toBe(0);
+    });
+
+    test("should use default image model", () => {
+        delete process.env.IMAGE_LIVEPEER_MODEL;
+        expect(models[ModelProviderName.LIVEPEER].model[ModelClass.IMAGE]).toBe("ByteDance/SDXL-Lightning");
+    });
+});
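One caveat on the "should use default image model" test above: `models` is evaluated once at import time against the mocked settings, so deleting `process.env.IMAGE_LIVEPEER_MODEL` afterwards does not exercise the fallback. A sketch of a stricter variant, assuming vitest's `vi.resetModules` and repo-relative import paths:

```typescript
import { describe, expect, test, vi } from "vitest";

describe("Livepeer image model default", () => {
    test("falls back to ByteDance/SDXL-Lightning when unset", async () => {
        delete process.env.IMAGE_LIVEPEER_MODEL;
        vi.resetModules(); // force models.ts to be re-evaluated on next import
        // Adjust paths/exports to the repo layout; IMAGE entries are objects
        // with a `name` field in the models.ts shown below.
        const { models } = await import("../src/models");
        const { ModelProviderName, ModelClass } = await import("../src/types");
        expect(
            models[ModelProviderName.LIVEPEER].model[ModelClass.IMAGE].name,
        ).toBe("ByteDance/SDXL-Lightning");
    });
});
```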
87 changes: 85 additions & 2 deletions packages/core/src/generation.ts
@@ -1188,6 +1188,55 @@ export async function generateText({
                break;
            }

+            case ModelProviderName.LIVEPEER: {
+                elizaLogger.debug("Initializing Livepeer model.");
+
+                if (!endpoint) {
+                    throw new Error("Livepeer Gateway URL is not defined");
+                }
+
+                const requestBody = {
+                    model: model,
+                    messages: [
+                        {
+                            role: "system",
+                            content: runtime.character.system ?? settings.SYSTEM_PROMPT ?? "You are a helpful assistant"
+                        },
+                        {
+                            role: "user",
+                            content: context
+                        }
+                    ],
+                    max_tokens: max_response_length,
+                    stream: false
+                };
+
+                const fetchResponse = await runtime.fetch(endpoint + '/llm', {
+                    method: "POST",
+                    headers: {
+                        "accept": "text/event-stream",
+                        "Content-Type": "application/json",
+                        "Authorization": "Bearer eliza-app-llm"
+                    },
+                    body: JSON.stringify(requestBody)
+                });
+
+                if (!fetchResponse.ok) {
+                    const errorText = await fetchResponse.text();
+                    throw new Error(`Livepeer request failed (${fetchResponse.status}): ${errorText}`);
+                }
+
+                const json = await fetchResponse.json();
+
+                if (!json?.choices?.[0]?.message?.content) {
+                    throw new Error("Invalid response format from Livepeer");
+                }
+
+                response = json.choices[0].message.content.replace(/<\|start_header_id\|>assistant<\|end_header_id\|>\n\n/, '');
+                elizaLogger.debug("Successfully received response from Livepeer model");
+                break;
+            }
+
            default: {
                const errorMessage = `Unsupported provider: ${provider}`;
                elizaLogger.error(errorMessage);

@@ -1721,7 +1770,6 @@ export const generateImage = async (
            }
        },
    });
-
    // Convert the returned image URLs to base64 to match existing functionality
    const base64Promises = result.data.images.map(async (image) => {
        const response = await fetch(image.url);

@@ -1822,15 +1870,18 @@
        if (!baseUrl.protocol.startsWith("http")) {
            throw new Error("Invalid Livepeer Gateway URL protocol");
        }
+
        const response = await fetch(
            `${baseUrl.toString()}text-to-image`,
            {
                method: "POST",
                headers: {
                    "Content-Type": "application/json",
+                    Authorization: "Bearer eliza-app-img",
                },
                body: JSON.stringify({
-                    model_id: model,
+                    model_id:
+                        data.modelId || "ByteDance/SDXL-Lightning",
                    prompt: data.prompt,
                    width: data.width || 1024,
                    height: data.height || 1024,

@@ -2108,6 +2159,8 @@ export async function handleProvider(
            return await handleOllama(options);
        case ModelProviderName.DEEPSEEK:
            return await handleDeepSeek(options);
+        case ModelProviderName.LIVEPEER:
+            return await handleLivepeer(options);
        default: {
            const errorMessage = `Unsupported provider: ${provider}`;
            elizaLogger.error(errorMessage);

@@ -2395,6 +2448,36 @@ async function handleDeepSeek({
    });
}

+async function handleLivepeer({
+    model,
+    apiKey,
+    schema,
+    schemaName,
+    schemaDescription,
+    mode,
+    modelOptions,
+}: ProviderOptions): Promise<GenerateObjectResult<unknown>> {
+    console.log("Livepeer provider api key:", apiKey);
+    if (!apiKey) {
+        throw new Error("Livepeer provider requires LIVEPEER_GATEWAY_URL to be configured");
+    }
+
+    const livepeerClient = createOpenAI({
+        apiKey,
+        baseURL: apiKey // Use the apiKey as the baseURL since it contains the gateway URL
+    });
+
+    return await aiGenerateObject({
+        model: livepeerClient.languageModel(model),
+        schema,
+        schemaName,
+        schemaDescription,
+        mode,
+        ...modelOptions,
+    });
+}
+
+
// Add type definition for Together AI response
interface TogetherAIImageResponse {
    data: Array<{
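For reference, the request/response contract the new LIVEPEER text path relies on, as a standalone sketch — endpoint path, headers, and the static `eliza-app-llm` bearer value are taken from the diff above; the model name and token count are arbitrary examples:

```typescript
// Standalone sketch of the /llm call generateText makes for Livepeer.
async function callLivepeerLLM(gateway: string, prompt: string): Promise<string> {
    const res = await fetch(`${gateway}/llm`, {
        method: "POST",
        headers: {
            accept: "text/event-stream",
            "Content-Type": "application/json",
            Authorization: "Bearer eliza-app-llm", // static app token from the diff
        },
        body: JSON.stringify({
            model: "meta-llama/Meta-Llama-3.1-8B-Instruct",
            messages: [
                { role: "system", content: "You are a helpful assistant" },
                { role: "user", content: prompt },
            ],
            max_tokens: 512,
            stream: false,
        }),
    });

    if (!res.ok) {
        throw new Error(`Livepeer request failed (${res.status}): ${await res.text()}`);
    }

    // OpenAI-style response; the core code validates the same shape.
    const json = await res.json();
    const content = json?.choices?.[0]?.message?.content;
    if (!content) throw new Error("Invalid response format from Livepeer");

    // Strip the Llama chat header the gateway sometimes echoes back.
    return content.replace(/<\|start_header_id\|>assistant<\|end_header_id\|>\n\n/, "");
}

// Example usage (assumes a reachable gateway):
// callLivepeerLLM("https://dream-gateway.livepeer.cloud", "Say hi").then(console.log);
```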
37 changes: 32 additions & 5 deletions packages/core/src/models.ts
@@ -932,11 +932,38 @@ export const models: Models = {
        },
    },
    [ModelProviderName.LIVEPEER]: {
-        // livepeer endpoint is handled from the sdk
+        endpoint: settings.LIVEPEER_GATEWAY_URL,
        model: {
+            [ModelClass.SMALL]: {
+                name:
+                    settings.SMALL_LIVEPEER_MODEL ||
+                    "meta-llama/Meta-Llama-3.1-8B-Instruct",
+                stop: [],
+                maxInputTokens: 8000,
+                maxOutputTokens: 8192,
+                temperature: 0,
+            },
+            [ModelClass.MEDIUM]: {
+                name:
+                    settings.MEDIUM_LIVEPEER_MODEL ||
+                    "meta-llama/Meta-Llama-3.1-8B-Instruct",
+                stop: [],
+                maxInputTokens: 8000,
+                maxOutputTokens: 8192,
+                temperature: 0,
+            },
+            [ModelClass.LARGE]: {
+                name:
+                    settings.LARGE_LIVEPEER_MODEL ||
+                    "meta-llama/Meta-Llama-3.1-8B-Instruct",
+                stop: [],
+                maxInputTokens: 8000,
+                maxOutputTokens: 8192,
+                temperature: 0,
+            },
            [ModelClass.IMAGE]: {
                name:
-                    settings.LIVEPEER_IMAGE_MODEL || "ByteDance/SDXL-Lightning",
+                    settings.IMAGE_LIVEPEER_MODEL || "ByteDance/SDXL-Lightning",
            },
        },
    },

@@ -948,21 +975,21 @@ export const models: Models = {
            stop: [],
            maxInputTokens: 128000,
            maxOutputTokens: 8192,
-            temperature: 0.6,
+            temperature: 0,
        },
        [ModelClass.MEDIUM]: {
            name: settings.MEDIUM_INFERA_MODEL || "mistral-nemo:latest",
            stop: [],
            maxInputTokens: 128000,
            maxOutputTokens: 8192,
-            temperature: 0.6,
+            temperature: 0,
        },
        [ModelClass.LARGE]: {
            name: settings.LARGE_INFERA_MODEL || "mistral-small:latest",
            stop: [],
            maxInputTokens: 128000,
            maxOutputTokens: 8192,
-            temperature: 0.6,
+            temperature: 0,
        },
    },
},