
Javelin: an Enterprise-Scale, Fast LLM Gateway

This is the Python client package for Javelin.

For more information about Javelin, see https://getjavelin.io
Javelin Documentation: https://docs.getjavelin.io

Development

For local development, change `version = "RELEASE_VERSION"` in pyproject.toml to a semantic version, e.g. `version = "v0.1.10"`.

Make sure pyproject.toml is reverted before committing back to main.
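As a sketch, the relevant pyproject.toml line would change from the placeholder to a concrete version like this (the surrounding keys are illustrative; only the `version` field matters here):

```toml
[tool.poetry]
name = "javelin_sdk"
# Before (placeholder, committed to main):
# version = "RELEASE_VERSION"
# After (local development only -- revert before committing):
version = "v0.1.10"
```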

Installation

  pip install javelin_sdk

Quick Start Guide

Development Setup

Setting up Virtual Environment

Windows

# Create virtual environment
python -m venv venv

# Activate virtual environment
venv\Scripts\activate

# Install dependencies
pip install poetry
poetry install

macOS/Linux

# Create virtual environment
python -m venv venv

# Activate virtual environment
source venv/bin/activate

# Install dependencies
pip install poetry
poetry install

Building and Installing the SDK

# Uninstall any existing version
pip uninstall javelin_sdk -y

# Build the package
poetry build

# Install the newly built package
pip install dist/javelin_sdk-<version>-py3-none-any.whl

Direct OpenAI-Compatible Usage

from openai import OpenAI

# Initialize client with Javelin endpoint
client = OpenAI(
    base_url="https://api.javelin.live/v1/query/your_route",
    api_key="your_api_key"
)

# Make requests using standard OpenAI format
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"}
    ]
)

Using Javelin SDK

import os
from openai import OpenAI
import dotenv

dotenv.load_dotenv()

# Configure a regular route with Javelin headers
javelin_api_key = os.getenv("JAVELIN_API_KEY")
llm_api_key = os.getenv("OPENAI_API_KEY")
javelin_headers = {
    "x-api-key": javelin_api_key,
}

client = OpenAI(
    base_url="https://api-dev.javelin.live/v1/query/<route>",
    api_key=llm_api_key,
    default_headers=javelin_headers,
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "user", "content": "hello"}
    ],
)

print(response.model_dump_json(indent=2))
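The example above reads its keys with `dotenv.load_dotenv()`, so a `.env` file in the working directory would need entries like the following (variable names taken from the code; the values are placeholders):

```
JAVELIN_API_KEY=your_javelin_api_key
OPENAI_API_KEY=your_openai_api_key
```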

Using Universal Endpoints in OpenAI-Compatible Format

from javelin_sdk import JavelinClient, JavelinConfig

# Setup client configuration
config = JavelinConfig(
    base_url="https://api.javelin.live",
    javelin_api_key="your_javelin_api_key"
)

client = JavelinClient(config)

# Set headers for universal endpoint
custom_headers = {
    "Content-Type": "application/json",
    "x-javelin-route": "univ_bedrock"  # Change route as needed (univ_azure, univ_bedrock, univ_gemini)
}
client.set_headers(custom_headers)

# Make requests using OpenAI format
response = client.chat.completions.create(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What are the three primary colors?"}
    ],
    temperature=0.7,
    max_tokens=150,
    model="amazon.titan-text-express-v1"  # Use appropriate model for your endpoint
)
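Since the only per-provider difference above is the `x-javelin-route` header value, it can help to centralize that mapping. This is a hypothetical helper (not part of the SDK), using the route names listed in the comment above (`univ_azure`, `univ_bedrock`, `univ_gemini`):

```python
# Hypothetical helper: map a provider name to the universal-endpoint
# headers shown in the example above. Not part of javelin_sdk.
UNIVERSAL_ROUTES = {
    "azure": "univ_azure",
    "bedrock": "univ_bedrock",
    "gemini": "univ_gemini",
}

def universal_headers(provider: str) -> dict:
    """Build the headers for a Javelin universal endpoint."""
    try:
        route = UNIVERSAL_ROUTES[provider]
    except KeyError:
        raise ValueError(f"unknown provider: {provider!r}")
    return {"Content-Type": "application/json", "x-javelin-route": route}
```

The result can then be passed straight to `client.set_headers(...)`, keeping the provider choice in one place.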

Additional Integration Patterns

Javelin provides universal endpoints that let you use a consistent interface across different LLM providers. For more detailed examples and integration patterns, check out:

- Azure OpenAI
- Bedrock
- Gemini
- Agent Examples
- Basic Examples
- Advanced Examples