Add support for Azure, OpenAI, Palm, Anthropic, Cohere Models - using litellm #19

Status: Open · wants to merge 5 commits into base: main
Changes from all commits
README.md — 4 changes: 4 additions & 0 deletions

@@ -2,6 +2,9 @@

[![Open in GitHub Codespaces](https://github.com/codespaces/badge.svg)](https://codespaces.new/streamlit/llm-examples?quickstart=1)

[![litellm](https://img.shields.io/badge/%20%F0%9F%9A%85%20liteLLM-chatGPT%7CAzure%7CAnthropic-blue?color=green)](https://github.com/BerriAI/litellm)


Starter examples for building LLM apps with Streamlit.

## Overview of the App
@@ -15,6 +18,7 @@ Current examples include:
- LangChain Quickstart
- LangChain PromptTemplate
- LangChain Search
- LiteLLM Playground - Run 1 prompt on Claude2, Claude 1.2, GPT-3.5, GPT-4

## Demo App

pages/2_lite_LLM_Quickstart.py — 74 changes: 74 additions & 0 deletions

@@ -0,0 +1,74 @@
import streamlit as st
import threading
import os
from litellm import completion
from dotenv import load_dotenv

# Load .env so litellm reads API keys from it
load_dotenv()

# Call one model via litellm and return its text response
def get_model_output(prompt, model_name):
    messages = [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": prompt},
    ]
    response = completion(messages=messages, model=model_name)
    return response['choices'][0]['message']['content']

# Thread target: write the model's output into its slot in the shared list
def get_model_output_thread(prompt, model_name, outputs, idx):
    outputs[idx] = get_model_output(prompt, model_name)

# Streamlit app

st.title("liteLLM API Playground - use 50+ LLM Models")
st.markdown("[Powered by liteLLM - one package for Anthropic, Cohere, OpenAI, Replicate](https://github.com/BerriAI/litellm/)")

# Sidebar for user input
with st.sidebar:
    st.header("User Settings")
    anthropic_api_key = st.text_input("Enter your Anthropic API key:", type="password")
    openai_api_key = st.text_input("Enter your OpenAI API key:", type="password")
    set_keys_button = st.button("Set API Keys")

if set_keys_button:
    if anthropic_api_key:
        os.environ["ANTHROPIC_API_KEY"] = anthropic_api_key
    if openai_api_key:
        os.environ["OPENAI_API_KEY"] = openai_api_key
    st.success("API keys have been set.")

# User Input section
with st.sidebar:
    st.header("User Input")
    prompt = st.text_area("Enter your prompt here:")
    submit_button = st.button("Submit")

# Main content area to display model outputs
st.header("Model Outputs")

# List of models to compare side by side
model_names = ["claude-instant-1.2", "claude-2", "gpt-3.5-turbo", "gpt-4"]  # Add your model names here

cols = st.columns(len(model_names))  # One column per model
outputs = [""] * len(model_names)  # Initialize outputs list with empty strings

threads = []

if submit_button and prompt:
    # Launch one thread per model so all requests run in parallel
    for idx, model_name in enumerate(model_names):
        thread = threading.Thread(target=get_model_output_thread, args=(prompt, model_name, outputs, idx))
        threads.append(thread)
        thread.start()

    # Wait for every model to finish before rendering
    for thread in threads:
        thread.join()

# Display text areas and fill with outputs if available
for idx, model_name in enumerate(model_names):
    with cols[idx]:
        st.text_area(label=f"{model_name}", value=outputs[idx], height=300, key=f"output_{model_name}_{idx}")  # Use a unique key

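The parallel fan-out used above — one thread per model, each writing to its own slot in a shared list — can be sketched in isolation. This is a minimal stdlib-only sketch: `fake_model_output` is a hypothetical stand-in for the litellm `completion` call, so it runs without API keys.

```python
import threading

# Hypothetical stand-in for a real model call (e.g. litellm's completion)
def fake_model_output(prompt, model_name):
    return f"{model_name} says: {prompt.upper()}"

def worker(prompt, model_name, outputs, idx):
    # Each thread writes only to its own index, so no lock is needed
    outputs[idx] = fake_model_output(prompt, model_name)

model_names = ["model-a", "model-b", "model-c"]
outputs = [""] * len(model_names)

# Start one thread per model, then wait for all of them
threads = [
    threading.Thread(target=worker, args=("hello", name, outputs, i))
    for i, name in enumerate(model_names)
]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(outputs)  # one reply per model, in the original model order
```

Because each thread owns a distinct list index, results stay aligned with `model_names` regardless of which request finishes first.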
3 files renamed without changes.
requirements.txt — 1 change: 1 addition & 0 deletions

@@ -4,3 +4,4 @@ openai
duckduckgo-search
anthropic>=0.3.0
trubrics>=1.4.3
litellm>=0.1.380