
Simple application that uses the OpenAI ChatGPT API. The app uses a Gradio front-end for chat, which includes several task-specific system prompts for coding with Python.


bkocis/chatgpt-api-app


Gradio app streaming prompt completion with OpenAI API



About

Build your own ChatGPT app and functionalities

This is a simple web application that uses Gradio to stream chat completions from the OpenAI ChatGPT API. A Docker container can be built and hosted on your own server, so you get "almost" the same user experience as on the official ChatGPT site.
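The streaming part can be sketched roughly as follows. This is a minimal illustration, not the repository's actual code; `stream_chat` is a hypothetical helper that assumes the OpenAI Python client's `chat.completions.create(stream=True)` interface, which yields chunks carrying a content delta:

```python
# Sketch of streaming a chat completion for a Gradio chatbot.
# stream_chat() is an illustrative stand-in, not code from this repo.

def stream_chat(client, messages, model="gpt-3.5-turbo"):
    """Yield the growing assistant reply as chunks arrive, so Gradio
    can redraw the chat window incrementally."""
    stream = client.chat.completions.create(
        model=model, messages=messages, stream=True
    )
    partial = ""
    for chunk in stream:
        delta = chunk.choices[0].delta.content
        if delta:  # the final chunk's delta may be None
            partial += delta
            yield partial
```

A Gradio chat callback can simply `yield from` such a generator to stream the reply into the chat window.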

Multiple tabs for accessing different models and settings
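The per-tab system prompts can be organized as a simple mapping from tab name to prompt. The prompt texts and the `build_messages` helper below are hypothetical examples, not the repository's actual prompts:

```python
# Hypothetical task-specific system prompts, one per tab.
SYSTEM_PROMPTS = {
    "General chat": "You are a helpful assistant.",
    "Code review": "You are a senior Python reviewer. Point out bugs and style issues.",
    "Docstrings": "You write concise NumPy-style docstrings for Python functions.",
}

def build_messages(tab_name, user_message):
    """Prepend the selected tab's system prompt to the user's message."""
    return [
        {"role": "system", "content": SYSTEM_PROMPTS[tab_name]},
        {"role": "user", "content": user_message},
    ]
```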

Installation

OpenAI API Key

An API key is required to run the application. You can obtain one from OpenAI.

On a local machine, you can put your API key in your .bashrc or .zshrc file, or add it to the environment variables in your IDE:

export OPENAI_API_KEY=<YOUR_API_KEY>
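The application can then read the key from the environment at startup. A minimal sketch, failing early with a clear message when the key is missing (the helper name is illustrative):

```python
import os

def get_openai_api_key():
    """Read the API key from the environment, or fail with a clear message."""
    key = os.environ.get("OPENAI_API_KEY")
    if not key:
        raise RuntimeError(
            "OPENAI_API_KEY is not set; export it before starting the app."
        )
    return key
```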

When building the Docker image, the API key is passed as a build argument. Make sure it is available on the instance where you build the image.
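One common pattern for accepting the build argument looks like this in the Dockerfile (a sketch; the repository's actual Dockerfile may differ):

```dockerfile
# Accept the key at build time and expose it to the app at runtime.
# Note: the value is baked into the image layers, so prefer runtime
# secrets for any image that will be shared.
ARG OPENAI_API_KEY
ENV OPENAI_API_KEY=${OPENAI_API_KEY}
```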

Getting started

Virtual environment

python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt

Running the application on a local machine

cd chatgptApp
python main.py

Docker

Build and run the Docker container using the commands from the Makefile:

docker build --tag=${app_name} --build-arg OPENAI_API_KEY=${OPENAI_API_KEY} .
docker run -p ${port}:${port} ${app_name}

Deploy headless (target from Makefile)

deploy_headless:
	docker build --tag=${app_name} --build-arg OPENAI_API_KEY=${OPENAI_API_KEY} .
	docker run -dit -p ${port}:${port} ${app_name}

Check the container with docker logs

Volumes and storage on host

When a Docker volume is used, the data can be exported or copied from the container:

docker exec -it tender_bose /bin/bash
docker cp tender_bose:/opt/app/chat_sessions.db ./resources/

Nginx

If you deploy the app to your own server, you can use nginx as a reverse proxy. Use the following location block as a suggestion for the nginx config file:

    location /chatgpt-app {
        proxy_pass http://0.0.0.0:8083;
        proxy_redirect off;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }

It is important to set server_name, server_port, and root_path in the demo.launch method in main.py; root_path must match the path that nginx proxies in its location block:

demo.launch(
    server_name="0.0.0.0",
    server_port=8083,
    root_path="/openai-chatgpt-gradio-app")

Read more about deploying:

Some issues encountered, with solutions:

Reference repo - kudos to FrancescoSaverioZuppichini 👍
