```shell
psql postgresql://langflow:langflow@localhost:5432/langflow
```

```sql
-- Inside the psql prompt
CREATE TABLE security_requirements (
    id SERIAL PRIMARY KEY,
    requirement_description TEXT NOT NULL,
    creation_date TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
\dt
SELECT * FROM langchain_pg_collection LIMIT 5;
```
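For a quick sanity check after creating the table, a row can be inserted and read back non-interactively with `psql -c`. This is only an illustrative example; the requirement text below is made up.

```shell
# Insert a sample row into the table created above (the description is a made-up example)
psql postgresql://langflow:langflow@localhost:5432/langflow \
  -c "INSERT INTO security_requirements (requirement_description) VALUES ('All service endpoints must require authentication');"

# Read it back
psql postgresql://langflow:langflow@localhost:5432/langflow \
  -c "SELECT id, requirement_description, creation_date FROM security_requirements;"
```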
References:
- https://bugbytes.io/posts/vector-databases-pgvector-and-langchain/
- https://github.com/ollama/ollama/blob/main/docs/api.md
Alternatively, connect with the password supplied via the environment:

```shell
export PGPASSWORD='langflow'
psql -U langflow -d langflow -h localhost
```
```sql
SELECT document, (embedding <=> '[0.008690234273672104, -0.020522210747003555]') AS cos_dist
FROM langchain_pg_embedding
ORDER BY cos_dist
LIMIT 2;
```
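The vector literal in the query above is only a truncated example; in practice the query vector comes from the same embedding model that populated `langchain_pg_embedding`. A minimal sketch using Ollama's embeddings endpoint (assuming an embedding model such as `nomic-embed-text` has already been pulled) might look like this:

```shell
# Sketch: produce a query embedding with Ollama's embeddings endpoint.
# Assumes nomic-embed-text has been pulled and matches the model used to
# populate langchain_pg_embedding.
curl http://localhost:11434/api/embeddings -d '{
  "model": "nomic-embed-text",
  "prompt": "password rotation policy"
}'
# The response contains an "embedding" array that can be pasted into the
# '[...]' vector literal of the <=> query above.
```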
This repository provides a local deployment setup for a Large Language Model (LLM) using Docker and Docker Compose. The setup runs a local model with Ollama and uses Open Web UI as the chat front end.
To run locally:

- Run `make start` at the root of this project. This sets up Open Web UI for chatting and deploys it with the `llama3` model.
- ~~To set up this stack with a different LLM, say `gemma2`, issue the command `make start LOCAL_LLM=gemma2`~~
- To access the chat UI, go to http://localhost:3000
- Sign up with a local account and log in (login information is local to the host machine).
- Select a model to use for chat.
- Chat away :)
Ollama exposes several API endpoints that one can use to interact with the model. Some examples are below:
Generate Endpoint

```shell
curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Name the planets in the solar system?"
}'
```
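By default `/api/generate` streams a sequence of JSON objects; setting `"stream": false` in the request returns a single consolidated response, which is handier for quick shell tests:

```shell
# Same request as above, but ask for one JSON response instead of a stream
curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Name the planets in the solar system?",
  "stream": false
}'
```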
Pull Model

One can use multiple models in the Open Web UI. To pull a model, run:

```shell
curl http://localhost:11434/api/pull -d '{"name": "llama3"}'
```
Check Models Available Locally

```shell
curl http://localhost:11434/api/tags
```
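To list just the model names, the response can be filtered with `jq` (assuming `jq` is installed and the response has a `models` array with `name` fields, per the Ollama API docs linked above):

```shell
# Print one installed model name per line
curl -s http://localhost:11434/api/tags | jq -r '.models[].name'
```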
Langflow Docker example: https://github.com/langflow-ai/langflow/tree/main/docker_example
To deploy the RAG, run `make rag`. This will start the RAG service at http://localhost:7860
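Once the service is up, a flow can also be called over Langflow's REST API. This is only a sketch: the path and payload follow recent Langflow versions, `<flow-id>` is a placeholder for an actual flow ID from the UI, and the exact API may differ for the deployed version.

```shell
# Hedged sketch: run a Langflow flow over HTTP (replace <flow-id> with a real
# flow ID; verify the endpoint against the docs of the Langflow version in use)
curl -X POST "http://localhost:7860/api/v1/run/<flow-id>" \
  -H "Content-Type: application/json" \
  -d '{"input_value": "What are the key security requirements?", "output_type": "chat", "input_type": "chat"}'
```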
- https://tavily.com/ - Search API
- https://github.com/assafelovic/gpt-researcher - see this repo for samples and web scraper code
Generate With an Image (llava)

```shell
curl http://localhost:11434/api/generate -d '{
  "model": "llava",
  "prompt": "What is in this picture?",
  "images": ["iVBORw0KGgoAAAANSUhEUgAAAG0AAABmCAYAAADBPx ........C"]
}'
```
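The `images` field takes base64-encoded image data (truncated in the example above). One way to produce it from a local file, using a placeholder filename:

```shell
# Encode a local image without line wrapping (GNU coreutils syntax;
# on macOS use `base64 -i picture.png`). picture.png is a placeholder.
IMG=$(base64 -w0 picture.png)

# Pass the encoded image to the llava request
curl http://localhost:11434/api/generate \
  -d "{\"model\": \"llava\", \"prompt\": \"What is in this picture?\", \"images\": [\"$IMG\"]}"
```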
CrewAI Docs: https://chatgpt.com/g/g-qqTuUWsBY-crewai-assistant/c/2ee65a77-006f-491a-af2d-8d9a26777576