The baseline assumption for the models in this list is that they support multimodal image inputs.
By default, the OpenAI LangChain package is installed, but you can add provider-specific packages with `pip install langchain-{provider}`, substituting `{provider}` with the provider you want to use.
Here's the full list of providers supported by LangChain: https://python.langchain.com/docs/integrations/chat/
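To make the "multimodal image inputs" assumption concrete, here is a minimal sketch of passing an image to a chat model using LangChain's content-block message format. It assumes the default `langchain-openai` package, an `OPENAI_API_KEY` environment variable, and a local `photo.jpg` file; the file name and model choice are placeholders.

```python
import base64

from langchain_core.messages import HumanMessage
from langchain_openai import ChatOpenAI

# Placeholder image file; any local JPEG/PNG works.
with open("photo.jpg", "rb") as f:
    image_data = base64.b64encode(f.read()).decode("utf-8")

model = ChatOpenAI(model="gpt-4o")  # assumes OPENAI_API_KEY is set

# A multimodal message mixes text and image content blocks.
message = HumanMessage(
    content=[
        {"type": "text", "text": "Describe this image in one sentence."},
        {
            "type": "image_url",
            "image_url": {"url": f"data:image/jpeg;base64,{image_data}"},
        },
    ]
)

response = model.invoke([message])
print(response.content)
```

The same content-block format generally carries over to the providers below once the corresponding chat model class is swapped in.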
- OpenAI
  - Provider info: https://platform.openai.com/docs/models
  - Supported models:
    - GPT-4o
    - GPT-4o mini
  - LangChain package to install: `langchain-openai`
  - LangChain documentation: https://python.langchain.com/docs/integrations/chat/openai/
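Since GPT-4o and GPT-4o mini also accept images referenced by URL, here is a small variation on the sketch above, with a placeholder image URL:

```python
from langchain_core.messages import HumanMessage
from langchain_openai import ChatOpenAI

# GPT-4o mini with an image passed by URL instead of base64.
model = ChatOpenAI(model="gpt-4o-mini")
message = HumanMessage(
    content=[
        {"type": "text", "text": "What is in this picture?"},
        {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
    ]
)
response = model.invoke([message])
```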
- Anthropic
  - Provider info: https://docs.anthropic.com/en/docs/about-claude/models
  - Supported models:
    - Claude 3.5 Sonnet
    - Claude 3 Opus
    - Claude 3 Sonnet
    - Claude 3 Haiku
  - LangChain package to install: `langchain-anthropic`
  - LangChain documentation: https://python.langchain.com/docs/integrations/chat/anthropic/
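A minimal initialization sketch for Anthropic, assuming `langchain-anthropic` is installed and `ANTHROPIC_API_KEY` is set; the dated model identifier is one example, so check the provider info page for current names:

```python
from langchain_anthropic import ChatAnthropic

# Claude 3.5 Sonnet; any of the Claude 3 family identifiers works here.
model = ChatAnthropic(model="claude-3-5-sonnet-20240620")
```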
- Google (Gemini API)
  - Provider info: https://ai.google.dev/gemini-api/docs/models/gemini
  - Supported models:
    - Gemini 1.5 Flash
    - Gemini 1.5 Flash-8B
    - Gemini 1.5 Pro
  - LangChain package to install: `langchain-google-genai`
  - LangChain documentation: https://python.langchain.com/docs/integrations/chat/google_generative_ai/
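For the Gemini API, a sketch along the same lines, assuming `langchain-google-genai` is installed and a Gemini API key is available in `GOOGLE_API_KEY`:

```python
from langchain_google_genai import ChatGoogleGenerativeAI

# Gemini 1.5 Flash; Flash-8B and Pro use the same initialization pattern.
model = ChatGoogleGenerativeAI(model="gemini-1.5-flash")
```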
- Amazon Bedrock
  - Provider info: https://docs.aws.amazon.com/bedrock/latest/userguide/what-is-bedrock.html
  - Supported models: see the provider info page for the current list of vision-capable models
  - LangChain package to install: `langchain-aws`
  - LangChain documentation: https://python.langchain.com/docs/integrations/chat/bedrock/
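For Bedrock, a sketch assuming `langchain-aws` is installed, AWS credentials are configured, and the chosen model is enabled in your account; the model ID and region are placeholders:

```python
from langchain_aws import ChatBedrock

# Claude 3.5 Sonnet on Bedrock; swap in any vision-capable model ID
# that is enabled in your AWS account.
model = ChatBedrock(
    model_id="anthropic.claude-3-5-sonnet-20240620-v1:0",
    region_name="us-east-1",
)
```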
- Google Cloud Vertex AI
  - Provider info: https://cloud.google.com/vertex-ai/docs/generative-ai/model-reference/overview
  - Supported models:
    - Gemini 1.5 Flash
    - Gemini 1.5 Flash-8B
    - Gemini 1.5 Pro
    - More in the "Model Garden": https://console.cloud.google.com/vertex-ai/model-garden
  - LangChain package to install: `langchain-google-vertexai`
  - LangChain documentation: https://python.langchain.com/docs/integrations/chat/google_vertex_ai_palm/
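For Vertex AI, a sketch assuming `langchain-google-vertexai` is installed and application-default credentials are set up (for example via `gcloud auth application-default login`); the project ID is a placeholder:

```python
from langchain_google_vertexai import ChatVertexAI

# Gemini 1.5 Flash served through Vertex AI; "my-gcp-project" stands in
# for your own Google Cloud project ID.
model = ChatVertexAI(model="gemini-1.5-flash", project="my-gcp-project")
```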
- Together AI
  - Provider info: https://docs.together.ai/docs/vision-overview
  - Supported models:
    - Llama 3.2 Vision 11B
    - Llama 3.2 Vision 90B
    - "Serverless" list: https://docs.together.ai/docs/serverless-models
    - "Dedicated" list: https://docs.together.ai/docs/dedicated-models
  - LangChain package to install: `langchain-together`
  - LangChain documentation: https://python.langchain.com/docs/integrations/chat/together/
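For Together AI, a sketch assuming `langchain-together` is installed and `TOGETHER_API_KEY` is set; the model string is an example, so verify it against the serverless and dedicated lists above:

```python
from langchain_together import ChatTogether

# Llama 3.2 11B Vision on Together; check the model lists for the exact
# identifier available to your account.
model = ChatTogether(model="meta-llama/Llama-3.2-11B-Vision-Instruct-Turbo")
```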
- Hugging Face
  - Provider info: https://huggingface.co/models
  - Supported models:
    - Llama 3.2 Vision 11B
    - Llama 3.2 Vision 90B
    - Full list: https://huggingface.co/models
  - LangChain package to install: `langchain-huggingface`
  - LangChain documentation: https://python.langchain.com/docs/integrations/chat/huggingface/
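For Hugging Face, one possible sketch, assuming `langchain-huggingface` is installed, `HUGGINGFACEHUB_API_TOKEN` is set, and you have access to the (gated) repository named below:

```python
from langchain_huggingface import ChatHuggingFace, HuggingFaceEndpoint

# Wrap a hosted inference endpoint in the chat interface; the repo ID is
# one example and may require accepting the model's license on the Hub.
llm = HuggingFaceEndpoint(repo_id="meta-llama/Llama-3.2-11B-Vision-Instruct")
model = ChatHuggingFace(llm=llm)
```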
- Ollama
  - Provider info: https://github.com/ollama/ollama
  - Supported models:
    - Llama 3.2 Vision 11B
    - Llama 3.2 Vision 90B
    - Full list: https://github.com/ollama/ollama#model-library
  - LangChain package to install: `langchain-ollama`
  - LangChain documentation: https://python.langchain.com/docs/integrations/chat/ollama/
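For Ollama, a sketch assuming `langchain-ollama` is installed and an Ollama server is running locally with the model already pulled:

```python
from langchain_ollama import ChatOllama

# Requires a local Ollama server with the model pulled first, e.g.:
#   ollama pull llama3.2-vision
model = ChatOllama(model="llama3.2-vision")
```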