⭐⭐⭐ Kalavai and our LLM pools are open source and free to use for both commercial and non-commercial purposes. If you find it useful, consider supporting us by giving our GitHub project a star, joining our Discord channel, following our Substack and leaving us a review on Product Hunt.
Kalavai is an open source tool that turns everyday devices into your very own LLM platform. It aggregates resources from multiple machines, including desktops and laptops, and is compatible with most model engines to make LLM deployment and orchestration simple and reliable.
Kalavai's goal is to make using LLMs in real applications accessible and affordable to all. It's a magic box that integrates all the components required to make LLMs useful in the age of massive computing: sourcing computing power, managing distributed infrastructure and storage, running industry-standard model engines, and orchestrating LLMs.
- 31 January 2025: kalavai-client is now a PyPI package, easier to install than ever!
- 27 January 2025: Support for accessing pools from remote computers
- 9 January 2025: Added support for Aphrodite Engine models
- 8 January 2025: Release of a free, public, shared pool for community LLM deployment
- 24 December 2024: Release of public BOINC pool to donate computing to scientific projects
- 23 December 2024: Release of public petals swarm
- 24 November 2024: Common pools with private user spaces
- 30 October 2024: Release of our public pool platform
We currently support the following LLM engines out of the box:
Coming soon:
Not what you were looking for? Tell us what engines you'd like to see.
Kalavai is at an early stage of development. We encourage people to use it and give us feedback! Although we try to minimise breaking changes, these may occur until we reach a stable version (v1.0).
- Get a free Kalavai account and access unlimited AI.
- Full documentation for the project.
- Join our Substack for updates and be part of our community
- Join our Discord community
The kalavai-client is the main tool for interacting with the Kalavai platform: it creates and manages both local and public pools, and interacts with them (e.g. to deploy models). Let's go over its installation.
From release v0.5.0, you can install kalavai-client on non-worker computers. You can run a pool on a set of machines and use the client on a remote computer from which you access the LLM pool. Because the client only requires Python, more computers can now run it.
For workers sharing resources with the pool:
- A laptop, desktop or Virtual Machine
- Docker engine installed (for Linux, Windows and macOS) with privileged access.
Support for Windows and macOS workers is experimental: Kalavai workers run in Docker containers that require access to the host's network interfaces, so systems that do not support containers natively (Windows and macOS) may have difficulty finding each other.
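Before sharing resources as a worker, it can help to verify that Docker is available on the machine. A minimal pre-flight sketch (this is a generic check, not a Kalavai command):

```python
import shutil

# Pre-flight check for worker machines: the Docker engine must be installed.
# This only confirms the docker CLI is on PATH; it does not verify that the
# daemon is running or that you have privileged access.
if shutil.which("docker"):
    print("docker CLI found")
else:
    print("docker CLI not found - install Docker before sharing resources")
```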
Any system that runs Python 3.6+ can run the kalavai-client and therefore connect to and operate an LLM pool, without sharing resources with it. Your computer won't add its capacity to the pool, but it will be able to deploy jobs and interact with models.
If you see the following error:
fatal error: Python.h: No such file or directory | #include <Python.h>
Make sure you also install the python3-dev package. For Ubuntu distros:
sudo apt install python3-dev
If you see:
AttributeError: install_layout. Did you mean: 'install_platlib'?
Upgrade your setuptools:
pip install -U setuptools
The client is a python package and can be installed with one command:
pip install kalavai-client
This is the easiest and most powerful way to experience Kalavai. It gives users the full resource capabilities of the community and access to all of its deployed LLMs, via an OpenAI-compatible endpoint as well as a UI-based playground.
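Because the endpoint is OpenAI-compatible, any OpenAI-style client can talk to deployed models. A minimal sketch of the request body such clients send (the endpoint URL and model name below are placeholders, not real Kalavai defaults):

```python
import json

# Placeholder: substitute your pool's endpoint and a model deployed on it.
ENDPOINT = "http://<pool-address>/v1/chat/completions"

def build_chat_request(model: str, prompt: str) -> dict:
    """Request body for an OpenAI-compatible /v1/chat/completions API."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

# POST this JSON body to ENDPOINT (with your credentials) to get a completion.
print(json.dumps(build_chat_request("my-deployed-llm", "Hello, pool!")))
```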
Check out our guide on how to join and start deploying LLMs.
Kalavai is free to use, with no caps, for both commercial and non-commercial purposes. All you need to get started is one or more computers that can see each other (i.e. on the same network), and you are good to go. If you wish to join computers in different locations or networks, check out managed Kalavai.
Simply use the client to start your seed node:
kalavai pool start <pool-name>
Now you are ready to add worker nodes to this seed. To do so, generate a joining token:
$ kalavai pool token --user
Join token: <token>
Increase the power of your AI pool by inviting others to join.
Copy the joining token. On the worker node, run:
kalavai pool join <token>
You can now connect to an existing pool from any computer, not just from worker nodes. To connect to a pool, run:
kalavai pool attach <token>
This won't add the machine as a worker, but you will be able to operate in the pool as if it were one. This is ideal for remote access to the pool, and for using the pool from machines that cannot run workers (due to Docker container limitations).
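Putting the commands above together, the whole pool lifecycle can also be scripted. This sketch only composes the CLI invocations shown in this guide into argv lists (for example, for subprocess.run); `<token>` stays a placeholder, so nothing runs against a real pool:

```python
import shlex

# CLI invocations from this guide; <token> is the placeholder for the value
# printed by `kalavai pool token --user` on the seed node.
workflow = [
    "kalavai pool start my-pool",   # seed node: create the pool
    "kalavai pool token --user",    # seed node: print a joining token
    "kalavai pool join <token>",    # worker nodes: share resources
    "kalavai pool attach <token>",  # non-worker machines: remote access only
]

# Split each command into an argv list suitable for subprocess.run(argv)
for cmd in workflow:
    print(shlex.split(cmd))
```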
Check our examples to put your new AI pool to good use!
- Single node vLLM GPU LLM deployment
- Multi node vLLM GPU LLM deployment
- Aphrodite-engine quantized LLM deployment, including Kobold interface
- Ray cluster for distributed computation.
If your system is not currently supported, open an issue and request it. We are expanding this list constantly.
Since worker nodes run inside Docker, any machine that can run Docker should be compatible with Kalavai. Here are instructions for Linux, Windows and macOS.
The kalavai client, which controls and accesses pools, can be installed on any machine that has Python 3.10+.
- amd64 or x86_64 CPU architecture
- NVIDIA GPU
- AMD and Intel GPUs are currently not supported (interested in helping us test them?)
- Kalavai client on Linux
- [TEMPLATE] Distributed LLM deployment
- Kalavai client on Windows (with WSL2)
- Public LLM pools
- Self-hosted LLM pools
- Collaborative LLM deployment
- Ray cluster support
- Kalavai client on Mac
- [TEMPLATE] GPUStack support
- [TEMPLATE] exo support
- Support for AMD GPUs
- Docker install path
Anything missing here? Give us a shout on the discussion board
- PR welcome!
- Join the community and share ideas!
- Report bugs, issues and new features.
- Help improve our compatibility matrix by testing on different operating systems.
- Follow our Substack channel for news, guides and more.
- Community integrations are template jobs built by Kalavai and the community that make deploying distributed workflows easy for users. Anyone can extend them and contribute to the repo.
Python version >= 3.6.
sudo add-apt-repository ppa:deadsnakes/ppa
sudo apt update
sudo apt install python3.10 python3.10-dev python3-virtualenv
virtualenv -p python3.10 env
source env/bin/activate
sudo apt install python3.10-venv python3.10-dev -y
pip install -U setuptools
pip install -e .[dev]
Build python wheels:
bash publish.sh build
To run the unit tests, use:
python -m unittest
docker run --rm --net=host \
  -v /root/.cache/kalavai/:/root/.cache/kalavai/ \
  ghcr.io/helmfile/helmfile:v0.169.2 \
  helmfile sync --file /root/.cache/kalavai/apps.yaml --kubeconfig /root/.cache/kalavai/kubeconfig