---
title: Installation and Setup Guide
description: Start experiencing the power of Shinkai in under 5 minutes
icon: wrench
---
**Windows/Linux** ➡ CPU: 4 cores | RAM: 8 GB | GPU: NVIDIA or AMD with 4 GB VRAM
**macOS** ➡ Apple Silicon (M1 or later); Intel Macs are not supported
Getting started with Shinkai is quick and easy. Follow these steps to install and set up Shinkai on your device.
Visit the [Shinkai website](https://www.shinkai.com/) and click the "Download" button. Shinkai supports Windows, Mac, and Linux.
Once downloaded, run the installation file and follow the on-screen instructions to install Shinkai.
After installation, open the Shinkai app. The first time you launch it, you'll be guided through a quick setup process.
To start using local AI with Shinkai, you will need to install local AI models.
After initializing the app, you will be prompted to install recommended models based on your device specs. You can see the full list of available models by clicking 'Show all models'.
As a general rule, here is a brief guideline for model installation **according to your machine specs**:
**8GB RAM** ➡ choose 7B models or smaller.
_e.g., Gemma2 2b, Codegemma, Falcon, etc._
**16GB RAM** ➡ choose 7B, 13B models or smaller.
_e.g., Llama 3.1 8b, Mistral Nemo 12b, Nexusraven, etc._
**32GB RAM** ➡ choose 7B, 13B, 15B, 30B models or smaller.
_e.g., Codestral, etc._
**64GB RAM** ➡ choose 7B, 13B, 15B, 30B models or smaller. You might be able to go higher than that.
_e.g., Alfred, etc._
**96GB RAM+** ➡ any model size should work for you. 🦾
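The brackets above can be sketched as a small helper. This is a minimal illustration, not part of Shinkai itself: `recommended_models` is a hypothetical function encoding the table, and the RAM detection via `os.sysconf` works on Linux and macOS but not on Windows.

```python
import os

def recommended_models(ram_gb: float) -> str:
    """Map total RAM (in GB) to the model-size brackets from the guideline above."""
    if ram_gb >= 96:
        return "any model size"
    if ram_gb >= 64:
        return "7B, 13B, 15B, 30B or smaller (possibly larger)"
    if ram_gb >= 32:
        return "7B, 13B, 15B, 30B or smaller"
    if ram_gb >= 16:
        return "7B, 13B or smaller"
    return "7B or smaller"

# Detect total physical RAM (Linux/macOS only; os.sysconf is unavailable on Windows).
total_gb = os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES") / 1e9
print(f"~{total_gb:.0f} GB RAM -> {recommended_models(total_gb)}")
```

Running it on, say, a 16 GB machine would suggest sticking to 7B or 13B models.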
👟 For lightweight, fast AI needs (short conversations, basic text generation): Go with Gemma2 2b or LLaVA Phi 3.
⚖️ For balance between complexity and speed: Llama 3.1 8b is the best choice.
🧠 For handling large documents and complex reasoning: Opt for Mistral Nemo 12b, but keep in mind it will require more resources and be slower.
Ultimately, your choice depends on the type of content, performance requirements, and hardware limitations of your setup.

The "b" in model names refers to the number of parameters in the model, measured in billions. Parameters are the variables the model uses to make predictions, learn language patterns, and generate outputs.
👀 **The more parameters a model has, the more complex patterns it can learn**, but this also means:
- Increased resource consumption (memory, disk space, computational power)
- Longer inference time (slower responses, unless optimized)
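To see why parameter count drives memory use, you can estimate weight memory as parameters × bytes per parameter. This is a rough back-of-the-envelope sketch (the function name is illustrative, and it ignores runtime overhead like the KV cache and activations, which add more on top):

```python
def approx_memory_gb(params_billions: float, bits_per_param: int) -> float:
    """Rough weight-memory estimate in GB: parameters x bytes per parameter.

    params_billions * (bits/8) bytes-per-param = billions of bytes = GB.
    Ignores runtime overhead (KV cache, activations).
    """
    return params_billions * (bits_per_param / 8)

# An 8B model in 16-bit floats needs roughly 16 GB just for its weights;
# quantized to 4 bits, that drops to roughly 4 GB.
print(approx_memory_gb(8, 16))  # 16.0
print(approx_memory_gb(8, 4))   # 4.0
```

This is why a quantized 8B model fits comfortably on a 16 GB machine while the same model at full precision does not.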
👇 Here’s a general guideline for understanding parameter sizes:
**2b - 8b models** are smaller, faster, and can handle simpler tasks. They’re more lightweight and good for smaller, real-time tasks.
**10b - 70b models** are larger, slower, and better for complex, nuanced tasks. They have better performance on language understanding and generation, but are resource-intensive.
💻 _You have to keep in mind your own machine's specs when selecting a model._