After creating a new Conda environment and installing the requirements and cudatoolkit, I attempted a fine-tuning run with qlora to check whether the environment was set up correctly. However, I started getting the output shared below. Does anyone have any idea what is causing this?
```
CUDA SETUP: Highest compute capability among GPUs detected: 8.6
CUDA SETUP: Detected CUDA version 111
CUDA SETUP: Loading binary /miniconda3/envs/sim_2/lib/python3.9/site-packages/bitsandbytes/libbitsandbytes_cuda111.so...
Detected that training was already completed!
loading base model huggyllama/llama-7b...
Loading checkpoint shards: 100%|██████████| 2/2 [00:39<00:00, 19.96s/it]
adding LoRA modules...
trainable params: 62464000.0 || all params: 3625340928 || trainable: 1.7229827826002466
loaded model
You are using the legacy behaviour of the <class 'transformers.models.llama.tokenization_llama.LlamaTokenizer'>. This means that tokens that come after special tokens will not be properly handled. We recommend you to read the related pull request available at huggingface/transformers#24565
Adding special tokens.
```
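For what it's worth, the `Detected that training was already completed!` line suggests the script found a finished run in the output directory and is skipping training rather than starting a new one. Below is a minimal sketch of the kind of resume check that would produce this message, in the spirit of qlora's train script. The function name `get_last_checkpoint`, the `completed` marker filename, and the `./output` path are my assumptions inferred from the log, not verified against the repo:

```python
import os
from os.path import isdir, exists, join

def get_last_checkpoint(checkpoint_dir):
    """Hypothetical sketch of the resume logic implied by the log above.

    Returns (path_to_latest_checkpoint_or_None, training_already_completed).
    """
    if isdir(checkpoint_dir):
        # Assumption: a 'completed' marker file is written at the end of a
        # successful run; its presence makes later runs skip training.
        if exists(join(checkpoint_dir, 'completed')):
            return None, True
        # Otherwise, resume from the highest-numbered checkpoint-<step> dir.
        max_step = 0
        for name in os.listdir(checkpoint_dir):
            if isdir(join(checkpoint_dir, name)) and name.startswith('checkpoint-'):
                max_step = max(max_step, int(name.split('-')[1]))
        if max_step > 0:
            return join(checkpoint_dir, f'checkpoint-{max_step}'), False
    return None, False  # fresh run, nothing to resume

# Usage matching the message seen in the log (output path assumed):
checkpoint_dir, completed = get_last_checkpoint('./output')
if completed:
    print('Detected that training was already completed!')
```

If that is what is happening here, pointing `--output_dir` at a fresh directory, or removing the stale marker and old checkpoints from the existing one, should let the run actually train instead of exiting early.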