[2023.10.27] We release the source code for the proposed DynamicK-Tuning method.
- Our preliminary experiments show that DynamicK-Tuning can improve downstream instruction tuning of large language models. For example, when instruction-tuned on the Alpaca dataset, Llama2-7B reaches 48.35 on the MMLU benchmark with DynamicK-Tuning, compared with 46.49 for standard full fine-tuning.
We evaluate our models on the MMLU benchmark; a reproduction sketch is given below.
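The snippet below is a minimal, hypothetical sketch of zero-shot MMLU scoring via next-token log-likelihood over the answer letters, assuming a Hugging Face checkpoint; the repo's actual evaluation setup (few-shot count, prompt template, harness) is not specified here, and the model path is a placeholder to be replaced with the DynamicK-Tuning checkpoint.

```python
# Hypothetical MMLU evaluation sketch; not the repo's official script.
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "meta-llama/Llama-2-7b-hf"  # placeholder: swap in the DynamicK-Tuning checkpoint
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(
    MODEL, torch_dtype=torch.float16, device_map="auto"
)
model.eval()

# Token ids for the four answer letters, used to rank choices by logit.
letters = ["A", "B", "C", "D"]
letter_ids = [tok.encode(l, add_special_tokens=False)[-1] for l in letters]

ds = load_dataset("cais/mmlu", "all", split="test")
correct = total = 0
for ex in ds:
    # Format the question and choices, then ask for the answer letter.
    prompt = ex["question"] + "\n"
    for letter, choice in zip(letters, ex["choices"]):
        prompt += f"{letter}. {choice}\n"
    prompt += "Answer:"
    inputs = tok(prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        next_token_logits = model(**inputs).logits[0, -1]
    # Predict the letter whose token has the highest next-token logit.
    pred = max(range(4), key=lambda i: next_token_logits[letter_ids[i]].item())
    correct += int(pred == ex["answer"])
    total += 1

print(f"MMLU accuracy: {correct / total:.4f}")
```

A dedicated harness (e.g., few-shot prompting with per-subject exemplars) would likely be needed to match the reported 48.35; this sketch only illustrates the scoring mechanics.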