# DynamicK-Tuning (work in progress; we will update this repo)

## News

- [2023.10.27] We release the source code for the proposed DynamicK-Tuning method.

## Highlight

- Our preliminary experiments show that DynamicK-Tuning can improve downstream instruction tuning of large language models. For example, when fine-tuned on the Alpaca dataset, the Llama2-7B model achieves 48.35 on the MMLU benchmark with DynamicK-Tuning, compared to 46.49 with standard full fine-tuning.
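As a quick sanity check on the highlight above, the reported numbers correspond to the following absolute gain (both scores are taken directly from this README):

```python
# Reported MMLU scores for Llama2-7B fine-tuned on Alpaca (from this README).
dynamick_mmlu = 48.35  # with DynamicK-Tuning
full_ft_mmlu = 46.49   # standard full fine-tuning baseline

gain = round(dynamick_mmlu - full_ft_mmlu, 2)
print(gain)  # absolute improvement in MMLU points -> 1.86
```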

## Environment Preparation
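The repo does not yet document its setup steps; a typical preparation flow for an instruction-tuning codebase might look like the sketch below. The environment name, Python version, and dependency file name are all assumptions, not part of this repo yet.

```shell
# Hypothetical setup sketch; dependencies are not listed in the repo yet,
# so the Python version and requirements file name are assumptions.
conda create -n dynamick python=3.10 -y
conda activate dynamick
pip install -r requirements.txt  # assumed dependency file
```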

## Run DynamicK-Tuning

## Evaluation

We evaluate our models on the MMLU benchmark.
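A hedged sketch of the kind of scoring an MMLU-style multiple-choice evaluation performs: compare predicted answer letters against gold labels and report accuracy. Dataset loading and model inference are omitted, and the function name is illustrative, not this repo's API.

```python
def accuracy(predictions, references):
    """Return the fraction of multiple-choice answers predicted correctly."""
    if len(predictions) != len(references):
        raise ValueError("prediction/reference length mismatch")
    correct = sum(p == r for p, r in zip(predictions, references))
    return correct / len(references)

# Toy example: 3 of 4 answer letters match the gold labels.
preds = ["A", "B", "C", "D"]
golds = ["A", "B", "C", "A"]
print(accuracy(preds, golds))  # 0.75
```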