
DynamicK-Tuning (work in progress; we will keep updating this repo)

News

[2023.10.27] We release the source code for the proposed DynamicK-Tuning method.

Highlight

  • Our preliminary experiments show that DynamicK-Tuning can improve downstream instruction tuning of large language models. For example, when instruction-tuned on the Alpaca dataset, Llama2-7B reaches 48.35 on the MMLU benchmark with DynamicK-Tuning, versus 46.49 with standard full fine-tuning. A simplified sketch of the neuron-selective idea is given below.
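
To make the idea concrete, here is a minimal, hypothetical sketch of neuron-selective tuning: only the top-k output neurons of each linear layer receive gradient updates, while all other neurons stay frozen for that step. This is not the authors' exact DynamicK-Tuning algorithm — the real selection criterion and per-layer k live in this repository's code. We assume a per-neuron gradient-norm ranking purely for illustration; `mask_to_top_k_neurons` and `k=4` are names and values we made up.

```python
import torch
import torch.nn as nn

def mask_to_top_k_neurons(linear: nn.Linear, k: int) -> None:
    """Zero the gradients of all but the k output neurons (weight rows)
    with the largest gradient norm, so only those neurons are updated."""
    grad = linear.weight.grad
    if grad is None:
        return
    per_neuron = grad.norm(dim=1)                  # one score per output neuron
    keep = per_neuron.topk(min(k, per_neuron.numel())).indices
    mask = torch.zeros_like(per_neuron, dtype=torch.bool)
    mask[keep] = True
    linear.weight.grad[~mask] = 0.0                # freeze non-selected rows
    if linear.bias is not None and linear.bias.grad is not None:
        linear.bias.grad[~mask] = 0.0

# Toy usage: a 2-layer MLP where only k=4 neurons per layer are tuned each step.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
x, y = torch.randn(8, 16), torch.randint(0, 2, (8,))
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()
for module in model.modules():
    if isinstance(module, nn.Linear):
        mask_to_top_k_neurons(module, k=4)         # the "dynamic" k could vary per step/layer
opt.step()
```

Because the masking happens between `backward()` and `step()`, it composes with any optimizer; in a full LLM setting one would typically apply it only to the FFN layers of each transformer block.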

Environment Preparation

Run DynamicK-Tuning

Evaluation

We evaluate our instruction-tuned models on downstream benchmarks such as MMLU. A hedged sketch of multiple-choice scoring in the MMLU style follows.
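
The sketch below scores each answer option by the log-probability the model assigns to it after the prompt and picks the argmax. This is a common way to evaluate MMLU-style questions, not necessarily the exact pipeline used by this repo; the checkpoint name is a placeholder (Llama-2 weights are gated — substitute any causal LM, e.g. `gpt2`, to test), and the question is a toy example.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-hf"            # placeholder checkpoint
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

question = ("Which planet is known as the Red Planet?\n"
            "A. Venus\nB. Mars\nC. Jupiter\nD. Saturn\nAnswer:")

@torch.no_grad()
def option_score(prompt: str, option: str) -> float:
    """Log-probability the model assigns to `option` right after `prompt`."""
    ids = tok(prompt, return_tensors="pt").input_ids
    opt_ids = tok(option, add_special_tokens=False, return_tensors="pt").input_ids
    full = torch.cat([ids, opt_ids], dim=1)
    logits = model(full).logits
    # logits at position t predict token t+1; slice the span covering the option
    log_probs = logits[0, ids.shape[1] - 1:-1].log_softmax(-1)
    return log_probs.gather(1, opt_ids[0].unsqueeze(1)).sum().item()

scores = {c: option_score(question, f" {c}") for c in "ABCD"}
print(max(scores, key=scores.get))                 # expected: "B"
```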
