From 0bda6b0c1cd632c49a9ec508c58d08fca1adf0e7 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?=E7=94=B3=E6=9D=89=E6=9D=89?= <467638484@qq.com>
Date: Sun, 22 Dec 2024 23:35:23 +0800
Subject: [PATCH] update about
---
content/about/index.md | 4 +
public/about/index.html | 120 +++++++++++++++---
.../index.html" | 4 +-
.../index.html" | 4 +-
.../index.html" | 4 +-
public/articles/index.html | 6 +-
.../index.html" | 4 +-
.../index.html" | 4 +-
.../index.html" | 4 +-
.../index.html" | 4 +-
.../index.html" | 12 +-
.../index.html" | 4 +-
.../index.html" | 4 +-
.../index.html" | 4 +-
.../index.html" | 6 +-
.../index.html" | 4 +-
.../index.html" | 4 +-
.../index.html" | 4 +-
.../index.html" | 4 +-
.../index.html" | 4 +-
.../index.html" | 2 +-
public/guide/index.html | 6 +-
public/index.html | 2 +-
public/index.json | 2 +-
public/roadmap/index.html | 4 +-
public/tags/ai/index.html | 2 +-
public/tags/llm/index.html | 2 +-
.../index.html" | 2 +-
.../index.html" | 2 +-
29 files changed, 159 insertions(+), 73 deletions(-)
diff --git a/content/about/index.md b/content/about/index.md
index 2dc9b59..5bd5d69 100644
--- a/content/about/index.md
+++ b/content/about/index.md
@@ -89,6 +89,10 @@ Studying at School of Electronic and Information Engineering, majoring in commun
I am a contributor to many open source projects on GitHub, which are shown below.
+{{< github repo="vllm-project/vllm" >}}
+
+
+
{{< github repo="ggerganov/llama.cpp" >}}
diff --git a/public/about/index.html b/public/about/index.html
index c1215ed..6558e31 100644
--- a/public/about/index.html
+++ b/public/about/index.html
@@ -106,7 +106,7 @@
"mainEntityOfPage": "true",
- "wordCount": "582"
+ "wordCount": "596"
}]
@@ -600,7 +600,7 @@
I am a contributor to many open source projects on GitHub, which are shown below.
-
+
- LLM inference in C/C++
+ A high-throughput and memory-efficient inference and serving engine for LLMs
- A generative speech model for daily dialogue.
+ LLM inference in C/C++
+ A generative speech model for daily dialogue.
+
+
+In addition, there is a way to adapt a model without updating its weights at all, called In-Context Learning: by providing task-related context and examples in the input prompt, the model can better understand our intent.
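+For example, a minimal sketch of in-context learning with a few-shot prompt might look like the following (the gpt-4o-mini model name and the openai client are only illustrative choices, not a requirement of the technique):
+```python
+from openai import OpenAI  # any chat-completion client works the same way
+
+# A few labeled examples are placed directly in the prompt -- no weight updates.
+few_shot_prompt = """Classify the sentiment of each review as Positive or Negative.
+
+Review: The battery lasts two full days, very impressed.
+Sentiment: Positive
+
+Review: The screen cracked after a week of normal use.
+Sentiment: Negative
+
+Review: Setup took five minutes and everything just worked.
+Sentiment:"""
+
+client = OpenAI()  # reads OPENAI_API_KEY from the environment
+resp = client.chat.completions.create(
+    model="gpt-4o-mini",  # illustrative model choice
+    messages=[{"role": "user", "content": few_shot_prompt}],
+)
+print(resp.choices[0].message.content)  # expected output: "Positive"
+```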
+Latest developments:
+At OpenAI's most recent launch event, a fine-tuning technique called RFT (Reinforcement Fine-Tuning) was also introduced; it refines the knowledge a large model has mastered in a reward-driven way. For more details, see the article What Is OpenAI's Reinforcement Fine-Tuning?.
The most mainstream parameter-efficient fine-tuning methods at the moment include Prompt Tuning, Prefix Tuning, LoRA, and QLoRA.
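+Taking LoRA as a concrete example, here is a minimal sketch (assuming the Hugging Face transformers and peft libraries; the base checkpoint and the target_modules list are illustrative and depend on the model architecture):
+```python
+from transformers import AutoModelForCausalLM
+from peft import LoraConfig, get_peft_model
+
+# Load a base model (example checkpoint; any causal LM works similarly).
+model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-0.5B")
+
+# LoRA keeps the pretrained weights frozen and learns small low-rank update
+# matrices for the selected projection layers, so only a tiny fraction of
+# parameters is trained.
+lora_config = LoraConfig(
+    r=8,                                  # rank of the low-rank updates
+    lora_alpha=16,                        # scaling factor for the updates
+    lora_dropout=0.05,
+    target_modules=["q_proj", "v_proj"],  # depends on the architecture
+    task_type="CAUSAL_LM",
+)
+model = get_peft_model(model, lora_config)
+model.print_trainable_parameters()        # e.g. well under 1% of all parameters
+```
+The wrapped model can then be trained with an ordinary training loop or the transformers Trainer; only the LoRA matrices receive gradients.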
-The paper "Scaling Down to Scale Up: A Guide to Parameter-Efficient Fine-Tuning" lays out the various parameter-efficient fine-tuning methods and the categories they fall into, as shown below:
+The paper "Scaling Down to Scale Up: A Guide to Parameter-Efficient Fine-Tuning" lays out the various parameter-efficient fine-tuning methods and the categories they fall into, as shown below: