Compare commits
52 Commits
| Author | SHA1 | Date |
|---|---|---|
|  | 549f32a743 |  |
|  | e3b3452a73 |  |
|  | 62350d975d |  |
|  | 8d84b326b8 |  |
|  | 16079a3cba |  |
|  | ff330a5487 |  |
|  | 94b3882d30 |  |
|  | 81544ca8b3 |  |
|  | b7f4dd835e |  |
|  | 7e2380e4ed |  |
|  | 7f3cfd54b0 |  |
|  | e083f2c629 |  |
|  | e33858f110 |  |
|  | da01a33152 |  |
|  | 8ca920a114 |  |
|  | 5f3d449a66 |  |
|  | 13735e7dfb |  |
|  | a38d5c3a25 |  |
|  | 5bae637c67 |  |
|  | 12e488ba80 |  |
|  | ad30c63c69 |  |
|  | a116eff7df |  |
|  | 01bc355dde |  |
|  | 8e05f3c360 |  |
|  | fde988dd4e |  |
|  | 91401ad14f |  |
|  | 280194647c |  |
|  | 2e0a542f33 |  |
|  | b988694da7 |  |
|  | 512c4d0f73 |  |
|  | 5525fb1470 |  |
|  | 4db735e026 |  |
|  | c8c79c39d1 |  |
|  | bcfb76d8ca |  |
|  | 2d9aaf8fc9 |  |
|  | 8a3905c09a |  |
|  | 54cd8a46fa |  |
|  | 1b83bf261a |  |
|  | 2a7d22dab1 |  |
|  | f7494b0cfb |  |
|  | 9ca91d59ec |  |
|  | 11feaa6e68 |  |
|  | 18d4b2304e |  |
|  | 2f45e9c33a |  |
|  | f7df10cb66 |  |
|  | 46e9a2f5b2 |  |
|  | 69b8d2e0a1 |  |
|  | 0ddd2e9fea |  |
|  | 01c95f5bc4 |  |
|  | e0bf44d82f |  |
|  | f328e84ea7 |  |
|  | c81f5015a1 |  |
2  .gitattributes (vendored)

@@ -1,3 +1,5 @@
* text=auto eol=lf

backend-python/rwkv_pip/** linguist-vendored
backend-python/wkv_cuda_utils/** linguist-vendored
backend-python/get-pip.py linguist-vendored
9  .github/dependabot.yml (vendored, new file)

@@ -0,0 +1,9 @@
version: 2
updates:
  - package-ecosystem: "github-actions"
    directory: "/"
    schedule:
      interval: "weekly"
    commit-message:
      prefix: "chore"
      include: "scope"
2  .github/workflows/release.yml (vendored)

@@ -98,6 +98,7 @@ jobs:
rm ./backend-python/get-pip.py
rm ./backend-python/rwkv_pip/cpp/librwkv.dylib
rm ./backend-python/rwkv_pip/cpp/rwkv.dll
rm ./backend-python/rwkv_pip/webgpu/web_rwkv_py.cp310-win_amd64.pyd
make
mv build/bin/RWKV-Runner build/bin/RWKV-Runner_linux_x64

@@ -124,6 +125,7 @@ jobs:
rm ./backend-python/get-pip.py
rm ./backend-python/rwkv_pip/cpp/rwkv.dll
rm ./backend-python/rwkv_pip/cpp/librwkv.so
rm ./backend-python/rwkv_pip/webgpu/web_rwkv_py.cp310-win_amd64.pyd
make
cp build/darwin/Readme_Install.txt build/bin/Readme_Install.txt
cp build/bin/RWKV-Runner.app/Contents/MacOS/RWKV-Runner build/bin/RWKV-Runner_darwin_universal
@@ -1,13 +1,13 @@
## Changes

- rwkv.cpp python38 compatibility
- improve rwkv.cpp operation prompts
- add load failed traceback
- fix windows cmd waiting
- abc music inference support
- basic abc frontend support
- fix finetune errorsMap ($modelInfo)

## Install

- Windows: https://github.com/josStorer/RWKV-Runner/blob/master/build/windows/Readme_Install.txt
- MacOS: https://github.com/josStorer/RWKV-Runner/blob/master/build/darwin/Readme_Install.txt
- Linux: https://github.com/josStorer/RWKV-Runner/blob/master/build/linux/Readme_Install.txt
- Server-Deploy-Examples: https://github.com/josStorer/RWKV-Runner/tree/master/deploy-examples
- Simple Deploy Example: https://github.com/josStorer/RWKV-Runner/blob/master/README.md#simple-deploy-example
- Server Deploy Examples: https://github.com/josStorer/RWKV-Runner/tree/master/deploy-examples
31  README.md

@@ -47,13 +47,28 @@ English | [简体中文](README_ZH.md) | [日本語](README_JA.md)

</div>

#### Tip: You can deploy [backend-python](./backend-python/) on a server and use this program as a client only. Fill in your server address in the Settings `API URL`.
## Tips

#### Default configs has enabled custom CUDA kernel acceleration, which is much faster and consumes much less VRAM. If you encounter possible compatibility issues (output garbled), go to the Configs page and turn off `Use Custom CUDA kernel to Accelerate`, or try to upgrade your gpu driver.
- You can deploy [backend-python](./backend-python/) on a server and use this program as a client only. Fill in
  your server address in the Settings `API URL`.

#### If Windows Defender claims this is a virus, you can try downloading [v1.3.7_win.zip](https://github.com/josStorer/RWKV-Runner/releases/download/v1.3.7/RWKV-Runner_win.zip) and letting it update automatically to the latest version, or add it to the trusted list (`Windows Security` -> `Virus & threat protection` -> `Manage settings` -> `Exclusions` -> `Add or remove exclusions` -> `Add an exclusion` -> `Folder` -> `RWKV-Runner`).
- If you are deploying and providing public services, please limit the request size through API gateway to prevent
  excessive resource usage caused by submitting overly long prompts. Additionally, please restrict the upper limit of
  requests' max_tokens based on your actual
  situation: https://github.com/josStorer/RWKV-Runner/blob/master/backend-python/utils/rwkv.py#L567, the default is set
  as le=102400, which may result in significant resource consumption for individual responses in extreme cases.

#### For different tasks, adjusting API parameters can achieve better results. For example, for translation tasks, you can try setting Temperature to 1 and Top_P to 0.3.
- Default configs has enabled custom CUDA kernel acceleration, which is much faster and consumes much less VRAM. If you
  encounter possible compatibility issues (output garbled), go to the Configs page and turn
  off `Use Custom CUDA kernel to Accelerate`, or try to upgrade your gpu driver.

- If Windows Defender claims this is a virus, you can try
  downloading [v1.3.7_win.zip](https://github.com/josStorer/RWKV-Runner/releases/download/v1.3.7/RWKV-Runner_win.zip)
  and letting it update automatically to the latest version, or add it to the trusted
  list (`Windows Security` -> `Virus & threat protection` -> `Manage settings` -> `Exclusions` -> `Add or remove exclusions` -> `Add an exclusion` -> `Folder` -> `RWKV-Runner`).

- For different tasks, adjusting API parameters can achieve better results. For example, for translation tasks, you can
  try setting Temperature to 1 and Top_P to 0.3.

## Features

@@ -168,6 +183,10 @@ Tip: You can download https://github.com/josStorer/sgm_plus and unzip it to the
to use it as an offline sound source. Please note that if you are compiling the program from source code, do not place
it in the source code directory.

If you don't have a MIDI keyboard, you can use virtual MIDI input software like `Virtual Midi Controller 3 LE`, along
with [loopMIDI](https://www.tobias-erichsen.de/wp-content/uploads/2020/01/loopMIDISetup_1_0_16_27.zip), to use a regular
computer keyboard as MIDI input.

### USB MIDI Connection

- USB MIDI devices are plug-and-play, and you can select your input device in the Composition page

@@ -206,12 +225,16 @@ it in the source code directory.

## Related Repositories:

- RWKV-5-World: https://huggingface.co/BlinkDL/rwkv-5-world/tree/main
- RWKV-4-World: https://huggingface.co/BlinkDL/rwkv-4-world/tree/main
- RWKV-4-Raven: https://huggingface.co/BlinkDL/rwkv-4-raven/tree/main
- ChatRWKV: https://github.com/BlinkDL/ChatRWKV
- RWKV-LM: https://github.com/BlinkDL/RWKV-LM
- RWKV-LM-LoRA: https://github.com/Blealtan/RWKV-LM-LoRA
- MIDI-LLM-tokenizer: https://github.com/briansemrau/MIDI-LLM-tokenizer
- ai00_rwkv_server: https://github.com/cgisky1980/ai00_rwkv_server
- rwkv.cpp: https://github.com/saharNooby/rwkv.cpp
- web-rwkv-py: https://github.com/cryscan/web-rwkv-py

## Preview
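The Temperature/Top_P tip above maps directly onto the request body of the backend's OpenAI-style API. A minimal sketch of a translation request; the local URL, the `/chat/completions` route, and the response shape are assumptions based on the README's OpenAI-compatibility claim, not taken from this diff:

```python
# Hedged sketch: translation request against a locally deployed backend-python
# instance, using the README's suggested parameters for translation tasks.
import requests

resp = requests.post(
    "http://127.0.0.1:8000/chat/completions",  # assumed host/port/route
    json={
        "messages": [
            {"role": "user", "content": "Translate to French: The weather is nice today."}
        ],
        "temperature": 1,  # README suggestion for translation
        "top_p": 0.3,
        "stream": False,
    },
)
print(resp.json()["choices"][0]["message"]["content"])
```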
29  README_JA.md

@@ -47,13 +47,26 @@

</div>

#### ヒント:サーバーに[backend-python](./backend-python/)をデプロイし、このプログラムをクライアントとして使用することができます。設定された`API URL`にサーバーアドレスを入力してください。
## ヒント

#### デフォルトの設定はカスタム CUDA カーネルアクセラレーションを有効にしています。互換性の問題 (文字化けを出力する) が発生する可能性がある場合は、コンフィグページに移動し、`Use Custom CUDA kernel to Accelerate` をオフにしてください、あるいは、GPUドライバーをアップグレードしてみてください。
- サーバーに [backend-python](./backend-python/)
  をデプロイし、このプログラムをクライアントとして使用することができます。設定された`API URL`にサーバーアドレスを入力してください。

#### Windows Defender がこれをウイルスだと主張する場合は、[v1.3.7_win.zip](https://github.com/josStorer/RWKV-Runner/releases/download/v1.3.7/RWKV-Runner_win.zip) をダウンロードして最新版に自動更新させるか、信頼済みリストに追加してみてください (`Windows Security` -> `Virus & threat protection` -> `Manage settings` -> `Exclusions` -> `Add or remove exclusions` -> `Add an exclusion` -> `Folder` -> `RWKV-Runner`)。
- もし、あなたがデプロイし、外部に公開するサービスを提供している場合、APIゲートウェイを使用してリクエストのサイズを制限し、
  長すぎるプロンプトの提出がリソースを占有しないようにしてください。さらに、実際の状況に応じて、リクエストの max_tokens
  の上限を制限してください:https://github.com/josStorer/RWKV-Runner/blob/master/backend-python/utils/rwkv.py#L567
  、デフォルトは le=102400 ですが、極端な場合には単一の応答が大量のリソースを消費する可能性があります。

#### 異なるタスクについては、API パラメータを調整することで、より良い結果を得ることができます。例えば、翻訳タスクの場合、Temperature を 1 に、Top_P を 0.3 に設定してみてください。
- デフォルトの設定はカスタム CUDA カーネルアクセラレーションを有効にしています。互換性の問題 (文字化けを出力する)
  が発生する可能性がある場合は、コンフィグページに移動し、`Use Custom CUDA kernel to Accelerate`
  をオフにしてください、あるいは、GPUドライバーをアップグレードしてみてください。

- Windows Defender
  がこれをウイルスだと主張する場合は、[v1.3.7_win.zip](https://github.com/josStorer/RWKV-Runner/releases/download/v1.3.7/RWKV-Runner_win.zip)
  をダウンロードして最新版に自動更新させるか、信頼済みリストに追加してみてください (`Windows Security` -> `Virus & threat protection` -> `Manage settings` -> `Exclusions` -> `Add or remove exclusions` -> `Add an exclusion` -> `Folder` -> `RWKV-Runner`)。

- 異なるタスクについては、API パラメータを調整することで、より良い結果を得ることができます。例えば、翻訳タスクの場合、Temperature
  を 1 に、Top_P を 0.3 に設定してみてください。

## 特徴

@@ -167,6 +180,10 @@ Tip: You can download https://github.com/josStorer/sgm_plus and unzip it to the
to use it as an offline sound source. Please note that if you are compiling the program from source code, do not place
it in the source code directory.

MIDIキーボードをお持ちでない場合、`Virtual Midi Controller 3 LE`
などの仮想MIDI入力ソフトウェアを使用することができます。[loopMIDI](https://www.tobias-erichsen.de/wp-content/uploads/2020/01/loopMIDISetup_1_0_16_27.zip)
を組み合わせて、通常のコンピュータキーボードをMIDI入力として使用できます。

### USB MIDI Connection

- USB MIDI devices are plug-and-play, and you can select your input device in the Composition page

@@ -205,12 +222,16 @@ it in the source code directory.

## 関連リポジトリ:

- RWKV-5-World: https://huggingface.co/BlinkDL/rwkv-5-world/tree/main
- RWKV-4-World: https://huggingface.co/BlinkDL/rwkv-4-world/tree/main
- RWKV-4-Raven: https://huggingface.co/BlinkDL/rwkv-4-raven/tree/main
- ChatRWKV: https://github.com/BlinkDL/ChatRWKV
- RWKV-LM: https://github.com/BlinkDL/RWKV-LM
- RWKV-LM-LoRA: https://github.com/Blealtan/RWKV-LM-LoRA
- MIDI-LLM-tokenizer: https://github.com/briansemrau/MIDI-LLM-tokenizer
- ai00_rwkv_server: https://github.com/cgisky1980/ai00_rwkv_server
- rwkv.cpp: https://github.com/saharNooby/rwkv.cpp
- web-rwkv-py: https://github.com/cryscan/web-rwkv-py

## Preview
24  README_ZH.md

@@ -46,13 +46,22 @@ API兼容的接口,这意味着一切ChatGPT客户端都是RWKV客户端。

</div>

#### 小贴士:你可以在服务器部署[backend-python](./backend-python/),然后将此程序仅用作客户端,在设置的`API URL`中填入你的服务器地址
## 小贴士

#### 预设配置已经开启自定义CUDA算子加速,速度更快,且显存消耗更少。如果你遇到可能的兼容性(输出乱码)问题,前往配置页面,关闭`使用自定义CUDA算子加速`,或更新你的显卡驱动
- 你可以在服务器部署[backend-python](./backend-python/),然后将此程序仅用作客户端,在设置的`API URL`中填入你的服务器地址

#### 如果Windows Defender说这是一个病毒,你可以尝试下载[v1.3.7_win.zip](https://github.com/josStorer/RWKV-Runner/releases/download/v1.3.7/RWKV-Runner_win.zip),然后让其自动更新到最新版,或添加信任 (`Windows Security` -> `Virus & threat protection` -> `Manage settings` -> `Exclusions` -> `Add or remove exclusions` -> `Add an exclusion` -> `Folder` -> `RWKV-Runner`)
- 如果你正在部署并对外提供公开服务,请通过API网关限制请求大小,避免过长的prompt提交占用资源。此外,请根据你的实际情况,限制请求的
  max_tokens 上限: https://github.com/josStorer/RWKV-Runner/blob/master/backend-python/utils/rwkv.py#L567,
  默认le=102400, 这可能导致极端情况下单个响应消耗大量资源

#### 对于不同的任务,调整API参数会获得更好的效果,例如对于翻译任务,你可以尝试设置Temperature为1,Top_P为0.3
- 预设配置已经开启自定义CUDA算子加速,速度更快,且显存消耗更少。如果你遇到可能的兼容性(输出乱码)
  问题,前往配置页面,关闭`使用自定义CUDA算子加速`,或更新你的显卡驱动

- 如果 Windows Defender
  说这是一个病毒,你可以尝试下载[v1.3.7_win.zip](https://github.com/josStorer/RWKV-Runner/releases/download/v1.3.7/RWKV-Runner_win.zip),
  然后让其自动更新到最新版,或添加信任 (`Windows Security` -> `Virus & threat protection` -> `Manage settings` -> `Exclusions` -> `Add or remove exclusions` -> `Add an exclusion` -> `Folder` -> `RWKV-Runner`)

- 对于不同的任务,调整API参数会获得更好的效果,例如对于翻译任务,你可以尝试设置Temperature为1,Top_P为0.3

## 功能

@@ -161,6 +170,9 @@ for i in np.argsort(embeddings_cos_sim)[::-1]:
小贴士: 你可以下载 https://github.com/josStorer/sgm_plus, 并解压到程序的`assets/sound-font`目录, 以使用离线音源. 注意,
如果你正在从源码编译程序, 请不要将其放置在源码目录中

如果你没有MIDI键盘, 你可以使用像 `Virtual Midi Controller 3 LE` 这样的虚拟MIDI输入软件,
配合[loopMIDI](https://www.tobias-erichsen.de/wp-content/uploads/2020/01/loopMIDISetup_1_0_16_27.zip), 使用普通电脑键盘作为MIDI输入

### USB MIDI 连接

- USB MIDI设备是即插即用的, 你能够在作曲页面选择你的输入设备

@@ -192,12 +204,16 @@ for i in np.argsort(embeddings_cos_sim)[::-1]:

## 相关仓库:

- RWKV-5-World: https://huggingface.co/BlinkDL/rwkv-5-world/tree/main
- RWKV-4-World: https://huggingface.co/BlinkDL/rwkv-4-world/tree/main
- RWKV-4-Raven: https://huggingface.co/BlinkDL/rwkv-4-raven/tree/main
- ChatRWKV: https://github.com/BlinkDL/ChatRWKV
- RWKV-LM: https://github.com/BlinkDL/RWKV-LM
- RWKV-LM-LoRA: https://github.com/Blealtan/RWKV-LM-LoRA
- MIDI-LLM-tokenizer: https://github.com/briansemrau/MIDI-LLM-tokenizer
- ai00_rwkv_server: https://github.com/cgisky1980/ai00_rwkv_server
- rwkv.cpp: https://github.com/saharNooby/rwkv.cpp
- web-rwkv-py: https://github.com/cryscan/web-rwkv-py

## Preview
@@ -50,9 +50,12 @@ func (a *App) OnStartup(ctx context.Context) {
    os.Mkdir(a.exDir+"models", os.ModePerm)
    os.Mkdir(a.exDir+"lora-models", os.ModePerm)
    os.Mkdir(a.exDir+"finetune/json2binidx_tool/data", os.ModePerm)
    f, err := os.Create(a.exDir + "lora-models/train_log.txt")
    if err == nil {
        f.Close()
    trainLogPath := a.exDir + "lora-models/train_log.txt"
    if !a.FileExists(trainLogPath) {
        f, err := os.Create(trainLogPath)
        if err == nil {
            f.Close()
        }
    }

    a.downloadLoop()
@@ -10,7 +10,7 @@ import (
    "strings"
)

func (a *App) StartServer(python string, port int, host string, webui bool, rwkvBeta bool, rwkvcpp bool) (string, error) {
func (a *App) StartServer(python string, port int, host string, webui bool, rwkvBeta bool, rwkvcpp bool, webgpu bool) (string, error) {
    var err error
    if python == "" {
        python, err = GetPython()

@@ -28,6 +28,9 @@ func (a *App) StartServer(python string, port int, host string, webui bool, rwkv
    if rwkvcpp {
        args = append(args, "--rwkv.cpp")
    }
    if webgpu {
        args = append(args, "--webgpu")
    }
    args = append(args, "--port", strconv.Itoa(port), "--host", host)
    return Cmd(args...)
}

@@ -55,6 +58,17 @@ func (a *App) ConvertSafetensors(modelPath string, outPath string) (string, erro
    return Cmd(args...)
}

func (a *App) ConvertSafetensorsWithPython(python string, modelPath string, outPath string) (string, error) {
    var err error
    if python == "" {
        python, err = GetPython()
    }
    if err != nil {
        return "", err
    }
    return Cmd(python, "./backend-python/convert_safetensors.py", "--input", modelPath, "--output", outPath)
}

func (a *App) ConvertGGML(python string, modelPath string, outPath string, Q51 bool) (string, error) {
    var err error
    if python == "" {
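For reference, the new `webgpu` parameter simply appends one more flag to the Python backend's command line. A rough Python equivalent of the invocation StartServer assembles; the `main.py` entry-point name is an assumption, while the flag names appear verbatim in the diff:

```python
# Hedged sketch of the command the Go wrapper above launches.
import subprocess

subprocess.run(
    ["python3", "./backend-python/main.py",  # assumed entry script
     "--webgpu",                             # new flag added in this diff
     "--port", "8000", "--host", "127.0.0.1"]
)
```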
@@ -3,6 +3,7 @@ package backend_golang
import (
    "archive/zip"
    "bufio"
    "crypto/sha256"
    "embed"
    "errors"
    "fmt"

@@ -112,9 +113,19 @@ func CopyEmbed(efs embed.FS) error {
            return err
        }

        err = os.WriteFile(path, content, 0644)
        if err != nil {
            return err
        executeWrite := true
        existedContent, err := os.ReadFile(path)
        if err == nil {
            if fmt.Sprintf("%x", sha256.Sum256(existedContent)) == fmt.Sprintf("%x", sha256.Sum256(content)) {
                executeWrite = false
            }
        }

        if executeWrite {
            err = os.WriteFile(path, content, 0644)
            if err != nil {
                return err
            }
        }

        return nil
75  backend-python/convert_safetensors.py (vendored)

@@ -1,9 +1,8 @@
import json
import collections
import numpy
import os
import sys
import copy
import torch
from safetensors.torch import load_file, save_file
from safetensors.torch import serialize_file, load_file

import argparse

@@ -26,35 +25,63 @@ def rename_key(rename, name):


def convert_file(pt_filename: str, sf_filename: str, rename={}, transpose_names=[]):
    loaded = torch.load(pt_filename, map_location="cpu")
    loaded: collections.OrderedDict = torch.load(pt_filename, map_location="cpu")
    if "state_dict" in loaded:
        loaded = loaded["state_dict"]

    loaded = {k: v.clone().half() for k, v in loaded.items()}
    # for k, v in loaded.items():
    #     print(f'{k}\t{v.shape}\t{v.dtype}')
    kk = list(loaded.keys())
    version = 4
    for x in kk:
        if "ln_x" in x:
            version = max(5, version)
        if "gate.weight" in x:
            version = max(5.1, version)
        if int(version) == 5 and "att.time_decay" in x:
            if len(loaded[x].shape) > 1:
                if loaded[x].shape[1] > 1:
                    version = max(5.2, version)
        if "time_maa" in x:
            version = max(6, version)

    loaded = {rename_key(rename, k).lower(): v.contiguous() for k, v in loaded.items()}
    # For tensors to be contiguous
    for k, v in loaded.items():
    print(f"Model detected: v{version:.1f}")

    if version == 5.1:
        _, n_emb = loaded["emb.weight"].shape
        for k in kk:
            if "time_decay" in k or "time_faaaa" in k:
                # print(k, mm[k].shape)
                loaded[k] = (
                    loaded[k].unsqueeze(1).repeat(1, n_emb // loaded[k].shape[0])
                )

    for k in kk:
        new_k = rename_key(rename, k).lower()
        v = loaded[k].half()
        del loaded[k]
        for transpose_name in transpose_names:
            if transpose_name in k:
                loaded[k] = v.transpose(0, 1)

    loaded = {k: v.clone().half().contiguous() for k, v in loaded.items()}

    for k, v in loaded.items():
        print(f"{k}\t{v.shape}\t{v.dtype}")
                v = v.transpose(0, 1)
        print(f"{new_k}\t{v.shape}\t{v.dtype}")
        loaded[new_k] = {
            "dtype": str(v.dtype).split(".")[-1],
            "shape": v.shape,
            "data": v.numpy().tobytes(),
        }

    dirname = os.path.dirname(sf_filename)
    os.makedirs(dirname, exist_ok=True)
    save_file(loaded, sf_filename, metadata={"format": "pt"})
    reloaded = load_file(sf_filename)
    for k in loaded:
        pt_tensor = loaded[k]
        sf_tensor = reloaded[k]
        if not torch.equal(pt_tensor, sf_tensor):
            raise RuntimeError(f"The output tensors do not match for key {k}")
    serialize_file(loaded, sf_filename, metadata={"format": "pt"})
    # reloaded = load_file(sf_filename)
    # for k in loaded:
    #     pt_tensor = torch.Tensor(
    #         numpy.frombuffer(
    #             bytearray(loaded[k]["data"]),
    #             dtype=getattr(numpy, loaded[k]["dtype"]),
    #         ).reshape(loaded[k]["shape"])
    #     )
    #     sf_tensor = reloaded[k]
    #     if not torch.equal(pt_tensor, sf_tensor):
    #         raise RuntimeError(f"The output tensors do not match for key {k}")


if __name__ == "__main__":
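The rewritten converter detects the model version from key names (ln_x implies v5, gate.weight v5.1, a 2-D att.time_decay v5.2, time_maa v6) and streams raw tensor bytes through serialize_file instead of holding a second tensor copy for save_file. It is driven exactly as the Go helper ConvertSafetensorsWithPython above drives it; only the model file names in this sketch are placeholders:

```python
# Mirrors the command built by ConvertSafetensorsWithPython; paths are hypothetical.
import subprocess

subprocess.run(
    [
        "python3",
        "./backend-python/convert_safetensors.py",
        "--input", "models/RWKV-5-World-1B5.pth",  # placeholder input checkpoint
        "--output", "models/RWKV-5-World-1B5.st",  # safetensors output
    ],
    check=True,
)
```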
@@ -37,6 +37,11 @@ def get_args(args: Union[Sequence[str], None] = None):
        action="store_true",
        help="whether to use rwkv.cpp (default: False)",
    )
    group.add_argument(
        "--webgpu",
        action="store_true",
        help="whether to use webgpu (default: False)",
    )
    args = parser.parse_args(args)

    return args
@@ -8,7 +8,6 @@ import base64
from fastapi import APIRouter, Request, status, HTTPException
from sse_starlette.sse import EventSourceResponse
from pydantic import BaseModel, Field
import numpy as np
import tiktoken
from utils.rwkv import *
from utils.log import quick_log

@@ -335,6 +334,8 @@ The following is a coherent verbose detailed conversation between a girl named {
        body.stop.append(f"\n\n{bot_code}")
    elif body.stop is None:
        body.stop = default_stop
    if not body.presystem:
        body.stop.append("\n\n")

    if body.stream:
        return EventSourceResponse(

@@ -396,6 +397,8 @@ class EmbeddingsBody(BaseModel):


def embedding_base64(embedding: List[float]) -> str:
    import numpy as np

    return base64.b64encode(np.array(embedding).astype(np.float32)).decode("utf-8")
@@ -37,10 +37,14 @@ def text_to_midi(body: TextToMidiBody):
async def midi_to_text(file_data: UploadFile):
    vocab_config = "backend-python/utils/midi_vocab_config.json"
    cfg = VocabConfig.from_json(vocab_config)
    filter_config = "backend-python/utils/midi_filter_config.json"
    filter_cfg = FilterConfig.from_json(filter_config)
    mid = mido.MidiFile(file=file_data.file)
    text = convert_midi_to_str(cfg, mid)
    output_list = convert_midi_to_str(cfg, filter_cfg, mid)
    if len(output_list) == 0:
        raise HTTPException(status.HTTP_400_BAD_REQUEST, "bad midi file")

    return {"text": text}
    return {"text": output_list[0]}


class TxtToMidiBody(BaseModel):
@@ -87,19 +87,26 @@ def add_state(body: AddStateBody):
        raise HTTPException(status.HTTP_400_BAD_REQUEST, "trie not loaded")

    import torch
    import numpy as np

    try:
        devices: List[torch.device] = []
        state: Union[Any, None] = None

        if body.state is not None:
            if type(body.state) == list and hasattr(body.state[0], "device"):  # torch
                devices = [tensor.device for tensor in body.state]
                state = [tensor.cpu() for tensor in body.state]
            elif type(body.state) == np.ndarray:  # rwkv.cpp
                state = body.state
            else:  # WebGPU
                state = body.state.back()

        id: int = trie.insert(body.prompt)
        devices: List[torch.device] = [
            (tensor.device if hasattr(tensor, "device") else torch.device("cpu"))
            for tensor in body.state
        ]
        dtrie[id] = {
            "tokens": copy.deepcopy(body.tokens),
            "state": [tensor.cpu() for tensor in body.state]
            if hasattr(body.state[0], "device")
            else copy.deepcopy(body.state),
            "logits": copy.deepcopy(body.logits),
            "tokens": body.tokens,
            "state": state,
            "logits": body.logits,
            "devices": devices,
        }

@@ -174,6 +181,7 @@ def longest_prefix_state(body: LongestPrefixStateBody, request: Request):
        raise HTTPException(status.HTTP_400_BAD_REQUEST, "trie not loaded")

    import torch
    import numpy as np

    id = -1
    try:

@@ -182,17 +190,19 @@ def longest_prefix_state(body: LongestPrefixStateBody, request: Request):
    except:
        pass
    if id != -1:
        prompt: str = trie[id]
        v = dtrie[id]
        devices: List[torch.device] = v["devices"]
        prompt: str = trie[id]
        state: Union[Any, None] = v["state"]

        if type(state) == list and hasattr(state[0], "device"):  # torch
            state = [tensor.to(devices[i]) for i, tensor in enumerate(state)]

        quick_log(request, body, "Hit:\n" + prompt)
        return {
            "prompt": prompt,
            "tokens": v["tokens"],
            "state": [tensor.to(devices[i]) for i, tensor in enumerate(v["state"])]
            if hasattr(v["state"][0], "device")
            else v["state"],
            "state": state,
            "logits": v["logits"],
        }
    else:
2  backend-python/rwkv_pip/beta/model.py (vendored)

@@ -251,7 +251,7 @@ class RWKV(MyModule):
        )
        assert (
            w["_strategy"] == args.strategy_string
        )  # if you are using a new strategy, re-convert the model
        ), "model has been converted and does not match current strategy; if you are using a new strategy, re-convert the model"
        assert (
            float(w["_version"]) >= 0.7
        )  # sometimes you should re-convert using latest convert_model.py
2  backend-python/rwkv_pip/model.py (vendored)

@@ -342,7 +342,7 @@ class RWKV(MyModule):
        )
        assert (
            w["_strategy"] == args.strategy_string
        )  # if you are using a new strategy, re-convert the model
        ), "model has been converted and does not match current strategy; if you are using a new strategy, re-convert the model"
        assert (
            float(w["_version"]) >= 0.7
        )  # sometimes you should re-convert using latest convert_model.py
65532  backend-python/rwkv_pip/rwkv_vocab_v20230424_special_token.txt (vendored, new file)

File diff suppressed because it is too large
23  backend-python/rwkv_pip/utils.py (vendored)

@@ -34,6 +34,25 @@ class PIPELINE_ARGS:
        )


class ABC_TOKENIZER:
    def __init__(self):
        self.pad_token_id = 0
        self.bos_token_id = 2
        self.eos_token_id = 3

    def encode(self, text):
        ids = [ord(c) for c in text]
        return ids

    def decode(self, ids):
        txt = "".join(
            chr(idx) if idx > self.eos_token_id else ""
            for idx in ids
            if idx != self.eos_token_id
        )
        return txt


class PIPELINE:
    def __init__(self, model, WORD_NAME: str):
        self.model = model

@@ -48,6 +67,8 @@ class PIPELINE:
            self.tokenizer = TRIE_TOKENIZER(
                os.path.dirname(os.path.abspath(__file__)) + "/rwkv_vocab_v20230424.txt"
            )
        elif WORD_NAME == "abc_tokenizer":
            self.tokenizer = ABC_TOKENIZER()
        else:
            if WORD_NAME.endswith(".txt"):
                sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))

@@ -84,6 +105,8 @@ class PIPELINE:
        return e / e.sum(axis=axis, keepdims=True)

    def sample_logits(self, logits, temperature=1.0, top_p=0.85, top_k=0):
        if type(logits) == list:
            logits = np.array(logits)
        np_logits = type(logits) == np.ndarray
        if np_logits:
            probs = self.np_softmax(logits, axis=-1)
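The new ABC_TOKENIZER is character-level: each character maps to its Unicode code point, so ids at or below eos_token_id (3) behave as control tokens and everything else round-trips losslessly. A quick self-contained check (the ABC notation string is just sample data):

```python
# Standalone copy of the class added above, plus a round-trip sanity check.
class ABC_TOKENIZER:
    def __init__(self):
        self.pad_token_id = 0
        self.bos_token_id = 2
        self.eos_token_id = 3

    def encode(self, text):
        return [ord(c) for c in text]  # one token per character

    def decode(self, ids):
        return "".join(
            chr(idx) if idx > self.eos_token_id else ""  # drop control ids
            for idx in ids
            if idx != self.eos_token_id
        )

tok = ABC_TOKENIZER()
ids = tok.encode("X:1\nK:C\nCDEF|")                       # sample ABC snippet
assert tok.decode(ids) == "X:1\nK:C\nCDEF|"               # lossless round-trip
assert tok.decode([2] + ids + [3]) == "X:1\nK:C\nCDEF|"   # bos/eos stripped
```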
31  backend-python/rwkv_pip/webgpu/model.py (vendored, new file)

@@ -0,0 +1,31 @@
from typing import Any, List, Union

try:
    import web_rwkv_py as wrp
except ModuleNotFoundError:
    try:
        from . import web_rwkv_py as wrp
    except ImportError:
        raise ModuleNotFoundError(
            "web_rwkv_py not found, install it from https://github.com/cryscan/web-rwkv-py"
        )


class RWKV:
    def __init__(self, model_path: str, strategy: str = None):
        self.model = wrp.v5.Model(
            model_path,
            turbo=True,
            quant=32 if "i8" in strategy else None,
            quant_nf4=26 if "i4" in strategy else None,
        )
        self.w = {}  # fake weight
        self.w["emb.weight"] = [0] * wrp.peek_info(model_path).num_vocab

    def forward(self, tokens: List[int], state: Union[Any, None] = None):
        if type(state).__name__ == "BackedState":  # memory state
            gpu_state = wrp.v5.ModelState(self.model, 1)
            gpu_state.load(state)
        else:
            gpu_state = state
        return wrp.v5.run_one(self.model, tokens, gpu_state)
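A hedged usage sketch for this wrapper, following how utils/rwkv.py calls it later in this diff (forward returns logits plus the new state). The model path, strategy string, and token ids are placeholders; "i8"/"i4" substrings in the strategy select int8 or NF4 quantization as implemented in `__init__`:

```python
from rwkv_pip.webgpu.model import RWKV

model = RWKV("models/rwkv-5-world.st", strategy="fp16 i8")  # hypothetical file/strategy
out, state = model.forward([510, 444], state=None)          # arbitrary token ids
```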
BIN  backend-python/rwkv_pip/webgpu/web_rwkv_py.cp310-win_amd64.pyd (vendored, new file)

Binary file not shown.
71  backend-python/utils/midi.py (vendored)

@@ -52,6 +52,8 @@ class VocabConfig:
    bin_name_to_program_name: Dict[str, str]
    # Mapping from program number to instrument name.
    instrument_names: Dict[str, str]
    # Manual override for velocity bins. Each element is the max velocity value for that bin by index.
    velocity_bins_override: Optional[List[int]] = None

    def __post_init__(self):
        self.validate()

@@ -116,6 +118,12 @@ class VocabConfig:
            raise ValueError("velocity_bins must be at least 2")
        if len(self.bin_instrument_names) > 16:
            raise ValueError("bin_instruments must have at most 16 values")
        if self.velocity_bins_override:
            print("VocabConfig is using velocity_bins_override. Ignoring velocity_exp.")
            if len(self.velocity_bins_override) != self.velocity_bins:
                raise ValueError(
                    "velocity_bins_override must have same length as velocity_bins"
                )
        if (
            self.ch10_instrument_bin_name
            and self.ch10_instrument_bin_name not in self.bin_instrument_names

@@ -156,6 +164,11 @@ class VocabUtils:

    def velocity_to_bin(self, velocity: float) -> int:
        velocity = max(0, min(velocity, self.cfg.velocity_events - 1))
        if self.cfg.velocity_bins_override:
            for i, v in enumerate(self.cfg.velocity_bins_override):
                if velocity <= v:
                    return i
            return 0
        binsize = self.cfg.velocity_events / (self.cfg.velocity_bins - 1)
        if self.cfg.velocity_exp == 1.0:
            return ceil(velocity / binsize)

@@ -176,6 +189,8 @@ class VocabUtils:
        )

    def bin_to_velocity(self, bin: int) -> int:
        if self.cfg.velocity_bins_override:
            return self.cfg.velocity_bins_override[bin]
        binsize = self.cfg.velocity_events / (self.cfg.velocity_bins - 1)
        if self.cfg.velocity_exp == 1.0:
            return max(0, ceil(bin * binsize - 1))

@@ -358,13 +373,32 @@ class AugmentConfig:
    )


@dataclass
class FilterConfig:
    # Whether to filter out MIDI files with duplicate MD5 hashes.
    deduplicate_md5: bool
    # Minimum time delay between notes in a file before splitting into multiple documents.
    piece_split_delay: float
    # Minimum length of a piece in milliseconds.
    min_piece_length: float

    @classmethod
    def from_json(cls, path: str):
        with open(path, "r") as f:
            config = json.load(f)
        return cls(**config)


def mix_volume(velocity: int, volume: int, expression: int) -> float:
    return velocity * (volume / 127.0) * (expression / 127.0)


def convert_midi_to_str(
    cfg: VocabConfig, mid: mido.MidiFile, augment: AugmentValues = None
) -> str:
    cfg: VocabConfig,
    filter_cfg: FilterConfig,
    mid: mido.MidiFile,
    augment: AugmentValues = None,
) -> List[str]:
    utils = VocabUtils(cfg)
    if augment is None:
        augment = AugmentValues.default()

@@ -390,7 +424,9 @@ def convert_midi_to_str(
    }  # {channel: {(note, program) -> True}}
    started_flag = False

    output_list = []
    output = ["<start>"]
    output_length_ms = 0.0
    token_data_buffer: List[
        Tuple[int, int, int, float]
    ] = []  # need to sort notes between wait tokens

@@ -432,16 +468,33 @@ def convert_midi_to_str(
        token_data_buffer = []

    def consume_note_program_data(prog: int, chan: int, note: int, vel: float):
        nonlocal output, started_flag, delta_time_ms, cfg, utils, token_data_buffer
        nonlocal output, output_length_ms, started_flag, delta_time_ms, cfg, utils, token_data_buffer
        is_token_valid = (
            utils.prog_data_to_token_data(prog, chan, note, vel) is not None
        )
        if not is_token_valid:
            return

        if delta_time_ms > filter_cfg.piece_split_delay * 1000.0:
            # check if any notes are still held
            silent = True
            for channel in channel_notes.keys():
                if len(channel_notes[channel]) > 0:
                    silent = False
                    break
            if silent:
                flush_token_data_buffer()
                output.append("<end>")
                if output_length_ms > filter_cfg.min_piece_length * 1000.0:
                    output_list.append(" ".join(output))
                output = ["<start>"]
                output_length_ms = 0.0
                started_flag = False
        if started_flag:
            wait_tokens = utils.data_to_wait_tokens(delta_time_ms)
            if len(wait_tokens) > 0:
                flush_token_data_buffer()
                output_length_ms += delta_time_ms
                output += wait_tokens
        delta_time_ms = 0.0
        token_data_buffer.append((prog, chan, note, vel * augment.velocity_mod_factor))

@@ -510,7 +563,9 @@ def convert_midi_to_str(

    flush_token_data_buffer()
    output.append("<end>")
    return " ".join(output)
    if output_length_ms > filter_cfg.min_piece_length * 1000.0:
        output_list.append(" ".join(output))
    return output_list


def generate_program_change_messages(cfg: VocabConfig):

@@ -633,10 +688,10 @@ def token_to_midi_message(
            if utils.cfg.decode_fix_repeated_notes:
                if (channel, note) in state.active_notes:
                    del state.active_notes[(channel, note)]
                    yield mido.Message(
                        "note_off", note=note, time=ticks, channel=channel
                    ), state
                    ticks = 0
            yield mido.Message(
                "note_off", note=note, time=ticks, channel=channel
            ), state
            ticks = 0
            state.active_notes[(channel, note)] = state.total_time
            yield mido.Message(
                "note_on", note=note, velocity=velocity, time=ticks, channel=channel
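The two velocity-binning paths added above are easy to check numerically. A worked example outside the class; `velocity_events=128` and `velocity_bins=8` are hypothetical config values, not taken from this diff:

```python
# Worked example of velocity_to_bin: override table vs. the linear formula
# (the velocity_exp == 1.0 branch).
from math import ceil

velocity_events, velocity_bins = 128, 8  # hypothetical config

def velocity_to_bin(velocity, bins_override=None):
    velocity = max(0, min(velocity, velocity_events - 1))
    if bins_override:  # override path: first bin whose max value covers velocity
        for i, v in enumerate(bins_override):
            if velocity <= v:
                return i
        return 0
    binsize = velocity_events / (velocity_bins - 1)  # 128 / 7 ≈ 18.29
    return ceil(velocity / binsize)

print(velocity_to_bin(64))  # -> 4, since 64 / 18.29 ≈ 3.5 rounds up
print(velocity_to_bin(64, [0, 16, 32, 64, 96, 100, 110, 127]))  # -> 3
```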
5  backend-python/utils/midi_filter_config.json (new file)

@@ -0,0 +1,5 @@
{
    "deduplicate_md5": true,
    "piece_split_delay": 10000,
    "min_piece_length": 0
}
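This file is consumed by FilterConfig.from_json (defined in backend-python/utils/midi.py above); convert_midi_to_str then uses piece_split_delay and min_piece_length to split long silences into separate pieces and drop short ones. A minimal sketch mirroring the midi routes shown earlier; `song.mid` is a placeholder:

```python
from utils.midi import FilterConfig, VocabConfig, convert_midi_to_str
import mido

cfg = VocabConfig.from_json("backend-python/utils/midi_vocab_config.json")
filter_cfg = FilterConfig.from_json("backend-python/utils/midi_filter_config.json")
pieces = convert_midi_to_str(cfg, filter_cfg, mido.MidiFile("song.mid"))  # list of token strings
```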
@@ -8,15 +8,9 @@ from typing import Dict, Iterable, List, Tuple, Union, Type
from utils.log import quick_log
from fastapi import HTTPException
from pydantic import BaseModel, Field
import numpy as np
from routes import state_cache
import global_var


END_OF_TEXT = 0
END_OF_LINE_DOUBLE = 535


os.environ["TORCH_EXTENSIONS_DIR"] = f"{pathlib.Path(__file__).parent.parent.resolve()}"

@@ -29,6 +23,8 @@ class RWKVType(Enum):

class AbstractRWKV(ABC):
    def __init__(self, model, pipeline):
        self.EOS_ID = 0

        self.name = "rwkv"
        self.model = model
        self.pipeline = pipeline

@@ -68,6 +64,8 @@ class AbstractRWKV(ABC):
        pass

    def get_embedding(self, input: str, fast_mode: bool) -> Tuple[List[float], int]:
        import numpy as np

        if fast_mode:
            embedding, token_len = self.__fast_embedding(
                self.fix_tokens(self.pipeline.encode(input)), None

@@ -222,6 +220,8 @@ class AbstractRWKV(ABC):
    def generate(
        self, prompt: str, stop: Union[str, List[str], None] = None
    ) -> Iterable[Tuple[str, str, int, int]]:
        import numpy as np

        quick_log(None, None, "Generation Prompt:\n" + prompt)
        cache = None
        delta_prompt = prompt

@@ -231,14 +231,14 @@ class AbstractRWKV(ABC):
            )
        except HTTPException:
            pass
        if cache is None or cache["prompt"] == "":
        if cache is None or cache["prompt"] == "" or cache["state"] is None:
            self.model_state = None
            self.model_tokens = []
        else:
            delta_prompt = prompt[len(cache["prompt"]) :]
            self.model_state = copy.deepcopy(cache["state"])
            self.model_tokens = copy.deepcopy(cache["tokens"])
            logits = copy.deepcopy(cache["logits"])
            self.model_state = cache["state"]
            self.model_tokens = cache["tokens"]
            logits = cache["logits"]

        prompt_token_len = 0
        if delta_prompt != "":

@@ -271,7 +271,7 @@ class AbstractRWKV(ABC):
                logits, temperature=self.temperature, top_p=self.top_p, top_k=self.top_k
            )

            if token == END_OF_TEXT:
            if token == self.EOS_ID:
                yield response, "", prompt_token_len, completion_token_len
                break

@@ -398,7 +398,7 @@ class TextRWKV(AbstractRWKV):
    def fix_tokens(self, tokens) -> List[int]:
        if self.rwkv_type == RWKVType.World:
            return tokens
        if len(tokens) > 0 and tokens[-1] == END_OF_LINE_DOUBLE:
        if len(tokens) > 0 and tokens[-1] == 535:
            tokens = tokens[:-1] + [self.END_OF_LINE, self.END_OF_LINE]
        return tokens

@@ -456,7 +456,7 @@ The following is a coherent verbose detailed conversation between a girl named {
        pass


class MusicRWKV(AbstractRWKV):
class MusicMidiRWKV(AbstractRWKV):
    def __init__(self, model, pipeline):
        super().__init__(model, pipeline)

@@ -498,8 +498,45 @@ class MusicRWKV(AbstractRWKV):
        return " " + delta


class MusicAbcRWKV(AbstractRWKV):
    def __init__(self, model, pipeline):
        super().__init__(model, pipeline)

        self.EOS_ID = 3

        self.max_tokens_per_generation = 500
        self.temperature = 1
        self.top_p = 0.8
        self.top_k = 8

        self.rwkv_type = RWKVType.Music

    def adjust_occurrence(self, occurrence: Dict, token: int):
        pass

    def adjust_forward_logits(self, logits: List[float], occurrence: Dict, i: int):
        pass

    def fix_tokens(self, tokens) -> List[int]:
        return tokens

    def run_rnn(
        self, _tokens: List[str], newline_adj: int = 0
    ) -> Tuple[List[float], int]:
        tokens = [int(x) for x in _tokens]
        token_len = len(tokens)
        self.model_tokens += tokens
        out, self.model_state = self.model.forward(tokens, self.model_state)
        return out, token_len

    def delta_postprocess(self, delta: str) -> str:
        return delta


def get_tokenizer(tokenizer_len: int):
    tokenizer_dir = f"{pathlib.Path(__file__).parent.parent.resolve()}/rwkv_pip/"
    if tokenizer_len < 20096:
        return "abc_tokenizer"
    if tokenizer_len < 50277:
        return tokenizer_dir + "tokenizer-midi.json"
    elif tokenizer_len < 65536:

@@ -511,6 +548,7 @@ def get_tokenizer(tokenizer_len: int):
def RWKV(model: str, strategy: str, tokenizer: Union[str, None]) -> AbstractRWKV:
    rwkv_beta = global_var.get(global_var.Args).rwkv_beta
    rwkv_cpp = getattr(global_var.get(global_var.Args), "rwkv.cpp")
    webgpu = global_var.get(global_var.Args).webgpu

    if "midi" in model.lower() or "abc" in model.lower():
        os.environ["RWKV_RESCALE_LAYER"] = "999"

@@ -526,6 +564,11 @@ def RWKV(model: str, strategy: str, tokenizer: Union[str, None]) -> AbstractRWKV
        from rwkv_pip.cpp.model import (
            RWKV as Model,
        )
    elif webgpu:
        print("Using webgpu")
        from rwkv_pip.webgpu.model import (
            RWKV as Model,
        )
    else:
        from rwkv_pip.model import (
            RWKV as Model,

@@ -541,7 +584,8 @@ def RWKV(model: str, strategy: str, tokenizer: Union[str, None]) -> AbstractRWKV
    rwkv_map: dict[str, Type[AbstractRWKV]] = {
        "20B_tokenizer": TextRWKV,
        "rwkv_vocab_v20230424": TextRWKV,
        "tokenizer-midi": MusicRWKV,
        "tokenizer-midi": MusicMidiRWKV,
        "abc_tokenizer": MusicAbcRWKV,
    }
    tokenizer_name = os.path.splitext(os.path.basename(tokenizer))[0]
    rwkv: AbstractRWKV
5  build/darwin/Readme_Install.txt (vendored)

@@ -1,3 +1,8 @@
Client Download URL:
客户端下载地址:
クライアントのダウンロードURL:
https://github.com/josStorer/RWKV-Runner/releases/latest/download/RWKV-Runner_macos_universal.zip

For Mac and Linux users, please manually install Python 3.10 (usually the latest systems come with it built-in). You can specify the Python interpreter to use in Settings. (which python3)
对于Mac和Linux用户,请手动安装 Python3.10 (通常最新的系统已经内置了). 你可以在设置中指定使用的Python解释器. (which python3)
MacおよびLinuxのユーザーの方は、Python3.10を手動でインストールしてください(通常、最新のシステムには既に組み込まれています)。 設定メニューで使用するPythonインタプリタを指定することができます。 (which python3)
5  build/linux/Readme_Install.txt (vendored)

@@ -1,3 +1,8 @@
Client Download URL:
客户端下载地址:
クライアントのダウンロードURL:
https://github.com/josStorer/RWKV-Runner/releases/latest/download/RWKV-Runner_linux_x64

For Mac and Linux users, please manually install Python 3.10 (usually the latest systems come with it built-in). You can specify the Python interpreter to use in Settings.
对于Mac和Linux用户,请手动安装 Python3.10 (通常最新的系统已经内置了). 你可以在设置中指定使用的Python解释器.
MacおよびLinuxのユーザーの方は、Python3.10を手動でインストールしてください(通常、最新のシステムには既に組み込まれています)。 設定メニューで使用するPythonインタプリタを指定することができます。
5  build/windows/Readme_Install.txt (vendored)

@@ -1,3 +1,8 @@
Client Download URL:
客户端下载地址:
クライアントのダウンロードURL:
https://github.com/josStorer/RWKV-Runner/releases/latest/download/RWKV-Runner_windows_x64.exe

Please execute this program in an empty directory. All related dependencies will be placed in this directory.
请将本程序放在一个空目录内执行, 所有相关依赖均会放置于此目录.
このプログラムを空のディレクトリで実行してください。関連するすべての依存関係は、このディレクトリに配置されます。
@@ -19,14 +19,15 @@ document.querySelectorAll('.grid.h-10.grid-cols-12.place-content-center.gap-x-3.
    if (!data.name.endsWith('.bin') && !data.name.endsWith('.pth'))
        return

    data.desc = {en: '', zh: ''}
    data.desc = { en: '', zh: '', ja: '' }
    const rawText = await (await fetch(e.children[1].href.replace('/resolve/', '/raw/'))).text()

    data.size = parseInt(extractValue(rawText, 'size'))
    data.SHA256 = extractValue(rawText, 'oid sha256:')
    data.lastUpdated = e.children[3].children[0].getAttribute('datetime')
    data.url = e.children[1].href.replace('/resolve/', '/blob/')
    data.downloadUrl = e.children[1].href
    data.url = e.children[1].href.replace('/resolve/', '/blob/').replace('?download=true', '')
    data.downloadUrl = e.children[1].href.replace('?download=true', '')
    data.tags = []

    modelsJson.push(data)
})
@@ -32,6 +32,7 @@ cleaner_thread.start()
w = torch.load(model_file, map_location="cpu")
gc.collect()

vocab_size = w["emb.weight"].shape[0]
n_embd = w["emb.weight"].shape[1]
n_layer = 0
keys = list(w.keys())

@@ -52,6 +53,9 @@ for x in keys:
        version = max(6, version)

if version <= expected_max_version:
    print(f"--n_layer {n_layer} --n_embd {n_embd}", end="")
    print(
        f"v{int(version)}/train.py --vocab_size {vocab_size} --n_layer {n_layer} --n_embd {n_embd}",
        end="",
    )
else:
    raise Exception(f"RWKV{version} is not supported")
@@ -47,10 +47,10 @@ else
fi

echo "loading $loadModel"
modelInfo=$(python3 ./finetune/get_layer_and_embd.py $loadModel 4)
modelInfo=$(python3 ./finetune/get_layer_and_embd.py $loadModel 5.2)
echo $modelInfo
if [[ $modelInfo =~ "--n_layer" ]]; then
    python3 ./finetune/lora/train.py $modelInfo $@ --proj_dir lora-models --data_type binidx --lora \
    python3 ./finetune/lora/$modelInfo $@ --proj_dir lora-models --data_type binidx --lora \
        --lora_parts=att,ffn,time,ln --strategy deepspeed_stage_2 --accelerator gpu
else
    echo "modelInfo is invalid"
@@ -7,6 +7,7 @@ import struct
from functools import lru_cache
from itertools import accumulate


def print_rank_0(*message):
    pass
    # """If distributed is initialized print only on rank 0."""

@@ -16,12 +17,14 @@ def print_rank_0(*message):
    # else:
    #     print(*message, flush=True)


def _warmup_mmap_file(path):
    pass
    # with open(path, "rb") as stream:
    #     while stream.read(100 * 1024 * 1024):
    #         pass


dtypes = {
    1: np.uint8,
    2: np.int8,

@@ -33,18 +36,22 @@ dtypes = {
    8: np.uint16,
}


def code(dtype):
    for k in dtypes.keys():
        if dtypes[k] == dtype:
            return k
    raise ValueError(dtype)


def index_file_path(prefix_path):
    return prefix_path + ".idx"


def data_file_path(prefix_path):
    return prefix_path + ".bin"


class MMapIndexedDataset(torch.utils.data.Dataset):
    class Index(object):
        _HDR_MAGIC = b"MMIDIDX\x00\x00"

@@ -100,7 +107,7 @@ class MMapIndexedDataset(torch.utils.data.Dataset):
                self._file.close()

        return _Writer()

    def __init__(self, path, skip_warmup=False):
        with open(path, "rb") as stream:
            magic_test = stream.read(9)

@@ -217,8 +224,7 @@ class MMapIndexedDataset(torch.utils.data.Dataset):
        elif isinstance(idx, slice):
            start, stop, step = idx.indices(len(self))
            if step != 1:
                raise ValueError(
                    "Slices into indexed_dataset must be contiguous")
                raise ValueError("Slices into indexed_dataset must be contiguous")
            ptr = self._index._pointers[start]
            sizes = self._index._sizes[idx]
            offsets = list(accumulate(sizes))
@@ -17,9 +17,11 @@ class MyDataset(Dataset):

        if args.data_type == "binidx":
            self.vocab_size = args.vocab_size
            rank_zero_info(f"Current vocab size = {self.vocab_size} (make sure it's correct)")
            rank_zero_info(
                f"Current vocab size = {self.vocab_size} (make sure it's correct)"
            )

            if args.data_file.endswith('/'):
            if args.data_file.endswith("/"):
                d_all = []
                for p in os.listdir(args.data_file):
                    if p.endswith(".idx"):

@@ -29,33 +31,52 @@ class MyDataset(Dataset):
                exit(0)
            else:
                self.data = MMapIndexedDataset(args.data_file)
                self.data_size = len(self.data._bin_buffer) // self.data._index._dtype_size
                self.data_size = (
                    len(self.data._bin_buffer) // self.data._index._dtype_size
                )
                rank_zero_info(f"Data has {self.data_size} tokens.")

            if args.my_qa_mask > 0:
                self.data_pile = MMapIndexedDataset('/fsx/BlinkDL/pile/pile_20B_tokenizer_text_document')
                self.data_pile_size = len(self.data_pile._bin_buffer) // self.data._index._dtype_size
                self.data_pile = MMapIndexedDataset(
                    "/fsx/BlinkDL/pile/pile_20B_tokenizer_text_document"
                )
                self.data_pile_size = (
                    len(self.data_pile._bin_buffer) // self.data._index._dtype_size
                )

            if args.my_pile_stage > 0:
                # assert self.data_size == 332115325534 and self.vocab_size == 50277
                self.samples_per_epoch = args.epoch_steps * args.real_bsz
                assert self.samples_per_epoch == 40320
                rank_zero_info(f"########## Pile 20b-tokenized stage {args.my_pile_stage} ##########")
                rank_zero_info(
                    f"########## Pile 20b-tokenized stage {args.my_pile_stage} ##########"
                )
                dataset_slot = self.data_size // args.ctx_len
                if args.my_pile_stage != 4:
                    assert MaybeIsPrime(args.magic_prime)
                    assert args.magic_prime % 3 == 2
                    assert args.magic_prime / dataset_slot > 0.99 and args.magic_prime / dataset_slot <= 1
                    assert (
                        args.magic_prime / dataset_slot > 0.99
                        and args.magic_prime / dataset_slot <= 1
                    )
        elif args.data_type == "numpy":
            self.data = np.load(args.data_file).astype("int")
            self.vocab_size = args.vocab_size
            rank_zero_info("Current vocab size =", self.vocab_size, "(make sure it's correct)")
            rank_zero_info(
                "Current vocab size =", self.vocab_size, "(make sure it's correct)"
            )
            self.data_size = len(self.data)
            rank_zero_info(f"Data has {self.data_size} tokens.")
        elif args.data_type == "uint16":
            self.data = np.fromfile(args.data_file, dtype=np.uint16).astype("int32").reshape(-1, args.my_sample_len)
            self.data = (
                np.fromfile(args.data_file, dtype=np.uint16)
                .astype("int32")
                .reshape(-1, args.my_sample_len)
            )
            self.vocab_size = args.vocab_size
            rank_zero_info("Current vocab size =", self.vocab_size, "(make sure it's correct)")
            rank_zero_info(
                "Current vocab size =", self.vocab_size, "(make sure it's correct)"
            )
            self.data_size = self.data.shape[0]
            rank_zero_info(f"Data has {self.data_size} samples.")
        elif args.data_type == "wds_img":

@@ -86,10 +107,14 @@ class MyDataset(Dataset):
            for u in unique:
                xxObj[xx] = u
                xx += 1
            with open(f"{args.proj_dir}/vocab.json", "w", encoding="utf-16le") as vocab_file:
            with open(
                f"{args.proj_dir}/vocab.json", "w", encoding="utf-16le"
            ) as vocab_file:
                vocab_file.write(json.dumps(xxObj, ensure_ascii=False))
            self.data_size = len(self.data)
            rank_zero_info(f"Data has {self.data_size} tokens, {self.vocab_size} vocab size.")
            rank_zero_info(
                f"Data has {self.data_size} tokens, {self.vocab_size} vocab size."
            )
            self.stoi = {ch: i for i, ch in enumerate(unique)}
            self.itos = {i: ch for i, ch in enumerate(unique)}

@@ -104,36 +129,53 @@ class MyDataset(Dataset):
        # print(f"epoch {epoch} idx {idx} rank {rank}/{world_size}")

        if args.data_type == "wds_img":

            def init_wds(self, bias=0):
                def identity(x):
                    return x
                    return x

                import webdataset as wds
                import torchvision.transforms as transforms

                # img_transform = transforms.Compose(
                #     [transforms.CenterCrop(256)]
                # )
                img_transform = transforms.Compose([
                    transforms.CenterCrop(512),
                    transforms.Resize((args.my_img_size))
                ])
                self.data_raw = wds.WebDataset(args.data_file, resampled=True).shuffle(10000, initial=1000, rng=random.Random(epoch*100000+rank+bias*1e9)).decode("torchrgb").to_tuple("jpg", "json", "txt").map_tuple(img_transform, identity, identity)
                img_transform = transforms.Compose(
                    [transforms.CenterCrop(512), transforms.Resize((args.my_img_size))]
                )
                self.data_raw = (
                    wds.WebDataset(args.data_file, resampled=True)
                    .shuffle(
                        10000,
                        initial=1000,
                        rng=random.Random(epoch * 100000 + rank + bias * 1e9),
                    )
                    .decode("torchrgb")
                    .to_tuple("jpg", "json", "txt")
                    .map_tuple(img_transform, identity, identity)
                )
                for pp in self.data_raw.pipeline:
                    if 'Resampled' in str(pp):
                    if "Resampled" in str(pp):
                        pp.deterministic = True

                        def worker_seed():
                            return rank*100000+epoch+bias*1e9
                            return rank * 100000 + epoch + bias * 1e9

                        pp.worker_seed = worker_seed
                self.data = iter(self.data_raw)
                # print(f"WebDataset loaded for rank {rank} epoch {epoch}")

            if self.data == None:
                init_wds(self)
            trial = 0
            while trial < 10:
                try:
                    dd = next(self.data) # jpg, json, txt
                    dd = next(self.data)  # jpg, json, txt
                    break
                except:
                    print(f'[dataloader error - epoch {epoch} rank {rank} - trying a new shuffle]')
                    print(
                        f"[dataloader error - epoch {epoch} rank {rank} - trying a new shuffle]"
                    )
                    self.error_count += 1
                    init_wds(self, self.error_count)
                    trial += 1

@@ -144,7 +186,7 @@ class MyDataset(Dataset):
            return dd[0], dd[2]
        else:
            if args.data_type == "uint16":
                i = np.random.randint(0, self.data_size-1)
                i = np.random.randint(0, self.data_size - 1)
                dix = self.data[i]
                x = torch.tensor(dix[:-1], dtype=torch.long)
                y = torch.tensor(dix[1:], dtype=torch.long)

@@ -196,7 +238,12 @@ class MyDataset(Dataset):
                    z_sum = 0
                    isGood = False
                    for i in range(3, ctx_len):
                        if dix[i] == 27 and dix[i-1] == 34 and dix[i-2] == 187 and dix[i-3] == 187:
                        if (
                            dix[i] == 27
                            and dix[i - 1] == 34
                            and dix[i - 2] == 187
                            and dix[i - 3] == 187
                        ):
                            isGood = True
                        if dix[i] == 0:
                            isGood = False

@@ -206,7 +253,9 @@ class MyDataset(Dataset):
                    if z_sum == 0:
                        z = [1] * ctx_len
                        i = np.random.randint(0, self.data_pile_size - req_len)
                        dix = self.data_pile.get(idx=0, offset=i, length=req_len).astype(int)
                        dix = self.data_pile.get(
                            idx=0, offset=i, length=req_len
                        ).astype(int)
                    z = torch.tensor(z, dtype=torch.bfloat16)

                x = torch.tensor(dix[:-1], dtype=torch.long)
@@ -5,6 +5,7 @@
|
||||
import functools
|
||||
import os, math, gc, importlib
|
||||
import torch
|
||||
|
||||
# torch._C._jit_set_profiling_executor(True)
|
||||
# torch._C._jit_set_profiling_mode(True)
|
||||
import torch.nn as nn
|
||||
@@ -13,7 +14,8 @@ from torch.nn import functional as F
|
||||
import pytorch_lightning as pl
|
||||
from pytorch_lightning.utilities import rank_zero_info, rank_zero_only
|
||||
from pytorch_lightning.strategies import DeepSpeedStrategy
|
||||
if importlib.util.find_spec('deepspeed'):
|
||||
|
||||
if importlib.util.find_spec("deepspeed"):
|
||||
import deepspeed
|
||||
from deepspeed.ops.adam import DeepSpeedCPUAdam, FusedAdam
|
||||
|
||||
@@ -28,9 +30,10 @@ LORA_CONFIG = {
|
||||
|
||||
|
||||
try:
|
||||
print('RWKV_MY_TESTING', os.environ["RWKV_MY_TESTING"])
|
||||
print("RWKV_MY_TESTING", os.environ["RWKV_MY_TESTING"])
|
||||
except:
|
||||
os.environ["RWKV_MY_TESTING"] = ''
|
||||
os.environ["RWKV_MY_TESTING"] = ""
|
||||
|
||||
|
||||
def __nop(ob):
|
||||
return ob
|
||||
@@ -53,7 +56,26 @@ T_MAX = int(os.environ["RWKV_T_MAX"]) # TAKES LOTS OF VRAM!
from torch.utils.cpp_extension import load

if os.environ["RWKV_FLOAT_MODE"] == "bf16":
    wkv_cuda = load(name=f"wkv_{T_MAX}_bf16", sources=["finetune/lora/cuda/wkv_op_bf16.cpp", "finetune/lora/cuda/wkv_cuda_bf16.cu"], verbose=True, extra_cuda_cflags=["-t 4", "-std=c++17", "-res-usage", "--maxrregcount 60", "--use_fast_math", "-O3", "-Xptxas -O3", "--extra-device-vectorization", f"-DTmax={T_MAX}"])
    wkv_cuda = load(
        name=f"wkv_{T_MAX}_bf16",
        sources=[
            "finetune/lora/v4/cuda/wkv_op_bf16.cpp",
            "finetune/lora/v4/cuda/wkv_cuda_bf16.cu",
        ],
        verbose=True,
        extra_cuda_cflags=[
            "-t 4",
            "-std=c++17",
            "-res-usage",
            "--maxrregcount 60",
            "--use_fast_math",
            "-O3",
            "-Xptxas -O3",
            "--extra-device-vectorization",
            f"-DTmax={T_MAX}",
        ],
    )

    class WKV(torch.autograd.Function):
        @staticmethod
        def forward(ctx, B, T, C, w, u, k, v):
@@ -66,10 +88,16 @@ if os.environ["RWKV_FLOAT_MODE"] == "bf16":
            u = u.contiguous()
            k = k.contiguous()
            v = v.contiguous()
            y = torch.empty((B, T, C), device=w.device, memory_format=torch.contiguous_format, dtype=torch.bfloat16)
            y = torch.empty(
                (B, T, C),
                device=w.device,
                memory_format=torch.contiguous_format,
                dtype=torch.bfloat16,
            )
            wkv_cuda.forward(B, T, C, w, u, k, v, y)
            ctx.save_for_backward(w, u, k, v, y)
            return y

        @staticmethod
        def backward(ctx, gy):
            B = ctx.B
@@ -78,16 +106,54 @@ if os.environ["RWKV_FLOAT_MODE"] == "bf16":
            assert T <= T_MAX
            assert B * C % min(C, 32) == 0
            w, u, k, v, y = ctx.saved_tensors
            gw = torch.empty((B, C), device=gy.device, memory_format=torch.contiguous_format, dtype=torch.bfloat16)
            gu = torch.empty((B, C), device=gy.device, memory_format=torch.contiguous_format, dtype=torch.bfloat16)
            gk = torch.empty((B, T, C), device=gy.device, memory_format=torch.contiguous_format, dtype=torch.bfloat16)
            gv = torch.empty((B, T, C), device=gy.device, memory_format=torch.contiguous_format, dtype=torch.bfloat16)
            gw = torch.empty(
                (B, C),
                device=gy.device,
                memory_format=torch.contiguous_format,
                dtype=torch.bfloat16,
            )
            gu = torch.empty(
                (B, C),
                device=gy.device,
                memory_format=torch.contiguous_format,
                dtype=torch.bfloat16,
            )
            gk = torch.empty(
                (B, T, C),
                device=gy.device,
                memory_format=torch.contiguous_format,
                dtype=torch.bfloat16,
            )
            gv = torch.empty(
                (B, T, C),
                device=gy.device,
                memory_format=torch.contiguous_format,
                dtype=torch.bfloat16,
            )
            wkv_cuda.backward(B, T, C, w, u, k, v, y, gy.contiguous(), gw, gu, gk, gv)
            gw = torch.sum(gw, dim=0)
            gu = torch.sum(gu, dim=0)
            return (None, None, None, gw, gu, gk, gv)

else:
    wkv_cuda = load(name=f"wkv_{T_MAX}", sources=["finetune/lora/cuda/wkv_op.cpp", "finetune/lora/cuda/wkv_cuda.cu"], verbose=True, extra_cuda_cflags=["-res-usage", "--maxrregcount 60", "--use_fast_math", "-O3", "-Xptxas -O3", "--extra-device-vectorization", f"-DTmax={T_MAX}"])
    wkv_cuda = load(
        name=f"wkv_{T_MAX}",
        sources=[
            "finetune/lora/v4/cuda/wkv_op.cpp",
            "finetune/lora/v4/cuda/wkv_cuda.cu",
        ],
        verbose=True,
        extra_cuda_cflags=[
            "-res-usage",
            "--maxrregcount 60",
            "--use_fast_math",
            "-O3",
            "-Xptxas -O3",
            "--extra-device-vectorization",
            f"-DTmax={T_MAX}",
        ],
    )

    class WKV(torch.autograd.Function):
        @staticmethod
        def forward(ctx, B, T, C, w, u, k, v):
@@ -106,7 +172,9 @@ else:
            u = u.float().contiguous()
            k = k.float().contiguous()
            v = v.float().contiguous()
            y = torch.empty((B, T, C), device=w.device, memory_format=torch.contiguous_format)
            y = torch.empty(
                (B, T, C), device=w.device, memory_format=torch.contiguous_format
            )
            wkv_cuda.forward(B, T, C, w, u, k, v, y)
            ctx.save_for_backward(w, u, k, v, y)
            if "32" in os.environ["RWKV_FLOAT_MODE"]:
@@ -115,6 +183,7 @@ else:
                return y.half()
            elif os.environ["RWKV_FLOAT_MODE"] == "bf16":
                return y.bfloat16()

        @staticmethod
        def backward(ctx, gy):
            B = ctx.B
@@ -123,14 +192,26 @@ else:
            assert T <= T_MAX
            assert B * C % min(C, 32) == 0
            w, u, k, v, y = ctx.saved_tensors
            gw = torch.empty((B, C), device=gy.device, memory_format=torch.contiguous_format)
            gu = torch.empty((B, C), device=gy.device, memory_format=torch.contiguous_format)
            gk = torch.empty((B, T, C), device=gy.device, memory_format=torch.contiguous_format)
            gv = torch.empty((B, T, C), device=gy.device, memory_format=torch.contiguous_format)
            gw = torch.empty(
                (B, C), device=gy.device, memory_format=torch.contiguous_format
            )
            gu = torch.empty(
                (B, C), device=gy.device, memory_format=torch.contiguous_format
            )
            gk = torch.empty(
                (B, T, C), device=gy.device, memory_format=torch.contiguous_format
            )
            gv = torch.empty(
                (B, T, C), device=gy.device, memory_format=torch.contiguous_format
            )
            if "32" in os.environ["RWKV_FLOAT_MODE"]:
                wkv_cuda.backward(B, T, C, w, u, k, v, y, gy.contiguous(), gw, gu, gk, gv)
                wkv_cuda.backward(
                    B, T, C, w, u, k, v, y, gy.contiguous(), gw, gu, gk, gv
                )
            else:
                wkv_cuda.backward(B, T, C, w, u, k, v, y, gy.float().contiguous(), gw, gu, gk, gv)
                wkv_cuda.backward(
                    B, T, C, w, u, k, v, y, gy.float().contiguous(), gw, gu, gk, gv
                )
            gw = torch.sum(gw, dim=0)
            gu = torch.sum(gu, dim=0)
            if "32" in os.environ["RWKV_FLOAT_MODE"]:
@@ -138,7 +219,15 @@ else:
            elif os.environ["RWKV_FLOAT_MODE"] == "fp16":
                return (None, None, None, gw.half(), gu.half(), gk.half(), gv.half())
            elif os.environ["RWKV_FLOAT_MODE"] == "bf16":
                return (None, None, None, gw.bfloat16(), gu.bfloat16(), gk.bfloat16(), gv.bfloat16())
                return (
                    None,
                    None,
                    None,
                    gw.bfloat16(),
                    gu.bfloat16(),
                    gk.bfloat16(),
                    gv.bfloat16(),
                )


def RUN_CUDA(B, T, C, w, u, k, v):
@@ -151,15 +240,17 @@ def RUN_CUDA(B, T, C, w, u, k, v):


class LoraLinear(nn.Module):

    def __init__(self, in_features: int, out_features: int, bias: bool):
        super().__init__()

        self.weight = nn.Parameter(torch.empty((out_features, in_features)))
        assert bias == False, "Biased LoraLinear not supported"

        r, alpha, dropout = LORA_CONFIG["r"], LORA_CONFIG[
            "alpha"], LORA_CONFIG["dropout"]
        r, alpha, dropout = (
            LORA_CONFIG["r"],
            LORA_CONFIG["alpha"],
            LORA_CONFIG["dropout"],
        )
        self.lora_A = nn.Parameter(torch.empty(r, in_features))
        self.lora_B = nn.Parameter(torch.empty(out_features, r))
        self.lora_dropout = nn.Dropout(dropout)
@@ -170,9 +261,9 @@ class LoraLinear(nn.Module):
        nn.init.zeros_(self.lora_B)

    def forward(self, x):
        return (
            F.linear(x, self.weight) + self.scaling *
            F.linear(F.linear(self.lora_dropout(x), self.lora_A), self.lora_B))
        return F.linear(x, self.weight) + self.scaling * F.linear(
            F.linear(self.lora_dropout(x), self.lora_A), self.lora_B
        )

@functools.wraps(LoraLinear)
@@ -214,17 +305,23 @@ class RWKV_TimeMix(MyModule):
        # fancy time_decay
        decay_speed = torch.ones(args.dim_att)
        for h in range(args.dim_att):
            decay_speed[h] = -5 + 8 * (h / (args.dim_att - 1)) ** (0.7 + 1.3 * ratio_0_to_1)
            decay_speed[h] = -5 + 8 * (h / (args.dim_att - 1)) ** (
                0.7 + 1.3 * ratio_0_to_1
            )
        self.time_decay = nn.Parameter(decay_speed)
        # print(layer_id, self.time_decay.flatten()[:3].cpu().numpy(), '...', self.time_decay.flatten()[-3:].cpu().numpy())

        # fancy time_first
        zigzag = torch.tensor([(i + 1) % 3 - 1 for i in range(args.dim_att)]) * 0.5
        self.time_first = nn.Parameter(torch.ones(args.dim_att) * math.log(0.3) + zigzag)
        self.time_first = nn.Parameter(
            torch.ones(args.dim_att) * math.log(0.3) + zigzag
        )

        # fancy time_mix
        self.time_mix_k = nn.Parameter(torch.pow(ddd, ratio_1_to_almost0))
        self.time_mix_v = nn.Parameter(torch.pow(ddd, ratio_1_to_almost0) + 0.3 * ratio_0_to_1)
        self.time_mix_v = nn.Parameter(
            torch.pow(ddd, ratio_1_to_almost0) + 0.3 * ratio_0_to_1
        )
        self.time_mix_r = nn.Parameter(torch.pow(ddd, 0.5 * ratio_1_to_almost0))

        self.time_shift = nn.ZeroPad2d((0, 0, 1, -1))
@@ -235,8 +332,10 @@ class RWKV_TimeMix(MyModule):

        self.output = nn.Linear(args.dim_att, args.n_embd, bias=False)

        if 'a' in os.environ["RWKV_MY_TESTING"]:
            self.register_buffer("att_mask", torch.tril(torch.ones(args.ctx_len, args.ctx_len)))
        if "a" in os.environ["RWKV_MY_TESTING"]:
            self.register_buffer(
                "att_mask", torch.tril(torch.ones(args.ctx_len, args.ctx_len))
            )
            d_qkv = args.n_embd // 16
            self.qq = nn.Linear(args.n_embd, d_qkv, bias=False)
            self.kk = nn.Linear(args.n_embd, d_qkv, bias=False)
@@ -245,12 +344,17 @@ class RWKV_TimeMix(MyModule):
            with torch.no_grad():
                self.time_mix_qq = nn.Parameter(torch.pow(ddd, ratio_1_to_almost0))
                self.time_mix_kk = nn.Parameter(torch.pow(ddd, ratio_1_to_almost0))
                self.time_mix_vv = nn.Parameter(torch.pow(ddd, ratio_1_to_almost0) + 0.3 * ratio_0_to_1)
                self.time_mix_vv = nn.Parameter(
                    torch.pow(ddd, ratio_1_to_almost0) + 0.3 * ratio_0_to_1
                )

    if "a" not in os.environ["RWKV_MY_TESTING"]:

    if 'a' not in os.environ["RWKV_MY_TESTING"]:
        @MyFunction
        def jit_func(self, x):
            xx = self.time_shift(x) # Mix x with the previous timestep to produce xk, xv, xr
            xx = self.time_shift(
                x
            )  # Mix x with the previous timestep to produce xk, xv, xr
            xk = x * self.time_mix_k + xx * (1 - self.time_mix_k)
            xv = x * self.time_mix_v + xx * (1 - self.time_mix_v)
            xr = x * self.time_mix_r + xx * (1 - self.time_mix_r)
@@ -263,21 +367,26 @@ class RWKV_TimeMix(MyModule):
        def forward(self, x):
            B, T, C = x.size() # x = (Batch,Time,Channel)
            sr, k, v = self.jit_func(x)
            rwkv = sr * RUN_CUDA(B, T, self.args.dim_att, self.time_decay, self.time_first, k, v)
            rwkv = sr * RUN_CUDA(
                B, T, self.args.dim_att, self.time_decay, self.time_first, k, v
            )
            return self.output(rwkv)

    if 'a' in os.environ["RWKV_MY_TESTING"]:
    if "a" in os.environ["RWKV_MY_TESTING"]:

        @MyFunction
        def QKV(self, q, k, v):
            att = (q @ k.transpose(-2, -1)) * (1.0 / math.sqrt(k.size(-1)))
            att = att.masked_fill(self.att_mask == 0, float('-inf'))
            att = F.softmax(att, dim = -1)
            att = att.masked_fill(self.att_mask == 0, float("-inf"))
            att = F.softmax(att, dim=-1)
            x = att @ v
            return x

        @MyFunction
        def jit_funcQKV(self, x):
            xx = self.time_shift(x) # Mix x with the previous timestep to produce xk, xv, xr
            xx = self.time_shift(
                x
            )  # Mix x with the previous timestep to produce xk, xv, xr
            xk = x * self.time_mix_k + xx * (1 - self.time_mix_k)
            xv = x * self.time_mix_v + xx * (1 - self.time_mix_v)
            xr = x * self.time_mix_r + xx * (1 - self.time_mix_r)
@@ -296,12 +405,16 @@ class RWKV_TimeMix(MyModule):
        def forward(self, x):
            B, T, C = x.size() # x = (Batch,Time,Channel)
            sr, k, v, qq, kk, vv = self.jit_funcQKV(x)
            rwkv = sr * RUN_CUDA(B, T, self.args.dim_att, self.time_decay, self.time_first, k, v)
            rwkv = sr * RUN_CUDA(
                B, T, self.args.dim_att, self.time_decay, self.time_first, k, v
            )
            rwkv = self.output(rwkv) + self.oo(self.QKV(qq, kk, vv))
            return rwkv

########################################################################################################


class RWKV_ChannelMix(MyModule):
    def __init__(self, args, layer_id):
        super().__init__()
@@ -331,6 +444,7 @@ class RWKV_ChannelMix(MyModule):
        kv = self.value(k)
        return torch.sigmoid(self.receptance(xr)) * kv


class MishGLU(MyModule):
    def __init__(self, args, layer_id):
        super().__init__()
@@ -360,6 +474,7 @@ class MishGLU(MyModule):
        b = self.bb(xb)
        return self.value(a * F.mish(b))


########################################################################################################
# The RWKV Model with our blocks
########################################################################################################
@@ -377,15 +492,19 @@ class Block(nn.Module):
        if self.layer_id == 0:
            self.ln0 = nn.LayerNorm(args.n_embd)
            if args.my_pos_emb > 0:
                self.pos_emb_x = nn.Parameter(torch.zeros((1,args.my_pos_emb,args.n_embd)))
                self.pos_emb_y = nn.Parameter(torch.zeros((args.my_pos_emb,1,args.n_embd)))
                self.pos_emb_x = nn.Parameter(
                    torch.zeros((1, args.my_pos_emb, args.n_embd))
                )
                self.pos_emb_y = nn.Parameter(
                    torch.zeros((args.my_pos_emb, 1, args.n_embd))
                )

        if self.layer_id == 0 and self.args.pre_ffn > 0:
            self.ffnPre = RWKV_ChannelMix(args, 0)
        else:
            self.att = RWKV_TimeMix(args, layer_id)

        if 'g' in os.environ["RWKV_MY_TESTING"]:
        if "g" in os.environ["RWKV_MY_TESTING"]:
            self.ffn = MishGLU(args, layer_id)
        else:
            self.ffn = RWKV_ChannelMix(args, layer_id)
@@ -395,7 +514,9 @@ class Block(nn.Module):
            self.tiny_q = nn.Linear(args.n_embd, args.tiny_att_dim, bias=False)
            self.tiny_k = nn.Linear(args.n_embd, args.tiny_att_dim, bias=False)
            self.tiny_v = nn.Linear(args.n_embd, args.n_embd, bias=False)
            self.register_buffer("tiny_mask", torch.tril(torch.ones(args.ctx_len, args.ctx_len)))
            self.register_buffer(
                "tiny_mask", torch.tril(torch.ones(args.ctx_len, args.ctx_len))
            )

    def forward(self, x, x_emb=None):
        args = self.args
@@ -403,7 +524,7 @@ class Block(nn.Module):
        if self.layer_id == 0:
            x = self.ln0(x)
            if args.my_pos_emb > 0:
                pos_emb = (self.pos_emb_x + self.pos_emb_y).reshape(T+1, -1)[:-1,:]
                pos_emb = (self.pos_emb_x + self.pos_emb_y).reshape(T + 1, -1)[:-1, :]
                x = x + pos_emb

        if self.layer_id == 0 and args.pre_ffn > 0:
@@ -443,13 +564,13 @@ class RWKV(pl.LightningModule):
    def __init__(self, args):
        super().__init__()
        self.args = args
        if not hasattr(args, 'dim_att'):
        if not hasattr(args, "dim_att"):
            args.dim_att = args.n_embd
        if not hasattr(args, 'dim_ffn'):
        if not hasattr(args, "dim_ffn"):
            args.dim_ffn = args.n_embd * 4
        if not hasattr(args, 'tiny_att_layer'):
        if not hasattr(args, "tiny_att_layer"):
            args.tiny_att_layer = -1
        if not hasattr(args, 'tiny_att_dim'):
        if not hasattr(args, "tiny_att_dim"):
            args.tiny_att_dim = -1

        self.emb = nn.Embedding(args.vocab_size, args.n_embd)
@@ -462,7 +583,9 @@ class RWKV(pl.LightningModule):
        if args.head_qk > 0:
            self.head_q = nn.Linear(args.n_embd, args.head_qk, bias=False)
            self.head_k = nn.Linear(args.n_embd, args.head_qk, bias=False)
            self.register_buffer("copy_mask", torch.tril(torch.ones(args.ctx_len, args.ctx_len)))
            self.register_buffer(
                "copy_mask", torch.tril(torch.ones(args.ctx_len, args.ctx_len))
            )

    def configure_optimizers(self):
        args = self.args
@@ -494,19 +617,46 @@ class RWKV(pl.LightningModule):
            param_dict = {n: p for n, p in self.named_parameters()}
            if args.my_pile_stage == 2:
                optim_groups = [
                    {"params": [param_dict[n] for n in lr_1x], "weight_decay": 0.0, "my_lr_scale": 1.0},
                    {"params": [param_dict[n] for n in lr_2x], "weight_decay": 0.0, "my_lr_scale": 5.0},# test: 2e-3 / args.lr_init},
                    {"params": [param_dict[n] for n in lr_3x], "weight_decay": 0.0, "my_lr_scale": 5.0},# test: 3e-3 / args.lr_init},
                    {
                        "params": [param_dict[n] for n in lr_1x],
                        "weight_decay": 0.0,
                        "my_lr_scale": 1.0,
                    },
                    {
                        "params": [param_dict[n] for n in lr_2x],
                        "weight_decay": 0.0,
                        "my_lr_scale": 5.0,
                    },  # test: 2e-3 / args.lr_init},
                    {
                        "params": [param_dict[n] for n in lr_3x],
                        "weight_decay": 0.0,
                        "my_lr_scale": 5.0,
                    },  # test: 3e-3 / args.lr_init},
                ]
            else:
                optim_groups = [
                    {"params": [param_dict[n] for n in lr_1x], "weight_decay": 0.0, "my_lr_scale": 1.0},
                    {"params": [param_dict[n] for n in lr_2x], "weight_decay": 0.0, "my_lr_scale": 2.0},
                    {"params": [param_dict[n] for n in lr_3x], "weight_decay": 0.0, "my_lr_scale": 3.0},
                    {
                        "params": [param_dict[n] for n in lr_1x],
                        "weight_decay": 0.0,
                        "my_lr_scale": 1.0,
                    },
                    {
                        "params": [param_dict[n] for n in lr_2x],
                        "weight_decay": 0.0,
                        "my_lr_scale": 2.0,
                    },
                    {
                        "params": [param_dict[n] for n in lr_3x],
                        "weight_decay": 0.0,
                        "my_lr_scale": 3.0,
                    },
                ]
        else:
            optim_groups = [
                {"params": [p for n, p in self.named_parameters()], "weight_decay": 0.0},
                {
                    "params": [p for n, p in self.named_parameters()],
                    "weight_decay": 0.0,
                },
            ]

        for g in optim_groups:
@@ -514,8 +664,26 @@ class RWKV(pl.LightningModule):
        optim_groups = [g for g in optim_groups if len(g["params"]) > 0]

        if self.deepspeed_offload:
            return DeepSpeedCPUAdam(optim_groups, lr=self.args.lr_init, betas=self.args.betas, eps=self.args.adam_eps, bias_correction=True, adamw_mode=False, weight_decay=0, amsgrad=False)
        return FusedAdam(optim_groups, lr=self.args.lr_init, betas=self.args.betas, eps=self.args.adam_eps, bias_correction=True, adam_w_mode=False, weight_decay=0, amsgrad=False)
            return DeepSpeedCPUAdam(
                optim_groups,
                lr=self.args.lr_init,
                betas=self.args.betas,
                eps=self.args.adam_eps,
                bias_correction=True,
                adamw_mode=False,
                weight_decay=0,
                amsgrad=False,
            )
        return FusedAdam(
            optim_groups,
            lr=self.args.lr_init,
            betas=self.args.betas,
            eps=self.args.adam_eps,
            bias_correction=True,
            adam_w_mode=False,
            weight_decay=0,
            amsgrad=False,
        )
        # return ZeroOneAdam(optim_groups, lr=self.args.lr_init, betas=self.args.betas, eps=self.args.adam_eps, bias_correction=True, weight_decay=0, amsgrad=False, cuda_aware=False)

    @property
@@ -589,10 +757,14 @@ class RWKV(pl.LightningModule):

        logits = self(idx)
        if sum_mask == mask.shape[0]:
            loss = F.cross_entropy(logits.view(-1, logits.size(-1)), targets.view(-1))
            loss = F.cross_entropy(
                logits.view(-1, logits.size(-1)), targets.view(-1)
            )
            # print('rank', self.global_rank, 'loss', loss.item())
        else:
            loss = F.cross_entropy(logits.view(-1, logits.size(-1)), targets.view(-1), reduction='none')
            loss = F.cross_entropy(
                logits.view(-1, logits.size(-1)), targets.view(-1), reduction="none"
            )
            # loss_raw = loss
            loss = torch.sum(loss * mask) / sum_mask

@@ -632,7 +804,14 @@ class RWKV(pl.LightningModule):

            gain = 1.0
            scale = 1.0
            if "ln_" in n or ".ln" in n or "time_" in n or "_mask" in n or "pos_emb" in n or '.mask.' in n:
            if (
                "ln_" in n
                or ".ln" in n
                or "time_" in n
                or "_mask" in n
                or "pos_emb" in n
                or ".mask." in n
            ):
                m[n] = p
            else:
                if n == "emb.weight":
@@ -640,7 +819,19 @@ class RWKV(pl.LightningModule):
                else:
                    if shape[0] > shape[1]:
                        gain = math.sqrt(shape[0] / shape[1])
                    for kk in [".att.key.", ".att.receptance.", ".att.output.", ".att.key.", ".ffn.value.", ".ffn.receptance.", ".ffnPre.value.", ".ffnPre.receptance.", "head_q.", '.oo.', '.rr.']:
                    for kk in [
                        ".att.key.",
                        ".att.receptance.",
                        ".att.output.",
                        ".att.key.",
                        ".ffn.value.",
                        ".ffn.receptance.",
                        ".ffnPre.value.",
                        ".ffnPre.receptance.",
                        "head_q.",
                        ".oo.",
                        ".rr.",
                    ]:
                        if kk in n:
                            scale = 0
                    if n == "head.weight":
@@ -650,7 +841,9 @@ class RWKV(pl.LightningModule):
                if "head_q." in n:
                    scale = 0

            print(f"{str(shape[0]).ljust(5)} {str(shape[1]).ljust(5)} {str(scale).ljust(4)} {n}")
            print(
                f"{str(shape[0]).ljust(5)} {str(shape[1]).ljust(5)} {str(scale).ljust(4)} {n}"
            )

            if self.args.accelerator.upper() == "GPU":
                m[n] = torch.empty((shape[0], shape[1]), device="cuda")
@@ -5,15 +5,17 @@ import pytorch_lightning as pl
from pytorch_lightning.utilities import rank_zero_info, rank_zero_only
from .model import LORA_CONFIG


def my_save(dd, ff):
    if '14b-run1' not in ff:
    if "14b-run1" not in ff:
        torch.save(dd, ff)
    else:
        fn = ff.split('/')[-1]
        fff = '/dev/shm/' + fn
        fn = ff.split("/")[-1]
        fff = "/dev/shm/" + fn
        torch.save(dd, fff)
        subprocess.Popen(f" aws s3 mv {fff} s3://rwkv-14b-4k/{fn} --quiet", shell=True)


class train_callback(pl.Callback):
    def __init__(self, args):
        super().__init__()
@@ -38,7 +40,9 @@ class train_callback(pl.Callback):
        if args.lr_final == 0 or args.lr_init == 0: # linear decay
            lr = args.lr_init + (args.lr_final - args.lr_init) * progress
        else: # exp decay
            lr = args.lr_init * math.exp(math.log(args.lr_final / args.lr_init) * pow(progress, 1))
            lr = args.lr_init * math.exp(
                math.log(args.lr_final / args.lr_init) * pow(progress, 1)
            )

        if trainer.global_step < w_step:
            lr = lr * (0.2 + 0.8 * trainer.global_step / w_step)
@@ -60,7 +64,9 @@ class train_callback(pl.Callback):
            trainer.my_loss_sum = 0
            trainer.my_loss_count = 0
            trainer.my_log = open(args.proj_dir + "/train_log.txt", "a")
            trainer.my_log.write(f"NEW RUN {args.my_timestamp}\n{vars(self.args)}\n")
            trainer.my_log.write(
                f"NEW RUN {args.my_timestamp}\n{vars(self.args)}\n"
            )
            try:
                print(f"\n{trainer.strategy.config}\n")
                trainer.my_log.write(f"{trainer.strategy.config}\n")
@@ -70,6 +76,7 @@ class train_callback(pl.Callback):
            if len(args.wandb) > 0:
                print("Login to wandb...")
                import wandb

                wandb.init(
                    project=args.wandb,
                    name=args.run_name + " " + args.my_timestamp,
@@ -102,20 +109,26 @@ class train_callback(pl.Callback):
            # self.log("s", real_step, prog_bar=True, on_step=True)

            if len(args.wandb) > 0:
                lll = {"loss": trainer.my_loss, "lr": trainer.my_lr, "Gtokens": real_step * token_per_step / 1e9}
                lll = {
                    "loss": trainer.my_loss,
                    "lr": trainer.my_lr,
                    "Gtokens": real_step * token_per_step / 1e9,
                }
                if kt_s > 0:
                    lll["kt/s"] = kt_s
                trainer.my_wandb.log(lll, step=int(real_step))
            if args.magic_prime > 0:
                expand_factor = 2 if args.my_qa_mask > 0 else 1
                if int(real_step) == int(args.magic_prime * expand_factor // args.real_bsz) - 1:
                if (
                    int(real_step)
                    == int(args.magic_prime * expand_factor // args.real_bsz) - 1
                ):
                    to_save_dict = pl_module.state_dict()
                    my_save(
                        to_save_dict,
                        f"{args.proj_dir}/rwkv-final.pth",
                    )


    def on_train_epoch_start(self, trainer, pl_module):
        args = self.args
        dataset = trainer.train_dataloader.dataset.datasets
@@ -128,24 +141,28 @@ class train_callback(pl.Callback):
    def on_train_epoch_end(self, trainer, pl_module):
        args = self.args
        if trainer.is_global_zero: # logging & save state_dict
            if (args.epoch_save > 0 and trainer.current_epoch % args.epoch_save == 0) or trainer.current_epoch == args.epoch_count - 1:
                if args.data_type == 'wds_img':
            if (
                args.epoch_save > 0 and trainer.current_epoch % args.epoch_save == 0
            ) or trainer.current_epoch == args.epoch_count - 1:
                if args.data_type == "wds_img":
                    raw_dict = pl_module.state_dict()
                    to_save_dict = {}
                    for k in raw_dict:
                        if k.startswith('encoder.') or k.startswith('decoder.'):
                        if k.startswith("encoder.") or k.startswith("decoder."):
                            to_save_dict[k] = raw_dict[k]
                else:
                    to_save_dict = pl_module.state_dict()

                if args.lora:
                    enable_time_finetune = 'time' in LORA_CONFIG["parts"]
                    enable_ln_finetune = 'ln' in LORA_CONFIG["parts"]
                    enable_time_finetune = "time" in LORA_CONFIG["parts"]
                    enable_ln_finetune = "ln" in LORA_CONFIG["parts"]
                    lora_dict = {}
                    for name, state in to_save_dict.items():
                        if ('.lora_' in name
                                or (enable_time_finetune and '.time_' in name)
                                or (enable_ln_finetune and '.ln' in name)):
                        if (
                            ".lora_" in name
                            or (enable_time_finetune and ".time_" in name)
                            or (enable_ln_finetune and ".ln" in name)
                        ):
                            lora_dict[name] = state
                    to_save_dict = lora_dict

@@ -155,8 +172,10 @@ class train_callback(pl.Callback):
                        f"{args.proj_dir}/rwkv-{args.epoch_begin + trainer.current_epoch}.pth",
                    )
                except Exception as e:
                    print('Error\n\n', e, '\n\n')
            trainer.my_log.write(f"{args.epoch_begin + trainer.current_epoch} {trainer.my_epoch_loss:.6f} {math.exp(trainer.my_epoch_loss):.4f} {trainer.my_lr:.8f} {datetime.datetime.now()} {trainer.current_epoch}\n")
                    print("Error\n\n", e, "\n\n")
            trainer.my_log.write(
                f"{args.epoch_begin + trainer.current_epoch} {trainer.my_epoch_loss:.6f} {math.exp(trainer.my_epoch_loss):.4f} {trainer.my_lr:.8f} {datetime.datetime.now()} {trainer.current_epoch}\n"
            )
            trainer.my_log.flush()

            trainer.my_loss_sum = 0
@@ -178,22 +197,22 @@ def generate_init_weight(model, init_weight_name):
                    mm[k] = src.reshape(mm[k].shape)
                except:
                    tmp = mm[k].squeeze().clone()
                    print(k, src.shape, '-->', mm[k].shape)
                    print(k, src.shape, "-->", mm[k].shape)
                    ss = src.shape[0]
                    dd = tmp.shape[0]
                    for i in range(dd):
                        pos = i / dd * ss
                        if pos >= ss - 1:
                            tmp[i] = src[ss-1]
                            tmp[i] = src[ss - 1]
                        else:
                            p0 = int(math.floor(pos))
                            ii = pos - p0
                            tmp[i] = src[p0] * (1-ii) + src[p0+1] * (ii)
                            tmp[i] = src[p0] * (1 - ii) + src[p0 + 1] * (ii)
                    mm[k] = tmp.reshape(mm[k].shape)
                    sss = src.squeeze().float().cpu().numpy()
                    print(sss[:10], '...', sss[-10:])
                    print(sss[:10], "...", sss[-10:])
                    mmm = mm[k].squeeze().float().cpu().numpy()
                    print(mmm[:10], '...', mmm[-10:])
                    print(mmm[:10], "...", mmm[-10:])

    print(f"Save to {init_weight_name}...")
    torch.save(mm, init_weight_name)
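A note on the learning-rate hunk above: the exp-decay branch is just lr(progress) = lr_init · (lr_final / lr_init)^progress, with a linear warm-up factor over the first w_step steps. A minimal sketch of the same schedule in isolation (a rewrite for illustration, not code from the diff):

```python
import math

def lr_at(step, total_steps, lr_init, lr_final, w_step):
    progress = step / total_steps
    if lr_final == 0 or lr_init == 0:  # linear decay
        lr = lr_init + (lr_final - lr_init) * progress
    else:  # exp decay: lr_init * (lr_final / lr_init) ** progress
        lr = lr_init * math.exp(math.log(lr_final / lr_init) * progress)
    if step < w_step:  # warm-up ramps the multiplier from 0.2x to 1.0x
        lr *= 0.2 + 0.8 * step / w_step
    return lr
```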
@@ -6,6 +6,7 @@ from torch.nn import functional as F
time_slot = {}
time_ref = time.time_ns()


def record_time(name):
    if name not in time_slot:
        time_slot[name] = 1e20
@@ -13,20 +14,23 @@ def record_time(name):
    if tt < time_slot[name]:
        time_slot[name] = tt

class TOKENIZER():
    def __init__(self, WORD_NAME, UNKNOWN_CHAR='\ue083'):
        if 'list' in str(type(WORD_NAME)):

class TOKENIZER:
    def __init__(self, WORD_NAME, UNKNOWN_CHAR="\ue083"):
        if "list" in str(type(WORD_NAME)):
            self.charMode = False
            if WORD_NAME[0] == WORD_NAME[1]:
                from transformers import PreTrainedTokenizerFast

                self.tokenizer = PreTrainedTokenizerFast(tokenizer_file=WORD_NAME[0])
            else:
                from transformers import GPT2TokenizerFast

                self.tokenizer = GPT2TokenizerFast(WORD_NAME[0], WORD_NAME[1])
            self.vocab_size = len(self.tokenizer)
        else:
            self.charMode = True
            with open(WORD_NAME + '.json', "r", encoding="utf-16") as result_file:
            with open(WORD_NAME + ".json", "r", encoding="utf-16") as result_file:
                self.word_table = json.load(result_file)

            self.vocab_size = len(self.word_table)
@@ -37,23 +41,25 @@ class TOKENIZER():
        self.UNKNOWN_CHAR = self.stoi[UNKNOWN_CHAR]

    def refine_context(self, context):
        context = context.strip().split('\n')
        context = context.strip().split("\n")
        for c in range(len(context)):
            context[c] = context[c].strip().strip('\u3000').strip('\r')
        context = list(filter(lambda c: c != '', context))
        context = '\n' + ('\n'.join(context)).strip()
        if context == '':
            context = '\n'
            context[c] = context[c].strip().strip("\u3000").strip("\r")
        context = list(filter(lambda c: c != "", context))
        context = "\n" + ("\n".join(context)).strip()
        if context == "":
            context = "\n"
        return context

    def sample_logits(self, out, x, ctx_len, temperature=1.0, top_p_usual=None, top_p_newline=None):
    def sample_logits(
        self, out, x, ctx_len, temperature=1.0, top_p_usual=None, top_p_newline=None
    ):
        # out[self.UNKNOWN_CHAR] = -float('Inf')
        lastChar = int(x[-1])

        probs = F.softmax(out, dim=-1)

        if self.charMode:
            if self.itos[lastChar] == '\n':
            if self.itos[lastChar] == "\n":
                top_p = top_p_newline
            else:
                top_p = top_p_usual
@@ -81,6 +87,7 @@ class TOKENIZER():
        out = torch.multinomial(probs, num_samples=1)[0]
        return out


def MaybeIsPrime(number):
    if FermatPrimalityTest(number) and MillerRabinPrimalityTest(number):
        return True
@@ -121,7 +128,9 @@ def MillerRabinPrimalityTest(number):
    if (randomNumberWithPower != 1) and (randomNumberWithPower != number - 1):
        iterationNumber = 1

        while (iterationNumber <= timesTwoDividNumber - 1) and (randomNumberWithPower != number - 1):
        while (iterationNumber <= timesTwoDividNumber - 1) and (
            randomNumberWithPower != number - 1
        ):
            randomNumberWithPower = pow(randomNumberWithPower, 2, number)
            iterationNumber = iterationNumber + 1
        if randomNumberWithPower != (number - 1):
@@ -184,7 +184,7 @@ if __name__ == "__main__":
    args.num_sanity_val_steps = 0
    args.check_val_every_n_epoch = int(1e20)
    args.log_every_n_steps = int(1e20)
    args.max_epochs = args.epoch_count # continue forever
    args.max_epochs = args.epoch_count # -1 continue forever
    args.betas = (args.beta1, args.beta2)
    args.real_bsz = int(args.num_nodes) * int(args.devices) * args.micro_bsz
    os.environ["RWKV_T_MAX"] = str(args.ctx_len)
@@ -373,7 +373,7 @@ if __name__ == "__main__":
                for param in module.parameters():
                    param.requires_grad = True
            elif enable_time_finetune and any(
                n.startswith("time") for n, _ in module.named_parameters()
                n.startswith("time") for n, _ in module.named_parameters()
            ):
                for pname, param in module.named_parameters():
                    if pname.startswith("time"):
@@ -381,7 +381,7 @@ if __name__ == "__main__":
                        param.requires_grad = True

    if (
        len(args.load_model) == 0 or args.my_pile_stage == 1
        len(args.load_model) == 0 or args.my_pile_stage == 1
    ): # shall we build the initial weights?
        init_weight_name = f"{args.proj_dir}/rwkv-init.pth"
        generate_init_weight(model, init_weight_name) # save initial weights
@@ -423,8 +423,8 @@ if __name__ == "__main__":
    )

    if (
        args.lr_init > 1e-4
        or trainer.world_size * args.micro_bsz * trainer.accumulate_grad_batches < 8
        args.lr_init > 1e-4
        or trainer.world_size * args.micro_bsz * trainer.accumulate_grad_batches < 8
    ):
        if "I_KNOW_WHAT_IM_DOING" in os.environ:
            if trainer.global_rank == 0:
@@ -459,10 +459,10 @@ if __name__ == "__main__":

    if "deepspeed" in args.strategy:
        trainer.strategy.config["zero_optimization"]["allgather_bucket_size"] = (
            args.ds_bucket_mb * 1000 * 1000
            args.ds_bucket_mb * 1000 * 1000
        )
        trainer.strategy.config["zero_optimization"]["reduce_bucket_size"] = (
            args.ds_bucket_mb * 1000 * 1000
            args.ds_bucket_mb * 1000 * 1000
        )

    # must set shuffle=False, persistent_workers=False (because worker is in another thread)
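The trailing comment is load-bearing: the dataset classes randomize sample positions themselves, so the DataLoader must not reshuffle or keep workers alive across epochs. A sketch of the construction that comment implies (`train_data` and `args` are placeholders for the trainer's own objects):

```python
from torch.utils.data import DataLoader

# shuffle=False: MyDataset.__getitem__ picks positions itself (magic_prime walk / randint)
# persistent_workers=False: the worker runs in another thread and must be recreated per epoch
data_loader = DataLoader(
    train_data,                    # placeholder: the MyDataset instance
    shuffle=False,
    pin_memory=True,
    batch_size=args.micro_bsz,     # placeholder: per-GPU batch size
    num_workers=1,
    persistent_workers=False,
    drop_last=True,
)
```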
202 finetune/lora/v5/cuda/wkv5_cuda.cu vendored Normal file
@@ -0,0 +1,202 @@
#include <stdio.h>
#include <assert.h>
#include "ATen/ATen.h"
typedef at::BFloat16 bf16;

template <typename F>
__global__ void kernel_forward(const int B, const int T, const int C, const int H,
                               const F *__restrict__ const _r, const F *__restrict__ const _k, const F *__restrict__ const _v, const float *__restrict__ _w, const F *__restrict__ _u,
                               F *__restrict__ const _y)
{
    const int b = blockIdx.x / H;
    const int h = blockIdx.x % H;
    const int i = threadIdx.x;
    _w += h*_N_;
    _u += h*_N_;

    __shared__ float r[_N_], k[_N_], u[_N_], w[_N_];
    float state[_N_] = {0};

    __syncthreads();
    w[i] = _w[i];
    u[i] = float(_u[i]);
    __syncthreads();

    for (int t = b*T*C + h*_N_ + i; t < (b+1)*T*C + h*_N_ + i; t += C)
    {
        __syncthreads();
        r[i] = float(_r[t]);
        k[i] = float(_k[t]);
        __syncthreads();

        const float v = float(_v[t]);
        float y = 0;

        #pragma unroll
        for (int j = 0; j < _N_; j+=4)
        {
            const float4& r_ = (float4&)(r[j]);
            const float4& k_ = (float4&)(k[j]);
            const float4& w_ = (float4&)(w[j]);
            const float4& u_ = (float4&)(u[j]);
            float4& s = (float4&)(state[j]);
            float4 x;

            x.x = k_.x * v;
            x.y = k_.y * v;
            x.z = k_.z * v;
            x.w = k_.w * v;

            y += r_.x * (u_.x * x.x + s.x);
            y += r_.y * (u_.y * x.y + s.y);
            y += r_.z * (u_.z * x.z + s.z);
            y += r_.w * (u_.w * x.w + s.w);

            s.x = s.x * w_.x + x.x;
            s.y = s.y * w_.y + x.y;
            s.z = s.z * w_.z + x.z;
            s.w = s.w * w_.w + x.w;
        }
        _y[t] = F(y);
    }
}

template <typename F>
__global__ void kernel_backward(const int B, const int T, const int C, const int H,
                                const F *__restrict__ const _r, const F *__restrict__ const _k, const F *__restrict__ const _v, const float *__restrict__ _w, const float *__restrict__ __w, const F *__restrict__ _u, const F *__restrict__ const _gy,
                                F *__restrict__ const _gr, F *__restrict__ const _gk, F *__restrict__ const _gv, F *__restrict__ const _gw, F *__restrict__ const _gu)
{
    const int b = blockIdx.x / H;
    const int h = blockIdx.x % H;
    const int i = threadIdx.x;
    _w += h*_N_;
    _u += h*_N_;
    __w += h*_N_;

    __shared__ float w_[_N_], u_[_N_];
    __shared__ float r[_N_], k[_N_], v[_N_], gy[_N_];
    __syncthreads();
    w_[i] = _w[i];
    u_[i] = float(_u[i]);
    __syncthreads();

    const float w = w_[i];
    const float ww = __w[i];
    const float u = u_[i];

    float state[_N_] = {0}, saaaa[_N_] = {0}, sbbbb[_N_] = {0}, scccc[_N_] = {0}, sdddd[_N_] = {0};

    float gw = 0, gu = 0;
    const int t000 = b*T*C + h*_N_ + i;
    const int t111 = (b+1)*T*C + h*_N_ + i;
    const int t222 = t111 - 2*C;

    for (int t = t000; t < t111; t += C)
    {
        __syncthreads();
        v[i] = float(_v[t]);
        gy[i] = float(_gy[t]);
        __syncthreads();

        const float k = float(_k[t]);
        float gr = 0, gu_ = 0;

        #pragma unroll
        for (int j = 0; j < _N_; j++)
        {
            float& s = state[j];
            float x = k * v[j];

            gr += (u * x + s) * gy[j];
            gu_ += x * gy[j];
            s = s * w + x;
        }
        _gr[t] = F(gr);
        gu += float(_r[t]) * gu_;
    }
    _gu[b*C + h*_N_ + i] = F(gu);

    for (int t = t000; t < t222; t += C)
    {
        __syncthreads();
        v[i] = float(_v[t]);
        gy[i] = float(_gy[t + 2*C]);
        __syncthreads();

        const float k = float(_k[t]);
        float gw_ = 0;

        #pragma unroll
        for (int j = 0; j < _N_; j++)
        {
            float& s = saaaa[j];
            float& s2 = sbbbb[j];
            float x = k * v[j];

            float tmp = w * (x + s);
            s = tmp;
            s2 = tmp + w * s2;
            gw_ += s2 * gy[j];
        }
        gw += float(_r[t + 2*C]) * gw_;
    }
    _gw[b*C + h*_N_ + i] = F(ww * gw);

    for (int t = t111 - C; t >= t000; t -= C)
    {
        __syncthreads();
        v[i] = float(_v[t]);
        gy[i] = float(_gy[t]);
        __syncthreads();

        const float rr = float(_r[t]);
        float gk = 0;

        #pragma unroll
        for (int j = 0; j < _N_; j++)
        {
            float& s = scccc[j];
            float x = rr * gy[j];

            gk += (u * x + s) * v[j];
            s = x + s * w;
        }
        _gk[t] = F(gk);
    }

    for (int t = t111 - C; t >= t000; t -= C)
    {
        __syncthreads();
        r[i] = float(_r[t]);
        k[i] = float(_k[t]);
        __syncthreads();

        const float gyy = float(_gy[t]);
        float gv = 0;

        #pragma unroll
        for (int j = 0; j < _N_; j++)
        {
            float& s = sdddd[j];
            float x = gyy * r[j];

            gv += (u_[j] * x + s) * k[j];
            s = x + s * w_[j];
        }
        _gv[t] = F(gv);
    }
}

void cuda_forward(int B, int T, int C, int H, bf16 *r, bf16 *k, bf16 *v, float *w, bf16 *u, bf16 *y)
{
    assert(H*_N_ == C);
    assert(_N_%4 == 0);
    kernel_forward<<<dim3(B * H), dim3(_N_)>>>(B, T, C, H, r, k, v, w, u, y);
}

void cuda_backward(int B, int T, int C, int H, bf16 *r, bf16 *k, bf16 *v, float *w, float *ww, bf16 *u, bf16 *gy, bf16 *gr, bf16 *gk, bf16 *gv, bf16 *gw, bf16 *gu)
{
    assert(H*_N_ == C);
    assert(_N_%4 == 0);
    kernel_backward<<<dim3(B * H), dim3(_N_)>>>(B, T, C, H, r, k, v, w, ww, u, gy, gr, gk, gv, gw, gu);
}
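What kernel_forward computes, stripped of the float4 vectorization: per head, an N×N state carries decayed outer products of k and v, and each output channel mixes the current contribution through the bonus u. A slow pure-PyTorch reference (our sketch, not part of the diff; the kernel additionally keeps the state in fp32 while r/k/v/y are bf16):

```python
import torch

def wkv5_reference(r, k, v, w, u):
    # r, k, v: (B, T, H, N) per-head vectors; w, u: (H, N) decay and bonus.
    B, T, H, N = r.shape
    y = torch.zeros(B, T, H, N)
    state = torch.zeros(B, H, N, N)  # state[..., i, j]: output channel i, key channel j
    for t in range(T):
        kv = v[:, t, :, :, None] * k[:, t, :, None, :]  # x = k_j * v_i, an outer product
        att = u[None, :, None, :] * kv + state          # u * x + s from the inner loop
        y[:, t] = (att * r[:, t, :, None, :]).sum(-1)   # y_i = sum_j r_j * (u_j * x_j + s_j)
        state = w[None, :, None, :] * state + kv        # s = s * w + x
    return y
```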
22 finetune/lora/v5/cuda/wkv5_op.cpp vendored Normal file
@@ -0,0 +1,22 @@
#include <torch/extension.h>
#include "ATen/ATen.h"
typedef at::BFloat16 bf16;

void cuda_forward(int B, int T, int C, int H, bf16 *r, bf16 *k, bf16 *v, float *w, bf16 *u, bf16 *y);
void cuda_backward(int B, int T, int C, int H, bf16 *r, bf16 *k, bf16 *v, float *w, float *ww, bf16 *u, bf16 *gy, bf16 *gr, bf16 *gk, bf16 *gv, bf16 *gw, bf16 *gu);

void forward(int64_t B, int64_t T, int64_t C, int64_t H, torch::Tensor &r, torch::Tensor &k, torch::Tensor &v, torch::Tensor &w, torch::Tensor &u, torch::Tensor &y) {
    cuda_forward(B, T, C, H, r.data_ptr<bf16>(), k.data_ptr<bf16>(), v.data_ptr<bf16>(), w.data_ptr<float>(), u.data_ptr<bf16>(), y.data_ptr<bf16>());
}
void backward(int64_t B, int64_t T, int64_t C, int64_t H, torch::Tensor &r, torch::Tensor &k, torch::Tensor &v, torch::Tensor &w, torch::Tensor &ww, torch::Tensor &u, torch::Tensor &gy, torch::Tensor &gr, torch::Tensor &gk, torch::Tensor &gv, torch::Tensor &gw, torch::Tensor &gu) {
    cuda_backward(B, T, C, H, r.data_ptr<bf16>(), k.data_ptr<bf16>(), v.data_ptr<bf16>(), w.data_ptr<float>(), ww.data_ptr<float>(), u.data_ptr<bf16>(), gy.data_ptr<bf16>(), gr.data_ptr<bf16>(), gk.data_ptr<bf16>(), gv.data_ptr<bf16>(), gw.data_ptr<bf16>(), gu.data_ptr<bf16>());
}
PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
    m.def("forward", &forward, "wkv5 forward");
    m.def("backward", &backward, "wkv5 backward");
}

TORCH_LIBRARY(wkv5, m) {
    m.def("forward", forward);
    m.def("backward", backward);
}
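These bindings are JIT-compiled with torch.utils.cpp_extension.load, as finetune/lora/v5/src/model.py does further down. A minimal standalone sketch of building and calling them (HEAD_SIZE 64 and the tensor shapes are our assumptions for illustration):

```python
import torch
from torch.utils.cpp_extension import load

HEAD_SIZE = 64  # must match -D_N_; training reads it from RWKV_HEAD_SIZE_A
wkv5 = load(
    name="wkv5",
    sources=["finetune/lora/v5/cuda/wkv5_op.cpp", "finetune/lora/v5/cuda/wkv5_cuda.cu"],
    extra_cuda_cflags=["-res-usage", "--use_fast_math", "-O3", f"-D_N_={HEAD_SIZE}"],
)

B, T, H = 1, 8, 2
C = H * HEAD_SIZE                      # cuda_forward asserts H * _N_ == C
bf = dict(device="cuda", dtype=torch.bfloat16)
r, k, v = (torch.randn(B, T, C, **bf).contiguous() for _ in range(3))
u = torch.randn(H, HEAD_SIZE, **bf)
w = torch.rand(H, HEAD_SIZE, device="cuda")  # per-channel decay, float32 per the signature
y = torch.empty(B, T, C, **bf)
wkv5.forward(B, T, C, H, r, k, v, w, u, y)   # output written into y in place
```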
0 finetune/lora/v5/src/__init__.py vendored Normal file
303 finetune/lora/v5/src/binidx.py vendored Normal file
@@ -0,0 +1,303 @@
from lib2to3.pgen2 import token
import os
import torch
import numpy as np
import shutil
import struct
from functools import lru_cache
from itertools import accumulate


def print_rank_0(*message):
    pass
    # """If distributed is initialized print only on rank 0."""
    # if torch.distributed.is_initialized():
    #     if torch.distributed.get_rank() == 0:
    #         print(*message, flush=True)
    # else:
    #     print(*message, flush=True)


def _warmup_mmap_file(path):
    pass
    # with open(path, "rb") as stream:
    #     while stream.read(100 * 1024 * 1024):
    #         pass


dtypes = {
    1: np.uint8,
    2: np.int8,
    3: np.int16,
    4: np.int32,
    5: np.int64,
    6: float,
    7: np.double,
    8: np.uint16,
}


def code(dtype):
    for k in dtypes.keys():
        if dtypes[k] == dtype:
            return k
    raise ValueError(dtype)


def index_file_path(prefix_path):
    return prefix_path + ".idx"


def data_file_path(prefix_path):
    return prefix_path + ".bin"


class MMapIndexedDataset(torch.utils.data.Dataset):
    class Index(object):
        _HDR_MAGIC = b"MMIDIDX\x00\x00"

        @classmethod
        def writer(cls, path, dtype):
            class _Writer(object):
                def __enter__(self):
                    self._file = open(path, "wb")

                    # Write Magic string so we can check the file format then opening it again.
                    self._file.write(cls._HDR_MAGIC)
                    # Write version number
                    # Little endian unsigned 64 Bit integer
                    self._file.write(struct.pack("<Q", 1))
                    # Little endian unsigned 8 Bit integer
                    self._file.write(struct.pack("<B", code(dtype)))

                    return self

                @staticmethod
                def _get_pointers(sizes):
                    dtype_size = dtype().itemsize
                    address = 0
                    pointers = []

                    for size in sizes:
                        pointers.append(address)
                        address += size * dtype_size

                    return pointers

                def write(self, sizes, doc_idx):
                    pointers = self._get_pointers(sizes)

                    # Little endian unsigned 64 Bit integer
                    self._file.write(struct.pack("<Q", len(sizes)))
                    # Little endian unsigned 64 Bit integer
                    self._file.write(struct.pack("<Q", len(doc_idx)))

                    sizes = np.array(sizes, dtype=np.int32)
                    self._file.write(sizes.tobytes(order="C"))
                    del sizes

                    pointers = np.array(pointers, dtype=np.int64)
                    self._file.write(pointers.tobytes(order="C"))
                    del pointers

                    doc_idx = np.array(doc_idx, dtype=np.int64)
                    self._file.write(doc_idx.tobytes(order="C"))

                def __exit__(self, exc_type, exc_val, exc_tb):
                    self._file.close()

            return _Writer()

        def __init__(self, path, skip_warmup=False):
            with open(path, "rb") as stream:
                magic_test = stream.read(9)
                assert self._HDR_MAGIC == magic_test, (
                    "Index file doesn't match expected format. "
                    "Make sure that --dataset-impl is configured properly."
                )
                # Little endian unsigned 64 Bit integer
                version = struct.unpack("<Q", stream.read(8))
                assert (1,) == version

                # Little endian unsigned 8 Bit integer
                (dtype_code,) = struct.unpack("<B", stream.read(1))
                self._dtype = dtypes[dtype_code]
                self._dtype_size = self._dtype().itemsize

                self._len = struct.unpack("<Q", stream.read(8))[0]
                self._doc_count = struct.unpack("<Q", stream.read(8))[0]
                offset = stream.tell()

            if not skip_warmup:
                print_rank_0(" warming up index mmap file...")
                _warmup_mmap_file(path)

            self._bin_buffer_mmap = np.memmap(path, mode="r", order="C")
            self._bin_buffer = memoryview(self._bin_buffer_mmap)
            print_rank_0(" reading sizes...")
            self._sizes = np.frombuffer(
                self._bin_buffer, dtype=np.int32, count=self._len, offset=offset
            )
            print_rank_0(" reading pointers...")
            self._pointers = np.frombuffer(
                self._bin_buffer,
                dtype=np.int64,
                count=self._len,
                offset=offset + self._sizes.nbytes,
            )
            print_rank_0(" reading document index...")
            self._doc_idx = np.frombuffer(
                self._bin_buffer,
                dtype=np.int64,
                count=self._doc_count,
                offset=offset + self._sizes.nbytes + self._pointers.nbytes,
            )

        def __del__(self):
            self._bin_buffer_mmap._mmap.close()
            del self._bin_buffer_mmap

        @property
        def dtype(self):
            return self._dtype

        @property
        def sizes(self):
            return self._sizes

        @property
        def doc_idx(self):
            return self._doc_idx

        @lru_cache(maxsize=8)
        def __getitem__(self, i):
            return self._pointers[i], self._sizes[i]

        def __len__(self):
            return self._len

    def __init__(self, path, skip_warmup=False):
        super().__init__()

        self._path = None
        self._index = None
        self._bin_buffer = None

        self._do_init(path, skip_warmup)

    def __getstate__(self):
        return self._path

    def __setstate__(self, state):
        self._do_init(state)

    def _do_init(self, path, skip_warmup):
        self._path = path
        self._index = self.Index(index_file_path(self._path), skip_warmup)

        if not skip_warmup:
            print_rank_0(" warming up data mmap file...")
            _warmup_mmap_file(data_file_path(self._path))
        print_rank_0(" creating numpy buffer of mmap...")
        self._bin_buffer_mmap = np.memmap(
            data_file_path(self._path), mode="r", order="C"
        )
        print_rank_0(" creating memory view of numpy buffer...")
        self._bin_buffer = memoryview(self._bin_buffer_mmap)

    def __del__(self):
        self._bin_buffer_mmap._mmap.close()
        del self._bin_buffer_mmap
        del self._index

    def __len__(self):
        return len(self._index)

    # @lru_cache(maxsize=8)
    def __getitem__(self, idx):
        if isinstance(idx, int):
            ptr, size = self._index[idx]
            np_array = np.frombuffer(
                self._bin_buffer, dtype=self._index.dtype, count=size, offset=ptr
            )
            return np_array
        elif isinstance(idx, slice):
            start, stop, step = idx.indices(len(self))
            if step != 1:
                raise ValueError("Slices into indexed_dataset must be contiguous")
            ptr = self._index._pointers[start]
            sizes = self._index._sizes[idx]
            offsets = list(accumulate(sizes))
            total_size = sum(sizes)
            np_array = np.frombuffer(
                self._bin_buffer, dtype=self._index.dtype, count=total_size, offset=ptr
            )
            sents = np.split(np_array, offsets[:-1])
            return sents

    def get(self, idx, offset=0, length=None):
        """Retrieves a single item from the dataset with the option to only
        return a portion of the item.

        get(idx) is the same as [idx] but get() does not support slicing.
        """
        ptr, size = self._index[idx]
        if length is None:
            length = size - offset
        ptr += offset * np.dtype(self._index.dtype).itemsize
        np_array = np.frombuffer(
            self._bin_buffer, dtype=self._index.dtype, count=length, offset=ptr
        )
        return np_array

    def pad(self, idx, length=None):
        ptr, size = self._index[idx]
        try:
            np_array = np.frombuffer(
                self._bin_buffer, dtype=self._index.dtype, count=length, offset=ptr
            )
        except:
            np_array = np.frombuffer(
                self._bin_buffer, dtype=self._index.dtype, count=size, offset=ptr
            )
            ptr0, _ = self._index[0]
            np_array0 = np.frombuffer(
                self._bin_buffer,
                dtype=self._index.dtype,
                count=length - size,
                offset=ptr0,
            )
            np_array = np.append(np_array, np_array0)
        return np_array

    def only(self, idx):
        ptr, size = self._index[idx]
        np_array = np.frombuffer(
            self._bin_buffer, dtype=self._index.dtype, count=size, offset=ptr
        )

        return np_array

    @property
    def sizes(self):
        return self._index.sizes

    @property
    def doc_idx(self):
        return self._index.doc_idx

    def get_doc_idx(self):
        return self._index._doc_idx

    def set_doc_idx(self, doc_idx_):
        self._index._doc_idx = doc_idx_

    @property
    def supports_prefetch(self):
        return False

    @staticmethod
    def exists(path):
        return os.path.exists(index_file_path(path)) and os.path.exists(
            data_file_path(path)
        )
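Typical use of MMapIndexedDataset, matching how dataset.py below consumes it: open the .idx/.bin pair by their shared prefix and pull ctx_len + 1 tokens with get(). The file prefix here is hypothetical:

```python
from binidx import MMapIndexedDataset  # i.e. finetune/lora/v5/src/binidx.py

data = MMapIndexedDataset("data/my_corpus_text_document")  # reads .idx, mmaps .bin
n_tokens = len(data._bin_buffer) // data._index._dtype_size  # same count dataset.py uses
chunk = data.get(idx=0, offset=1024, length=513)  # ctx_len + 1 tokens from document 0
x, y = chunk[:-1], chunk[1:]  # inputs and shifted next-token targets
```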
241 finetune/lora/v5/src/dataset.py vendored Normal file
@@ -0,0 +1,241 @@
########################################################################################################
|
||||
# The RWKV Language Model - https://github.com/BlinkDL/RWKV-LM
|
||||
########################################################################################################
|
||||
|
||||
import json, math, random, os, sys
|
||||
import numpy as np
|
||||
import torch
|
||||
from torch.utils.data import Dataset
|
||||
from pytorch_lightning.utilities import rank_zero_info
|
||||
from .binidx import MMapIndexedDataset
|
||||
from .utils import MaybeIsPrime
|
||||
|
||||
|
||||
class MyDataset(Dataset):
|
||||
def __init__(self, args):
|
||||
self.args = args
|
||||
|
||||
if args.data_type == "binidx":
|
||||
self.vocab_size = args.vocab_size
|
||||
rank_zero_info(
|
||||
f"Current vocab size = {self.vocab_size} (make sure it's correct)"
|
||||
)
|
||||
|
||||
if args.my_pile_version == 1:
|
||||
self.data = MMapIndexedDataset(args.data_file)
|
||||
self.data_size = (
|
||||
len(self.data._bin_buffer) // self.data._index._dtype_size
|
||||
)
|
||||
rank_zero_info(f"Data has {self.data_size} tokens.")
|
||||
elif args.my_pile_version == 2:
|
||||
data_list = (
|
||||
open(args.data_file, "r", encoding="utf-8")
|
||||
.read()
|
||||
.strip()
|
||||
.split("\n")
|
||||
)
|
||||
data_list = [i.strip().split(" ") for i in data_list]
|
||||
self.data = []
|
||||
self.data_size = int(data_list[-1][-1])
|
||||
rank_zero_info(f"Data has {self.data_size} chunks.")
|
||||
for d in data_list:
|
||||
data = MMapIndexedDataset(d[0])
|
||||
data_size = len(data._bin_buffer) // data._index._dtype_size
|
||||
assert (data_size - args.ctx_len) == int(d[1])
|
||||
self.data += [[int(d[-1]), int(d[1]), data]]
|
||||
# rank_zero_info(self.data)
|
||||
|
||||
if args.my_qa_mask > 0:
|
||||
# self.data_pile = MMapIndexedDataset('/fsx/pile/pile_20B_tokenizer_text_document')
|
||||
self.data_pile = MMapIndexedDataset(
|
||||
"/fsx/pile_deduped/pile_0.87_deduped_text_document"
|
||||
)
|
||||
self.data_pile_size = (
|
||||
len(self.data_pile._bin_buffer) // self.data._index._dtype_size
|
||||
)
|
||||
else:
|
||||
self.data_pile = None
|
||||
self.data_pile_size = 0
|
||||
|
||||
if args.my_pile_stage > 0:
|
||||
# assert self.data_size == 332115325534 and self.vocab_size == 50277
|
||||
self.samples_per_epoch = args.epoch_steps * args.real_bsz
|
||||
assert self.samples_per_epoch == 40320
|
||||
rank_zero_info(
|
||||
f"########## Pile 20b-tokenized stage {args.my_pile_stage} ##########"
|
||||
)
|
||||
dataset_slot = self.data_size // args.ctx_len
|
||||
if args.my_pile_stage != 4:
|
||||
assert MaybeIsPrime(args.magic_prime)
|
||||
assert args.magic_prime % 3 == 2
|
||||
assert (
|
||||
args.magic_prime / dataset_slot > 0.99
|
||||
and args.magic_prime / dataset_slot <= 1
|
||||
)
|
||||
elif args.data_type == "numpy":
|
||||
self.data = np.load(args.data_file).astype("int")
|
||||
self.vocab_size = args.vocab_size
|
||||
rank_zero_info(
|
||||
f"Current vocab size = {self.vocab_size} (make sure it's correct)"
|
||||
)
|
||||
self.data_size = len(self.data)
|
||||
rank_zero_info(f"Data has {self.data_size} tokens.")
|
||||
elif args.data_type == "uint16":
|
||||
self.data = (
|
||||
np.fromfile(args.data_file, dtype=np.uint16)
|
||||
.astype("int32")
|
||||
.reshape(-1, args.my_sample_len)
|
||||
)
|
||||
self.vocab_size = args.vocab_size
|
||||
rank_zero_info(
|
||||
f"Current vocab size = {self.vocab_size} (make sure it's correct)"
|
||||
)
|
||||
self.data_size = self.data.shape[0]
|
||||
rank_zero_info(f"Data has {self.data_size} samples.")
|
||||
else:
|
||||
if args.data_type == "dummy":
|
||||
rank_zero_info("Building dummy data...")
|
||||
self.data = ""
|
||||
for i in range(100000):
|
||||
aa = (i) % 10000
|
||||
bb = (i * i) % 10000
|
||||
cc = aa + bb
|
||||
self.data += f".{aa}+{bb}={cc}."
|
||||
else:
|
||||
self.data = open(args.data_file, "r", encoding=args.data_type).read()
|
||||
rank_zero_info("Building token list...")
|
||||
unique = sorted(list(set(self.data)))
|
||||
self.vocab_size = len(unique)
|
||||
# rank_zero_info()
|
||||
# for u in unique:
|
||||
# print(u, end=' ')
|
||||
# rank_zero_info('\n\n')
|
||||
xx = 0
|
||||
xxObj = {}
|
||||
for u in unique:
|
||||
xxObj[xx] = u
|
||||
xx += 1
|
||||
with open(
|
||||
f"{args.proj_dir}/vocab.json", "w", encoding="utf-8"
|
||||
) as vocab_file:
|
||||
vocab_file.write(json.dumps(xxObj, ensure_ascii=False))
|
||||
self.data_size = len(self.data)
|
||||
rank_zero_info(
|
||||
f"Data has {self.data_size} tokens, {self.vocab_size} vocab size."
|
||||
)
|
||||
self.stoi = {ch: i for i, ch in enumerate(unique)}
|
||||
self.itos = {i: ch for i, ch in enumerate(unique)}
|
||||
|
||||
def __len__(self):
|
||||
return self.args.epoch_steps * self.args.micro_bsz
|
||||
|
||||
def __getitem__(self, idx):
|
||||
args = self.args
|
||||
rank = self.global_rank
|
||||
epoch = self.real_epoch
|
||||
world_size = self.world_size
|
||||
# print(f"epoch {epoch} idx {idx} rank {rank}/{world_size}")
|
||||
|
||||
if args.data_type == "uint16":
|
||||
i = np.random.randint(0, self.data_size - 1)
|
||||
dix = self.data[i]
|
||||
x = torch.tensor(dix[:-1], dtype=torch.long)
|
||||
y = torch.tensor(dix[1:], dtype=torch.long)
|
||||
else:
|
||||
ctx_len = args.ctx_len
|
||||
req_len = ctx_len + 1
|
||||
magic_prime = args.magic_prime
|
||||
data = self.data
|
||||
|
||||
if args.my_pile_stage > 0:
|
||||
ii = 1 + epoch * self.samples_per_epoch + (idx * world_size) + rank
|
||||
|
||||
if args.my_qa_mask > 0:
|
||||
ii_orig = ii
|
||||
if ii % 2 == 0:
|
||||
ii = -1
|
||||
data = self.data_pile
|
||||
else:
|
||||
ii = ii // 2
|
||||
if data == self.data_pile:
|
||||
i = np.random.randint(0, self.data_pile_size - req_len)
|
||||
else:
|
||||
if args.my_pile_stage == 4 or ii < args.my_random_steps:
|
||||
# cheat: pick a random spot in dataset
|
||||
if args.my_pile_version == 1:
|
||||
i = np.random.randint(0, self.data_size - req_len)
|
||||
else:
|
||||
i = np.random.randint(0, self.data_size)
|
||||
else:
|
||||
ii = ii - args.my_random_steps
|
||||
factor = (math.sqrt(5) - 1) / 2
|
||||
factor = int(magic_prime * factor)
|
||||
i = ((factor * ii * ii * ii) % magic_prime) * ctx_len
|
||||
i = i + args.my_pile_shift
|
||||
# print(f"epoch {epoch} idx {idx} rank {rank}/{world_size} ii {ii} pos {round(i / self.data_size, 3)}")
|
||||
else:
|
||||
# cheat: pick a random spot in dataset
|
||||
i = np.random.randint(0, self.data_size - req_len)
|
||||
|
||||
if args.data_type == "binidx":
|
||||
if args.my_pile_version == 1:
|
||||
dix = data.get(idx=0, offset=i, length=req_len).astype(int)
|
||||
else:
|
||||
# self.data : cutoff, chunk_count, data
|
||||
for j in range(len(data)):
|
||||
if i < data[j][0]:
|
||||
ii = i
|
||||
i = (i - (data[j - 1][0] if j > 0 else 0)) % data[j][1]
|
||||
dix = (
|
||||
data[j][2]
|
||||
.get(idx=0, offset=i, length=req_len)
|
||||
.astype(int)
|
||||
)
|
||||
# print(ii, j, i)
|
||||
break
|
||||
elif args.data_type == "numpy":
|
||||
dix = data[i : i + req_len]
|
||||
else:
|
||||
dix = [self.stoi[s] for s in data[i : i + req_len]]
|
||||
|
||||
if args.my_qa_mask == 1:
|
||||
if data == self.data_pile:
|
||||
z = [1] * ctx_len
|
||||
else:
|
||||
z = [0] * ctx_len
|
||||
z_sum = 0
|
||||
isGood = False
|
||||
for i in range(3, ctx_len):
|
||||
if (
|
||||
dix[i] == 27
|
||||
and dix[i - 1] == 34
|
||||
and dix[i - 2] == 187
|
||||
and dix[i - 3] == 187
|
||||
):
|
||||
isGood = True
|
||||
if dix[i] == 0:
|
||||
isGood = False
|
||||
if isGood:
|
||||
z[i] = 1
|
||||
z_sum += 1
|
||||
if z_sum == 0:
|
||||
z = [1] * ctx_len
|
||||
i = np.random.randint(0, self.data_pile_size - req_len)
|
||||
dix = self.data_pile.get(
|
||||
idx=0, offset=i, length=req_len
|
||||
).astype(int)
|
||||
z = torch.tensor(z, dtype=torch.bfloat16)
|
||||
|
||||
x = torch.tensor(dix[:-1], dtype=torch.long)
|
||||
y = torch.tensor(dix[1:], dtype=torch.long)
|
||||
|
||||
# if ii_orig < 50:
|
||||
# # if rank == 1:
|
||||
# print('rank', rank, 'i', ii_orig, ii, i, 'x', x[:5], '...', x[-5:])
|
||||
# else:
|
||||
# exit(0)
|
||||
|
||||
if args.my_qa_mask == 1:
|
||||
return x, y, z
|
||||
|
||||
return x, y
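
# Note on the sampler above: args.magic_prime is a prime with
# magic_prime % 3 == 2, so ii -> (factor * ii**3) % magic_prime permutes the
# residues 0..magic_prime-1, and every ctx_len-aligned window is visited
# exactly once per pass in a pseudo-random order. A minimal standalone check
# of that permutation property (p = 11 and the golden-ratio factor below are
# illustrative stand-ins, not values from a real run):
#
#     import math
#     p = 11  # stands in for args.magic_prime; any prime with p % 3 == 2 works
#     factor = int(p * (math.sqrt(5) - 1) / 2)
#     positions = {(factor * ii**3) % p for ii in range(p)}
#     assert len(positions) == p  # each offset 0..p-1 is hit exactly once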
819
finetune/lora/v5/src/model.py
vendored
Normal file
@@ -0,0 +1,819 @@
########################################################################################################
# The RWKV Language Model - https://github.com/BlinkDL/RWKV-LM
########################################################################################################

import functools
import os, math, gc, importlib
import torch

# torch._C._jit_set_profiling_executor(True)
# torch._C._jit_set_profiling_mode(True)
import torch.nn as nn
from torch.utils.checkpoint import checkpoint as torch_checkpoint
from torch.nn import functional as F
import pytorch_lightning as pl
from pytorch_lightning.utilities import rank_zero_info, rank_zero_only
from pytorch_lightning.strategies import DeepSpeedStrategy

if importlib.util.find_spec("deepspeed"):
    import deepspeed
    from deepspeed.ops.adam import DeepSpeedCPUAdam, FusedAdam


# from deepspeed.runtime.fp16.onebit.zoadam import ZeroOneAdam

# lora-config
LORA_CONFIG = {
    "r": 0,
    "alpha": 0,
    "dropout": 0,
    "parts": {"att", "ln", "time"},
}

try:
    print("RWKV_MY_TESTING", os.environ["RWKV_MY_TESTING"])
except:
    os.environ["RWKV_MY_TESTING"] = ""


def __nop(ob):
    return ob


MyModule = nn.Module
MyFunction = __nop
if os.environ["RWKV_JIT_ON"] == "1":
    MyModule = torch.jit.ScriptModule
    MyFunction = torch.jit.script_method


########################################################################################################
# CUDA Kernel
########################################################################################################

from torch.utils.cpp_extension import load

HEAD_SIZE = int(os.environ["RWKV_HEAD_SIZE_A"])
wkv5_cuda = load(
    name="wkv5",
    sources=[
        "finetune/lora/v5/cuda/wkv5_op.cpp",
        f"finetune/lora/v5/cuda/wkv5_cuda.cu",
    ],
    verbose=True,
    extra_cuda_cflags=[
        "-res-usage",
        "--use_fast_math",
        "-O3",
        "-Xptxas -O3",
        "--extra-device-vectorization",
        f"-D_N_={HEAD_SIZE}",
    ],
)


class WKV_5(torch.autograd.Function):
    @staticmethod
    def forward(ctx, B, T, C, H, r, k, v, w, u):
        with torch.no_grad():
            assert r.dtype == torch.bfloat16
            assert k.dtype == torch.bfloat16
            assert v.dtype == torch.bfloat16
            assert w.dtype == torch.bfloat16
            assert u.dtype == torch.bfloat16
            assert HEAD_SIZE == C // H
            ctx.B = B
            ctx.T = T
            ctx.C = C
            ctx.H = H
            assert r.is_contiguous()
            assert k.is_contiguous()
            assert v.is_contiguous()
            assert w.is_contiguous()
            assert u.is_contiguous()
            ew = (-torch.exp(w.float())).contiguous()
            eew = (torch.exp(ew)).contiguous()
            ctx.save_for_backward(r, k, v, eew, ew, u)
            y = torch.empty(
                (B, T, C),
                device=r.device,
                dtype=torch.bfloat16,
                memory_format=torch.contiguous_format,
            )  # .uniform_(-1, 1)
            wkv5_cuda.forward(B, T, C, H, r, k, v, eew, u, y)
            return y

    @staticmethod
    def backward(ctx, gy):
        with torch.no_grad():
            assert gy.dtype == torch.bfloat16
            B = ctx.B
            T = ctx.T
            C = ctx.C
            H = ctx.H
            assert gy.is_contiguous()
            r, k, v, eew, ew, u = ctx.saved_tensors
            gr = torch.empty(
                (B, T, C),
                device=gy.device,
                requires_grad=False,
                dtype=torch.bfloat16,
                memory_format=torch.contiguous_format,
            )  # .uniform_(-1, 1)
            gk = torch.empty(
                (B, T, C),
                device=gy.device,
                requires_grad=False,
                dtype=torch.bfloat16,
                memory_format=torch.contiguous_format,
            )  # .uniform_(-1, 1)
            gv = torch.empty(
                (B, T, C),
                device=gy.device,
                requires_grad=False,
                dtype=torch.bfloat16,
                memory_format=torch.contiguous_format,
            )  # .uniform_(-1, 1)
            gw = torch.empty(
                (B, C),
                device=gy.device,
                requires_grad=False,
                dtype=torch.bfloat16,
                memory_format=torch.contiguous_format,
            )  # .uniform_(-1, 1)
            gu = torch.empty(
                (B, C),
                device=gy.device,
                requires_grad=False,
                dtype=torch.bfloat16,
                memory_format=torch.contiguous_format,
            )  # .uniform_(-1, 1)
            wkv5_cuda.backward(B, T, C, H, r, k, v, eew, ew, u, gy, gr, gk, gv, gw, gu)
            gw = torch.sum(gw, 0).view(H, C // H)
            gu = torch.sum(gu, 0).view(H, C // H)
            return (None, None, None, None, gr, gk, gv, gw, gu)


def RUN_CUDA_RWKV5(B, T, C, H, r, k, v, w, u):
    return WKV_5.apply(B, T, C, H, r, k, v, w, u)
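
# For readers without the CUDA toolchain: a reference sketch of the recurrence
# the wkv5 kernel evaluates. This assumes w is already the per-channel decay
# exp(-exp(time_decay)), as computed in WKV_5.forward; wkv5_reference is an
# illustrative helper, not part of this repo, and trades all speed for clarity.
def wkv5_reference(B, T, C, H, r, k, v, w, u):
    N = C // H
    r = r.view(B, T, H, N).float()
    k = k.view(B, T, H, N).float()
    v = v.view(B, T, H, N).float()
    w = w.reshape(H, N).float()
    u = u.reshape(H, N).float()
    y = torch.zeros(B, T, H, N, device=r.device)
    for b in range(B):
        for h in range(H):
            # state[i, j] carries the key-channel-i -> value-channel-j history
            state = torch.zeros(N, N, device=r.device)
            for t in range(T):
                a = k[b, t, h, :, None] * v[b, t, h, None, :]  # outer product k v^T
                # the current token gets the "bonus" u; the past arrives via state
                y[b, t, h] = r[b, t, h] @ (u[h][:, None] * a + state)
                state = w[h][:, None] * state + a  # per-key-channel decay
    return y.view(B, T, C)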

#################################################################
class LoraLinear(nn.Module):
    def __init__(self, in_features: int, out_features: int, bias: bool):
        super().__init__()

        self.weight = nn.Parameter(torch.empty((out_features, in_features)))
        assert bias == False, "Biased LoraLinear not supported"

        r, alpha, dropout = (
            LORA_CONFIG["r"],
            LORA_CONFIG["alpha"],
            LORA_CONFIG["dropout"],
        )
        self.lora_A = nn.Parameter(torch.empty(r, in_features))
        self.lora_B = nn.Parameter(torch.empty(out_features, r))
        self.lora_dropout = nn.Dropout(dropout)
        self.scaling = alpha / r

        nn.init.kaiming_uniform_(self.weight, a=math.sqrt(5))
        nn.init.kaiming_uniform_(self.lora_A, a=math.sqrt(5))
        nn.init.zeros_(self.lora_B)

    def forward(self, x):
        return F.linear(x, self.weight) + self.scaling * F.linear(
            F.linear(self.lora_dropout(x), self.lora_A), self.lora_B
        )
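
# Because lora_B starts at zero, a fresh LoraLinear behaves exactly like its
# frozen base layer; training only moves the low-rank path. After finetuning,
# the adapter can be folded back into a dense weight: W' = W + scaling * B @ A.
# merge_lora is an illustrative helper, not an API of this repo.
@torch.no_grad()
def merge_lora(layer: LoraLinear) -> nn.Linear:
    merged = nn.Linear(layer.weight.shape[1], layer.weight.shape[0], bias=False)
    merged.weight.copy_(layer.weight + layer.scaling * layer.lora_B @ layer.lora_A)
    return merged  # inference then costs the same as a plain nn.Linear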

@functools.wraps(LoraLinear)
def make_linear_att(*args, **kwargs):
    if "att" in LORA_CONFIG["parts"] and LORA_CONFIG["r"] > 0:
        return LoraLinear(*args, **kwargs)
    else:
        return nn.Linear(*args, **kwargs)


@functools.wraps(LoraLinear)
def make_linear_ffn(*args, **kwargs):
    if "ffn" in LORA_CONFIG["parts"] and LORA_CONFIG["r"] > 0:
        return LoraLinear(*args, **kwargs)
    else:
        return nn.Linear(*args, **kwargs)


########################################################################################################


class RWKV_TimeMix_RWKV5(MyModule):
    def __init__(self, args, layer_id):
        super().__init__()
        self.args = args
        self.layer_id = layer_id

        self.head_size = args.head_size_a
        assert HEAD_SIZE == self.head_size  # change HEAD_SIZE to match args.head_size_a
        self.n_head = args.dim_att // self.head_size
        assert args.dim_att % self.n_head == 0
        self.head_size_divisor = args.head_size_divisor

        with torch.no_grad():
            ratio_0_to_1 = layer_id / (args.n_layer - 1)  # 0 to 1
            ratio_1_to_almost0 = 1.0 - (layer_id / args.n_layer)  # 1 to ~0
            ddd = torch.ones(1, 1, args.n_embd)
            for i in range(args.n_embd):
                ddd[0, 0, i] = i / args.n_embd

            # fancy time_mix
            self.time_mix_k = nn.Parameter(torch.pow(ddd, ratio_1_to_almost0))
            self.time_mix_v = nn.Parameter(
                torch.pow(ddd, ratio_1_to_almost0) + 0.3 * ratio_0_to_1
            )
            self.time_mix_r = nn.Parameter(torch.pow(ddd, 0.5 * ratio_1_to_almost0))
            self.time_mix_g = nn.Parameter(torch.pow(ddd, 0.5 * ratio_1_to_almost0))

            # fancy time_decay
            decay_speed = torch.ones(args.dim_att)
            for n in range(args.dim_att):
                decay_speed[n] = -6 + 5 * (n / (args.dim_att - 1)) ** (
                    0.7 + 1.3 * ratio_0_to_1
                )
            self.time_decay = nn.Parameter(
                decay_speed.reshape(self.n_head, self.head_size)
            )
            # print(layer_id, self.time_decay.flatten()[:3].cpu().numpy(), '...', self.time_decay.flatten()[-3:].cpu().numpy())

            tmp = torch.zeros(args.dim_att)
            for n in range(args.dim_att):
                zigzag = ((n + 1) % 3 - 1) * 0.1
                tmp[n] = ratio_0_to_1 * (1 - (n / (args.dim_att - 1))) + zigzag

            self.time_faaaa = nn.Parameter(tmp.reshape(self.n_head, self.head_size))

        self.time_shift = nn.ZeroPad2d((0, 0, 1, -1))

        self.receptance = make_linear_att(args.n_embd, args.dim_att, bias=False)
        self.key = make_linear_att(args.n_embd, args.dim_att, bias=False)
        self.value = make_linear_att(args.n_embd, args.dim_att, bias=False)

        self.output = nn.Linear(args.dim_att, args.n_embd, bias=False)
        self.gate = make_linear_att(args.n_embd, args.dim_att, bias=False)
        self.ln_x = nn.GroupNorm(self.n_head, args.dim_att)

    @MyFunction
    def jit_func(self, x):
        B, T, C = x.size()

        xx = self.time_shift(
            x
        )  # Mix x with the previous timestep to produce xk, xv, xr
        xk = x * self.time_mix_k + xx * (1 - self.time_mix_k)
        xv = x * self.time_mix_v + xx * (1 - self.time_mix_v)
        xr = x * self.time_mix_r + xx * (1 - self.time_mix_r)
        xg = x * self.time_mix_g + xx * (1 - self.time_mix_g)

        r = self.receptance(xr)
        k = self.key(xk)
        v = self.value(xv)
        g = F.silu(self.gate(xg))

        return r, k, v, g

    @MyFunction
    def jit_func_2(self, x, g):
        B, T, C = x.size()
        x = x.view(B * T, C)
        x = self.ln_x(x / self.head_size_divisor).view(B, T, C)
        x = self.output(x * g)
        return x

    def forward(self, x):
        B, T, C = x.size()
        H = self.n_head
        r, k, v, g = self.jit_func(x)
        x = RUN_CUDA_RWKV5(B, T, C, H, r, k, v, w=self.time_decay, u=self.time_faaaa)

        return self.jit_func_2(x, g)


########################################################################################################


class RWKV_ChannelMix(MyModule):
    def __init__(self, args, layer_id):
        super().__init__()
        self.args = args
        self.layer_id = layer_id
        self.time_shift = nn.ZeroPad2d((0, 0, 1, -1))

        with torch.no_grad():  # fancy init of time_mix
            ratio_1_to_almost0 = 1.0 - (layer_id / args.n_layer)  # 1 to ~0
            ddd = torch.ones(1, 1, args.n_embd)
            for i in range(args.n_embd):
                ddd[0, 0, i] = i / args.n_embd
            self.time_mix_k = nn.Parameter(torch.pow(ddd, ratio_1_to_almost0))
            self.time_mix_r = nn.Parameter(torch.pow(ddd, ratio_1_to_almost0))

        self.key = make_linear_ffn(args.n_embd, args.dim_ffn, bias=False)
        self.receptance = make_linear_ffn(args.n_embd, args.n_embd, bias=False)
        self.value = make_linear_ffn(args.dim_ffn, args.n_embd, bias=False)

    @MyFunction
    def forward(self, x):
        xx = self.time_shift(x)
        xk = x * self.time_mix_k + xx * (1 - self.time_mix_k)
        xr = x * self.time_mix_r + xx * (1 - self.time_mix_r)
        k = self.key(xk)
        k = torch.relu(k) ** 2
        kv = self.value(k)
        return torch.sigmoid(self.receptance(xr)) * kv


class MishGLU(MyModule):
    def __init__(self, args, layer_id):
        super().__init__()
        self.args = args
        self.layer_id = layer_id
        self.time_shift = nn.ZeroPad2d((0, 0, 1, -1))

        with torch.no_grad():
            ratio_1_to_almost0 = 1.0 - (layer_id / args.n_layer)

            x = torch.ones(1, 1, args.n_embd)
            for i in range(args.n_embd):
                x[0, 0, i] = i / args.n_embd

            self.time_mix_k = nn.Parameter(torch.pow(x, ratio_1_to_almost0))
            self.time_mix_r = nn.Parameter(torch.pow(x, ratio_1_to_almost0))
            self.aa = nn.Linear(args.n_embd, args.dim_ffn, bias=False)
            self.bb = nn.Linear(args.n_embd, args.dim_ffn, bias=False)
            self.value = nn.Linear(args.dim_ffn, args.n_embd, bias=False)

    @MyFunction
    def forward(self, x):
        xx = self.time_shift(x)
        xa = x * self.time_mix_k + xx * (1 - self.time_mix_k)
        xb = x * self.time_mix_r + xx * (1 - self.time_mix_r)
        a = self.aa(xa)
        b = self.bb(xb)
        return self.value(a * F.mish(b))


########################################################################################################
# The RWKV Model with our blocks
########################################################################################################


class Block(nn.Module):
    def __init__(self, args, layer_id):
        super().__init__()
        self.args = args
        self.layer_id = layer_id

        self.ln1 = nn.LayerNorm(args.n_embd)
        self.ln2 = nn.LayerNorm(args.n_embd)

        if self.layer_id == 0:
            self.ln0 = nn.LayerNorm(args.n_embd)
            if args.my_pos_emb > 0:
                self.pos_emb_x = nn.Parameter(
                    torch.zeros((1, args.my_pos_emb, args.n_embd))
                )
                self.pos_emb_y = nn.Parameter(
                    torch.zeros((args.my_pos_emb, 1, args.n_embd))
                )

        if self.layer_id == 0 and self.args.pre_ffn > 0:
            self.ffnPre = RWKV_ChannelMix(args, 0)
        else:
            self.att = RWKV_TimeMix_RWKV5(args, layer_id)

        if "g" in os.environ["RWKV_MY_TESTING"]:
            self.ffn = MishGLU(args, layer_id)
        else:
            self.ffn = RWKV_ChannelMix(args, layer_id)

        if args.tiny_att_dim > 0 and self.layer_id == args.tiny_att_layer:
            self.tiny_ln = nn.LayerNorm(args.n_embd)
            self.tiny_q = nn.Linear(args.n_embd, args.tiny_att_dim, bias=False)
            self.tiny_k = nn.Linear(args.n_embd, args.tiny_att_dim, bias=False)
            self.tiny_v = nn.Linear(args.n_embd, args.n_embd, bias=False)
            self.register_buffer(
                "tiny_mask", torch.tril(torch.ones(args.ctx_len, args.ctx_len))
            )

        if args.dropout > 0:
            self.drop0 = nn.Dropout(p=args.dropout)
            self.drop1 = nn.Dropout(p=args.dropout)

    def forward(self, x, x_emb=None):
        args = self.args
        B, T, C = x.size()
        if self.layer_id == 0:
            x = self.ln0(x)
            if args.my_pos_emb > 0:
                pos_emb = (self.pos_emb_x + self.pos_emb_y).reshape(T + 1, -1)[:-1, :]
                x = x + pos_emb

        if self.args.dropout == 0:
            if self.layer_id == 0 and args.pre_ffn > 0:
                x = x + self.ffnPre(self.ln1(x))
            else:
                x = x + self.att(self.ln1(x))
            x = x + self.ffn(self.ln2(x))
        else:
            if self.layer_id == 0 and args.pre_ffn > 0:
                x = self.drop0(x + self.ffnPre(self.ln1(x)))
            else:
                x = self.drop0(x + self.att(self.ln1(x)))
            x = self.drop1(x + self.ffn(self.ln2(x)))

        if args.tiny_att_dim > 0 and self.layer_id == args.tiny_att_layer:
            xx = self.tiny_ln(x)
            q = self.tiny_q(xx)[:, :T, :]
            k = self.tiny_k(xx)[:, :T, :]
            c = (q @ k.transpose(-2, -1)) * (args.tiny_att_dim ** (-0.5))
            c = c.masked_fill(self.tiny_mask[:T, :T] == 0, 0)
            x = x + c @ self.tiny_v(x_emb)
        return x


class L2Wrap(torch.autograd.Function):
    @staticmethod
    def forward(ctx, loss, y):
        ctx.save_for_backward(y)
        return loss

    @staticmethod
    def backward(ctx, grad_output):
        y = ctx.saved_tensors[0]
        # to encourage the logits to be close to 0
        factor = 1e-4 / (y.shape[0] * y.shape[1])
        maxx, ids = torch.max(y, -1, keepdim=True)
        gy = torch.zeros_like(y)
        gy.scatter_(-1, ids, maxx * factor)
        return (grad_output, gy)
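
# L2Wrap returns the loss unchanged on the forward pass; on the backward pass
# it adds a gradient that nudges only each position's maximum logit toward
# zero, keeping logit magnitudes from drifting upward over long runs. A toy
# check (shapes and values here are illustrative, not from training):
#
#     loss = torch.tensor(1.0, requires_grad=True)
#     y = torch.tensor([[[2.0, -1.0, 0.5]]], requires_grad=True)  # (B=1, T=1, V=3)
#     L2Wrap.apply(loss, y).backward()
#     print(y.grad)  # tensor([[[2.0e-4, 0., 0.]]]): factor 1e-4/(1*1) on max logit 2.0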

class RWKV(pl.LightningModule):
    def __init__(self, args):
        super().__init__()
        self.args = args
        if not hasattr(args, "dim_att"):
            args.dim_att = args.n_embd
        if not hasattr(args, "dim_ffn"):
            args.dim_ffn = args.n_embd * 4
        if not hasattr(args, "tiny_att_layer"):
            args.tiny_att_layer = -1
        if not hasattr(args, "tiny_att_dim"):
            args.tiny_att_dim = -1
        assert args.n_embd % 32 == 0
        assert args.dim_att % 32 == 0
        assert args.dim_ffn % 32 == 0

        self.emb = nn.Embedding(args.vocab_size, args.n_embd)

        self.blocks = nn.ModuleList([Block(args, i) for i in range(args.n_layer)])

        self.ln_out = nn.LayerNorm(args.n_embd)
        self.head = nn.Linear(args.n_embd, args.vocab_size, bias=False)

        if args.head_qk > 0:
            self.head_q = nn.Linear(args.n_embd, args.head_qk, bias=False)
            self.head_k = nn.Linear(args.n_embd, args.head_qk, bias=False)
            self.register_buffer(
                "copy_mask", torch.tril(torch.ones(args.ctx_len, args.ctx_len))
            )
        if args.dropout > 0:
            self.drop0 = nn.Dropout(p=args.dropout)

    def configure_optimizers(self):
        args = self.args

        lr_decay = set()
        lr_1x = set()
        lr_2x = set()
        lr_3x = set()
        for n, p in self.named_parameters():
            if ("time_mix" in n) and (args.layerwise_lr > 0):
                if args.my_pile_stage == 2:
                    lr_2x.add(n)
                else:
                    lr_1x.add(n)
            elif ("time_decay" in n) and (args.layerwise_lr > 0):
                if args.my_pile_stage == 2:
                    lr_3x.add(n)
                else:
                    lr_2x.add(n)
            elif ("time_faaaa" in n) and (args.layerwise_lr > 0):
                if args.my_pile_stage == 2:
                    lr_2x.add(n)
                else:
                    lr_1x.add(n)
            elif ("time_first" in n) and (args.layerwise_lr > 0):
                lr_3x.add(n)
            elif (len(p.squeeze().shape) >= 2) and (args.weight_decay > 0):
                lr_decay.add(n)
            else:
                lr_1x.add(n)

        lr_decay = sorted(list(lr_decay))
        lr_1x = sorted(list(lr_1x))
        lr_2x = sorted(list(lr_2x))
        lr_3x = sorted(list(lr_3x))
        # print('decay', lr_decay)
        # print('1x', lr_1x)
        # print('2x', lr_2x)
        # print('3x', lr_3x)
        param_dict = {n: p for n, p in self.named_parameters()}

        if args.layerwise_lr > 0:
            if args.my_pile_stage == 2:
                optim_groups = [
                    {
                        "params": [param_dict[n] for n in lr_1x],
                        "weight_decay": 0.0,
                        "my_lr_scale": 1.0,
                    },
                    {
                        "params": [param_dict[n] for n in lr_2x],
                        "weight_decay": 0.0,
                        "my_lr_scale": 5.0,
                    },  # test: 2e-3 / args.lr_init},
                    {
                        "params": [param_dict[n] for n in lr_3x],
                        "weight_decay": 0.0,
                        "my_lr_scale": 5.0,
                    },  # test: 3e-3 / args.lr_init},
                ]
            else:
                optim_groups = [
                    {
                        "params": [param_dict[n] for n in lr_1x],
                        "weight_decay": 0.0,
                        "my_lr_scale": 1.0,
                    },
                    {
                        "params": [param_dict[n] for n in lr_2x],
                        "weight_decay": 0.0,
                        "my_lr_scale": 2.0,
                    },
                    {
                        "params": [param_dict[n] for n in lr_3x],
                        "weight_decay": 0.0,
                        "my_lr_scale": 3.0,
                    },
                ]
        else:
            optim_groups = [
                {
                    "params": [param_dict[n] for n in lr_1x],
                    "weight_decay": 0.0,
                    "my_lr_scale": 1.0,
                }
            ]

        if args.weight_decay > 0:
            optim_groups += [
                {
                    "params": [param_dict[n] for n in lr_decay],
                    "weight_decay": args.weight_decay,
                    "my_lr_scale": 1.0,
                }
            ]
            if self.deepspeed_offload:
                return DeepSpeedCPUAdam(
                    optim_groups,
                    lr=self.args.lr_init,
                    betas=self.args.betas,
                    eps=self.args.adam_eps,
                    bias_correction=True,
                    adamw_mode=True,
                    amsgrad=False,
                )
            return FusedAdam(
                optim_groups,
                lr=self.args.lr_init,
                betas=self.args.betas,
                eps=self.args.adam_eps,
                bias_correction=True,
                adam_w_mode=True,
                amsgrad=False,
            )
        else:
            if self.deepspeed_offload:
                return DeepSpeedCPUAdam(
                    optim_groups,
                    lr=self.args.lr_init,
                    betas=self.args.betas,
                    eps=self.args.adam_eps,
                    bias_correction=True,
                    adamw_mode=False,
                    weight_decay=0,
                    amsgrad=False,
                )
            return FusedAdam(
                optim_groups,
                lr=self.args.lr_init,
                betas=self.args.betas,
                eps=self.args.adam_eps,
                bias_correction=True,
                adam_w_mode=False,
                weight_decay=0,
                amsgrad=False,
            )
        # return ZeroOneAdam(optim_groups, lr=self.args.lr_init, betas=self.args.betas, eps=self.args.adam_eps, bias_correction=True, weight_decay=0, amsgrad=False, cuda_aware=False)

    @property
    def deepspeed_offload(self) -> bool:
        strategy = self.trainer.strategy
        if isinstance(strategy, DeepSpeedStrategy):
            cfg = strategy.config["zero_optimization"]
            return cfg.get("offload_optimizer") or cfg.get("offload_param")
        return False

    def forward(self, idx):
        args = self.args
        B, T = idx.size()
        assert T <= args.ctx_len, "Cannot forward, model ctx_len is exhausted."

        x = self.emb(idx)
        x_emb = x

        if args.dropout > 0:
            x = self.drop0(x)
        if args.tiny_att_dim > 0:
            for block in self.blocks:
                if args.grad_cp == 1:
                    if args.lora:
                        x = torch_checkpoint(block, x, x_emb, use_reentrant=False)
                    else:
                        x = deepspeed.checkpointing.checkpoint(block, x, x_emb)
                else:
                    x = block(x, x_emb)
        else:
            for block in self.blocks:
                if args.grad_cp == 1:
                    if args.lora:
                        x = torch_checkpoint(block, x, x_emb, use_reentrant=False)
                    else:
                        x = deepspeed.checkpointing.checkpoint(block, x)
                else:
                    x = block(x)

        x = self.ln_out(x)

        if args.head_qk > 0:
            q = self.head_q(x)[:, :T, :]
            k = self.head_k(x)[:, :T, :]
            c = (q @ k.transpose(-2, -1)) * (1.0 / args.head_qk)
            c = c.masked_fill(self.copy_mask[:T, :T] == 0, 0)

            if "32" in os.environ["RWKV_FLOAT_MODE"]:
                c = c @ F.one_hot(idx, num_classes=args.vocab_size)
            elif os.environ["RWKV_FLOAT_MODE"] == "fp16":
                c = c @ F.one_hot(idx, num_classes=args.vocab_size).half()
            elif os.environ["RWKV_FLOAT_MODE"] == "bf16":
                c = c @ F.one_hot(idx, num_classes=args.vocab_size).bfloat16()

            x = self.head(x) + c
        else:
            x = self.head(x)

        return x

    def training_step(self, batch, batch_idx):
        args = self.args
        if args.my_qa_mask != 1:
            idx, targets = batch
            logits = self(idx)
            loss = F.cross_entropy(logits.view(-1, logits.size(-1)), targets.view(-1))
            # if '0' in os.environ["RWKV_MY_TESTING"]:
            #     print('logits', logits)
            #     torch.set_printoptions(threshold=10000)
            #     print('idx', idx)
            #     exit(0)
        else:
            idx, targets, mask = batch
            mask = mask.view(-1)
            sum_mask = torch.sum(mask).item()
            # if sum_mask == 0:
            #     return torch.tensor([0.0], requires_grad=True)

            logits = self(idx)
            if sum_mask == mask.shape[0]:
                loss = F.cross_entropy(
                    logits.view(-1, logits.size(-1)), targets.view(-1)
                )
                # print('rank', self.global_rank, 'loss', loss.item())
            else:
                loss = F.cross_entropy(
                    logits.view(-1, logits.size(-1)), targets.view(-1), reduction="none"
                )
                # loss_raw = loss
                loss = torch.sum(loss * mask) / sum_mask

                # torch.set_printoptions(threshold=10000)
                # if True: #self.global_rank == 1:
                #     tmp = ''
                #     sss = 0
                #     ccc = 0
                #     for i in range(mask.shape[0]):
                #         if mask[i] > 0:
                #             tmp += str(idx.view(-1)[i].item()) + ','
                #             sss += loss_raw.view(-1)[i].float().item()
                #             ccc += 1
                #     print('rank', self.global_rank, 'loss', loss.item(), 'lavg', sss / ccc)#, 'tmp', tmp, 'input', idx)
        return L2Wrap.apply(loss, logits)

    def training_step_end(self, batch_parts):
        if pl.__version__[0] != "2":
            all = self.all_gather(batch_parts)
            if self.trainer.is_global_zero:
                self.trainer.my_loss_all = all

    def generate_init_weight(self):
        print(
            f"""
############################################################################
#
# Init model weight (slow for large models)...
#
############################################################################
"""
        )
        m = {}
        for n in self.state_dict():
            p = self.state_dict()[n]
            shape = p.shape

            gain = 1.0
            scale = 1.0
            if (
                "ln_" in n
                or ".ln" in n
                or "time_" in n
                or "_mask" in n
                or "pos_emb" in n
                or ".mask." in n
            ):
                if "ln_x.weight" in n:
                    layer_scale = (1 + int(n.split(".")[1])) / self.args.n_layer
                    m[n] = (p * 0.0) + (layer_scale**0.7)
                else:
                    m[n] = p
            else:
                if n == "emb.weight":
                    scale = -1 * self.args.lr_init
                else:
                    if shape[0] > shape[1]:
                        gain = math.sqrt(shape[0] / shape[1])

                    zero = [
                        ".att.output.",
                        ".ffn.value.",
                        ".ffn.receptance.",
                        ".ffnPre.value.",
                        ".ffnPre.receptance.",
                        "head_q.",
                        ".oo.",
                        ".rr.",
                    ]

                    for kk in zero:
                        if kk in n:
                            scale = 0
                    if n == "head.weight":
                        scale = 0.5
                    if "head_k." in n:
                        scale = 0.1
                    if "head_q." in n:
                        scale = 0

                print(
                    f"{str(shape[0]).ljust(5)} {str(shape[1]).ljust(5)} {str(scale).ljust(4)} {n}"
                )

                if self.args.accelerator.upper() == "GPU":
                    m[n] = torch.empty((shape[0], shape[1]), device="cuda")
                else:
                    m[n] = torch.empty((shape[0], shape[1]))

                if scale == 0:
                    nn.init.zeros_(m[n])
                elif scale < 0:
                    nn.init.uniform_(m[n], a=scale, b=-scale)
                else:
                    nn.init.orthogonal_(m[n], gain=gain * scale)

            m[n] = m[n].cpu()
            if os.environ["RWKV_FLOAT_MODE"] == "fp16":
                m[n] = m[n].half()
            elif os.environ["RWKV_FLOAT_MODE"] == "bf16":
                m[n] = m[n].bfloat16()

            # if n == "emb.weight":
            #     print(m[n])

        gc.collect()
        torch.cuda.empty_cache()
        return m
310
finetune/lora/v5/src/trainer.py
vendored
Normal file
@@ -0,0 +1,310 @@
import os, math, time, datetime, subprocess
import torch
from torch.utils.data import DataLoader
import pytorch_lightning as pl
from pytorch_lightning.utilities import rank_zero_info, rank_zero_only
from .model import LORA_CONFIG


def my_save(args, trainer, dd, ff):
    if "14b-run1" in ff:
        fn = ff.split("/")[-1]
        fff = "/dev/shm/" + fn
        torch.save(dd, fff)
        subprocess.Popen(f" aws s3 mv {fff} s3://rwkv-14b-4k/{fn} --quiet", shell=True)
    elif ("world/14b" in ff) or ("world/7b" in ff):
        aa = ff.split("/")[1]
        fn = ff.split("/")[-1]
        fff = f"/dev/shm/{aa}-{fn}"
        torch.save(dd, fff)
        subprocess.Popen(
            f" aws s3 mv {fff} s3://rwkv-world/{aa}-{fn} --quiet", shell=True
        )
    else:
        if "deepspeed_stage_3" in args.strategy:
            trainer.save_checkpoint(ff, weights_only=True)
        else:
            torch.save(dd, ff)


class train_callback(pl.Callback):
    def __init__(self, args):
        super().__init__()
        self.args = args

    def on_train_batch_start(self, trainer, pl_module, batch, batch_idx):
        args = self.args
        # if args.cuda_cleanup > 0:
        #     torch.cuda.empty_cache()
        real_step = trainer.global_step + args.epoch_begin * args.epoch_steps

        # LR schedule
        w_step = args.warmup_steps
        if args.lr_final == args.lr_init or args.epoch_count == 0:
            lr = args.lr_init
        else:
            decay_step = real_step - args.my_pile_edecay * args.epoch_steps
            decay_total = (args.epoch_count - args.my_pile_edecay) * args.epoch_steps
            progress = (decay_step - w_step + 1) / (decay_total - w_step)
            progress = min(1, max(0, progress))

            if args.lr_final == 0 or args.lr_init == 0:  # linear decay
                lr = args.lr_init + (args.lr_final - args.lr_init) * progress
            else:  # exp decay
                lr = args.lr_init * math.exp(
                    math.log(args.lr_final / args.lr_init) * pow(progress, 1)
                )
            # if trainer.is_global_zero:
            #     print(trainer.global_step, decay_step, decay_total, w_step, progress, lr)

        if args.my_exit_tokens != 0:  # cosine decay
            real_tokens = real_step * args.ctx_len * args.real_bsz
            warmup_tokens = w_step * args.ctx_len * args.real_bsz
            progress = (real_tokens - warmup_tokens) / (
                abs(args.my_exit_tokens) - warmup_tokens
            )
            progress = max(0, min(1, progress))
            lr_final_factor = args.lr_final / args.lr_init
            lr_mult = (0.5 + lr_final_factor / 2) + (
                0.5 - lr_final_factor / 2
            ) * math.cos(math.pi * progress)
            if args.my_exit_tokens > 0:
                lr = args.lr_init * lr_mult
            else:
                lr = (lr + args.lr_init * lr_mult) / 2
            if progress >= 1:
                if (trainer.is_global_zero) or ("deepspeed_stage_3" in args.strategy):
                    my_save(
                        args,
                        trainer,
                        pl_module.state_dict(),
                        f"{args.proj_dir}/rwkv-final.pth",
                    )
                    exit(0)
        if trainer.global_step < w_step:
            lr = lr * (0.2 + 0.8 * trainer.global_step / w_step)
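
        # Worked example for the "exp decay" branch above (illustrative numbers,
        # not from a real run): with lr_init=6e-4, lr_final=1e-5, progress=0.5,
        # lr = 6e-4 * exp(log(1e-5 / 6e-4) * 0.5) ≈ 7.75e-5, i.e. the geometric
        # mean sqrt(6e-4 * 1e-5) of the two endpoints.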

        if args.weight_decay_final > 0:
            wd_now = args.weight_decay * math.exp(
                math.log(args.weight_decay_final / args.weight_decay) * progress
            )
        else:
            wd_now = args.weight_decay

        for param_group in trainer.optimizers[0].param_groups:
            if param_group["weight_decay"] > 0:
                param_group["weight_decay"] = wd_now
            if args.layerwise_lr > 0:
                param_group["lr"] = lr * param_group["my_lr_scale"]
                # print(param_group["lr"], param_group["my_lr_scale"])
            else:
                param_group["lr"] = lr

        trainer.my_lr = lr
        trainer.my_wd = wd_now
        # rank_zero_info(f"{real_step} {lr}")

        if trainer.global_step == 0:
            if trainer.is_global_zero:  # logging
                trainer.my_loss_sum = 0
                trainer.my_loss_count = 0
                trainer.my_log = open(args.proj_dir + "/train_log.txt", "a")
                trainer.my_log.write(
                    f"NEW RUN {args.my_timestamp}\n{vars(self.args)}\n"
                )
                try:
                    print(f"\n{trainer.strategy.config}\n")
                    trainer.my_log.write(f"{trainer.strategy.config}\n")
                except:
                    pass
                trainer.my_log.flush()
                if len(args.wandb) > 0:
                    print("Login to wandb...")
                    import wandb

                    wandb.init(
                        project=args.wandb,
                        name=args.run_name + " " + args.my_timestamp,
                        config=args,
                        save_code=False,
                    )
                    trainer.my_wandb = wandb

    def on_train_batch_end(self, trainer, pl_module, outputs, batch, batch_idx):
        args = self.args
        token_per_step = args.ctx_len * args.real_bsz
        real_step = trainer.global_step + args.epoch_begin * args.epoch_steps
        if trainer.is_global_zero:  # logging
            t_now = time.time_ns()
            kt_s = 0
            try:
                t_cost = (t_now - trainer.my_time_ns) / 1e9
                kt_s = token_per_step / t_cost / 1000
                self.log("REAL it/s", 1.0 / t_cost, prog_bar=True, on_step=True)
                self.log("Kt/s", kt_s, prog_bar=True, on_step=True)
            except:
                pass
            trainer.my_time_ns = t_now
            if pl.__version__[0] == "2":
                trainer.my_loss = outputs["loss"]
            else:
                trainer.my_loss = trainer.my_loss_all.float().mean().item()
            trainer.my_loss_sum += trainer.my_loss
            trainer.my_loss_count += 1
            trainer.my_epoch_loss = trainer.my_loss_sum / trainer.my_loss_count
            self.log("lr", trainer.my_lr, prog_bar=True, on_step=True)
            self.log("loss", trainer.my_epoch_loss, prog_bar=True, on_step=True)
            # self.log("s", real_step, prog_bar=True, on_step=True)

            if len(args.wandb) > 0:
                lll = {
                    "loss": trainer.my_loss,
                    "lr": trainer.my_lr,
                    "wd": trainer.my_wd,
                    "Gtokens": real_step * token_per_step / 1e9,
                }
                if kt_s > 0:
                    lll["kt/s"] = kt_s
                trainer.my_wandb.log(lll, step=int(real_step))
        if (trainer.is_global_zero) or (
            "deepspeed_stage_3" in args.strategy
        ):  # save pth
            if args.magic_prime > 0:
                expand_factor = 2 if args.my_qa_mask > 0 else 1
                if int(real_step) == int(
                    args.magic_prime * expand_factor // args.real_bsz
                ) - 1 + int(args.my_random_steps):
                    to_save_dict = pl_module.state_dict()
                    my_save(
                        args,
                        trainer,
                        to_save_dict,
                        f"{args.proj_dir}/rwkv-final.pth",
                    )
            # if args.batch_save==batch_idx :
            #     to_save_dict = pl_module.state_dict()
            #     for name, state in to_save_dict.items():
            #         if 'img' in name:
            #             to_save_dict[name] = state
            #     try:
            #         my_save(
            #             args, trainer,
            #             to_save_dict,
            #             f"{args.proj_dir}/rwkv-{args.epoch_begin + trainer.current_epoch}-{batch_idx}.pth",
            #         )
            #     except Exception as e:
            #         print('Error\n\n', e, '\n\n')

    def on_train_epoch_start(self, trainer, pl_module):
        args = self.args
        if pl.__version__[0] == "2":
            dataset = trainer.train_dataloader.dataset
        else:
            dataset = trainer.train_dataloader.dataset.datasets
        assert "MyDataset" in str(dataset)
        dataset.global_rank = trainer.global_rank
        dataset.real_epoch = int(args.epoch_begin + trainer.current_epoch)
        dataset.world_size = trainer.world_size
        # print(f'########## world_size {dataset.world_size} global_rank {dataset.global_rank} real_epoch {dataset.real_epoch} ##########')

    def on_train_epoch_end(self, trainer, pl_module):
        args = self.args
        to_save_dict = {}
        if (trainer.is_global_zero) or (
            "deepspeed_stage_3" in args.strategy
        ):  # save pth
            if (
                args.epoch_save > 0 and trainer.current_epoch % args.epoch_save == 0
            ) or (trainer.current_epoch == args.epoch_count - 1):
                if args.data_type == "wds_img":
                    raw_dict = pl_module.state_dict()
                    for k in raw_dict:
                        if k.startswith("encoder.") or k.startswith("decoder."):
                            to_save_dict[k] = raw_dict[k]
                else:
                    to_save_dict = pl_module.state_dict()

                if args.data_type == "img" and not args.lora:
                    for name, state in to_save_dict.items():
                        if "img" in name:
                            to_save_dict[name] = state

                if args.lora:
                    enable_time_finetune = "time" in LORA_CONFIG["parts"]
                    enable_ln_finetune = "ln" in LORA_CONFIG["parts"]
                    lora_dict = {}
                    for name, state in to_save_dict.items():
                        if "img" in name:
                            lora_dict[name] = state
                        if (
                            ".lora_" in name
                            or (enable_time_finetune and ".time_" in name)
                            or (enable_ln_finetune and ".ln" in name)
                        ):
                            lora_dict[name] = state
                    to_save_dict = lora_dict

                try:
                    my_save(
                        args,
                        trainer,
                        to_save_dict,
                        f"{args.proj_dir}/rwkv-{args.epoch_begin + trainer.current_epoch}.pth",
                    )
                except Exception as e:
                    print("Error\n\n", e, "\n\n")

        if trainer.is_global_zero:  # logging
            trainer.my_log.write(
                f"{args.epoch_begin + trainer.current_epoch} {trainer.my_epoch_loss:.6f} {math.exp(trainer.my_epoch_loss):.4f} {trainer.my_lr:.8f} {datetime.datetime.now()} {trainer.current_epoch}\n"
            )
            trainer.my_log.flush()

            trainer.my_loss_sum = 0
            trainer.my_loss_count = 0
            if (args.epoch_begin + trainer.current_epoch) >= args.my_exit:
                exit(0)


@rank_zero_only
def generate_init_weight(model, init_weight_name):
    mm = model.generate_init_weight()

    if model.args.my_pile_stage == 1:
        if len(model.args.load_model) > 0:
            print(f"Combine weights from {model.args.load_model}...")
            load_dict = torch.load(model.args.load_model, map_location="cpu")
            for k in load_dict:
                try:
                    assert k in mm
                except:
                    print("missing", k)
                    exit(0)
                src = load_dict[k]
                try:
                    mm[k] = src.reshape(mm[k].shape)
                except:
                    tmp = mm[k].squeeze().clone()
                    print(k, src.shape, "-->", mm[k].shape)
                    ss = src.shape[0]
                    dd = tmp.shape[0]
                    for i in range(dd):
                        pos = i / dd * ss
                        if pos >= ss - 1:
                            tmp[i] = src[ss - 1]
                        else:
                            p0 = int(math.floor(pos))
                            ii = pos - p0
                            tmp[i] = src[p0] * (1 - ii) + src[p0 + 1] * (ii)
                    mm[k] = tmp.reshape(mm[k].shape)
                    sss = src.squeeze().float().cpu().numpy()
                    print(sss[:10], "...", sss[-10:])
                    mmm = mm[k].squeeze().float().cpu().numpy()
                    print(mmm[:10], "...", mmm[-10:])

    print(f"Save to {init_weight_name}...")
    torch.save(mm, init_weight_name)

    if model.args.my_pile_stage == 1:
        print("Done. Now go for stage 2.")
        exit(0)
139
finetune/lora/v5/src/utils.py
vendored
Normal file
@@ -0,0 +1,139 @@
import json, time, random, os
import numpy as np
import torch
from torch.nn import functional as F

time_slot = {}
time_ref = time.time_ns()


def record_time(name):
    if name not in time_slot:
        time_slot[name] = 1e20
    tt = (time.time_ns() - time_ref) / 1e9
    if tt < time_slot[name]:
        time_slot[name] = tt


class TOKENIZER:
    def __init__(self, WORD_NAME, UNKNOWN_CHAR="\ue083"):
        if "list" in str(type(WORD_NAME)):
            self.charMode = False
            if WORD_NAME[0] == WORD_NAME[1]:
                from transformers import PreTrainedTokenizerFast

                self.tokenizer = PreTrainedTokenizerFast(tokenizer_file=WORD_NAME[0])
            else:
                from transformers import GPT2TokenizerFast

                self.tokenizer = GPT2TokenizerFast(WORD_NAME[0], WORD_NAME[1])
            self.vocab_size = len(self.tokenizer)
        else:
            self.charMode = True
            with open(WORD_NAME + ".json", "r", encoding="utf-16") as result_file:
                self.word_table = json.load(result_file)

            self.vocab_size = len(self.word_table)

            self.stoi = {v: int(k) for k, v in self.word_table.items()}
            self.itos = {int(k): v for k, v in self.word_table.items()}

            self.UNKNOWN_CHAR = self.stoi[UNKNOWN_CHAR]

    def refine_context(self, context):
        context = context.strip().split("\n")
        for c in range(len(context)):
            context[c] = context[c].strip().strip("\u3000").strip("\r")
        context = list(filter(lambda c: c != "", context))
        context = "\n" + ("\n".join(context)).strip()
        if context == "":
            context = "\n"
        return context

    def sample_logits(
        self, out, x, ctx_len, temperature=1.0, top_p_usual=None, top_p_newline=None
    ):
        # out[self.UNKNOWN_CHAR] = -float('Inf')
        lastChar = int(x[-1])

        probs = F.softmax(out, dim=-1)

        if self.charMode:
            if self.itos[lastChar] == "\n":
                top_p = top_p_newline
            else:
                top_p = top_p_usual
        else:
            top_p = top_p_usual

        if os.environ["RWKV_RUN_DEVICE"] == "cpu":
            probs = probs.numpy()
            sorted_probs = np.sort(probs)[::-1]
            cumulative_probs = np.cumsum(sorted_probs)
            cutoff = float(sorted_probs[np.argmax(cumulative_probs > top_p)])
            probs[probs < cutoff] = 0
            if temperature != 1.0:
                probs = probs ** (1.0 / temperature)  # probs is a numpy array here, so ** rather than .pow
            probs = probs / np.sum(probs)
            out = np.random.choice(a=len(probs), p=probs)
            return out
        else:
            sorted_probs = torch.sort(probs, descending=True)[0]
            cumulative_probs = torch.cumsum(sorted_probs, dim=-1).cpu().numpy()
            cutoff = float(sorted_probs[np.argmax(cumulative_probs > top_p)])
            probs[probs < cutoff] = 0
            if temperature != 1.0:
                probs = probs.pow(1.0 / temperature)
            out = torch.multinomial(probs, num_samples=1)[0]
            return out
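
# sample_logits above is top-p (nucleus) sampling: sort the probabilities,
# find where their cumulative sum first exceeds top_p, zero everything below
# that cutoff probability, then renormalize (after an optional temperature
# reshape). The same cutoff rule on a toy distribution (values illustrative):
#
#     toy = np.array([0.5, 0.3, 0.15, 0.05])  # already softmaxed
#     sorted_probs = np.sort(toy)[::-1]
#     cutoff = float(sorted_probs[np.argmax(np.cumsum(sorted_probs) > 0.8)])
#     toy[toy < cutoff] = 0   # keeps 0.5, 0.3, 0.15 (first mass > 0.8); drops 0.05
#     toy = toy / toy.sum()   # -> [0.526, 0.316, 0.158, 0]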


def MaybeIsPrime(number):
    if FermatPrimalityTest(number) and MillerRabinPrimalityTest(number):
        return True
    else:
        return False


def FermatPrimalityTest(number):
    if number > 1:
        for time in range(3):
            randomNumber = random.randint(2, number) - 1
            if pow(randomNumber, number - 1, number) != 1:
                return False
        return True
    else:
        return False


def MillerRabinPrimalityTest(number):
    if number == 2:
        return True
    elif number == 1 or number % 2 == 0:
        return False
    oddPartOfNumber = number - 1
    timesTwoDividNumber = 0
    while oddPartOfNumber % 2 == 0:
        oddPartOfNumber = oddPartOfNumber // 2
        timesTwoDividNumber = timesTwoDividNumber + 1

    for time in range(3):
        while True:
            randomNumber = random.randint(2, number) - 1
            if randomNumber != 0 and randomNumber != 1:
                break

        randomNumberWithPower = pow(randomNumber, oddPartOfNumber, number)

        if (randomNumberWithPower != 1) and (randomNumberWithPower != number - 1):
            iterationNumber = 1

            while (iterationNumber <= timesTwoDividNumber - 1) and (
                randomNumberWithPower != number - 1
            ):
                randomNumberWithPower = pow(randomNumberWithPower, 2, number)
                iterationNumber = iterationNumber + 1
            if randomNumberWithPower != (number - 1):
                return False

    return True
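
# These tests exist to choose --magic_prime for the sampler in dataset.py: the
# usual recipe is (roughly) the largest prime p with p % 3 == 2 such that
# p * ctx_len stays below the token count of the dataset, so the cubic stride
# permutes all windows. find_magic_prime is an illustrative helper built on
# MaybeIsPrime, not part of this repo:
#
#     def find_magic_prime(data_tokens, ctx_len):
#         p = data_tokens // ctx_len - 1
#         while p > 2:
#             if p % 3 == 2 and MaybeIsPrime(p):
#                 return p
#             p -= 1
#         return None  # dataset too small for this recipe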
436
finetune/lora/v5/train.py
vendored
Normal file
@@ -0,0 +1,436 @@
########################################################################################################
# The RWKV Language Model - https://github.com/BlinkDL/RWKV-LM
########################################################################################################

import logging

logging.basicConfig(level=logging.INFO)

if __name__ == "__main__":
    from argparse import ArgumentParser
    from pytorch_lightning import Trainer
    from pytorch_lightning.utilities import rank_zero_info, rank_zero_only
    import pytorch_lightning as pl

    rank_zero_info("########## work in progress ##########")

    parser = ArgumentParser()

    parser.add_argument("--load_model", default="", type=str)  # full path, with .pth
    parser.add_argument(
        "--wandb", default="", type=str
    )  # wandb project name. if "" then don't use wandb
    parser.add_argument("--proj_dir", default="out", type=str)
    parser.add_argument("--random_seed", default="-1", type=int)

    parser.add_argument("--data_file", default="", type=str)
    parser.add_argument("--data_type", default="utf-8", type=str)
    parser.add_argument(
        "--vocab_size", default=0, type=int
    )  # vocab_size = 0 means auto (for char-level LM and .txt data)

    parser.add_argument("--ctx_len", default=1024, type=int)
    parser.add_argument(
        "--epoch_steps", default=1000, type=int
    )  # a mini "epoch" has [epoch_steps] steps
    parser.add_argument(
        "--epoch_count", default=500, type=int
    )  # train for this many "epochs". will continue afterwards with lr = lr_final
    parser.add_argument(
        "--epoch_begin", default=0, type=int
    )  # if you load a model trained for x "epochs", set epoch_begin = x
    parser.add_argument(
        "--epoch_save", default=5, type=int
    )  # save the model every [epoch_save] "epochs"

    parser.add_argument(
        "--micro_bsz", default=12, type=int
    )  # micro batch size (batch size per GPU)
    parser.add_argument("--n_layer", default=6, type=int)
    parser.add_argument("--n_embd", default=512, type=int)
    parser.add_argument("--dim_att", default=0, type=int)
    parser.add_argument("--dim_ffn", default=0, type=int)
    parser.add_argument(
        "--pre_ffn", default=0, type=int
    )  # replace first att layer by ffn (sometimes better)
    parser.add_argument("--head_qk", default=0, type=int)  # my headQK trick
    parser.add_argument("--tiny_att_dim", default=0, type=int)  # tiny attention dim
    parser.add_argument(
        "--tiny_att_layer", default=-999, type=int
    )  # tiny attention @ which layer

    parser.add_argument(
        "--lr_init", default=6e-4, type=float
    )  # 6e-4 for L12-D768, 4e-4 for L24-D1024, 3e-4 for L24-D2048
    parser.add_argument("--lr_final", default=1e-5, type=float)
    parser.add_argument(
        "--warmup_steps", default=-1, type=int
    )  # try 50 if you load a model
    parser.add_argument("--beta1", default=0.9, type=float)
    parser.add_argument(
        "--beta2", default=0.99, type=float
    )  # use 0.999 when your model is close to convergence
    parser.add_argument("--adam_eps", default=1e-8, type=float)
    parser.add_argument(
        "--grad_cp", default=0, type=int
    )  # gradient checkpt: saves VRAM, but slower
    parser.add_argument(
        "--dropout", default=0, type=float
    )  # try 0.01 / 0.02 / 0.05 / 0.1
    parser.add_argument(
        "--weight_decay", default=0, type=float
    )  # try 0.1 / 0.01 / 0.001
    parser.add_argument("--weight_decay_final", default=-1, type=float)

    parser.add_argument(
        "--my_pile_version", default=1, type=int
    )  # my special pile version
    parser.add_argument("--my_pile_stage", default=0, type=int)  # my special pile mode
    parser.add_argument(
        "--my_pile_shift", default=-1, type=int
    )  # my special pile mode - text shift
    parser.add_argument("--my_pile_edecay", default=0, type=int)
    parser.add_argument(
        "--layerwise_lr", default=1, type=int
    )  # layerwise lr for faster convergence (but slower it/s)
    parser.add_argument(
        "--ds_bucket_mb", default=200, type=int
    )  # deepspeed bucket size in MB. 200 seems enough
    # parser.add_argument("--cuda_cleanup", default=0, type=int)  # extra cuda cleanup (sometimes helpful)

    parser.add_argument("--my_sample_len", default=0, type=int)
    parser.add_argument("--my_ffn_shift", default=1, type=int)
    parser.add_argument("--my_att_shift", default=1, type=int)
    parser.add_argument(
        "--head_size_a", default=64, type=int
    )  # can try larger values for larger models
    parser.add_argument("--head_size_divisor", default=8, type=int)
    parser.add_argument("--my_pos_emb", default=0, type=int)
    parser.add_argument("--load_partial", default=0, type=int)
    parser.add_argument("--magic_prime", default=0, type=int)
    parser.add_argument("--my_qa_mask", default=0, type=int)
    parser.add_argument("--my_random_steps", default=0, type=int)
    parser.add_argument("--my_testing", default="", type=str)
    parser.add_argument("--my_exit", default=99999999, type=int)
    parser.add_argument("--my_exit_tokens", default=0, type=int)

    # LORA
    parser.add_argument("--emb", action="store_true")
    parser.add_argument("--lora", action="store_true")
    parser.add_argument("--lora_load", default="", type=str)
    parser.add_argument("--lora_r", default=8, type=int)
    parser.add_argument("--lora_alpha", default=32, type=float)
    parser.add_argument("--lora_dropout", default=0.01, type=float)
    parser.add_argument("--lora_parts", default="att,ln,time", type=str)

    if pl.__version__[0] == "2":
        parser.add_argument("--accelerator", default="gpu", type=str)
        parser.add_argument("--strategy", default="auto", type=str)
        parser.add_argument("--devices", default=1, type=int)
        parser.add_argument("--num_nodes", default=1, type=int)
        parser.add_argument("--precision", default="fp16", type=str)
        parser.add_argument("--accumulate_grad_batches", default=1, type=int)
    else:
        parser = Trainer.add_argparse_args(parser)
    args = parser.parse_args()

    ########################################################################################################

    import os, warnings, math, datetime, sys, time
    import numpy as np
    import torch
    from torch.utils.data import DataLoader

    if "deepspeed" in args.strategy:
        import deepspeed
    from pytorch_lightning import seed_everything

    if args.random_seed >= 0:
        print(
            f"########## WARNING: GLOBAL SEED {args.random_seed} THIS WILL AFFECT MULTIGPU SAMPLING ##########\n"
            * 3
        )
        seed_everything(args.random_seed)

    np.set_printoptions(precision=4, suppress=True, linewidth=200)
    warnings.filterwarnings(
        "ignore", ".*Consider increasing the value of the `num_workers` argument*"
    )
    warnings.filterwarnings(
        "ignore", ".*The progress bar already tracks a metric with the*"
    )
    # os.environ["WDS_SHOW_SEED"] = "1"

    args.my_timestamp = datetime.datetime.today().strftime("%Y-%m-%d-%H-%M-%S")
    args.enable_checkpointing = False
    args.replace_sampler_ddp = False
    args.logger = False
    args.gradient_clip_val = 1.0
    args.num_sanity_val_steps = 0
    args.check_val_every_n_epoch = int(1e20)
    args.log_every_n_steps = int(1e20)
    args.max_epochs = args.epoch_count  # -1 continue forever
    args.betas = (args.beta1, args.beta2)
    args.real_bsz = int(args.num_nodes) * int(args.devices) * args.micro_bsz
    os.environ["RWKV_MY_TESTING"] = args.my_testing
    os.environ["RWKV_HEAD_SIZE_A"] = str(args.head_size_a)
    if args.dim_att <= 0:
        args.dim_att = args.n_embd
    if args.dim_ffn <= 0:
        args.dim_ffn = int((args.n_embd * 3.5) // 32 * 32)  # default = 3.5x emb size
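    # Worked example of the default above (illustrative): n_embd=2048 gives
    # 2048 * 3.5 = 7168, already a multiple of 32, so dim_ffn = 7168; the
    # // 32 * 32 step only rounds down when n_embd * 3.5 is not a multiple of 32.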

    if args.data_type == "wds_img":
        args.run_name = f"v{args.my_img_version}-{args.my_img_size}-{args.my_img_bit}bit-{args.my_img_clip}x{args.my_img_clip_scale}"
        args.proj_dir = f"{args.proj_dir}-{args.run_name}"
    else:
        args.run_name = (
            f"{args.vocab_size} ctx{args.ctx_len} L{args.n_layer} D{args.n_embd}"
        )
    if not os.path.exists(args.proj_dir):
        os.makedirs(args.proj_dir)

    if args.my_pile_stage > 0:
        magic_prime_bak = args.magic_prime

        if args.my_pile_shift < 0:
            args.my_pile_shift = 0

        if magic_prime_bak > 0:
            args.magic_prime = magic_prime_bak
        if args.my_qa_mask == 2:
            args.epoch_count = 2 * args.magic_prime // 40320
        else:
            args.epoch_count = args.magic_prime // 40320

        args.epoch_steps = 40320 // args.real_bsz
        assert args.epoch_steps * args.real_bsz == 40320
        # if args.my_pile_stage == 2:
        #     assert args.lr_final == args.lr_init
        if args.my_pile_stage >= 2:  # find latest saved model
            list_p = []
            for p in os.listdir(args.proj_dir):
                if p.startswith("rwkv") and p.endswith(".pth"):
                    p = ((p.split("-"))[1].split("."))[0]
                    if p != "final":
                        if p == "init":
                            p = -1
                        else:
                            p = int(p)
                        list_p += [p]
            list_p.sort()
            max_p = list_p[-1]
            if len(list_p) > 1:
                args.my_pile_prev_p = list_p[-2]  # in case max_p is corrupted
            if max_p == -1:
                args.load_model = f"{args.proj_dir}/rwkv-init.pth"
            else:
                args.load_model = f"{args.proj_dir}/rwkv-{max_p}.pth"
                if args.warmup_steps < 0:
                    if args.my_pile_stage == 2:
                        args.warmup_steps = 10
                    else:
                        args.warmup_steps = 30
            args.epoch_begin = max_p + 1

    samples_per_epoch = args.epoch_steps * args.real_bsz
    tokens_per_epoch = samples_per_epoch * args.ctx_len
    try:
        deepspeed_version = deepspeed.__version__
    except:
        deepspeed_version = None
        pass
    rank_zero_info(
        f"""
############################################################################
#
# RWKV-5 {args.precision.upper()} on {args.num_nodes}x{args.devices} {args.accelerator.upper()}, bsz {args.num_nodes}x{args.devices}x{args.micro_bsz}={args.real_bsz}, {args.strategy} {'with grad_cp' if args.grad_cp > 0 else ''}
#
# Data = {args.data_file} ({args.data_type}), ProjDir = {args.proj_dir}
#
# Epoch = {args.epoch_begin} to {args.epoch_begin + args.epoch_count - 1}, save every {args.epoch_save} epoch
#
# Each "epoch" = {args.epoch_steps} steps, {samples_per_epoch} samples, {tokens_per_epoch} tokens
#
# Model = {args.n_layer} n_layer, {args.n_embd} n_embd, {args.ctx_len} ctx_len
#
# Adam = lr {args.lr_init} to {args.lr_final}, warmup {args.warmup_steps} steps, beta {args.betas}, eps {args.adam_eps}
#
# Found torch {torch.__version__}, recommend 1.13.1+cu117 or newer
# Found deepspeed {deepspeed_version}, recommend 0.7.0 (faster than newer versions)
# Found pytorch_lightning {pl.__version__}, recommend 1.9.5
#
############################################################################
"""
    )
    rank_zero_info(str(vars(args)) + "\n")

    assert args.data_type in ["utf-8", "utf-16le", "numpy", "binidx", "dummy", "uint16"]

    if args.lr_final == 0 or args.lr_init == 0:
        rank_zero_info(
            "\n\nNote: lr_final = 0 or lr_init = 0. Using linear LR schedule instead.\n\n"
        )

    assert args.precision in ["fp32", "tf32", "fp16", "bf16"]
    os.environ["RWKV_FLOAT_MODE"] = args.precision
    if args.precision == "fp32":
        for i in range(10):
            rank_zero_info(
                "\n\nNote: you are using fp32 (very slow). Try bf16 / tf32 for faster training.\n\n"
            )
    if args.precision == "fp16":
        rank_zero_info(
            "\n\nNote: you are using fp16 (might overflow). Try bf16 / tf32 for stable training.\n\n"
        )

    os.environ["RWKV_JIT_ON"] = "0"
    if "deepspeed_stage_3" in args.strategy:
        os.environ["RWKV_JIT_ON"] = "0"

    torch.backends.cudnn.benchmark = True
    torch.backends.cudnn.enabled = True
    if args.precision == "fp32":
        torch.backends.cudnn.allow_tf32 = False
        torch.backends.cuda.matmul.allow_tf32 = False
    else:
        torch.backends.cudnn.allow_tf32 = True
        torch.backends.cuda.matmul.allow_tf32 = True

    if "32" in args.precision:
        args.precision = 32
    elif args.precision == "fp16":
        args.precision = 16
    else:
        args.precision = "bf16"

    ########################################################################################################

    from src.trainer import train_callback, generate_init_weight
    from src.dataset import MyDataset

    train_data = MyDataset(args)
    args.vocab_size = train_data.vocab_size

    from src.model import RWKV, LORA_CONFIG, LoraLinear

    if args.lora:
        assert args.lora_r > 0, "LoRA should have its `r` > 0"
        LORA_CONFIG["r"] = args.lora_r
        LORA_CONFIG["alpha"] = args.lora_alpha
        LORA_CONFIG["dropout"] = args.lora_dropout
        LORA_CONFIG["parts"] = set(str(args.lora_parts).split(","))
        enable_time_finetune = "time" in LORA_CONFIG["parts"]
        enable_ln_finetune = "ln" in LORA_CONFIG["parts"]
    model = RWKV(args)
    # only train lora parameters
    if args.lora:
        model.requires_grad_(False)
        for name, module in model.named_modules():
            if any(n.startswith("lora_") for n, _ in module.named_parameters()):
                print(f"  LoRA additionally training module {name}")
                for pname, param in module.named_parameters():
                    param.requires_grad = "lora_" in pname
            elif enable_ln_finetune and ".ln" in name:
                print(f"  LoRA additionally training module {name}")
                for param in module.parameters():
                    param.requires_grad = True
            elif enable_time_finetune and any(
                n.startswith("time") for n, _ in module.named_parameters()
            ):
                for pname, param in module.named_parameters():
                    if pname.startswith("time"):
                        print(f"  LoRA additionally training parameter {pname}")
                        param.requires_grad = True

    if (
        len(args.load_model) == 0 or args.my_pile_stage == 1
    ):  # shall we build the initial weights?
        init_weight_name = f"{args.proj_dir}/rwkv-init.pth"
        generate_init_weight(model, init_weight_name)  # save initial weights
        args.load_model = init_weight_name

    rank_zero_info(f"########## Loading {args.load_model}... ##########")
    try:
        load_dict = torch.load(args.load_model, map_location="cpu")
        load_keys = list(load_dict.keys())
        for k in load_keys:
            if k.startswith("_forward_module."):
                load_dict[k.replace("_forward_module.", "")] = load_dict[k]
                del load_dict[k]
    except:
        rank_zero_info(f"Bad checkpoint {args.load_model}")
        if args.my_pile_stage >= 2:  # try again using another checkpoint
            max_p = args.my_pile_prev_p
            if max_p == -1:
                args.load_model = f"{args.proj_dir}/rwkv-init.pth"
            else:
                args.load_model = f"{args.proj_dir}/rwkv-{max_p}.pth"
            args.epoch_begin = max_p + 1
            rank_zero_info(f"Trying {args.load_model}")
            load_dict = torch.load(args.load_model, map_location="cpu")

    if args.load_partial == 1:
        load_keys = load_dict.keys()
        for k in model.state_dict():
            if k not in load_keys:
                load_dict[k] = model.state_dict()[k]
    # model.load_state_dict(load_dict)

    model.load_state_dict(load_dict, strict=(not args.lora))
    if os.path.isfile(args.lora_load):
        model.load_state_dict(
            torch.load(args.lora_load, map_location="cpu"), strict=False
        )
|
||||
|
||||
if pl.__version__[0] == "2":
|
||||
trainer = Trainer(
|
||||
accelerator=args.accelerator,
|
||||
strategy=args.strategy,
|
||||
devices=args.devices,
|
||||
num_nodes=args.num_nodes,
|
||||
precision=args.precision,
|
||||
logger=args.logger,
|
||||
callbacks=[train_callback(args)],
|
||||
max_epochs=args.max_epochs,
|
||||
check_val_every_n_epoch=args.check_val_every_n_epoch,
|
||||
num_sanity_val_steps=args.num_sanity_val_steps,
|
||||
log_every_n_steps=args.log_every_n_steps,
|
||||
enable_checkpointing=args.enable_checkpointing,
|
||||
accumulate_grad_batches=args.accumulate_grad_batches,
|
||||
gradient_clip_val=args.gradient_clip_val,
|
||||
)
|
||||
else:
|
||||
trainer = Trainer.from_argparse_args(
|
||||
args,
|
||||
callbacks=[train_callback(args)],
|
||||
)
|
||||
|
||||
if trainer.global_rank == 0:
|
||||
for n in model.state_dict():
|
||||
shape = model.state_dict()[n].shape
|
||||
shape = [i for i in shape if i != 1]
|
||||
if len(shape) > 1:
|
||||
print(f"{str(shape[0]).ljust(5)} {str(shape[1]).ljust(5)} {n}")
|
||||
else:
|
||||
print(f"{str(shape[0]).ljust(5)} {n}")
|
||||
|
||||
if "deepspeed" in args.strategy:
|
||||
trainer.strategy.config["zero_optimization"]["allgather_bucket_size"] = (
|
||||
args.ds_bucket_mb * 1000 * 1000
|
||||
)
|
||||
trainer.strategy.config["zero_optimization"]["reduce_bucket_size"] = (
|
||||
args.ds_bucket_mb * 1000 * 1000
|
||||
)
|
||||
|
||||
# must set shuffle=False, persistent_workers=False (because worker is in another thread)
|
||||
data_loader = DataLoader(
|
||||
train_data,
|
||||
shuffle=False,
|
||||
pin_memory=True,
|
||||
batch_size=args.micro_bsz,
|
||||
num_workers=1,
|
||||
persistent_workers=False,
|
||||
drop_last=True,
|
||||
)
|
||||
|
||||
trainer.fit(model, data_loader)
|
||||
32
frontend/package-lock.json
generated
@@ -13,11 +13,13 @@
         "@magenta/music": "^1.23.1",
         "@microsoft/fetch-event-source": "^2.0.1",
         "@primer/octicons-react": "^19.1.0",
+        "abcjs": "^6.2.3",
         "chart.js": "^4.3.0",
         "classnames": "^2.3.2",
         "file-saver": "^2.0.5",
         "html-midi-player": "^1.5.0",
         "i18next": "^22.4.15",
+        "lodash-es": "^4.17.21",
         "mobx": "^6.9.0",
         "mobx-react-lite": "^3.4.3",
         "pdfjs-dist": "^4.0.189",
@@ -40,6 +42,7 @@
      },
      "devDependencies": {
        "@types/file-saver": "^2.0.7",
+        "@types/lodash-es": "^4.17.12",
        "@types/react": "^18.2.6",
        "@types/react-beautiful-dnd": "^13.1.4",
        "@types/react-dom": "^18.2.4",
@@ -2533,6 +2536,21 @@
        "hoist-non-react-statics": "^3.3.0"
      }
    },
+    "node_modules/@types/lodash": {
+      "version": "4.14.202",
+      "resolved": "https://registry.npmjs.org/@types/lodash/-/lodash-4.14.202.tgz",
+      "integrity": "sha512-OvlIYQK9tNneDlS0VN54LLd5uiPCBOp7gS5Z0f1mjoJYBrtStzgmJBxONW3U6OZqdtNzZPmn9BS/7WI7BFFcFQ==",
+      "dev": true
+    },
+    "node_modules/@types/lodash-es": {
+      "version": "4.17.12",
+      "resolved": "https://registry.npmjs.org/@types/lodash-es/-/lodash-es-4.17.12.tgz",
+      "integrity": "sha512-0NgftHUcV4v34VhXm8QBSftKVXtbkBG3ViCjs6+eJ5a6y6Mi/jiFGPc1sC7QK+9BFhWrURE3EOggmWaSxL9OzQ==",
+      "dev": true,
+      "dependencies": {
+        "@types/lodash": "*"
+      }
+    },
    "node_modules/@types/long": {
      "version": "4.0.2",
      "resolved": "https://registry.npmjs.org/@types/long/-/long-4.0.2.tgz",
@@ -2673,6 +2691,15 @@
      "integrity": "sha512-nne9/IiQ/hzIhY6pdDnbBtz7DjPTKrY00P/zvPSm5pOFkl6xuGrGnXn/VtTNNfNtAfZ9/1RtehkszU9qcTii0Q==",
      "optional": true
    },
+    "node_modules/abcjs": {
+      "version": "6.2.3",
+      "resolved": "https://registry.npmjs.org/abcjs/-/abcjs-6.2.3.tgz",
+      "integrity": "sha512-epu8C1yRkxV7Ss9hS0Bu72rairl1p2sR3hviVowjtdDJvb5GRE0SrB4TtN4HBbaoYhvxGnSZQxGULfQlW3o3RQ==",
+      "funding": {
+        "type": "github",
+        "url": "https://github.com/sponsors/paulrosen"
+      }
+    },
    "node_modules/acorn": {
      "version": "7.4.1",
      "resolved": "https://registry.npmjs.org/acorn/-/acorn-7.4.1.tgz",
@@ -4210,6 +4237,11 @@
      "integrity": "sha512-7ylylesZQ/PV29jhEDl3Ufjo6ZX7gCqJr5F7PKrqc93v7fzSymt1BpwEU8nAUXs8qzzvqhbjhK5QZg6Mt/HkBg==",
      "dev": true
    },
+    "node_modules/lodash-es": {
+      "version": "4.17.21",
+      "resolved": "https://registry.npmjs.org/lodash-es/-/lodash-es-4.17.21.tgz",
+      "integrity": "sha512-mKnC+QJ9pWVzv+C4/U3rRsHapFfHvQFoFB92e52xeyGMcX6/OlIl78je1u8vePzYZSkkogMPJ2yjxxsb89cxyw=="
+    },
    "node_modules/long": {
      "version": "4.0.0",
      "resolved": "https://registry.npmjs.org/long/-/long-4.0.0.tgz",
@@ -14,11 +14,13 @@
    "@magenta/music": "^1.23.1",
    "@microsoft/fetch-event-source": "^2.0.1",
    "@primer/octicons-react": "^19.1.0",
+    "abcjs": "^6.2.3",
    "chart.js": "^4.3.0",
    "classnames": "^2.3.2",
    "file-saver": "^2.0.5",
    "html-midi-player": "^1.5.0",
    "i18next": "^22.4.15",
+    "lodash-es": "^4.17.21",
    "mobx": "^6.9.0",
    "mobx-react-lite": "^3.4.3",
    "pdfjs-dist": "^4.0.189",
@@ -41,6 +43,7 @@
  },
  "devDependencies": {
    "@types/file-saver": "^2.0.7",
+    "@types/lodash-es": "^4.17.12",
    "@types/react": "^18.2.6",
    "@types/react-beautiful-dnd": "^13.1.4",
    "@types/react-dom": "^18.2.4",
@@ -162,7 +162,7 @@
  "Memory is not enough, try to increase the virtual memory or use a smaller model.": "メモリが不足しています。仮想メモリを増やすか、もしくは小さなモデルを使ってみてください",
  "Bad PyTorch version, please reinstall PyTorch with cuda.": "不適切なPyTorchのバージョンです。cudaと共にPyTorchを再インストールしてください。",
  "The model file is corrupted, please download again.": "モデルファイルが破損しています。再度ダウンロードしてください。",
-  "Found no NVIDIA driver, please install the latest driver.": "NVIDIAのドライバが見つかりません。最新版のドライバをインストールしてください。",
+  "Found no NVIDIA driver, please install the latest driver. If you are not using an Nvidia GPU, please switch the 'Strategy' to WebGPU or CPU in the Configs page.": "NVIDIAのドライバが見つかりません。最新版のドライバをインストールしてください。NvidiaのGPUを使用していない場合は、設定ページで\"Strategy\"をWebGPUまたはCPUに切り替えてください。",
  "VRAM is not enough, please reduce stored layers or use a lower precision in Configs page.": "VRAMが足りません。設定ページで保存されているレイヤーを減らすか、精度を下げてください。",
  "Failed to enable custom CUDA kernel, ninja is required to load C++ extensions. You may be using the CPU version of PyTorch, please reinstall PyTorch with CUDA. Or if you are using a custom Python interpreter, you must compile the CUDA kernel by yourself or disable Custom CUDA kernel acceleration.": "カスタムCUDAカーネルの有効化に失敗しました。C++拡張を読み込むためにはNinjaが必要です。あなたは恐らくCPU版のPyTorchを使用しており、CUDA版のPyTorchを再インストールする必要があります。または、あなたがカスタムPythonインタプリタを使用している場合は、CUDAカーネルを自分でコンパイルするか、カスタムCUDAカーネルのアクセラレーションを無効にする必要があります。",
  "Presets": "プリセット",
@@ -312,6 +312,8 @@
  "JP": "日本語",
  "Music": "音楽",
  "Other": "その他",
+  "Role Play": "ロールプレイ",
+  "Recommended": "おすすめ",
  "Import MIDI": "MIDIをインポート",
  "Current Instrument": "現在の楽器",
  "Please convert model to GGML format first": "モデルをGGML形式に変換してください",
@@ -320,5 +322,7 @@
  "Play With External Player": "外部プレーヤーで再生",
  "Core API URL": "コアAPI URL",
  "Override core API URL(/chat/completions and /completions). If you don't know what this is, leave it blank.": "コアAPI URLを上書きします(/chat/completions と /completions)。何であるかわからない場合は空白のままにしてください。",
-  "Please change Strategy to CPU (rwkv.cpp) to use ggml format": "StrategyをCPU (rwkv.cpp)に変更して、ggml形式を使用してください"
+  "Please change Strategy to CPU (rwkv.cpp) to use ggml format": "StrategyをCPU (rwkv.cpp)に変更して、ggml形式を使用してください",
+  "Only Auto Play Generated Content": "生成されたコンテンツのみ自動再生",
+  "Model has been converted and does not match current strategy. If you are using a new strategy, re-convert the model.": "モデルが変換され、現在の戦略と一致しません。新しい戦略を使用している場合は、モデルを再変換してください。"
}
@@ -162,7 +162,7 @@
  "Memory is not enough, try to increase the virtual memory or use a smaller model.": "内存不足,尝试增加虚拟内存,或使用一个更小规模的模型",
  "Bad PyTorch version, please reinstall PyTorch with cuda.": "错误的PyTorch版本,请重新安装CUDA版本的PyTorch",
  "The model file is corrupted, please download again.": "模型文件损坏,请重新下载",
-  "Found no NVIDIA driver, please install the latest driver.": "没有找到NVIDIA驱动,请安装最新驱动",
+  "Found no NVIDIA driver, please install the latest driver. If you are not using an Nvidia GPU, please switch the 'Strategy' to WebGPU or CPU in the Configs page.": "没有找到NVIDIA驱动,请安装最新驱动。如果你没有使用Nvidia显卡,请在配置页面将“Strategy”改为WebGPU或CPU",
  "VRAM is not enough, please reduce stored layers or use a lower precision in Configs page.": "显存不足,请在配置页面减少载入显存层数,或使用更低的精度",
  "Failed to enable custom CUDA kernel, ninja is required to load C++ extensions. You may be using the CPU version of PyTorch, please reinstall PyTorch with CUDA. Or if you are using a custom Python interpreter, you must compile the CUDA kernel by yourself or disable Custom CUDA kernel acceleration.": "自定义CUDA算子开启失败,需要安装Ninja来读取C++扩展。你可能正在使用CPU版本的PyTorch,请重新安装CUDA版本的PyTorch。如果你正在使用自定义Python解释器,你必须自己编译CUDA算子或禁用自定义CUDA算子加速",
  "Presets": "预设",
@@ -312,6 +312,8 @@
  "JP": "日文",
  "Music": "音乐",
  "Other": "其他",
+  "Role Play": "角色扮演",
+  "Recommended": "推荐",
  "Import MIDI": "导入MIDI",
  "Current Instrument": "当前乐器",
  "Please convert model to GGML format first": "请先将模型转换为GGML格式",
@@ -320,5 +322,7 @@
  "Play With External Player": "使用外部播放器播放",
  "Core API URL": "核心 API URL",
  "Override core API URL(/chat/completions and /completions). If you don't know what this is, leave it blank.": "覆盖核心的 API URL (/chat/completions 和 /completions)。如果你不知道这是什么,请留空",
-  "Please change Strategy to CPU (rwkv.cpp) to use ggml format": "请将Strategy改为CPU (rwkv.cpp)以使用ggml格式"
+  "Please change Strategy to CPU (rwkv.cpp) to use ggml format": "请将Strategy改为CPU (rwkv.cpp)以使用ggml格式",
+  "Only Auto Play Generated Content": "仅自动播放新生成的内容",
+  "Model has been converted and does not match current strategy. If you are using a new strategy, re-convert the model.": "所选模型已被转换过,并且不匹配当前的Strategy。如果你正在使用新的Strategy,请重新转换模型"
}
@@ -19,6 +19,7 @@ import { useNavigate } from 'react-router';
import { WindowShow } from '../../wailsjs/runtime';
import { convertToGGML, convertToSt } from '../utils/convert-model';
import { Precision } from '../types/configs';
+import { defaultCompositionABCPrompt, defaultCompositionPrompt } from '../pages/defaultConfigs';

const mainButtonText = {
  [ModelStatus.Offline]: 'Run',
@@ -48,6 +49,7 @@ export const RunButton: FC<{ onClickRun?: MouseEventHandler, iconMode?: boolean

  const modelConfig = commonStore.getCurrentModelConfig();
  const webgpu = modelConfig.modelParameters.device === 'WebGPU';
+  const webgpuPython = modelConfig.modelParameters.device === 'WebGPU (Python)';
  const cpp = modelConfig.modelParameters.device === 'CPU (rwkv.cpp)';
  let modelName = '';
  let modelPath = '';
@@ -77,7 +79,7 @@ export const RunButton: FC<{ onClickRun?: MouseEventHandler, iconMode?: boolean
    });
  };

-  if (webgpu) {
+  if (webgpu || webgpuPython) {
    if (!['.st', '.safetensors'].some(ext => modelPath.endsWith(ext))) {
      const stModelPath = modelPath.replace(/\.pth$/, '.st');
      if (await FileExists(stModelPath)) {
@@ -92,7 +94,7 @@ export const RunButton: FC<{ onClickRun?: MouseEventHandler, iconMode?: boolean
        return;
      } else {
        toastWithButton(t('Please convert model to safe tensors format first'), t('Convert'), () => {
-          convertToSt(modelConfig);
+          convertToSt(modelConfig, navigate);
        });
        commonStore.setStatus({ status: ModelStatus.Offline });
        return;
@@ -100,7 +102,7 @@ export const RunButton: FC<{ onClickRun?: MouseEventHandler, iconMode?: boolean
    }
  }

-  if (!webgpu) {
+  if (!webgpu && !webgpuPython) {
    if (['.st', '.safetensors'].some(ext => modelPath.endsWith(ext))) {
      toast(t('Please change Strategy to WebGPU to use safetensors format'), { type: 'error' });
      commonStore.setStatus({ status: ModelStatus.Offline });
@@ -176,7 +178,7 @@ export const RunButton: FC<{ onClickRun?: MouseEventHandler, iconMode?: boolean
    const isUsingCudaBeta = modelConfig.modelParameters.device === 'CUDA-Beta';

    startServer(commonStore.settings.customPythonPath, port, commonStore.settings.host !== '127.0.0.1' ? '0.0.0.0' : '127.0.0.1',
-      !!modelConfig.enableWebUI, isUsingCudaBeta, cpp
+      !!modelConfig.enableWebUI, isUsingCudaBeta, cpp, webgpuPython
    ).catch((e) => {
      const errMsg = e.message || e;
      if (errMsg.includes('path contains space'))
@@ -216,7 +218,7 @@ export const RunButton: FC<{ onClickRun?: MouseEventHandler, iconMode?: boolean

    const strategy = getStrategy(modelConfig);
    let customCudaFile = '';
-    if ((modelConfig.modelParameters.device.includes('CUDA') || modelConfig.modelParameters.device === 'Custom')
+    if ((modelConfig.modelParameters.device.startsWith('CUDA') || modelConfig.modelParameters.device === 'Custom')
      && modelConfig.modelParameters.useCustomCuda
      && !strategy.split('->').some(s => ['cuda', 'fp32'].every(v => s.includes(v)))) {
      if (commonStore.platform === 'windows') {
@@ -256,6 +258,7 @@ export const RunButton: FC<{ onClickRun?: MouseEventHandler, iconMode?: boolean
    commonStore.setStatus({ status: ModelStatus.Working });
    let buttonNameMap = {
      'novel': 'Completion',
+      'abc': 'Composition',
      'midi': 'Composition'
    };
    let buttonName = 'Chat';
@@ -263,8 +266,15 @@ export const RunButton: FC<{ onClickRun?: MouseEventHandler, iconMode?: boolean
    const buttonFn = () => {
      navigate({ pathname: '/' + buttonName.toLowerCase() });
    };
+    if (modelName.toLowerCase().includes('abc') && commonStore.compositionParams.prompt === defaultCompositionPrompt) {
+      commonStore.setCompositionParams({
+        ...commonStore.compositionParams,
+        prompt: defaultCompositionABCPrompt
+      });
+      commonStore.setCompositionSubmittedPrompt(defaultCompositionABCPrompt);
+    }

-    if ((modelConfig.modelParameters.device === 'CUDA' || modelConfig.modelParameters.device === 'CUDA-Beta') &&
+    if (modelConfig.modelParameters.device.startsWith('CUDA') &&
      modelConfig.modelParameters.storedLayers < modelConfig.modelParameters.maxStoredLayers &&
      commonStore.monitorData && commonStore.monitorData.totalVram !== 0 &&
      (commonStore.monitorData.usedVram / commonStore.monitorData.totalVram) < 0.9)
@@ -279,9 +289,10 @@ export const RunButton: FC<{ onClickRun?: MouseEventHandler, iconMode?: boolean
      'not enough memory': 'Memory is not enough, try to increase the virtual memory or use a smaller model.',
      'not compiled with CUDA': 'Bad PyTorch version, please reinstall PyTorch with cuda.',
      'invalid header or archive is corrupted': 'The model file is corrupted, please download again.',
-      'no NVIDIA driver': 'Found no NVIDIA driver, please install the latest driver.',
+      'no NVIDIA driver': 'Found no NVIDIA driver, please install the latest driver. If you are not using an Nvidia GPU, please switch the \'Strategy\' to WebGPU or CPU in the Configs page.',
      'CUDA out of memory': 'VRAM is not enough, please reduce stored layers or use a lower precision in Configs page.',
-      'Ninja is required to load C++ extensions': 'Failed to enable custom CUDA kernel, ninja is required to load C++ extensions. You may be using the CPU version of PyTorch, please reinstall PyTorch with CUDA. Or if you are using a custom Python interpreter, you must compile the CUDA kernel by yourself or disable Custom CUDA kernel acceleration.'
+      'Ninja is required to load C++ extensions': 'Failed to enable custom CUDA kernel, ninja is required to load C++ extensions. You may be using the CPU version of PyTorch, please reinstall PyTorch with CUDA. Or if you are using a custom Python interpreter, you must compile the CUDA kernel by yourself or disable Custom CUDA kernel acceleration.',
+      're-convert the model': 'Model has been converted and does not match current strategy. If you are using a new strategy, re-convert the model.'
    };
    const matchedError = Object.entries(errorsMap).find(([key, _]) => error.includes(key));
    const message = matchedError ? t(matchedError[1]) : error;
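The errorsMap lookup above resolves a raw backend error to a translatable message by substring match, falling back to the raw text when nothing matches. A minimal TypeScript sketch of the same pattern with a trimmed-down map (the helper name `resolveErrorMessage` is illustrative, not part of the diff):

```ts
// Substring-keyed error table: the first key contained in the raw error wins.
const errorsMap: { [key: string]: string } = {
  'no NVIDIA driver': 'Found no NVIDIA driver, please install the latest driver.',
  're-convert the model': 'Model has been converted and does not match current strategy.'
};

function resolveErrorMessage(error: string): string {
  const matched = Object.entries(errorsMap).find(([key]) => error.includes(key));
  // Fall back to the raw error when no known fragment matches.
  return matched ? matched[1] : error;
}
```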
@@ -15,12 +15,13 @@ import { ArrowSync20Regular, Save28Regular } from '@fluentui/react-icons';
import { PlayerElement, VisualizerElement } from 'html-midi-player';
import * as mm from '@magenta/music/esm/core.js';
import { NoteSequence } from '@magenta/music/esm/protobuf.js';
-import { defaultCompositionPrompt } from './defaultConfigs';
+import { defaultCompositionABCPrompt, defaultCompositionPrompt } from './defaultConfigs';
import {
  CloseMidiPort,
  FileExists,
  OpenFileFolder,
  OpenMidiPort,
  OpenSaveFileDialog,
  OpenSaveFileDialogBytes,
  SaveFile,
  StartFile
@@ -36,7 +37,9 @@ const CompositionPanel: FC = observer(() => {
  const { t } = useTranslation();
  const mq = useMediaQuery('(min-width: 640px)');
  const inputRef = useRef<HTMLTextAreaElement>(null);
-  const port = commonStore.getCurrentModelConfig().apiParameters.apiPort;
+  const modelConfig = commonStore.getCurrentModelConfig();
+  const port = modelConfig.apiParameters.apiPort;
+  const isABC = modelConfig.modelParameters.modelName.toLowerCase().includes('abc');
  const visualizerRef = useRef<VisualizerElement>(null);
  const playerRef = useRef<PlayerElement>(null);

@@ -133,6 +136,13 @@ const CompositionPanel: FC = observer(() => {
  }, [commonStore.midiPorts]);

  const generateNs = (autoPlay: boolean) => {
+    if (commonStore.getCurrentModelConfig().modelParameters.modelName.toLowerCase().includes('abc')) {
+      import('abcjs').then(ABCJS => {
+        ABCJS.renderAbc('abc-paper', commonStore.compositionParams.prompt, { responsive: 'resize' });
+      });
+      return;
+    }
+
    fetch(getServerRoot(port) + '/text-to-midi', {
      method: 'POST',
      headers: {
@@ -152,10 +162,14 @@ const CompositionPanel: FC = observer(() => {
        if (autoPlay) {
          if (commonStore.compositionParams.externalPlay)
            externalPlayListener();
-          else
+          else {
+            if (commonStore.compositionParams.playOnlyGeneratedContent && playerRef.current) {
+              playerRef.current.currentTime = Math.max(commonStore.compositionParams.generationStartTime - 1, 0);
+            }
            setTimeout(() => {
              playerRef.current?.start();
            });
+          }
        }
      });
    });
@@ -314,6 +328,14 @@ const CompositionPanel: FC = observer(() => {
            autoPlay: data.checked as boolean
          });
        }} />
+      <Checkbox className="select-none"
+        size="large" label={t('Only Auto Play Generated Content')} checked={params.playOnlyGeneratedContent}
+        onChange={async (_, data) => {
+          setParams({
+            autoPlay: data.checked as boolean || commonStore.compositionParams.autoPlay,
+            playOnlyGeneratedContent: data.checked as boolean
+          });
+        }} />
      <Labeled flex breakline label={t('MIDI Input')}
        desc={t('Select the MIDI input device to be used.')}
        content={
@@ -358,8 +380,13 @@ const CompositionPanel: FC = observer(() => {
      <DialogButton className="grow" text={t('Reset')} title={t('Reset')}
        contentText={t('Are you sure you want to reset this page? It cannot be undone.')}
        onConfirm={() => {
-          commonStore.setCompositionSubmittedPrompt(defaultCompositionPrompt);
-          setPrompt(defaultCompositionPrompt);
+          const isABC = commonStore.getCurrentModelConfig().modelParameters.modelName.toLowerCase().includes('abc');
+          const defaultPrompt = isABC ? defaultCompositionABCPrompt : defaultCompositionPrompt;
+          commonStore.setCompositionSubmittedPrompt(defaultPrompt);
+          setParams({
+            generationStartTime: 0
+          });
+          setPrompt(defaultPrompt);
        }} />
      <Button className="grow" appearance="primary" onClick={() => {
        if (commonStore.compositionGenerating) {
@@ -368,6 +395,9 @@ const CompositionPanel: FC = observer(() => {
          generateNs(params.autoPlay);
        } else {
          commonStore.setCompositionGenerating(true);
+          setParams({
+            generationStartTime: playerRef.current ? playerRef.current.duration : 0
+          });
          onSubmit(params.prompt);
        }
      }}>{!commonStore.compositionGenerating ? t('Generate') : t('Stop')}</Button>
@@ -376,18 +406,33 @@ const CompositionPanel: FC = observer(() => {
      </div>
      <div className="flex flex-col">
        <div className="ml-auto mr-auto">
-          <midi-visualizer
-            ref={visualizerRef}
-            type="waterfall"
-          />
+          {isABC ? <div /> :
+            <midi-visualizer
+              ref={visualizerRef}
+              type="waterfall"
+            />}
        </div>
        <div className="flex">
-          <midi-player
-            ref={playerRef}
-            style={{ width: '100%' }}
-          />
+          {isABC ? <div className="flex flex-col overflow-y-auto grow m-1" style={{ maxHeight: '260px' }}>
+            <div id="abc-paper" />
+          </div> :
+            <midi-player
+              ref={playerRef}
+              style={{ width: '100%' }}
+            />}
          <Button icon={<Save28Regular />} size={mq ? 'large' : 'medium'} appearance={mq ? 'secondary' : 'subtle'}
            onClick={() => {
+              if (isABC) {
+                OpenSaveFileDialog('*.txt', 'abc-music.txt', commonStore.compositionParams.prompt).then((path) => {
+                  if (path)
+                    toastWithButton(t('File Saved'), t('Open'), () => {
+                      OpenFileFolder(path, false);
+                    });
+                }).catch((e) => {
+                  toast(t('Error') + ' - ' + (e.message || e), { type: 'error', autoClose: 2500 });
+                });
+                return;
+              }
              if (params.midi) {
                OpenSaveFileDialogBytes('*.mid', 'music.mid', Array.from(new Uint8Array(params.midi))).then((path) => {
                  if (path)
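The playOnlyGeneratedContent branch above records the player's duration as generationStartTime when generation begins, then rewinds to just before that point before auto-playing, so only the newly generated material is heard. A sketch of that behavior under a simplified interface (`PlayerLike` stands in for html-midi-player's PlayerElement; the helper names are illustrative):

```ts
interface PlayerLike {
  currentTime: number; // seconds
  duration: number;    // seconds
  start(): void;
}

// Before generating: remember where the existing content ends.
function markGenerationStart(player: PlayerLike | null): number {
  return player ? player.duration : 0;
}

// After generating: optionally seek to 1s before the new content, clamped to 0.
function autoPlay(player: PlayerLike, generationStartTime: number, onlyGenerated: boolean): void {
  if (onlyGenerated) {
    player.currentTime = Math.max(generationStartTime - 1, 0);
  }
  player.start();
}
```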
@@ -228,9 +228,18 @@
  <Select style={{ minWidth: 0 }} className="grow"
    value={selectedConfig.modelParameters.modelName}
    onChange={(e, data) => {
-      setSelectedConfigModelParams({
-        modelName: data.value
-      });
+      const modelSource = commonStore.modelSourceList.find(item => item.name === data.value);
+      if (modelSource?.customTokenizer)
+        setSelectedConfigModelParams({
+          modelName: data.value,
+          useCustomTokenizer: true,
+          customTokenizer: modelSource?.customTokenizer
+        });
+      else // prevent customTokenizer from being overwritten
+        setSelectedConfigModelParams({
+          modelName: data.value,
+          useCustomTokenizer: false
+        });
    }}>
    {!commonStore.modelSourceList.find(item => item.name === selectedConfig.modelParameters.modelName)?.isComplete
      && <option key={-1}
@@ -246,7 +255,7 @@
    </div>
  } />
  {
-    selectedConfig.modelParameters.device !== 'WebGPU' ?
+    !selectedConfig.modelParameters.device.startsWith('WebGPU') ?
      (selectedConfig.modelParameters.device !== 'CPU (rwkv.cpp)' ?
        <ToolTipButton text={t('Convert')}
          desc={t('Convert model with these configs. Using a converted model will greatly improve the loading speed, but model parameters of the converted model cannot be modified.')}
@@ -256,7 +265,7 @@
          onClick={() => convertToGGML(selectedConfig, navigate)} />)
      : <ToolTipButton text={t('Convert To Safe Tensors Format')}
          desc=""
-          onClick={() => convertToSt(selectedConfig)} />
+          onClick={() => convertToSt(selectedConfig, navigate)} />
  }
  <Labeled label={t('Strategy')} content={
    <Dropdown style={{ minWidth: 0 }} className="grow" value={t(selectedConfig.modelParameters.device)!}
@@ -274,6 +283,7 @@
      <Option value="CUDA">CUDA</Option>
      <Option value="CUDA-Beta">{t('CUDA (Beta, Faster)')!}</Option>
      <Option value="WebGPU">WebGPU</Option>
+      <Option value="WebGPU (Python)">WebGPU (Python)</Option>
      <Option value="Custom">{t('Custom')!}</Option>
    </Dropdown>
  } />
@@ -281,7 +291,8 @@
  selectedConfig.modelParameters.device !== 'Custom' && <Labeled label={t('Precision')}
    desc={t('int8 uses less VRAM, but has slightly lower quality. fp16 has higher quality.')}
    content={
-      <Dropdown style={{ minWidth: 0 }} className="grow"
+      <Dropdown
+        style={{ minWidth: 0 }} className="grow"
        value={selectedConfig.modelParameters.precision}
        selectedOptions={[selectedConfig.modelParameters.precision]}
        onOptionSelect={(_, data) => {
@@ -294,20 +305,20 @@
        {selectedConfig.modelParameters.device !== 'CPU' && selectedConfig.modelParameters.device !== 'MPS' &&
          <Option>fp16</Option>}
        {selectedConfig.modelParameters.device !== 'CPU (rwkv.cpp)' && <Option>int8</Option>}
-        {selectedConfig.modelParameters.device === 'WebGPU' && <Option>nf4</Option>}
-        {selectedConfig.modelParameters.device !== 'CPU (rwkv.cpp)' && selectedConfig.modelParameters.device !== 'WebGPU' &&
+        {selectedConfig.modelParameters.device.startsWith('WebGPU') && <Option>nf4</Option>}
+        {selectedConfig.modelParameters.device !== 'CPU (rwkv.cpp)' && !selectedConfig.modelParameters.device.startsWith('WebGPU') &&
          <Option>fp32</Option>}
        {selectedConfig.modelParameters.device === 'CPU (rwkv.cpp)' && <Option>Q5_1</Option>}
      </Dropdown>
    } />
  }
  {
-    selectedConfig.modelParameters.device.includes('CUDA') &&
+    selectedConfig.modelParameters.device.startsWith('CUDA') &&
    <Labeled label={t('Current Strategy')}
      content={<Text> {getStrategy(selectedConfig)} </Text>} />
  }
  {
-    selectedConfig.modelParameters.device.includes('CUDA') &&
+    selectedConfig.modelParameters.device.startsWith('CUDA') &&
    <Labeled label={t('Stored Layers')}
      desc={t('Number of the neural network layers loaded into VRAM, the more you load, the faster the speed, but it consumes more VRAM. (If your VRAM is not enough, it will fail to load)')}
      content={
@@ -320,7 +331,7 @@
      }} />
    } />
  }
-  {selectedConfig.modelParameters.device.includes('CUDA') && <div />}
+  {selectedConfig.modelParameters.device.startsWith('CUDA') && <div />}
  {
    displayStrategyImg &&
    <img style={{ width: '80vh', height: 'auto', zIndex: 100 }}
@@ -345,7 +356,7 @@
  }
  {selectedConfig.modelParameters.device === 'Custom' && <div />}
  {
-    (selectedConfig.modelParameters.device.includes('CUDA') || selectedConfig.modelParameters.device === 'Custom') &&
+    (selectedConfig.modelParameters.device.startsWith('CUDA') || selectedConfig.modelParameters.device === 'Custom') &&
    <Labeled label={t('Use Custom CUDA kernel to Accelerate')}
      desc={t('Enabling this option can greatly improve inference speed and save some VRAM, but there may be compatibility issues (output garbled). If it fails to start, please turn off this option, or try to upgrade your gpu driver.')}
      content={
@@ -394,6 +405,7 @@
      </div>
    }
  />
+  {mq && <div style={{ minHeight: '30px' }} />}
</div>
<div className="flex flex-row-reverse sm:fixed bottom-2 right-2">
  <div className="flex gap-2">
@@ -153,23 +153,32 @@
  })
];

+const getTags = () => {
+  return Array.from(new Set(
+    ['Recommended',
+      ...commonStore.modelSourceList.map(item => item.tags || []).flat()
+        .filter(i => !i.includes('Other') && !i.includes('Local'))
+      , 'Other', 'Local']));
+};
+
+const getCurrentModelList = () => {
+  if (commonStore.activeModelListTags.length === 0)
+    return commonStore.modelSourceList;
+  else
+    return commonStore.modelSourceList.filter(item => commonStore.activeModelListTags.some(tag => item.tags?.includes(tag)));
+};
+
const Models: FC = observer(() => {
  const { t } = useTranslation();
-  const [tags, setTags] = useState<Array<string>>([]);
-  const [modelSourceList, setModelSourceList] = useState<ModelSourceItem[]>(commonStore.modelSourceList);
+  const [tags, setTags] = useState<Array<string>>(getTags());
+  const [modelSourceList, setModelSourceList] = useState<ModelSourceItem[]>(getCurrentModelList());

  useEffect(() => {
-    setTags(Array.from(new Set(
-      [...commonStore.modelSourceList.map(item => item.tags || []).flat()
-        .filter(i => !i.includes('Other') && !i.includes('Local'))
-      , 'Other', 'Local'])));
+    setTags(getTags());
  }, [commonStore.modelSourceList]);

  useEffect(() => {
-    if (commonStore.activeModelListTags.length === 0)
-      setModelSourceList(commonStore.modelSourceList);
-    else
-      setModelSourceList(commonStore.modelSourceList.filter(item => commonStore.activeModelListTags.some(tag => item.tags?.includes(tag))));
+    setModelSourceList(getCurrentModelList());
  }, [commonStore.modelSourceList, commonStore.activeModelListTags]);

  return (
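getTags relies on Set preserving insertion order: 'Recommended' is pinned to the front, 'Other' and 'Local' to the back, and duplicates collected from the per-model tag arrays collapse onto their first (pinned) position. A self-contained check of that ordering with illustrative data:

```ts
const models: { tags?: string[] }[] = [
  { tags: ['Main', 'RWKV-5'] },
  { tags: ['RWKV-5', 'Other'] }
];

const tags = Array.from(new Set(
  ['Recommended',
    ...models.map(m => m.tags || []).flat()
      .filter(i => !i.includes('Other') && !i.includes('Local')),
    'Other', 'Local']));

console.log(tags); // ['Recommended', 'Main', 'RWKV-5', 'Other', 'Local']
```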
@@ -131,7 +131,7 @@ const showError = (e: any) => {
};

const errorsMap = Object.entries({
-  'python3 ./finetune/lora/train.py': 'Memory is not enough, try to increase the virtual memory (Swap of WSL) or use a smaller base model.',
+  'python3 ./finetune/lora/$modelInfo': 'Memory is not enough, try to increase the virtual memory (Swap of WSL) or use a smaller base model.',
  'cuda out of memory': 'VRAM is not enough',
  'valueerror: high <= 0': 'Training data is not enough, reduce context length or add more data for training',
  '+= \'+ptx\'': 'Can not find an Nvidia GPU. Perhaps the gpu driver of windows is too old, or you are using WSL 1 for training, please upgrade to WSL 2. e.g. Run "wsl --set-version Ubuntu-22.04 2"',
@@ -299,7 +299,6 @@ const LoraFinetune: FC = observer(() => {
  (loraParams.baseModel ? `--load_model models/${loraParams.baseModel} ` : '') +
  (loraParams.loraLoad ? `--lora_load lora-models/${loraParams.loraLoad} ` : '') +
  `--data_file ${convertedDataPath} ` +
-  `--vocab_size ${loraParams.baseModel.toLowerCase().includes('world') ? '65536' : '50277'} ` +
  `--ctx_len ${ctxLen} --epoch_steps ${loraParams.epochSteps} --epoch_count ${loraParams.epochCount} ` +
  `--epoch_begin ${loraParams.epochBegin} --epoch_save ${loraParams.epochSave} ` +
  `--micro_bsz ${loraParams.microBsz} --accumulate_grad_batches ${loraParams.accumGradBatches} ` +
@@ -2,16 +2,27 @@ import { CompletionPreset } from '../types/completion';
import { ModelConfig } from '../types/configs';

export const defaultCompositionPrompt = '<pad>';
+export const defaultCompositionABCPrompt = 'S:3\n' +
+  'B:9\n' +
+  'E:4\n' +
+  'B:9\n' +
+  'E:4\n' +
+  'E:4\n' +
+  'B:9\n' +
+  'L:1/8\n' +
+  'M:3/4\n' +
+  'K:D\n' +
+  ' Bc |"G" d2 cB"A" A2 FE |"Bm" F2 B4 F^G |';

export const defaultPresets: CompletionPreset[] = [{
  name: 'Writer',
  prompt: 'The following is an epic science fiction masterpiece that is immortalized, with delicate descriptions and grand depictions of interstellar civilization wars.\nChapter 1.\n',
  params: {
    maxResponseToken: 500,
-    temperature: 1.2,
-    topP: 0.5,
-    presencePenalty: 0.4,
-    frequencyPenalty: 0.4,
+    temperature: 1,
+    topP: 0.3,
+    presencePenalty: 0,
+    frequencyPenalty: 1,
    stop: '\\n\\nUser',
    injectStart: '',
    injectEnd: ''
@@ -49,7 +49,7 @@ export async function startup() {
async function initRemoteText() {
  await fetch('https://cdn.jsdelivr.net/gh/josstorer/RWKV-Runner@master/manifest.json', { cache: 'no-cache' })
    .then(r => r.json()).then((data) => {
-      if (data.version > manifest.version) {
+      if (data.version >= manifest.version) {
        if (data.introduction)
          commonStore.setIntroduction(data.introduction);
        if (data.about)
@@ -96,7 +96,9 @@ class CommonStore {
    useLocalSoundFont: false,
    externalPlay: false,
    midi: null,
-    ns: null
+    ns: null,
+    generationStartTime: 0,
+    playOnlyGeneratedContent: true
  };
  compositionGenerating: boolean = false;
  compositionSubmittedPrompt: string = defaultCompositionPrompt;
@@ -11,7 +11,9 @@ export type CompositionParams = {
  useLocalSoundFont: boolean,
  externalPlay: boolean,
  midi: ArrayBuffer | null,
-  ns: NoteSequence | null
+  ns: NoteSequence | null,
+  generationStartTime: number,
+  playOnlyGeneratedContent: boolean,
}
export type Track = {
  id: string;
@@ -6,7 +6,7 @@ export type ApiParameters = {
  presencePenalty: number;
  frequencyPenalty: number;
}
-export type Device = 'CPU' | 'CPU (rwkv.cpp)' | 'CUDA' | 'CUDA-Beta' | 'WebGPU' | 'MPS' | 'Custom';
+export type Device = 'CPU' | 'CPU (rwkv.cpp)' | 'CUDA' | 'CUDA-Beta' | 'WebGPU' | 'WebGPU (Python)' | 'MPS' | 'Custom';
export type Precision = 'fp16' | 'int8' | 'fp32' | 'nf4' | 'Q5_1';
export type ModelParameters = {
  // different models can not have the same name
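The widened Device union is why the sibling diffs in this set replace equality and includes() checks with startsWith(): both WebGPU variants share a prefix, as do 'CUDA' and 'CUDA-Beta', while 'CPU' keeps exact comparison because 'CPU (rwkv.cpp)' shares its prefix yet needs different handling. A minimal sketch (helper names illustrative):

```ts
type Device = 'CPU' | 'CPU (rwkv.cpp)' | 'CUDA' | 'CUDA-Beta' | 'WebGPU' | 'WebGPU (Python)' | 'MPS' | 'Custom';

const isWebGPU = (d: Device) => d.startsWith('WebGPU'); // 'WebGPU', 'WebGPU (Python)'
const isCUDA = (d: Device) => d.startsWith('CUDA');     // 'CUDA', 'CUDA-Beta'
const isPlainCPU = (d: Device) => d === 'CPU';          // must NOT match 'CPU (rwkv.cpp)'
```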
@@ -1,15 +1,17 @@
export type ModelSourceItem = {
  name: string;
-  size: number;
-  lastUpdated: string;
  desc?: { [lang: string]: string | undefined; };
+  size: number;
  SHA256?: string;
+  lastUpdated: string;
  url?: string;
  downloadUrl?: string;
+  tags?: string[];
+  customTokenizer?: string;
+  hide?: boolean;
+
+  lastUpdatedMs?: number;
  isComplete?: boolean;
  isLocal?: boolean;
  localSize?: number;
-  lastUpdatedMs?: number;
-  tags?: string[];
-  hide?: boolean;
};
@@ -5,6 +5,7 @@ import {
  ConvertGGML,
  ConvertModel,
  ConvertSafetensors,
+  ConvertSafetensorsWithPython,
  FileExists,
  GetPyError
} from '../../wailsjs/go/backend_golang/App';
@@ -51,12 +52,22 @@ export const convertModel = async (selectedConfig: ModelConfig, navigate: NavigateFunction) => {
};


-export const convertToSt = async (selectedConfig: ModelConfig) => {
+export const convertToSt = async (selectedConfig: ModelConfig, navigate: NavigateFunction) => {
+  const webgpuPython = selectedConfig.modelParameters.device === 'WebGPU (Python)';
+  if (webgpuPython) {
+    const ok = await checkDependencies(navigate);
+    if (!ok)
+      return;
+  }
+
  const modelPath = `${commonStore.settings.customModelsPath}/${selectedConfig.modelParameters.modelName}`;
  if (await FileExists(modelPath)) {
    toast(t('Start Converting'), { autoClose: 2000, type: 'info' });
    const newModelPath = modelPath.replace(/\.pth$/, '.st');
-    ConvertSafetensors(modelPath, newModelPath).then(async () => {
+    const convert = webgpuPython ?
+      (input: string, output: string) => ConvertSafetensorsWithPython(commonStore.settings.customPythonPath, input, output)
+      : ConvertSafetensors;
+    convert(modelPath, newModelPath).then(async () => {
      if (!await FileExists(newModelPath)) {
        if (commonStore.platform === 'windows' || commonStore.platform === 'linux')
          toast(t('Convert Failed') + ' - ' + await GetPyError(), { type: 'error' });
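convertToSt now dispatches between two converters with the same call shape: the 'WebGPU (Python)' strategy routes the .pth to .st conversion through the Python backend (which is why its dependencies are checked first), while everything else keeps the built-in converter. A condensed sketch of the dispatch, with signatures simplified to the two path arguments:

```ts
type Converter = (input: string, output: string) => Promise<string>;

// withPython wraps ConvertSafetensorsWithPython plus the configured python path;
// builtIn is the plain ConvertSafetensors binding. Names here are illustrative.
function pickStConverter(device: string, withPython: Converter, builtIn: Converter): Converter {
  return device === 'WebGPU (Python)' ? withPython : builtIn;
}
```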
@@ -27,6 +27,7 @@ import logo from '../assets/images/logo.png';
import { Preset } from '../types/presets';
import { botName, Conversation, MessageType, userName } from '../types/chat';
import { v4 as uuid } from 'uuid';
+import { findLastIndex } from 'lodash-es';

export type Cache = {
  version: string
@@ -51,11 +52,11 @@ export async function refreshBuiltInModels(readCache: boolean = false) {
    await ReadJson('cache.json').then((cacheData: Cache) => {
      if (cacheData.models)
        cache.models = cacheData.models;
-      else cache.models = manifest.models;
+      else cache.models = manifest.models.slice();
    }).catch(() => {
-      cache.models = manifest.models;
+      cache.models = manifest.models.slice();
    });
-  else cache.models = manifest.models;
+  else cache.models = manifest.models.slice();

  commonStore.setModelSourceList(cache.models);
  await saveCache().catch(() => {
@@ -90,7 +91,7 @@ export async function refreshLocalModels(cache: {
  for (let i = 0; i < cache.models.length; i++) {
    if (!cache.models[i].lastUpdatedMs)
      cache.models[i].lastUpdatedMs = Date.parse(cache.models[i].lastUpdated);
-    if (!cache.models[i].tags)
+    if (!cache.models[i].tags || !Array.isArray(cache.models[i].tags) || cache.models[i].tags?.length === 0)
      cache.models[i].tags = ['Other'];

    for (let j = i + 1; j < cache.models.length; j++) {
@@ -145,7 +146,7 @@ function initLastUnfinishedModelDownloads() {

export async function refreshRemoteModels(cache: {
  models: ModelSourceItem[]
-}) {
+}, filter: boolean = true, initUnfinishedModels: boolean = false) {
  const manifestUrls = commonStore.modelSourceManifestList.split(/[,,;;\n]/);
  const requests = manifestUrls.filter(url => url.endsWith('.json')).map(
    url => fetch(url, { cache: 'no-cache' }).then(r => r.json()));
@@ -162,18 +163,16 @@ export async function refreshRemoteModels(cache: {
  });
  cache.models = cache.models.filter((model, index, self) => {
    return modelSuffix.some((ext => model.name.endsWith(ext)))
-      && index === self.findIndex(
-        m => m.name === model.name || (m.SHA256 && m.SHA256 === model.SHA256 && m.size === model.size));
-  });
-  commonStore.setModelSourceList(cache.models);
-  await saveCache().catch(() => {
+      && index === findLastIndex(self,
+        m => m.name === model.name || (!!m.SHA256 && m.SHA256 === model.SHA256 && m.size === model.size));
+  });
+  await refreshLocalModels(cache, filter, initUnfinishedModels);
}

export const refreshModels = async (readCache: boolean = false, initUnfinishedModels: boolean = false) => {
  const cache = await refreshBuiltInModels(readCache);
-  await refreshLocalModels(cache, false, initUnfinishedModels);
-  await refreshRemoteModels(cache);
+  await refreshRemoteModels(cache, false, initUnfinishedModels);
};

export const getStrategy = (modelConfig: ModelConfig | undefined = undefined) => {
@@ -192,6 +191,7 @@ export const getStrategy = (modelConfig: ModelConfig | undefined = undefined) =>
      strategy += params.precision === 'int8' ? 'fp32i8' : 'fp32';
      break;
    case 'WebGPU':
+    case 'WebGPU (Python)':
      strategy += params.precision === 'nf4' ? 'fp16i4' : params.precision === 'int8' ? 'fp16i8' : 'fp16';
      break;
    case 'CUDA':
@@ -202,6 +202,8 @@ export const getStrategy = (modelConfig: ModelConfig | undefined = undefined) =>
      strategy += params.precision === 'int8' ? 'fp16i8' : params.precision === 'fp32' ? 'fp32' : 'fp16';
      if (params.storedLayers < params.maxStoredLayers)
        strategy += ` *${params.storedLayers}+`;
+      else
+        strategy += ` -> cuda fp16 *1`;
      break;
    case 'MPS':
      if (avoidOverflow)
@@ -307,7 +309,7 @@ export function getServerRoot(defaultLocalPort: number, isCore: boolean = false)
  const coreCustomApiUrl = commonStore.settings.coreApiUrl.trim().replace(/\/$/, '');
  if (isCore && coreCustomApiUrl)
    return coreCustomApiUrl;

  const defaultRoot = `http://127.0.0.1:${defaultLocalPort}`;
  if (commonStore.status.status !== ModelStatus.Offline)
    return defaultRoot;

@@ -12,7 +12,7 @@ const vendor = [
  'mobx', 'mobx-react-lite',
  'i18next', 'react-i18next',
  'usehooks-ts', 'react-toastify',
-  'classnames'
+  'classnames', 'lodash-es'
];

const embedded = [
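The switch from findIndex to findLastIndex inverts which duplicate survives dedup: keeping the last occurrence lets entries fetched from later manifests override earlier cached ones, and the added `!!m.SHA256` guard stops two entries with no SHA256 at all from being merged as "equal". A self-contained check of that behavior with illustrative data:

```ts
import { findLastIndex } from 'lodash-es';

type Item = { name: string; SHA256?: string; size: number };

const items: Item[] = [
  { name: 'a.pth', SHA256: 'x', size: 1 }, // stale cached entry
  { name: 'a.pth', SHA256: 'x', size: 1 }  // fresh remote entry
];

const deduped = items.filter((model, index, self) =>
  index === findLastIndex(self, m =>
    m.name === model.name || (!!m.SHA256 && m.SHA256 === model.SHA256 && m.size === model.size)));
// deduped contains only the second (fresh) entry.
```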
4
frontend/wailsjs/go/backend_golang/App.d.ts
generated
vendored
Executable file → Normal file
@@ -16,6 +16,8 @@ export function ConvertModel(arg1:string,arg2:string,arg3:string,arg4:string):Pr

export function ConvertSafetensors(arg1:string,arg2:string):Promise<string>;

+export function ConvertSafetensorsWithPython(arg1:string,arg2:string,arg3:string):Promise<string>;
+
export function CopyFile(arg1:string,arg2:string):Promise<void>;

export function DeleteFile(arg1:string):Promise<void>;
@@ -64,7 +66,7 @@ export function SaveJson(arg1:string,arg2:any):Promise<void>;

export function StartFile(arg1:string):Promise<void>;

-export function StartServer(arg1:string,arg2:number,arg3:string,arg4:boolean,arg5:boolean,arg6:boolean):Promise<string>;
+export function StartServer(arg1:string,arg2:number,arg3:string,arg4:boolean,arg5:boolean,arg6:boolean,arg7:boolean):Promise<string>;

export function StartWebGPUServer(arg1:number,arg2:string):Promise<string>;
8
frontend/wailsjs/go/backend_golang/App.js
generated
Executable file → Normal file
@@ -30,6 +30,10 @@ export function ConvertSafetensors(arg1, arg2) {
  return window['go']['backend_golang']['App']['ConvertSafetensors'](arg1, arg2);
}

+export function ConvertSafetensorsWithPython(arg1, arg2, arg3) {
+  return window['go']['backend_golang']['App']['ConvertSafetensorsWithPython'](arg1, arg2, arg3);
+}
+
export function CopyFile(arg1, arg2) {
  return window['go']['backend_golang']['App']['CopyFile'](arg1, arg2);
}
@@ -126,8 +130,8 @@ export function StartFile(arg1) {
  return window['go']['backend_golang']['App']['StartFile'](arg1);
}

-export function StartServer(arg1, arg2, arg3, arg4, arg5, arg6) {
-  return window['go']['backend_golang']['App']['StartServer'](arg1, arg2, arg3, arg4, arg5, arg6);
+export function StartServer(arg1, arg2, arg3, arg4, arg5, arg6, arg7) {
+  return window['go']['backend_golang']['App']['StartServer'](arg1, arg2, arg3, arg4, arg5, arg6, arg7);
}

export function StartWebGPUServer(arg1, arg2) {
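The regenerated binding gains a seventh argument; judging from the RunButton change earlier in this diff, it carries the webgpuPython flag. A hypothetical call site mirroring that usage (the import path and argument values are illustrative, not taken from the diff):

```ts
import { StartServer } from '../../wailsjs/go/backend_golang/App';

// python path, port, host, enableWebUI, CUDA-Beta kernel, rwkv.cpp, webgpuPython
StartServer('', 8000, '127.0.0.1', true, false, false, true)
  .catch((e: any) => console.error(e.message || e));
```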
0
frontend/wailsjs/go/models.ts
generated
Executable file → Normal file
2
main.go
@@ -109,7 +109,7 @@ func main() {
	err = wails.Run(&options.App{
		Title:  "RWKV-Runner",
		Width:  1024,
-		Height: 680,
+		Height: 700,
		MinWidth:  375,
		MinHeight: 640,
		EnableDefaultContextMenu: true,
278
manifest.json
@@ -1,12 +1,12 @@
|
||||
{
|
||||
"version": "1.6.1",
|
||||
"version": "1.6.7",
|
||||
"introduction": {
|
||||
"en": "RWKV is an open-source, commercially usable large language model with high flexibility and great potential for development.\n### About This Tool\nThis tool aims to lower the barrier of entry for using large language models, making it accessible to everyone. It provides fully automated dependency and model management. You simply need to click and run, following the instructions, to deploy a local large language model. The tool itself is very compact and only requires a single executable file for one-click deployment.\nAdditionally, this tool offers an interface that is fully compatible with the OpenAI API. This means you can use any ChatGPT client as a client for RWKV, enabling capability expansion beyond just chat functionality.\n### Preset Configuration Rules at the Bottom\nThis tool comes with a series of preset configurations to reduce complexity. The naming rules for each configuration represent the following in order: device - required VRAM/memory - model size - model language.\nFor example, \"GPU-8G-3B-EN\" indicates that this configuration is for a graphics card with 8GB of VRAM, a model size of 3 billion parameters, and it uses an English language model.\nLarger model sizes have higher performance and VRAM requirements. Among configurations with the same model size, those with higher VRAM usage will have faster runtime.\nFor example, if you have 12GB of VRAM but running the \"GPU-12G-7B-EN\" configuration is slow, you can downgrade to \"GPU-8G-3B-EN\" for a significant speed improvement.\n### About RWKV\nRWKV is an RNN with Transformer-level LLM performance, which can also be directly trained like a GPT transformer (parallelizable). And it's 100% attention-free. You only need the hidden state at position t to compute the state at position t+1. You can use the \"GPT\" mode to quickly compute the hidden state for the \"RNN\" mode.<br/>So it's combining the best of RNN and transformer - great performance, fast inference, saves VRAM, fast training, \"infinite\" ctx_len, and free sentence embedding (using the final hidden state).",
|
||||
"zh": "RWKV是一个开源且允许商用的大语言模型,灵活性很高且极具发展潜力。\n### 关于本工具\n本工具旨在降低大语言模型的使用门槛,做到人人可用,本工具提供了全自动化的依赖和模型管理,你只需要直接点击运行,跟随引导,即可完成本地大语言模型的部署,工具本身体积极小,只需要一个exe即可完成一键部署。\n此外,本工具提供了与OpenAI API完全兼容的接口,这意味着你可以把任意ChatGPT客户端用作RWKV的客户端,实现能力拓展,而不局限于聊天。\n### 底部的预设配置规则\n本工具内置了一系列预设配置,以降低使用难度,每个配置名的规则,依次代表着:设备-所需显存/内存-模型规模-模型语言。\n例如,GPU-8G-3B-CN,表示该配置用于显卡,需要8G显存,模型规模为30亿参数,使用的是中文模型。\n模型规模越大,性能要求越高,显存要求也越高,而同样模型规模的配置中,显存占用越高的,运行速度越快。\n例如当你有12G显存,但运行GPU-12G-7B-CN配置速度比较慢,可降级成GPU-8G-3B-CN,将会大幅提速。\n### 关于RWKV\nRWKV是具有Transformer级别LLM性能的RNN,也可以像GPT Transformer一样直接进行训练(可并行化)。而且它是100% attention-free的。你只需在位置t处获得隐藏状态即可计算位置t + 1处的状态。你可以使用“GPT”模式快速计算用于“RNN”模式的隐藏状态。\n因此,它将RNN和Transformer的优点结合起来 - 高性能、快速推理、节省显存、快速训练、“无限”上下文长度以及免费的语句嵌入(使用最终隐藏状态)。"
|
||||
},
|
||||
"about": {
|
||||
"en": "<div align=\"center\">\n\nProject Source Code:\nhttps://github.com/josStorer/RWKV-Runner\nAuthor: [@josStorer](https://github.com/josStorer)\nFAQs: https://github.com/josStorer/RWKV-Runner/wiki/FAQs\n\nRelated Repositories:\nRWKV-4-World: https://huggingface.co/BlinkDL/rwkv-4-world/tree/main\nRWKV-4-Raven: https://huggingface.co/BlinkDL/rwkv-4-raven/tree/main\nChatRWKV: https://github.com/BlinkDL/ChatRWKV\nRWKV-LM: https://github.com/BlinkDL/RWKV-LM\nRWKV-LM-LoRA: https://github.com/Blealtan/RWKV-LM-LoRA\nMIDI-LLM-tokenizer: https://github.com/briansemrau/MIDI-LLM-tokenizer\n\n</div>",
|
||||
"zh": "<div align=\"center\">\n\n本项目源码:\nhttps://github.com/josStorer/RWKV-Runner\n作者: [@josStorer](https://github.com/josStorer)\n演示与常见问题说明视频: https://www.bilibili.com/video/BV1hM4y1v76R\n疑难解答: https://www.bilibili.com/read/cv23921171\n\n相关仓库:\nRWKV-4-World: https://huggingface.co/BlinkDL/rwkv-4-world/tree/main\nRWKV-4-Raven: https://huggingface.co/BlinkDL/rwkv-4-raven/tree/main\nChatRWKV: https://github.com/BlinkDL/ChatRWKV\nRWKV-LM: https://github.com/BlinkDL/RWKV-LM\nRWKV-LM-LoRA: https://github.com/Blealtan/RWKV-LM-LoRA\nMIDI-LLM-tokenizer: https://github.com/briansemrau/MIDI-LLM-tokenizer\n\n</div>"
|
||||
"en": "<div align=\"center\">\n\nProject Source Code and Introduction:\nhttps://github.com/josStorer/RWKV-Runner\nAuthor: [@josStorer](https://github.com/josStorer)\n\nRelated Repositories:\nRWKV-5-World: https://huggingface.co/BlinkDL/rwkv-5-world/tree/main\nRWKV-4-World: https://huggingface.co/BlinkDL/rwkv-4-world/tree/main\nRWKV-4-Raven: https://huggingface.co/BlinkDL/rwkv-4-raven/tree/main\nChatRWKV: https://github.com/BlinkDL/ChatRWKV\nRWKV-LM: https://github.com/BlinkDL/RWKV-LM\nRWKV-LM-LoRA: https://github.com/Blealtan/RWKV-LM-LoRA\nMIDI-LLM-tokenizer: https://github.com/briansemrau/MIDI-LLM-tokenizer\nai00_rwkv_server: https://github.com/cgisky1980/ai00_rwkv_server\nrwkv.cpp: https://github.com/saharNooby/rwkv.cpp\nweb-rwkv-py: https://github.com/cryscan/web-rwkv-py\n\n</div>",
|
||||
"zh": "<div align=\"center\">\n\n本项目源码及介绍页:\nhttps://github.com/josStorer/RWKV-Runner\n作者: [@josStorer](https://github.com/josStorer)\n演示与常见问题说明视频: https://www.bilibili.com/video/BV1hM4y1v76R\n\n相关仓库:\nRWKV-5-World: https://huggingface.co/BlinkDL/rwkv-5-world/tree/main\nRWKV-4-World: https://huggingface.co/BlinkDL/rwkv-4-world/tree/main\nRWKV-4-Raven: https://huggingface.co/BlinkDL/rwkv-4-raven/tree/main\nChatRWKV: https://github.com/BlinkDL/ChatRWKV\nRWKV-LM: https://github.com/BlinkDL/RWKV-LM\nRWKV-LM-LoRA: https://github.com/Blealtan/RWKV-LM-LoRA\nMIDI-LLM-tokenizer: https://github.com/briansemrau/MIDI-LLM-tokenizer\nai00_rwkv_server: https://github.com/cgisky1980/ai00_rwkv_server\nrwkv.cpp: https://github.com/saharNooby/rwkv.cpp\nweb-rwkv-py: https://github.com/cryscan/web-rwkv-py\n\n</div>"
|
||||
},
|
||||
"programFiles": [
|
||||
{
|
||||
@@ -25,8 +25,8 @@
|
||||
"size": 385598386,
|
||||
"SHA256": "c844a3ee05bcb9065848cb05b10c48a3f381f5ac1953aad89e156ecdf31d7703",
|
||||
"lastUpdated": "2023-08-03T15:18:46",
|
||||
"url": "https://huggingface.co/BlinkDL/rwkv-5-world/blob/main/RWKV-5-World-0.1B-v1-20230803-ctx4096.pth?download=true",
|
||||
"downloadUrl": "https://huggingface.co/BlinkDL/rwkv-5-world/resolve/main/RWKV-5-World-0.1B-v1-20230803-ctx4096.pth?download=true",
|
||||
"url": "https://huggingface.co/BlinkDL/rwkv-5-world/blob/main/RWKV-5-World-0.1B-v1-20230803-ctx4096.pth",
|
||||
"downloadUrl": "https://huggingface.co/BlinkDL/rwkv-5-world/resolve/main/RWKV-5-World-0.1B-v1-20230803-ctx4096.pth",
|
||||
"tags": [
|
||||
"Main",
|
||||
"RWKV-5",
|
||||
@@ -43,8 +43,8 @@
|
||||
"size": 923523954,
|
||||
"SHA256": "5a288c54c7f30b0e2d4af23991133fad2af2d5e59ec7ad850ffe78054a5e4f92",
|
||||
"lastUpdated": "2023-11-14T01:23:49",
|
||||
"url": "https://huggingface.co/BlinkDL/rwkv-5-world/blob/main/RWKV-5-World-0.4B-v2-20231113-ctx4096.pth?download=true",
|
||||
"downloadUrl": "https://huggingface.co/BlinkDL/rwkv-5-world/resolve/main/RWKV-5-World-0.4B-v2-20231113-ctx4096.pth?download=true",
|
||||
"url": "https://huggingface.co/BlinkDL/rwkv-5-world/blob/main/RWKV-5-World-0.4B-v2-20231113-ctx4096.pth",
|
||||
"downloadUrl": "https://huggingface.co/BlinkDL/rwkv-5-world/resolve/main/RWKV-5-World-0.4B-v2-20231113-ctx4096.pth",
|
||||
"tags": [
|
||||
"Main",
|
||||
"RWKV-5",
|
||||
@@ -69,6 +69,45 @@
|
||||
"Global"
|
||||
]
|
||||
},
|
||||
{
|
||||
"name": "RWKV-5-1B5-one-state-slim.pth",
|
||||
"desc": {
|
||||
"en": "RWKV-5 Global Languages 1.5B v2 Ctx16k Role Play",
|
||||
"zh": "RWKV-5 全球语言 1.5B v2 16k上下文 角色扮演",
|
||||
"ja": "RWKV-5 グローバル言語 1.5B v2 16kコンテキスト ロールプレイ"
|
||||
},
|
||||
"size": 3155589871,
|
||||
"SHA256": "43e7b922d7ad49eafa17f8909c2813c91394925bc7f24caf0e19a91aa3281273",
|
||||
"lastUpdated": "2023-11-02T04:03:27",
|
||||
"url": "https://huggingface.co/xiaol/RWKV-v5-world-v2-1.5B-one-state-slim-16k/blob/main/RWKV-5-1B5-one-state-slim.pth",
|
||||
"downloadUrl": "https://huggingface.co/xiaol/RWKV-v5-world-v2-1.5B-one-state-slim-16k/resolve/main/RWKV-5-1B5-one-state-slim.pth",
|
||||
"tags": [
|
||||
"Finetuned",
|
||||
"RWKV-5",
|
||||
"Global",
|
||||
"Role Play"
|
||||
],
|
||||
"customTokenizer": "backend-python/rwkv_pip/rwkv_vocab_v20230424_special_token.txt"
|
||||
},
|
||||
{
|
||||
"name": "RWKV-5-1B5-one-state-slim-novel-tuned.pth",
|
||||
"desc": {
|
||||
"en": "RWKV-5 Global Languages 1.5B v2 Ctx16k Novel",
|
||||
"zh": "RWKV-5 全球语言 1.5B v2 16k上下文 小说",
|
||||
"ja": "RWKV-5 グローバル言語 1.5B v2 16kコンテキスト 小説"
|
||||
},
|
||||
"size": 3155589871,
|
||||
"SHA256": "4f0aaecdce676e5236018ebd63e3d37c2f300fbac04001ee3a9c00d2f4244d0f",
|
||||
"lastUpdated": "2023-11-03T02:45:52",
|
||||
"url": "https://huggingface.co/xiaol/RWKV-v5-world-v2-1.5B-one-state-slim-16k-novel-tuned/blob/main/RWKV-5-1B5-one-state-slim-novel-tuned.pth",
|
||||
"downloadUrl": "https://huggingface.co/xiaol/RWKV-v5-world-v2-1.5B-one-state-slim-16k-novel-tuned/resolve/main/RWKV-5-1B5-one-state-slim-novel-tuned.pth",
|
||||
"tags": [
|
||||
"Finetuned",
|
||||
"RWKV-5",
|
||||
"Global"
|
||||
],
|
||||
"customTokenizer": "backend-python/rwkv_pip/rwkv_vocab_v20230424_special_token.txt"
|
||||
},
|
||||
{
|
||||
"name": "RWKV-5-World-3B-v2-20231113-ctx4096.pth",
|
||||
"desc": {
|
||||
@@ -79,12 +118,13 @@
"size": 6126106674,
"SHA256": "a4bd430343c6fd138b85bbc68bb20262d3a2f053ea57dc4b41078269af68ff9c",
"lastUpdated": "2023-11-14T01:23:49",
-"url": "https://huggingface.co/BlinkDL/rwkv-5-world/blob/main/RWKV-5-World-3B-v2-20231113-ctx4096.pth?download=true",
-"downloadUrl": "https://huggingface.co/BlinkDL/rwkv-5-world/resolve/main/RWKV-5-World-3B-v2-20231113-ctx4096.pth?download=true",
+"url": "https://huggingface.co/BlinkDL/rwkv-5-world/blob/main/RWKV-5-World-3B-v2-20231113-ctx4096.pth",
+"downloadUrl": "https://huggingface.co/BlinkDL/rwkv-5-world/resolve/main/RWKV-5-World-3B-v2-20231113-ctx4096.pth",
"tags": [
"Main",
"RWKV-5",
-"Global"
+"Global",
+"Recommended"
]
},
{
@@ -97,14 +137,94 @@
"size": 6126106467,
"SHA256": "efa5178d1c824b94ef17c6c9a456674e5581a8be832becbda9aba4dc533f88c2",
"lastUpdated": "2023-11-19T04:21:04",
-"url": "https://huggingface.co/BlinkDL/rwkv-5-world/blob/main/RWKV-5-World-3B-v2-20231118-ctx16k.pth?download=true",
-"downloadUrl": "https://huggingface.co/BlinkDL/rwkv-5-world/resolve/main/RWKV-5-World-3B-v2-20231118-ctx16k.pth?download=true",
+"url": "https://huggingface.co/BlinkDL/rwkv-5-world/blob/main/RWKV-5-World-3B-v2-20231118-ctx16k.pth",
+"downloadUrl": "https://huggingface.co/BlinkDL/rwkv-5-world/resolve/main/RWKV-5-World-3B-v2-20231118-ctx16k.pth",
"tags": [
"Main",
"RWKV-5",
-"Global"
+"Global",
+"Recommended"
]
},
+{
+"name": "rwkv-v5-7B-0.4-long-ctx-16k.pth",
+"desc": {
+"en": "RWKV-5 Global Languages 7B v2 40% Ctx300k Document Reader",
+"zh": "RWKV-5 全球语言 7B v2 40% 300k上下文 文档阅读",
+"ja": "RWKV-5 グローバル言語 7B v2 40% 300kコンテキスト ドキュメントリーダー"
+},
+"size": 15036198115,
+"SHA256": "5888471a45caab903c1bd9c35af1c639ac8d03be6ee6eb39fa9fd3194fa6d437",
+"lastUpdated": "2023-11-10T17:12:04",
+"url": "https://huggingface.co/xiaol/RWKV-5-world-v2-7B-0.4-300k/blob/main/rwkv-v5-7B-0.4-long-ctx-16k.pth",
+"downloadUrl": "https://huggingface.co/xiaol/RWKV-5-world-v2-7B-0.4-300k/resolve/main/rwkv-v5-7B-0.4-long-ctx-16k.pth",
+"tags": [
+"Finetuned",
+"RWKV-5",
+"Global"
+],
+"customTokenizer": "backend-python/rwkv_pip/rwkv_vocab_v20230424_special_token.txt"
+},
+{
+"name": "rwkv-v5.2-7B-horror-16k.pth",
+"desc": {
+"en": "RWKV-5 Global Languages 7B v2 40% Ctx16k Horror",
+"zh": "RWKV-5 全球语言 7B v2 40% 16k上下文 恐怖",
+"ja": "RWKV-5 グローバル言語 7B v2 40% 16kコンテキスト ホラー"
+},
+"size": 15036198115,
+"SHA256": "3b36ce99bef06627dcb5d860972e2c1515327afe7db415b8c82dd5c3b926b52f",
+"lastUpdated": "2023-11-13T15:21:25",
+"url": "https://huggingface.co/xiaol/RWKV-v5.2-7B-horror-16k/blob/main/rwkv-v5.2-7B-horror-16k.pth",
+"downloadUrl": "https://huggingface.co/xiaol/RWKV-v5.2-7B-horror-16k/resolve/main/rwkv-v5.2-7B-horror-16k.pth",
+"tags": [
+"Finetuned",
+"RWKV-5",
+"Global"
+],
+"customTokenizer": "backend-python/rwkv_pip/rwkv_vocab_v20230424_special_token.txt"
+},
+{
+"name": "rwkv_v5.2_7B_role_play_16k.pth",
+"desc": {
+"en": "RWKV-5 Global Languages 7B v2 Ctx16k Claude Like",
+"zh": "RWKV-5 全球语言 7B v2 16k上下文 Claude功能",
+"ja": "RWKV-5 グローバル言語 7B v2 16kコンテキスト Claude機能"
+},
+"size": 15036198115,
+"SHA256": "6fe8a7bf06b9f5e5b740cd87e24bff91325518ad19bf92bf5c75799b3c24b150",
+"lastUpdated": "2023-11-14T04:18:16",
+"url": "https://huggingface.co/xiaol/RWKV-v5.2-7B-Role-play-16k/blob/main/rwkv_v5.2_7B_role_play_16k.pth",
+"downloadUrl": "https://huggingface.co/xiaol/RWKV-v5.2-7B-Role-play-16k/resolve/main/rwkv_v5.2_7B_role_play_16k.pth",
+"tags": [
+"Finetuned",
+"RWKV-5",
+"Global",
+"Role Play",
+"Recommended"
+],
+"customTokenizer": "backend-python/rwkv_pip/rwkv_vocab_v20230424_special_token.txt"
+},
+{
+"name": "RWKV-5-12B-one-state-chat-16k.pth",
+"desc": {
+"en": "RWKV-5 Global Languages 12B Ctx16k",
+"zh": "RWKV-5 全球语言 12B 16k上下文",
+"ja": "RWKV-5 グローバル言語 12B 16kコンテキスト"
+},
+"size": 23157296483,
+"SHA256": "330be74738d3936f4c9bd6caf838db11c96f52ff360d0f4fa5401d9bafc898ab",
+"lastUpdated": "2023-12-16T16:34:30",
+"url": "https://huggingface.co/xiaol/RWKV-v5-12B-one-state-chat-16k/blob/main/RWKV-5-12B-one-state-chat-16k.pth",
+"downloadUrl": "https://huggingface.co/xiaol/RWKV-v5-12B-one-state-chat-16k/resolve/main/RWKV-5-12B-one-state-chat-16k.pth",
+"tags": [
+"Finetuned",
+"RWKV-5",
+"Global",
+"Recommended"
+],
+"customTokenizer": "backend-python/rwkv_pip/rwkv_vocab_v20230424_special_token.txt"
+},
{
"name": "RWKV-4-World-CHNtuned-0.1B-v1-20230617-ctx4096.pth",
"desc": {
@@ -257,6 +377,25 @@
"Global"
]
},
+{
+"name": "RWKV-for-mobile-4-world-1.5B-20230906-ctx16k.pth",
+"desc": {
+"en": "Global Languages 1.5B v1 Ctx16k Claude Like",
+"zh": "全球语言 1.5B v1 16k上下文 Claude功能",
+"ja": "グローバル言語 1.5B v1 16kコンテキスト Claude機能"
+},
+"size": 3155280301,
+"SHA256": "20547a6deca32add57c45d2f6cff52c6b59cd3b92676ee369b964affba35619d",
+"lastUpdated": "2023-09-07T01:35:46",
+"url": "https://huggingface.co/xiaol/RWKV-claude-for-mobile-v4-world-1.5B-16k/blob/main/RWKV-for-mobile-4-world-1.5B-20230906-ctx16k.pth",
+"downloadUrl": "https://huggingface.co/xiaol/RWKV-claude-for-mobile-v4-world-1.5B-16k/resolve/main/RWKV-for-mobile-4-world-1.5B-20230906-ctx16k.pth",
+"tags": [
+"Finetuned",
+"RWKV-4",
+"Global"
+],
+"customTokenizer": "backend-python/rwkv_pip/rwkv_vocab_v20230424_special_token.txt"
+},
{
"name": "RWKV-4-World-3B-v1-OnlyForTest_35%_trained-20230529-ctx4096.pth",
"desc": {
@@ -513,6 +652,83 @@
"Global"
]
},
+{
+"name": "RWKV-7B-world-one.pth",
+"desc": {
+"en": "Global Languages 7B v1 Ctx65k Novel",
+"zh": "全球语言 7B v1 65k上下文 小说",
+"ja": "グローバル言語 7B v1 65kコンテキスト 小説"
+},
+"size": 15035391533,
+"SHA256": "7ce95a4b460c3385c75c29b6ebe3cd7db438b1107e85d7d3e42dff85cfaa0b78",
+"lastUpdated": "2023-10-09T05:23:38",
+"url": "https://huggingface.co/xiaol/RWKV-v4-world-7B-one-state-65k/blob/main/RWKV-7B-world-one.pth",
+"downloadUrl": "https://huggingface.co/xiaol/RWKV-v4-world-7B-one-state-65k/resolve/main/RWKV-7B-world-one.pth",
+"tags": [
+"Finetuned",
+"RWKV-4",
+"Global"
+],
+"customTokenizer": "backend-python/rwkv_pip/rwkv_vocab_v20230424_special_token.txt"
+},
+{
+"name": "rwkv-world-one-novel-cot-ultrachat-novel-instructions.pth",
+"desc": {
+"en": "Global Languages 7B v1 Ctx65k Novel Instruction",
+"zh": "全球语言 7B v1 65k上下文 小说指令",
+"ja": "グローバル言語 7B v1 65kコンテキスト 小説指示"
+},
+"size": 15035391533,
+"SHA256": "fc2d4643828bb9dfe0733c3b2eb54ba2d996ed3eb6afa051b558da2eb2c1e309",
+"lastUpdated": "2023-10-22T09:50:39",
+"url": "https://huggingface.co/xiaol/RWKV-4-world-one-state-ultrachat-COT-65k/blob/main/rwkv-world-one-novel-cot-ultrachat-novel-instructions.pth",
+"downloadUrl": "https://huggingface.co/xiaol/RWKV-4-world-one-state-ultrachat-COT-65k/resolve/main/rwkv-world-one-novel-cot-ultrachat-novel-instructions.pth",
+"tags": [
+"Finetuned",
+"RWKV-4",
+"Global"
+],
+"customTokenizer": "backend-python/rwkv_pip/rwkv_vocab_v20230424_special_token.txt"
+},
+{
+"name": "RWKV-world-novel-one-state-ultrachat-cot-tuned-Role-play-65k.pth",
+"desc": {
+"en": "Global Languages 7B v1 Ctx65k Role Play",
+"zh": "全球语言 7B v1 65k上下文 角色扮演",
+"ja": "グローバル言語 7B v1 65kコンテキスト ロールプレイ"
+},
+"size": 15035391533,
+"SHA256": "2f55b4710dcd360e83b4df9a6358661284d9a6c6108f62c5a30b86df181ed67a",
+"lastUpdated": "2023-10-22T05:54:27",
+"url": "https://huggingface.co/xiaol/RWKV-4-world-one-state-ultrachat-COT-65k/blob/main/RWKV-world-novel-one-state-ultrachat-cot-tuned-Role-play-65k.pth",
+"downloadUrl": "https://huggingface.co/xiaol/RWKV-4-world-one-state-ultrachat-COT-65k/resolve/main/RWKV-world-novel-one-state-ultrachat-cot-tuned-Role-play-65k.pth",
+"tags": [
+"Finetuned",
+"RWKV-4",
+"Global",
+"Role Play"
+],
+"customTokenizer": "backend-python/rwkv_pip/rwkv_vocab_v20230424_special_token.txt"
+},
+{
+"name": "RWKV-4-7B-world-one-novel-tuned-65k.pth",
+"desc": {
+"en": "Global Languages 7B v1 Ctx65k Chinese Novel Instruction",
+"zh": "全球语言 7B v1 65k上下文 中文小说指令",
+"ja": "グローバル言語 7B v1 65kコンテキスト 中国語小説指示"
+},
+"size": 15035391533,
+"SHA256": "e8ff256d74ca404621dcbf87c43c37e25ea745fed30c404fbf45cc5acc7ba2b5",
+"lastUpdated": "2023-10-15T00:57:53",
+"url": "https://huggingface.co/xiaol/RWKV-4-world-one-state-novel-tuned-65k/blob/main/RWKV-4-7B-world-one-novel-tuned-65k.pth",
+"downloadUrl": "https://huggingface.co/xiaol/RWKV-4-world-one-state-novel-tuned-65k/resolve/main/RWKV-4-7B-world-one-novel-tuned-65k.pth",
+"tags": [
+"Finetuned",
+"RWKV-4",
+"CN"
+],
+"customTokenizer": "backend-python/rwkv_pip/rwkv_vocab_v20230424_special_token.txt"
+},
{
"name": "RWKV-4-World-CHNtuned-7B-v1-20230709-ctx4096.pth",
"desc": {
@@ -998,6 +1214,24 @@
"Music"
]
},
+{
+"name": "RWKV-4-ABC-82M-v1-20230805-ctx1024.pth",
+"desc": {
+"en": "Music ABC 82M v1",
+"zh": "作曲 ABC 82M v1",
+"ja": "作曲 ABC 82M v1"
+},
+"size": 164183345,
+"SHA256": "4c83859f387bc3953d19890338a3e50ea7f2278e1bbb9d6eae9b773c81958a01",
+"lastUpdated": "2023-08-06T05:46:55",
+"url": "https://huggingface.co/BlinkDL/rwkv-4-music/blob/main/RWKV-4-ABC-82M-v1-20230805-ctx1024.pth",
+"downloadUrl": "https://huggingface.co/BlinkDL/rwkv-4-music/resolve/main/RWKV-4-ABC-82M-v1-20230805-ctx1024.pth",
+"tags": [
+"Main",
+"RWKV-4",
+"Music"
+]
+},
{
"name": "RWKV-5-MIDI-120M-v1-20230728-ctx4096.pth",
"desc": {
@@ -1033,6 +1267,24 @@
"RWKV-5",
"Music"
]
+},
+{
+"name": "RWKV-5-ABC-82M-v1-20230901-ctx1024.pth",
+"desc": {
+"en": "RWKV-5 Music ABC 82M v1",
+"zh": "RWKV-5 作曲 ABC 82M v1",
+"ja": "RWKV-5 作曲 ABC 82M v1"
+},
+"size": 164222002,
+"SHA256": "5bf9ae32e4ef05c3851d6010709c6c00dda926d110766b9a712bc48c0a53e098",
+"lastUpdated": "2023-09-02T06:55:12",
+"url": "https://huggingface.co/BlinkDL/rwkv-5-music/blob/main/RWKV-5-ABC-82M-v1-20230901-ctx1024.pth",
+"downloadUrl": "https://huggingface.co/BlinkDL/rwkv-5-music/resolve/main/RWKV-5-ABC-82M-v1-20230901-ctx1024.pth",
+"tags": [
+"Main",
+"RWKV-5",
+"Music"
+]
}
]
}
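
Since the manifest above is raw data, a short sketch may help show how an entry is meant to be consumed. The following is a minimal, hypothetical Python example, not the app's actual downloader: it assumes a local copy of the manifest saved as `models.json`, and it assumes the entry array lives under a `models` root key (the key name is outside this excerpt). It mirrors the `url`/`downloadUrl` convention visible in every entry (the Hugging Face page URL uses `/blob/`, the direct-download URL uses `/resolve/`) and verifies a downloaded file against the entry's `size` and `SHA256`. All function and file names are illustrative only.

```python
import hashlib
import json
import os

def blob_to_resolve(url: str) -> str:
    # Each manifest entry pairs a Hugging Face page URL ("/blob/") with a
    # direct-download URL ("/resolve/"); this reproduces that mapping.
    return url.replace("/blob/", "/resolve/", 1)

def verify_model(entry: dict, path: str) -> bool:
    # Check the recorded byte size first (cheap), then the recorded SHA256.
    if os.path.getsize(path) != entry["size"]:
        return False
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest() == entry["SHA256"]

if __name__ == "__main__":
    # "models.json" is an assumed local copy of the manifest shown above;
    # the "models" root key is likewise an assumption.
    with open("models.json", encoding="utf-8") as f:
        manifest = json.load(f)
    entry = next(m for m in manifest["models"]
                 if m["name"] == "RWKV-5-World-0.4B-v2-20231113-ctx4096.pth")
    print(entry.get("downloadUrl") or blob_to_resolve(entry["url"]))
    # After downloading to entry["name"], integrity can be checked with:
    # verify_model(entry, entry["name"])
```

Note that entries carrying a `customTokenizer` field point at a vocab file bundled with the backend (`backend-python/rwkv_pip/rwkv_vocab_v20230424_special_token.txt`), so a consumer would presumably pass that path along when loading the model rather than using the default tokenizer.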