Compare commits

...

84 Commits

Author SHA1 Message Date
josc146
c7dcff52a1 release v1.5.1 2023-11-08 23:41:17 +08:00
josc146
c6ef32958e when client webUI enabled, set server into deployment mode 2023-11-08 23:31:13 +08:00
josc146
7235e1067b add deployment mode. If /switch-model with deploy: true, will disable /switch-model, /exit and other dangerous APIs (state cache APIs, part of midi APIs) 2023-11-08 23:29:42 +08:00
josc146
0594290b92 disable WebUI Option of WebGPU Mode (webgpu not supported yet) 2023-11-08 23:05:59 +08:00
josc146
d249a4c29a print error.txt 2023-11-08 22:57:38 +08:00
josc146
02ba37fab4 improve api url getter 2023-11-08 22:25:41 +08:00
josc146
b5a6f8a425 set deepspeed to 0.11.2 to avoid finetune error 2023-11-08 22:20:11 +08:00
josc146
1ad86d737c chore 2023-11-08 22:18:49 +08:00
josc146
cfa3669f6f fix /docs default api params (Pydantic v2) 2023-11-07 22:53:11 +08:00
josc146
26d4c9f0ed chore 2023-11-07 22:28:13 +08:00
josc146
3ddcf9f62e add webui entry 2023-11-07 22:24:06 +08:00
josc146
e734fce64f create webui assets 2023-11-07 22:23:26 +08:00
josc146
150beb578c chore 2023-11-07 22:23:00 +08:00
josc146
db6fbe8366 add python webui server 2023-11-07 22:22:29 +08:00
josc146
46f52923c3 improve webui 2023-11-07 22:21:41 +08:00
josc146
893be5cf43 webui build 2023-11-07 19:27:21 +08:00
github-actions[bot]
384e4ce4d0 release v1.5.0 2023-11-05 13:10:50 +00:00
josc146
b8712e0b89 release v1.5.0 2023-11-05 21:10:21 +08:00
josc146
37dda4333d chat attachment is now related to single message 2023-11-05 21:05:06 +08:00
josc146
64826b9af7 fix log encoding error 2023-11-05 21:00:31 +08:00
josc146
47b0c35441 update ngrok_connect 2023-11-04 20:22:28 +08:00
josc146
1dcda47013 improve startup process 2023-11-04 20:21:55 +08:00
josc146
1f81a1e5a8 upgrade to rwkv 0.8.20 2023-11-03 23:27:14 +08:00
josc146
35e92d2aef chore 2023-11-03 23:22:52 +08:00
josc146
0d99e5549e port occupied detection 2023-11-03 21:18:42 +08:00
josc146
fed1594ddc fix stop button status of Chat page 2023-10-30 21:09:23 +08:00
josc146
14b90bb36b improve dml mode performance (20% faster, https://github.com/BlinkDL/ChatRWKV/pull/181) 2023-10-30 20:24:57 +08:00
josc146
f86b7f1f08 python38 compatibility 2023-10-29 14:11:11 +08:00
josc146
54355d5a7a improve the compatibility between frontend presets and chatgpt api 2023-10-28 23:06:19 +08:00
josc146
ff7306349a improve memory usage of state cache 2023-10-28 23:04:49 +08:00
github-actions[bot]
77df56cddc release v1.4.9 2023-10-27 06:04:00 +00:00
josc146
97ae139de5 release v1.4.9 2023-10-27 14:03:28 +08:00
josc146
afd15ef2c5 base64 preset support 2023-10-27 13:35:29 +08:00
josc146
6c73eae9f6 edited chat message now is marked as Normal 2023-10-27 13:11:12 +08:00
josc146
7078f47f72 allow avatarImg to be local absolute path 2023-10-27 12:53:20 +08:00
josc146
d43954cc88 improve message interruption and retry for Chat page 2023-10-27 12:13:05 +08:00
josc146
c87de93498 allow conversation with some document (.pdf, .txt) 2023-10-27 11:36:29 +08:00
josc146
810843a5ab update manifest.json 2023-10-27 00:48:37 +08:00
josc146
f7cbd2c803 update manifest.json 2023-10-26 18:04:06 +08:00
josc146
faf1852012 update stop strategy 2023-10-26 17:47:40 +08:00
josc146
43cfab5d4b change default World series prefix to User/Assistant 2023-10-26 16:58:53 +08:00
josc146
627a20936d RWKVType now no longer relies on the file name 2023-10-26 16:55:33 +08:00
josc146
1d7f19ffaf update sample.jsonl 2023-10-26 14:08:16 +08:00
josc146
d80565d780 mark rwkv raven series as old model 2023-10-26 13:32:59 +08:00
josc146
d7ba88953d chore 2023-10-25 22:53:14 +08:00
josc146
30e1c3171e update kernel (CUDA Compute Capability 5.3) 2023-10-25 22:53:14 +08:00
josc146
1f058b16ac update kernel (CUDA Compute Capability 6.1, Previously 7.5) 2023-10-25 22:53:13 +08:00
josc146
4a192f4057 upgrade to webgpu 0.2.2 (https://github.com/josStorer/ai00_rwkv_server) 2023-10-25 21:02:44 +08:00
josc146
0331bf47f7 upgrade rwkv 0.8.16 (DirectML support; rwkv 5.2 no longer needs to ensure custom cuda kernel enabled) 2023-10-25 17:56:18 +08:00
josc146
2acdaa96b2 chore 2023-10-25 17:51:59 +08:00
josc146
1d200d53ab fix beta linux kernel 2023-10-25 17:51:13 +08:00
josc146
df9e1f408e add /file-to-text api 2023-10-25 17:14:33 +08:00
josc146
4a18696686 add pip --no-warn-script-location 2023-10-25 17:08:50 +08:00
josc146
46b3b285f5 upgrade packages 2023-10-25 17:07:40 +08:00
josc146
1d6aeab9dc fix the make command on Linux and macOS, no longer need manual operations on the wsl.go file. (#158, #173, #207) 2023-10-25 16:12:34 +08:00
josc146
ab110ba30b chore 2023-10-24 23:41:18 +08:00
josc146
2f0fa4ee56 update readme 2023-10-24 21:11:55 +08:00
josc146
0005816c1d fix linux kernel (partial revert 68228a45) 2023-10-05 00:08:18 +08:00
josc146
f70672e5a0 update .gitignore 2023-10-05 00:08:02 +08:00
github-actions[bot]
ee057071a5 release v1.4.8 2023-10-03 07:05:41 +00:00
josc146
4f26404002 release v1.4.8 2023-10-03 15:05:13 +08:00
josc146
df7652856a completion page: add format content button 2023-10-03 14:54:36 +08:00
josc146
de755463e3 improve overflow 2023-10-03 14:27:44 +08:00
josc146
2fe98d9a2c add rwkv5 cuda kernel error prompt 2023-10-03 14:25:31 +08:00
josc146
2e42039607 chore 2023-10-03 14:04:46 +08:00
josc146
71abd357a4 update startup 2023-10-03 13:50:58 +08:00
josc146
68228a4552 rwkv5 pre-compiled kernel (for windows) 2023-10-03 13:39:07 +08:00
josc146
79851433f8 upgrade rwkv pip (0.8.13) 2023-10-03 13:33:55 +08:00
github-actions[bot]
bd4de12e05 release v1.4.7 2023-09-18 15:04:47 +00:00
josc146
c0aa6aaba9 release v1.4.7 2023-09-18 23:03:54 +08:00
josc146
d7abe5f0d1 add pre-compiled beta cuda kernel (rwkv-beta==0.8.5, 40%+ faster for fp16) (thanks to #180, pre-compiled kernel of RTX 40 Series will be included later) 2023-09-18 23:02:49 +08:00
josc146
5e5e1e9651 custom tokenizer .txt support 2023-09-18 17:20:55 +08:00
github-actions[bot]
f8388a0527 release v1.4.6 2023-09-16 05:06:08 +00:00
josc146
f8b764ef8f release v1.4.6 2023-09-16 13:05:34 +08:00
josc146
fcfaa5944e frontend feature adaptation for api params (user_name, assistant_name, presystem) 2023-09-16 13:02:06 +08:00
josc146
f89e89c1c9 chore 2023-09-16 12:23:16 +08:00
josc146
a25965530c custom tokenizer (#77) 2023-09-16 00:34:11 +08:00
josc146
971124d0d7 upgrade to wails@v2.6.0 (EnableDefaultContextMenu: true) 2023-09-16 00:29:45 +08:00
josc146
d7dcc90008 chore 2023-09-15 16:31:14 +08:00
josc146
df969fcfc6 upgrade cuda-beta 2023-09-15 16:30:11 +08:00
josc146
c4042bbfd8 improve ui desc 2023-09-15 16:26:32 +08:00
josc146
4112200b4c revert(2d5456): refresh local models when download complete (for macOS) 2023-09-15 16:25:04 +08:00
Ikko Eltociear Ashimine
3f9a54e36f Update README_JA.md
add translation.
2023-09-13 16:11:43 +08:00
github-actions[bot]
3ed4456135 release v1.4.5 2023-08-27 15:57:18 +00:00
102 changed files with 5109 additions and 1842 deletions

View File

@@ -63,10 +63,10 @@ jobs:
 Expand-Archive ./python-3.10.11-embed-amd64.zip -DestinationPath ./py310
 $content=Get-Content "./py310/python310._pth"; $content | ForEach-Object {if ($_.ReadCount -eq 3) {"Lib\\site-packages"} else {$_}} | Set-Content ./py310/python310._pth
 ./py310/python ./backend-python/get-pip.py
-./py310/python -m pip install Cython==0.29.36
+./py310/python -m pip install Cython==3.0.4
 Copy-Item -Path "${{ steps.cp310.outputs.python-path }}/../include" -Destination "py310/include" -Recurse
 Copy-Item -Path "${{ steps.cp310.outputs.python-path }}/../libs" -Destination "py310/libs" -Recurse
-./py310/python -m pip install cyac==1.7
+./py310/python -m pip install cyac==1.9
 git clone https://github.com/josStorer/ai00_rwkv_server --depth=1
 cd ai00_rwkv_server
 cargo build --release
@@ -107,11 +107,10 @@ jobs:
 mv ./target/x86_64-unknown-linux-gnu/release/ai00_server ../backend-rust/webgpu_server
 cd ..
 go install github.com/wailsapp/wails/v2/cmd/wails@latest
-rm -rf ./backend-python/wkv_cuda_utils
+rm ./backend-python/rwkv_pip/wkv_cuda.pyd
+rm ./backend-python/rwkv_pip/rwkv5.pyd
+rm ./backend-python/rwkv_pip/beta/wkv_cuda.pyd
 rm ./backend-python/get-pip.py
-sed -i '1,2d' ./backend-golang/wsl_not_windows.go
-rm ./backend-golang/wsl.go
-mv ./backend-golang/wsl_not_windows.go ./backend-golang/wsl.go
 make
 mv build/bin/RWKV-Runner build/bin/RWKV-Runner_linux_x64
@@ -139,11 +138,10 @@ jobs:
 mv ./target/release/ai00_server ../backend-rust/webgpu_server
 cd ..
 go install github.com/wailsapp/wails/v2/cmd/wails@latest
-rm -rf ./backend-python/wkv_cuda_utils
+rm ./backend-python/rwkv_pip/wkv_cuda.pyd
+rm ./backend-python/rwkv_pip/rwkv5.pyd
+rm ./backend-python/rwkv_pip/beta/wkv_cuda.pyd
 rm ./backend-python/get-pip.py
-sed -i '' '1,2d' ./backend-golang/wsl_not_windows.go
-rm ./backend-golang/wsl.go
-mv ./backend-golang/wsl_not_windows.go ./backend-golang/wsl.go
 make
 cp build/darwin/Readme_Install.txt build/bin/Readme_Install.txt
 cp build/bin/RWKV-Runner.app/Contents/MacOS/RWKV-Runner build/bin/RWKV-Runner_darwin_universal

.gitignore
View File

@@ -18,6 +18,7 @@ __pycache__
 /cmd-helper.bat
 /install-py-dep.bat
 /backend-python/wkv_cuda
+/backend-python/rwkv5
 *.exe
 *.old
 .DS_Store
@@ -26,3 +27,4 @@ __pycache__
 train_log.txt
 finetune/json2binidx_tool/data
 /wsl.state
+/components

View File

@@ -1,14 +1,22 @@
 ## Changes
 
-- frontend: update manifest (a lot of new models)
-- frontend: correct Preset UI description
-- frontend: add HardwareMonitor (Windows Only)
-- lora finetune: fix max_epochs (#170)
-- python-backend: allow message content to be empty
-- python-backend: extra ChatCompletionBody params (`raw`, `presystem`)
-- python-backend: add default_stop when stop is null
-- webgpu: fix webgpu_server file permissions of linux and macos
-- chore
+### Features
+
+- add webUI for easier service sharing (enable it in Configs page or --webui command line parameter, compile it with `make build-web`)
+- add deployment mode. If `/switch-model` with `deploy: true`, will disable /switch-model, /exit and other dangerous APIs (state cache APIs, part of midi APIs)
+
+### Chores
+
+- print error.txt to console when script fails
+- api url getter
+
+### Fixes
+
+- set deepspeed to 0.11.2 to avoid finetune error
+- fix `/docs` default api params (Pydantic v2)
 
 ## Install
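A hedged sketch of what the new deployment mode looks like from a client, based only on the `/switch-model` request body visible in the `config.py` diff further down this page; the server address and model path are placeholder assumptions:

```python
# Hypothetical client call enabling deploy mode; address and model path are
# assumptions, the body fields come from the SwitchModelBody diff below.
import requests

resp = requests.post(
    "http://127.0.0.1:8000/switch-model",
    json={
        "model": "models/RWKV-4-World-3B-v1-20230619-ctx4096.pth",
        "strategy": "cuda fp16",
        # once loading succeeds, /switch-model, /exit and the state cache
        # APIs answer 403 for the lifetime of this server process
        "deploy": True,
    },
)
print(resp.status_code)
```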

View File

@@ -18,6 +18,16 @@ build-linux:
 	@echo ---- build for linux
 	wails build -upx -ldflags "-s -w" -platform linux/amd64
 
+build-web:
+	@echo ---- build for web
+	cd frontend && npm run build
+
 dev:
 	wails dev
 
+dev-web:
+	cd frontend && npm run dev
+
+preview:
+	cd frontend && npm run preview

View File

@@ -47,7 +47,9 @@ English | [简体中文](README_ZH.md) | [日本語](README_JA.md)
 </div>
 
+#### Tip: You can deploy [backend-python](./backend-python/) on a server and use this program as a client only. Fill in your server address in the Settings `API URL`.
+
-#### Default configs has enabled custom CUDA kernel acceleration, which is much faster and consumes much less VRAM. If you encounter possible compatibility issues, go to the Configs page and turn off `Use Custom CUDA kernel to Accelerate`.
+#### Default configs has enabled custom CUDA kernel acceleration, which is much faster and consumes much less VRAM. If you encounter possible compatibility issues (output garbled), go to the Configs page and turn off `Use Custom CUDA kernel to Accelerate`, or try to upgrade your gpu driver.
 
 #### If Windows Defender claims this is a virus, you can try downloading [v1.3.7_win.zip](https://github.com/josStorer/RWKV-Runner/releases/download/v1.3.7/RWKV-Runner_win.zip) and letting it update automatically to the latest version, or add it to the trusted list (`Windows Security` -> `Virus & threat protection` -> `Manage settings` -> `Exclusions` -> `Add or remove exclusions` -> `Add an exclusion` -> `Folder` -> `RWKV-Runner`).

View File

@@ -47,7 +47,9 @@
 </div>
 
+#### ヒント:サーバーに[backend-python](./backend-python/)をデプロイし、このプログラムをクライアントとして使用することができます。設定された`API URL`にサーバーアドレスを入力してください。
+
-#### デフォルトの設定はカスタム CUDA カーネルアクセラレーションを有効にしています。互換性の問題が発生する可能性がある場合は、コンフィグページに移動し、`Use Custom CUDA kernel to Accelerate` をオフにしてください。
+#### デフォルトの設定はカスタム CUDA カーネルアクセラレーションを有効にしています。互換性の問題 (文字化けを出力する) が発生する可能性がある場合は、コンフィグページに移動し、`Use Custom CUDA kernel to Accelerate` をオフにしてください、あるいは、GPUドライバーをアップグレードしてみてください。
 
 #### Windows Defender がこれをウイルスだと主張する場合は、[v1.3.7_win.zip](https://github.com/josStorer/RWKV-Runner/releases/download/v1.3.7/RWKV-Runner_win.zip) をダウンロードして最新版に自動更新させるか、信頼済みリストに追加してみてください (`Windows Security` -> `Virus & threat protection` -> `Manage settings` -> `Exclusions` -> `Add or remove exclusions` -> `Add an exclusion` -> `Folder` -> `RWKV-Runner`)。
@@ -91,8 +93,8 @@ body.json:
 ## 埋め込み API の例
 
-Note: v1.4.0 has improved the quality of embeddings API. The generated results are not compatible
-with previous versions. If you are using embeddings API to generate knowledge bases or similar, please regenerate.
+注意: v1.4.0 では、埋め込み API の品質が向上しました。生成される結果は、以前のバージョンとは互換性がありません。
+もし、embeddings API を使って知識ベースなどを生成している場合は、再生成してください。
 
 LangChain を使用している場合は、`OpenAIEmbeddings(openai_api_base="http://127.0.0.1:8000", openai_api_key="sk-")`
 を使用してください

View File

@@ -46,7 +46,9 @@ API兼容的接口，这意味着一切ChatGPT客户端都是RWKV客户端。
 </div>
 
+#### 小贴士：你可以在服务器部署[backend-python](./backend-python/)，然后将此程序仅用作客户端，在设置的`API URL`中填入你的服务器地址
+
-#### 预设配置已经开启自定义CUDA算子加速，速度更快，且显存消耗更少。如果你遇到可能的兼容性问题，前往配置页面，关闭`使用自定义CUDA算子加速`
+#### 预设配置已经开启自定义CUDA算子加速，速度更快，且显存消耗更少。如果你遇到可能的兼容性(输出乱码)问题，前往配置页面，关闭`使用自定义CUDA算子加速`，或更新你的显卡驱动
 
 #### 如果Windows Defender说这是一个病毒，你可以尝试下载[v1.3.7_win.zip](https://github.com/josStorer/RWKV-Runner/releases/download/v1.3.7/RWKV-Runner_win.zip)，然后让其自动更新到最新版，或添加信任 (`Windows Security` -> `Virus & threat protection` -> `Manage settings` -> `Exclusions` -> `Add or remove exclusions` -> `Add an exclusion` -> `Folder` -> `RWKV-Runner`)

View File

@@ -53,12 +53,12 @@ type FileInfo struct {
 	ModTime string `json:"modTime"`
 }
 
-func (a *App) ReadFileInfo(fileName string) (FileInfo, error) {
+func (a *App) ReadFileInfo(fileName string) (*FileInfo, error) {
 	info, err := os.Stat(a.exDir + fileName)
 	if err != nil {
-		return FileInfo{}, err
+		return nil, err
 	}
-	return FileInfo{
+	return &FileInfo{
 		Name:    info.Name(),
 		Size:    info.Size(),
 		IsDir:   info.IsDir(),
@@ -145,6 +145,20 @@ func (a *App) OpenSaveFileDialogBytes(filterPattern string, defaultFileName stri
 	return path, nil
 }
 
+// Only return the path of the selected file, because communication between frontend and backend is slow. Use AssetServer Handler to read the file.
+func (a *App) OpenOpenFileDialog(filterPattern string) (string, error) {
+	path, err := wruntime.OpenFileDialog(a.ctx, wruntime.OpenDialogOptions{
+		Filters: []wruntime.FileFilter{{Pattern: filterPattern}},
+	})
+	if err != nil {
+		return "", err
+	}
+	if path == "" {
+		return "", nil
+	}
+	return path, nil
+}
+
 func (a *App) OpenFileFolder(path string, relative bool) error {
 	var absPath string
 	var err error

View File

@@ -10,7 +10,7 @@ import (
 	"strings"
 )
 
-func (a *App) StartServer(python string, port int, host string, rwkvBeta bool) (string, error) {
+func (a *App) StartServer(python string, port int, host string, webui bool, rwkvBeta bool) (string, error) {
 	var err error
 	if python == "" {
 		python, err = GetPython()
@@ -19,6 +19,9 @@ func (a *App) StartServer(python string, port int, host string, rwkvBeta bool) (
 		return "", err
 	}
 	args := []string{python, "./backend-python/main.py"}
+	if webui {
+		args = append(args, "--webui")
+	}
 	if rwkvBeta {
 		args = append(args, "--rwkv-beta")
 	}
@@ -28,8 +31,7 @@ func (a *App) StartServer(python string, port int, host string, rwkvBeta bool) (
 
 func (a *App) StartWebGPUServer(port int, host string) (string, error) {
 	args := []string{"./backend-rust/webgpu_server"}
-	args = append(args, "-a", "0", "-t", "backend-rust/assets/rwkv_vocab_v20230424.json",
-		"--port", strconv.Itoa(port), "--ip", host)
+	args = append(args, "--port", strconv.Itoa(port), "--ip", host)
 	return Cmd(args...)
 }
@@ -149,9 +151,9 @@ func (a *App) InstallPyDep(python string, cnMirror bool) (string, error) {
 	if runtime.GOOS == "windows" {
 		ChangeFileLine("./py310/python310._pth", 3, "Lib\\site-packages")
-		installScript := python + " ./backend-python/get-pip.py -i https://pypi.tuna.tsinghua.edu.cn/simple\n" +
-			python + " -m pip install torch==1.13.1 torchvision==0.14.1 torchaudio==0.13.1 --index-url https://download.pytorch.org/whl/cu117\n" +
-			python + " -m pip install -r ./backend-python/requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple\n" +
+		installScript := python + " ./backend-python/get-pip.py -i https://pypi.tuna.tsinghua.edu.cn/simple --no-warn-script-location\n" +
+			python + " -m pip install torch==1.13.1 torchvision==0.14.1 torchaudio==0.13.1 --index-url https://download.pytorch.org/whl/cu117 --no-warn-script-location\n" +
+			python + " -m pip install -r ./backend-python/requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple --no-warn-script-location\n" +
 			"exit"
 		if !cnMirror {
 			installScript = strings.Replace(installScript, " -i https://pypi.tuna.tsinghua.edu.cn/simple", "", -1)

View File

@@ -5,12 +5,15 @@ import (
 	"bufio"
 	"embed"
 	"errors"
+	"fmt"
 	"io"
 	"io/fs"
+	"net"
 	"os"
 	"os/exec"
 	"path/filepath"
 	"runtime"
+	"strconv"
 	"strings"
 )
@@ -205,3 +208,12 @@ func Unzip(source, destination string) error {
 	}
 	return nil
 }
+
+func (a *App) IsPortAvailable(port int) bool {
+	l, err := net.Listen("tcp", fmt.Sprintf("127.0.0.1:%s", strconv.Itoa(port)))
+	if err != nil {
+		return false
+	}
+	defer l.Close()
+	return true
+}

View File

@@ -231,5 +231,6 @@ try:
         convert_and_save_and_exit=args.out,
     )
 except Exception as e:
+    print(e)
     with open("error.txt", "w") as f:
         f.write(str(e))

View File

@@ -18,20 +18,31 @@ parser.add_argument(
 args = parser.parse_args()
 
 
-def convert_file(
-    pt_filename: str,
-    sf_filename: str,
-):
+def rename_key(rename, name):
+    for k, v in rename.items():
+        if k in name:
+            name = name.replace(k, v)
+    return name
+
+
+def convert_file(pt_filename: str, sf_filename: str, transpose_names=[], rename={}):
     loaded = torch.load(pt_filename, map_location="cpu")
     if "state_dict" in loaded:
         loaded = loaded["state_dict"]
     loaded = {k: v.clone().half() for k, v in loaded.items()}
-    for k, v in loaded.items():
-        print(f"{k}\t{v.shape}\t{v.dtype}")
+    # for k, v in loaded.items():
+    #     print(f'{k}\t{v.shape}\t{v.dtype}')
 
     # For tensors to be contiguous
-    loaded = {k: v.contiguous() for k, v in loaded.items()}
+    for k, v in loaded.items():
+        for transpose_name in transpose_names:
+            if transpose_name in k:
+                loaded[k] = v.transpose(0, 1)
+    loaded = {rename_key(rename, k).lower(): v.contiguous() for k, v in loaded.items()}
+    for k, v in loaded.items():
+        print(f"{k}\t{v.shape}\t{v.dtype}")
 
     dirname = os.path.dirname(sf_filename)
     os.makedirs(dirname, exist_ok=True)
@@ -46,8 +57,14 @@ def convert_file(
 if __name__ == "__main__":
     try:
-        convert_file(args.input, args.output)
+        convert_file(
+            args.input,
+            args.output,
+            ["lora_A"],
+            {"time_faaaa": "time_first", "lora_A": "lora.0", "lora_B": "lora.1"},
+        )
         print(f"Saved to {args.output}")
     except Exception as e:
+        print(e)
         with open("error.txt", "w") as f:
             f.write(str(e))
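A quick, self-contained illustration of what the new `rename_key` helper above does to a checkpoint key; the sample tensor names are hypothetical, while the mapping is the one passed in `__main__`:

```python
# rename_key as defined in the diff above
def rename_key(rename, name):
    for k, v in rename.items():
        if k in name:
            name = name.replace(k, v)
    return name


# hypothetical tensor names; mapping copied from the __main__ call
mapping = {"time_faaaa": "time_first", "lora_A": "lora.0", "lora_B": "lora.1"}
print(rename_key(mapping, "blocks.0.att.time_faaaa"))  # blocks.0.att.time_first
print(rename_key(mapping, "blocks.0.att.key.lora_A"))  # blocks.0.att.key.lora.0
```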

View File

@@ -1,3 +1,5 @@
+import multipart
+import fitz
 import safetensors
 import midi2audio
 import mido
@@ -9,6 +11,7 @@ import GPUtil
 import torch
 import rwkv
+import langchain
 import numpy
 import tokenizers
 import fastapi

View File

@@ -4,6 +4,7 @@ Args = "args"
Model = "model" Model = "model"
Model_Status = "model_status" Model_Status = "model_status"
Model_Config = "model_config" Model_Config = "model_config"
Deploy_Mode = "deploy_mode"
class ModelStatus(Enum): class ModelStatus(Enum):
@@ -16,6 +17,7 @@ def init():
global GLOBALS global GLOBALS
GLOBALS = {} GLOBALS = {}
set(Model_Status, ModelStatus.Offline) set(Model_Status, ModelStatus.Offline)
set(Deploy_Mode, False)
def set(key, value): def set(key, value):

View File

@@ -2,70 +2,8 @@ import time
 
 start_time = time.time()
 
-import os
-import sys
 import argparse
-from typing import Sequence
-
-sys.path.append(os.path.dirname(os.path.realpath(__file__)))
-
-import psutil
-from fastapi import Depends, FastAPI
-from fastapi.middleware.cors import CORSMiddleware
-import uvicorn
-from utils.rwkv import *
-from utils.torch import *
-from utils.ngrok import *
-from utils.log import log_middleware
-from routes import completion, config, state_cache, midi, misc
-import global_var
-
-app = FastAPI(dependencies=[Depends(log_middleware)])
-
-app.add_middleware(
-    CORSMiddleware,
-    allow_origins=["*"],
-    allow_credentials=True,
-    allow_methods=["*"],
-    allow_headers=["*"],
-)
-
-app.include_router(completion.router)
-app.include_router(config.router)
-app.include_router(midi.router)
-app.include_router(misc.router)
-app.include_router(state_cache.router)
-
-
-@app.on_event("startup")
-def init():
-    global_var.init()
-    cmd_params = os.environ["RWKV_RUNNER_PARAMS"]
-    global_var.set(
-        global_var.Args, get_args(cmd_params.split(" ") if cmd_params else None)
-    )
-    state_cache.init()
-    set_torch()
-
-    if os.environ.get("ngrok_token") is not None:
-        ngrok_connect()
-
-
-@app.get("/", tags=["Root"])
-def read_root():
-    return {"Hello": "World!"}
-
-
-@app.post("/exit", tags=["Root"])
-def exit():
-    parent_pid = os.getpid()
-    parent = psutil.Process(parent_pid)
-    for child in parent.children(recursive=True):
-        child.kill()
-    parent.kill()
+from typing import Union, Sequence
 
 
 def get_args(args: Union[Sequence[str], None] = None):
@@ -84,6 +22,11 @@ def get_args(args: Union[Sequence[str], None] = None):
         help="host to run the server on (default: 127.0.0.1)",
     )
     group = parser.add_argument_group(title="mode arguments")
+    group.add_argument(
+        "--webui",
+        action="store_true",
+        help="whether to enable WebUI (default: False)",
+    )
     group.add_argument(
         "--rwkv-beta",
         action="store_true",
@@ -96,6 +39,96 @@
 if __name__ == "__main__":
     args = get_args()
 
+import os
+import sys
+
+sys.path.append(os.path.dirname(os.path.realpath(__file__)))
+
+import psutil
+from contextlib import asynccontextmanager
+from fastapi import Depends, FastAPI, status
+from fastapi.middleware.cors import CORSMiddleware
+import uvicorn
+from utils.rwkv import *
+from utils.torch import *
+from utils.ngrok import *
+from utils.log import log_middleware
+from routes import completion, config, state_cache, midi, misc, file_process
+import global_var
+
+
+@asynccontextmanager
+async def lifespan(app: FastAPI):
+    init()
+    yield
+
+
+app = FastAPI(lifespan=lifespan, dependencies=[Depends(log_middleware)])
+
+app.add_middleware(
+    CORSMiddleware,
+    allow_origins=["*"],
+    allow_credentials=True,
+    allow_methods=["*"],
+    allow_headers=["*"],
+)
+
+app.include_router(completion.router)
+app.include_router(config.router)
+app.include_router(midi.router)
+app.include_router(file_process.router)
+app.include_router(misc.router)
+app.include_router(state_cache.router)
+
+
+@app.post("/exit", tags=["Root"])
+def exit():
+    if global_var.get(global_var.Deploy_Mode) is True:
+        raise HTTPException(status.HTTP_403_FORBIDDEN)
+
+    parent_pid = os.getpid()
+    parent = psutil.Process(parent_pid)
+    for child in parent.children(recursive=True):
+        child.kill()
+    parent.kill()
+
+
+try:
+    if (
+        "RWKV_RUNNER_PARAMS" in os.environ
+        and "--webui" in os.environ["RWKV_RUNNER_PARAMS"].split(" ")
+    ) or args.webui:
+        from webui_server import webui_server
+
+        app.mount("/", webui_server)
+except NameError:
+    pass
+
+
+@app.get("/", tags=["Root"])
+def read_root():
+    return {"Hello": "World!"}
+
+
+def init():
+    global_var.init()
+    cmd_params = os.environ["RWKV_RUNNER_PARAMS"]
+    global_var.set(
+        global_var.Args, get_args(cmd_params.split(" ") if cmd_params else None)
+    )
+    state_cache.init()
+    set_torch()
+
+    if os.environ.get("ngrok_token") is not None:
+        ngrok_connect()
+
+
+if __name__ == "__main__":
     os.environ["RWKV_RUNNER_PARAMS"] = " ".join(sys.argv[1:])
     print("--- %s seconds ---" % (time.time() - start_time))
     uvicorn.run("main:app", port=args.port, host=args.host, workers=1)
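The restructuring above moves argument parsing ahead of the heavy imports and swaps the deprecated `@app.on_event("startup")` hook for FastAPI's lifespan context manager. A minimal, self-contained sketch of that lifespan pattern; the names here are illustrative, not taken from the repo:

```python
from contextlib import asynccontextmanager
from fastapi import FastAPI


def init():
    # one-time setup, mirroring the init() the diff runs at startup
    print("server initialized")


@asynccontextmanager
async def lifespan(app: FastAPI):
    init()  # runs once, before the first request is served
    yield   # code after the yield would run on shutdown


app = FastAPI(lifespan=lifespan)
```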

Binary file not shown.

View File

@@ -40,19 +40,21 @@ default_stop = [
 class ChatCompletionBody(ModelConfigBody):
     messages: Union[List[Message], None]
-    model: str = "rwkv"
+    model: Union[str, None] = "rwkv"
     stream: bool = False
     stop: Union[str, List[str], None] = default_stop
-    user_name: Union[str, None] = Field(None, description="Internal user name")
+    user_name: Union[str, None] = Field(
+        None, description="Internal user name", min_length=1
+    )
     assistant_name: Union[str, None] = Field(
-        None, description="Internal assistant name"
+        None, description="Internal assistant name", min_length=1
     )
     presystem: bool = Field(
         True, description="Whether to insert default system prompt at the beginning"
     )
 
-    class Config:
-        schema_extra = {
+    model_config = {
+        "json_schema_extra": {
             "example": {
                 "messages": [
                     {"role": Role.User.value, "content": "hello", "raw": False}
@@ -70,16 +72,17 @@ class ChatCompletionBody(ModelConfigBody):
                 "frequency_penalty": 0.4,
             }
         }
+    }
 
 
 class CompletionBody(ModelConfigBody):
     prompt: Union[str, List[str], None]
-    model: str = "rwkv"
+    model: Union[str, None] = "rwkv"
     stream: bool = False
     stop: Union[str, List[str], None] = None
 
-    class Config:
-        schema_extra = {
+    model_config = {
+        "json_schema_extra": {
             "example": {
                 "prompt": "The following is an epic science fiction masterpiece that is immortalized, "
                 + "with delicate descriptions and grand depictions of interstellar civilization wars.\nChapter 1.\n",
@@ -93,6 +96,7 @@ class CompletionBody(ModelConfigBody):
                 "frequency_penalty": 0.4,
             }
         }
+    }
 
 
 completion_lock = Lock()
@@ -317,11 +321,13 @@ The following is a coherent verbose detailed conversation between a girl named {
         completion_text += append_message + "\n\n"
     completion_text += f"{bot}{interface}"
 
+    user_code = model.pipeline.decode([model.pipeline.encode(user)[0]])
+    bot_code = model.pipeline.decode([model.pipeline.encode(bot)[0]])
     if type(body.stop) == str:
-        body.stop = [body.stop, f"\n\n{user}", f"\n\n{bot}"]
+        body.stop = [body.stop, f"\n\n{user_code}", f"\n\n{bot_code}"]
     elif type(body.stop) == list:
-        body.stop.append(f"\n\n{user}")
-        body.stop.append(f"\n\n{bot}")
+        body.stop.append(f"\n\n{user_code}")
+        body.stop.append(f"\n\n{bot_code}")
     elif body.stop is None:
         body.stop = default_stop
@@ -368,12 +374,12 @@ async def completions(body: CompletionBody, request: Request):
 class EmbeddingsBody(BaseModel):
     input: Union[str, List[str], List[List[int]], None]
-    model: str = "rwkv"
+    model: Union[str, None] = "rwkv"
     encoding_format: str = None
     fast_mode: bool = False
 
-    class Config:
-        schema_extra = {
+    model_config = {
+        "json_schema_extra": {
             "example": {
                 "input": "a big apple",
                 "model": "rwkv",
@@ -381,6 +387,7 @@ class EmbeddingsBody(BaseModel):
                 "fast_mode": False,
             }
         }
+    }
 
 
 def embedding_base64(embedding: List[float]) -> str:
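The `class Config` to `model_config` edits above follow the Pydantic v2 migration pattern (`schema_extra` became `json_schema_extra`), which is what the "fix /docs default api params (Pydantic v2)" commit refers to. A minimal standalone sketch, with a hypothetical model for illustration:

```python
from pydantic import BaseModel


class CompletionExample(BaseModel):  # hypothetical model, not from the repo
    prompt: str
    stream: bool = False

    # Pydantic v2 style: a plain dict replaces the nested `class Config`
    model_config = {
        "json_schema_extra": {
            "example": {"prompt": "hello", "stream": False}
        }
    }


print(CompletionExample(prompt="hello"))
```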

View File

@@ -10,39 +10,34 @@ import global_var
 router = APIRouter()
 
 
-def get_tokens_path(model_path: str):
-    model_path = model_path.lower()
-    tokenizer_dir = f"{pathlib.Path(__file__).parent.parent.resolve()}/rwkv_pip/"
-    default_tokens_path = tokenizer_dir + "20B_tokenizer.json"
-
-    if "raven" in model_path:
-        return default_tokens_path
-    elif "world" in model_path:
-        return "rwkv_vocab_v20230424"
-    elif "midi" in model_path:
-        return tokenizer_dir + "tokenizer-midi.json"
-    else:
-        return default_tokens_path
-
-
 class SwitchModelBody(BaseModel):
     model: str
     strategy: str
+    tokenizer: Union[str, None] = None
     customCuda: bool = False
+    deploy: bool = Field(
+        False,
+        description="Deploy mode. If success, will disable /switch-model, /exit and other dangerous APIs (state cache APIs, part of midi APIs)",
+    )
 
-    class Config:
-        schema_extra = {
+    model_config = {
+        "json_schema_extra": {
             "example": {
                 "model": "models/RWKV-4-World-3B-v1-20230619-ctx4096.pth",
                 "strategy": "cuda fp16",
+                "tokenizer": None,
                 "customCuda": False,
+                "deploy": False,
             }
         }
+    }
 
 
 @router.post("/switch-model", tags=["Configs"])
 def switch_model(body: SwitchModelBody, response: Response, request: Request):
+    if global_var.get(global_var.Deploy_Mode) is True:
+        raise HTTPException(Status.HTTP_403_FORBIDDEN)
+
     if global_var.get(global_var.Model_Status) is global_var.ModelStatus.Loading:
         response.status_code = Status.HTTP_304_NOT_MODIFIED
         return
@@ -68,17 +63,7 @@ def switch_model(body: SwitchModelBody, response: Response, request: Request):
     try:
         global_var.set(
             global_var.Model,
-            TextRWKV(
-                model=body.model,
-                strategy=body.strategy,
-                tokens_path=get_tokens_path(body.model),
-            )
-            if "midi" not in body.model.lower()
-            else MusicRWKV(
-                model=body.model,
-                strategy=body.strategy,
-                tokens_path=get_tokens_path(body.model),
-            ),
+            RWKV(model=body.model, strategy=body.strategy, tokenizer=body.tokenizer),
         )
     except Exception as e:
         print(e)
@@ -88,6 +73,8 @@ def switch_model(body: SwitchModelBody, response: Response, request: Request):
             Status.HTTP_500_INTERNAL_SERVER_ERROR, f"failed to load: {e}"
         )
 
+    if body.deploy:
+        global_var.set(global_var.Deploy_Mode, True)
+
     if global_var.get(global_var.Model_Config) is None:
         global_var.set(
             global_var.Model_Config, get_rwkv_config(global_var.get(global_var.Model))

View File

@@ -0,0 +1,79 @@
+import os
+from fastapi import (
+    APIRouter,
+    HTTPException,
+    status,
+    Depends,
+    File,
+    UploadFile,
+)
+from pydantic import BaseModel
+from typing import Iterator
+
+router = APIRouter()
+
+
+class FileToTextParams(BaseModel):
+    file_name: str
+    file_encoding: str = "utf-8"
+
+
+@router.post("/file-to-text", tags=["File Process"])
+async def file_to_text(
+    params: FileToTextParams = Depends(), file_data: UploadFile = File(...)
+):
+    from langchain.schema import Document
+    from langchain.document_loaders.blob_loaders import Blob
+
+    # from langchain
+    def parse_text(blob: Blob) -> Iterator[Document]:
+        yield Document(page_content=blob.as_string(), metadata={"source": blob.source})
+
+    # from langchain
+    def parse_pdf(blob: Blob) -> Iterator[Document]:
+        import fitz
+
+        with blob.as_bytes_io() as stream:
+            doc = fitz.Document(stream=stream)
+
+            yield from [
+                Document(
+                    page_content=page.get_text(),
+                    metadata=dict(
+                        {
+                            "source": blob.source,
+                            "file_path": blob.source,
+                            "page": page.number,
+                            "total_pages": len(doc),
+                        },
+                        **{
+                            k: doc.metadata[k]
+                            for k in doc.metadata
+                            if type(doc.metadata[k]) in [str, int]
+                        },
+                    ),
+                )
+                for page in doc
+            ]
+
+    file_parsers = {".txt": parse_text, ".pdf": parse_pdf}
+
+    file_name = file_data.filename or params.file_name
+    file_ext = os.path.splitext(file_name)[-1]
+
+    if file_ext not in file_parsers:
+        raise HTTPException(status.HTTP_400_BAD_REQUEST, "file type not supported")
+
+    try:
+        pages: Iterator[Document] = file_parsers[file_ext](
+            Blob.from_data(
+                await file_data.read(),
+                encoding=params.file_encoding,
+                path=file_name,
+            )
+        )
+        pages = list(pages)
+    except Exception as e:
+        raise HTTPException(status.HTTP_400_BAD_REQUEST, f"{e}")
+
+    return {"pages": pages}
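A possible client call for the new `/file-to-text` route, assuming the default server address; per the handler above, `file_name` travels as a query parameter (the `Depends()` model) and the document bytes as the multipart `file_data` field:

```python
# Hedged usage sketch; the address and file are assumptions, the parameter
# names come from FileToTextParams and the UploadFile field in the diff.
import requests

with open("sample.pdf", "rb") as f:  # any .txt or .pdf, per file_parsers
    resp = requests.post(
        "http://127.0.0.1:8000/file-to-text",
        params={"file_name": "sample.pdf"},
        files={"file_data": ("sample.pdf", f, "application/pdf")},
    )
print(resp.json()["pages"][0]["page_content"])
```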

View File

@@ -1,4 +1,5 @@
 import io
+import global_var
 from fastapi import APIRouter, HTTPException, status
 from starlette.responses import StreamingResponse
 from pydantic import BaseModel
@@ -11,12 +12,13 @@ router = APIRouter()
 class TextToMidiBody(BaseModel):
     text: str
 
-    class Config:
-        schema_extra = {
+    model_config = {
+        "json_schema_extra": {
             "example": {
                 "text": "p:24:a p:2a:a p:31:a p:39:a p:3b:a p:45:a b:26:a g:3e:a g:3e:a g:42:a g:42:a g:45:a g:45:a pi:3e:a pi:42:a pi:45:a t14 p:24:0 p:2a:0 p:31:0 p:39:0 p:3b:0 p:45:0 t2 p:2a:a p:3b:a p:45:a t14 p:2a:0 p:3b:0 p:45:0 b:26:0 g:3e:0 g:3e:0 g:42:0 g:42:0 g:45:0 g:45:0 pi:3e:0 pi:42:0 pi:45:0 t2 p:2e:a p:3b:a p:45:a b:26:a g:3e:a g:3e:a g:42:a g:42:a g:45:a g:45:a pi:3e:a pi:42:a pi:45:a t14 p:2e:0 p:3b:0 p:45:0 g:3e:0 g:3e:0 g:42:0 g:42:0 g:45:0 g:45:0 pi:3e:0 pi:42:0 pi:45:0 t2 p:2e:a p:3b:a p:45:a g:3e:a g:3e:a g:42:a g:42:a g:45:a g:45:a pi:3e:a pi:42:a pi:45:a t14 p:2e:0 p:3b:0 p:45:0 b:26:0 g:3e:0 g:3e:0 g:42:0 g:42:0 g:45:0 g:45:0 pi:3e:0 pi:42:0 pi:45:0 t2 p:26:a p:2a:a p:3b:a p:45:a t14 p:26:0 p:2a:0 p:3b:0 p:45:0 t2 p:2a:a p:3b:a p:45:a b:26:a g:3e:a g:3e:a g:42:a g:42:a g:45:a g:45:a pi:3e:a pi:42:a pi:45:a t14 p:2a:0 p:3b:0 p:45:0 b:26:0 t2 p:24:a p:2a:a p:3b:a p:45:a b:2d:a t14 p:24:0 p:2a:0 p:3b:0 p:45:0 b:2d:0 g:3e:0 g:3e:0 g:42:0 g:42:0 g:45:0 g:45:0 pi:3e:0 pi:42:0 pi:45:0 t2 p:24:a p:2a:a p:3b:a p:45:a b:21:a g:39:a g:39:a g:3d:a g:3d:a g:40:a g:40:a pi:39:a pi:3d:a pi:40:a t14 p:24:0 p:2a:0 p:3b:0 p:45:0 t2 p:2a:a p:3b:a p:45:a t14 p:2a:0 p:3b:0 p:45:0 b:21:0 g:39:0 g:39:0 g:3d:0 g:3d:0 g:40:0 g:40:0 pi:39:0 pi:3d:0 pi:40:0 t2 p:24:a p:2e:a p:3b:a p:45:a b:21:a g:39:a g:39:a g:3d:a g:3d:a g:40:a g:40:a pi:39:a pi:3d:a pi:40:a t14 p:24:0 p:2e:0 p:3b:0 p:45:0 b:21:0 g:39:0 g:39:0 g:3d:0 g:3d:0 g:40:0 g:40:0 pi:39:0 pi:3d:0 pi:40:0 t2 p:24:a p:2a:a p:3b:a p:45:a b:21:a g:39:a g:39:a g:3d:a g:3d:a g:40:a g:40:a pi:39:a pi:3d:a pi:40:a t14 p:24:0 p:2a:0 p:3b:0 p:45:0 t2 p:2a:a p:3b:a p:45:a t14 p:2a:0 p:3b:0 p:45:0 b:21:0 g:39:0 g:39:0 g:3d:0 g:3d:0 g:40:0 g:40:0 pi:39:0 pi:3d:0 pi:40:0 t2 p:26:a p:2a:a p:3b:a p:45:a b:21:a g:39:a g:39:a g:3d:a g:3d:a g:40:a g:40:a pi:39:a pi:3d:a pi:40:a t14 p:26:0 p:2a:0 p:3b:0 p:45:0 t2 p:2a:a p:3b:a p:45:a t14 p:2a:0 p:3b:0 p:45:0 b:21:0 g:39:0 g:39:0 g:3d:0 g:3d:0 g:40:0 g:40:0 pi:39:0 pi:3d:0 pi:40:0 t2 p:26:a p:2e:a p:31:a p:39:a p:3b:a p:45:a b:21:a g:39:a g:39:a g:3d:a g:3d:a g:40:a g:40:a pi:39:a pi:3d:a pi:40:a t14 p:26:0 p:2e:0 p:31:0 p:39:0 p:3b:0 p:45:0 b:21:0 t2 p:26:a p:2e:a p:31:a p:39:a p:3b:a p:45:a b:21:a t14 p:26:0 p:2e:0 p:31:0 p:39:0 p:3b:0 p:45:0 b:21:0 g:39:0 g:39:0 g:3d:0 g:3d:0 g:40:0 g:40:0 pi:39:0 pi:3d:0 pi:40:0 t2 p:24:a p:2a:a p:31:a p:39:a p:3b:a p:45:a b:1f:a g:3b:a g:3b:a g:3e:a g:3e:a g:43:a g:43:a pi:3b:a pi:3e:a pi:43:a t14 p:24:0 p:2a:0 p:31:0 p:39:0 p:3b:0 p:45:0 t2 p:2a:a p:3b:a p:45:a t14 p:2a:0 p:3b:0 p:45:0 b:1f:0 g:3b:0 g:3b:0 g:3e:0 g:3e:0 g:43:0 g:43:0 pi:3b:0 pi:3e:0 pi:43:0 t2 p:2e:a p:3b:a p:45:a b:1f:a g:3b:a g:3b:a g:3e:a g:3e:a g:43:a g:43:a pi:3b:a pi:3e:a pi:43:a t14 p:2e:0 p:3b:0 p:45:0 g:3b:0 g:3b:0 g:3e:0 g:3e:0 g:43:0 g:43:0 pi:3b:0 pi:3e:0 pi:43:0 t2 p:2e:a p:3b:a p:45:a g:3b:a g:3b:a g:3e:a g:3e:a g:43:a g:43:a pi:3b:a pi:3e:a pi:43:a t14 p:2e:0 p:3b:0 p:45:0 b:1f:0 g:3b:0 g:3b:0 g:3e:0 g:3e:0 g:43:0 g:43:0 pi:3b:0 pi:3e:0 pi:43:0 t2 p:26:a p:2a:a p:3b:a p:45:a t14 p:26:0 p:2a:0 p:3b:0 p:45:0 t2 p:2a:a p:3b:a p:45:a b:1f:a g:3b:a g:3b:a g:3e:a g:3e:a g:43:a g:43:a pi:3b:a pi:3e:a pi:43:a t14 p:2a:0 p:3b:0 p:45:0 b:1f:0 t2 p:24:a p:2a:a p:3b:a p:45:a b:1f:a t14 p:24:0 p:2a:0 p:3b:0 p:45:0 b:1f:0 g:3b:0 g:3b:0 g:3e:0 g:3e:0 g:43:0 g:43:0 pi:3b:0 pi:3e:0 pi:43:0 t2 p:24:a p:2e:a p:3b:a p:45:a b:26:a g:39:a g:39:a g:3e:a g:3e:a g:42:a g:42:a pi:39:a pi:3e:a pi:42:a t14 p:24:0 p:2e:0 p:3b:0 p:45:0 t2 p:2a:a p:3b:a p:45:a t14 p:2a:0 p:3b:0",
             }
         }
+    }
 
 
 @router.post("/text-to-midi", tags=["MIDI"])
@@ -35,17 +37,21 @@ class TxtToMidiBody(BaseModel):
     txt_path: str
     midi_path: str
 
-    class Config:
-        schema_extra = {
+    model_config = {
+        "json_schema_extra": {
             "example": {
                 "txt_path": "midi/sample.txt",
                 "midi_path": "midi/sample.mid",
             }
         }
+    }
 
 
 @router.post("/txt-to-midi", tags=["MIDI"])
 def txt_to_midi(body: TxtToMidiBody):
+    if global_var.get(global_var.Deploy_Mode) is True:
+        raise HTTPException(status.HTTP_403_FORBIDDEN)
+
     if not body.midi_path.startswith("midi/"):
         raise HTTPException(status.HTTP_400_BAD_REQUEST, "bad output path")
@@ -65,14 +71,15 @@ class MidiToWavBody(BaseModel):
     wav_path: str
     sound_font_path: str = "assets/default_sound_font.sf2"
 
-    class Config:
-        schema_extra = {
+    model_config = {
+        "json_schema_extra": {
             "example": {
                 "midi_path": "midi/sample.mid",
                 "wav_path": "midi/sample.wav",
                 "sound_font_path": "assets/default_sound_font.sf2",
             }
         }
+    }
 
 
 @router.post("/midi-to-wav", tags=["MIDI"])
@@ -81,6 +88,9 @@ def midi_to_wav(body: MidiToWavBody):
     Install fluidsynth first, see more: https://github.com/FluidSynth/fluidsynth/wiki/Download#distributions
     """
+    if global_var.get(global_var.Deploy_Mode) is True:
+        raise HTTPException(status.HTTP_403_FORBIDDEN)
+
     if not body.wav_path.startswith("midi/"):
         raise HTTPException(status.HTTP_400_BAD_REQUEST, "bad output path")
@@ -95,14 +105,15 @@ class TextToWavBody(BaseModel):
     wav_name: str
     sound_font_path: str = "assets/default_sound_font.sf2"
 
-    class Config:
-        schema_extra = {
+    model_config = {
+        "json_schema_extra": {
             "example": {
                 "text": "p:24:a p:2a:a p:31:a p:39:a p:3b:a p:45:a b:26:a g:3e:a g:3e:a g:42:a g:42:a g:45:a g:45:a pi:3e:a pi:42:a pi:45:a t14 p:24:0 p:2a:0 p:31:0 p:39:0 p:3b:0 p:45:0 t2 p:2a:a p:3b:a p:45:a t14 p:2a:0 p:3b:0 p:45:0 b:26:0 g:3e:0 g:3e:0 g:42:0 g:42:0 g:45:0 g:45:0 pi:3e:0 pi:42:0 pi:45:0 t2 p:2e:a p:3b:a p:45:a b:26:a g:3e:a g:3e:a g:42:a g:42:a g:45:a g:45:a pi:3e:a pi:42:a pi:45:a t14 p:2e:0 p:3b:0 p:45:0 g:3e:0 g:3e:0 g:42:0 g:42:0 g:45:0 g:45:0 pi:3e:0 pi:42:0 pi:45:0 t2 p:2e:a p:3b:a p:45:a g:3e:a g:3e:a g:42:a g:42:a g:45:a g:45:a pi:3e:a pi:42:a pi:45:a t14 p:2e:0 p:3b:0 p:45:0 b:26:0 g:3e:0 g:3e:0 g:42:0 g:42:0 g:45:0 g:45:0 pi:3e:0 pi:42:0 pi:45:0 t2 p:26:a p:2a:a p:3b:a p:45:a t14 p:26:0 p:2a:0 p:3b:0 p:45:0 t2 p:2a:a p:3b:a p:45:a b:26:a g:3e:a g:3e:a g:42:a g:42:a g:45:a g:45:a pi:3e:a pi:42:a pi:45:a t14 p:2a:0 p:3b:0 p:45:0 b:26:0 t2 p:24:a p:2a:a p:3b:a p:45:a b:2d:a t14 p:24:0 p:2a:0 p:3b:0 p:45:0 b:2d:0 g:3e:0 g:3e:0 g:42:0 g:42:0 g:45:0 g:45:0 pi:3e:0 pi:42:0 pi:45:0 t2 p:24:a p:2a:a p:3b:a p:45:a b:21:a g:39:a g:39:a g:3d:a g:3d:a g:40:a g:40:a pi:39:a pi:3d:a pi:40:a t14 p:24:0 p:2a:0 p:3b:0 p:45:0 t2 p:2a:a p:3b:a p:45:a t14 p:2a:0 p:3b:0 p:45:0 b:21:0 g:39:0 g:39:0 g:3d:0 g:3d:0 g:40:0 g:40:0 pi:39:0 pi:3d:0 pi:40:0 t2 p:24:a p:2e:a p:3b:a p:45:a b:21:a g:39:a g:39:a g:3d:a g:3d:a g:40:a g:40:a pi:39:a pi:3d:a pi:40:a t14 p:24:0 p:2e:0 p:3b:0 p:45:0 b:21:0 g:39:0 g:39:0 g:3d:0 g:3d:0 g:40:0 g:40:0 pi:39:0 pi:3d:0 pi:40:0 t2 p:24:a p:2a:a p:3b:a p:45:a b:21:a g:39:a g:39:a g:3d:a g:3d:a g:40:a g:40:a pi:39:a pi:3d:a pi:40:a t14 p:24:0 p:2a:0 p:3b:0 p:45:0 t2 p:2a:a p:3b:a p:45:a t14 p:2a:0 p:3b:0 p:45:0 b:21:0 g:39:0 g:39:0 g:3d:0 g:3d:0 g:40:0 g:40:0 pi:39:0 pi:3d:0 pi:40:0 t2 p:26:a p:2a:a p:3b:a p:45:a b:21:a g:39:a g:39:a g:3d:a g:3d:a g:40:a g:40:a pi:39:a pi:3d:a pi:40:a t14 p:26:0 p:2a:0 p:3b:0 p:45:0 t2 p:2a:a p:3b:a p:45:a t14 p:2a:0 p:3b:0 p:45:0 b:21:0 g:39:0 g:39:0 g:3d:0 g:3d:0 g:40:0 g:40:0 pi:39:0 pi:3d:0 pi:40:0 t2 p:26:a p:2e:a p:31:a p:39:a p:3b:a p:45:a b:21:a g:39:a g:39:a g:3d:a g:3d:a g:40:a g:40:a pi:39:a pi:3d:a pi:40:a t14 p:26:0 p:2e:0 p:31:0 p:39:0 p:3b:0 p:45:0 b:21:0 t2 p:26:a p:2e:a p:31:a p:39:a p:3b:a p:45:a b:21:a t14 p:26:0 p:2e:0 p:31:0 p:39:0 p:3b:0 p:45:0 b:21:0 g:39:0 g:39:0 g:3d:0 g:3d:0 g:40:0 g:40:0 pi:39:0 pi:3d:0 pi:40:0 t2 p:24:a p:2a:a p:31:a p:39:a p:3b:a p:45:a b:1f:a g:3b:a g:3b:a g:3e:a g:3e:a g:43:a g:43:a pi:3b:a pi:3e:a pi:43:a t14 p:24:0 p:2a:0 p:31:0 p:39:0 p:3b:0 p:45:0 t2 p:2a:a p:3b:a p:45:a t14 p:2a:0 p:3b:0 p:45:0 b:1f:0 g:3b:0 g:3b:0 g:3e:0 g:3e:0 g:43:0 g:43:0 pi:3b:0 pi:3e:0 pi:43:0 t2 p:2e:a p:3b:a p:45:a b:1f:a g:3b:a g:3b:a g:3e:a g:3e:a g:43:a g:43:a pi:3b:a pi:3e:a pi:43:a t14 p:2e:0 p:3b:0 p:45:0 g:3b:0 g:3b:0 g:3e:0 g:3e:0 g:43:0 g:43:0 pi:3b:0 pi:3e:0 pi:43:0 t2 p:2e:a p:3b:a p:45:a g:3b:a g:3b:a g:3e:a g:3e:a g:43:a g:43:a pi:3b:a pi:3e:a pi:43:a t14 p:2e:0 p:3b:0 p:45:0 b:1f:0 g:3b:0 g:3b:0 g:3e:0 g:3e:0 g:43:0 g:43:0 pi:3b:0 pi:3e:0 pi:43:0 t2 p:26:a p:2a:a p:3b:a p:45:a t14 p:26:0 p:2a:0 p:3b:0 p:45:0 t2 p:2a:a p:3b:a p:45:a b:1f:a g:3b:a g:3b:a g:3e:a g:3e:a g:43:a g:43:a pi:3b:a pi:3e:a pi:43:a t14 p:2a:0 p:3b:0 p:45:0 b:1f:0 t2 p:24:a p:2a:a p:3b:a p:45:a b:1f:a t14 p:24:0 p:2a:0 p:3b:0 p:45:0 b:1f:0 g:3b:0 g:3b:0 g:3e:0 g:3e:0 g:43:0 g:43:0 pi:3b:0 pi:3e:0 pi:43:0 t2 p:24:a p:2e:a p:3b:a p:45:a b:26:a g:39:a g:39:a g:3e:a g:3e:a g:42:a g:42:a pi:39:a pi:3e:a pi:42:a t14 p:24:0 p:2e:0 p:3b:0 p:45:0 t2 p:2a:a p:3b:a p:45:a t14 p:2a:0 p:3b:0",
                 "wav_name": "sample",
                 "sound_font_path": "assets/default_sound_font.sf2",
             }
         }
+    }
 
 
 @router.post("/text-to-wav", tags=["MIDI"])
@@ -111,6 +122,9 @@ def text_to_wav(body: TextToWavBody):
     Install fluidsynth first, see more: https://github.com/FluidSynth/fluidsynth/wiki/Download#distributions
     """
+    if global_var.get(global_var.Deploy_Mode) is True:
+        raise HTTPException(status.HTTP_403_FORBIDDEN)
+
     text = body.text.strip()
     if not text.startswith("<start>"):
         text = "<start> " + text

View File

@@ -4,12 +4,13 @@ from fastapi import APIRouter, HTTPException, Request, Response, status
from pydantic import BaseModel from pydantic import BaseModel
import gc import gc
import copy import copy
import global_var
router = APIRouter() router = APIRouter()
trie = None trie = None
dtrie: Dict = {} dtrie: Dict = {}
max_trie_len = 3000 max_trie_len = 300
loop_start_id = 1 # to prevent preloaded prompts from being deleted loop_start_id = 1 # to prevent preloaded prompts from being deleted
loop_del_trie_id = loop_start_id loop_del_trie_id = loop_start_id
@@ -36,6 +37,9 @@ def init():
def disable_state_cache(): def disable_state_cache():
global trie, dtrie global trie, dtrie
if global_var.get(global_var.Deploy_Mode) is True:
raise HTTPException(status.HTTP_403_FORBIDDEN)
trie = None trie = None
dtrie = {} dtrie = {}
gc.collect() gc.collect()
@@ -46,6 +50,10 @@ def disable_state_cache():
@router.post("/enable-state-cache", tags=["State Cache"]) @router.post("/enable-state-cache", tags=["State Cache"])
def enable_state_cache(): def enable_state_cache():
global trie, dtrie global trie, dtrie
if global_var.get(global_var.Deploy_Mode) is True:
raise HTTPException(status.HTTP_403_FORBIDDEN)
try: try:
import cyac import cyac
@@ -68,6 +76,10 @@ class AddStateBody(BaseModel):
@router.post("/add-state", tags=["State Cache"]) @router.post("/add-state", tags=["State Cache"])
def add_state(body: AddStateBody): def add_state(body: AddStateBody):
global trie, dtrie, loop_del_trie_id global trie, dtrie, loop_del_trie_id
if global_var.get(global_var.Deploy_Mode) is True:
raise HTTPException(status.HTTP_403_FORBIDDEN)
if trie is None: if trie is None:
raise HTTPException(status.HTTP_400_BAD_REQUEST, "trie not loaded") raise HTTPException(status.HTTP_400_BAD_REQUEST, "trie not loaded")
@@ -108,6 +120,10 @@ def add_state(body: AddStateBody):
@router.post("/reset-state", tags=["State Cache"]) @router.post("/reset-state", tags=["State Cache"])
def reset_state(): def reset_state():
global trie, dtrie global trie, dtrie
if global_var.get(global_var.Deploy_Mode) is True:
raise HTTPException(status.HTTP_403_FORBIDDEN)
if trie is None: if trie is None:
raise HTTPException(status.HTTP_400_BAD_REQUEST, "trie not loaded") raise HTTPException(status.HTTP_400_BAD_REQUEST, "trie not loaded")
@@ -144,6 +160,10 @@ def __get_a_dtrie_buff_size(dtrie_v):
@router.post("/longest-prefix-state", tags=["State Cache"]) @router.post("/longest-prefix-state", tags=["State Cache"])
def longest_prefix_state(body: LongestPrefixStateBody, request: Request): def longest_prefix_state(body: LongestPrefixStateBody, request: Request):
global trie global trie
if global_var.get(global_var.Deploy_Mode) is True:
raise HTTPException(status.HTTP_403_FORBIDDEN)
if trie is None: if trie is None:
raise HTTPException(status.HTTP_400_BAD_REQUEST, "trie not loaded") raise HTTPException(status.HTTP_400_BAD_REQUEST, "trie not loaded")
@@ -183,6 +203,10 @@ def longest_prefix_state(body: LongestPrefixStateBody, request: Request):
@router.post("/save-state", tags=["State Cache"]) @router.post("/save-state", tags=["State Cache"])
def save_state(): def save_state():
global trie global trie
if global_var.get(global_var.Deploy_Mode) is True:
raise HTTPException(status.HTTP_403_FORBIDDEN)
if trie is None: if trie is None:
raise HTTPException(status.HTTP_400_BAD_REQUEST, "trie not loaded") raise HTTPException(status.HTTP_400_BAD_REQUEST, "trie not loaded")
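Every state-cache route above repeats the same Deploy_Mode guard. A shared FastAPI dependency could express that guard once; a minimal sketch, assuming the same `global_var` module this diff imports (the `Depends`-based wiring is illustrative, not the repository's actual code):

    from fastapi import APIRouter, Depends, HTTPException, status
    import global_var  # assumed to expose get() and Deploy_Mode as used above

    def forbid_in_deploy_mode():
        # Dangerous endpoints are rejected outright when running in deployment mode.
        if global_var.get(global_var.Deploy_Mode) is True:
            raise HTTPException(status.HTTP_403_FORBIDDEN)

    router = APIRouter(dependencies=[Depends(forbid_in_deploy_mode)])

Attaching the dependency at the router level would keep each handler free of the repeated check while returning the same 403.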

View File

@@ -88,7 +88,7 @@ struct Mix {
using torch::Tensor;

-void gemm_fp16_cublas(Tensor a, Tensor b, Tensor c);
+void gemm_fp16_cublas_tensor(Tensor a, Tensor b, Tensor c);
Tensor att_one(Tensor x, Tensor ln_w, Tensor ln_b, Tensor sx, Tensor k_mix,
               Tensor v_mix, Tensor r_mix, Tensor kw,
@@ -105,9 +105,9 @@ Tensor att_one(Tensor x, Tensor ln_w, Tensor ln_b, Tensor sx, Tensor k_mix,
               data_ptr<half>(vx), data_ptr<half>(rx)},
              x.numel());
-  gemm_fp16_cublas(kx, kw, k);
-  gemm_fp16_cublas(vx, vw, v);
-  gemm_fp16_cublas(rx, rw, r);
+  gemm_fp16_cublas_tensor(kx, kw, k);
+  gemm_fp16_cublas_tensor(vx, vw, v);
+  gemm_fp16_cublas_tensor(rx, rw, r);
  at::sigmoid_(r);
  element_wise(WkvForwardOne{data_ptr<float>(t_first), data_ptr<float>(k),
@@ -118,7 +118,7 @@ Tensor att_one(Tensor x, Tensor ln_w, Tensor ln_b, Tensor sx, Tensor k_mix,
              data_ptr<half>(r)},
             x.numel());
-  gemm_fp16_cublas(r, ow, x_plus_out);
+  gemm_fp16_cublas_tensor(r, ow, x_plus_out);
  x_plus_out += x;
  return xx;
}

View File

@@ -0,0 +1,109 @@
#include "ATen/ATen.h"
#include <cuda_fp16.h>
#include <cuda_runtime.h>
#include <torch/extension.h>
#include "element_wise.h"
#include "util.h"
// Equivalent Python code:
// s1 = t_first * a + s
// s2 = a + t_decay * s
struct Fused1 {
const float *t_first;
const float *t_decay;
const float *a;
const float *s;
const int32_t inner_size;
/* out */ float *s1;
/* out */ float *s2;
__device__ void operator()(int i) const {
const int j = i / inner_size;
s1[i] = t_first[j] * a[i] + s[i];
s2[i] = a[i] + t_decay[j] * s[i];
}
};
/*
Equivalent Python code:
kx = xx * k_mix + sx * (1 - k_mix)
vx = xx * v_mix + sx * (1 - v_mix)
rx = xx * r_mix + sx * (1 - r_mix)
*/
struct Mix {
const half *xx;
const half *sx;
const half *k_mix;
const half *v_mix;
const half *r_mix;
/* out */ half *kx;
/* out */ half *vx;
/* out */ half *rx;
__device__ void operator()(int i) const {
half xx_ = xx[i];
half sx_ = sx[i];
half k_mix_ = k_mix[i];
half v_mix_ = v_mix[i];
half r_mix_ = r_mix[i];
kx[i] = __hadd(__hmul(xx_, k_mix_),
__hmul(sx_, __hsub(__float2half(1), k_mix_)));
vx[i] = __hadd(__hmul(xx_, v_mix_),
__hmul(sx_, __hsub(__float2half(1), v_mix_)));
rx[i] = __hadd(__hmul(xx_, r_mix_),
__hmul(sx_, __hsub(__float2half(1), r_mix_)));
}
};
using torch::Tensor;
void gemm_fp16_cublas_tensor(Tensor a, Tensor b, Tensor c);
Tensor att_one_v5(Tensor x, Tensor sx, Tensor s, Tensor ln_w, Tensor ln_b,
Tensor lx_w, Tensor lx_b, Tensor k_mix, Tensor v_mix,
Tensor r_mix, Tensor kw,
/* imm */ Tensor kx, Tensor vw, /* imm */ Tensor vx,
Tensor rw,
/* imm */ Tensor rx, Tensor ow, Tensor t_first,
/* imm */ Tensor k, Tensor t_decay, /* imm */ Tensor v,
/* imm */ Tensor r, /* imm */ Tensor s1,
/* out */ Tensor x_plus_out, /* out */ Tensor s2) {
Tensor xx = at::layer_norm(x, {x.size(-1)}, ln_w, ln_b);
element_wise(Mix{data_ptr<half>(xx), data_ptr<half>(sx),
data_ptr<half>(k_mix), data_ptr<half>(v_mix),
data_ptr<half>(r_mix), data_ptr<half>(kx),
data_ptr<half>(vx), data_ptr<half>(rx)},
x.numel());
int H = t_decay.size(0);
int S = x.size(-1) / H;
gemm_fp16_cublas_tensor(rx, rw, r);
r = at::reshape(r, {H, 1, S});
gemm_fp16_cublas_tensor(kx, kw, k);
k = at::reshape(k, {H, S, 1});
gemm_fp16_cublas_tensor(vx, vw, v);
v = at::reshape(v, {H, 1, S});
{
Tensor a = at::matmul(k, v);
// s1 = t_first * a + s
// s2 = a + t_decay * s
element_wise(Fused1{data_ptr<float>(t_first), data_ptr<float>(t_decay),
data_ptr<float>(a), data_ptr<float>(s),
static_cast<int32_t>(a.size(1) * a.size(2)),
data_ptr<float>(s1), data_ptr<float>(s2)},
a.numel());
}
Tensor out = at::matmul(r, s1);
out = at::flatten(out);
out = at::squeeze(at::group_norm(at::unsqueeze(out, 0), H, lx_w, lx_b), 0);
out = at::_cast_Half(out);
gemm_fp16_cublas_tensor(out, ow, x_plus_out);
x_plus_out += x;
return xx;
}
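The fused kernel above implements one step of RWKV v5 attention: a per-head outer product a = k vᵀ, a bonused readout s1 = t_first · a + s, and a decayed state update s2 = a + t_decay · s. A shape-only NumPy restatement (fp32 throughout, whereas the kernel mixes fp16 and fp32; names mirror the kernel arguments):

    import numpy as np

    H, S = 4, 16                        # heads, head size (C = H * S)
    rng = np.random.default_rng(0)
    r = rng.standard_normal((H, 1, S))
    k = rng.standard_normal((H, S, 1))
    v = rng.standard_normal((H, 1, S))
    s = np.zeros((H, S, S))             # recurrent state: one S x S matrix per head
    t_first = rng.standard_normal((H, 1, 1))
    t_decay = rng.random((H, 1, 1))     # in (0, 1) after exp(-exp(...))

    a = k @ v                           # (H, S, S) outer product per head
    s1 = t_first * a + s                # readout with the "bonus" for the current token
    s2 = a + t_decay * s                # state carried to the next token
    out = (r @ s1).reshape(H * S)       # group_norm and the output gemm follow in the kernel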

View File

@@ -8,7 +8,6 @@
using torch::Tensor;
-void gemm_fp16_cublas(Tensor a, Tensor b, Tensor c);
void gemm_fp16_cublas(const void *a, const void *b, void *c, int m,
                      int n, int k, bool output_fp32);

View File

@@ -70,11 +70,59 @@ void gemm_fp16_cublas(const void *a, const void *b, void *c, int ori_m,
      cuda_c_data_type, cublas_ldc, compute_type, algo));
}

-void gemm_fp16_cublas(torch::Tensor a, torch::Tensor b, torch::Tensor c) {
-  // comptiable with rwkv one mode, 1-D tensor * 2-D tensor
-  const int m = a.dense_dim() == 1 ? 1 : a.size(0);
-  const int n = b.size(1);
-  const int k = b.size(0);
-  gemm_fp16_cublas(a.data_ptr(), b.data_ptr(), c.data_ptr(), m, n, k,
-                   c.dtype() == torch::kFloat32);
+/*
+  NOTE: blas gemm is column-major by default, but we need row-major output.
+  The data of row-major, transposed matrix is exactly the same as the
+  column-major, non-transposed matrix, and C = A * B ---> C^T = B^T * A^T
+*/
+void gemm_fp16_cublas_tensor(torch::Tensor a, torch::Tensor b, torch::Tensor c) {
+  if (a.sizes().size() == 1) {
+    assert(b.sizes().size() == 2);
+    a = at::unsqueeze(a, 0);
+  }
+  const auto cuda_data_type = CUDA_R_16F;
+  const auto cuda_c_data_type =
+      c.dtype() == torch::kFloat32 ? CUDA_R_32F : CUDA_R_16F;
+  const auto compute_type = CUDA_R_32F;
+  const float sp_alpha = 1.f;
+  // swap a and b, and use CUBLAS_OP_N. see the notes above
+  std::swap(a, b);
+  const cublasOperation_t cublas_trans_a = CUBLAS_OP_N;
+  const cublasOperation_t cublas_trans_b = CUBLAS_OP_N;
+  // m = (B^T).size(0) = B.size(1), and = A.size(1) after swap,
+  // negative axis is used because of the existence of batch matmul.
+  const int m = a.size(-1);
+  const int k = a.size(-2);
+  const int n = b.size(-2);
+  const int cublas_lda = m;
+  const int cublas_ldb = k;
+  const int cublas_ldc = m;
+  cublasHandle_t cublas_handle = get_cublas_handle();
+#if CUDA_VERSION >= 11000
+  cublasGemmAlgo_t algo = CUBLAS_GEMM_DEFAULT;
+#else
+  cublasGemmAlgo_t algo = CUBLAS_GEMM_DFALT_TENSOR_OP;
+#endif
+  const float sp_beta = 0.f;
+  if (a.sizes().size() == 2 && b.sizes().size() == 2) {
+    CUBLAS_CHECK(cublasGemmEx(
+        cublas_handle, cublas_trans_a, cublas_trans_b, m, n, k, &sp_alpha,
+        a.data_ptr(), cuda_data_type, cublas_lda, b.data_ptr(), cuda_data_type,
+        cublas_ldb, &sp_beta, c.data_ptr(), cuda_c_data_type, cublas_ldc,
+        compute_type, algo));
+  } else {
+    // batch matmul
+    assert(a.sizes().size() == 3 && b.sizes().size() == 3);
+    const long long int cublas_stride_a = m * k;
+    const long long int cublas_stride_b = k * n;
+    const long long int cublas_stride_c = m * n;
+    CUBLAS_CHECK(cublasGemmStridedBatchedEx(
+        cublas_handle, cublas_trans_a, cublas_trans_b, m,
+        n, k, &sp_alpha, a.data_ptr(), cuda_data_type, cublas_lda,
+        cublas_stride_a, b.data_ptr(), cuda_data_type, cublas_ldb, cublas_stride_b,
+        &sp_beta, c.data_ptr(), cuda_c_data_type, cublas_ldc, cublas_stride_c,
+        a.size(0), compute_type, algo));
+  }
}
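The comment's identity C = A·B ⟺ Cᵀ = Bᵀ·Aᵀ is the whole trick: a row-major buffer handed to column-major cuBLAS is read as the transpose, so computing Bᵀ·Aᵀ with swapped operands and CUBLAS_OP_N writes a buffer that, read back row-major, is exactly A·B. A small NumPy check of the identity (NumPy standing in for cuBLAS):

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.standard_normal((3, 5))
    B = rng.standard_normal((5, 2))

    # Column-major reading of row-major data transposes it, so asking a
    # column-major GEMM for B^T @ A^T stores, element for element, the row-major A @ B.
    C_T = B.T @ A.T
    assert np.allclose(C_T.T, A @ B)

This is also why m, n, k and the leading dimensions above are taken from the swapped operands.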

View File

@@ -118,7 +118,9 @@ void mm8_one(int64_t N, int64_t M,
using torch::Tensor;

-void gemm_fp16_cublas(Tensor a, Tensor b, Tensor c);
+#ifndef DISABLE_CUBLAS_GEMM
+void gemm_fp16_cublas_tensor(Tensor a, Tensor b, Tensor c);
+#endif

Tensor att_one(Tensor x, Tensor ln_w, Tensor ln_b, Tensor sx, Tensor k_mix,
               Tensor v_mix, Tensor r_mix, Tensor kw,
@@ -134,6 +136,16 @@ Tensor att_seq(Tensor x, Tensor sx, Tensor ln_w, Tensor ln_b, Tensor k_mix,
              Tensor ow, Tensor t_first, Tensor pp, Tensor aa, Tensor bb,
              Tensor t_decay, /* imm */ Tensor buf, /* out */ Tensor x_plus_out);
+Tensor att_one_v5(Tensor x, Tensor sx, Tensor s, Tensor ln_w, Tensor ln_b,
+                  Tensor lx_w, Tensor lx_b, Tensor k_mix, Tensor v_mix,
+                  Tensor r_mix, Tensor kw,
+                  /* imm */ Tensor kx, Tensor vw, /* imm */ Tensor vx,
+                  Tensor rw,
+                  /* imm */ Tensor rx, Tensor ow, Tensor t_first,
+                  /* imm */ Tensor k, Tensor t_decay, /* imm */ Tensor v,
+                  /* imm */ Tensor r, /* imm */ Tensor s1,
+                  /* out */ Tensor x_plus_out, /* out */ Tensor s2);
Tensor ffn_seq(Tensor x, Tensor sx, Tensor ln_w, Tensor ln_b, Tensor k_mix,
               Tensor r_mix, Tensor kw, Tensor vw, Tensor rw,
               /* imm */ Tensor buf,
@@ -148,8 +160,9 @@ PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
  m.def("wkv_forward", &wkv_forward, "wkv forward");
  m.def("mm8_seq", &mm8_seq, "mm8 seq");
  m.def("mm8_one", &mm8_one, "mm8 one");
-  m.def("gemm_fp16_cublas", &gemm_fp16_cublas, "gemv fp16 cublas");
+  m.def("gemm_fp16_cublas", &gemm_fp16_cublas_tensor, "gemv fp16 cublas");
  m.def("att_one", &att_one, "att one");
+  m.def("att_one_v5", &att_one_v5, "att one v5");
  m.def("att_seq", &att_seq, "att seq");
  m.def("ffn_seq", &ffn_seq, "ffn seq");
  m.def("ffn_one", &ffn_one, "ffn one");
@@ -159,8 +172,9 @@ TORCH_LIBRARY(rwkv, m) {
  m.def("wkv_forward", wkv_forward);
  m.def("mm8_seq", mm8_seq);
  m.def("mm8_one", mm8_one);
-  m.def("gemm_fp16_cublas", gemm_fp16_cublas);
+  m.def("gemm_fp16_cublas", gemm_fp16_cublas_tensor);
  m.def("att_one", att_one);
+  m.def("att_one_v5", &att_one_v5);
  m.def("att_seq", att_seq);
  m.def("ffn_seq", ffn_seq);
  m.def("ffn_one", ffn_one);

View File

@@ -3,7 +3,7 @@
########################################################################################################

from typing import Optional
-import types, gc, os, time, re
+import types, gc, os, time, re, platform
import torch
from torch.nn import functional as F
@@ -91,8 +91,10 @@ if os.environ.get("RWKV_CUDA_ON") == "1":
                f"{current_path}/cuda/att_one.cu",
                f"{current_path}/cuda/att_seq.cu",
                f"{current_path}/cuda/ffn.cu",
+               f"{current_path}/cuda/att_one_v5.cu",
            ],
            verbose=True,
+           extra_ldflags=["cublas.lib" if os.name == "nt" else ""],
            extra_cuda_cflags=[
                "-t 4",
                "-std=c++17",
@@ -149,26 +151,40 @@ if os.environ.get("RWKV_CUDA_ON") == "1":
        torch.ops.rwkv.mm8_one(N, M, x, w, mx, rx, my, ry, y)
        return y.to(dtype=x.dtype)

+else:
+    os.environ["RWKV_CUDA_ON"] = "0"
+
+if os.environ.get("RWKV_CUDA_ON") == "1":

    @MyStatic
    def gemm(a, b, output_dtype: Optional[torch.dtype] = None):
        if output_dtype is None:
            output_dtype = a.dtype
        if a.dtype == b.dtype == torch.float16 and a.device.type == "cuda":
-            assert len(b.shape) == 2
            if len(a.shape) == 1:
+                assert len(b.shape) == 2
                c = torch.empty((b.shape[-1],), dtype=output_dtype, device=a.device)
                a = a.unsqueeze(0)
            else:
-                c = torch.empty(
-                    (a.shape[0], b.shape[-1]), dtype=output_dtype, device=a.device
-                )
+                assert len(a.shape) == len(b.shape)
+                assert len(a.shape) == 2 or len(a.shape) == 3
+                # torch.empty((*a.shape[:-1], b.shape[-1])) doesn't work with jit
+                if len(a.shape) == 2:
+                    c = torch.empty(
+                        (a.shape[0], b.shape[-1]), dtype=output_dtype, device=a.device
+                    )
+                else:
+                    c = torch.empty(
+                        (a.shape[0], a.shape[1], b.shape[-1]),
+                        dtype=output_dtype,
+                        device=a.device,
+                    )
            torch.ops.rwkv.gemm_fp16_cublas(a, b, c)
            return c
        else:
            return (a @ b).to(output_dtype)

else:
-    os.environ["RWKV_CUDA_ON"] = "0"

    def gemm(a, b, output_dtype: Optional[torch.dtype] = None):
        if output_dtype is None:
@@ -217,7 +233,7 @@ class RWKV(MyModule):
        )  # load model to CPU first
        # it is supported to load a pure meta-tensor state dict (e.g. for quick testing)
        for k, v in self.w.items():
-            if v.is_meta:
+            if isinstance(v, torch.Tensor) and v.is_meta:
                # torch.zeros_like(v, device='cpu') doesn't produce an all-zero tensor
                # if v is a meta tensor
                self.w[k] = torch.zeros(v.shape, dtype=v.dtype, device="cpu")
@@ -247,9 +263,14 @@ class RWKV(MyModule):
args.n_embd = w["emb.weight"].shape[1] args.n_embd = w["emb.weight"].shape[1]
args.n_layer = 0 args.n_layer = 0
keys = list(w.keys()) keys = list(w.keys())
self.version = 4
for x in keys: for x in keys:
layer_id = int(x.split(".")[1]) if ("blocks." in x) else 0 layer_id = int(x.split(".")[1]) if ("blocks." in x) else 0
args.n_layer = max(args.n_layer, layer_id + 1) args.n_layer = max(args.n_layer, layer_id + 1)
if "ln_x" in x:
self.version = 5
if self.version == 5 and "att.time_decay" in x:
args.n_head = w[x].shape[0]
####################### Compute strategy ####################### Compute strategy
@@ -352,6 +373,20 @@ class RWKV(MyModule):
del w["blocks.0.ln0.bias"] del w["blocks.0.ln0.bias"]
print_need_newline = False print_need_newline = False
REAL_TIME_FIRST = False
for x in list(w.keys()):
if ".time_faaaa" in x:
REAL_TIME_FIRST = True
if REAL_TIME_FIRST:
w = {
k.replace(".time_faaaa", ".time_first")
if ".time_faaaa" in k
else k: v
for k, v in w.items()
}
self.w = w
keys = list(w.keys()) keys = list(w.keys())
for x in keys: for x in keys:
w[x].requires_grad = False w[x].requires_grad = False
@@ -382,8 +417,19 @@ class RWKV(MyModule):
                    w[x] = w[x].t()
                if ".time_decay" in x:  # need fp32 for this
-                    w[x] = -torch.exp(w[x].float())
+                    if self.version == 4:
+                        w[x] = -torch.exp(w[x].float())
+                    elif self.version == 5:
+                        w[x] = torch.exp(-torch.exp(w[x].float())).reshape(-1, 1, 1)
                elif ".time_first" in x:  # need fp32 for this
+                    if self.version == 4:
+                        w[x] = w[x].float()
+                    elif self.version == 5:
+                        if REAL_TIME_FIRST:
+                            w[x] = w[x].float().reshape(-1, 1, 1)
+                        else:
+                            w[x] = torch.exp(w[x].float()).reshape(-1, 1, 1)
+                elif ".ln_x" in x:  # need fp32 for group_norm
                    w[x] = w[x].float()
                else:
                    if (len(w[x].shape) == 2) and ("emb" not in x):
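The two branches store different transforms of the raw time_decay parameter: v4 keeps w = -exp(decay) and applies it additively in log space inside the wkv kernel, while v5 precomputes w = exp(-exp(decay)), a per-step multiplicative decay in (0, 1). A quick sketch of how the two parameterizations relate:

    import torch

    decay = torch.tensor([-1.0, 0.0, 2.0])    # raw learned parameter

    w4 = -torch.exp(decay)                    # v4: added to a running log magnitude each step
    w5 = torch.exp(-torch.exp(decay))         # v5: multiplies the state directly
    assert torch.all((w5 > 0) & (w5 < 1))
    # v5 simply exponentiates v4's log-space decay once, up front:
    assert torch.allclose(torch.exp(w4), w5)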
@@ -931,6 +977,147 @@ class RWKV(MyModule):
    ########################################################################################################
@MyFunction
def att_one_v5(
self,
x,
sx,
s,
ln_w,
ln_b,
lx_w,
lx_b,
k_mix,
v_mix,
r_mix,
t_decay,
t_first,
kw,
vw,
rw,
ow,
kmx,
krx,
kmy,
kry,
vmx,
vrx,
vmy,
vry,
rmx,
rrx,
rmy,
rry,
omx,
orx,
omy,
ory,
):
xx = F.layer_norm(x, (x.shape[-1],), weight=ln_w, bias=ln_b)
kx = xx * k_mix + sx * (1 - k_mix)
vx = xx * v_mix + sx * (1 - v_mix)
rx = xx * r_mix + sx * (1 - r_mix)
H = t_decay.shape[0]
S = x.shape[-1] // H
r = gemm(rx, rw, output_dtype=torch.float32).view(H, 1, S)
k = gemm(kx, kw, output_dtype=torch.float32).view(H, S, 1)
v = gemm(vx, vw, output_dtype=torch.float32).view(H, 1, S)
a = gemm(k, v)
out = r @ (t_first * a + s)
s = a + t_decay * s
out = out.flatten()
out = F.group_norm(
out.unsqueeze(0), num_groups=H, weight=lx_w, bias=lx_b
).squeeze(0)
out = out.to(dtype=x.dtype)
out = gemm(out, ow)
return x + out, xx, s
@MyFunction
def att_seq_v5(
self,
x,
sx,
s,
ln_w,
ln_b,
lx_w,
lx_b,
k_mix,
v_mix,
r_mix,
t_decay,
t_first,
kw,
vw,
rw,
ow,
kmx,
krx,
kmy,
kry,
vmx,
vrx,
vmy,
vry,
rmx,
rrx,
rmy,
rry,
omx,
orx,
omy,
ory,
):
xx = F.layer_norm(x, (x.shape[-1],), weight=ln_w, bias=ln_b)
sx = torch.cat((sx.unsqueeze(0), xx[:-1, :]))
kx = xx * k_mix + sx * (1 - k_mix)
vx = xx * v_mix + sx * (1 - v_mix)
rx = xx * r_mix + sx * (1 - r_mix)
H = t_decay.shape[0]
S = x.shape[-1] // H
T = x.shape[0]
w = t_decay.reshape(-1, 1)
u = t_first.reshape(-1, 1)
ws = w.pow(T).reshape(H, 1, 1)
ind = torch.arange(T - 1, -1, -1, device=w.device).unsqueeze(0).repeat(H, 1)
w = w.repeat(1, T).pow(ind)
wk = w.reshape(H, 1, T)
wb = wk.transpose(-2, -1).flip(1)
w = torch.cat([w[:, 1:], u], dim=1)
w = F.pad(w, (0, T))
w = torch.tile(w, [T])
w = w[:, :-T].reshape(-1, T, 2 * T - 1)
w = w[:, :, T - 1 :].reshape(H, T, T)
r = gemm(rx, rw, output_dtype=torch.float32).view(T, H, S).transpose(0, 1)
k = (
gemm(kx, kw, output_dtype=torch.float32)
.view(T, H, S)
.transpose(0, 1)
.transpose(-2, -1)
)
v = gemm(vx, vw, output_dtype=torch.float32).view(T, H, S).transpose(0, 1)
out = ((r @ k) * w) @ v + (r @ s) * wb
s = ws * s + (k * wk) @ v
out = out.transpose(0, 1).contiguous().reshape(T, H * S)
out = F.group_norm(out, num_groups=H, weight=lx_w, bias=lx_b)
out = out.to(dtype=x.dtype)
out = gemm(out, ow)
return x + out, xx[-1, :], s
########################################################################################################
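The pad/tile/reshape sequence in att_seq_v5 builds, per head, a T x T matrix whose (i, j) entry is w^(i-j-1) for j < i, t_first for j == i, and 0 for j > i, so the whole decayed, causally masked attention becomes one batched matmul. A slower but direct reference for what those lines compute (scalar w and u per head assumed; the extra `(r @ s) * wb` term in the fast path folds in the state carried across chunks):

    import torch

    def decay_matrix(w: torch.Tensor, u: torch.Tensor, T: int) -> torch.Tensor:
        """w, u: (H,) per-head decay and bonus; returns (H, T, T)."""
        H = w.shape[0]
        m = torch.zeros(H, T, T)
        for i in range(T):          # query position
            for j in range(T):      # key position
                if j < i:
                    m[:, i, j] = w ** (i - j - 1)
                elif j == i:
                    m[:, i, j] = u
        return m

With this matrix, out = ((r @ k) * decay_matrix(w, u, T)) @ v matches the in-window part of the vectorized construction above.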
if os.environ["RWKV_CUDA_ON"] == "1": if os.environ["RWKV_CUDA_ON"] == "1":
@MyFunction @MyFunction
@@ -1140,7 +1327,7 @@ class RWKV(MyModule):
            xx = torch.ops.rwkv.ffn_seq(
                x, sx, ln_w, ln_b, k_mix, r_mix, kw, vw, rw, buf, x_plus_out
            )
-            return x_plus_out, xx[-1:]
+            return x_plus_out, xx[-1, :]
        @MyFunction
        def cuda_att_one_fp16(
@@ -1220,6 +1407,86 @@ class RWKV(MyModule):
            )
            return x_plus_out_t, xx, t1_t, t2_t, p_t
@MyFunction
def cuda_att_one_v5_fp16(
self,
x,
sx,
s,
ln_w,
ln_b,
lx_w,
lx_b,
k_mix,
v_mix,
r_mix,
t_decay,
t_first,
kw,
vw,
rw,
ow,
kmx,
krx,
kmy,
kry,
vmx,
vrx,
vmy,
vry,
rmx,
rrx,
rmy,
rry,
omx,
orx,
omy,
ory,
):
kx = torch.empty_like(x)
vx = torch.empty_like(x)
rx = torch.empty_like(x)
H = t_decay.shape[0]
S = x.shape[-1] // H
r = torch.empty((H * S,), dtype=torch.float32, device=x.device)
k = torch.empty((H * S,), dtype=torch.float32, device=x.device)
v = torch.empty((H * S,), dtype=torch.float32, device=x.device)
s1 = torch.empty((H, S, S), dtype=torch.float32, device=x.device)
s2 = torch.empty((H, S, S), dtype=torch.float32, device=x.device)
x_plus_out = torch.empty_like(x)
xx = torch.ops.rwkv.att_one_v5(
x,
sx,
s,
ln_w,
ln_b,
lx_w,
lx_b,
k_mix,
v_mix,
r_mix,
kw,
kx,
vw,
vx,
rw,
rx,
ow,
t_first,
k,
t_decay,
v,
r,
s1,
x_plus_out,
s2,
)
return x_plus_out, xx, s2
        @MyFunction
        def cuda_ffn_one_fp16(
            self,
@@ -1265,34 +1532,63 @@ class RWKV(MyModule):
        args = self.args
        if state == None:
-            state = [None] * args.n_layer * 5
-            for i in range(
-                args.n_layer
-            ):  # state: 0=att_xx 1=att_aa 2=att_bb 3=att_pp 4=ffn_xx
-                dd = self.strategy[i]
-                dev = dd.device
-                atype = dd.atype
-                state[i * 5 + 0] = torch.zeros(
-                    args.n_embd, dtype=atype, requires_grad=False, device=dev
-                ).contiguous()
-                state[i * 5 + 1] = torch.zeros(
-                    args.n_embd, dtype=torch.float, requires_grad=False, device=dev
-                ).contiguous()
-                state[i * 5 + 2] = torch.zeros(
-                    args.n_embd, dtype=torch.float, requires_grad=False, device=dev
-                ).contiguous()
-                state[i * 5 + 3] = (
-                    torch.zeros(
-                        args.n_embd,
-                        dtype=torch.float,
-                        requires_grad=False,
-                        device=dev,
-                    ).contiguous()
-                    - 1e30
-                )
-                state[i * 5 + 4] = torch.zeros(
-                    args.n_embd, dtype=atype, requires_grad=False, device=dev
-                ).contiguous()
+            if self.version == 4:
+                state = [None] * args.n_layer * 5
+                for i in range(
+                    args.n_layer
+                ):  # state: 0=att_xx 1=att_aa 2=att_bb 3=att_pp 4=ffn_xx
+                    dd = self.strategy[i]
+                    dev = dd.device
+                    atype = dd.atype
+                    state[i * 5 + 0] = torch.zeros(
+                        args.n_embd, dtype=atype, requires_grad=False, device=dev
+                    ).contiguous()
+                    state[i * 5 + 1] = torch.zeros(
+                        args.n_embd,
+                        dtype=torch.float,
+                        requires_grad=False,
+                        device=dev,
+                    ).contiguous()
+                    state[i * 5 + 2] = torch.zeros(
+                        args.n_embd,
+                        dtype=torch.float,
+                        requires_grad=False,
+                        device=dev,
+                    ).contiguous()
+                    state[i * 5 + 3] = (
+                        torch.zeros(
+                            args.n_embd,
+                            dtype=torch.float,
+                            requires_grad=False,
+                            device=dev,
+                        ).contiguous()
+                        - 1e30
+                    )
+                    state[i * 5 + 4] = torch.zeros(
+                        args.n_embd, dtype=atype, requires_grad=False, device=dev
+                    ).contiguous()
+            elif self.version == 5:
+                state = [None] * args.n_layer * 3
+                for i in range(args.n_layer):  # state: 0=att_xx 1=att_kv 2=ffn_xx
+                    dd = self.strategy[i]
+                    dev = dd.device
+                    atype = dd.atype
+                    state[i * 3 + 0] = torch.zeros(
+                        args.n_embd, dtype=atype, requires_grad=False, device=dev
+                    ).contiguous()
+                    state[i * 3 + 1] = torch.zeros(
+                        (
+                            args.n_head,
+                            args.n_embd // args.n_head,
+                            args.n_embd // args.n_head,
+                        ),
+                        dtype=torch.float,
+                        requires_grad=False,
+                        device=dev,
+                    ).contiguous()
+                    state[i * 3 + 2] = torch.zeros(
+                        args.n_embd, dtype=atype, requires_grad=False, device=dev
+                    ).contiguous()

        seq_mode = len(tokens) > 1
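The per-layer state thus shrinks from five n_embd vectors in v4 (att_xx, att_aa, att_bb, att_pp, ffn_xx) to two vectors plus one (n_head, head_size, head_size) matrix in v5 (att_xx, att_kv, ffn_xx). A shape-only sketch with made-up dimensions:

    import torch

    n_layer, n_embd, n_head = 24, 2048, 32   # hypothetical model dimensions
    head_size = n_embd // n_head

    state = []
    for _ in range(n_layer):  # v5 layout: 0=att_xx 1=att_kv 2=ffn_xx
        state.append(torch.zeros(n_embd))
        state.append(torch.zeros(n_head, head_size, head_size))
        state.append(torch.zeros(n_embd))

    assert len(state) == n_layer * 3
    assert state[1].shape == (n_head, head_size, head_size)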
@@ -1317,9 +1613,13 @@ class RWKV(MyModule):
                    ATT = self.cuda_att_seq_i8
                else:
                    ATT = self.cuda_att_seq_naive
+                if self.version == 5:
+                    ATT = self.att_seq_v5
            else:
                ATT = self.att_one if wtype != torch.uint8 else self.att_one_i8
                FFN = self.ffn_one if wtype != torch.uint8 else self.ffn_one_i8
+                if self.version == 5:
+                    ATT = self.att_one_v5
                if (
                    "cuda" in str(dev)
                    and os.environ["RWKV_CUDA_ON"] == "1"
@@ -1327,6 +1627,8 @@ class RWKV(MyModule):
                ):
                    ATT = self.cuda_att_one_fp16
                    FFN = self.cuda_ffn_one_fp16
+                    if self.version == 5:
+                        ATT = self.cuda_att_one_v5_fp16

            x = x.to(dtype=atype, device=dev)
@@ -1355,46 +1657,82 @@ class RWKV(MyModule):
orx = w[f"{att}output.weight_rx"] if wtype == torch.uint8 else x orx = w[f"{att}output.weight_rx"] if wtype == torch.uint8 else x
omy = w[f"{att}output.weight_my"] if wtype == torch.uint8 else x omy = w[f"{att}output.weight_my"] if wtype == torch.uint8 else x
ory = w[f"{att}output.weight_ry"] if wtype == torch.uint8 else x ory = w[f"{att}output.weight_ry"] if wtype == torch.uint8 else x
( if self.version == 4:
x, (
state[i * 5 + 0], x,
state[i * 5 + 1], state[i * 5 + 0],
state[i * 5 + 2], state[i * 5 + 1],
state[i * 5 + 3], state[i * 5 + 2],
) = ATT( state[i * 5 + 3],
x, ) = ATT(
state[i * 5 + 0], x,
state[i * 5 + 1], state[i * 5 + 0],
state[i * 5 + 2], state[i * 5 + 1],
state[i * 5 + 3], state[i * 5 + 2],
w[f"{bbb}ln1.weight"], state[i * 5 + 3],
w[f"{bbb}ln1.bias"], w[f"{bbb}ln1.weight"],
w[f"{att}time_mix_k"], w[f"{bbb}ln1.bias"],
w[f"{att}time_mix_v"], w[f"{att}time_mix_k"],
w[f"{att}time_mix_r"], w[f"{att}time_mix_v"],
w[f"{att}time_decay"], w[f"{att}time_mix_r"],
w[f"{att}time_first"], w[f"{att}time_decay"],
kw, w[f"{att}time_first"],
vw, kw,
rw, vw,
ow, rw,
kmx, ow,
krx, kmx,
kmy, krx,
kry, kmy,
vmx, kry,
vrx, vmx,
vmy, vrx,
vry, vmy,
rmx, vry,
rrx, rmx,
rmy, rrx,
rry, rmy,
omx, rry,
orx, omx,
omy, orx,
ory, omy,
) ory,
)
elif self.version == 5:
x, state[i * 3 + 0], state[i * 3 + 1] = ATT(
x,
state[i * 3 + 0],
state[i * 3 + 1],
w[f"{bbb}ln1.weight"],
w[f"{bbb}ln1.bias"],
w[f"{att}ln_x.weight"],
w[f"{att}ln_x.bias"],
w[f"{att}time_mix_k"],
w[f"{att}time_mix_v"],
w[f"{att}time_mix_r"],
w[f"{att}time_decay"],
w[f"{att}time_first"],
kw,
vw,
rw,
ow,
kmx,
krx,
kmy,
kry,
vmx,
vrx,
vmy,
vry,
rmx,
rrx,
rmy,
rry,
omx,
orx,
omy,
ory,
)
if dd.stream: if dd.stream:
del kw, vw, rw, ow del kw, vw, rw, ow
@@ -1417,9 +1755,13 @@ class RWKV(MyModule):
rrx = w[f"{ffn}receptance.weight_rx"] if wtype == torch.uint8 else x rrx = w[f"{ffn}receptance.weight_rx"] if wtype == torch.uint8 else x
rmy = w[f"{ffn}receptance.weight_my"] if wtype == torch.uint8 else x rmy = w[f"{ffn}receptance.weight_my"] if wtype == torch.uint8 else x
rry = w[f"{ffn}receptance.weight_ry"] if wtype == torch.uint8 else x rry = w[f"{ffn}receptance.weight_ry"] if wtype == torch.uint8 else x
x, state[i * 5 + 4] = FFN( if self.version == 4:
offset = i * 5 + 4
elif self.version == 5:
offset = i * 3 + 2
x, state[offset] = FFN(
x, x,
state[i * 5 + 4], state[offset],
w[f"{bbb}ln2.weight"], w[f"{bbb}ln2.weight"],
w[f"{bbb}ln2.bias"], w[f"{bbb}ln2.bias"],
w[f"{ffn}time_mix_k"], w[f"{ffn}time_mix_k"],

Binary file not shown.

View File

@@ -0,0 +1,75 @@
#include <cublas_v2.h>
#include <cuda.h>
#include <cuda_fp16.h>
#include <cuda_runtime.h>
#include <torch/extension.h>
#include <c10/cuda/CUDAGuard.h>
#include <ATen/cuda/CUDAContext.h>
#define CUBLAS_CHECK(condition) \
for (cublasStatus_t _cublas_check_status = (condition); \
_cublas_check_status != CUBLAS_STATUS_SUCCESS;) \
throw std::runtime_error("cuBLAS error " + \
std::to_string(_cublas_check_status) + " at " + \
std::to_string(__LINE__));
#define CUDA_CHECK(condition) \
for (cudaError_t _cuda_check_status = (condition); \
_cuda_check_status != cudaSuccess;) \
throw std::runtime_error( \
"CUDA error " + std::string(cudaGetErrorString(_cuda_check_status)) + \
" at " + std::to_string(__LINE__));
/*
NOTE: blas gemm is column-major by default, but we need row-major output.
The data of row-major, transposed matrix is exactly the same as the
column-major, non-transposed matrix, and C = A * B ---> C^T = B^T * A^T
*/
void gemm_fp16_cublas(torch::Tensor a, torch::Tensor b, torch::Tensor c) {
const at::cuda::OptionalCUDAGuard device_guard(device_of(a));
const auto cuda_data_type = CUDA_R_16F;
const auto cuda_c_data_type =
c.dtype() == torch::kFloat32 ? CUDA_R_32F : CUDA_R_16F;
const auto compute_type = CUDA_R_32F;
const float sp_alpha = 1.f;
// swap a and b, and use CUBLAS_OP_N. see the notes above
std::swap(a, b);
const cublasOperation_t cublas_trans_a = CUBLAS_OP_N;
const cublasOperation_t cublas_trans_b = CUBLAS_OP_N;
// m = (B^T).size(0) = B.size(1), and = A.size(1) after swap,
// negative axis is used because of the existence of batch matmul.
const int m = a.size(-1);
const int k = a.size(-2);
const int n = b.size(-2);
const int cublas_lda = m;
const int cublas_ldb = k;
const int cublas_ldc = m;
cublasHandle_t cublas_handle = at::cuda::getCurrentCUDABlasHandle();
#if CUDA_VERSION >= 11000
cublasGemmAlgo_t algo = CUBLAS_GEMM_DEFAULT;
#else
cublasGemmAlgo_t algo = CUBLAS_GEMM_DFALT_TENSOR_OP;
#endif
const float sp_beta = 0.f;
if (a.sizes().size() == 2 && b.sizes().size() == 2) {
CUBLAS_CHECK(cublasGemmEx(
cublas_handle, cublas_trans_a, cublas_trans_b, m, n, k, &sp_alpha,
a.data_ptr(), cuda_data_type, cublas_lda, b.data_ptr(), cuda_data_type,
cublas_ldb, &sp_beta, c.data_ptr(), cuda_c_data_type, cublas_ldc,
compute_type, algo));
} else {
// batch matmul
assert(a.sizes().size() == 3 && b.sizes().size() == 3);
const long long int cublas_stride_a = m * k;
const long long int cublas_stride_b = k * n;
const long long int cublas_stride_c = m * n;
CUBLAS_CHECK(cublasGemmStridedBatchedEx(
cublas_handle, cublas_trans_a, cublas_trans_b, m,
n, k, &sp_alpha, a.data_ptr(), cuda_data_type, cublas_lda,
cublas_stride_a, b.data_ptr(), cuda_data_type, cublas_ldb, cublas_stride_b,
&sp_beta, c.data_ptr(), cuda_c_data_type, cublas_ldc, cublas_stride_c,
a.size(0), compute_type, algo));
}
}

View File

@@ -0,0 +1,246 @@
#include <stdio.h>
#include <assert.h>
#include "ATen/ATen.h"
#include <cuda_fp16.h>
#define MIN_VALUE (-1e38)
typedef at::Half fp16;
__half *cast(fp16 *ptr) {
return reinterpret_cast<__half *>(ptr);
}
template <typename F>
__global__ void kernel_wkv_forward(const int B, const int T, const int C,
const float *__restrict__ const _w, const float *__restrict__ const _u, const F *__restrict__ const _k, const F *__restrict__ const _v,
F *__restrict__ const _y, float *__restrict__ const _aa, float *__restrict__ const _bb, float *__restrict__ const _pp) {
const int idx = blockIdx.x * blockDim.x + threadIdx.x;
const int _b = idx / C;
const int _c = idx % C;
const int _offset = _b * T * C + _c;
const int _state_offset = _b * C + _c;
float u = _u[_c];
float w = _w[_c];
const F *__restrict__ const k = _k + _offset;
const F *__restrict__ const v = _v + _offset;
F *__restrict__ const y = _y + _offset;
float aa = _aa[_state_offset];
float bb = _bb[_state_offset];
float pp = _pp[_state_offset];
for (int i = 0; i < T; i++) {
const int ii = i * C;
const float kk = float(k[ii]);
const float vv = float(v[ii]);
float ww = u + kk;
float p = max(pp, ww);
float e1 = exp(pp - p);
float e2 = exp(ww - p);
y[ii] = F((e1 * aa + e2 * vv) / (e1 * bb + e2));
ww = w + pp;
p = max(ww, kk);
e1 = exp(ww - p);
e2 = exp(kk - p);
aa = e1 * aa + e2 * vv;
bb = e1 * bb + e2;
pp = p;
}
_aa[_state_offset] = aa;
_bb[_state_offset] = bb;
_pp[_state_offset] = pp;
}
template <typename F>
void cuda_wkv_forward(int B, int T, int C, float *w, float *u, F *k, F *v, F *y, float *aa, float *bb, float *pp) {
dim3 threadsPerBlock( min(C, 32) );
assert(B * C % threadsPerBlock.x == 0);
dim3 numBlocks(B * C / threadsPerBlock.x);
kernel_wkv_forward<<<numBlocks, threadsPerBlock>>>(B, T, C, w, u, k, v, y, aa, bb, pp);
}
template void cuda_wkv_forward<fp16>(
int B, int T, int C,
float *w, float *u, fp16 *k, fp16 *v, fp16 *y,
float *aa, float *bb, float *pp);
template void cuda_wkv_forward<float>(
int B, int T, int C,
float *w, float *u, float *k, float *v, float *y,
float *aa, float *bb, float *pp);
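kernel_wkv_forward evaluates the v4 attention as a running weighted average aa/bb while tracking the largest exponent pp, so only differences of exponents are ever exponentiated (the usual log-sum-exp stabilization). The same recurrence for a single channel, restated in NumPy:

    import numpy as np

    def wkv_channel(w: float, u: float, k: np.ndarray, v: np.ndarray) -> np.ndarray:
        """w (negative log decay), u (bonus): scalars; k, v: (T,). Returns y: (T,)."""
        aa, bb, pp = 0.0, 0.0, -1e38
        y = np.empty_like(v, dtype=np.float64)
        for t in range(len(k)):
            # output: the current token participates with bonus u
            p = max(pp, u + k[t])
            e1, e2 = np.exp(pp - p), np.exp(u + k[t] - p)
            y[t] = (e1 * aa + e2 * v[t]) / (e1 * bb + e2)
            # state update: the past decays by w, applied additively in log space
            p = max(pp + w, k[t])
            e1, e2 = np.exp(pp + w - p), np.exp(k[t] - p)
            aa, bb, pp = e1 * aa + e2 * v[t], e1 * bb + e2, p
        return y

    y = wkv_channel(-0.5, 0.3, np.random.randn(8), np.random.randn(8))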
__global__ void kernel_mm_seq_fp32i8(
const int B, const int N, const int M,
const float *__restrict__ const x, const int x_stride,
const uint8_t *__restrict__ const w, const int w_stride,
const float *__restrict__ const mx,
const float *__restrict__ const rx,
const float *__restrict__ const my,
const float *__restrict__ const ry,
float *__restrict__ const y, const int y_stride) {
const int i = blockIdx.x * blockDim.x + threadIdx.x;
const int k = blockIdx.y * blockDim.y + threadIdx.y;
if (i < B && k < M) {
float y_local = 0;
for (int j = 0; j < N; ++j) {
y_local += x[i * x_stride + j] * (
(float(w[j * w_stride + k]) + 0.5f)
* rx[k] * ry[j] + mx[k] + my[j]
);
}
y[i * y_stride + k] = y_local;
}
}
template <typename F>
void cuda_mm8_seq(int B, int N, int M,
F *x, int x_stride,
uint8_t *w, int w_stride,
F *mx, F *rx,
F *my, F *ry,
F *y, int y_stride);
template <>
void cuda_mm8_seq<float>(int B, int N, int M,
float *x, int x_stride,
uint8_t *w, int w_stride,
float *mx, float *rx,
float *my, float *ry,
float *y, int y_stride) {
dim3 blockSize(1, 128);
dim3 gridSize((B + blockSize.x - 1) / blockSize.x, (M + blockSize.y - 1) / blockSize.y);
kernel_mm_seq_fp32i8<<<gridSize, blockSize>>>(
B, N, M, x, x_stride, w, w_stride,
mx, rx, my, ry, y, y_stride);
}
__global__ void kernel_mm_seq_fp16i8(
const int B, const int N, const int M,
const __half *__restrict__ const x, const int x_stride,
const uint8_t *__restrict__ const w, const int w_stride,
const __half *__restrict__ const mx,
const __half *__restrict__ const rx,
const __half *__restrict__ const my,
const __half *__restrict__ const ry,
__half *__restrict__ const y, const int y_stride) {
const int i = blockIdx.x * blockDim.x + threadIdx.x;
const int k = blockIdx.y * blockDim.y + threadIdx.y;
if (i < B && k < M) {
float y_local = 0;
for (int j = 0; j < N; ++j) {
y_local += __half2float(x[i * x_stride + j]) * (
(float(w[j * w_stride + k]) + 0.5f)
* __half2float(rx[k]) * __half2float(ry[j])
+ __half2float(mx[k]) + __half2float(my[j])
);
}
y[i * y_stride + k] = __float2half(y_local);
}
}
template <>
void cuda_mm8_seq<fp16>(int B, int N, int M,
fp16 *x, int x_stride,
uint8_t *w, int w_stride,
fp16 *mx, fp16 *rx,
fp16 *my, fp16 *ry,
fp16 *y, int y_stride) {
dim3 blockSize(1, 128);
dim3 gridSize((B + blockSize.x - 1) / blockSize.x, (M + blockSize.y - 1) / blockSize.y);
kernel_mm_seq_fp16i8<<<gridSize, blockSize>>>(
B, N, M, cast(x), x_stride, w, w_stride,
cast(mx), cast(rx), cast(my), cast(ry), cast(y), y_stride);
}
#define MM8_ONE_JSPLIT 24
#define MM8_ONE_TILE 1024
__global__ void kernel_mm_one_fp32i8(
const int N, const int M,
const float *__restrict__ const x,
const uint8_t *__restrict__ const w, const int w_stride,
const float *__restrict__ const mx,
const float *__restrict__ const rx,
const float *__restrict__ const my,
const float *__restrict__ const ry,
float *__restrict__ const y) {
const int k = blockIdx.y * blockDim.y + threadIdx.y;
const int j0 = min(N, blockIdx.x * ((N + MM8_ONE_JSPLIT - 1) / MM8_ONE_JSPLIT));
const int j1 = min(N, (blockIdx.x + 1) * ((N + MM8_ONE_JSPLIT - 1) / MM8_ONE_JSPLIT));
if (k < M) {
float y_local = 0;
for (int j = j0; j < j1; ++j) {
y_local += x[j] * (
(float(w[j * w_stride + k]) + 0.5f)
* rx[k] * ry[j] + mx[k] + my[j]
);
}
atomicAdd(&y[k], y_local);
}
}
template <typename F>
void cuda_mm8_one(int N, int M,
F *x,
uint8_t *w, int w_stride,
F *mx, F *rx,
F *my, F *ry,
float *y);
template <>
void cuda_mm8_one<float>(int N, int M,
float *x,
uint8_t *w, int w_stride,
float *mx, float *rx,
float *my, float *ry,
float *y) {
dim3 blockSize(1, MM8_ONE_TILE);
dim3 gridSize(MM8_ONE_JSPLIT, (M + blockSize.y - 1) / blockSize.y);
kernel_mm_one_fp32i8<<<gridSize, blockSize>>>(
N, M, x, w, w_stride,
mx, rx, my, ry, y);
}
__global__ void kernel_mm_one_fp16i8(
const int N, const int M,
const __half *__restrict__ const x,
const uint8_t *__restrict__ const w, const int w_stride,
const __half *__restrict__ const mx,
const __half *__restrict__ const rx,
const __half *__restrict__ const my,
const __half *__restrict__ const ry,
float *__restrict__ const y) {
const int k = blockIdx.y * blockDim.y + threadIdx.y;
const int j0 = min(N, blockIdx.x * ((N + MM8_ONE_JSPLIT - 1) / MM8_ONE_JSPLIT));
const int j1 = min(N, (blockIdx.x + 1) * ((N + MM8_ONE_JSPLIT - 1) / MM8_ONE_JSPLIT));
if (k < M) {
float y_local = 0;
for (int j = j0; j < j1; ++j) {
y_local += __half2float(x[j]) * (
(float(w[j * w_stride + k]) + 0.5f)
* __half2float(rx[k]) * __half2float(ry[j])
+ __half2float(mx[k]) + __half2float(my[j])
);
}
atomicAdd(&y[k], y_local);
}
}
template <>
void cuda_mm8_one<fp16>(int N, int M,
fp16 *x,
uint8_t *w, int w_stride,
fp16 *mx, fp16 *rx,
fp16 *my, fp16 *ry,
float *y) {
dim3 blockSize(1, MM8_ONE_TILE);
dim3 gridSize(MM8_ONE_JSPLIT, (M + blockSize.y - 1) / blockSize.y);
kernel_mm_one_fp16i8<<<gridSize, blockSize>>>(
N, M, cast(x), w, w_stride,
cast(mx), cast(rx), cast(my), cast(ry), y);
}
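All four mm8 kernels above share one dequantization rule: a uint8 weight is mapped back to float as (w + 0.5) * rx[k] * ry[j] + mx[k] + my[j], a per-row/per-column affine scale plus offsets, with the +0.5 centering the rounding interval. A NumPy reference for the seq variant (shapes as in the kernels):

    import numpy as np

    def mm8_seq_ref(x, w_u8, mx, rx, my, ry):
        """x: (B, N) float; w_u8: (N, M) uint8; mx, rx: (M,); my, ry: (N, 1)."""
        w = (w_u8.astype(np.float32) + 0.5) * (ry * rx) + (my + mx)  # broadcast to (N, M)
        return x @ w

The CUDA versions never materialize the dequantized w; they fold this affine map into the inner product instead.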

backend-python/rwkv_pip/cuda/rwkv5.cu vendored Normal file
View File

@@ -0,0 +1,88 @@
#include <stdio.h>
#include <assert.h>
#include "ATen/ATen.h"
typedef at::BFloat16 bf16;
typedef at::Half fp16;
typedef float fp32;
template <typename F>
__global__ void kernel_forward(const int B, const int T, const int C, const int H, float *__restrict__ _state,
const F *__restrict__ const _r, const F *__restrict__ const _k, const F *__restrict__ const _v, const float *__restrict__ _w, const F *__restrict__ _u,
F *__restrict__ const _y)
{
const int b = blockIdx.x / H;
const int h = blockIdx.x % H;
const int i = threadIdx.x;
_w += h*_N_;
_u += h*_N_;
_state += h*_N_*_N_ + i*_N_; // wrong if B > 1 !!!
__shared__ float r[_N_], k[_N_], u[_N_], w[_N_];
float state[_N_];
#pragma unroll
for (int j = 0; j < _N_; j++)
state[j] = _state[j];
__syncthreads();
u[i] = float(_u[i]);
w[i] = _w[i];
__syncthreads();
for (int t = b*T*C + h*_N_ + i; t < (b+1)*T*C + h*_N_ + i; t += C)
{
__syncthreads();
r[i] = float(_r[t]);
k[i] = float(_k[t]);
__syncthreads();
const float v = float(_v[t]);
float y = 0;
#pragma unroll
for (int j = 0; j < _N_; j+=4)
{
const float4& r_ = (float4&)(r[j]);
const float4& k_ = (float4&)(k[j]);
const float4& w_ = (float4&)(w[j]);
const float4& u_ = (float4&)(u[j]);
float4& s = (float4&)(state[j]);
float4 x;
x.x = k_.x * v;
x.y = k_.y * v;
x.z = k_.z * v;
x.w = k_.w * v;
y += r_.x * (u_.x * x.x + s.x);
y += r_.y * (u_.y * x.y + s.y);
y += r_.z * (u_.z * x.z + s.z);
y += r_.w * (u_.w * x.w + s.w);
s.x = s.x * w_.x + x.x;
s.y = s.y * w_.y + x.y;
s.z = s.z * w_.z + x.z;
s.w = s.w * w_.w + x.w;
}
_y[t] = F(y);
}
#pragma unroll
for (int j = 0; j < _N_; j++)
_state[j] = state[j];
}
void cuda_forward_bf16(int B, int T, int C, int H, float *state, bf16 *r, bf16 *k, bf16 *v, float *w, bf16 *u, bf16 *y)
{
assert(H*_N_ == C);
kernel_forward<<<dim3(B * H), dim3(_N_)>>>(B, T, C, H, state, r, k, v, w, u, y);
}
void cuda_forward_fp16(int B, int T, int C, int H, float *state, fp16 *r, fp16 *k, fp16 *v, float *w, fp16 *u, fp16 *y)
{
assert(H*_N_ == C);
kernel_forward<<<dim3(B * H), dim3(_N_)>>>(B, T, C, H, state, r, k, v, w, u, y);
}
void cuda_forward_fp32(int B, int T, int C, int H, float *state, fp32 *r, fp32 *k, fp32 *v, float *w, fp32 *u, fp32 *y)
{
assert(H*_N_ == C);
kernel_forward<<<dim3(B * H), dim3(_N_)>>>(B, T, C, H, state, r, k, v, w, u, y);
}

View File

@@ -0,0 +1,34 @@
#include <torch/extension.h>
#include "ATen/ATen.h"
#include <c10/cuda/CUDAGuard.h>
typedef at::BFloat16 bf16;
typedef at::Half fp16;
typedef float fp32;
void cuda_forward_bf16(int B, int T, int C, int H, float *state, bf16 *r, bf16 *k, bf16 *v, float *w, bf16 *u, bf16 *y);
void cuda_forward_fp16(int B, int T, int C, int H, float *state, fp16 *r, fp16 *k, fp16 *v, float *w, fp16 *u, fp16 *y);
void cuda_forward_fp32(int B, int T, int C, int H, float *state, fp32 *r, fp32 *k, fp32 *v, float *w, fp32 *u, fp32 *y);
void forward_bf16(int64_t B, int64_t T, int64_t C, int64_t H, torch::Tensor &state, torch::Tensor &r, torch::Tensor &k, torch::Tensor &v, torch::Tensor &w, torch::Tensor &u, torch::Tensor &y) {
const at::cuda::OptionalCUDAGuard device_guard(device_of(state));
cuda_forward_bf16(B, T, C, H, state.data_ptr<float>(), r.data_ptr<bf16>(), k.data_ptr<bf16>(), v.data_ptr<bf16>(), w.data_ptr<float>(), u.data_ptr<bf16>(), y.data_ptr<bf16>());
}
void forward_fp16(int64_t B, int64_t T, int64_t C, int64_t H, torch::Tensor &state, torch::Tensor &r, torch::Tensor &k, torch::Tensor &v, torch::Tensor &w, torch::Tensor &u, torch::Tensor &y) {
const at::cuda::OptionalCUDAGuard device_guard(device_of(state));
cuda_forward_fp16(B, T, C, H, state.data_ptr<float>(), r.data_ptr<fp16>(), k.data_ptr<fp16>(), v.data_ptr<fp16>(), w.data_ptr<float>(), u.data_ptr<fp16>(), y.data_ptr<fp16>());
}
void forward_fp32(int64_t B, int64_t T, int64_t C, int64_t H, torch::Tensor &state, torch::Tensor &r, torch::Tensor &k, torch::Tensor &v, torch::Tensor &w, torch::Tensor &u, torch::Tensor &y) {
const at::cuda::OptionalCUDAGuard device_guard(device_of(state));
cuda_forward_fp32(B, T, C, H, state.data_ptr<float>(), r.data_ptr<fp32>(), k.data_ptr<fp32>(), v.data_ptr<fp32>(), w.data_ptr<float>(), u.data_ptr<fp32>(), y.data_ptr<fp32>());
}
PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
m.def("forward_bf16", &forward_bf16, "rwkv5 forward_bf16");
m.def("forward_fp16", &forward_fp16, "rwkv5 forward_fp16");
m.def("forward_fp32", &forward_fp32, "rwkv5 forward_fp32");
}
TORCH_LIBRARY(rwkv5, m) {
m.def("forward_bf16", forward_bf16);
m.def("forward_fp16", forward_fp16);
m.def("forward_fp32", forward_fp32);
}

backend-python/rwkv_pip/cuda/wrapper.cpp vendored Normal file
View File

@@ -0,0 +1,141 @@
#include <torch/extension.h>
#include "ATen/ATen.h"
#include <iostream>
#include <c10/cuda/CUDAGuard.h>
typedef at::Half fp16;
template <typename F>
void cuda_wkv_forward(int B, int T, int C,
float *w, float *u, F *k, F *v, F *y,
float *aa, float *bb, float *pp);
template <typename F>
void cuda_mm8_seq(int B, int N, int M,
F *x, int x_stride,
uint8_t *w, int w_stride,
F *mx, F *rx,
F *my, F *ry,
F *y, int y_stride);
template <typename F>
void cuda_mm8_one(int N, int M,
F *x,
uint8_t *w, int w_stride,
F *mx, F *rx,
F *my, F *ry,
float *y);
void wkv_forward(int64_t B, int64_t T, int64_t C,
torch::Tensor &w, torch::Tensor &u,
torch::Tensor &k, torch::Tensor &v, torch::Tensor &y,
torch::Tensor &aa, torch::Tensor &bb, torch::Tensor &pp) {
const at::cuda::OptionalCUDAGuard device_guard(device_of(w));
switch (k.scalar_type()) {
case c10::ScalarType::Half:
cuda_wkv_forward(B, T, C,
w.data_ptr<float>(), u.data_ptr<float>(),
k.data_ptr<fp16>(), v.data_ptr<fp16>(), y.data_ptr<fp16>(),
aa.data_ptr<float>(), bb.data_ptr<float>(), pp.data_ptr<float>());
break;
case c10::ScalarType::Float:
cuda_wkv_forward(B, T, C,
w.data_ptr<float>(), u.data_ptr<float>(),
k.data_ptr<float>(), v.data_ptr<float>(), y.data_ptr<float>(),
aa.data_ptr<float>(), bb.data_ptr<float>(), pp.data_ptr<float>());
break;
default:
assert(false && "Only FP16 and FP32 are currently supported");
}
}
void mm8_seq(int64_t B, int64_t N, int64_t M,
torch::Tensor &x, torch::Tensor &w,
torch::Tensor &mx, torch::Tensor &rx,
torch::Tensor &my, torch::Tensor &ry,
torch::Tensor &y) {
assert(x.stride(1) == 1);
assert(w.stride(1) == 1);
assert(mx.stride(0) == 1 && rx.stride(0) == 1);
assert(my.stride(0) == 1 && ry.stride(0) == 1);
assert(y.stride(1) == 1);
const at::cuda::OptionalCUDAGuard device_guard(device_of(w));
switch (x.scalar_type()) {
case c10::ScalarType::Half:
cuda_mm8_seq(
B, N, M,
x.data_ptr<fp16>(), x.stride(0),
w.data_ptr<uint8_t>(), w.stride(0),
mx.data_ptr<fp16>(), rx.data_ptr<fp16>(),
my.data_ptr<fp16>(), ry.data_ptr<fp16>(),
y.data_ptr<fp16>(), y.stride(0));
break;
case c10::ScalarType::Float:
cuda_mm8_seq(
B, N, M,
x.data_ptr<float>(), x.stride(0),
w.data_ptr<uint8_t>(), w.stride(0),
mx.data_ptr<float>(), rx.data_ptr<float>(),
my.data_ptr<float>(), ry.data_ptr<float>(),
y.data_ptr<float>(), y.stride(0));
break;
default:
assert(false && "Only FP16 and FP32 are currently supported");
}
}
void mm8_one(int64_t N, int64_t M,
torch::Tensor &x, torch::Tensor &w,
torch::Tensor &mx, torch::Tensor &rx,
torch::Tensor &my, torch::Tensor &ry,
torch::Tensor &y) {
assert(x.stride(0) == 1);
assert(w.stride(1) == 1);
assert(mx.stride(0) == 1 && rx.stride(0) == 1);
assert(my.stride(0) == 1 && ry.stride(0) == 1);
assert(y.stride(0) == 1);
const at::cuda::OptionalCUDAGuard device_guard(device_of(w));
switch (x.scalar_type()) {
case c10::ScalarType::Half:
cuda_mm8_one(
N, M,
x.data_ptr<fp16>(),
w.data_ptr<uint8_t>(), w.stride(0),
mx.data_ptr<fp16>(), rx.data_ptr<fp16>(),
my.data_ptr<fp16>(), ry.data_ptr<fp16>(),
y.data_ptr<float>());
break;
case c10::ScalarType::Float:
cuda_mm8_one(
N, M,
x.data_ptr<float>(),
w.data_ptr<uint8_t>(), w.stride(0),
mx.data_ptr<float>(), rx.data_ptr<float>(),
my.data_ptr<float>(), ry.data_ptr<float>(),
y.data_ptr<float>());
break;
default:
assert(false && "Only FP16 and FP32 are currently supported");
}
}
using torch::Tensor;
#ifndef DISABLE_CUBLAS_GEMM
void gemm_fp16_cublas(Tensor a, Tensor b, Tensor c);
#endif
PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
m.def("wkv_forward", &wkv_forward, "wkv forward");
m.def("mm8_seq", &mm8_seq, "mm8 seq");
m.def("mm8_one", &mm8_one, "mm8 one");
#ifndef DISABLE_CUBLAS_GEMM
m.def("gemm_fp16_cublas", &gemm_fp16_cublas, "gemv fp16 cublas");
#endif
}
TORCH_LIBRARY(rwkv, m) {
m.def("wkv_forward", wkv_forward);
m.def("mm8_seq", mm8_seq);
m.def("mm8_one", mm8_one);
#ifndef DISABLE_CUBLAS_GEMM
m.def("gemm_fp16_cublas", gemm_fp16_cublas);
#endif
}

backend-python/rwkv_pip/model.py vendored Normal file

File diff suppressed because it is too large

backend-python/rwkv_pip/rwkv5.pyd vendored Normal file

Binary file not shown.

View File

@@ -16,6 +16,7 @@ class PIPELINE_ARGS:
        top_k=0,
        alpha_frequency=0.2,
        alpha_presence=0.2,
+        alpha_decay=0.996,
        token_ban=[],
        token_stop=[],
        chunk_len=256,
@@ -25,6 +26,7 @@ class PIPELINE_ARGS:
        self.top_k = top_k
        self.alpha_frequency = alpha_frequency  # Frequency Penalty (as in GPT-3)
        self.alpha_presence = alpha_presence  # Presence Penalty (as in GPT-3)
+        self.alpha_decay = alpha_decay  # gradually decay the penalty
        self.token_ban = token_ban  # ban the generation of some tokens
        self.token_stop = token_stop  # stop generation whenever you see any token here
        self.chunk_len = (
@@ -33,7 +35,7 @@ class PIPELINE_ARGS:
class PIPELINE:
-    def __init__(self, model, WORD_NAME):
+    def __init__(self, model, WORD_NAME: str):
        self.model = model
        if WORD_NAME == "cl100k_base":
            import tiktoken
@@ -47,9 +49,15 @@ class PIPELINE:
                os.path.dirname(os.path.abspath(__file__)) + "/rwkv_vocab_v20230424.txt"
            )
        else:
-            from tokenizers import Tokenizer
-
-            self.tokenizer = Tokenizer.from_file(WORD_NAME)
+            if WORD_NAME.endswith(".txt"):
+                sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))
+                from rwkv_tokenizer import TRIE_TOKENIZER
+
+                self.tokenizer = TRIE_TOKENIZER(WORD_NAME)
+            else:
+                from tokenizers import Tokenizer
+
+                self.tokenizer = Tokenizer.from_file(WORD_NAME)

    def refine_context(self, context):
        context = context.strip().split("\n")
@@ -73,12 +81,13 @@ class PIPELINE:
    def sample_logits(self, logits, temperature=1.0, top_p=0.85, top_k=0):
        probs = F.softmax(logits.float(), dim=-1)
        top_k = int(top_k)
-        if probs.device == torch.device("cpu"):
-            probs = probs.numpy()
+        # 'privateuseone' is the type of custom devices like `torch_directml.device()`
+        if probs.device.type in ["cpu", "privateuseone"]:
+            probs = probs.cpu().numpy()
            sorted_ids = np.argsort(probs)
            sorted_probs = probs[sorted_ids][::-1]
            cumulative_probs = np.cumsum(sorted_probs)
-            cutoff = float(sorted_probs[np.argmax(cumulative_probs > top_p)])
+            cutoff = float(sorted_probs[np.argmax(cumulative_probs >= top_p)])
            probs[probs < cutoff] = 0
            if top_k < len(probs) and top_k > 0:
                probs[sorted_ids[:-top_k]] = 0
@@ -92,7 +101,7 @@ class PIPELINE:
            sorted_probs = probs[sorted_ids]
            sorted_probs = torch.flip(sorted_probs, dims=(0,))
            cumulative_probs = torch.cumsum(sorted_probs, dim=-1).cpu().numpy()
-            cutoff = float(sorted_probs[np.argmax(cumulative_probs > top_p)])
+            cutoff = float(sorted_probs[np.argmax(cumulative_probs >= top_p)])
            probs[probs < cutoff] = 0
            if top_k < len(probs) and top_k > 0:
                probs[sorted_ids[:-top_k]] = 0
@@ -127,10 +136,13 @@ class PIPELINE:
            if token in args.token_stop:
                break
            all_tokens += [token]
+            for xxx in occurrence:
+                occurrence[xxx] *= args.alpha_decay
            if token not in occurrence:
                occurrence[token] = 1
            else:
                occurrence[token] += 1
+            # print(occurrence) # debug

            # output
            tmp = self.decode(all_tokens[out_last:])
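Two behavioral changes land here: the nucleus cutoff now compares with >= (so the first token whose inclusion reaches the target mass is kept, which matters for edge cases like top_p = 1.0 with an exactly-1.0 cumulative sum), and each generated step first multiplies all stored occurrence counts by alpha_decay, so repetition penalties fade instead of accumulating forever. A plain-NumPy sketch of both:

    import numpy as np

    def top_p_filter(probs: np.ndarray, top_p: float) -> np.ndarray:
        sorted_probs = np.sort(probs)[::-1]
        cumulative = np.cumsum(sorted_probs)
        # ">=" keeps the first token that reaches the target probability mass
        cutoff = float(sorted_probs[np.argmax(cumulative >= top_p)])
        out = np.where(probs < cutoff, 0.0, probs)
        return out / out.sum()

    occurrence: dict = {}
    alpha_decay = 0.996

    def note_token(token: int) -> None:
        for t in occurrence:              # decay old penalties every step
            occurrence[t] *= alpha_decay
        occurrence[token] = occurrence.get(token, 0) + 1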

backend-python/rwkv_pip/wkv_cuda.pyd vendored Normal file

Binary file not shown.

View File

@@ -10,7 +10,7 @@ logger = logging.getLogger()
logger.setLevel(logging.INFO)
formatter = logging.Formatter("%(asctime)s - %(levelname)s\n%(message)s")
fh = logging.handlers.RotatingFileHandler(
-    "api.log", mode="a", maxBytes=3 * 1024 * 1024, backupCount=3
+    "api.log", mode="a", maxBytes=3 * 1024 * 1024, backupCount=3, encoding="utf-8"
)
fh.setFormatter(formatter)
logger.addHandler(fh)

View File

@@ -1,11 +1,13 @@
import os
-import sys
+import global_var

def ngrok_connect():
    from pyngrok import ngrok, conf

-    conf.set_default(conf.PyngrokConfig(ngrok_path="./ngrok"))
+    conf.set_default(
+        conf.PyngrokConfig(ngrok_path="./ngrok.exe" if os.name == "nt" else "./ngrok")
+    )
    ngrok.set_auth_token(os.environ["ngrok_token"])
-    http_tunnel = ngrok.connect(8000 if len(sys.argv) == 1 else int(sys.argv[1]))
-    print(http_tunnel.public_url)
+    http_tunnel = ngrok.connect(global_var.get(global_var.Args).port)
+    print(f"ngrok url: {http_tunnel.public_url}")

View File

@@ -4,7 +4,7 @@ import os
import pathlib
import copy
import re
-from typing import Dict, Iterable, List, Tuple, Union
+from typing import Dict, Iterable, List, Tuple, Union, Type
from utils.log import quick_log
from fastapi import HTTPException
from pydantic import BaseModel, Field
@@ -21,33 +21,21 @@ os.environ["TORCH_EXTENSIONS_DIR"] = f"{pathlib.Path(__file__).parent.parent.res
class RWKVType(Enum):
+    NoneType = auto()
    Raven = auto()
    World = auto()
    Music = auto()

class AbstractRWKV(ABC):
-    def __init__(self, model: str, strategy: str, tokens_path: str):
-        rwkv_beta = global_var.get(global_var.Args).rwkv_beta
-
-        # dynamic import to make RWKV_CUDA_ON work
-        if rwkv_beta:
-            from rwkv_pip.beta.model import (
-                RWKV as Model,
-            )
-        else:
-            from rwkv.model import (
-                RWKV as Model,
-            )
-        from rwkv_pip.utils import PIPELINE
-
-        filename, _ = os.path.splitext(os.path.basename(model))
-        self.name = filename
-        self.model = Model(model, strategy)
-        self.pipeline = PIPELINE(self.model, tokens_path)
+    def __init__(self, model, pipeline):
+        self.name = "rwkv"
+        self.model = model
+        self.pipeline = pipeline
        self.model_state = None
        self.model_tokens = []
-        self.rwkv_type: RWKVType = None
+        self.rwkv_type: RWKVType = RWKVType.NoneType
+        self.tokenizer_len = len(model.w["emb.weight"])

        self.max_tokens_per_generation = 500
        self.temperature = 1
@@ -348,8 +336,8 @@ class AbstractRWKV(ABC):
class TextRWKV(AbstractRWKV):
-    def __init__(self, model: str, strategy: str, tokens_path: str) -> None:
-        super().__init__(model, strategy, tokens_path)
+    def __init__(self, model, pipeline) -> None:
+        super().__init__(model, pipeline)

        self.CHUNK_LEN = 256
@@ -361,16 +349,16 @@ class TextRWKV(AbstractRWKV):
        self.penalty_alpha_frequency = 1

        self.interface = ":"
-        if "world" in self.name.lower():
-            self.rwkv_type = RWKVType.World
-            self.user = "Question"
-            self.bot = "Answer"
-            self.END_OF_LINE = 11
-        else:
+        if self.tokenizer_len < 65536:
            self.rwkv_type = RWKVType.Raven
            self.user = "Bob"
            self.bot = "Alice"
            self.END_OF_LINE = 187
+        else:
+            self.rwkv_type = RWKVType.World
+            self.user = "User"
+            self.bot = "Assistant"
+            self.END_OF_LINE = 11

        self.AVOID_REPEAT_TOKENS = []
        AVOID_REPEAT = ""
@@ -469,8 +457,8 @@ The following is a coherent verbose detailed conversation between a girl named {
class MusicRWKV(AbstractRWKV):
-    def __init__(self, model: str, strategy: str, tokens_path: str):
-        super().__init__(model, strategy, tokens_path)
+    def __init__(self, model, pipeline):
+        super().__init__(model, pipeline)

        self.max_tokens_per_generation = 500
        self.temperature = 1
@@ -510,6 +498,52 @@ class MusicRWKV(AbstractRWKV):
return " " + delta return " " + delta
def get_tokenizer(tokenizer_len: int):
tokenizer_dir = f"{pathlib.Path(__file__).parent.parent.resolve()}/rwkv_pip/"
if tokenizer_len < 50277:
return tokenizer_dir + "tokenizer-midi.json"
elif tokenizer_len < 65536:
return tokenizer_dir + "20B_tokenizer.json"
else:
return "rwkv_vocab_v20230424"
def RWKV(model: str, strategy: str, tokenizer: Union[str, None]) -> AbstractRWKV:
rwkv_beta = global_var.get(global_var.Args).rwkv_beta
# dynamic import to make RWKV_CUDA_ON work
if rwkv_beta:
from rwkv_pip.beta.model import (
RWKV as Model,
)
else:
from rwkv_pip.model import (
RWKV as Model,
)
from rwkv_pip.utils import PIPELINE
filename, _ = os.path.splitext(os.path.basename(model))
model = Model(model, strategy)
if not tokenizer:
tokenizer = get_tokenizer(len(model.w["emb.weight"]))
pipeline = PIPELINE(model, tokenizer)
rwkv_map: dict[str, Type[AbstractRWKV]] = {
"20B_tokenizer": TextRWKV,
"rwkv_vocab_v20230424": TextRWKV,
"tokenizer-midi": MusicRWKV,
}
tokenizer_name = os.path.splitext(os.path.basename(tokenizer))[0]
rwkv: AbstractRWKV
if tokenizer_name in rwkv_map:
rwkv = rwkv_map[tokenizer_name](model, pipeline)
else:
rwkv = TextRWKV(model, pipeline)
rwkv.name = filename
return rwkv
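Model type is now inferred from the embedding table instead of the file name: vocabularies below 50277 map to the MIDI tokenizer, below 65536 to the 20B tokenizer (Raven-style prompting), and anything larger to the World vocabulary. A hypothetical call of the new factory (the model path is illustrative only):

    from utils.rwkv import RWKV  # module path assumed from this diff

    rwkv = RWKV(
        model="models/some-rwkv-world-model.pth",  # hypothetical file
        strategy="cuda fp16",
        tokenizer=None,  # None -> chosen from len(model.w["emb.weight"])
    )
    print(rwkv.rwkv_type, rwkv.user, rwkv.bot)  # World models chat as User/Assistant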
class ModelConfigBody(BaseModel):
    max_tokens: int = Field(default=None, gt=0, le=102400)
    temperature: float = Field(default=None, ge=0, le=2)
@@ -517,8 +551,8 @@ class ModelConfigBody(BaseModel):
    presence_penalty: float = Field(default=None, ge=-2, le=2)
    frequency_penalty: float = Field(default=None, ge=-2, le=2)

-    class Config:
-        schema_extra = {
+    model_config = {
+        "json_schema_extra": {
            "example": {
                "max_tokens": 1000,
                "temperature": 1.2,
@@ -527,6 +561,7 @@ class ModelConfigBody(BaseModel):
                "frequency_penalty": 0.4,
            }
        }
+    }

def set_rwkv_config(model: AbstractRWKV, body: ModelConfigBody):
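`class Config: schema_extra` is the Pydantic v1 idiom; v2 reads `model_config["json_schema_extra"]` instead, which is what restores the example payload in /docs. A minimal standalone sketch of the v2 form (field set trimmed for brevity):

    from pydantic import BaseModel, Field

    class ExampleBody(BaseModel):
        max_tokens: int = Field(default=None, gt=0, le=102400)

        # Pydantic v2 replacement for `class Config: schema_extra = {...}`
        model_config = {
            "json_schema_extra": {"example": {"max_tokens": 1000}}
        }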

View File

@@ -0,0 +1,14 @@
from fastapi import FastAPI
from fastapi.middleware.gzip import GZipMiddleware
from fastapi.staticfiles import StaticFiles
import uvicorn
webui_server = FastAPI()
webui_server.add_middleware(GZipMiddleware, minimum_size=1000)
webui_server.mount(
"/", StaticFiles(directory="frontend/dist", html=True), name="static"
)
if __name__ == "__main__":
uvicorn.run("webui_server:webui_server")

Binary file not shown.

Binary file not shown.

View File

@@ -1,734 +0,0 @@
########################################################################################################
# The RWKV Language Model - https://github.com/BlinkDL/RWKV-LM
########################################################################################################
import types, gc, os, time, re
import torch
from torch.nn import functional as F
torch.backends.cudnn.benchmark = True
torch.backends.cudnn.allow_tf32 = True
torch.backends.cuda.matmul.allow_tf32 = True
current_path = os.path.dirname(os.path.abspath(__file__))
# https://zhuanlan.zhihu.com/p/612879065
def LoadPreCompileLibrary(file):
import importlib
import os
import torch
# load the custom_op_library and register the custom ops
lib_dir = os.path.dirname(__file__)
if os.name == "nt":
# Register the main torchvision library location on the default DLL path
import ctypes
import sys
kernel32 = ctypes.WinDLL("kernel32.dll", use_last_error=True)
with_load_library_flags = hasattr(kernel32, "AddDllDirectory")
prev_error_mode = kernel32.SetErrorMode(0x0001)
if with_load_library_flags:
kernel32.AddDllDirectory.restype = ctypes.c_void_p
if sys.version_info >= (3, 8):
os.add_dll_directory(lib_dir)
elif with_load_library_flags:
res = kernel32.AddDllDirectory(lib_dir)
if res is None:
err = ctypes.WinError(ctypes.get_last_error())
err.strerror += f' Error adding "{lib_dir}" to the DLL directories.'
raise ValueError(err)
kernel32.SetErrorMode(prev_error_mode)
loader_details = (
importlib.machinery.ExtensionFileLoader,
importlib.machinery.EXTENSION_SUFFIXES,
)
extfinder = importlib.machinery.FileFinder(lib_dir, loader_details)
ext_specs = extfinder.find_spec(file)
if ext_specs is None:
return False
try:
torch.ops.load_library(ext_specs.origin)
except OSError as exc:
return False
return True
########################################################################################################
if os.environ.get('RWKV_JIT_ON') != '0':
os.environ["RWKV_JIT_ON"] = '1'
MyModule = torch.jit.ScriptModule
MyFunction = torch.jit.script_method
MyStatic = torch.jit.script
else:
MyModule = torch.nn.Module
def __nop(ob):
return ob
MyFunction = __nop
MyStatic = __nop
if os.environ.get('RWKV_CUDA_ON') == '1':
if LoadPreCompileLibrary('wkv_cuda') is False:
from torch.utils.cpp_extension import load
load(
name=f"wkv_cuda",
sources=[f"{current_path}/cuda/wrapper.cpp", f"{current_path}/cuda/operators.cu"],
verbose=True,
extra_cuda_cflags=["-t 4", "-std=c++17", "--use_fast_math", "-O3", "--extra-device-vectorization"],
is_python_module=False)
@MyStatic
def cuda_wkv(T: int, C: int, w, u, k, v, aa, bb, pp):
assert 1 * C % min(C, 32) == 0
assert k.dtype == v.dtype == torch.float16 or k.dtype == v.dtype == torch.float32
assert w.dtype == u.dtype == aa.dtype == bb.dtype == pp.dtype == torch.float32
w = w.contiguous()
u = u.contiguous()
k = k.contiguous()
v = v.contiguous()
y = torch.empty((T, C), device=w.device, memory_format=torch.contiguous_format, dtype=k.dtype)
torch.ops.rwkv.wkv_forward(1, T, C, w, u, k, v, y, aa, bb, pp)
return y, aa, bb, pp
@MyStatic
def cuda_mm8_seq(B: int, N: int, M: int, x, w, mx, rx, my, ry):
assert x.dtype == mx.dtype == rx.dtype == my.dtype == ry.dtype
assert x.dtype == torch.float32 or x.dtype == torch.float16
assert w.dtype == torch.uint8
assert x.shape == [B, N]
assert w.shape == [N, M]
assert rx.shape == mx.shape == [M]
assert ry.shape == my.shape == [N, 1]
y = torch.empty((B, M), device=w.device, dtype=x.dtype)
torch.ops.rwkv.mm8_seq(B, N, M, x, w, mx, rx, my, ry, y)
return y
@MyStatic
def cuda_mm8_one(N: int, M: int, x, w, mx, rx, my, ry):
assert x.dtype == mx.dtype == rx.dtype == my.dtype == ry.dtype
assert x.dtype == torch.float32 or x.dtype == torch.float16
assert w.dtype == torch.uint8
assert x.shape == [N]
assert w.shape == [N, M]
assert rx.shape == mx.shape == [M]
assert ry.shape == my.shape == [N, 1]
y = torch.zeros((M,), device=w.device, dtype=torch.float32)
torch.ops.rwkv.mm8_one(N, M, x, w, mx, rx, my, ry, y)
return y.to(dtype=x.dtype)
else:
os.environ["RWKV_CUDA_ON"] = '0'
########################################################################################################
class RWKV(MyModule):
def __init__(self, model, strategy, verbose = True, convert_and_save_and_exit = None):
super().__init__()
if verbose:
prxxx = lambda *args, **kwargs: print(*args, **kwargs)
else:
prxxx = lambda *args, **kwargs: None
STRATEGY_REGEX = r"^(?:(?:^|->) *(?:cuda(?::[\d]+)?|cpu|mps) (?:fp(?:16|32)|bf16)(?:i8|i4|i3)?(?: \*[\d]+\+?)? *)+$"
if not re.match(STRATEGY_REGEX, strategy):
raise ValueError("Invalid strategy. Please read https://pypi.org/project/rwkv/")
strategy = ('->'.join([x.strip() for x in strategy.split('->')])).replace('->', ' -> ')
self.args = types.SimpleNamespace()
args = self.args
args.MODEL_NAME = model
args.strategy_string = strategy
# Rescale for fp16 mode: set x = x/2 every X layer (to avoid fp16 overflow)
self.RESCALE_LAYER = 6 if 'fp16' in strategy else 0
prxxx(f'RWKV_JIT_ON {os.environ["RWKV_JIT_ON"]} RWKV_CUDA_ON {os.environ["RWKV_CUDA_ON"]} RESCALE_LAYER {self.RESCALE_LAYER}\n')
args.MODEL_NAME = args.MODEL_NAME.strip()
if not args.MODEL_NAME.endswith('.pth'):
args.MODEL_NAME += '.pth'
prxxx(f'Loading {args.MODEL_NAME} ...')
with torch.no_grad():
self.w = torch.load(args.MODEL_NAME, map_location='cpu') # load model to CPU first
gc.collect()
w = self.w
ALREADY_CONVERTED = False
if '_strategy' in w:
ALREADY_CONVERTED = True
assert convert_and_save_and_exit == None # you should only convert a raw model
prxxx(f"Converted model: strategy {w['_strategy']}, version {w['_version']}\n")
assert w['_strategy'] == args.strategy_string # if you are using a new strategy, re-convert the model
assert float(w['_version']) >= 0.7 # sometimes you should re-convert using latest convert_model.py
assert w['_rescale_layer'] == self.RESCALE_LAYER
del w['_strategy']
del w['_version']
del w['_rescale_layer']
args.n_embd = w['emb.weight'].shape[1]
args.n_layer = 0
keys = list(w.keys())
for x in keys:
layer_id = int(x.split('.')[1]) if ('blocks.' in x) else 0
args.n_layer = max(args.n_layer, layer_id+1)
####################### Compute strategy
s = [x.strip().split(' ') for x in strategy.split('->')]
plan = [0] * len(s)
stream_i = -1
stream_count = 0
to_allocate = args.n_layer + 1
allocated = 0
free_slots = 0
for i in range(len(s)):
si = s[i]
si1 = si[1]
if si1.startswith('fp32'): si[1] = [torch.float]
elif si1.startswith('fp16'): si[1] = [torch.float16]
elif si1.startswith('bf16'): si[1] = [torch.bfloat16]
if si1.endswith('i8'): si[1] += [torch.uint8]
else: si[1] += [si[1][0]]
if len(si) > 2:
ss = si[2]
assert ss.startswith('*')
if ss.endswith('+'):
plan[i] = int(ss[1:-1])
stream_i = i
else:
plan[i] = int(ss[1:])
allocated += plan[i]
if allocated >= to_allocate:
plan[i] += to_allocate - allocated
break
else:
free_slots += 1
if stream_i < 0:
if free_slots > 0 and to_allocate > allocated:
for i in range(len(s)):
if plan[i] == 0:
plan[i] = (to_allocate - allocated) // free_slots
allocated += plan[i]
free_slots -= 1
if to_allocate > allocated:
plan[len(s)-1] += to_allocate - allocated
else:
if to_allocate > allocated:
stream_count = to_allocate - allocated
plan[stream_i] += stream_count
prxxx(f'Strategy: (total {args.n_layer}+1={args.n_layer+1} layers)')
for i in range(len(s)):
ss = s[i]
if i != stream_i:
prxxx(f'* {ss[0]} {str(ss[1]).replace("torch.","")}, store {plan[i]} layers')
else:
prxxx(f'* {ss[0]} {str(ss[1]).replace("torch.","")}, store {plan[i]-stream_count} layers, stream {stream_count} layers')
plan[i] += (0 if i == 0 else plan[i-1])
self.strategy = [None] * (args.n_layer + 1)
strategy = self.strategy
for n in range(args.n_layer + 1):
for i in range(len(s)):
if n < plan[i]:
strategy[n] = types.SimpleNamespace()
strategy[n].device = s[i][0]
strategy[n].atype = s[i][1][0]
strategy[n].wtype = s[i][1][1]
strategy[n].stream = False
if i == stream_i and n >= (plan[i] - stream_count):
strategy[n].stream = True
break
prxxx(f"{n}-{strategy[n].device}-{str(strategy[n].atype).replace('torch.','')}-{str(strategy[n].wtype).replace('torch.','')}{'-stream' if strategy[n].stream else ''}",end=' ')
prxxx()
####################### Load weights to self.w
if not ALREADY_CONVERTED:
try: # precompute embedding
w['emb.weight'] = F.layer_norm(w['emb.weight'], (args.n_embd,), weight=w['blocks.0.ln0.weight'], bias=w['blocks.0.ln0.bias'])
except:
w['emb.weight'] = F.layer_norm(w['emb.weight'].float(), (args.n_embd,), weight=w['blocks.0.ln0.weight'].float(), bias=w['blocks.0.ln0.bias'].float())
del w['blocks.0.ln0.weight']
del w['blocks.0.ln0.bias']
print_need_newline = False
keys = list(w.keys())
for x in keys:
w[x].requires_grad = False
layer_id = int(x.split('.')[1]) if ('blocks.' in x) else 0
if ('ln_out.' in x) or ('head.' in x):
layer_id = args.n_layer
dd = strategy[layer_id]
DEVICE = dd.device
ATYPE = dd.atype
WTYPE = dd.wtype
if not ALREADY_CONVERTED:
if self.RESCALE_LAYER > 0:
if 'att.output.weight' in x:
w[x] = w[x] / (2 ** int(layer_id // self.RESCALE_LAYER))
if 'ffn.value.weight' in x:
w[x] = w[x] / (2 ** int(layer_id // self.RESCALE_LAYER))
if '.time_' in x:
w[x] = w[x].squeeze()
if 'key.weight' in x or 'value.weight' in x or 'receptance.weight' in x or 'output.weight' in x or 'head.weight' in x:
w[x] = w[x].t()
if '.time_decay' in x: # need fp32 for this
w[x] = -torch.exp(w[x].float())
elif '.time_first' in x: # need fp32 for this
w[x] = w[x].float()
else:
if (len(w[x].shape) == 2) and ('emb' not in x):
if WTYPE != torch.uint8:
w[x] = w[x].to(dtype=WTYPE)
else:
w[x] = w[x].float()
if w[x].shape[0] > w[x].shape[1]:
w[x+'_my'] = torch.amin(w[x], dim=1).unsqueeze(1)
w[x] = w[x] - w[x+'_my']
w[x+'_mx'] = torch.amin(w[x], dim=0)
w[x] = w[x] - w[x+'_mx']
w[x+'_rx'] = torch.amax(w[x], dim=0)
w[x] = w[x] / w[x+'_rx']
w[x+'_ry'] = torch.amax(w[x], dim=1).unsqueeze(1)
w[x] = w[x] / w[x+'_ry']
else:
w[x+'_mx'] = torch.amin(w[x], dim=0)
w[x] = w[x] - w[x+'_mx']
w[x+'_my'] = torch.amin(w[x], dim=1).unsqueeze(1)
w[x] = w[x] - w[x+'_my']
w[x+'_rx'] = torch.amax(w[x], dim=0)
w[x] = w[x] / w[x+'_rx']
w[x+'_ry'] = torch.amax(w[x], dim=1).unsqueeze(1)
w[x] = w[x] / w[x+'_ry']
w[x] = torch.clip(torch.floor(w[x] * 256), min=0, max=255).to(dtype=torch.uint8)
w[x+'_mx'] = w[x+'_mx'].to(dtype=ATYPE).contiguous()
w[x+'_rx'] = (w[x+'_rx'] / 16).to(dtype=ATYPE).contiguous()
w[x+'_my'] = w[x+'_my'].to(dtype=ATYPE).contiguous()
w[x+'_ry'] = (w[x+'_ry'] / 16).to(dtype=ATYPE).contiguous()
else:
w[x] = w[x].to(dtype=ATYPE)
if convert_and_save_and_exit == None:
if 'emb.' in x:
w[x] = w[x].contiguous()
elif (dd.stream) and (x.endswith('key.weight') or x.endswith('value.weight') or x.endswith('receptance.weight') or x.endswith('output.weight')):
try:
w[x] = w[x].contiguous().pin_memory() # if you see "CUDA error: out of memory" here, that's out of CPU RAM, not VRAM. Get more RAM :)
except:
print('Note: You are running out of RAM. Get more CPU RAM. Now this will run much slower.')
elif DEVICE != 'cpu':
w[x] = w[x].to(device=DEVICE).contiguous()
if (dd.stream) or (DEVICE != 'cpu'):
try:
w[x+'_mx'] = w[x+'_mx'].to(device=DEVICE).contiguous()
w[x+'_rx'] = w[x+'_rx'].to(device=DEVICE).contiguous()
w[x+'_my'] = w[x+'_my'].to(device=DEVICE).contiguous()
w[x+'_ry'] = w[x+'_ry'].to(device=DEVICE).contiguous()
except:
pass
if 'ffn.value.weight' in x:
gc.collect()
if 'cuda' in args.strategy_string:
torch.cuda.empty_cache()
shape = [i for i in w[x].shape if i != 1]
if len(shape) > 1:
shape = f" {str(shape[0]).rjust(5)} {str(shape[1]).rjust(5)}"
else:
shape = f" {str(shape[0]).rjust(5)} "
if layer_id == 0 or layer_id >= args.n_layer-1:
if print_need_newline:
prxxx('\n', end = '')
print_need_newline = False
dt = str(w[x].dtype).replace('torch.', '')
dt = dt.replace('float32', 'f32').replace('bfloat16', 'bf16').replace('float16', 'f16').replace('uint8', 'i8')
prxxx(x.ljust(32), dt.rjust(4), str(w[x].device).rjust(8), shape, ' (pinned)' if w[x].is_pinned() else '')
else:
print_need_newline = True
prxxx('.', end = '', flush = True)
if convert_and_save_and_exit:
w['_strategy'] = args.strategy_string
w['_rescale_layer'] = self.RESCALE_LAYER
w['_version'] = '0.7'
if not convert_and_save_and_exit.endswith('.pth'):
convert_and_save_and_exit += '.pth'
prxxx(f'Saving to {convert_and_save_and_exit}...')
torch.save(w, convert_and_save_and_exit)
prxxx(f'Converted and saved. Now this will exit.')
exit(0)
gc.collect()
if 'cuda' in args.strategy_string:
torch.cuda.empty_cache()
@MyFunction
def torch_mm8_seq(self, x, w, mx, rx, my, ry):
return x @ ((w.to(dtype=x.dtype) + 0.5) * ry * rx + my + mx)
@MyFunction
def torch_mm8_one(self, x, w, mx, rx, my, ry):
return x @ ((w.to(dtype=x.dtype) + 0.5) * ry * rx + my + mx)
if os.environ.get('RWKV_CUDA_ON') == '1':
@MyFunction
def mm8_seq(self, x, w, mx, rx, my, ry):
if w.device.type == 'cuda' and x.dtype == torch.float16:
B, N, M = x.shape[0], w.shape[0], w.shape[1]
return cuda_mm8_seq(B, N, M, x, w, mx, rx, my, ry)
else:
return self.torch_mm8_seq(x, w, mx, rx, my, ry)
@MyFunction
def mm8_one(self, x, w, mx, rx, my, ry):
if w.device.type == 'cuda':
N, M = w.shape[0], w.shape[1]
return cuda_mm8_one(N, M, x, w, mx, rx, my, ry)
else:
return self.torch_mm8_one(x, w, mx, rx, my, ry)
else:
@MyFunction
def mm8_seq(self, x, w, mx, rx, my, ry):
return self.torch_mm8_seq(x, w, mx, rx, my, ry)
@MyFunction
def mm8_one(self, x, w, mx, rx, my, ry):
return self.torch_mm8_one(x, w, mx, rx, my, ry)
########################################################################################################
@MyFunction
def ffn_one(self, x, sx, ln_w, ln_b, k_mix, r_mix, kw, vw, rw, kmx, krx, kmy, kry, vmx, vrx, vmy, vry, rmx, rrx, rmy, rry):
xx = F.layer_norm(x, (x.shape[-1],), weight=ln_w, bias=ln_b)
kx = xx * k_mix + sx * (1 - k_mix)
rx = xx * r_mix + sx * (1 - r_mix)
r = torch.sigmoid(rx @ rw)
vx = torch.square(torch.relu(kx @ kw))
out = r * (vx @ vw)
return x + out, xx
@MyFunction
def ffn_one_i8(self, x, sx, ln_w, ln_b, k_mix, r_mix, kw, vw, rw, kmx, krx, kmy, kry, vmx, vrx, vmy, vry, rmx, rrx, rmy, rry):
xx = F.layer_norm(x, (x.shape[-1],), weight=ln_w, bias=ln_b)
kx = xx * k_mix + sx * (1 - k_mix)
rx = xx * r_mix + sx * (1 - r_mix)
r = torch.sigmoid(self.mm8_one(rx, rw, rmx, rrx, rmy, rry))
vx = torch.square(torch.relu(self.mm8_one(kx, kw, kmx, krx, kmy, kry)))
out = r * (self.mm8_one(vx, vw, vmx, vrx, vmy, vry))
return x + out, xx
########################################################################################################
@MyFunction
def ffn_seq(self, x, sx, ln_w, ln_b, k_mix, r_mix, kw, vw, rw, kmx, krx, kmy, kry, vmx, vrx, vmy, vry, rmx, rrx, rmy, rry):
xx = F.layer_norm(x, (x.shape[-1],), weight=ln_w, bias=ln_b)
sx = torch.cat((sx.unsqueeze(0), xx[:-1,:]))
kx = xx * k_mix + sx * (1 - k_mix)
rx = xx * r_mix + sx * (1 - r_mix)
r = torch.sigmoid(rx @ rw)
vx = torch.square(torch.relu(kx @ kw))
out = r * (vx @ vw)
return x + out, xx[-1,:]
@MyFunction
def ffn_seq_i8(self, x, sx, ln_w, ln_b, k_mix, r_mix, kw, vw, rw, kmx, krx, kmy, kry, vmx, vrx, vmy, vry, rmx, rrx, rmy, rry):
xx = F.layer_norm(x, (x.shape[-1],), weight=ln_w, bias=ln_b)
sx = torch.cat((sx.unsqueeze(0), xx[:-1,:]))
kx = xx * k_mix + sx * (1 - k_mix)
rx = xx * r_mix + sx * (1 - r_mix)
r = torch.sigmoid(self.mm8_seq(rx, rw, rmx, rrx, rmy, rry))
vx = torch.square(torch.relu(self.mm8_seq(kx, kw, kmx, krx, kmy, kry)))
out = r * (self.mm8_seq(vx, vw, vmx, vrx, vmy, vry))
return x + out, xx[-1,:]
########################################################################################################
@MyFunction
def att_one(self, x, sx, aa, bb, pp, ln_w, ln_b, k_mix, v_mix, r_mix, t_decay, t_first, kw, vw, rw, ow, kmx, krx, kmy, kry, vmx, vrx, vmy, vry, rmx, rrx, rmy, rry, omx, orx, omy, ory):
xx = F.layer_norm(x, (x.shape[-1],), weight=ln_w, bias=ln_b)
kx = xx * k_mix + sx * (1 - k_mix)
vx = xx * v_mix + sx * (1 - v_mix)
rx = xx * r_mix + sx * (1 - r_mix)
r = torch.sigmoid(rx @ rw)
k = (kx @ kw).float()
v = (vx @ vw).float()
ww = t_first + k
p = torch.maximum(pp, ww)
e1 = torch.exp(pp - p)
e2 = torch.exp(ww - p)
wkv = ((e1 * aa + e2 * v) / (e1 * bb + e2)).to(dtype=x.dtype)
ww = t_decay + pp
p = torch.maximum(ww, k)
e1 = torch.exp(ww - p)
e2 = torch.exp(k - p)
out = (r * wkv) @ ow
return x + out, xx, e1 * aa + e2 * v, e1 * bb + e2, p
@MyFunction
def att_one_i8(self, x, sx, aa, bb, pp, ln_w, ln_b, k_mix, v_mix, r_mix, t_decay, t_first, kw, vw, rw, ow, kmx, krx, kmy, kry, vmx, vrx, vmy, vry, rmx, rrx, rmy, rry, omx, orx, omy, ory):
xx = F.layer_norm(x, (x.shape[-1],), weight=ln_w, bias=ln_b)
kx = xx * k_mix + sx * (1 - k_mix)
vx = xx * v_mix + sx * (1 - v_mix)
rx = xx * r_mix + sx * (1 - r_mix)
r = torch.sigmoid(self.mm8_one(rx, rw, rmx, rrx, rmy, rry))
k = (self.mm8_one(kx, kw, kmx, krx, kmy, kry)).float()
v = (self.mm8_one(vx, vw, vmx, vrx, vmy, vry)).float()
ww = t_first + k
p = torch.maximum(pp, ww)
e1 = torch.exp(pp - p)
e2 = torch.exp(ww - p)
wkv = ((e1 * aa + e2 * v) / (e1 * bb + e2)).to(dtype=x.dtype)
ww = t_decay + pp
p = torch.maximum(ww, k)
e1 = torch.exp(ww - p)
e2 = torch.exp(k - p)
out = self.mm8_one(r * wkv, ow, omx, orx, omy, ory)
return x + out, xx, e1 * aa + e2 * v, e1 * bb + e2, p
########################################################################################################
@MyFunction
def att_seq(self, x, sx, aa, bb, pp, ln_w, ln_b, k_mix, v_mix, r_mix, t_decay, t_first, kw, vw, rw, ow, kmx, krx, kmy, kry, vmx, vrx, vmy, vry, rmx, rrx, rmy, rry, omx, orx, omy, ory):
xx = F.layer_norm(x, (x.shape[-1],), weight=ln_w, bias=ln_b)
sx = torch.cat((sx.unsqueeze(0), xx[:-1,:]))
kx = xx * k_mix + sx * (1 - k_mix)
vx = xx * v_mix + sx * (1 - v_mix)
rx = xx * r_mix + sx * (1 - r_mix)
r = torch.sigmoid(rx @ rw)
k = (kx @ kw).float()
v = (vx @ vw).float()
T = x.shape[0]
for t in range(T):
kk = k[t]
vv = v[t]
ww = t_first + kk
p = torch.maximum(pp, ww)
e1 = torch.exp(pp - p)
e2 = torch.exp(ww - p)
sx[t] = ((e1 * aa + e2 * vv) / (e1 * bb + e2)).to(dtype=x.dtype)
ww = t_decay + pp
p = torch.maximum(ww, kk)
e1 = torch.exp(ww - p)
e2 = torch.exp(kk - p)
aa = e1 * aa + e2 * vv
bb = e1 * bb + e2
pp = p
out = (r * sx) @ ow
return x + out, xx[-1,:], aa, bb, pp
@MyFunction
def att_seq_i8(self, x, sx, aa, bb, pp, ln_w, ln_b, k_mix, v_mix, r_mix, t_decay, t_first, kw, vw, rw, ow, kmx, krx, kmy, kry, vmx, vrx, vmy, vry, rmx, rrx, rmy, rry, omx, orx, omy, ory):
xx = F.layer_norm(x, (x.shape[-1],), weight=ln_w, bias=ln_b)
sx = torch.cat((sx.unsqueeze(0), xx[:-1,:]))
kx = xx * k_mix + sx * (1 - k_mix)
vx = xx * v_mix + sx * (1 - v_mix)
rx = xx * r_mix + sx * (1 - r_mix)
r = torch.sigmoid(self.mm8_seq(rx, rw, rmx, rrx, rmy, rry))
k = self.mm8_seq(kx, kw, kmx, krx, kmy, kry).float()
v = self.mm8_seq(vx, vw, vmx, vrx, vmy, vry).float()
T = x.shape[0]
for t in range(T):
kk = k[t]
vv = v[t]
ww = t_first + kk
p = torch.maximum(pp, ww)
e1 = torch.exp(pp - p)
e2 = torch.exp(ww - p)
sx[t] = ((e1 * aa + e2 * vv) / (e1 * bb + e2)).to(dtype=x.dtype)
ww = t_decay + pp
p = torch.maximum(ww, kk)
e1 = torch.exp(ww - p)
e2 = torch.exp(kk - p)
aa = e1 * aa + e2 * vv
bb = e1 * bb + e2
pp = p
out = self.mm8_seq(r * sx, ow, omx, orx, omy, ory)
return x + out, xx[-1,:], aa, bb, pp
########################################################################################################
if os.environ["RWKV_CUDA_ON"] == '1':
@MyFunction
def cuda_att_seq(self, x, sx, aa, bb, pp, ln_w, ln_b, k_mix, v_mix, r_mix, t_decay, t_first, kw, vw, rw, ow, kmx, krx, kmy, kry, vmx, vrx, vmy, vry, rmx, rrx, rmy, rry, omx, orx, omy, ory):
T, C = x.size()
xx = F.layer_norm(x, (C,), weight=ln_w, bias=ln_b)
sx = torch.cat((sx.unsqueeze(0), xx[:-1,:]))
kx = xx * k_mix + sx * (1 - k_mix)
vx = xx * v_mix + sx * (1 - v_mix)
rx = xx * r_mix + sx * (1 - r_mix)
r = torch.sigmoid(rx @ rw)
k = kx @ kw
v = vx @ vw
y, aa, bb, pp = cuda_wkv(T, C, t_decay, t_first, k, v, aa, bb, pp)
out = (r * y) @ ow
return x + out, xx[-1,:], aa, bb, pp
@MyFunction
def cuda_att_seq_i8(self, x, sx, aa, bb, pp, ln_w, ln_b, k_mix, v_mix, r_mix, t_decay, t_first, kw, vw, rw, ow, kmx, krx, kmy, kry, vmx, vrx, vmy, vry, rmx, rrx, rmy, rry, omx, orx, omy, ory):
T, C = x.size()
xx = F.layer_norm(x, (C,), weight=ln_w, bias=ln_b)
sx = torch.cat((sx.unsqueeze(0), xx[:-1,:]))
kx = xx * k_mix + sx * (1 - k_mix)
vx = xx * v_mix + sx * (1 - v_mix)
rx = xx * r_mix + sx * (1 - r_mix)
r = torch.sigmoid(self.mm8_seq(rx, rw, rmx, rrx, rmy, rry))
k = self.mm8_seq(kx, kw, kmx, krx, kmy, kry)
v = self.mm8_seq(vx, vw, vmx, vrx, vmy, vry)
y, aa, bb, pp = cuda_wkv(T, C, t_decay, t_first, k, v, aa, bb, pp)
out = self.mm8_seq(r * y, ow, omx, orx, omy, ory)
return x + out, xx[-1,:], aa, bb, pp
########################################################################################################
def forward(self, tokens, state, full_output=False):
with torch.no_grad():
w = self.w
args = self.args
if state == None:
state = [None] * args.n_layer * 5
for i in range(args.n_layer): # state: 0=att_xx 1=att_aa 2=att_bb 3=att_pp 4=ffn_xx
dd = self.strategy[i]
dev = dd.device
atype = dd.atype
state[i*5+0] = torch.zeros(args.n_embd, dtype=atype, requires_grad=False, device=dev).contiguous()
state[i*5+1] = torch.zeros(args.n_embd, dtype=torch.float, requires_grad=False, device=dev).contiguous()
state[i*5+2] = torch.zeros(args.n_embd, dtype=torch.float, requires_grad=False, device=dev).contiguous()
state[i*5+3] = torch.zeros(args.n_embd, dtype=torch.float, requires_grad=False, device=dev).contiguous() - 1e30
state[i*5+4] = torch.zeros(args.n_embd, dtype=atype, requires_grad=False, device=dev).contiguous()
seq_mode = len(tokens) > 1
x = w['emb.weight'][tokens if seq_mode else tokens[0]]
for i in range(args.n_layer):
bbb = f'blocks.{i}.'
att = f'blocks.{i}.att.'
ffn = f'blocks.{i}.ffn.'
dd = self.strategy[i]
dev = dd.device
atype = dd.atype
wtype = dd.wtype
if seq_mode:
if 'cuda' in str(dev) and os.environ["RWKV_CUDA_ON"] == '1':
ATT = self.cuda_att_seq if wtype != torch.uint8 else self.cuda_att_seq_i8
else:
ATT = self.att_seq if wtype != torch.uint8 else self.att_seq_i8
FFN = self.ffn_seq if wtype != torch.uint8 else self.ffn_seq_i8
else:
ATT = self.att_one if wtype != torch.uint8 else self.att_one_i8
FFN = self.ffn_one if wtype != torch.uint8 else self.ffn_one_i8
x = x.to(dtype=atype, device=dev)
kw = w[f'{att}key.weight']
vw = w[f'{att}value.weight']
rw = w[f'{att}receptance.weight']
ow = w[f'{att}output.weight']
if dd.stream:
kw = kw.to(device=dev, non_blocking=True)
vw = vw.to(device=dev, non_blocking=True)
rw = rw.to(device=dev, non_blocking=True)
ow = ow.to(device=dev, non_blocking=True)
kmx = w[f'{att}key.weight_mx'] if wtype == torch.uint8 else x
krx = w[f'{att}key.weight_rx'] if wtype == torch.uint8 else x
kmy = w[f'{att}key.weight_my'] if wtype == torch.uint8 else x
kry = w[f'{att}key.weight_ry'] if wtype == torch.uint8 else x
vmx = w[f'{att}value.weight_mx'] if wtype == torch.uint8 else x
vrx = w[f'{att}value.weight_rx'] if wtype == torch.uint8 else x
vmy = w[f'{att}value.weight_my'] if wtype == torch.uint8 else x
vry = w[f'{att}value.weight_ry'] if wtype == torch.uint8 else x
rmx = w[f'{att}receptance.weight_mx'] if wtype == torch.uint8 else x
rrx = w[f'{att}receptance.weight_rx'] if wtype == torch.uint8 else x
rmy = w[f'{att}receptance.weight_my'] if wtype == torch.uint8 else x
rry = w[f'{att}receptance.weight_ry'] if wtype == torch.uint8 else x
omx = w[f'{att}output.weight_mx'] if wtype == torch.uint8 else x
orx = w[f'{att}output.weight_rx'] if wtype == torch.uint8 else x
omy = w[f'{att}output.weight_my'] if wtype == torch.uint8 else x
ory = w[f'{att}output.weight_ry'] if wtype == torch.uint8 else x
x, state[i*5+0], state[i*5+1], state[i*5+2], state[i*5+3] = ATT(
x, state[i*5+0], state[i*5+1], state[i*5+2], state[i*5+3],
w[f'{bbb}ln1.weight'], w[f'{bbb}ln1.bias'],
w[f'{att}time_mix_k'], w[f'{att}time_mix_v'], w[f'{att}time_mix_r'],
w[f'{att}time_decay'], w[f'{att}time_first'],
kw, vw, rw, ow,
kmx, krx, kmy, kry,
vmx, vrx, vmy, vry,
rmx, rrx, rmy, rry,
omx, orx, omy, ory,
)
if dd.stream:
del kw, vw, rw, ow
kw = w[f'{ffn}key.weight']
vw = w[f'{ffn}value.weight']
rw = w[f'{ffn}receptance.weight']
if dd.stream:
kw = kw.to(device=dev, non_blocking=True)
vw = vw.to(device=dev, non_blocking=True)
rw = rw.to(device=dev, non_blocking=True)
kmx = w[f'{ffn}key.weight_mx'] if wtype == torch.uint8 else x
krx = w[f'{ffn}key.weight_rx'] if wtype == torch.uint8 else x
kmy = w[f'{ffn}key.weight_my'] if wtype == torch.uint8 else x
kry = w[f'{ffn}key.weight_ry'] if wtype == torch.uint8 else x
vmx = w[f'{ffn}value.weight_mx'] if wtype == torch.uint8 else x
vrx = w[f'{ffn}value.weight_rx'] if wtype == torch.uint8 else x
vmy = w[f'{ffn}value.weight_my'] if wtype == torch.uint8 else x
vry = w[f'{ffn}value.weight_ry'] if wtype == torch.uint8 else x
rmx = w[f'{ffn}receptance.weight_mx'] if wtype == torch.uint8 else x
rrx = w[f'{ffn}receptance.weight_rx'] if wtype == torch.uint8 else x
rmy = w[f'{ffn}receptance.weight_my'] if wtype == torch.uint8 else x
rry = w[f'{ffn}receptance.weight_ry'] if wtype == torch.uint8 else x
x, state[i*5+4] = FFN(
x, state[i*5+4],
w[f'{bbb}ln2.weight'], w[f'{bbb}ln2.bias'],
w[f'{ffn}time_mix_k'], w[f'{ffn}time_mix_r'],
kw, vw, rw,
kmx, krx, kmy, kry,
vmx, vrx, vmy, vry,
rmx, rrx, rmy, rry,
)
if dd.stream:
del kw, vw, rw
if self.RESCALE_LAYER > 0:
if (i+1) % self.RESCALE_LAYER == 0:
x = x / 2
dd = self.strategy[args.n_layer]
x = x[-1,:] if (seq_mode and (not full_output)) else x
x = x.to(dtype=dd.atype, device=dd.device)
x = F.layer_norm(x, (args.n_embd,), weight=w['ln_out.weight'], bias=w['ln_out.bias'])
if w['head.weight'].dtype != torch.uint8:
x = x @ w['head.weight']
else:
if seq_mode and full_output:
x = self.mm8_seq(x, w['head.weight'], w['head.weight_mx'], w['head.weight_rx'], w['head.weight_my'], w['head.weight_ry'])
else:
x = self.mm8_one(x, w['head.weight'], w['head.weight_mx'], w['head.weight_rx'], w['head.weight_my'], w['head.weight_ry'])
return x.float(), state
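
Note: the core of the file deleted above (att_one and its variants) is a numerically stable WKV recurrence: aa and bb accumulate an exp-weighted numerator and denominator, and pp carries the running log-scale maximum so the exponentials never overflow. A distilled single-token sketch, every tensor of shape (n_embd,); t_decay is the pre-negated -exp(time_decay) prepared at load time:

import torch

def wkv_step(k, v, aa, bb, pp, t_decay, t_first):
    # output for the current token, boosted by the per-channel bonus t_first
    ww = t_first + k
    p = torch.maximum(pp, ww)
    e1, e2 = torch.exp(pp - p), torch.exp(ww - p)
    wkv = (e1 * aa + e2 * v) / (e1 * bb + e2)
    # decay the history, then fold the current token into the state
    ww = t_decay + pp
    p = torch.maximum(ww, k)
    e1, e2 = torch.exp(ww - p), torch.exp(k - p)
    return wkv, e1 * aa + e2 * v, e1 * bb + e2, p  # output, new (aa, bb, pp)

att_seq is this same step looped over T tokens, and cuda_wkv moves the loop into the custom CUDA kernel.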

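Note: likewise, the i8 strategy in the deleted file stores every 2-D weight as uint8 plus four rescaling vectors (per-axis minima mx/my and per-axis ranges rx/ry, with the ranges kept pre-divided by 16), and torch_mm8_one dequantizes on the fly as (q + 0.5) * ry * rx + my + mx. A round-trip sketch of the shape[0] > shape[1] branch:

import torch

def quantize_i8(w: torch.Tensor):
    # shift by per-axis minima, normalize by per-axis ranges, round to [0, 255]
    my = torch.amin(w, dim=1).unsqueeze(1)
    w = w - my
    mx = torch.amin(w, dim=0)
    w = w - mx
    rx = torch.amax(w, dim=0)
    w = w / rx
    ry = torch.amax(w, dim=1).unsqueeze(1)
    w = w / ry
    q = torch.clip(torch.floor(w * 256), min=0, max=255).to(torch.uint8)
    return q, mx, rx / 16, my, ry / 16  # ranges stored pre-divided by 16

w = torch.randn(1024, 256)  # tall matrix, so row minima are removed first
q, mx, rx, my, ry = quantize_i8(w)
w_hat = (q.float() + 0.5) * ry * rx + my + mx  # torch_mm8_one's dequantization
print(f"max abs reconstruction error: {(w_hat - w).abs().max():.4f}")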

@@ -1,7 +1,5 @@
{"text": "1:This is the first document."} {"text": "The following is an epic science fiction masterpiece that is immortalized, with delicate descriptions and grand depictions of interstellar civilization wars.\nChapter 1.\nAs I sit down to write here amidst the shadows of vine-leaves under the blue sky of southern Italy, it comes to me with a certain quality of astonishment that my participation in these amazing adventures of Mr. Cavor was, after all, the outcome of the purest accident. It might have been any one. I fell into these things at a time when I thought myself removed from the slightest possibility of disturbing experiences. I had gone to Lympne because I had imagined it the most uneventful place in the world. “Here, at any rate,” said I, “I shall find peace and a chance to work!”"}
{"text": "2:Hello\nWorld"} {"text": "Translate the following into Chinese.\n\nEnglish: What rooms do you have available?\nChinese: 你们有哪些房间可以提供"}
{"text": "3:1+1=2\n1+2=3\n2+2=4"} {"text": "User: Hello.\n\nAssistant: I'm here, meow~.\n\nUser: Can you tell some jokes?\n\nAssistant: Of course, master. What kind of jokes would you like to hear?"}
{"text": "4:You will be training the GPT version because it's paralleziable and faster to train."} {"text": "Instruction: Write a story using the following information\n\nInput: A man named Alex chops a tree down\n\nResponse: Once upon a time, there was a man named Alex who lived in the heart of the forest. He had always been fascinated by trees and spent most of his days exploring the forest and learning about its many wonders. One day, while wandering through the woods, he stumbled upon an old oak tree that stood tall and proud in the middle of a clearing."}
{"text": "5:Read the inference code in src/model.py and try using the final hidden state(.xx .aa .bb)"} {"text": "def get_args(args: Union[Sequence[str], None] = None):\n parser = argparse.ArgumentParser()\n group = parser.add_argument_group(title=\"server arguments\")\n group.add_argument(\n \"--port\",\n type=int,\n default=8000,\n help=\"port to run the server on (default: 8000)\",\n )\n group.add_argument(\n \"--host\",\n type=str,\n default=\"127.0.0.1\",\n help=\"host to run the server on (default: 127.0.0.1)\",\n )"}
{"text": "6:You can fine-tune the model with longer ctxLen and it can quickly adapt to longer ctxLens."}
{"text": "7:Consider RWKV 14B. The state has 200 vectors, that is, 5 vectors for each block: fp16 (xx), fp32 (aa), fp32 (bb), fp32 (pp), fp16 (xx)."}


@@ -246,5 +246,6 @@ if __name__ == "__main__":
try: try:
main() main()
except Exception as e: except Exception as e:
print(e)
with open("error.txt", "w") as f: with open("error.txt", "w") as f:
f.write(str(e)) f.write(str(e))


@@ -64,5 +64,6 @@ try:
torch.save(output_w, output) torch.save(output_w, output)
except Exception as e: except Exception as e:
print(e)
with open("error.txt", "w") as f: with open("error.txt", "w") as f:
f.write(str(e)) f.write(str(e))


@@ -264,7 +264,7 @@ if __name__ == "__main__":
# #
# Data = {args.data_file} ({args.data_type}), ProjDir = {args.proj_dir} # Data = {args.data_file} ({args.data_type}), ProjDir = {args.proj_dir}
# #
# Epoch = {args.epoch_begin} to {args.epoch_begin + args.epoch_count - 1} (will continue afterwards), save every {args.epoch_save} epoch # Epoch = {args.epoch_begin} to {args.epoch_begin + args.epoch_count - 1}, save every {args.epoch_save} epoch
# #
# Each "epoch" = {args.epoch_steps} steps, {samples_per_epoch} samples, {tokens_per_epoch} tokens # Each "epoch" = {args.epoch_steps} steps, {samples_per_epoch} samples, {tokens_per_epoch} tokens
# #


@@ -1,3 +1,3 @@
torch==1.13.1 torch==1.13.1
pytorch_lightning==1.9.5 pytorch_lightning==1.9.5
deepspeed deepspeed==0.11.2


@@ -1,9 +1,10 @@
<!DOCTYPE html> <!DOCTYPE html>
<html lang="en"> <html lang="en">
<head> <head>
<meta charset="UTF-8"/> <meta charset="UTF-8" />
<meta content="width=device-width, initial-scale=1.0" name="viewport"/> <meta content="width=device-width, initial-scale=1.0" name="viewport" />
<title>RWKV-Runner</title> <title>RWKV-Runner</title>
<link href="./src/assets/images/logo.png" rel="icon" type="image/x-icon">
</head> </head>
<body> <body>
<div id="root"></div> <div id="root"></div>


@@ -15,7 +15,7 @@
"@primer/octicons-react": "^19.1.0", "@primer/octicons-react": "^19.1.0",
"chart.js": "^4.3.0", "chart.js": "^4.3.0",
"classnames": "^2.3.2", "classnames": "^2.3.2",
"github-markdown-css": "^5.2.0", "file-saver": "^2.0.5",
"html-midi-player": "^1.5.0", "html-midi-player": "^1.5.0",
"i18next": "^22.4.15", "i18next": "^22.4.15",
"mobx": "^6.9.0", "mobx": "^6.9.0",
@@ -37,6 +37,7 @@
"uuid": "^9.0.0" "uuid": "^9.0.0"
}, },
"devDependencies": { "devDependencies": {
"@types/file-saver": "^2.0.7",
"@types/react": "^18.2.6", "@types/react": "^18.2.6",
"@types/react-beautiful-dnd": "^13.1.4", "@types/react-beautiful-dnd": "^13.1.4",
"@types/react-dom": "^18.2.4", "@types/react-dom": "^18.2.4",
@@ -74,12 +75,13 @@
} }
}, },
"node_modules/@babel/code-frame": { "node_modules/@babel/code-frame": {
"version": "7.22.5", "version": "7.22.13",
"resolved": "https://registry.npmjs.org/@babel/code-frame/-/code-frame-7.22.5.tgz", "resolved": "https://registry.npmjs.org/@babel/code-frame/-/code-frame-7.22.13.tgz",
"integrity": "sha512-Xmwn266vad+6DAqEB2A6V/CcZVp62BbwVmcOJc2RPuwih1kw02TjQvWVWlcKGbBPd+8/0V5DEkOcizRGYsspYQ==", "integrity": "sha512-XktuhWlJ5g+3TJXc5upd9Ks1HutSArik6jf2eAjYFyIOf4ej3RN+184cZbzDvbPnuTJIUhPKKJE3cIsYTiAT3w==",
"dev": true, "dev": true,
"dependencies": { "dependencies": {
"@babel/highlight": "^7.22.5" "@babel/highlight": "^7.22.13",
"chalk": "^2.4.2"
}, },
"engines": { "engines": {
"node": ">=6.9.0" "node": ">=6.9.0"
@@ -125,12 +127,12 @@
} }
}, },
"node_modules/@babel/generator": { "node_modules/@babel/generator": {
"version": "7.22.5", "version": "7.23.0",
"resolved": "https://registry.npmjs.org/@babel/generator/-/generator-7.22.5.tgz", "resolved": "https://registry.npmjs.org/@babel/generator/-/generator-7.23.0.tgz",
"integrity": "sha512-+lcUbnTRhd0jOewtFSedLyiPsD5tswKkbgcezOqqWFUVNEwoUTlpPOBmvhG7OXWLR4jMdv0czPGH5XbflnD1EA==", "integrity": "sha512-lN85QRR+5IbYrMWM6Y4pE/noaQtg4pNiqeNGX60eqOfo6gtEj6uw/JagelB8vVztSd7R6M5n1+PQkDbHbBRU4g==",
"dev": true, "dev": true,
"dependencies": { "dependencies": {
"@babel/types": "^7.22.5", "@babel/types": "^7.23.0",
"@jridgewell/gen-mapping": "^0.3.2", "@jridgewell/gen-mapping": "^0.3.2",
"@jridgewell/trace-mapping": "^0.3.17", "@jridgewell/trace-mapping": "^0.3.17",
"jsesc": "^2.5.1" "jsesc": "^2.5.1"
@@ -159,22 +161,22 @@
} }
}, },
"node_modules/@babel/helper-environment-visitor": { "node_modules/@babel/helper-environment-visitor": {
"version": "7.22.5", "version": "7.22.20",
"resolved": "https://registry.npmjs.org/@babel/helper-environment-visitor/-/helper-environment-visitor-7.22.5.tgz", "resolved": "https://registry.npmjs.org/@babel/helper-environment-visitor/-/helper-environment-visitor-7.22.20.tgz",
"integrity": "sha512-XGmhECfVA/5sAt+H+xpSg0mfrHq6FzNr9Oxh7PSEBBRUb/mL7Kz3NICXb194rCqAEdxkhPT1a88teizAFyvk8Q==", "integrity": "sha512-zfedSIzFhat/gFhWfHtgWvlec0nqB9YEIVrpuwjruLlXfUSnA8cJB0miHKwqDnQ7d32aKo2xt88/xZptwxbfhA==",
"dev": true, "dev": true,
"engines": { "engines": {
"node": ">=6.9.0" "node": ">=6.9.0"
} }
}, },
"node_modules/@babel/helper-function-name": { "node_modules/@babel/helper-function-name": {
"version": "7.22.5", "version": "7.23.0",
"resolved": "https://registry.npmjs.org/@babel/helper-function-name/-/helper-function-name-7.22.5.tgz", "resolved": "https://registry.npmjs.org/@babel/helper-function-name/-/helper-function-name-7.23.0.tgz",
"integrity": "sha512-wtHSq6jMRE3uF2otvfuD3DIvVhOsSNshQl0Qrd7qC9oQJzHvOL4qQXlQn2916+CXGywIjpGuIkoyZRRxHPiNQQ==", "integrity": "sha512-OErEqsrxjZTJciZ4Oo+eoZqeW9UIiOcuYKRJA4ZAgV9myA+pOXhhmpfNCKjEH/auVfEYVFJ6y1Tc4r0eIApqiw==",
"dev": true, "dev": true,
"dependencies": { "dependencies": {
"@babel/template": "^7.22.5", "@babel/template": "^7.22.15",
"@babel/types": "^7.22.5" "@babel/types": "^7.23.0"
}, },
"engines": { "engines": {
"node": ">=6.9.0" "node": ">=6.9.0"
@@ -245,9 +247,9 @@
} }
}, },
"node_modules/@babel/helper-split-export-declaration": { "node_modules/@babel/helper-split-export-declaration": {
"version": "7.22.5", "version": "7.22.6",
"resolved": "https://registry.npmjs.org/@babel/helper-split-export-declaration/-/helper-split-export-declaration-7.22.5.tgz", "resolved": "https://registry.npmjs.org/@babel/helper-split-export-declaration/-/helper-split-export-declaration-7.22.6.tgz",
"integrity": "sha512-thqK5QFghPKWLhAV321lxF95yCg2K3Ob5yw+M3VHWfdia0IkPXUtoLH8x/6Fh486QUvzhb8YOWHChTVen2/PoQ==", "integrity": "sha512-AsUnxuLhRYsisFiaJwvp1QF+I3KjD5FOxut14q/GzovUe6orHLesW2C7d754kRm53h5gqrz6sFl6sxc4BVtE/g==",
"dev": true, "dev": true,
"dependencies": { "dependencies": {
"@babel/types": "^7.22.5" "@babel/types": "^7.22.5"
@@ -266,9 +268,9 @@
} }
}, },
"node_modules/@babel/helper-validator-identifier": { "node_modules/@babel/helper-validator-identifier": {
"version": "7.22.5", "version": "7.22.20",
"resolved": "https://registry.npmjs.org/@babel/helper-validator-identifier/-/helper-validator-identifier-7.22.5.tgz", "resolved": "https://registry.npmjs.org/@babel/helper-validator-identifier/-/helper-validator-identifier-7.22.20.tgz",
"integrity": "sha512-aJXu+6lErq8ltp+JhkJUfk1MTGyuA4v7f3pA+BJ5HLfNC6nAQ0Cpi9uOquUj8Hehg0aUiHzWQbOVJGao6ztBAQ==", "integrity": "sha512-Y4OZ+ytlatR8AI+8KZfKuL5urKp7qey08ha31L8b3BwewJAoJamTzyvxPR/5D+KkdJCGPq/+8TukHBlY10FX9A==",
"dev": true, "dev": true,
"engines": { "engines": {
"node": ">=6.9.0" "node": ">=6.9.0"
@@ -298,13 +300,13 @@
} }
}, },
"node_modules/@babel/highlight": { "node_modules/@babel/highlight": {
"version": "7.22.5", "version": "7.22.20",
"resolved": "https://registry.npmjs.org/@babel/highlight/-/highlight-7.22.5.tgz", "resolved": "https://registry.npmjs.org/@babel/highlight/-/highlight-7.22.20.tgz",
"integrity": "sha512-BSKlD1hgnedS5XRnGOljZawtag7H1yPfQp0tdNJCHoH6AZ+Pcm9VvkrK59/Yy593Ypg0zMxH2BxD1VPYUQ7UIw==", "integrity": "sha512-dkdMCN3py0+ksCgYmGG8jKeGA/8Tk+gJwSYYlFGxG5lmhfKNoAy004YpLxpS1W2J8m/EK2Ew+yOs9pVRwO89mg==",
"dev": true, "dev": true,
"dependencies": { "dependencies": {
"@babel/helper-validator-identifier": "^7.22.5", "@babel/helper-validator-identifier": "^7.22.20",
"chalk": "^2.0.0", "chalk": "^2.4.2",
"js-tokens": "^4.0.0" "js-tokens": "^4.0.0"
}, },
"engines": { "engines": {
@@ -312,9 +314,9 @@
} }
}, },
"node_modules/@babel/parser": { "node_modules/@babel/parser": {
"version": "7.22.5", "version": "7.23.0",
"resolved": "https://registry.npmjs.org/@babel/parser/-/parser-7.22.5.tgz", "resolved": "https://registry.npmjs.org/@babel/parser/-/parser-7.23.0.tgz",
"integrity": "sha512-DFZMC9LJUG9PLOclRC32G63UXwzqS2koQC8dkx+PLdmt1xSePYpbT/NbsrJy8Q/muXz7o/h/d4A7Fuyixm559Q==", "integrity": "sha512-vvPKKdMemU85V9WE/l5wZEmImpCtLqbnTvqDS2U1fJ96KrxoW7KrXhNsNCblQlg8Ck4b85yxdTyelsMUgFUXiw==",
"dev": true, "dev": true,
"bin": { "bin": {
"parser": "bin/babel-parser.js" "parser": "bin/babel-parser.js"
@@ -365,33 +367,33 @@
} }
}, },
"node_modules/@babel/template": { "node_modules/@babel/template": {
"version": "7.22.5", "version": "7.22.15",
"resolved": "https://registry.npmjs.org/@babel/template/-/template-7.22.5.tgz", "resolved": "https://registry.npmjs.org/@babel/template/-/template-7.22.15.tgz",
"integrity": "sha512-X7yV7eiwAxdj9k94NEylvbVHLiVG1nvzCV2EAowhxLTwODV1jl9UzZ48leOC0sH7OnuHrIkllaBgneUykIcZaw==", "integrity": "sha512-QPErUVm4uyJa60rkI73qneDacvdvzxshT3kksGqlGWYdOTIUOwJ7RDUL8sGqslY1uXWSL6xMFKEXDS3ox2uF0w==",
"dev": true, "dev": true,
"dependencies": { "dependencies": {
"@babel/code-frame": "^7.22.5", "@babel/code-frame": "^7.22.13",
"@babel/parser": "^7.22.5", "@babel/parser": "^7.22.15",
"@babel/types": "^7.22.5" "@babel/types": "^7.22.15"
}, },
"engines": { "engines": {
"node": ">=6.9.0" "node": ">=6.9.0"
} }
}, },
"node_modules/@babel/traverse": { "node_modules/@babel/traverse": {
"version": "7.22.5", "version": "7.23.2",
"resolved": "https://registry.npmjs.org/@babel/traverse/-/traverse-7.22.5.tgz", "resolved": "https://registry.npmjs.org/@babel/traverse/-/traverse-7.23.2.tgz",
"integrity": "sha512-7DuIjPgERaNo6r+PZwItpjCZEa5vyw4eJGufeLxrPdBXBoLcCJCIasvK6pK/9DVNrLZTLFhUGqaC6X/PA007TQ==", "integrity": "sha512-azpe59SQ48qG6nu2CzcMLbxUudtN+dOM9kDbUqGq3HXUJRlo7i8fvPoxQUzYgLZ4cMVmuZgm8vvBpNeRhd6XSw==",
"dev": true, "dev": true,
"dependencies": { "dependencies": {
"@babel/code-frame": "^7.22.5", "@babel/code-frame": "^7.22.13",
"@babel/generator": "^7.22.5", "@babel/generator": "^7.23.0",
"@babel/helper-environment-visitor": "^7.22.5", "@babel/helper-environment-visitor": "^7.22.20",
"@babel/helper-function-name": "^7.22.5", "@babel/helper-function-name": "^7.23.0",
"@babel/helper-hoist-variables": "^7.22.5", "@babel/helper-hoist-variables": "^7.22.5",
"@babel/helper-split-export-declaration": "^7.22.5", "@babel/helper-split-export-declaration": "^7.22.6",
"@babel/parser": "^7.22.5", "@babel/parser": "^7.23.0",
"@babel/types": "^7.22.5", "@babel/types": "^7.23.0",
"debug": "^4.1.0", "debug": "^4.1.0",
"globals": "^11.1.0" "globals": "^11.1.0"
}, },
@@ -400,13 +402,13 @@
} }
}, },
"node_modules/@babel/types": { "node_modules/@babel/types": {
"version": "7.22.5", "version": "7.23.0",
"resolved": "https://registry.npmjs.org/@babel/types/-/types-7.22.5.tgz", "resolved": "https://registry.npmjs.org/@babel/types/-/types-7.23.0.tgz",
"integrity": "sha512-zo3MIHGOkPOfoRXitsgHLjEXmlDaD/5KU1Uzuc9GNiZPhSqVxVRtxuPaSBZDsYZ9qV88AjtMtWW7ww98loJ9KA==", "integrity": "sha512-0oIyUfKoI3mSqMvsxBdclDwxXKXAUA8v/apZbc+iSyARYou1o8ZGDxbUYyLFoW2arqS2jDGqJuZvv1d/io1axg==",
"dev": true, "dev": true,
"dependencies": { "dependencies": {
"@babel/helper-string-parser": "^7.22.5", "@babel/helper-string-parser": "^7.22.5",
"@babel/helper-validator-identifier": "^7.22.5", "@babel/helper-validator-identifier": "^7.22.20",
"to-fast-properties": "^2.0.0" "to-fast-properties": "^2.0.0"
}, },
"engines": { "engines": {
@@ -2279,6 +2281,12 @@
"@types/ms": "*" "@types/ms": "*"
} }
}, },
"node_modules/@types/file-saver": {
"version": "2.0.7",
"resolved": "https://registry.npmjs.org/@types/file-saver/-/file-saver-2.0.7.tgz",
"integrity": "sha512-dNKVfHd/jk0SkR/exKGj2ggkB45MAkzvWCaqLUUgkyjITkGNzH8H+yUwr+BLJUBjZOe9w8X3wgmXhZDRg1ED6A==",
"dev": true
},
"node_modules/@types/hast": { "node_modules/@types/hast": {
"version": "2.3.4", "version": "2.3.4",
"resolved": "https://registry.npmmirror.com/@types/hast/-/hast-2.3.4.tgz", "resolved": "https://registry.npmmirror.com/@types/hast/-/hast-2.3.4.tgz",
@@ -2288,9 +2296,9 @@
} }
}, },
"node_modules/@types/hoist-non-react-statics": { "node_modules/@types/hoist-non-react-statics": {
"version": "3.3.1", "version": "3.3.5",
"resolved": "https://registry.npmjs.org/@types/hoist-non-react-statics/-/hoist-non-react-statics-3.3.1.tgz", "resolved": "https://registry.npmjs.org/@types/hoist-non-react-statics/-/hoist-non-react-statics-3.3.5.tgz",
"integrity": "sha512-iMIqiko6ooLrTh1joXodJK5X9xeEALT1kM5G3ZLhD3hszxBdIEd5C75U834D9mLcINgD4OyZf5uQXjkuYydWvA==", "integrity": "sha512-SbcrWzkKBw2cdwRTwQAswfpB9g9LJWfjtUeW/jvNwbhC8cpmmNYVePa+ncbUe0rGTQ7G3Ff6mYUN2VMfLVr+Sg==",
"dependencies": { "dependencies": {
"@types/react": "*", "@types/react": "*",
"hoist-non-react-statics": "^3.3.0" "hoist-non-react-statics": "^3.3.0"
@@ -2371,9 +2379,9 @@
} }
}, },
"node_modules/@types/react-redux": { "node_modules/@types/react-redux": {
"version": "7.1.25", "version": "7.1.29",
"resolved": "https://registry.npmjs.org/@types/react-redux/-/react-redux-7.1.25.tgz", "resolved": "https://registry.npmjs.org/@types/react-redux/-/react-redux-7.1.29.tgz",
"integrity": "sha512-bAGh4e+w5D8dajd6InASVIyCo4pZLJ66oLb80F9OBLO1gKESbZcRCJpTT6uLXX+HAB57zw1WTdwJdAsewuTweg==", "integrity": "sha512-orHCOWqBBQ1LP1uD6JVdXL+ZRTEWhGGne+VOPcXef03rC+QYdzktLhxR3ozymPDyZK0CNCUuQs9tyQhfg1ku+w==",
"dependencies": { "dependencies": {
"@types/hoist-non-react-statics": "^3.3.0", "@types/hoist-non-react-statics": "^3.3.0",
"@types/react": "*", "@types/react": "*",
@@ -2649,10 +2657,24 @@
} }
}, },
"node_modules/caniuse-lite": { "node_modules/caniuse-lite": {
"version": "1.0.30001482", "version": "1.0.30001561",
"resolved": "https://registry.npmmirror.com/caniuse-lite/-/caniuse-lite-1.0.30001482.tgz", "resolved": "https://registry.npmjs.org/caniuse-lite/-/caniuse-lite-1.0.30001561.tgz",
"integrity": "sha512-F1ZInsg53cegyjroxLNW9DmrEQ1SuGRTO1QlpA0o2/6OpQ0gFeDRoq1yFmnr8Sakn9qwwt9DmbxHB6w167OSuQ==", "integrity": "sha512-NTt0DNoKe958Q0BE0j0c1V9jbUzhBxHIEJy7asmGrpE0yG63KTV7PLHPnK2E1O9RsQrQ081I3NLuXGS6zht3cw==",
"dev": true "dev": true,
"funding": [
{
"type": "opencollective",
"url": "https://opencollective.com/browserslist"
},
{
"type": "tidelift",
"url": "https://tidelift.com/funding/github/npm/caniuse-lite"
},
{
"type": "github",
"url": "https://github.com/sponsors/ai"
}
]
}, },
"node_modules/ccount": { "node_modules/ccount": {
"version": "2.0.1", "version": "2.0.1",
@@ -3232,6 +3254,11 @@
"resolved": "https://registry.npmjs.org/fft.js/-/fft.js-4.0.4.tgz", "resolved": "https://registry.npmjs.org/fft.js/-/fft.js-4.0.4.tgz",
"integrity": "sha512-f9c00hphOgeQTlDyavwTtu6RiK8AIFjD6+jvXkNkpeQ7rirK3uFWVpalkoS4LAwbdX7mfZ8aoBfFVQX1Re/8aw==" "integrity": "sha512-f9c00hphOgeQTlDyavwTtu6RiK8AIFjD6+jvXkNkpeQ7rirK3uFWVpalkoS4LAwbdX7mfZ8aoBfFVQX1Re/8aw=="
}, },
"node_modules/file-saver": {
"version": "2.0.5",
"resolved": "https://registry.npmjs.org/file-saver/-/file-saver-2.0.5.tgz",
"integrity": "sha512-P9bmyZ3h/PRG+Nzga+rbdI4OEpNDzAVyy74uVO9ATgzLK6VtAsYybF/+TOCvrc0MO793d6+42lLyZTw7/ArVzA=="
},
"node_modules/fill-range": { "node_modules/fill-range": {
"version": "7.0.1", "version": "7.0.1",
"resolved": "https://registry.npmmirror.com/fill-range/-/fill-range-7.0.1.tgz", "resolved": "https://registry.npmmirror.com/fill-range/-/fill-range-7.0.1.tgz",
@@ -3316,11 +3343,6 @@
"node": "6.* || 8.* || >= 10.*" "node": "6.* || 8.* || >= 10.*"
} }
}, },
"node_modules/github-markdown-css": {
"version": "5.2.0",
"resolved": "https://registry.npmmirror.com/github-markdown-css/-/github-markdown-css-5.2.0.tgz",
"integrity": "sha512-hq5RaCInSUZ48bImOZpkppW2/MT44StRgsbsZ8YA4vJFwLKB/Vo3k7R2t+pUGqO+ThG0QDMi96TewV/B3vyItg=="
},
"node_modules/glob": { "node_modules/glob": {
"version": "7.1.6", "version": "7.1.6",
"resolved": "https://registry.npmmirror.com/glob/-/glob-7.1.6.tgz", "resolved": "https://registry.npmmirror.com/glob/-/glob-7.1.6.tgz",
@@ -4591,10 +4613,24 @@
} }
}, },
"node_modules/postcss": { "node_modules/postcss": {
"version": "8.4.23", "version": "8.4.31",
"resolved": "https://registry.npmmirror.com/postcss/-/postcss-8.4.23.tgz", "resolved": "https://registry.npmjs.org/postcss/-/postcss-8.4.31.tgz",
"integrity": "sha512-bQ3qMcpF6A/YjR55xtoTr0jGOlnPOKAIMdOWiv0EIT6HVPEaJiJB4NLljSbiHoC2RX7DN5Uvjtpbg1NPdwv1oA==", "integrity": "sha512-PS08Iboia9mts/2ygV3eLpY5ghnUcfLV/EXTOW1E2qYxJKGGBUtNjN76FYHnMs36RmARn41bC0AZmn+rR0OVpQ==",
"dev": true, "dev": true,
"funding": [
{
"type": "opencollective",
"url": "https://opencollective.com/postcss/"
},
{
"type": "tidelift",
"url": "https://tidelift.com/funding/github/npm/postcss"
},
{
"type": "github",
"url": "https://github.com/sponsors/ai"
}
],
"dependencies": { "dependencies": {
"nanoid": "^3.3.6", "nanoid": "^3.3.6",
"picocolors": "^1.0.0", "picocolors": "^1.0.0",
@@ -4696,9 +4732,9 @@
"integrity": "sha512-kma4U7AFCTwpqq5twzC1YVIDXSqg6qQK6JN0smOw8fgRy1OkMi0CYSzFmsy6dnqSenamAtj0CyXMUJ1Mf6oROg==" "integrity": "sha512-kma4U7AFCTwpqq5twzC1YVIDXSqg6qQK6JN0smOw8fgRy1OkMi0CYSzFmsy6dnqSenamAtj0CyXMUJ1Mf6oROg=="
}, },
"node_modules/protobufjs": { "node_modules/protobufjs": {
"version": "6.11.3", "version": "6.11.4",
"resolved": "https://registry.npmjs.org/protobufjs/-/protobufjs-6.11.3.tgz", "resolved": "https://registry.npmjs.org/protobufjs/-/protobufjs-6.11.4.tgz",
"integrity": "sha512-xL96WDdCZYdU7Slin569tFX712BxsxslWwAfAhCYjQKGTq7dAU91Lomy6nLLhh/dyGhk/YH4TwTSRxTzhuHyZg==", "integrity": "sha512-5kQWPaJHi1WoCpjTGszzQ32PG2F4+wRY6BmAT4Vfw56Q2FZ4YZzK20xUYQH4YkfehY1e6QSICrJquM6xXZNcrw==",
"hasInstallScript": true, "hasInstallScript": true,
"dependencies": { "dependencies": {
"@protobufjs/aspromise": "^1.1.2", "@protobufjs/aspromise": "^1.1.2",


@@ -16,7 +16,7 @@
"@primer/octicons-react": "^19.1.0", "@primer/octicons-react": "^19.1.0",
"chart.js": "^4.3.0", "chart.js": "^4.3.0",
"classnames": "^2.3.2", "classnames": "^2.3.2",
"github-markdown-css": "^5.2.0", "file-saver": "^2.0.5",
"html-midi-player": "^1.5.0", "html-midi-player": "^1.5.0",
"i18next": "^22.4.15", "i18next": "^22.4.15",
"mobx": "^6.9.0", "mobx": "^6.9.0",
@@ -38,6 +38,7 @@
"uuid": "^9.0.0" "uuid": "^9.0.0"
}, },
"devDependencies": { "devDependencies": {
"@types/file-saver": "^2.0.7",
"@types/react": "^18.2.6", "@types/react": "^18.2.6",
"@types/react-beautiful-dnd": "^13.1.4", "@types/react-beautiful-dnd": "^13.1.4",
"@types/react-dom": "^18.2.4", "@types/react-dom": "^18.2.4",


@@ -26,18 +26,22 @@
import { FluentProvider, Tab, TabList, webDarkTheme, webLightTheme } from '@fluentui/react-components'; import { FluentProvider, Tab, TabList, webDarkTheme, webLightTheme } from '@fluentui/react-components';
import { FC, useEffect, useState } from 'react'; import { FC, useEffect, useState } from 'react';
import { Route, Routes, useLocation, useNavigate } from 'react-router'; import { Route, Routes, useLocation, useNavigate } from 'react-router';
import { pages } from './pages'; import { pages as clientPages } from './pages';
import { useMediaQuery } from 'usehooks-ts'; import { useMediaQuery } from 'usehooks-ts';
import commonStore from './stores/commonStore'; import commonStore from './stores/commonStore';
import { observer } from 'mobx-react-lite'; import { observer } from 'mobx-react-lite';
import { useTranslation } from 'react-i18next'; import { useTranslation } from 'react-i18next';
import { CustomToastContainer } from './components/CustomToastContainer'; import { CustomToastContainer } from './components/CustomToastContainer';
import { LazyImportComponent } from './components/LazyImportComponent';
const App: FC = observer(() => { const App: FC = observer(() => {
const { t } = useTranslation(); const { t } = useTranslation();
const navigate = useNavigate(); const navigate = useNavigate();
const location = useLocation(); const location = useLocation();
const mq = useMediaQuery('(min-width: 640px)'); const mq = useMediaQuery('(min-width: 640px)');
const pages = commonStore.platform === 'web' ? clientPages.filter(page =>
!['/configs', '/models', '/downloads', '/train', '/about'].some(path => page.path === path)
) : clientPages;
const [path, setPath] = useState<string>(pages[0].path); const [path, setPath] = useState<string>(pages[0].path);
@@ -82,7 +86,7 @@ const App: FC = observer(() => {
<div className="h-full w-full p-2 box-border overflow-y-hidden"> <div className="h-full w-full p-2 box-border overflow-y-hidden">
<Routes> <Routes>
{pages.map(({ path, element }, index) => ( {pages.map(({ path, element }, index) => (
<Route key={`${path}-${index}`} path={path} element={element} /> <Route key={`${path}-${index}`} path={path} element={<LazyImportComponent lazyChildren={element} />} />
))} ))}
</Routes> </Routes>
</div> </div>


@@ -100,7 +100,7 @@
"Model Config Exception": "モデル設定例外", "Model Config Exception": "モデル設定例外",
"Use Gitee Updates Source": "Gitee更新ソースを使用", "Use Gitee Updates Source": "Gitee更新ソースを使用",
"Use Custom CUDA kernel to Accelerate": "カスタムCUDAカーネルを使用して加速", "Use Custom CUDA kernel to Accelerate": "カスタムCUDAカーネルを使用して加速",
"Enabling this option can greatly improve inference speed and save some VRAM, but there may be compatibility issues. If it fails to start, please turn off this option.": "このオプションを有効にすると、推論速度が大幅に向上し、一部のVRAMを節約できますが、互換性の問題が生じる可能性があります。起動に失敗した場合は、このオプションをオフにしてください。", "Enabling this option can greatly improve inference speed and save some VRAM, but there may be compatibility issues (output garbled). If it fails to start, please turn off this option, or try to upgrade your gpu driver.": "このオプションを有効にすると、推論速度が大幅に向上し、一部のVRAMを節約できますが、互換性の問題 (文字化けを出力する) が生じる可能性があります。起動に失敗した場合は、このオプションを無効にするか、GPUドライバーをアップグレードしてみてください。",
"Supported custom cuda file not found": "対応しているカスタムCUDAファイルが見つかりません", "Supported custom cuda file not found": "対応しているカスタムCUDAファイルが見つかりません",
"Failed to copy custom cuda file": "カスタムCUDAファイルのコピーに失敗しました", "Failed to copy custom cuda file": "カスタムCUDAファイルのコピーに失敗しました",
"Downloading update, please wait. If it is not completed, please manually download the program from GitHub and replace the original program.": "更新をダウンロード中です、お待ちください。完了しない場合は、GitHubから手動でプログラムをダウンロードし、元のプログラムを置き換えてください。", "Downloading update, please wait. If it is not completed, please manually download the program from GitHub and replace the original program.": "更新をダウンロード中です、お待ちください。完了しない場合は、GitHubから手動でプログラムをダウンロードし、元のプログラムを置き換えてください。",
@@ -226,14 +226,14 @@
"Please select a LoRA model": "LoRAモデルを選択してください", "Please select a LoRA model": "LoRAモデルを選択してください",
"You are using sample data for training. For formal training, please make sure to create your own jsonl file.": "トレーニングにはサンプルデータを使用しています。正式なトレーニングのためには、自身でjsonlファイルを作成してください。", "You are using sample data for training. For formal training, please make sure to create your own jsonl file.": "トレーニングにはサンプルデータを使用しています。正式なトレーニングのためには、自身でjsonlファイルを作成してください。",
"WSL is not running, please retry. If it keeps happening, it means you may be using an outdated version of WSL, run \"wsl --update\" to update.": "WSLが実行されていません、もう一度試してください。これが続く場合、古いバージョンのWSLを使用している可能性があります。\"wsl --update\"を実行して更新してください。", "WSL is not running, please retry. If it keeps happening, it means you may be using an outdated version of WSL, run \"wsl --update\" to update.": "WSLが実行されていません、もう一度試してください。これが続く場合、古いバージョンのWSLを使用している可能性があります。\"wsl --update\"を実行して更新してください。",
"Memory is not enough, try to increase the virtual memory or use a smaller base model.": "メモリが不足しています、仮想メモリを増やすか小さなベースモデルを使用してみてください。", "Memory is not enough, try to increase the virtual memory (Swap of WSL) or use a smaller base model.": "メモリが不足しています、仮想メモリ (WSL Swap) を増やすか小さなベースモデルを使用してみてください。",
"VRAM is not enough": "ビデオRAMが不足しています", "VRAM is not enough": "ビデオRAMが不足しています",
"Training data is not enough, reduce context length or add more data for training": "トレーニングデータが不足しています、コンテキストの長さを減らすか、トレーニング用のデータをさらに追加してください", "Training data is not enough, reduce context length or add more data for training": "トレーニングデータが不足しています、コンテキストの長さを減らすか、トレーニング用のデータをさらに追加してください",
"You are using WSL 1 for training, please upgrade to WSL 2. e.g. Run \"wsl --set-version Ubuntu-22.04 2\"": "トレーニングにWSL 1を使用しています、WSL 2にアップグレードしてください。例:\"wsl --set-version Ubuntu-22.04 2\"を実行する", "You are using WSL 1 for training, please upgrade to WSL 2. e.g. Run \"wsl --set-version Ubuntu-22.04 2\"": "トレーニングにWSL 1を使用しています、WSL 2にアップグレードしてください。例:\"wsl --set-version Ubuntu-22.04 2\"を実行する",
"Matched CUDA is not installed": "対応するCUDAがインストールされていません", "Matched CUDA is not installed": "対応するCUDAがインストールされていません",
"Failed to convert data": "データの変換に失敗しました", "Failed to convert data": "データの変換に失敗しました",
"Failed to merge model": "モデルのマージに失敗しました", "Failed to merge model": "モデルのマージに失敗しました",
"The data path should be a directory or a file in jsonl format (more formats will be supported in the future).\n\nWhen you provide a directory path, all the txt files within that directory will be automatically converted into training data. This is commonly used for large-scale training in writing, code generation, or knowledge bases.\n\nThe jsonl format file can be referenced at https://github.com/Abel2076/json2binidx_tool/blob/main/sample.jsonl.\nYou can also write it similar to OpenAI's playground format, as shown in https://platform.openai.com/playground/p/default-chat.\nEven for multi-turn conversations, they must be written in a single line using `\\n` to indicate line breaks. If they are different dialogues or topics, they should be written in separate lines.": "データのパスはディレクトリまたはjsonl形式のファイルでなければなりません将来的にはより多くの形式がサポートされる予定です。ディレクトリパスを提供した場合、そのディレクトリ内のすべてのtxtファイルが自動的にトレーニングデータに変換されます。これは大規模なライティング、コード生成、または知識ベースのトレーニングで一般的に使用されます。jsonl形式のファイルは、https://github.com/Abel2076/json2binidx_tool/blob/main/sample.jsonl を参照してください。\nhttps://platform.openai.com/playground/p/default-chat のように、OpenAIのプレイグラウンド形式に似た形式で書くこともできます。複数ターンの対話であっても、一行で書く必要があり、行の区切りを示すために`\\n`を使用します。それらが異なる対話やトピックであれば、それらは別々の行に書かれるべきです。", "The data path should be a directory or a file in jsonl format (more formats will be supported in the future).\n\nWhen you provide a directory path, all the txt files within that directory will be automatically converted into training data. This is commonly used for large-scale training in writing, code generation, or knowledge bases.\n\nThe jsonl format file can be referenced at https://github.com/josStorer/RWKV-Runner/blob/master/finetune/data/sample.jsonl.\nYou can also write it similar to OpenAI's playground format, as shown in https://platform.openai.com/playground/p/default-chat.\nEven for multi-turn conversations, they must be written in a single line using `\\n` to indicate line breaks. If they are different dialogues or topics, they should be written in separate lines.": "データのパスはディレクトリまたはjsonl形式のファイルでなければなりません将来的にはより多くの形式がサポートされる予定です。ディレクトリパスを提供した場合、そのディレクトリ内のすべてのtxtファイルが自動的にトレーニングデータに変換されます。これは大規模なライティング、コード生成、または知識ベースのトレーニングで一般的に使用されます。jsonl形式のファイルは、https://github.com/josStorer/RWKV-Runner/blob/master/finetune/data/sample.jsonl を参照してください。\nhttps://platform.openai.com/playground/p/default-chat のように、OpenAIのプレイグラウンド形式に似た形式で書くこともできます。複数ターンの対話であっても、一行で書く必要があり、行の区切りを示すために`\\n`を使用します。それらが異なる対話やトピックであれば、それらは別々の行に書かれるべきです。",
"Size mismatch for blocks. You are attempting to continue training from the LoRA model, but it does not match the base model. Please set LoRA model to None.": "ブロックのサイズが一致しません。LoRAモデルからトレーニングを続けようとしていますが、それはベースモデルと一致しません。LoRAモデルをNoneに設定してください。", "Size mismatch for blocks. You are attempting to continue training from the LoRA model, but it does not match the base model. Please set LoRA model to None.": "ブロックのサイズが一致しません。LoRAモデルからトレーニングを続けようとしていますが、それはベースモデルと一致しません。LoRAモデルをNoneに設定してください。",
"Instruction: Write a story using the following information\n\nInput: A man named Alex chops a tree down\n\nResponse:": "Instruction: Write a story using the following information\n\nInput: アレックスという男が木を切り倒す\n\nResponse:", "Instruction: Write a story using the following information\n\nInput: A man named Alex chops a tree down\n\nResponse:": "Instruction: Write a story using the following information\n\nInput: アレックスという男が木を切り倒す\n\nResponse:",
"Composition": "作曲", "Composition": "作曲",
@@ -248,5 +248,24 @@
"Preview Only": "プレビューのみ", "Preview Only": "プレビューのみ",
"RAM": "RAM", "RAM": "RAM",
"VRAM": "VRAM", "VRAM": "VRAM",
"GPU Usage": "GPU使用率" "GPU Usage": "GPU使用率",
"Use Custom Tokenizer": "カスタムトークナイザーを使用する",
"Tokenizer Path (e.g. backend-python/rwkv_pip/20B_tokenizer.json)": "トークナイザーパス (例: backend-python/rwkv_pip/20B_tokenizer.json)",
"User Name": "ユーザー名",
"Assistant Name": "アシスタント名",
"Insert default system prompt at the beginning": "最初にデフォルトのシステムプロンプトを挿入",
"Format Content": "内容フォーマットの規格化",
"Add An Attachment (Accepts pdf, txt)": "添付ファイルを追加 (pdf, txtを受け付けます)",
"Uploading Attachment": "添付ファイルアップロード中",
"Remove Attachment": "添付ファイルを削除",
"The content of file": "ファイル",
"is as follows. When replying to me, consider the file content and respond accordingly:": "の内容は以下の通りです。私に返信する際は、ファイルの内容を考慮して適切に返信してください:",
"What's the file name": "ファイル名は何ですか",
"The file name is: ": "ファイル名は次のとおりです: ",
"Port is occupied. Change it in Configs page or close the program that occupies the port.": "ポートが占有されています。設定ページで変更するか、ポートを占有しているプログラムを終了してください。",
"Loading...": "読み込み中...",
"Hello, what can I do for you?": "こんにちは、何かお手伝いできますか?",
"Enable WebUI": "WebUIを有効化",
"Server is working on deployment mode, please close the terminal window manually": "サーバーはデプロイモードで動作しています、ターミナルウィンドウを手動で閉じてください",
"Server is working on deployment mode, please exit the program manually to stop the server": "サーバーはデプロイモードで動作しています、サーバーを停止するにはプログラムを手動で終了してください"
} }

View File

@@ -100,7 +100,7 @@
"Model Config Exception": "模型配置异常", "Model Config Exception": "模型配置异常",
"Use Gitee Updates Source": "使用Gitee更新源", "Use Gitee Updates Source": "使用Gitee更新源",
"Use Custom CUDA kernel to Accelerate": "使用自定义CUDA算子加速", "Use Custom CUDA kernel to Accelerate": "使用自定义CUDA算子加速",
"Enabling this option can greatly improve inference speed and save some VRAM, but there may be compatibility issues. If it fails to start, please turn off this option.": "开启这个选项能大大提升推理速度并节省显存,但可能存在兼容性问题,如果启动失败,请关闭此选项", "Enabling this option can greatly improve inference speed and save some VRAM, but there may be compatibility issues (output garbled). If it fails to start, please turn off this option, or try to upgrade your gpu driver.": "开启这个选项能大大提升推理速度并节省显存,但可能存在兼容性(回复乱码)问题,如果发生相关问题,请关闭此选项。或更新你的显卡驱动",
"Supported custom cuda file not found": "没有找到支持的自定义cuda文件", "Supported custom cuda file not found": "没有找到支持的自定义cuda文件",
"Failed to copy custom cuda file": "自定义cuda文件复制失败", "Failed to copy custom cuda file": "自定义cuda文件复制失败",
"Downloading update, please wait. If it is not completed, please manually download the program from GitHub and replace the original program.": "正在下载更新请等待。如果一直未完成请从Github手动下载并覆盖原程序", "Downloading update, please wait. If it is not completed, please manually download the program from GitHub and replace the original program.": "正在下载更新请等待。如果一直未完成请从Github手动下载并覆盖原程序",
@@ -226,14 +226,14 @@
"Please select a LoRA model": "请选择一个LoRA模型", "Please select a LoRA model": "请选择一个LoRA模型",
"You are using sample data for training. For formal training, please make sure to create your own jsonl file.": "你正在使用示例数据训练对于正式训练场合请务必创建你自己的jsonl训练数据", "You are using sample data for training. For formal training, please make sure to create your own jsonl file.": "你正在使用示例数据训练对于正式训练场合请务必创建你自己的jsonl训练数据",
"WSL is not running, please retry. If it keeps happening, it means you may be using an outdated version of WSL, run \"wsl --update\" to update.": "WSL没有运行请重试。如果一直出现此错误意味着你可能正在使用旧版本的WSL请在cmd执行\"wsl --update\"以更新", "WSL is not running, please retry. If it keeps happening, it means you may be using an outdated version of WSL, run \"wsl --update\" to update.": "WSL没有运行请重试。如果一直出现此错误意味着你可能正在使用旧版本的WSL请在cmd执行\"wsl --update\"以更新",
"Memory is not enough, try to increase the virtual memory or use a smaller base model.": "内存不足,尝试增加虚拟内存,或使用一个更小规模的基底模型", "Memory is not enough, try to increase the virtual memory (Swap of WSL) or use a smaller base model.": "内存不足,尝试增加虚拟内存(WSL Swap),或使用一个更小规模的基底模型",
"VRAM is not enough": "显存不足", "VRAM is not enough": "显存不足",
"Training data is not enough, reduce context length or add more data for training": "训练数据不足,请减小上下文长度或增加训练数据", "Training data is not enough, reduce context length or add more data for training": "训练数据不足,请减小上下文长度或增加训练数据",
"You are using WSL 1 for training, please upgrade to WSL 2. e.g. Run \"wsl --set-version Ubuntu-22.04 2\"": "你正在使用WSL 1进行训练请升级到WSL 2。例如运行\"wsl --set-version Ubuntu-22.04 2\"", "You are using WSL 1 for training, please upgrade to WSL 2. e.g. Run \"wsl --set-version Ubuntu-22.04 2\"": "你正在使用WSL 1进行训练请升级到WSL 2。例如运行\"wsl --set-version Ubuntu-22.04 2\"",
"Matched CUDA is not installed": "未安装匹配的CUDA", "Matched CUDA is not installed": "未安装匹配的CUDA",
"Failed to convert data": "数据转换失败", "Failed to convert data": "数据转换失败",
"Failed to merge model": "合并模型失败", "Failed to merge model": "合并模型失败",
"The data path should be a directory or a file in jsonl format (more formats will be supported in the future).\n\nWhen you provide a directory path, all the txt files within that directory will be automatically converted into training data. This is commonly used for large-scale training in writing, code generation, or knowledge bases.\n\nThe jsonl format file can be referenced at https://github.com/Abel2076/json2binidx_tool/blob/main/sample.jsonl.\nYou can also write it similar to OpenAI's playground format, as shown in https://platform.openai.com/playground/p/default-chat.\nEven for multi-turn conversations, they must be written in a single line using `\\n` to indicate line breaks. If they are different dialogues or topics, they should be written in separate lines.": "数据路径必须是一个文件夹或者jsonl格式文件 (未来会支持更多格式)\n\n当你填写的路径是一个文件夹时该文件夹内的所有txt文件会被自动转换为训练数据通常这用于大批量训练写作代码生成或知识库\n\njsonl文件的格式参考 https://github.com/Abel2076/json2binidx_tool/blob/main/sample.jsonl\n你也可以仿照openai的playground编写参考 https://platform.openai.com/playground/p/default-chat\n即使是多轮对话也必须写在一行用`\\n`表示换行,如果是不同对话或主题,则另起一行", "The data path should be a directory or a file in jsonl format (more formats will be supported in the future).\n\nWhen you provide a directory path, all the txt files within that directory will be automatically converted into training data. This is commonly used for large-scale training in writing, code generation, or knowledge bases.\n\nThe jsonl format file can be referenced at https://github.com/josStorer/RWKV-Runner/blob/master/finetune/data/sample.jsonl.\nYou can also write it similar to OpenAI's playground format, as shown in https://platform.openai.com/playground/p/default-chat.\nEven for multi-turn conversations, they must be written in a single line using `\\n` to indicate line breaks. If they are different dialogues or topics, they should be written in separate lines.": "数据路径必须是一个文件夹或者jsonl格式文件 (未来会支持更多格式)\n\n当你填写的路径是一个文件夹时该文件夹内的所有txt文件会被自动转换为训练数据通常这用于大批量训练写作代码生成或知识库\n\njsonl文件的格式参考 https://github.com/josStorer/RWKV-Runner/blob/master/finetune/data/sample.jsonl 以及 https://zhuanlan.zhihu.com/p/643433851\n你也可以仿照openai的playground编写参考 https://platform.openai.com/playground/p/default-chat\n即使是多轮对话也必须写在一行用`\\n`表示换行,如果是不同对话或主题,则另起一行",
"Size mismatch for blocks. You are attempting to continue training from the LoRA model, but it does not match the base model. Please set LoRA model to None.": "尺寸不匹配块。你正在尝试从LoRA模型继续训练但该LoRA模型与基底模型不匹配请将LoRA模型设为空", "Size mismatch for blocks. You are attempting to continue training from the LoRA model, but it does not match the base model. Please set LoRA model to None.": "尺寸不匹配块。你正在尝试从LoRA模型继续训练但该LoRA模型与基底模型不匹配请将LoRA模型设为空",
"Instruction: Write a story using the following information\n\nInput: A man named Alex chops a tree down\n\nResponse:": "Instruction: Write a story using the following information\n\nInput: 艾利克斯砍倒了一棵树\n\nResponse:", "Instruction: Write a story using the following information\n\nInput: A man named Alex chops a tree down\n\nResponse:": "Instruction: Write a story using the following information\n\nInput: 艾利克斯砍倒了一棵树\n\nResponse:",
"Composition": "作曲", "Composition": "作曲",
@@ -248,5 +248,24 @@
"Preview Only": "仅预览", "Preview Only": "仅预览",
"RAM": "内存", "RAM": "内存",
"VRAM": "显存", "VRAM": "显存",
"GPU Usage": "GPU占用" "GPU Usage": "GPU占用",
"Use Custom Tokenizer": "使用自定义Tokenizer",
"Tokenizer Path (e.g. backend-python/rwkv_pip/20B_tokenizer.json)": "Tokenizer路径 (例如: backend-python/rwkv_pip/20B_tokenizer.json)",
"User Name": "用户名称",
"Assistant Name": "AI名称",
"Insert default system prompt at the beginning": "在开头自动插入默认系统提示",
"Format Content": "规范格式",
"Add An Attachment (Accepts pdf, txt)": "添加一个附件 (支持pdf, txt)",
"Uploading Attachment": "正在上传附件",
"Remove Attachment": "移除附件",
"The content of file": "文件",
"is as follows. When replying to me, consider the file content and respond accordingly:": "内容如下。回复时考虑文件内容并做出相应回复:",
"What's the file name": "文件名是什么",
"The file name is: ": "文件名是:",
"Port is occupied. Change it in Configs page or close the program that occupies the port.": "端口被占用。请在配置页面更改端口,或关闭占用端口的程序",
"Loading...": "加载中...",
"Hello, what can I do for you?": "你好,有什么要我帮忙的吗?",
"Enable WebUI": "启用WebUI",
"Server is working on deployment mode, please close the terminal window manually": "服务器正在部署模式下运行,请手动关闭终端窗口",
"Server is working on deployment mode, please exit the program manually to stop the server": "服务器正在部署模式下运行,请手动退出程序以停止服务器"
} }
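
Both locale entries above state the same jsonl rule: one training sample per line, with `\n` escapes standing in for real line breaks inside a multi-turn dialogue. A minimal illustration, assuming the one-object-per-line {"text": ...} shape used by the referenced sample.jsonl files (the field name "text" is taken from those samples and is the only assumption here):

{"text": "User: Hello!\n\nAssistant: Hi, how can I help you?\n\nUser: Write a haiku.\n\nAssistant: ..."}
{"text": "A different dialogue or topic starts on its own line."}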

View File

@@ -1,4 +1,4 @@
import { FC, ReactElement } from 'react'; import React, { FC, ReactElement } from 'react';
import { import {
Button, Button,
Dialog, Dialog,
@@ -11,7 +11,9 @@ import {
} from '@fluentui/react-components'; } from '@fluentui/react-components';
import { ToolTipButton } from './ToolTipButton'; import { ToolTipButton } from './ToolTipButton';
import { useTranslation } from 'react-i18next'; import { useTranslation } from 'react-i18next';
import MarkdownRender from './MarkdownRender'; import { LazyImportComponent } from './LazyImportComponent';
const MarkdownRender = React.lazy(() => import('./MarkdownRender'));
export const DialogButton: FC<{ export const DialogButton: FC<{
text?: string | null text?: string | null
@@ -45,7 +47,9 @@ export const DialogButton: FC<{
<DialogContent> <DialogContent>
{ {
markdown ? markdown ?
<MarkdownRender>{contentText}</MarkdownRender> : <LazyImportComponent lazyChildren={MarkdownRender}>
{contentText}
</LazyImportComponent> :
contentText contentText
} }
</DialogContent> </DialogContent>

View File

@@ -0,0 +1,20 @@
import { FC, LazyExoticComponent, ReactNode, Suspense } from 'react';
import { useTranslation } from 'react-i18next';
interface LazyImportComponentProps {
lazyChildren: LazyExoticComponent<FC<any>>;
lazyProps?: any;
children?: ReactNode;
}
export const LazyImportComponent: FC<LazyImportComponentProps> = (props) => {
const { t } = useTranslation();
return (
<Suspense fallback={<div>{t('Loading...')}</div>}>
<props.lazyChildren {...props.lazyProps}>
{props.children}
</props.lazyChildren>
</Suspense>
);
};
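
The new wrapper centralizes the Suspense fallback for lazily imported components. A minimal usage sketch, mirroring the DialogButton change above (contentText stands in for whatever the dialog renders):

import React from 'react';
import { LazyImportComponent } from './LazyImportComponent';

const MarkdownRender = React.lazy(() => import('./MarkdownRender'));

// Shows the translated 'Loading...' string until the chunk arrives;
// lazyProps, when given, is spread onto the resolved component.
<LazyImportComponent lazyChildren={MarkdownRender}>
  {contentText}
</LazyImportComponent>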

View File

@@ -21,7 +21,7 @@ const Hyperlink: FC<any> = ({ href, children }) => {
); );
}; };
export const MarkdownRender: FC<ReactMarkdownOptions> = (props) => { const MarkdownRender: FC<ReactMarkdownOptions> = (props) => {
return ( return (
<div dir="auto" className="markdown-body"> <div dir="auto" className="markdown-body">
<ReactMarkdown <ReactMarkdown

View File

@@ -40,6 +40,8 @@ export const ReadButton: FC<{
voice = voices.find((v) => v.name.toLowerCase().includes('microsoft aria')); voice = voices.find((v) => v.name.toLowerCase().includes('microsoft aria'));
else if (lang === 'zh') else if (lang === 'zh')
voice = voices.find((v) => v.name.toLowerCase().includes('xiaoyi')); voice = voices.find((v) => v.name.toLowerCase().includes('xiaoyi'));
else if (lang === 'ja')
voice = voices.find((v) => v.name.toLowerCase().includes('nanami'));
if (!voice) voice = voices.find((v) => v.lang.substring(0, 2) === lang); if (!voice) voice = voices.find((v) => v.lang.substring(0, 2) === lang);
if (!voice) voice = voices.find((v) => v.lang === navigator.language); if (!voice) voice = voices.find((v) => v.lang === navigator.language);
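
The selection order this diff extends reads as a three-step fallback. A condensed sketch, with the name fragments taken from the code above (mapping 'microsoft aria' to the 'en' branch is inferred from the surrounding context, since that branch's condition is not shown):

const preferred: { [lang: string]: string } = { en: 'microsoft aria', zh: 'xiaoyi', ja: 'nanami' };

// 1) a curated voice per UI language, 2) any voice whose BCP-47 tag starts
// with the language code, 3) any voice matching the browser locale.
const pickVoice = (voices: SpeechSynthesisVoice[], lang: string) => {
  const hint = preferred[lang];
  return (hint && voices.find(v => v.name.toLowerCase().includes(hint)))
    || voices.find(v => v.lang.substring(0, 2) === lang)
    || voices.find(v => v.lang === navigator.language);
};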

View File

@@ -2,8 +2,8 @@ import React, { FC, MouseEventHandler, ReactElement } from 'react';
import commonStore, { ModelStatus } from '../stores/commonStore'; import commonStore, { ModelStatus } from '../stores/commonStore';
import { import {
AddToDownloadList, AddToDownloadList,
CopyFile,
FileExists, FileExists,
IsPortAvailable,
StartServer, StartServer,
StartWebGPUServer StartWebGPUServer
} from '../../wailsjs/go/backend_golang/App'; } from '../../wailsjs/go/backend_golang/App';
@@ -11,12 +11,12 @@ import { Button } from '@fluentui/react-components';
import { observer } from 'mobx-react-lite'; import { observer } from 'mobx-react-lite';
import { exit, getStatus, readRoot, switchModel, updateConfig } from '../apis'; import { exit, getStatus, readRoot, switchModel, updateConfig } from '../apis';
import { toast } from 'react-toastify'; import { toast } from 'react-toastify';
import { checkDependencies, getStrategy, getSupportedCustomCudaFile, toastWithButton } from '../utils'; import { checkDependencies, getStrategy, toastWithButton } from '../utils';
import { useTranslation } from 'react-i18next'; import { useTranslation } from 'react-i18next';
import { ToolTipButton } from './ToolTipButton'; import { ToolTipButton } from './ToolTipButton';
import { Play16Regular, Stop16Regular } from '@fluentui/react-icons'; import { Play16Regular, Stop16Regular } from '@fluentui/react-icons';
import { useNavigate } from 'react-router'; import { useNavigate } from 'react-router';
import { WindowShow } from '../../wailsjs/runtime/runtime'; import { WindowShow } from '../../wailsjs/runtime';
const mainButtonText = { const mainButtonText = {
[ModelStatus.Offline]: 'Run', [ModelStatus.Offline]: 'Run',
@@ -113,15 +113,23 @@ export const RunButton: FC<{ onClickRun?: MouseEventHandler, iconMode?: boolean
const port = modelConfig.apiParameters.apiPort; const port = modelConfig.apiParameters.apiPort;
await exit(1000).catch(() => { if (!await IsPortAvailable(port)) {
}); await exit(1000).catch(() => {
});
if (!await IsPortAvailable(port)) {
toast(t('Port is occupied. Change it in Configs page or close the program that occupies the port.'), { type: 'error' });
commonStore.setStatus({ status: ModelStatus.Offline });
return;
}
}
const startServer = webgpu ? const startServer = webgpu ?
(_: string, port: number, host: string) => StartWebGPUServer(port, host) (_: string, port: number, host: string) => StartWebGPUServer(port, host)
: StartServer; : StartServer;
const isUsingCudaBeta = modelConfig.modelParameters.device === 'CUDA-Beta';
startServer(commonStore.settings.customPythonPath, port, commonStore.settings.host !== '127.0.0.1' ? '0.0.0.0' : '127.0.0.1', startServer(commonStore.settings.customPythonPath, port, commonStore.settings.host !== '127.0.0.1' ? '0.0.0.0' : '127.0.0.1',
modelConfig.modelParameters.device === 'CUDA-Beta' !!modelConfig.enableWebUI, isUsingCudaBeta
).catch((e) => { ).catch((e) => {
const errMsg = e.message || e; const errMsg = e.message || e;
if (errMsg.includes('path contains space')) if (errMsg.includes('path contains space'))
@@ -162,22 +170,26 @@ export const RunButton: FC<{ onClickRun?: MouseEventHandler, iconMode?: boolean
if ((modelConfig.modelParameters.device.includes('CUDA') || modelConfig.modelParameters.device === 'Custom') if ((modelConfig.modelParameters.device.includes('CUDA') || modelConfig.modelParameters.device === 'Custom')
&& modelConfig.modelParameters.useCustomCuda && !strategy.includes('fp32')) { && modelConfig.modelParameters.useCustomCuda && !strategy.includes('fp32')) {
if (commonStore.platform === 'windows') { if (commonStore.platform === 'windows') {
customCudaFile = getSupportedCustomCudaFile(); // this part is currently unused because there's no longer a need to use different kernels for different GPUs, but it might still be needed in the future
if (customCudaFile) { //
FileExists('./py310/Lib/site-packages/rwkv/model.py').then((exist) => { // customCudaFile = getSupportedCustomCudaFile(isUsingCudaBeta);
// defensive measure. As Python has already been launched, will only take effect the next time it runs. // if (customCudaFile) {
if (!exist) CopyFile('./backend-python/wkv_cuda_utils/wkv_cuda_model.py', './py310/Lib/site-packages/rwkv/model.py'); // let kernelTargetPath: string;
}); // if (isUsingCudaBeta)
await CopyFile(customCudaFile, './py310/Lib/site-packages/rwkv/wkv_cuda.pyd').catch(() => { // kernelTargetPath = './backend-python/rwkv_pip/beta/wkv_cuda.pyd';
FileExists('./py310/Lib/site-packages/rwkv/wkv_cuda.pyd').then((exist) => { // else
if (!exist) { // kernelTargetPath = './backend-python/rwkv_pip/wkv_cuda.pyd';
customCudaFile = ''; // await CopyFile(customCudaFile, kernelTargetPath).catch(() => {
toast(t('Failed to copy custom cuda file'), { type: 'error' }); // FileExists(kernelTargetPath).then((exist) => {
} // if (!exist) {
}); // customCudaFile = '';
}); // toast(t('Failed to copy custom cuda file'), { type: 'error' });
} else // }
toast(t('Supported custom cuda file not found'), { type: 'warning' }); // });
// });
// } else
// toast(t('Supported custom cuda file not found'), { type: 'warning' });
customCudaFile = 'any';
} else { } else {
customCudaFile = 'any'; customCudaFile = 'any';
} }
@@ -186,7 +198,9 @@ export const RunButton: FC<{ onClickRun?: MouseEventHandler, iconMode?: boolean
switchModel({ switchModel({
model: modelPath, model: modelPath,
strategy: strategy, strategy: strategy,
customCuda: customCudaFile !== '' tokenizer: modelConfig.modelParameters.useCustomTokenizer ? modelConfig.modelParameters.customTokenizer : undefined,
customCuda: customCudaFile !== '',
deploy: modelConfig.enableWebUI
}).then(async (r) => { }).then(async (r) => {
if (r.ok) { if (r.ok) {
commonStore.setStatus({ status: ModelStatus.Working }); commonStore.setStatus({ status: ModelStatus.Working });
@@ -233,7 +247,13 @@ export const RunButton: FC<{ onClickRun?: MouseEventHandler, iconMode?: boolean
}, 1000); }, 1000);
} else { } else {
commonStore.setStatus({ status: ModelStatus.Offline }); commonStore.setStatus({ status: ModelStatus.Offline });
exit(); exit().then(r => {
if (r.status === 403)
if (commonStore.platform !== 'linux')
toast(t('Server is working on deployment mode, please close the terminal window manually'), { type: 'info' });
else
toast(t('Server is working on deployment mode, please exit the program manually to stop the server'), { type: 'info' });
});
} }
}; };
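
Taken together, the RunButton changes implement a three-step start sequence: probe the port, only then ask a previous server to exit, and send the expanded /switch-model payload. A condensed sketch of that sequence inside the async click handler (the model path and strategy values are illustrative; every other name appears in the diff):

import { IsPortAvailable } from '../../wailsjs/go/backend_golang/App';
import { exit, switchModel } from '../apis';

async function startModel() {
  const p = modelConfig.modelParameters;
  const port = modelConfig.apiParameters.apiPort;
  if (!await IsPortAvailable(port)) {
    // A server of ours may already hold the port; ask it to quit, then re-check.
    await exit(1000).catch(() => {});
    if (!await IsPortAvailable(port)) {
      toast(t('Port is occupied. Change it in Configs page or close the program that occupies the port.'), { type: 'error' });
      commonStore.setStatus({ status: ModelStatus.Offline });
      return;
    }
  }
  await switchModel({
    model: './models/RWKV-4-World-3B.pth', // illustrative
    strategy: 'cuda fp16',                 // illustrative
    tokenizer: p.useCustomTokenizer ? p.customTokenizer : undefined,
    customCuda: customCudaFile !== '',
    deploy: modelConfig.enableWebUI        // in deploy mode, /exit later answers 403 (handled above)
  });
}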

View File

@@ -25,7 +25,8 @@ export const WorkHeader: FC = observer(() => {
const { t } = useTranslation(); const { t } = useTranslation();
const port = commonStore.getCurrentModelConfig().apiParameters.apiPort; const port = commonStore.getCurrentModelConfig().apiParameters.apiPort;
return ( return commonStore.platform === 'web' ?
<div /> :
<div className="flex flex-col gap-1"> <div className="flex flex-col gap-1">
<div className="flex justify-between items-center"> <div className="flex justify-between items-center">
<div className="flex items-center gap-2"> <div className="flex items-center gap-2">
@@ -42,5 +43,5 @@ export const WorkHeader: FC = observer(() => {
</Text> </Text>
<Divider style={{ flexGrow: 0 }} /> <Divider style={{ flexGrow: 0 }} />
</div> </div>
); ;
}); });

View File

@@ -1,3 +1,4 @@
import './webWails';
import React from 'react'; import React from 'react';
import { createRoot } from 'react-dom/client'; import { createRoot } from 'react-dom/client';
import './style.scss'; import './style.scss';
@@ -6,7 +7,6 @@ import App from './App';
import { HashRouter } from 'react-router-dom'; import { HashRouter } from 'react-router-dom';
import { startup } from './startup'; import { startup } from './startup';
import './_locales/i18n-react'; import './_locales/i18n-react';
import 'html-midi-player';
import { WindowShow } from '../wailsjs/runtime'; import { WindowShow } from '../wailsjs/runtime';
startup().then(() => { startup().then(() => {

View File

@@ -5,9 +5,7 @@ import MarkdownRender from '../components/MarkdownRender';
import { observer } from 'mobx-react-lite'; import { observer } from 'mobx-react-lite';
import commonStore from '../stores/commonStore'; import commonStore from '../stores/commonStore';
export type AboutContent = { [lang: string]: string } const About: FC = observer(() => {
export const About: FC = observer(() => {
const { t } = useTranslation(); const { t } = useTranslation();
const lang: string = commonStore.settings.language; const lang: string = commonStore.settings.language;
@@ -21,3 +19,5 @@ export const About: FC = observer(() => {
} /> } />
); );
}); });
export default About;

View File

@@ -10,56 +10,34 @@ import { KebabHorizontalIcon, PencilIcon, SyncIcon, TrashIcon } from '@primer/oc
import logo from '../assets/images/logo.png'; import logo from '../assets/images/logo.png';
import MarkdownRender from '../components/MarkdownRender'; import MarkdownRender from '../components/MarkdownRender';
import { ToolTipButton } from '../components/ToolTipButton'; import { ToolTipButton } from '../components/ToolTipButton';
import { ArrowCircleUp28Regular, Delete28Regular, RecordStop28Regular, Save28Regular } from '@fluentui/react-icons'; import {
ArrowCircleUp28Regular,
ArrowClockwise16Regular,
Attach16Regular,
Delete28Regular,
Dismiss16Regular,
RecordStop28Regular,
Save28Regular
} from '@fluentui/react-icons';
import { CopyButton } from '../components/CopyButton'; import { CopyButton } from '../components/CopyButton';
import { ReadButton } from '../components/ReadButton'; import { ReadButton } from '../components/ReadButton';
import { toast } from 'react-toastify'; import { toast } from 'react-toastify';
import { WorkHeader } from '../components/WorkHeader'; import { WorkHeader } from '../components/WorkHeader';
import { DialogButton } from '../components/DialogButton'; import { DialogButton } from '../components/DialogButton';
import { OpenFileFolder, OpenSaveFileDialog } from '../../wailsjs/go/backend_golang/App'; import { OpenFileFolder, OpenOpenFileDialog, OpenSaveFileDialog } from '../../wailsjs/go/backend_golang/App';
import { toastWithButton } from '../utils'; import { absPathAsset, bytesToReadable, getServerRoot, toastWithButton } from '../utils';
import { PresetsButton } from './PresetsManager/PresetsButton'; import { PresetsButton } from './PresetsManager/PresetsButton';
import { useMediaQuery } from 'usehooks-ts'; import { useMediaQuery } from 'usehooks-ts';
import { botName, ConversationMessage, MessageType, userName, welcomeUuid } from '../types/chat';
export const userName = 'M E'; let chatSseControllers: {
export const botName = 'A I'; [id: string]: AbortController
} = {};
export const welcomeUuid = 'welcome'; const MoreUtilsButton: FC<{
uuid: string,
export enum MessageType { setEditing: (editing: boolean) => void
Normal, }> = observer(({
Error
}
export type Side = 'left' | 'right'
export type Color = 'neutral' | 'brand' | 'colorful'
export type MessageItem = {
sender: string,
type: MessageType,
color: Color,
avatarImg?: string,
time: string,
content: string,
side: Side,
done: boolean
}
export type Conversation = {
[uuid: string]: MessageItem
}
export type Role = 'assistant' | 'user' | 'system';
export type ConversationMessage = {
role: Role;
content: string;
}
let chatSseController: AbortController | null = null;
const MoreUtilsButton: FC<{ uuid: string, setEditing: (editing: boolean) => void }> = observer(({
uuid, uuid,
setEditing setEditing
}) => { }) => {
@@ -83,13 +61,15 @@ const MoreUtilsButton: FC<{ uuid: string, setEditing: (editing: boolean) => void
onClick={() => { onClick={() => {
commonStore.conversationOrder.splice(commonStore.conversationOrder.indexOf(uuid), 1); commonStore.conversationOrder.splice(commonStore.conversationOrder.indexOf(uuid), 1);
delete commonStore.conversation[uuid]; delete commonStore.conversation[uuid];
commonStore.setAttachment(uuid, null);
}} /> }} />
</MenuPopover> </MenuPopover>
</Menu>; </Menu>;
}); });
const ChatMessageItem: FC<{ const ChatMessageItem: FC<{
uuid: string, onSubmit: (message: string | null, answerId: string | null, uuid: string,
onSubmit: (message: string | null, answerId: string | null,
startUuid: string | null, endUuid: string | null, includeEndUuid: boolean) => void startUuid: string | null, endUuid: string | null, includeEndUuid: boolean) => void
}> = observer(({ uuid, onSubmit }) => { }> = observer(({ uuid, onSubmit }) => {
const { t } = useTranslation(); const { t } = useTranslation();
@@ -114,6 +94,13 @@ const ChatMessageItem: FC<{
} }
}; };
let avatarImg: string | undefined;
if (commonStore.activePreset && messageItem.sender === botName) {
avatarImg = absPathAsset(commonStore.activePreset.avatarImg);
} else if (messageItem.avatarImg) {
avatarImg = messageItem.avatarImg;
}
return <div return <div
className={classnames( className={classnames(
'flex gap-2 mb-2 overflow-hidden', 'flex gap-2 mb-2 overflow-hidden',
@@ -131,7 +118,7 @@ const ChatMessageItem: FC<{
<Avatar <Avatar
color={messageItem.color} color={messageItem.color}
name={messageItem.sender} name={messageItem.sender}
image={(commonStore.activePreset && messageItem.sender === botName) ? { src: commonStore.activePreset.avatarImg } : messageItem.avatarImg ? { src: messageItem.avatarImg } : undefined} image={avatarImg ? { src: avatarImg } : undefined}
/> />
<div <div
className={classnames( className={classnames(
@@ -142,13 +129,31 @@ const ChatMessageItem: FC<{
)} )}
> >
{!editing ? {!editing ?
<MarkdownRender>{messageItem.content}</MarkdownRender> : <div className="flex flex-col">
<MarkdownRender>{messageItem.content}</MarkdownRender>
{uuid in commonStore.attachments &&
<div className="flex grow">
<div className="grow" />
<ToolTipButton className="whitespace-nowrap"
text={
commonStore.attachments[uuid][0].name.replace(
new RegExp('(^[^\\.]{5})[^\\.]+'), '$1...')
}
desc={`${commonStore.attachments[uuid][0].name} (${bytesToReadable(commonStore.attachments[uuid][0].size)})`}
size="small" shape="circular" appearance="secondary" />
</div>
}
</div> :
<Textarea ref={textareaRef} <Textarea ref={textareaRef}
className="grow" className="grow"
style={{ minWidth: 0 }} style={{ minWidth: 0 }}
value={messageItem.content} value={messageItem.content}
onChange={(e) => { onChange={(e) => {
messageItem.content = e.target.value; messageItem.content = e.target.value;
commonStore.conversation[uuid].type = MessageType.Normal;
commonStore.conversation[uuid].done = true;
commonStore.setConversation(commonStore.conversation);
commonStore.setConversationOrder([...commonStore.conversationOrder]);
}} }}
onBlur={() => { onBlur={() => {
setEditingInner(false); setEditingInner(false);
@@ -166,6 +171,10 @@ const ChatMessageItem: FC<{
messageItem.sender === botName && uuid !== welcomeUuid && messageItem.sender === botName && uuid !== welcomeUuid &&
<ToolTipButton desc={t('Retry')} size="small" appearance="subtle" <ToolTipButton desc={t('Retry')} size="small" appearance="subtle"
icon={<SyncIcon />} onClick={() => { icon={<SyncIcon />} onClick={() => {
if (uuid in chatSseControllers) {
chatSseControllers[uuid].abort();
delete chatSseControllers[uuid];
}
onSubmit(null, uuid, null, uuid, false); onSubmit(null, uuid, null, uuid, false);
}} /> }} />
} }
@@ -187,15 +196,7 @@ const ChatPanel: FC = observer(() => {
const currentConfig = commonStore.getCurrentModelConfig(); const currentConfig = commonStore.getCurrentModelConfig();
const apiParams = currentConfig.apiParameters; const apiParams = currentConfig.apiParameters;
const port = apiParams.apiPort; const port = apiParams.apiPort;
const generating: boolean = Object.keys(chatSseControllers).length > 0;
let lastMessageId: string;
let generating: boolean = false;
if (commonStore.conversationOrder.length > 0) {
lastMessageId = commonStore.conversationOrder[commonStore.conversationOrder.length - 1];
const lastMessage = commonStore.conversation[lastMessageId];
if (lastMessage.sender === botName)
generating = !lastMessage.done;
}
useEffect(() => { useEffect(() => {
if (inputRef.current) if (inputRef.current)
@@ -213,7 +214,7 @@ const ChatPanel: FC = observer(() => {
color: 'colorful', color: 'colorful',
avatarImg: logo, avatarImg: logo,
time: new Date().toISOString(), time: new Date().toISOString(),
content: t('Hello! I\'m RWKV, an open-source and commercially usable large language model.'), content: commonStore.platform === 'web' ? t('Hello, what can I do for you?') : t('Hello! I\'m RWKV, an open-source and commercially usable large language model.'),
side: 'left', side: 'left',
done: true done: true
} }
@@ -230,7 +231,7 @@ const ChatPanel: FC = observer(() => {
e.stopPropagation(); e.stopPropagation();
if (e.type === 'click' || (e.keyCode === 13 && !e.shiftKey)) { if (e.type === 'click' || (e.keyCode === 13 && !e.shiftKey)) {
e.preventDefault(); e.preventDefault();
if (commonStore.status.status === ModelStatus.Offline && !commonStore.settings.apiUrl) { if (commonStore.status.status === ModelStatus.Offline && !commonStore.settings.apiUrl && commonStore.platform !== 'web') {
toast(t('Please click the button in the top right corner to start the model'), { type: 'warning' }); toast(t('Please click the button in the top right corner to start the model'), { type: 'warning' });
return; return;
} }
@@ -260,6 +261,11 @@ const ChatPanel: FC = observer(() => {
commonStore.setConversation(commonStore.conversation); commonStore.setConversation(commonStore.conversation);
commonStore.conversationOrder.push(newId); commonStore.conversationOrder.push(newId);
commonStore.setConversationOrder(commonStore.conversationOrder); commonStore.setConversationOrder(commonStore.conversationOrder);
if (commonStore.currentTempAttachment) {
commonStore.setAttachment(newId, [commonStore.currentTempAttachment]);
commonStore.setCurrentTempAttachment(null);
}
} }
let startIndex = startUuid ? commonStore.conversationOrder.indexOf(startUuid) : 0; let startIndex = startUuid ? commonStore.conversationOrder.indexOf(startUuid) : 0;
@@ -271,6 +277,17 @@ const ChatPanel: FC = observer(() => {
if (uuid === welcomeUuid) if (uuid === welcomeUuid)
return; return;
const messageItem = commonStore.conversation[uuid]; const messageItem = commonStore.conversation[uuid];
if (uuid in commonStore.attachments) {
const attachment = commonStore.attachments[uuid][0];
messages.push({
role: 'user',
content: t('The content of file') + ` "${attachment.name}" `
+ t('is as follows. When replying to me, consider the file content and respond accordingly:')
+ '\n\n' + attachment.content
});
messages.push({ role: 'user', content: t('What\'s the file name') });
messages.push({ role: 'assistant', content: t('The file name is: ') + attachment.name });
}
if (messageItem.done && messageItem.type === MessageType.Normal && messageItem.sender === userName) { if (messageItem.done && messageItem.type === MessageType.Normal && messageItem.sender === userName) {
messages.push({ role: 'user', content: messageItem.content }); messages.push({ role: 'user', content: messageItem.content });
} else if (messageItem.done && messageItem.type === MessageType.Normal && messageItem.sender === botName) { } else if (messageItem.done && messageItem.type === MessageType.Normal && messageItem.sender === botName) {
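
(For a message with an attachment, the loop above prepends three synthetic turns before the user's actual text. With a file named notes.txt, and the t() keys rendered in English, the pushed messages look like this; the file name and content are illustrative:)

messages.push(
  { role: 'user', content: 'The content of file "notes.txt" is as follows. When replying to me, consider the file content and respond accordingly:\n\n<file text>' },
  { role: 'user', content: "What's the file name" },
  { role: 'assistant', content: 'The file name is: notes.txt' }
);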
@@ -296,11 +313,10 @@ const ChatPanel: FC = observer(() => {
commonStore.setConversationOrder(commonStore.conversationOrder); commonStore.setConversationOrder(commonStore.conversationOrder);
setTimeout(scrollToBottom); setTimeout(scrollToBottom);
let answer = ''; let answer = '';
chatSseController = new AbortController(); const chatSseController = new AbortController();
fetchEventSource( // https://api.openai.com/v1/chat/completions || http://127.0.0.1:${port}/chat/completions chatSseControllers[answerId] = chatSseController;
commonStore.settings.apiUrl ? fetchEventSource( // https://api.openai.com/v1/chat/completions || http://127.0.0.1:${port}/v1/chat/completions
commonStore.settings.apiUrl + '/v1/chat/completions' : getServerRoot(port) + '/v1/chat/completions',
`http://127.0.0.1:${port}/chat/completions`,
{ {
method: 'POST', method: 'POST',
headers: { headers: {
@@ -312,12 +328,17 @@ const ChatPanel: FC = observer(() => {
stream: true, stream: true,
model: commonStore.settings.apiChatModelName, // 'gpt-3.5-turbo' model: commonStore.settings.apiChatModelName, // 'gpt-3.5-turbo'
temperature: apiParams.temperature, temperature: apiParams.temperature,
top_p: apiParams.topP top_p: apiParams.topP,
user_name: commonStore.activePreset?.userName || undefined,
assistant_name: commonStore.activePreset?.assistantName || undefined,
presystem: commonStore.activePreset?.presystem && undefined
}), }),
signal: chatSseController?.signal, signal: chatSseController?.signal,
onmessage(e) { onmessage(e) {
scrollToBottom(); scrollToBottom();
if (e.data.trim() === '[DONE]') { if (e.data.trim() === '[DONE]') {
if (answerId! in chatSseControllers)
delete chatSseControllers[answerId!];
commonStore.conversation[answerId!].done = true; commonStore.conversation[answerId!].done = true;
commonStore.conversation[answerId!].content = commonStore.conversation[answerId!].content.trim(); commonStore.conversation[answerId!].content = commonStore.conversation[answerId!].content.trim();
commonStore.setConversation(commonStore.conversation); commonStore.setConversation(commonStore.conversation);
@@ -347,9 +368,13 @@ const ChatPanel: FC = observer(() => {
} }
}, },
onclose() { onclose() {
if (answerId! in chatSseControllers)
delete chatSseControllers[answerId!];
console.log('Connection closed'); console.log('Connection closed');
}, },
onerror(err) { onerror(err) {
if (answerId! in chatSseControllers)
delete chatSseControllers[answerId!];
commonStore.conversation[answerId!].type = MessageType.Error; commonStore.conversation[answerId!].type = MessageType.Error;
commonStore.conversation[answerId!].done = true; commonStore.conversation[answerId!].done = true;
err = err.message || err; err = err.message || err;
@@ -377,33 +402,141 @@ const ChatPanel: FC = observer(() => {
size={mq ? 'large' : 'small'} shape="circular" appearance="subtle" title={t('Clear')} size={mq ? 'large' : 'small'} shape="circular" appearance="subtle" title={t('Clear')}
contentText={t('Are you sure you want to clear the conversation? It cannot be undone.')} contentText={t('Are you sure you want to clear the conversation? It cannot be undone.')}
onConfirm={() => { onConfirm={() => {
if (generating) if (generating) {
chatSseController?.abort(); for (const id in chatSseControllers) {
chatSseControllers[id].abort();
}
chatSseControllers = {};
}
commonStore.setConversation({}); commonStore.setConversation({});
commonStore.setConversationOrder([]); commonStore.setConversationOrder([]);
}} /> }} />
<Textarea <div className="relative flex grow">
ref={inputRef} <Textarea
style={{ minWidth: 0 }} ref={inputRef}
className="grow" style={{ minWidth: 0 }}
resize="vertical" className="grow"
placeholder={t('Type your message here')!} resize="vertical"
value={commonStore.currentInput} placeholder={t('Type your message here')!}
onChange={(e) => commonStore.setCurrentInput(e.target.value)} value={commonStore.currentInput}
onKeyDown={handleKeyDownOrClick} onChange={(e) => commonStore.setCurrentInput(e.target.value)}
/> onKeyDown={handleKeyDownOrClick}
/>
<div className="absolute right-2 bottom-2">
{!commonStore.currentTempAttachment ?
<ToolTipButton
desc={commonStore.attachmentUploading ?
t('Uploading Attachment') :
t('Add An Attachment (Accepts pdf, txt)')}
icon={commonStore.attachmentUploading ?
<ArrowClockwise16Regular className="animate-spin" />
: <Attach16Regular />}
size="small" shape="circular" appearance="secondary"
onClick={() => {
if (commonStore.status.status === ModelStatus.Offline && !commonStore.settings.apiUrl && commonStore.platform !== 'web') {
toast(t('Please click the button in the top right corner to start the model'), { type: 'warning' });
return;
}
if (commonStore.attachmentUploading)
return;
OpenOpenFileDialog('*.txt;*.pdf').then(async filePath => {
if (!filePath)
return;
commonStore.setAttachmentUploading(true);
let blob: Blob;
let attachmentName: string | undefined;
let attachmentContent: string | undefined;
if (commonStore.platform === 'web') {
const webReturn = filePath as any;
blob = webReturn.blob;
attachmentName = blob.name;
attachmentContent = webReturn.content;
} else {
// Both commented alternatives below are slow because frontend-backend communication is slow; read the file through the AssetServer handler instead.
// const blob = new Blob([atob(info.content as unknown as string)]); // await fetch(`data:application/octet-stream;base64,${info.content}`).then(r => r.blob());
blob = await fetch(absPathAsset(filePath)).then(r => r.blob());
attachmentName = filePath.split(/[\\/]/).pop();
}
if (attachmentContent) {
commonStore.setCurrentTempAttachment(
{
name: attachmentName!,
size: blob.size,
content: attachmentContent
});
commonStore.setAttachmentUploading(false);
} else {
const urlPath = `/file-to-text?file_name=${attachmentName}`;
const bodyForm = new FormData();
bodyForm.append('file_data', blob, attachmentName);
fetch(getServerRoot(port) + urlPath, {
method: 'POST',
body: bodyForm
}).then(async r => {
if (r.status === 200) {
const pages = (await r.json()).pages as any[];
if (pages.length === 1)
attachmentContent = pages[0].page_content;
else
attachmentContent = pages.map((p, i) => `Page ${i + 1}:\n${p.page_content}`).join('\n\n');
commonStore.setCurrentTempAttachment(
{
name: attachmentName!,
size: blob.size,
content: attachmentContent!
});
} else {
toast(r.statusText + '\n' + (await r.text()), {
type: 'error'
});
}
commonStore.setAttachmentUploading(false);
}
).catch(e => {
commonStore.setAttachmentUploading(false);
toast(t('Error') + ' - ' + (e.message || e), { type: 'error', autoClose: 2500 });
});
}
}).catch(e => {
toast(t('Error') + ' - ' + (e.message || e), { type: 'error', autoClose: 2500 });
});
}}
/> :
<div>
<ToolTipButton
text={
commonStore.currentTempAttachment.name.replace(
new RegExp('(^[^\\.]{5})[^\\.]+'), '$1...')
}
desc={`${commonStore.currentTempAttachment.name} (${bytesToReadable(commonStore.currentTempAttachment.size)})`}
size="small" shape="circular" appearance="secondary" />
<ToolTipButton desc={t('Remove Attachment')}
icon={<Dismiss16Regular />}
size="small" shape="circular" appearance="subtle"
onClick={() => {
commonStore.setCurrentTempAttachment(null);
}} />
</div>
}
</div>
</div>
<ToolTipButton desc={generating ? t('Stop') : t('Send')} <ToolTipButton desc={generating ? t('Stop') : t('Send')}
icon={generating ? <RecordStop28Regular /> : <ArrowCircleUp28Regular />} icon={generating ? <RecordStop28Regular /> : <ArrowCircleUp28Regular />}
size={mq ? 'large' : 'small'} shape="circular" appearance="subtle" size={mq ? 'large' : 'small'} shape="circular" appearance="subtle"
onClick={(e) => { onClick={(e) => {
if (generating) { if (generating) {
chatSseController?.abort(); for (const id in chatSseControllers) {
if (lastMessageId) { chatSseControllers[id].abort();
commonStore.conversation[lastMessageId].type = MessageType.Error; commonStore.conversation[id].type = MessageType.Error;
commonStore.conversation[lastMessageId].done = true; commonStore.conversation[id].done = true;
commonStore.setConversation(commonStore.conversation);
commonStore.setConversationOrder([...commonStore.conversationOrder]);
} }
chatSseControllers = {};
commonStore.setConversation(commonStore.conversation);
commonStore.setConversationOrder([...commonStore.conversationOrder]);
} else { } else {
handleKeyDownOrClick(e); handleKeyDownOrClick(e);
} }
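
(Stripped of UI state, the attachment branch above is a FormData round-trip to the new /file-to-text endpoint; the names and the pages shape are taken from the handler in the diff:)

const bodyForm = new FormData();
bodyForm.append('file_data', blob, attachmentName);
const r = await fetch(getServerRoot(port) + `/file-to-text?file_name=${attachmentName}`, {
  method: 'POST',
  body: bodyForm
});
const pages = (await r.json()).pages as { page_content: string }[];
const attachmentContent = pages.length === 1
  ? pages[0].page_content
  : pages.map((p, i) => `Page ${i + 1}:\n${p.page_content}`).join('\n\n');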
@@ -414,8 +547,8 @@ const ChatPanel: FC = observer(() => {
onClick={() => { onClick={() => {
let savedContent: string = ''; let savedContent: string = '';
const isWorldModel = commonStore.getCurrentModelConfig().modelParameters.modelName.toLowerCase().includes('world'); const isWorldModel = commonStore.getCurrentModelConfig().modelParameters.modelName.toLowerCase().includes('world');
const user = isWorldModel ? 'Question' : 'Bob'; const user = isWorldModel ? 'User' : 'Bob';
const bot = isWorldModel ? 'Answer' : 'Alice'; const bot = isWorldModel ? 'Assistant' : 'Alice';
commonStore.conversationOrder.forEach((uuid) => { commonStore.conversationOrder.forEach((uuid) => {
if (uuid === welcomeUuid) if (uuid === welcomeUuid)
return; return;
@@ -439,7 +572,7 @@ const ChatPanel: FC = observer(() => {
); );
}); });
export const Chat: FC = observer(() => { const Chat: FC = observer(() => {
return ( return (
<div className="flex flex-col gap-1 p-2 h-full overflow-hidden"> <div className="flex flex-col gap-1 p-2 h-full overflow-hidden">
<WorkHeader /> <WorkHeader />
@@ -447,3 +580,5 @@ export const Chat: FC = observer(() => {
</div> </div>
); );
}); });
export default Chat;
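
The single module-level chatSseController is replaced here by a map keyed by answer id, which is what lets Retry abort one stream while others keep generating. The pattern in isolation, a minimal sketch using only names from the diff:

import { fetchEventSource } from '@microsoft/fetch-event-source';

let chatSseControllers: { [id: string]: AbortController } = {};

function streamAnswer(answerId: string, url: string, body: unknown) {
  const controller = new AbortController();
  chatSseControllers[answerId] = controller;
  fetchEventSource(url, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(body),
    signal: controller.signal,
    onmessage(e) {
      if (e.data.trim() === '[DONE]')
        delete chatSseControllers[answerId]; // finished normally
    },
    onclose() { delete chatSseControllers[answerId]; },
    onerror(err) { delete chatSseControllers[answerId]; throw err; } // rethrow to stop auto-retry
  });
}

// Retry for one message cancels only that message's stream:
chatSseControllers[uuid]?.abort();
delete chatSseControllers[uuid];

// "Stop" aborts everything still in flight:
for (const id in chatSseControllers) chatSseControllers[id].abort();
chatSseControllers = {};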

View File

@@ -5,7 +5,6 @@ import { Button, Dropdown, Input, Option, Textarea } from '@fluentui/react-compo
import { Labeled } from '../components/Labeled'; import { Labeled } from '../components/Labeled';
import { ValuedSlider } from '../components/ValuedSlider'; import { ValuedSlider } from '../components/ValuedSlider';
import { useTranslation } from 'react-i18next'; import { useTranslation } from 'react-i18next';
import { ApiParameters } from './Configs';
import commonStore, { ModelStatus } from '../stores/commonStore'; import commonStore, { ModelStatus } from '../stores/commonStore';
import { fetchEventSource } from '@microsoft/fetch-event-source'; import { fetchEventSource } from '@microsoft/fetch-event-source';
import { toast } from 'react-toastify'; import { toast } from 'react-toastify';
@@ -14,18 +13,8 @@ import { PresetsButton } from './PresetsManager/PresetsButton';
import { ToolTipButton } from '../components/ToolTipButton'; import { ToolTipButton } from '../components/ToolTipButton';
import { ArrowSync20Regular } from '@fluentui/react-icons'; import { ArrowSync20Regular } from '@fluentui/react-icons';
import { defaultPresets } from './defaultConfigs'; import { defaultPresets } from './defaultConfigs';
import { CompletionParams, CompletionPreset } from '../types/completion';
export type CompletionParams = Omit<ApiParameters, 'apiPort'> & { import { getServerRoot } from '../utils';
stop: string,
injectStart: string,
injectEnd: string
};
export type CompletionPreset = {
name: string,
prompt: string,
params: CompletionParams
}
let completionSseController: AbortController | null = null; let completionSseController: AbortController | null = null;
@@ -80,7 +69,7 @@ const CompletionPanel: FC = observer(() => {
const onSubmit = (prompt: string) => { const onSubmit = (prompt: string) => {
commonStore.setCompletionSubmittedPrompt(prompt); commonStore.setCompletionSubmittedPrompt(prompt);
if (commonStore.status.status === ModelStatus.Offline && !commonStore.settings.apiUrl) { if (commonStore.status.status === ModelStatus.Offline && !commonStore.settings.apiUrl && commonStore.platform !== 'web') {
toast(t('Please click the button in the top right corner to start the model'), { type: 'warning' }); toast(t('Please click the button in the top right corner to start the model'), { type: 'warning' });
commonStore.setCompletionGenerating(false); commonStore.setCompletionGenerating(false);
return; return;
@@ -90,10 +79,8 @@ const CompletionPanel: FC = observer(() => {
let answer = ''; let answer = '';
completionSseController = new AbortController(); completionSseController = new AbortController();
fetchEventSource( // https://api.openai.com/v1/completions || http://127.0.0.1:${port}/completions fetchEventSource( // https://api.openai.com/v1/completions || http://127.0.0.1:${port}/v1/completions
commonStore.settings.apiUrl ? getServerRoot(port) + '/v1/completions',
commonStore.settings.apiUrl + '/v1/completions' :
`http://127.0.0.1:${port}/completions`,
{ {
method: 'POST', method: 'POST',
headers: { headers: {
@@ -269,6 +256,13 @@ const CompletionPanel: FC = observer(() => {
} /> } />
</div> </div>
<div className="grow" /> <div className="grow" />
<div className="hidden justify-between gap-2 sm:flex">
<Button className="grow" onClick={() => {
const newPrompt = prompt.replace(/\n+\ /g, '\n').split('\n').map((line) => line.trim()).join('\n');
setPrompt(newPrompt);
commonStore.setCompletionSubmittedPrompt(newPrompt);
}}>{t('Format Content')}</Button>
</div>
<div className="flex justify-between gap-2"> <div className="flex justify-between gap-2">
<ToolTipButton desc={t('Regenerate')} icon={<ArrowSync20Regular />} onClick={() => { <ToolTipButton desc={t('Regenerate')} icon={<ArrowSync20Regular />} onClick={() => {
completionSseController?.abort(); completionSseController?.abort();
@@ -296,7 +290,7 @@ const CompletionPanel: FC = observer(() => {
); );
}); });
export const Completion: FC = observer(() => { const Completion: FC = observer(() => {
return ( return (
<div className="flex flex-col gap-1 p-2 h-full overflow-hidden"> <div className="flex flex-col gap-1 p-2 h-full overflow-hidden">
<WorkHeader /> <WorkHeader />
@@ -304,3 +298,5 @@ export const Completion: FC = observer(() => {
</div> </div>
); );
}); });
export default Completion;
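
A quick illustration of what the new "Format Content" button does to a pasted prompt (the sample string is illustrative; the transformation is the one above):

const prompt = 'Chapter 1 \n\n  It was a dark night. \nThe wind howled.';
const newPrompt = prompt.replace(/\n+\ /g, '\n').split('\n').map((line) => line.trim()).join('\n');
// => 'Chapter 1\nIt was a dark night.\nThe wind howled.'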

View File

@@ -1,3 +1,4 @@
import 'html-midi-player';
import React, { FC, useEffect, useRef } from 'react'; import React, { FC, useEffect, useRef } from 'react';
import { observer } from 'mobx-react-lite'; import { observer } from 'mobx-react-lite';
import { WorkHeader } from '../components/WorkHeader'; import { WorkHeader } from '../components/WorkHeader';
@@ -16,18 +17,8 @@ import * as mm from '@magenta/music/esm/core.js';
import { NoteSequence } from '@magenta/music/esm/protobuf.js'; import { NoteSequence } from '@magenta/music/esm/protobuf.js';
import { defaultCompositionPrompt } from './defaultConfigs'; import { defaultCompositionPrompt } from './defaultConfigs';
import { FileExists, OpenFileFolder, OpenSaveFileDialogBytes } from '../../wailsjs/go/backend_golang/App'; import { FileExists, OpenFileFolder, OpenSaveFileDialogBytes } from '../../wailsjs/go/backend_golang/App';
import { toastWithButton } from '../utils'; import { getServerRoot, toastWithButton } from '../utils';
import { CompositionParams } from '../types/composition';
export type CompositionParams = {
prompt: string,
maxResponseToken: number,
temperature: number,
topP: number,
autoPlay: boolean,
useLocalSoundFont: boolean,
midi: ArrayBuffer | null,
ns: NoteSequence | null
}
let compositionSseController: AbortController | null = null; let compositionSseController: AbortController | null = null;
@@ -109,9 +100,7 @@ const CompositionPanel: FC = observer(() => {
}, []); }, []);
const generateNs = (autoPlay: boolean) => { const generateNs = (autoPlay: boolean) => {
fetch(commonStore.settings.apiUrl ? fetch(getServerRoot(port) + '/text-to-midi', {
commonStore.settings.apiUrl + '/text-to-midi' :
`http://127.0.0.1:${port}/text-to-midi`, {
method: 'POST', method: 'POST',
headers: { headers: {
'Content-Type': 'application/json' 'Content-Type': 'application/json'
@@ -137,7 +126,7 @@ const CompositionPanel: FC = observer(() => {
const onSubmit = (prompt: string) => { const onSubmit = (prompt: string) => {
commonStore.setCompositionSubmittedPrompt(prompt); commonStore.setCompositionSubmittedPrompt(prompt);
if (commonStore.status.status === ModelStatus.Offline && !commonStore.settings.apiUrl) { if (commonStore.status.status === ModelStatus.Offline && !commonStore.settings.apiUrl && commonStore.platform !== 'web') {
toast(t('Please click the button in the top right corner to start the model'), { type: 'warning' }); toast(t('Please click the button in the top right corner to start the model'), { type: 'warning' });
commonStore.setCompositionGenerating(false); commonStore.setCompositionGenerating(false);
return; return;
@@ -145,10 +134,8 @@ const CompositionPanel: FC = observer(() => {
let answer = ''; let answer = '';
compositionSseController = new AbortController(); compositionSseController = new AbortController();
fetchEventSource( // https://api.openai.com/v1/completions || http://127.0.0.1:${port}/completions fetchEventSource( // https://api.openai.com/v1/completions || http://127.0.0.1:${port}/v1/completions
commonStore.settings.apiUrl ? getServerRoot(port) + '/v1/completions',
commonStore.settings.apiUrl + '/v1/completions' :
`http://127.0.0.1:${port}/completions`,
{ {
method: 'POST', method: 'POST',
headers: { headers: {
@@ -319,7 +306,7 @@ const CompositionPanel: FC = observer(() => {
toastWithButton(t('File Saved'), t('Open'), () => { toastWithButton(t('File Saved'), t('Open'), () => {
OpenFileFolder(path, false); OpenFileFolder(path, false);
}); });
}).catch((e: any) => { }).catch((e) => {
toast(t('Error') + ' - ' + (e.message || e), { type: 'error', autoClose: 2500 }); toast(t('Error') + ' - ' + (e.message || e), { type: 'error', autoClose: 2500 });
}); });
} else { } else {
@@ -335,7 +322,7 @@ const CompositionPanel: FC = observer(() => {
); );
}); });
export const Composition: FC = observer(() => { const Composition: FC = observer(() => {
return ( return (
<div className="flex flex-col gap-1 p-2 h-full overflow-hidden"> <div className="flex flex-col gap-1 p-2 h-full overflow-hidden">
<WorkHeader /> <WorkHeader />
@@ -343,3 +330,5 @@ export const Composition: FC = observer(() => {
</div> </div>
); );
}); });
export default Composition;
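
Every endpoint in this changeset now resolves its base URL through getServerRoot from ../utils, whose implementation is not shown in the diff. A sketch consistent with the old inline expressions it replaces and with the new web platform (a relative root so the WebUI can call its own origin); the real implementation may differ:

// Hypothetical reconstruction of ../utils getServerRoot.
export function getServerRoot(port: number): string {
  if (commonStore.settings.apiUrl)
    return commonStore.settings.apiUrl;  // user-configured remote API
  if (commonStore.platform === 'web')
    return '';                           // same-origin WebUI deployment
  return `http://127.0.0.1:${port}`;     // local backend started by the client
}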

View File

@@ -1,6 +1,19 @@
import { Dropdown, Input, Label, Option, Select, Switch, Text } from '@fluentui/react-components'; import {
Accordion,
AccordionHeader,
AccordionItem,
AccordionPanel,
Checkbox,
Dropdown,
Input,
Label,
Option,
Select,
Switch,
Text
} from '@fluentui/react-components';
import { AddCircle20Regular, DataUsageSettings20Regular, Delete20Regular, Save20Regular } from '@fluentui/react-icons'; import { AddCircle20Regular, DataUsageSettings20Regular, Delete20Regular, Save20Regular } from '@fluentui/react-icons';
import React, { FC } from 'react'; import React, { FC, useEffect, useRef } from 'react';
import { Section } from '../components/Section'; import { Section } from '../components/Section';
import { Labeled } from '../components/Labeled'; import { Labeled } from '../components/Labeled';
import { ToolTipButton } from '../components/ToolTipButton'; import { ToolTipButton } from '../components/ToolTipButton';
@@ -16,51 +29,28 @@ import { updateConfig } from '../apis';
import { ConvertModel, ConvertSafetensors, FileExists, GetPyError } from '../../wailsjs/go/backend_golang/App'; import { ConvertModel, ConvertSafetensors, FileExists, GetPyError } from '../../wailsjs/go/backend_golang/App';
import { checkDependencies, getStrategy } from '../utils'; import { checkDependencies, getStrategy } from '../utils';
import { useTranslation } from 'react-i18next'; import { useTranslation } from 'react-i18next';
import { WindowShow } from '../../wailsjs/runtime/runtime'; import { WindowShow } from '../../wailsjs/runtime';
import strategyImg from '../assets/images/strategy.jpg'; import strategyImg from '../assets/images/strategy.jpg';
import strategyZhImg from '../assets/images/strategy_zh.jpg'; import strategyZhImg from '../assets/images/strategy_zh.jpg';
import { ResetConfigsButton } from '../components/ResetConfigsButton'; import { ResetConfigsButton } from '../components/ResetConfigsButton';
import { useMediaQuery } from 'usehooks-ts'; import { useMediaQuery } from 'usehooks-ts';
import { ApiParameters, Device, ModelParameters, Precision } from '../types/configs';
export type ApiParameters = { const Configs: FC = observer(() => {
apiPort: number
maxResponseToken: number;
temperature: number;
topP: number;
presencePenalty: number;
frequencyPenalty: number;
}
export type Device = 'CPU' | 'CUDA' | 'CUDA-Beta' | 'WebGPU' | 'MPS' | 'Custom';
export type Precision = 'fp16' | 'int8' | 'fp32';
export type ModelParameters = {
// different models can not have the same name
modelName: string;
device: Device;
precision: Precision;
storedLayers: number;
maxStoredLayers: number;
useCustomCuda?: boolean;
customStrategy?: string;
}
export type ModelConfig = {
// different configs can have the same name
name: string;
apiParameters: ApiParameters
modelParameters: ModelParameters
}
export const Configs: FC = observer(() => {
const { t } = useTranslation(); const { t } = useTranslation();
const [selectedIndex, setSelectedIndex] = React.useState(commonStore.currentModelConfigIndex); const [selectedIndex, setSelectedIndex] = React.useState(commonStore.currentModelConfigIndex);
const [selectedConfig, setSelectedConfig] = React.useState(commonStore.modelConfigs[selectedIndex]); const [selectedConfig, setSelectedConfig] = React.useState(commonStore.modelConfigs[selectedIndex]);
const [displayStrategyImg, setDisplayStrategyImg] = React.useState(false); const [displayStrategyImg, setDisplayStrategyImg] = React.useState(false);
const advancedHeaderRef = useRef<HTMLDivElement>(null);
const mq = useMediaQuery('(min-width: 640px)'); const mq = useMediaQuery('(min-width: 640px)');
const navigate = useNavigate(); const navigate = useNavigate();
const port = selectedConfig.apiParameters.apiPort; const port = selectedConfig.apiParameters.apiPort;
useEffect(() => {
if (advancedHeaderRef.current)
(advancedHeaderRef.current.firstElementChild as HTMLElement).style.padding = '0';
}, []);
const updateSelectedIndex = (newIndex: number) => { const updateSelectedIndex = (newIndex: number) => {
setSelectedIndex(newIndex); setSelectedIndex(newIndex);
setSelectedConfig(commonStore.modelConfigs[newIndex]); setSelectedConfig(commonStore.modelConfigs[newIndex]);
@@ -402,7 +392,7 @@ export const Configs: FC = observer(() => {
{ {
(selectedConfig.modelParameters.device.includes('CUDA') || selectedConfig.modelParameters.device === 'Custom') && (selectedConfig.modelParameters.device.includes('CUDA') || selectedConfig.modelParameters.device === 'Custom') &&
<Labeled label={t('Use Custom CUDA kernel to Accelerate')} <Labeled label={t('Use Custom CUDA kernel to Accelerate')}
desc={t('Enabling this option can greatly improve inference speed and save some VRAM, but there may be compatibility issues. If it fails to start, please turn off this option.')} desc={t('Enabling this option can greatly improve inference speed and save some VRAM, but there may be compatibility issues (output garbled). If it fails to start, please turn off this option, or try to upgrade your gpu driver.')}
content={ content={
<Switch checked={selectedConfig.modelParameters.useCustomCuda} <Switch checked={selectedConfig.modelParameters.useCustomCuda}
onChange={(e, data) => { onChange={(e, data) => {
@@ -412,14 +402,62 @@ export const Configs: FC = observer(() => {
}} /> }} />
} /> } />
} }
{selectedConfig.modelParameters.device !== 'WebGPU' &&
<Accordion className="sm:col-span-2" collapsible
openItems={!commonStore.modelParamsCollapsed && 'advanced'}
onToggle={(e, data) => {
if (data.value === 'advanced')
commonStore.setModelParamsCollapsed(!commonStore.modelParamsCollapsed);
}}>
<AccordionItem value="advanced">
<AccordionHeader ref={advancedHeaderRef} size="small">{t('Advanced')}</AccordionHeader>
<AccordionPanel>
<div className="flex flex-col">
<div className="flex grow">
<Checkbox className="select-none"
size="large" label={t('Use Custom Tokenizer')}
checked={selectedConfig.modelParameters.useCustomTokenizer}
onChange={(_, data) => {
setSelectedConfigModelParams({
useCustomTokenizer: data.checked as boolean
});
}} />
<Input className="grow"
placeholder={t('Tokenizer Path (e.g. backend-python/rwkv_pip/20B_tokenizer.json)')!}
value={selectedConfig.modelParameters.customTokenizer}
onChange={(e, data) => {
setSelectedConfigModelParams({
customTokenizer: data.value
});
}} />
</div>
</div>
</AccordionPanel>
</AccordionItem>
</Accordion>
}
</div> </div>
} }
/> />
</div> </div>
<div className="flex flex-row-reverse sm:fixed bottom-2 right-2"> <div className="flex flex-row-reverse sm:fixed bottom-2 right-2">
<RunButton onClickRun={onClickSave} /> <div className="flex gap-2">
{selectedConfig.modelParameters.device !== 'WebGPU'
&& <Checkbox className="select-none"
size="large" label={t('Enable WebUI')}
checked={selectedConfig.enableWebUI}
onChange={(_, data) => {
setSelectedConfig({
...selectedConfig,
enableWebUI: data.checked as boolean
});
}} />}
<RunButton onClickRun={onClickSave} />
</div>
</div> </div>
</div> </div>
} /> } />
); );
}); });
export default Configs;
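
The parameter types deleted from this file now live in ../types/configs. Reconstructing them from the removed lines plus the new fields this diff reads gives roughly the following; the commented fields are the additions, and the exact declarations may differ:

export type ApiParameters = {
  apiPort: number;
  maxResponseToken: number;
  temperature: number;
  topP: number;
  presencePenalty: number;
  frequencyPenalty: number;
};

export type Device = 'CPU' | 'CUDA' | 'CUDA-Beta' | 'WebGPU' | 'MPS' | 'Custom';
export type Precision = 'fp16' | 'int8' | 'fp32';

export type ModelParameters = {
  // different models can not have the same name
  modelName: string;
  device: Device;
  precision: Precision;
  storedLayers: number;
  maxStoredLayers: number;
  useCustomCuda?: boolean;
  customStrategy?: string;
  useCustomTokenizer?: boolean; // new: drives the Advanced accordion checkbox
  customTokenizer?: string;     // new: e.g. backend-python/rwkv_pip/20B_tokenizer.json
};

export type ModelConfig = {
  // different configs can have the same name
  name: string;
  apiParameters: ApiParameters;
  modelParameters: ModelParameters;
  enableWebUI?: boolean;        // new: sent to /switch-model as deploy
};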

View File

@@ -1,28 +1,22 @@
import React, { FC } from 'react'; import React, { FC, useEffect } from 'react';
import { useTranslation } from 'react-i18next'; import { useTranslation } from 'react-i18next';
import { Page } from '../components/Page'; import { Page } from '../components/Page';
import { observer } from 'mobx-react-lite'; import { observer } from 'mobx-react-lite';
import commonStore from '../stores/commonStore'; import commonStore from '../stores/commonStore';
import { Divider, Field, ProgressBar } from '@fluentui/react-components'; import { Divider, Field, ProgressBar } from '@fluentui/react-components';
import { bytesToGb, bytesToKb, bytesToMb } from '../utils'; import { bytesToGb, bytesToKb, bytesToMb, refreshLocalModels } from '../utils';
import { ToolTipButton } from '../components/ToolTipButton'; import { ToolTipButton } from '../components/ToolTipButton';
import { Folder20Regular, Pause20Regular, Play20Regular } from '@fluentui/react-icons'; import { Folder20Regular, Pause20Regular, Play20Regular } from '@fluentui/react-icons';
import { AddToDownloadList, OpenFileFolder, PauseDownload } from '../../wailsjs/go/backend_golang/App'; import { AddToDownloadList, OpenFileFolder, PauseDownload } from '../../wailsjs/go/backend_golang/App';
export type DownloadStatus = { const Downloads: FC = observer(() => {
name: string;
path: string;
url: string;
transferred: number;
size: number;
speed: number;
progress: number;
downloading: boolean;
done: boolean;
}
export const Downloads: FC = observer(() => {
const { t } = useTranslation(); const { t } = useTranslation();
const finishedModelsLen = commonStore.downloadList.filter((status) => status.done && status.name.endsWith('.pth')).length;
useEffect(() => {
if (finishedModelsLen > 0)
refreshLocalModels({ models: commonStore.modelSourceList }, false);
console.log('finishedModelsLen:', finishedModelsLen);
}, [finishedModelsLen]);
let displayList = commonStore.downloadList.slice();
const downloadListNames = displayList.map(s => s.name);
@@ -85,3 +79,5 @@ export const Downloads: FC = observer(() => {
} />
);
});
export default Downloads;


@@ -1,11 +1,13 @@
import { CompoundButton, Link, Text } from '@fluentui/react-components';
import React, { FC, ReactElement } from 'react';
import React, { FC } from 'react';
import banner from '../assets/images/banner.jpg';
import {
Chat20Regular,
ClipboardEdit20Regular,
DataUsageSettings20Regular,
DocumentSettings20Regular
DocumentSettings20Regular,
MusicNote220Regular,
Settings20Regular
} from '@fluentui/react-icons';
import { useNavigate } from 'react-router';
import { observer } from 'mobx-react-lite';
@@ -14,21 +16,13 @@ import manifest from '../../../manifest.json';
import { BrowserOpenURL } from '../../wailsjs/runtime';
import { useTranslation } from 'react-i18next';
import { ConfigSelector } from '../components/ConfigSelector';
import MarkdownRender from '../components/MarkdownRender';
import commonStore from '../stores/commonStore';
import { Completion } from './Completion';
import { ResetConfigsButton } from '../components/ResetConfigsButton';
import { AdvancedGeneralSettings } from './Settings';
import { NavCard } from '../types/home';
import { LazyImportComponent } from '../components/LazyImportComponent';
export type IntroductionContent = { [lang: string]: string }
type NavCard = {
label: string;
desc: string;
path: string;
icon: ReactElement;
};
const navCards: NavCard[] = [
const clientNavCards: NavCard[] = [
{
label: 'Chat',
desc: 'Go to chat page',
@@ -55,7 +49,36 @@ const navCards: NavCard[] = [
}
];
export const Home: FC = observer(() => {
const webNavCards: NavCard[] = [
{
label: 'Chat',
desc: 'Go to chat page',
path: '/chat',
icon: <Chat20Regular />
},
{
label: 'Completion',
desc: 'Writer, Translator, Role-playing',
path: '/completion',
icon: <ClipboardEdit20Regular />
},
{
label: 'Composition',
desc: '',
path: '/composition',
icon: <MusicNote220Regular />
},
{
label: 'Settings',
desc: '',
path: '/settings',
icon: <Settings20Regular />
}
];
const MarkdownRender = React.lazy(() => import('../components/MarkdownRender'));
const Home: FC = observer(() => {
const { t } = useTranslation();
const navigate = useNavigate();
const lang: string = commonStore.settings.language;
@@ -64,39 +87,64 @@ export const Home: FC = observer(() => {
navigate({ pathname: path });
};
return (
<div className="flex flex-col justify-between h-full">
<img className="rounded-xl select-none hidden sm:block"
style={{ maxHeight: '40%', margin: '0 auto' }} src={banner} />
<div className="flex flex-col gap-2">
<Text size={600} weight="medium">{t('Introduction')}</Text>
<div className="h-40 overflow-y-auto overflow-x-hidden p-1">
<MarkdownRender>
{lang in commonStore.introduction ? commonStore.introduction[lang] : commonStore.introduction['en']}
</MarkdownRender>
</div>
</div>
<div className="grid grid-cols-2 sm:grid-cols-4 gap-5">
{navCards.map(({ label, path, icon, desc }, index) => (
<CompoundButton icon={icon} secondaryContent={t(desc)} key={`${path}-${index}`} value={path}
size="large" onClick={() => onClickNavCard(path)}>
{t(label)}
</CompoundButton>
))}
</div>
<div className="flex flex-col gap-2">
<div className="flex flex-row-reverse sm:fixed bottom-2 right-2">
<div className="flex gap-3">
<ResetConfigsButton />
<ConfigSelector />
<RunButton />
</div>
</div>
<div className="flex gap-4 items-end">
{t('Version')}: {manifest.version}
<Link onClick={() => BrowserOpenURL('https://github.com/josStorer/RWKV-Runner')}>{t('Help')}</Link>
</div>
</div>
</div>
);
});
return commonStore.platform === 'web' ?
(
<div className="flex flex-col gap-2 h-full">
<img className="rounded-xl select-none object-cover grow"
style={{ maxHeight: '40%' }} src={banner} />
<div className="grow"></div>
<div className="grid grid-cols-2 sm:grid-cols-4 gap-5">
{webNavCards.map(({ label, path, icon, desc }, index) => (
<CompoundButton icon={icon} secondaryContent={t(desc)} key={`${path}-${index}`} value={path}
size="large" onClick={() => onClickNavCard(path)}>
{t(label)}
</CompoundButton>
))}
</div>
<div className="flex flex-col gap-2">
<AdvancedGeneralSettings />
<div className="flex gap-4 items-end">
{t('Version')}: {manifest.version}
<Link onClick={() => BrowserOpenURL('https://github.com/josStorer/RWKV-Runner')}>{t('Help')}</Link>
</div>
</div>
</div>
)
: (
<div className="flex flex-col justify-between h-full">
<img className="rounded-xl select-none object-cover hidden sm:block"
style={{ maxHeight: '40%' }} src={banner} />
<div className="flex flex-col gap-2">
<Text size={600} weight="medium">{t('Introduction')}</Text>
<div className="h-40 overflow-y-auto overflow-x-hidden p-1">
<LazyImportComponent lazyChildren={MarkdownRender}>
{lang in commonStore.introduction ? commonStore.introduction[lang] : commonStore.introduction['en']}
</LazyImportComponent>
</div>
</div>
<div className="grid grid-cols-2 sm:grid-cols-4 gap-5">
{clientNavCards.map(({ label, path, icon, desc }, index) => (
<CompoundButton icon={icon} secondaryContent={t(desc)} key={`${path}-${index}`} value={path}
size="large" onClick={() => onClickNavCard(path)}>
{t(label)}
</CompoundButton>
))}
</div>
<div className="flex flex-col gap-2">
<div className="flex flex-row-reverse sm:fixed bottom-2 right-2">
<div className="flex gap-3">
<ResetConfigsButton />
<ConfigSelector />
<RunButton />
</div>
</div>
<div className="flex gap-4 items-end">
{t('Version')}: {manifest.version}
<Link onClick={() => BrowserOpenURL('https://github.com/josStorer/RWKV-Runner')}>{t('Help')}</Link>
</div>
</div>
</div>
);
});
export default Home;


@@ -22,21 +22,7 @@ import { Page } from '../components/Page';
import { bytesToGb, refreshModels, saveConfigs, toastWithButton } from '../utils';
import { useTranslation } from 'react-i18next';
import { useNavigate } from 'react-router';
import { ModelSourceItem } from '../types/models';
export type ModelSourceItem = {
name: string;
size: number;
lastUpdated: string;
desc?: { [lang: string]: string | undefined; };
SHA256?: string;
url?: string;
downloadUrl?: string;
isComplete?: boolean;
isLocal?: boolean;
localSize?: number;
lastUpdatedMs?: number;
hide?: boolean;
};
const columns: TableColumnDefinition<ModelSourceItem>[] = [
createTableColumn<ModelSourceItem>({
@@ -165,7 +151,7 @@ const columns: TableColumnDefinition<ModelSourceItem>[] = [
})
];
export const Models: FC = observer(() => {
const Models: FC = observer(() => {
const { t } = useTranslation();
return (
@@ -220,3 +206,5 @@ export const Models: FC = observer(() => {
} />
);
});
export default Models;


@@ -1,14 +1,14 @@
import React, { FC, useState } from 'react';
import { DragDropContext, Draggable, Droppable, DropResult } from 'react-beautiful-dnd';
import commonStore from '../../stores/commonStore';
import { Preset } from './PresetsButton';
import { observer } from 'mobx-react-lite';
import { v4 as uuid } from 'uuid';
import { Button, Card, Dropdown, Option, Textarea } from '@fluentui/react-components';
import { useTranslation } from 'react-i18next';
import { ToolTipButton } from '../../components/ToolTipButton';
import { Delete20Regular, ReOrderDotsVertical20Regular } from '@fluentui/react-icons';
import { ConversationMessage, Role } from '../Chat';
import { Preset } from '../../types/presets';
import { ConversationMessage, Role } from '../../types/chat';
type Item = {
id: string;
@@ -31,7 +31,7 @@ const reorder = (list: Item[], startIndex: number, endIndex: number) => {
return result;
};
export const MessagesEditor: FC = observer(() => {
const MessagesEditor: FC = observer(() => {
const { t } = useTranslation();
const editingPreset = commonStore.editingPreset!;
@@ -152,3 +152,5 @@ export const MessagesEditor: FC = observer(() => {
</div>
);
});
export default MessagesEditor;


@@ -1,6 +1,6 @@
// TODO refactor
import React, { FC, PropsWithChildren, ReactElement, useState } from 'react';
import React, { FC, lazy, PropsWithChildren, ReactElement, useState } from 'react';
import {
Button,
Dialog,
@@ -25,40 +25,21 @@ import {
} from '@fluentui/react-icons';
import { ToolTipButton } from '../../components/ToolTipButton';
import { useTranslation } from 'react-i18next';
import { botName, Conversation, ConversationMessage, MessageType, userName } from '../Chat';
import { SelectTabEventHandler } from '@fluentui/react-tabs';
import { Labeled } from '../../components/Labeled';
import commonStore from '../../stores/commonStore';
import logo from '../../assets/images/logo.png';
import { observer } from 'mobx-react-lite';
import { MessagesEditor } from './MessagesEditor';
import { ClipboardGetText, ClipboardSetText } from '../../../wailsjs/runtime';
import { toast } from 'react-toastify';
import { CustomToastContainer } from '../../components/CustomToastContainer';
import { v4 as uuid } from 'uuid';
import { absPathAsset } from '../../utils';
import { Preset, PresetsNavigationItem } from '../../types/presets';
import { botName, Conversation, MessageType, userName } from '../../types/chat';
import { LazyImportComponent } from '../../components/LazyImportComponent';
export type PresetType = 'chat' | 'completion' | 'chatInCompletion'
export type Preset = {
name: string,
tag: string,
// if name and sourceUrl are same, it will be overridden when importing
sourceUrl: string,
desc: string,
avatarImg: string,
type: PresetType,
// chat
welcomeMessage: string,
messages: ConversationMessage[],
displayPresetMessages: boolean,
// completion
prompt: string,
stop: string,
injectStart: string,
injectEnd: string,
}
export const defaultPreset: Preset = {
const defaultPreset: Preset = {
name: 'RWKV',
tag: 'default',
sourceUrl: '',
@@ -74,6 +55,8 @@ export const defaultPreset: Preset = {
injectEnd: ''
};
const MessagesEditor = lazy(() => import('./MessagesEditor'));
const setActivePreset = (preset: Preset) => {
commonStore.setActivePreset(preset);
//TODO if (preset.displayPresetMessages) {
@@ -97,7 +80,7 @@ const setActivePreset = (preset: Preset) => {
//}
};
export const PresetCardFrame: FC<PropsWithChildren & { onClick?: () => void }> = (props) => {
const PresetCardFrame: FC<PropsWithChildren & { onClick?: () => void }> = (props) => {
return <Button
className="flex flex-col gap-1 w-32 h-56 break-all"
style={{ minWidth: 0, borderRadius: '0.75rem', justifyContent: 'unset' }}
@@ -107,7 +90,7 @@ export const PresetCardFrame: FC<PropsWithChildren & { onClick?: () => void }> =
</Button>;
};
export const PresetCard: FC<{
const PresetCard: FC<{
avatarImg: string,
name: string,
desc: string,
@@ -121,7 +104,7 @@ export const PresetCard: FC<{
const { t } = useTranslation();
return <PresetCardFrame onClick={onClick}>
<img src={avatarImg} className="rounded-xl select-none ml-auto mr-auto h-28" />
<img src={absPathAsset(avatarImg)} className="rounded-xl select-none ml-auto mr-auto h-28" />
<Text size={400}>{name}</Text>
<Text size={200} style={{
overflow: 'hidden', textOverflow: 'ellipsis',
@@ -143,7 +126,7 @@ export const PresetCard: FC<{
</PresetCardFrame>;
});
export const ChatPresetEditor: FC<{
const ChatPresetEditor: FC<{
triggerButton: ReactElement,
presetIndex: number
}> = observer(({ triggerButton, presetIndex }) => {
@@ -164,8 +147,14 @@ export const ChatPresetEditor: FC<{
const importPreset = () => {
ClipboardGetText().then((text) => {
try {
if (!text.trim().startsWith('{'))
text = new TextDecoder().decode(
new Uint8Array(atob(text)
.split('')
.map((c) => c.charCodeAt(0))));
const preset = JSON.parse(text);
setEditingPreset(preset);
setEditingMessages(false);
toast(t('Imported successfully'), {
type: 'success',
autoClose: 1000
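The import path above accepts either raw JSON or base64-encoded JSON, so presets survive channels that mangle braces. A sketch of the inverse, export-side encoding, under the assumption that export simply reverses this decode; the encodePreset helper name is illustrative, not from the codebase:

const encodePreset = (preset: Preset): string => {
  const json = JSON.stringify(preset);
  // btoa expects Latin-1, so serialize the UTF-8 bytes first
  return btoa(String.fromCharCode(...new TextEncoder().encode(json)));
};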
@@ -239,7 +228,7 @@ export const ChatPresetEditor: FC<{
<Button appearance="subtle" icon={<Dismiss20Regular />} />
</DialogTrigger>
</div>
<img src={editingPreset.avatarImg} className="rounded-xl select-none ml-auto mr-auto h-28" />
<img src={absPathAsset(editingPreset.avatarImg)} className="rounded-xl select-none ml-auto mr-auto h-28" />
<Labeled flex breakline label={t('Name')}
content={
<div className="flex gap-2">
@@ -255,9 +244,36 @@ export const ChatPresetEditor: FC<{
} />
{
editingMessages ?
<MessagesEditor /> :
<div className="flex flex-col gap-1">
<Labeled flex spaceBetween label={t('Insert default system prompt at the beginning')}
content={
<Switch checked={editingPreset.presystem === undefined ? true : editingPreset.presystem}
onChange={(e, data) => {
setEditingPreset({
presystem: data.checked
});
}} />
} />
<Labeled flex breakline label={t('User Name')}
content={
<Input placeholder="User" value={editingPreset.userName} onChange={(e, data) => {
setEditingPreset({
userName: data.value
});
}} />
} />
<Labeled flex breakline label={t('Assistant Name')}
content={
<Input placeholder="Assistant" value={editingPreset.assistantName} onChange={(e, data) => {
setEditingPreset({
assistantName: data.value
});
}} />
} />
<LazyImportComponent lazyChildren={MessagesEditor} />
</div> :
<div className="flex flex-col gap-1 p-2 overflow-x-hidden overflow-y-auto"> <div className="flex flex-col gap-1 p-2 overflow-x-hidden overflow-y-auto">
<Labeled flex breakline label={`${t('Description')} (${t("Preview Only")})`} <Labeled flex breakline label={`${t('Description')} (${t('Preview Only')})`}
content={ content={
<Input value={editingPreset.desc} onChange={(e, data) => { <Input value={editingPreset.desc} onChange={(e, data) => {
setEditingPreset({ setEditingPreset({
@@ -319,7 +335,7 @@ export const ChatPresetEditor: FC<{
</Dialog>;
});
export const ChatPresets: FC = observer(() => {
const ChatPresets: FC = observer(() => {
const { t } = useTranslation();
return <div className="flex flex-wrap gap-2">
@@ -355,11 +371,6 @@ export const ChatPresets: FC = observer(() => {
</div>;
});
type PresetsNavigationItem = {
icon: ReactElement;
element: ReactElement;
};
const pages: { [label: string]: PresetsNavigationItem } = {
Chat: {
icon: <Chat20Regular />,
@@ -375,7 +386,7 @@ const pages: { [label: string]: PresetsNavigationItem } = {
}
};
export const PresetsManager: FC<{ initTab: string }> = ({ initTab }) => {
const PresetsManager: FC<{ initTab: string }> = ({ initTab }) => {
const { t } = useTranslation();
const [tab, setTab] = useState(initTab);


@@ -16,33 +16,171 @@ import { observer } from 'mobx-react-lite';
import { useTranslation } from 'react-i18next';
import { checkUpdate, toastWithButton } from '../utils';
import { RestartApp } from '../../wailsjs/go/backend_golang/App';
import { Language, Languages } from '../types/settings';
export const Languages = {
dev: 'English', // i18n default
zh: '简体中文',
ja: '日本語'
};
export type Language = keyof typeof Languages;
export const GeneralSettings: FC = observer(() => {
const { t } = useTranslation();
return <div className="flex flex-col gap-2">
<Labeled label={t('Language')} flex spaceBetween content={
<Dropdown style={{ minWidth: 0 }} listbox={{ style: { minWidth: 0 } }}
value={Languages[commonStore.settings.language]}
selectedOptions={[commonStore.settings.language]}
onOptionSelect={(_, data) => {
if (data.optionValue) {
const lang = data.optionValue as Language;
commonStore.setSettings({
language: lang
});
}
}}>
{
Object.entries(Languages).map(([langKey, desc]) =>
<Option key={langKey} value={langKey}>{desc}</Option>)
}
</Dropdown>
} />
{
commonStore.platform === 'windows' &&
<Labeled label={t('DPI Scaling')} flex spaceBetween content={
<Dropdown style={{ minWidth: 0 }} listbox={{ style: { minWidth: 0 } }}
value={commonStore.settings.dpiScaling + '%'}
selectedOptions={[commonStore.settings.dpiScaling.toString()]}
onOptionSelect={(_, data) => {
if (data.optionValue) {
commonStore.setSettings({
dpiScaling: Number(data.optionValue)
});
toastWithButton(t('Restart the app to apply DPI Scaling.'), t('Restart'), () => {
RestartApp();
}, {
autoClose: 5000
});
}
}}>
{
Array.from({ length: 7 }, (_, i) => (i + 2) * 25).map((v, i) =>
<Option key={i} value={v.toString()}>{v + '%'}</Option>)
}
</Dropdown>
} />
}
<Labeled label={t('Dark Mode')} flex spaceBetween content={
<Switch checked={commonStore.settings.darkMode}
onChange={(e, data) => {
commonStore.setSettings({
darkMode: data.checked
});
}} />
} />
</div>;
});
export type SettingsType = {
language: Language
darkMode: boolean
autoUpdatesCheck: boolean
giteeUpdatesSource: boolean
cnMirror: boolean
host: string
dpiScaling: number
customModelsPath: string
customPythonPath: string
apiUrl: string
apiKey: string
apiChatModelName: string
apiCompletionModelName: string
}
export const AdvancedGeneralSettings: FC = observer(() => {
const { t } = useTranslation();
export const Settings: FC = observer(() => {
const { t, i18n } = useTranslation();
return <div className="flex flex-col gap-2">
<Labeled label={'API URL'}
content={
<div className="flex gap-2">
<Input style={{ minWidth: 0 }} className="grow" value={commonStore.settings.apiUrl}
onChange={(e, data) => {
commonStore.setSettings({
apiUrl: data.value
});
}} />
<Dropdown style={{ minWidth: '33px' }} listbox={{ style: { minWidth: 0 } }}
value="..." selectedOptions={[]} expandIcon={null}
onOptionSelect={(_, data) => {
commonStore.setSettings({
apiUrl: data.optionValue
});
if (data.optionText === 'OpenAI') {
if (commonStore.settings.apiChatModelName === 'rwkv')
commonStore.setSettings({
apiChatModelName: 'gpt-3.5-turbo'
});
if (commonStore.settings.apiCompletionModelName === 'rwkv')
commonStore.setSettings({
apiCompletionModelName: 'text-davinci-003'
});
}
}}>
<Option value="">{t('Localhost')!}</Option>
<Option value="https://api.openai.com">OpenAI</Option>
</Dropdown>
</div>
} />
<Labeled label={'API Key'}
content={
<Input type="password" className="grow" placeholder="sk-" value={commonStore.settings.apiKey}
onChange={(e, data) => {
commonStore.setSettings({
apiKey: data.value
});
}} />
} />
<Labeled label={t('API Chat Model Name')}
content={
<div className="flex gap-2">
<Input style={{ minWidth: 0 }} className="grow" placeholder="rwkv"
value={commonStore.settings.apiChatModelName}
onChange={(e, data) => {
commonStore.setSettings({
apiChatModelName: data.value
});
}} />
<Dropdown style={{ minWidth: '33px' }} listbox={{ style: { minWidth: 0 } }}
value="..." selectedOptions={[]} expandIcon={null}
onOptionSelect={(_, data) => {
if (data.optionValue) {
commonStore.setSettings({
apiChatModelName: data.optionValue
});
}
}}>
{
['rwkv', 'gpt-4', 'gpt-4-0613', 'gpt-4-32k', 'gpt-4-32k-0613', 'gpt-3.5-turbo', 'gpt-3.5-turbo-0613', 'gpt-3.5-turbo-16k', 'gpt-3.5-turbo-16k-0613']
.map((v, i) =>
<Option key={i} value={v}>{v}</Option>
)
}
</Dropdown>
</div>
} />
<Labeled label={t('API Completion Model Name')}
content={
<div className="flex gap-2">
<Input style={{ minWidth: 0 }} className="grow" placeholder="rwkv"
value={commonStore.settings.apiCompletionModelName}
onChange={(e, data) => {
commonStore.setSettings({
apiCompletionModelName: data.value
});
}} />
<Dropdown style={{ minWidth: '33px' }} listbox={{ style: { minWidth: 0 } }}
value="..." selectedOptions={[]} expandIcon={null}
onOptionSelect={(_, data) => {
if (data.optionValue) {
commonStore.setSettings({
apiCompletionModelName: data.optionValue
});
}
}}>
{
['rwkv', 'text-davinci-003', 'text-davinci-002', 'text-curie-001', 'text-babbage-001', 'text-ada-001']
.map((v, i) =>
<Option key={i} value={v}>{v}</Option>
)
}
</Dropdown>
</div>
} />
</div>;
});
const Settings: FC = observer(() => {
const { t } = useTranslation();
const advancedHeaderRef = useRef<HTMLDivElement>(null);
useEffect(() => {
@@ -53,227 +191,101 @@ export const Settings: FC = observer(() => {
return (
<Page title={t('Settings')} content={
<div className="flex flex-col gap-2 overflow-y-auto overflow-x-hidden p-1">
<Labeled label={t('Language')} flex spaceBetween content={
<Dropdown style={{ minWidth: 0 }} listbox={{ style: { minWidth: 0 } }}
value={Languages[commonStore.settings.language]}
selectedOptions={[commonStore.settings.language]}
onOptionSelect={(_, data) => {
if (data.optionValue) {
const lang = data.optionValue as Language;
commonStore.setSettings({
language: lang
});
}
}}>
{
Object.entries(Languages).map(([langKey, desc]) =>
<Option key={langKey} value={langKey}>{desc}</Option>)
}
</Dropdown>
} />
{
commonStore.platform === 'windows' &&
<Labeled label={t('DPI Scaling')} flex spaceBetween content={
<Dropdown style={{ minWidth: 0 }} listbox={{ style: { minWidth: 0 } }}
value={commonStore.settings.dpiScaling + '%'}
selectedOptions={[commonStore.settings.dpiScaling.toString()]}
onOptionSelect={(_, data) => {
if (data.optionValue) {
commonStore.setSettings({
dpiScaling: Number(data.optionValue)
});
toastWithButton(t('Restart the app to apply DPI Scaling.'), t('Restart'), () => {
RestartApp();
}, {
autoClose: 5000
});
}
}}>
{
Array.from({ length: 7 }, (_, i) => (i + 2) * 25).map((v, i) =>
<Option key={i} value={v.toString()}>{v + '%'}</Option>)
}
</Dropdown>
} />
}
<Labeled label={t('Dark Mode')} flex spaceBetween content={
<Switch checked={commonStore.settings.darkMode}
onChange={(e, data) => {
commonStore.setSettings({
darkMode: data.checked
});
}} />
} />
<Labeled label={t('Automatic Updates Check')} flex spaceBetween content={
<Switch checked={commonStore.settings.autoUpdatesCheck}
onChange={(e, data) => {
commonStore.setSettings({
autoUpdatesCheck: data.checked
});
if (data.checked)
checkUpdate(true);
}} />
} />
{
commonStore.settings.language === 'zh' &&
<Labeled label={t('Use Gitee Updates Source')} flex spaceBetween content={
<Switch checked={commonStore.settings.giteeUpdatesSource}
onChange={(e, data) => {
commonStore.setSettings({
giteeUpdatesSource: data.checked
});
}} />
} />
}
{
commonStore.settings.language === 'zh' && commonStore.platform !== 'linux' &&
<Labeled label={t('Use Tsinghua Pip Mirrors')} flex spaceBetween content={
<Switch checked={commonStore.settings.cnMirror}
onChange={(e, data) => {
commonStore.setSettings({
cnMirror: data.checked
});
}} />
} />
}
<Labeled label={t('Allow external access to the API (service must be restarted)')} flex spaceBetween content={
<Switch checked={commonStore.settings.host !== '127.0.0.1'}
onChange={(e, data) => {
commonStore.setSettings({
host: data.checked ? '0.0.0.0' : '127.0.0.1'
});
}} />
} />
<Accordion collapsible openItems={!commonStore.advancedCollapsed && 'advanced'} onToggle={(e, data) => {
if (data.value === 'advanced')
commonStore.setAdvancedCollapsed(!commonStore.advancedCollapsed);
}}>
<AccordionItem value="advanced">
<AccordionHeader ref={advancedHeaderRef} size="large">{t('Advanced')}</AccordionHeader>
<AccordionPanel>
<div className="flex flex-col gap-2 overflow-hidden">
{commonStore.platform !== 'darwin' &&
<Labeled label={t('Custom Models Path')}
content={
<Input className="grow" placeholder="./models" value={commonStore.settings.customModelsPath}
onChange={(e, data) => {
commonStore.setSettings({
customModelsPath: data.value
});
}} />
} />
}
<Labeled label={t('Custom Python Path')} // if set, will not use precompiled cuda kernel
content={
<Input className="grow" placeholder="./py310/python" value={commonStore.settings.customPythonPath}
onChange={(e, data) => {
commonStore.setDepComplete(false);
commonStore.setSettings({
customPythonPath: data.value
});
}} />
} />
<Labeled label={'API URL'}
content={
<div className="flex gap-2">
<Input style={{ minWidth: 0 }} className="grow" value={commonStore.settings.apiUrl}
onChange={(e, data) => {
commonStore.setSettings({
apiUrl: data.value
});
}} />
<Dropdown style={{ minWidth: 0 }} listbox={{ style: { minWidth: 0 } }}
value="..." selectedOptions={[]} expandIcon={null}
onOptionSelect={(_, data) => {
commonStore.setSettings({
apiUrl: data.optionValue
});
if (data.optionText === 'OpenAI') {
if (commonStore.settings.apiChatModelName === 'rwkv')
commonStore.setSettings({
apiChatModelName: 'gpt-3.5-turbo'
});
if (commonStore.settings.apiCompletionModelName === 'rwkv')
commonStore.setSettings({
apiCompletionModelName: 'text-davinci-003'
});
}
}}>
<Option value="">{t('Localhost')!}</Option>
<Option value="https://api.openai.com">OpenAI</Option>
</Dropdown>
</div>
} />
<Labeled label={'API Key'}
content={
<Input className="grow" placeholder="sk-" value={commonStore.settings.apiKey}
onChange={(e, data) => {
commonStore.setSettings({
apiKey: data.value
});
}} />
} />
<Labeled label={t('API Chat Model Name')}
content={
<div className="flex gap-2">
<Input style={{ minWidth: 0 }} className="grow" placeholder="rwkv"
value={commonStore.settings.apiChatModelName}
onChange={(e, data) => {
commonStore.setSettings({
apiChatModelName: data.value
});
}} />
<Dropdown style={{ minWidth: 0 }} listbox={{ style: { minWidth: 0 } }}
value="..." selectedOptions={[]} expandIcon={null}
onOptionSelect={(_, data) => {
if (data.optionValue) {
commonStore.setSettings({
apiChatModelName: data.optionValue
});
}
}}>
{
['rwkv', 'gpt-4', 'gpt-4-0613', 'gpt-4-32k', 'gpt-4-32k-0613', 'gpt-3.5-turbo', 'gpt-3.5-turbo-0613', 'gpt-3.5-turbo-16k', 'gpt-3.5-turbo-16k-0613']
.map((v, i) =>
<Option key={i} value={v}>{v}</Option>
)
}
</Dropdown>
</div>
} />
<Labeled label={t('API Completion Model Name')}
content={
<div className="flex gap-2">
<Input style={{ minWidth: 0 }} className="grow" placeholder="rwkv"
value={commonStore.settings.apiCompletionModelName}
onChange={(e, data) => {
commonStore.setSettings({
apiCompletionModelName: data.value
});
}} />
<Dropdown style={{ minWidth: 0 }} listbox={{ style: { minWidth: 0 } }}
value="..." selectedOptions={[]} expandIcon={null}
onOptionSelect={(_, data) => {
if (data.optionValue) {
commonStore.setSettings({
apiCompletionModelName: data.optionValue
});
}
}}>
{
['rwkv', 'text-davinci-003', 'text-davinci-002', 'text-curie-001', 'text-babbage-001', 'text-ada-001']
.map((v, i) =>
<Option key={i} value={v}>{v}</Option>
)
}
</Dropdown>
</div>
} />
</div>
</AccordionPanel>
</AccordionItem>
</Accordion>
{
commonStore.platform === 'web' ?
(
<div className="flex flex-col gap-2">
<GeneralSettings />
<AdvancedGeneralSettings />
</div>
)
:
(
<div className="flex flex-col gap-2">
<GeneralSettings />
<Labeled label={t('Automatic Updates Check')} flex spaceBetween content={
<Switch checked={commonStore.settings.autoUpdatesCheck}
onChange={(e, data) => {
commonStore.setSettings({
autoUpdatesCheck: data.checked
});
if (data.checked)
checkUpdate(true);
}} />
} />
{
commonStore.settings.language === 'zh' &&
<Labeled label={t('Use Gitee Updates Source')} flex spaceBetween content={
<Switch checked={commonStore.settings.giteeUpdatesSource}
onChange={(e, data) => {
commonStore.setSettings({
giteeUpdatesSource: data.checked
});
}} />
} />
}
{
commonStore.settings.language === 'zh' && commonStore.platform !== 'linux' &&
<Labeled label={t('Use Tsinghua Pip Mirrors')} flex spaceBetween content={
<Switch checked={commonStore.settings.cnMirror}
onChange={(e, data) => {
commonStore.setSettings({
cnMirror: data.checked
});
}} />
} />
}
<Labeled label={t('Allow external access to the API (service must be restarted)')} flex spaceBetween
content={
<Switch checked={commonStore.settings.host !== '127.0.0.1'}
onChange={(e, data) => {
commonStore.setSettings({
host: data.checked ? '0.0.0.0' : '127.0.0.1'
});
}} />
} />
<Accordion collapsible openItems={!commonStore.advancedCollapsed && 'advanced'} onToggle={(e, data) => {
if (data.value === 'advanced')
commonStore.setAdvancedCollapsed(!commonStore.advancedCollapsed);
}}>
<AccordionItem value="advanced">
<AccordionHeader ref={advancedHeaderRef} size="large">{t('Advanced')}</AccordionHeader>
<AccordionPanel>
<div className="flex flex-col gap-2 overflow-hidden">
{commonStore.platform !== 'darwin' &&
<Labeled label={t('Custom Models Path')}
content={
<Input className="grow" placeholder="./models"
value={commonStore.settings.customModelsPath}
onChange={(e, data) => {
commonStore.setSettings({
customModelsPath: data.value
});
}} />
} />
}
<Labeled label={t('Custom Python Path')} // if set, will not use precompiled cuda kernel
content={
<Input className="grow" placeholder="./py310/python"
value={commonStore.settings.customPythonPath}
onChange={(e, data) => {
commonStore.setDepComplete(false);
commonStore.setSettings({
customPythonPath: data.value
});
}} />
} />
<AdvancedGeneralSettings />
</div>
</AccordionPanel>
</AccordionItem>
</Accordion>
</div>
)
}
</div>
} />
);
});
export default Settings;


@@ -1,4 +1,4 @@
import React, { FC, ReactElement, useEffect, useRef, useState } from 'react';
import React, { FC, useEffect, useRef, useState } from 'react';
import { useTranslation } from 'react-i18next';
import { Button, Dropdown, Input, Option, Select, Switch, Tab, TabList } from '@fluentui/react-components';
import {
@@ -24,7 +24,6 @@ import { Labeled } from '../components/Labeled';
import { ToolTipButton } from '../components/ToolTipButton';
import { DataUsageSettings20Regular, Folder20Regular } from '@fluentui/react-icons';
import { useNavigate } from 'react-router';
import { Precision } from './Configs';
import {
CategoryScale,
Chart as ChartJS,
@@ -40,6 +39,12 @@ import { ChartJSOrUndefined } from 'react-chartjs-2/dist/types';
import { WindowShow } from '../../wailsjs/runtime';
import { t } from 'i18next';
import { DialogButton } from '../components/DialogButton';
import {
DataProcessParameters,
LoraFinetuneParameters,
LoraFinetunePrecision,
TrainNavigationItem
} from '../types/train';
ChartJS.register(
CategoryScale,
@@ -86,39 +91,6 @@ const addLossDataToChart = (epoch: number, loss: number) => {
commonStore.setChartData(commonStore.chartData);
};
export type DataProcessParameters = {
dataPath: string;
vocabPath: string;
}
export type LoraFinetunePrecision = 'bf16' | 'fp16' | 'tf32';
export type LoraFinetuneParameters = {
baseModel: string;
ctxLen: number;
epochSteps: number;
epochCount: number;
epochBegin: number;
epochSave: number;
microBsz: number;
accumGradBatches: number;
preFfn: boolean;
headQk: boolean;
lrInit: string;
lrFinal: string;
warmupSteps: number;
beta1: number;
beta2: number;
adamEps: string;
devices: number;
precision: LoraFinetunePrecision;
gradCp: boolean;
loraR: number;
loraAlpha: number;
loraDropout: number;
loraLoad: string
}
const loraFinetuneParametersOptions: Array<[key: keyof LoraFinetuneParameters, type: string, name: string]> = [
['devices', 'number', 'Devices'],
['precision', 'LoraFinetunePrecision', 'Precision'],
@@ -154,7 +126,7 @@ const showError = (e: any) => {
};
const errorsMap = Object.entries({
'python3 ./finetune/lora/train.py': 'Memory is not enough, try to increase the virtual memory or use a smaller base model.',
'python3 ./finetune/lora/train.py': 'Memory is not enough, try to increase the virtual memory (Swap of WSL) or use a smaller base model.',
'cuda out of memory': 'VRAM is not enough',
'valueerror: high <= 0': 'Training data is not enough, reduce context length or add more data for training',
'+= \'+ptx\'': 'You are using WSL 1 for training, please upgrade to WSL 2. e.g. Run "wsl --set-version Ubuntu-22.04 2"',
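errorsMap associates a lowercase fragment of the training output with a human-readable hint. A minimal sketch of how such a lookup table is typically applied; matchError is an illustrative helper, not part of this changeset:

const matchError = (output: string): string | undefined => {
  const lower = output.toLowerCase();
  // the first entry whose key substring occurs in the log wins
  return errorsMap.find(([fragment]) => lower.includes(fragment))?.[1];
};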
@@ -414,7 +386,7 @@ const LoraFinetune: FC = observer(() => {
contentText={t('The data path should be a directory or a file in jsonl format (more formats will be supported in the future).\n\n' +
'When you provide a directory path, all the txt files within that directory will be automatically converted into training data. ' +
'This is commonly used for large-scale training in writing, code generation, or knowledge bases.\n\n' +
'The jsonl format file can be referenced at https://github.com/Abel2076/json2binidx_tool/blob/main/sample.jsonl.\n' +
'The jsonl format file can be referenced at https://github.com/josStorer/RWKV-Runner/blob/master/finetune/data/sample.jsonl.\n' +
'You can also write it similar to OpenAI\'s playground format, as shown in https://platform.openai.com/playground/p/default-chat.\n' +
'Even for multi-turn conversations, they must be written in a single line using `\\n` to indicate line breaks. ' +
'If they are different dialogues or topics, they should be written in separate lines.')} />
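To make the described jsonl shape concrete: one JSON object per line, with multi-turn dialogue folded onto a single line via \n. A hypothetical two-sample file; the "text" field name follows the linked sample.jsonl and should be verified against it:

{"text": "User: What is RWKV?\nAssistant: RWKV is an RNN that reaches Transformer-level quality."}
{"text": "A plain narrative sample used for writing-style finetuning."}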
@@ -568,10 +540,6 @@ const LoraFinetune: FC = observer(() => {
);
});
type TrainNavigationItem = {
element: ReactElement;
};
const pages: { [label: string]: TrainNavigationItem } = {
'LoRA Finetune': {
element: <LoraFinetune />
@@ -582,7 +550,7 @@ const pages: { [label: string]: TrainNavigationItem } = {
};
export const Train: FC = () => {
const Train: FC = () => {
const { t } = useTranslation();
const [tab, setTab] = useState('LoRA Finetune');
@@ -607,3 +575,5 @@ export const Train: FC = () => {
</div>
</div>;
};
export default Train;


@@ -1,5 +1,5 @@
import { ModelConfig } from './Configs';
import { CompletionPreset } from './Completion';
import { CompletionPreset } from '../types/completion';
import { ModelConfig } from '../types/configs';
export const defaultCompositionPrompt = '<pad>';


@@ -1,5 +1,4 @@
import { ReactElement } from 'react';
import { FC, lazy, LazyExoticComponent, ReactElement } from 'react';
import { Configs } from './Configs';
import {
ArrowDownload20Regular,
Chat20Regular,
@@ -12,21 +11,12 @@ import {
Settings20Regular,
Storage20Regular
} from '@fluentui/react-icons';
import { Home } from './Home';
import { Chat } from './Chat';
import { Models } from './Models';
import { Train } from './Train';
import { Settings } from './Settings';
import { About } from './About';
import { Downloads } from './Downloads';
import { Completion } from './Completion';
import { Composition } from './Composition';
type NavigationItem = {
label: string;
path: string;
icon: ReactElement;
element: ReactElement;
element: LazyExoticComponent<FC>;
top: boolean;
};
@@ -35,70 +25,70 @@ export const pages: NavigationItem[] = [
label: 'Home',
path: '/',
icon: <Home20Regular />,
element: <Home />,
element: lazy(() => import('./Home')),
top: true
},
{
label: 'Chat',
path: '/chat',
icon: <Chat20Regular />,
element: <Chat />,
element: lazy(() => import('./Chat')),
top: true
},
{
label: 'Completion',
path: '/completion',
icon: <ClipboardEdit20Regular />,
element: <Completion />,
element: lazy(() => import('./Completion')),
top: true
},
{
label: 'Composition',
path: '/composition',
icon: <MusicNote220Regular />,
element: <Composition />,
element: lazy(() => import('./Composition')),
top: true
},
{
label: 'Configs',
path: '/configs',
icon: <DocumentSettings20Regular />,
element: <Configs />,
element: lazy(() => import('./Configs')),
top: true
},
{
label: 'Models',
path: '/models',
icon: <DataUsageSettings20Regular />,
element: <Models />,
element: lazy(() => import('./Models')),
top: true
},
{
label: 'Downloads',
path: '/downloads',
icon: <ArrowDownload20Regular />,
element: <Downloads />,
element: lazy(() => import('./Downloads')),
top: true
},
{
label: 'Train',
path: '/train',
icon: <Storage20Regular />,
element: <Train />,
element: lazy(() => import('./Train')),
top: true
},
{
label: 'Settings',
path: '/settings',
icon: <Settings20Regular />,
element: <Settings />,
element: lazy(() => import('./Settings')),
top: false
},
{
label: 'About',
path: '/about',
icon: <Info20Regular />,
element: <About />,
element: lazy(() => import('./About')),
top: false
}
];
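Since each element is now a LazyExoticComponent rather than a rendered node, the router has to mount it inside a Suspense boundary (or the LazyImportComponent introduced elsewhere in this changeset). A minimal sketch, assuming the react-router v6 API these pages already use; the fallback node is illustrative:

import React, { FC, Suspense } from 'react';
import { Route, Routes } from 'react-router';

const AppRoutes: FC = () => (
  <Suspense fallback={<div />}>
    <Routes>
      {pages.map(({ path, element: Element }) => (
        // each lazy page chunk is fetched on first navigation
        <Route key={path} path={path} element={<Element />} />
      ))}
    </Routes>
  </Suspense>
);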


@@ -5,39 +5,42 @@ import { getStatus } from './apis';
import { EventsOn, WindowSetTitle } from '../wailsjs/runtime';
import manifest from '../../manifest.json';
import { defaultModelConfigs, defaultModelConfigsMac } from './pages/defaultConfigs';
import { Preset } from './pages/PresetsManager/PresetsButton';
import { wslHandler } from './pages/Train';
import { t } from 'i18next';
import { Preset } from './types/presets';
export async function startup() {
downloadProgramFiles();
EventsOn('downloadList', (data) => {
if (data)
commonStore.setDownloadList(data);
});
EventsOn('wsl', wslHandler);
EventsOn('wslerr', (e) => {
console.log(e);
});
initLocalModelsNotify();
initLoraModels();
initPresets();
initHardwareMonitor();
await GetPlatform().then(p => commonStore.setPlatform(p as Platform));
if (commonStore.platform !== 'web') {
downloadProgramFiles();
EventsOn('downloadList', (data) => {
if (data)
commonStore.setDownloadList(data);
});
EventsOn('wsl', (await import('./pages/Train')).wslHandler);
EventsOn('wslerr', (e) => {
console.log(e);
});
initLocalModelsNotify();
initLoraModels();
initHardwareMonitor();
}
await initConfig();
initCache(true).then(initRemoteText); // depends on config customModelsPath
if (commonStore.platform !== 'web') {
initCache(true).then(initRemoteText); // depends on config customModelsPath
if (commonStore.settings.autoUpdatesCheck) // depends on config settings
checkUpdate();
getStatus(1000).then(status => { // depends on config api port
if (status)
commonStore.setStatus(status);
});
}
}
async function initRemoteText() {
@@ -88,7 +91,8 @@ async function initCache(initUnfinishedModels: boolean) {
async function initPresets() {
await ReadJson('presets.json').then((presets: Preset[]) => {
commonStore.setPresets(presets, false);
if (Array.isArray(presets))
commonStore.setPresets(presets, false);
}).catch(() => {
});
}


@@ -2,21 +2,20 @@ import { makeAutoObservable } from 'mobx';
import { getUserLanguage, isSystemLightMode, saveCache, saveConfigs, savePresets } from '../utils';
import { WindowSetDarkTheme, WindowSetLightTheme } from '../../wailsjs/runtime';
import manifest from '../../../manifest.json';
import { ModelConfig } from '../pages/Configs';
import { Conversation } from '../pages/Chat';
import { ModelSourceItem } from '../pages/Models';
import { DownloadStatus } from '../pages/Downloads';
import { SettingsType } from '../pages/Settings';
import { IntroductionContent } from '../pages/Home';
import { AboutContent } from '../pages/About';
import i18n from 'i18next';
import { CompletionPreset } from '../pages/Completion';
import { defaultCompositionPrompt, defaultModelConfigs, defaultModelConfigsMac } from '../pages/defaultConfigs';
import commonStore from './commonStore';
import { Preset } from '../pages/PresetsManager/PresetsButton';
import { DataProcessParameters, LoraFinetuneParameters } from '../pages/Train';
import { ChartData } from 'chart.js';
import { CompositionParams } from '../pages/Composition';
import { Preset } from '../types/presets';
import { AboutContent } from '../types/about';
import { Conversation } from '../types/chat';
import { CompletionPreset } from '../types/completion';
import { CompositionParams } from '../types/composition';
import { ModelConfig } from '../types/configs';
import { DownloadStatus } from '../types/downloads';
import { IntroductionContent } from '../types/home';
import { ModelSourceItem } from '../types/models';
import { SettingsType } from '../types/settings';
import { DataProcessParameters, LoraFinetuneParameters } from '../types/train';
export enum ModelStatus {
Offline,
@@ -31,9 +30,13 @@ export type Status = {
device_name: string;
}
export type Platform = 'windows' | 'darwin' | 'linux';
export type Attachment = {
name: string;
size: number;
content: string;
}
const labels = ['January', 'February', 'March', 'April', 'May', 'June', 'July'];
export type Platform = 'windows' | 'darwin' | 'linux' | 'web';
class CommonStore {
// global
@@ -54,6 +57,9 @@ class CommonStore {
conversation: Conversation = {};
conversationOrder: string[] = [];
activePreset: Preset | null = null;
attachmentUploading: boolean = false;
attachments: { [uuid: string]: Attachment[] } = {};
currentTempAttachment: Attachment | null = null;
// completion
completionPreset: CompletionPreset | null = null;
completionGenerating: boolean = false;
@@ -74,6 +80,7 @@ class CommonStore {
// configs
currentModelConfigIndex: number = 0;
modelConfigs: ModelConfig[] = [];
modelParamsCollapsed: boolean = true;
// models
modelSourceManifestList: string = 'https://cdn.jsdelivr.net/gh/josstorer/RWKV-Runner@master/manifest.json;';
modelSourceList: ModelSourceItem[] = [];
@@ -95,7 +102,7 @@ class CommonStore {
epochSteps: 200,
epochCount: 20,
epochBegin: 0,
epochSave: 2,
epochSave: 1,
microBsz: 1,
accumGradBatches: 8,
preFfn: false,
@@ -127,7 +134,7 @@ class CommonStore {
customModelsPath: './models',
customPythonPath: '',
apiUrl: '',
apiKey: 'sk-',
apiKey: '',
apiChatModelName: 'rwkv',
apiCompletionModelName: 'rwkv'
};
@@ -167,7 +174,7 @@ class CommonStore {
createModelConfig = (config: ModelConfig = defaultModelConfigs[0], saveConfig: boolean = true) => {
if (config.name === defaultModelConfigs[0].name) {
// deep copy
config = JSON.parse(JSON.stringify(commonStore.platform !== 'darwin' ? defaultModelConfigs[0] : defaultModelConfigsMac[0]));
config = JSON.parse(JSON.stringify(this.platform !== 'darwin' ? defaultModelConfigs[0] : defaultModelConfigsMac[0]));
config.name = new Date().toLocaleString();
}
this.modelConfigs.push(config);
@@ -259,6 +266,10 @@ class CommonStore {
this.advancedCollapsed = value;
}
setModelParamsCollapsed(value: boolean) {
this.modelParamsCollapsed = value;
}
setLastUnfinishedModelDownloads(value: DownloadStatus[]) {
this.lastUnfinishedModelDownloads = value;
}
@@ -320,6 +331,25 @@ class CommonStore {
setLoraModels(value: string[]) {
this.loraModels = value;
}
setAttachmentUploading(value: boolean) {
this.attachmentUploading = value;
}
setAttachments(value: { [uuid: string]: Attachment[] }) {
this.attachments = value;
}
setAttachment(uuid: string, value: Attachment[] | null) {
if (value === null)
delete this.attachments[uuid];
else
this.attachments[uuid] = value;
}
setCurrentTempAttachment(value: Attachment | null) {
this.currentTempAttachment = value;
}
}
export default new CommonStore();
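The new attachment map is keyed by message uuid, and setAttachment(uuid, null) doubles as removal. A short usage sketch with made-up values (import path and uuid are illustrative):

import commonStore, { Attachment } from './stores/commonStore';

const doc: Attachment = { name: 'notes.txt', size: 42, content: 'hello' };
commonStore.setAttachment('message-uuid', [doc]); // attach a document to one message
commonStore.setAttachment('message-uuid', null);  // detach it again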


@@ -1,12 +1,10 @@
[data-theme='dark'] {
@import 'highlight.js/scss/github-dark.scss';
@import 'github-markdown-css/github-markdown-dark.css';
--color-neutral-muted: rgba(110, 118, 129, 0.4);
}
[data-theme='light'] {
@import 'highlight.js/scss/github.scss';
@import 'github-markdown-css/github-markdown-light.css';
--color-neutral-muted: rgba(150, 160, 170, 0.3);
}


@@ -0,0 +1 @@
export type AboutContent = { [lang: string]: string }


@@ -0,0 +1,29 @@
export const userName = 'M E';
export const botName = 'A I';
export const welcomeUuid = 'welcome';
export enum MessageType {
Normal,
Error
}
export type Side = 'left' | 'right'
export type Color = 'neutral' | 'brand' | 'colorful'
export type MessageItem = {
sender: string,
type: MessageType,
color: Color,
avatarImg?: string,
time: string,
content: string,
side: Side,
done: boolean
}
export type Conversation = {
[uuid: string]: MessageItem
}
export type Role = 'assistant' | 'user' | 'system';
export type ConversationMessage = {
role: Role;
content: string;
}


@@ -0,0 +1,12 @@
import { ApiParameters } from './configs';
export type CompletionParams = Omit<ApiParameters, 'apiPort'> & {
stop: string,
injectStart: string,
injectEnd: string
};
export type CompletionPreset = {
name: string,
prompt: string,
params: CompletionParams
}


@@ -0,0 +1,12 @@
import { NoteSequence } from '@magenta/music/esm/protobuf';
export type CompositionParams = {
prompt: string,
maxResponseToken: number,
temperature: number,
topP: number,
autoPlay: boolean,
useLocalSoundFont: boolean,
midi: ArrayBuffer | null,
ns: NoteSequence | null
}


@@ -0,0 +1,29 @@
export type ApiParameters = {
apiPort: number
maxResponseToken: number;
temperature: number;
topP: number;
presencePenalty: number;
frequencyPenalty: number;
}
export type Device = 'CPU' | 'CUDA' | 'CUDA-Beta' | 'WebGPU' | 'MPS' | 'Custom';
export type Precision = 'fp16' | 'int8' | 'fp32';
export type ModelParameters = {
// different models cannot have the same name
modelName: string;
device: Device;
precision: Precision;
storedLayers: number;
maxStoredLayers: number;
useCustomCuda?: boolean;
customStrategy?: string;
useCustomTokenizer?: boolean;
customTokenizer?: string;
}
export type ModelConfig = {
// different configs can have the same name
name: string;
apiParameters: ApiParameters;
modelParameters: ModelParameters;
enableWebUI?: boolean;
}


@@ -0,0 +1,11 @@
export type DownloadStatus = {
name: string;
path: string;
url: string;
transferred: number;
size: number;
speed: number;
progress: number;
downloading: boolean;
done: boolean;
}


@@ -0,0 +1,11 @@
import { ReactElement } from 'react';
export type IntroductionContent = {
[lang: string]: string
}
export type NavCard = {
label: string;
desc: string;
path: string;
icon: ReactElement;
};


@@ -0,0 +1,14 @@
export type ModelSourceItem = {
name: string;
size: number;
lastUpdated: string;
desc?: { [lang: string]: string | undefined; };
SHA256?: string;
url?: string;
downloadUrl?: string;
isComplete?: boolean;
isLocal?: boolean;
localSize?: number;
lastUpdatedMs?: number;
hide?: boolean;
};


@@ -0,0 +1,30 @@
import { ReactElement } from 'react';
import { ConversationMessage } from './chat';
export type PresetType = 'chat' | 'completion' | 'chatInCompletion'
export type Preset = {
name: string,
tag: string,
// if name and sourceUrl are the same, it will be overridden when importing
sourceUrl: string,
desc: string,
avatarImg: string,
type: PresetType,
// chat
welcomeMessage: string,
messages: ConversationMessage[],
displayPresetMessages: boolean,
// completion
prompt: string,
stop: string,
injectStart: string,
injectEnd: string,
presystem?: boolean,
userName?: string,
assistantName?: string
}
export type PresetsNavigationItem = {
icon: ReactElement;
element: ReactElement;
};


@@ -0,0 +1,21 @@
export const Languages = {
dev: 'English', // i18n default
zh: '简体中文',
ja: '日本語'
};
export type Language = keyof typeof Languages;
export type SettingsType = {
language: Language
darkMode: boolean
autoUpdatesCheck: boolean
giteeUpdatesSource: boolean
cnMirror: boolean
host: string
dpiScaling: number
customModelsPath: string
customPythonPath: string
apiUrl: string
apiKey: string
apiChatModelName: string
apiCompletionModelName: string
}


@@ -0,0 +1,35 @@
import { ReactElement } from 'react';
export type DataProcessParameters = {
dataPath: string;
vocabPath: string;
}
export type LoraFinetunePrecision = 'bf16' | 'fp16' | 'tf32';
export type LoraFinetuneParameters = {
baseModel: string;
ctxLen: number;
epochSteps: number;
epochCount: number;
epochBegin: number;
epochSave: number;
microBsz: number;
accumGradBatches: number;
preFfn: boolean;
headQk: boolean;
lrInit: string;
lrFinal: string;
warmupSteps: number;
beta1: number;
beta2: number;
adamEps: string;
devices: number;
precision: LoraFinetunePrecision;
gradCp: boolean;
loraR: number;
loraAlpha: number;
loraDropout: number;
loraLoad: string
}
export type TrainNavigationItem = {
element: ReactElement;
};


@@ -1,6 +1,5 @@
import {
AddToDownloadList,
CopyFile,
DeleteFile,
DepCheck,
InstallPyDep,
@@ -16,13 +15,13 @@ import { toast } from 'react-toastify';
import { t } from 'i18next'; import { t } from 'i18next';
import { ToastOptions } from 'react-toastify/dist/types'; import { ToastOptions } from 'react-toastify/dist/types';
import { Button } from '@fluentui/react-components'; import { Button } from '@fluentui/react-components';
import { Language, Languages, SettingsType } from '../pages/Settings';
import { ModelSourceItem } from '../pages/Models';
import { ModelConfig, ModelParameters } from '../pages/Configs';
import { DownloadStatus } from '../pages/Downloads';
import { DataProcessParameters, LoraFinetuneParameters } from '../pages/Train';
import { BrowserOpenURL, WindowShow } from '../../wailsjs/runtime'; import { BrowserOpenURL, WindowShow } from '../../wailsjs/runtime';
import { NavigateFunction } from 'react-router'; import { NavigateFunction } from 'react-router';
import { ModelConfig, ModelParameters } from '../types/configs';
import { DownloadStatus } from '../types/downloads';
import { ModelSourceItem } from '../types/models';
import { Language, Languages, SettingsType } from '../types/settings';
import { DataProcessParameters, LoraFinetuneParameters } from '../types/train';
export type Cache = { export type Cache = {
version: string version: string
@@ -184,7 +183,7 @@ export const getStrategy = (modelConfig: ModelConfig | undefined = undefined) =>
case 'CUDA': case 'CUDA':
case 'CUDA-Beta': case 'CUDA-Beta':
if (avoidOverflow) if (avoidOverflow)
strategy = 'cuda fp32 *1 -> '; strategy = params.useCustomCuda ? 'cuda fp16 *1 -> ' : 'cuda fp32 *1 -> ';
strategy += 'cuda '; strategy += 'cuda ';
strategy += params.precision === 'fp16' ? 'fp16' : params.precision === 'int8' ? 'fp16i8' : 'fp32'; strategy += params.precision === 'fp16' ? 'fp16' : params.precision === 'int8' ? 'fp16i8' : 'fp32';
if (params.storedLayers < params.maxStoredLayers) if (params.storedLayers < params.maxStoredLayers)
@@ -283,6 +282,32 @@ export function bytesToKb(size: number) {
return (size / 1024).toFixed(2); return (size / 1024).toFixed(2);
} }
export function bytesToReadable(size: number) {
if (size < 1024) return size + ' B';
else if (size < 1024 * 1024) return bytesToKb(size) + ' KB';
else if (size < 1024 * 1024 * 1024) return bytesToMb(size) + ' MB';
else return bytesToGb(size) + ' GB';
}
export function getServerRoot(defaultLocalPort: number) {
const customApiUrl = commonStore.settings.apiUrl.trim().replace(/\/$/, '');
if (customApiUrl)
return customApiUrl;
if (commonStore.platform === 'web')
return '';
return `http://127.0.0.1:${defaultLocalPort}`;
}
export function absPathAsset(path: string) {
if (commonStore.platform === 'web')
return path;
if ((path.length > 0 && path[0] === '/') ||
(path.length > 1 && path[1] === ':')) {
return '=>' + path;
}
return path;
}
export async function checkUpdate(notifyEvenLatest: boolean = false) { export async function checkUpdate(notifyEvenLatest: boolean = false) {
fetch(!commonStore.settings.giteeUpdatesSource ? fetch(!commonStore.settings.giteeUpdatesSource ?
'https://api.github.com/repos/josstorer/RWKV-Runner/releases/latest' : 'https://api.github.com/repos/josstorer/RWKV-Runner/releases/latest' :
@@ -402,8 +427,6 @@ export const checkDependencies = async (navigate: NavigateFunction) => {
return false; return false;
} }
commonStore.setDepComplete(true); commonStore.setDepComplete(true);
if (commonStore.platform === 'windows')
CopyFile('./backend-python/wkv_cuda_utils/wkv_cuda_model.py', './py310/Lib/site-packages/rwkv/model.py');
} }
return true; return true;
}; };
@@ -428,12 +451,16 @@ export function toastWithButton(text: string, buttonText: string, onClickButton:
return id; return id;
} }
export function getSupportedCustomCudaFile() { export function getSupportedCustomCudaFile(isBeta: boolean) {
if ([' 10', ' 16', ' 20', ' 30', 'MX', 'Tesla P', 'Quadro P', 'NVIDIA P', 'TITAN X', 'TITAN RTX', 'RTX A', if ([' 10', ' 16', ' 20', ' 30', 'MX', 'Tesla P', 'Quadro P', 'NVIDIA P', 'TITAN X', 'TITAN RTX', 'RTX A',
'Quadro RTX 4000', 'Quadro RTX 5000', 'Tesla T4', 'NVIDIA A10', 'NVIDIA A40'].some(v => commonStore.status.device_name.includes(v))) 'Quadro RTX 4000', 'Quadro RTX 5000', 'Tesla T4', 'NVIDIA A10', 'NVIDIA A40'].some(v => commonStore.status.device_name.includes(v)))
return './backend-python/wkv_cuda_utils/wkv_cuda10_30.pyd'; return isBeta ?
'./backend-python/wkv_cuda_utils/beta/wkv_cuda10_30.pyd' :
'./backend-python/wkv_cuda_utils/wkv_cuda10_30.pyd';
else if ([' 40', 'RTX 5000 Ada', 'RTX 6000 Ada', 'RTX TITAN Ada', 'NVIDIA L40'].some(v => commonStore.status.device_name.includes(v))) else if ([' 40', 'RTX 5000 Ada', 'RTX 6000 Ada', 'RTX TITAN Ada', 'NVIDIA L40'].some(v => commonStore.status.device_name.includes(v)))
return './backend-python/wkv_cuda_utils/wkv_cuda40.pyd'; return isBeta ?
'./backend-python/wkv_cuda_utils/beta/wkv_cuda40.pyd' :
'./backend-python/wkv_cuda_utils/wkv_cuda40.pyd';
else else
return ''; return '';
} }
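A quick sketch of how the newly added helpers compose; the endpoint path is illustrative and commonStore is assumed to be initialized:

const root = getServerRoot(8000);              // '', a custom API URL, or 'http://127.0.0.1:8000'
const url = `${root}/v1/completions`;          // hypothetical path; on web builds this is same-origin
console.log(bytesToReadable(3 * 1024 * 1024)); // '3.00 MB'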

frontend/src/webWails.js

@@ -0,0 +1,157 @@
function defineRuntime(name, func) {
  window.runtime[name] = func
}

function defineApp(name, func) {
  window.go['backend_golang']['App'][name] = func
}

if (!window.runtime) {
  window.runtime = {}
  document.title += ' WebUI'

  // not implemented
  defineRuntime('EventsOnMultiple', () => {})
  defineRuntime('WindowSetLightTheme', () => {})
  defineRuntime('WindowSetDarkTheme', () => {})
  defineRuntime('WindowShow', () => {})
  defineRuntime('WindowHide', () => {})

  // implemented
  defineRuntime('ClipboardGetText', async () => {
    return await navigator.clipboard.readText()
  })
  defineRuntime('ClipboardSetText', async (text) => {
    await navigator.clipboard.writeText(text)
    return true
  })
  defineRuntime('WindowSetTitle', (title) => {
    document.title = title
  })
  defineRuntime('BrowserOpenURL', (url) => {
    window.open(url, '_blank', 'noopener, noreferrer')
  })
}

if (!window.go) {
  window.go = {}
  window.go['backend_golang'] = {}
  window.go['backend_golang']['App'] = {}

  // not implemented
  defineApp('AddToDownloadList', async () => {})
  defineApp('ContinueDownload', async () => {})
  defineApp('ConvertData', async () => {})
  defineApp('ConvertModel', async () => {})
  defineApp('ConvertSafetensors', async () => {})
  defineApp('CopyFile', async () => {})
  defineApp('DeleteFile', async () => {})
  defineApp('DepCheck', async () => {})
  defineApp('DownloadFile', async () => {})
  defineApp('GetPyError', async () => {})
  defineApp('InstallPyDep', async () => {})
  defineApp('IsPortAvailable', async () => {})
  defineApp('MergeLora', async () => {})
  defineApp('OpenFileFolder', async () => {})
  defineApp('PauseDownload', async () => {})
  defineApp('ReadFileInfo', async () => {})
  defineApp('RestartApp', async () => {})
  defineApp('StartServer', async () => {})
  defineApp('StartWebGPUServer', async () => {})
  defineApp('UpdateApp', async () => {})
  defineApp('WslCommand', async () => {})
  defineApp('WslEnable', async () => {})
  defineApp('WslInstallUbuntu', async () => {})
  defineApp('WslIsEnabled', async () => {})
  defineApp('WslStart', async () => {})
  defineApp('WslStop', async () => {})

  // implemented
  defineApp('FileExists', async () => {
    return false
  })
  defineApp('GetPlatform', async () => {
    return 'web'
  })
  defineApp('ListDirFiles', async () => {
    return []
  })
  defineApp('OpenOpenFileDialog', async (filterPattern) => {
    return new Promise((resolve, reject) => {
      const input = document.createElement('input')
      input.type = 'file'
      input.accept = filterPattern
        .replaceAll('*.txt', 'text/plain')
        .replaceAll('*.', 'application/')
        .replaceAll(';', ',')
      input.onchange = e => {
        const file = e.target?.files[0]
        if (file.type === 'text/plain') {
          const reader = new FileReader()
          reader.readAsText(file, 'UTF-8')
          reader.onload = readerEvent => {
            const content = readerEvent.target?.result
            resolve({
              blob: file,
              content: content
            })
          }
        } else {
          resolve({
            blob: file
          })
        }
      }
      input.click()
    })
  })
  defineApp('OpenSaveFileDialog', async (filterPattern, defaultFileName, savedContent) => {
    const saver = await import('file-saver')
    saver.saveAs(new Blob([savedContent], { type: 'text/plain;charset=utf-8' }), defaultFileName)
    return ''
  })
  defineApp('OpenSaveFileDialogBytes', async (filterPattern, defaultFileName, savedContent) => {
    const saver = await import('file-saver')
    saver.saveAs(new Blob([new Uint8Array(savedContent)], { type: 'octet/stream' }), defaultFileName)
    return ''
  })
  defineApp('ReadJson', async (fileName) => {
    return JSON.parse(localStorage.getItem(fileName))
  })
  defineApp('SaveJson', async (fileName, data) => {
    localStorage.setItem(fileName, JSON.stringify(data))
  })
}
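The effect of this shim is that frontend code can call the generated Wails bindings unchanged; in the browser the stubs answer instead of the Go backend. A sketch, assuming an async context (the desktop return value is an assumption, only 'web' is defined above):

const platform: string = await (window as any).go['backend_golang']['App']['GetPlatform']();
if (platform === 'web') {
  // WSL, dependency checks, downloads, etc. resolve as no-op stubs here
}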


@@ -1,6 +1,37 @@
-import {defineConfig} from 'vite';
+// @ts-ignore
+import { dependencies } from './package.json';
+import { defineConfig } from 'vite';
 import react from '@vitejs/plugin-react';
-import {visualizer} from 'rollup-plugin-visualizer';
+import { visualizer } from 'rollup-plugin-visualizer';
+
+// dependencies that exist anywhere
+const vendor = [
+  'react', 'react-dom', 'react-router', 'react-router-dom',
+  '@fluentui/react-icons',
+  'mobx', 'mobx-react-lite',
+  'i18next', 'react-i18next',
+  'usehooks-ts', 'react-toastify',
+  'classnames'
+];
+
+const embedded = [
+  // split @fluentui/react-components by components
+  '@fluentui/react-components',
+  // dependencies that exist in single component
+  'react-beautiful-dnd',
+  '@magenta/music', 'html-midi-player',
+  'react-markdown', 'rehype-highlight', 'rehype-raw', 'remark-breaks', 'remark-gfm'
+];
+
+function renderChunks(deps: Record<string, string>) {
+  let chunks = {};
+  Object.keys(deps).forEach((key) => {
+    if ([...vendor, ...embedded].includes(key)) return;
+    chunks[key] = [key];
+  });
+  return chunks;
+}
 
 // https://vitejs.dev/config/
 export default defineConfig({
@@ -9,5 +40,16 @@ export default defineConfig({
     template: 'treemap',
     gzipSize: true,
     brotliSize: true
-  })]
+  })],
+  build: {
+    chunkSizeWarningLimit: 3000,
+    rollupOptions: {
+      output: {
+        manualChunks: {
+          vendor,
+          ...renderChunks(dependencies)
+        }
+      }
+    }
+  }
 });
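renderChunks gives every remaining dependency its own chunk, while vendor packages share one chunk and embedded packages stay inside the chunks of the components that import them. A hypothetical input/output pair:

// Hypothetical dependencies object from package.json.
const deps = { react: '18.x', mobx: '6.x', 'file-saver': '2.x' };
renderChunks(deps);
// => { 'file-saver': ['file-saver'] }; react and mobx are skipped because
//    they are already in the shared vendor chunk.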


@@ -28,12 +28,16 @@ export function GetPyError():Promise<string>;
 export function InstallPyDep(arg1:string,arg2:boolean):Promise<string>;
 
+export function IsPortAvailable(arg1:number):Promise<boolean>;
+
 export function ListDirFiles(arg1:string):Promise<Array<backend_golang.FileInfo>>;
 
 export function MergeLora(arg1:string,arg2:boolean,arg3:number,arg4:string,arg5:string,arg6:string):Promise<string>;
 
 export function OpenFileFolder(arg1:string,arg2:boolean):Promise<void>;
 
+export function OpenOpenFileDialog(arg1:string):Promise<string>;
+
 export function OpenSaveFileDialog(arg1:string,arg2:string,arg3:string):Promise<string>;
 
 export function OpenSaveFileDialogBytes(arg1:string,arg2:string,arg3:Array<number>):Promise<string>;
@@ -48,7 +52,7 @@ export function RestartApp():Promise<void>;
 export function SaveJson(arg1:string,arg2:any):Promise<void>;
 
-export function StartServer(arg1:string,arg2:number,arg3:string,arg4:boolean):Promise<string>;
+export function StartServer(arg1:string,arg2:number,arg3:string,arg4:boolean,arg5:boolean):Promise<string>;
 
 export function StartWebGPUServer(arg1:number,arg2:string):Promise<string>;


@@ -54,6 +54,10 @@ export function InstallPyDep(arg1, arg2) {
   return window['go']['backend_golang']['App']['InstallPyDep'](arg1, arg2);
 }
 
+export function IsPortAvailable(arg1) {
+  return window['go']['backend_golang']['App']['IsPortAvailable'](arg1);
+}
+
 export function ListDirFiles(arg1) {
   return window['go']['backend_golang']['App']['ListDirFiles'](arg1);
 }
@@ -66,6 +70,10 @@ export function OpenFileFolder(arg1, arg2) {
   return window['go']['backend_golang']['App']['OpenFileFolder'](arg1, arg2);
 }
 
+export function OpenOpenFileDialog(arg1) {
+  return window['go']['backend_golang']['App']['OpenOpenFileDialog'](arg1);
+}
+
 export function OpenSaveFileDialog(arg1, arg2, arg3) {
   return window['go']['backend_golang']['App']['OpenSaveFileDialog'](arg1, arg2, arg3);
 }
@@ -94,8 +102,8 @@ export function SaveJson(arg1, arg2) {
   return window['go']['backend_golang']['App']['SaveJson'](arg1, arg2);
 }
 
-export function StartServer(arg1, arg2, arg3, arg4) {
-  return window['go']['backend_golang']['App']['StartServer'](arg1, arg2, arg3, arg4);
-}
+export function StartServer(arg1, arg2, arg3, arg4, arg5) {
+  return window['go']['backend_golang']['App']['StartServer'](arg1, arg2, arg3, arg4, arg5);
+}
 
 export function StartWebGPUServer(arg1, arg2) {

go.mod

@@ -4,15 +4,16 @@ go 1.20
 
 require (
 	github.com/cavaliergopher/grab/v3 v3.0.1
+	github.com/fsnotify/fsnotify v1.6.0
 	github.com/minio/selfupdate v0.6.0
+	github.com/nyaosorg/go-windows-su v0.2.1
 	github.com/ubuntu/gowsl v0.0.0-20230615094051-94945650cc1e
-	github.com/wailsapp/wails/v2 v2.5.1
+	github.com/wailsapp/wails/v2 v2.6.0
 )
 
 require (
 	aead.dev/minisign v0.2.0 // indirect
 	github.com/bep/debounce v1.2.1 // indirect
-	github.com/fsnotify/fsnotify v1.6.0
 	github.com/go-ole/go-ole v1.2.6 // indirect
 	github.com/google/uuid v1.3.0 // indirect
 	github.com/jchv/go-winloader v0.0.0-20210711035445-715c2860da7e // indirect
@@ -22,8 +23,7 @@ require (
 	github.com/leaanthony/gosod v1.0.3 // indirect
 	github.com/leaanthony/slicer v1.6.0 // indirect
 	github.com/mattn/go-colorable v0.1.13 // indirect
-	github.com/mattn/go-isatty v0.0.18 // indirect
-	github.com/nyaosorg/go-windows-su v0.2.1
+	github.com/mattn/go-isatty v0.0.19 // indirect
 	github.com/pkg/browser v0.0.0-20210911075715-681adbf594b8 // indirect
 	github.com/pkg/errors v0.9.1 // indirect
 	github.com/rivo/uniseg v0.4.4 // indirect
@@ -33,9 +33,10 @@ require (
 	github.com/ubuntu/decorate v0.0.0-20230125165522-2d5b0a9bb117 // indirect
 	github.com/valyala/bytebufferpool v1.0.0 // indirect
 	github.com/valyala/fasttemplate v1.2.2 // indirect
+	github.com/wailsapp/go-webview2 v1.0.1 // indirect
 	github.com/wailsapp/mimetype v1.4.1 // indirect
 	golang.org/x/crypto v0.9.0 // indirect
-	golang.org/x/exp v0.0.0-20230515195305-f3d0a9c9a5cc // indirect
+	golang.org/x/exp v0.0.0-20230522175609-2e198f4a06a1 // indirect
 	golang.org/x/net v0.10.0 // indirect
 	golang.org/x/sys v0.9.0 // indirect
 	golang.org/x/text v0.9.0 // indirect

go.sum

@@ -36,8 +36,8 @@ github.com/mattn/go-colorable v0.1.13 h1:fFA4WZxdEF4tXPZVKMLwD8oUnCTTo08duU7wxec
 github.com/mattn/go-colorable v0.1.13/go.mod h1:7S9/ev0klgBDR4GtXTXX8a3vIGJpMovkB8vQcUbaXHg=
 github.com/mattn/go-isatty v0.0.14/go.mod h1:7GGIvUiUoEMVVmxf/4nioHXj79iQHKdU27kJ6hsGG94=
 github.com/mattn/go-isatty v0.0.16/go.mod h1:kYGgaQfpe5nmfYZH+SKPsOc2e4SrIfOl2e/yFXSvRLM=
-github.com/mattn/go-isatty v0.0.18 h1:DOKFKCQ7FNG2L1rbrmstDN4QVRdS89Nkh85u68Uwp98=
-github.com/mattn/go-isatty v0.0.18/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
+github.com/mattn/go-isatty v0.0.19 h1:JITubQf0MOLdlGRuRq+jtsDlekdYPia9ZFsB8h/APPA=
+github.com/mattn/go-isatty v0.0.19/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
 github.com/minio/selfupdate v0.6.0 h1:i76PgT0K5xO9+hjzKcacQtO7+MjJ4JKA8Ak8XQ9DDwU=
 github.com/minio/selfupdate v0.6.0/go.mod h1:bO02GTIPCMQFTEvE5h4DjYB58bCoZ35XLeBf0buTDdM=
 github.com/nyaosorg/go-windows-su v0.2.1 h1:5V0XavLyjOqPUp7psxxCvBISaneU4XmFPSMlejSl5sc=
@@ -69,17 +69,19 @@ github.com/valyala/bytebufferpool v1.0.0/go.mod h1:6bBcMArwyJ5K/AmCkWv1jt77kVWyC
 github.com/valyala/fasttemplate v1.2.1/go.mod h1:KHLXt3tVN2HBp8eijSv/kGJopbvo7S+qRAEEKiv+SiQ=
 github.com/valyala/fasttemplate v1.2.2 h1:lxLXG0uE3Qnshl9QyaK6XJxMXlQZELvChBOCmQD0Loo=
 github.com/valyala/fasttemplate v1.2.2/go.mod h1:KHLXt3tVN2HBp8eijSv/kGJopbvo7S+qRAEEKiv+SiQ=
+github.com/wailsapp/go-webview2 v1.0.1 h1:dEJIeEApW/MhO2tTMISZBFZPuW7kwrFA1NtgFB1z1II=
+github.com/wailsapp/go-webview2 v1.0.1/go.mod h1:Uk2BePfCRzttBBjFrBmqKGJd41P6QIHeV9kTgIeOZNo=
 github.com/wailsapp/mimetype v1.4.1 h1:pQN9ycO7uo4vsUUuPeHEYoUkLVkaRntMnHJxVwYhwHs=
 github.com/wailsapp/mimetype v1.4.1/go.mod h1:9aV5k31bBOv5z6u+QP8TltzvNGJPmNJD4XlAL3U+j3o=
-github.com/wailsapp/wails/v2 v2.5.1 h1:mfG+2kWqQXYOwdgI43HEILjOZDXbk5woPYI3jP2b+js=
-github.com/wailsapp/wails/v2 v2.5.1/go.mod h1:jbOZbcr/zm79PxXxAjP8UoVlDd9wLW3uDs+isIthDfs=
+github.com/wailsapp/wails/v2 v2.6.0 h1:EyH0zR/EO6dDiqNy8qU5spaXDfkluiq77xrkabPYD4c=
+github.com/wailsapp/wails/v2 v2.6.0/go.mod h1:WBG9KKWuw0FKfoepBrr/vRlyTmHaMibWesK3yz6nNiM=
 golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
 golang.org/x/crypto v0.0.0-20210220033148-5ea612d1eb83/go.mod h1:jdWPYTVW3xRLrWPugEBEK3UY2ZEsg3UU495nc5E+M+I=
 golang.org/x/crypto v0.0.0-20211209193657-4570a0811e8b/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
 golang.org/x/crypto v0.9.0 h1:LF6fAI+IutBocDJ2OT0Q1g8plpYljMZ4+lty+dsqw3g=
 golang.org/x/crypto v0.9.0/go.mod h1:yrmDGqONDYtNj3tH8X9dzUun2m2lzPa9ngI6/RUPGR0=
-golang.org/x/exp v0.0.0-20230515195305-f3d0a9c9a5cc h1:mCRnTeVUjcrhlRmO0VK8a6k6Rrf6TF9htwo2pJVSjIU=
-golang.org/x/exp v0.0.0-20230515195305-f3d0a9c9a5cc/go.mod h1:V1LtkGg67GoY2N1AnLN78QLrzxkLyJw7RJb1gzOOz9w=
+golang.org/x/exp v0.0.0-20230522175609-2e198f4a06a1 h1:k/i9J1pBpvlfR+9QsetwPyERsqu1GIbi967PQMq3Ivc=
+golang.org/x/exp v0.0.0-20230522175609-2e198f4a06a1/go.mod h1:V1LtkGg67GoY2N1AnLN78QLrzxkLyJw7RJb1gzOOz9w=
 golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
 golang.org/x/net v0.0.0-20210505024714-0287a6fb4125/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
 golang.org/x/net v0.0.0-20211112202133-69e39bad7dc2/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
