Compare commits

...

66 Commits

Author SHA1 Message Date
josc146
97ae139de5 release v1.4.9 2023-10-27 14:03:28 +08:00
josc146
afd15ef2c5 base64 preset support 2023-10-27 13:35:29 +08:00
josc146
6c73eae9f6 edited chat message now is marked as Normal 2023-10-27 13:11:12 +08:00
josc146
7078f47f72 allow avatarImg to be local absolute path 2023-10-27 12:53:20 +08:00
josc146
d43954cc88 improve message interruption and retry for Chat page 2023-10-27 12:13:05 +08:00
josc146
c87de93498 allow conversation with some document (.pdf, .txt) 2023-10-27 11:36:29 +08:00
josc146
810843a5ab update manifest.json 2023-10-27 00:48:37 +08:00
josc146
f7cbd2c803 update manifest.json 2023-10-26 18:04:06 +08:00
josc146
faf1852012 update stop strategy 2023-10-26 17:47:40 +08:00
josc146
43cfab5d4b change default World series prefix to User/Assistant 2023-10-26 16:58:53 +08:00
josc146
627a20936d RWKVType now no longer relies on the file name 2023-10-26 16:55:33 +08:00
josc146
1d7f19ffaf update sample.jsonl 2023-10-26 14:08:16 +08:00
josc146
d80565d780 mark rwkv raven series as old model 2023-10-26 13:32:59 +08:00
josc146
d7ba88953d chore 2023-10-25 22:53:14 +08:00
josc146
30e1c3171e update kernel (CUDA Compute Capability 5.3) 2023-10-25 22:53:14 +08:00
josc146
1f058b16ac update kernel (CUDA Compute Capability 6.1, Previously 7.5) 2023-10-25 22:53:13 +08:00
josc146
4a192f4057 upgrade to webgpu 0.2.2 (https://github.com/josStorer/ai00_rwkv_server) 2023-10-25 21:02:44 +08:00
josc146
0331bf47f7 upgrade rwkv 0.8.16 (DirectML support; rwkv 5.2 no longer needs to ensure custom cuda kernel enabled) 2023-10-25 17:56:18 +08:00
josc146
2acdaa96b2 chore 2023-10-25 17:51:59 +08:00
josc146
1d200d53ab fix beta linux kernel 2023-10-25 17:51:13 +08:00
josc146
df9e1f408e add /file-to-text api 2023-10-25 17:14:33 +08:00
josc146
4a18696686 add pip --no-warn-script-location 2023-10-25 17:08:50 +08:00
josc146
46b3b285f5 upgrade packages 2023-10-25 17:07:40 +08:00
josc146
1d6aeab9dc fix the make command on Linux and macOS, no longer need manual operations on the wsl.go file. (#158, #173, #207) 2023-10-25 16:12:34 +08:00
josc146
ab110ba30b chore 2023-10-24 23:41:18 +08:00
josc146
2f0fa4ee56 update readme 2023-10-24 21:11:55 +08:00
josc146
0005816c1d fix linux kernel (partial revert 68228a45) 2023-10-05 00:08:18 +08:00
josc146
f70672e5a0 update .gitignore 2023-10-05 00:08:02 +08:00
github-actions[bot]
ee057071a5 release v1.4.8 2023-10-03 07:05:41 +00:00
josc146
4f26404002 release v1.4.8 2023-10-03 15:05:13 +08:00
josc146
df7652856a completion page: add format content button 2023-10-03 14:54:36 +08:00
josc146
de755463e3 improve overflow 2023-10-03 14:27:44 +08:00
josc146
2fe98d9a2c add rwkv5 cuda kernel error prompt 2023-10-03 14:25:31 +08:00
josc146
2e42039607 chore 2023-10-03 14:04:46 +08:00
josc146
71abd357a4 update startup 2023-10-03 13:50:58 +08:00
josc146
68228a4552 rwkv5 pre-compiled kernel (for windows) 2023-10-03 13:39:07 +08:00
josc146
79851433f8 upgrade rwkv pip (0.8.13) 2023-10-03 13:33:55 +08:00
github-actions[bot]
bd4de12e05 release v1.4.7 2023-09-18 15:04:47 +00:00
josc146
c0aa6aaba9 release v1.4.7 2023-09-18 23:03:54 +08:00
josc146
d7abe5f0d1 add pre-compiled beta cuda kernel (rwkv-beta==0.8.5, 40%+ faster for fp16) (thanks to #180, pre-compiled kernel of RTX 40 Series will be included later) 2023-09-18 23:02:49 +08:00
josc146
5e5e1e9651 custom tokenizer .txt support 2023-09-18 17:20:55 +08:00
github-actions[bot]
f8388a0527 release v1.4.6 2023-09-16 05:06:08 +00:00
josc146
f8b764ef8f release v1.4.6 2023-09-16 13:05:34 +08:00
josc146
fcfaa5944e frontend feature adaptation for api params (user_name, assistant_name, presystem) 2023-09-16 13:02:06 +08:00
josc146
f89e89c1c9 chore 2023-09-16 12:23:16 +08:00
josc146
a25965530c custom tokenizer (#77) 2023-09-16 00:34:11 +08:00
josc146
971124d0d7 upgrade to wails@v2.6.0 (EnableDefaultContextMenu: true) 2023-09-16 00:29:45 +08:00
josc146
d7dcc90008 chore 2023-09-15 16:31:14 +08:00
josc146
df969fcfc6 upgrade cuda-beta 2023-09-15 16:30:11 +08:00
josc146
c4042bbfd8 improve ui desc 2023-09-15 16:26:32 +08:00
josc146
4112200b4c revert(2d5456): refresh local models when download complete (for macOS) 2023-09-15 16:25:04 +08:00
Ikko Eltociear Ashimine
3f9a54e36f Update README_JA.md
add translation.
2023-09-13 16:11:43 +08:00
github-actions[bot]
3ed4456135 release v1.4.5 2023-08-27 15:57:18 +00:00
josc146
e0df9ae47b release v1.4.5 2023-08-27 23:56:37 +08:00
josc146
87b2c3ed7d fix build 2023-08-27 23:56:30 +08:00
josc146
50ff7ef6bc always use requirements.txt 2023-08-27 23:52:52 +08:00
josc146
c7a580ca8a update manifest 2023-08-27 23:16:56 +08:00
josc146
eaae7624a7 add HardwareMonitor (Windows Only) 2023-08-27 22:53:18 +08:00
josc146
fcd59de6fb correct Preset UI description 2023-08-27 21:37:32 +08:00
josc146
1bbe127209 fix webgpu_server file permissions of linux and macos 2023-08-27 21:22:26 +08:00
josc146
b868adc058 chore 2023-08-27 21:21:34 +08:00
josc146
a24b78e8c3 python-backend: extra ChatCompletionBody params (raw, presystem);
add default_stop when stop is null
2023-08-27 21:21:11 +08:00
josc146
c8025f1cff allow message content to be empty 2023-08-27 21:02:54 +08:00
josc146
fe0860dbf0 fix lora finetune max_epochs (#170) 2023-08-24 22:49:57 +08:00
josc146
02d5d641d1 chore 2023-08-24 22:48:54 +08:00
github-actions[bot]
a057bb6c5b release v1.4.4 2023-08-16 15:33:53 +00:00
65 changed files with 4218 additions and 1155 deletions

View File

@@ -57,20 +57,23 @@ jobs:
with:
args: install upx
- run: |
Start-BitsTransfer https://github.com/josStorer/LibreHardwareMonitor.Console/releases/download/v0.1.0/LibreHardwareMonitor.Console.zip ./LibreHardwareMonitor.Console.zip
Expand-Archive ./LibreHardwareMonitor.Console.zip -DestinationPath ./components/LibreHardwareMonitor.Console
Start-BitsTransfer https://www.python.org/ftp/python/3.10.11/python-3.10.11-embed-amd64.zip ./python-3.10.11-embed-amd64.zip
Expand-Archive ./python-3.10.11-embed-amd64.zip -DestinationPath ./py310
$content=Get-Content "./py310/python310._pth"; $content | ForEach-Object {if ($_.ReadCount -eq 3) {"Lib\\site-packages"} else {$_}} | Set-Content ./py310/python310._pth
./py310/python ./backend-python/get-pip.py
./py310/python -m pip install Cython==0.29.36
./py310/python -m pip install Cython==3.0.4
Copy-Item -Path "${{ steps.cp310.outputs.python-path }}/../include" -Destination "py310/include" -Recurse
Copy-Item -Path "${{ steps.cp310.outputs.python-path }}/../libs" -Destination "py310/libs" -Recurse
./py310/python -m pip install cyac==1.7
./py310/python -m pip install cyac==1.9
git clone https://github.com/josStorer/ai00_rwkv_server --depth=1
cd ai00_rwkv_server
cargo build --release
mv ./target/release/ai00_server.exe ../backend-rust/webgpu_server.exe
cd ..
go install github.com/wailsapp/wails/v2/cmd/wails@latest
(Get-Content -Path ./backend-golang/app.go) -replace "//go:custom_build windows ", "" | Set-Content -Path ./backend-golang/app.go
make
Rename-Item -Path "build/bin/RWKV-Runner.exe" -NewName "RWKV-Runner_windows_x64.exe"
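For reference, the `python310._pth` edit above turns on `Lib\site-packages` in the embedded Python distribution; the stock `._pth` file omits it, so packages installed by pip would otherwise not be importable. A minimal Python sketch of the same line replacement, with the path taken from the workflow above:

# Rewrite line 3 of the embedded distribution's ._pth file so that
# pip-installed packages under Lib\site-packages become importable.
from pathlib import Path

pth = Path("./py310/python310._pth")
lines = pth.read_text().splitlines()
lines[2] = "Lib\\site-packages"  # line 3, matching the PowerShell one-liner
pth.write_text("\n".join(lines) + "\n")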
@@ -104,11 +107,10 @@ jobs:
mv ./target/x86_64-unknown-linux-gnu/release/ai00_server ../backend-rust/webgpu_server
cd ..
go install github.com/wailsapp/wails/v2/cmd/wails@latest
rm -rf ./backend-python/wkv_cuda_utils
rm ./backend-python/rwkv_pip/wkv_cuda.pyd
rm ./backend-python/rwkv_pip/rwkv5.pyd
rm ./backend-python/rwkv_pip/beta/wkv_cuda.pyd
rm ./backend-python/get-pip.py
sed -i '1,2d' ./backend-golang/wsl_not_windows.go
rm ./backend-golang/wsl.go
mv ./backend-golang/wsl_not_windows.go ./backend-golang/wsl.go
make
mv build/bin/RWKV-Runner build/bin/RWKV-Runner_linux_x64
@@ -136,11 +138,10 @@ jobs:
mv ./target/release/ai00_server ../backend-rust/webgpu_server
cd ..
go install github.com/wailsapp/wails/v2/cmd/wails@latest
rm -rf ./backend-python/wkv_cuda_utils
rm ./backend-python/rwkv_pip/wkv_cuda.pyd
rm ./backend-python/rwkv_pip/rwkv5.pyd
rm ./backend-python/rwkv_pip/beta/wkv_cuda.pyd
rm ./backend-python/get-pip.py
sed -i '' '1,2d' ./backend-golang/wsl_not_windows.go
rm ./backend-golang/wsl.go
mv ./backend-golang/wsl_not_windows.go ./backend-golang/wsl.go
make
cp build/darwin/Readme_Install.txt build/bin/Readme_Install.txt
cp build/bin/RWKV-Runner.app/Contents/MacOS/RWKV-Runner build/bin/RWKV-Runner_darwin_universal

.gitignore vendored
View File

@@ -18,6 +18,7 @@ __pycache__
/cmd-helper.bat
/install-py-dep.bat
/backend-python/wkv_cuda
/backend-python/rwkv5
*.exe
*.old
.DS_Store
@@ -26,3 +27,4 @@ __pycache__
train_log.txt
finetune/json2binidx_tool/data
/wsl.state
/components

View File

@@ -1,10 +1,41 @@
## Changes
- webgpu support (AMD, Intel, Nvidia, Apple)
- add rwkv-cuda-beta support (faster)
- add misc API (`/models` and `/dashboard/billing/credit_grants`)
- allow multiple systems
- allow completions input to be null
### Features
- allow conversations with documents (.pdf, .txt) (Experimental)
- add `/file-to-text` api
- allow avatarImg to be a local absolute path
- base64 preset support
### Upgrades
- upgrade to rwkv 0.8.16 (DirectML support; rwkv 5.2 no longer needs the custom cuda kernel enabled)
- upgrade to webgpu 0.2.2 (WebGPU mode is now recommended for AMD and Intel users) (https://github.com/josStorer/ai00_rwkv_server)
- upgrade python packages
### Improvements
- improve cuda kernel compatibility (compute capability 5.3, Jetson Nano, Nvidia 10 Series+)
- RWKVType no longer relies on the file name (uses emb)
- improve message interruption and retry for the Chat page
- update sample.jsonl for LoRA finetuning
- update the api stop strategy for better custom user_name and assistant_name support
- edited chat messages are now marked as Normal
- change the default World series prefix to User/Assistant
### Chores
- update manifest.json (RWKV-5)
- update readme and client text descriptions
- add pip --no-warn-script-location
- mark the rwkv raven series as old models
- chore
### Fixes
- fix linux kernel (partial revert 68228a45)
- fix the `make` command on Linux and macOS; manual edits to the wsl.go file are no longer needed (#158, #173, #207)
## Install

View File

@@ -47,7 +47,9 @@ English | [简体中文](README_ZH.md) | [日本語](README_JA.md)
</div>
#### The default configs have custom CUDA kernel acceleration enabled, which is much faster and consumes much less VRAM. If you encounter possible compatibility issues, go to the Configs page and turn off `Use Custom CUDA kernel to Accelerate`.
#### Tip: You can deploy [backend-python](./backend-python/) on a server and use this program as a client only. Fill in your server address in the Settings `API URL`.
#### The default configs have custom CUDA kernel acceleration enabled, which is much faster and consumes much less VRAM. If you encounter possible compatibility issues (garbled output), go to the Configs page and turn off `Use Custom CUDA kernel to Accelerate`, or try upgrading your GPU driver.
#### If Windows Defender claims this is a virus, you can try downloading [v1.3.7_win.zip](https://github.com/josStorer/RWKV-Runner/releases/download/v1.3.7/RWKV-Runner_win.zip) and letting it update automatically to the latest version, or add it to the trusted list (`Windows Security` -> `Virus & threat protection` -> `Manage settings` -> `Exclusions` -> `Add or remove exclusions` -> `Add an exclusion` -> `Folder` -> `RWKV-Runner`).
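The client/server tip above pairs with the OpenAI-compatible endpoints shown later in this diff. A hedged sketch of a minimal client, assuming the backend listens on port 8000 and the chat route is mounted at `/chat/completions` (the route prefix is an assumption; the body fields mirror the `ChatCompletionBody` example below):

# Hypothetical client for a backend-python instance; fill API_URL with the
# same address you would enter in the Settings `API URL` field.
import requests

API_URL = "http://127.0.0.1:8000"

resp = requests.post(
    f"{API_URL}/chat/completions",
    json={
        "messages": [{"role": "user", "content": "hello"}],
        "model": "rwkv",
        "stream": False,
        "max_tokens": 1000,
        "temperature": 1.2,
        "top_p": 0.5,
    },
)
print(resp.json())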

View File

@@ -47,7 +47,9 @@
</div>
#### The default configs have custom CUDA kernel acceleration enabled. If you may encounter compatibility issues, go to the Configs page and turn off `Use Custom CUDA kernel to Accelerate`.
#### Tip: You can deploy [backend-python](./backend-python/) on a server and use this program as a client only. Fill in your server address in the Settings `API URL`.
#### The default configs have custom CUDA kernel acceleration enabled. If you encounter possible compatibility issues (garbled output), go to the Configs page and turn off `Use Custom CUDA kernel to Accelerate`, or try upgrading your GPU driver.
#### If Windows Defender claims this is a virus, you can try downloading [v1.3.7_win.zip](https://github.com/josStorer/RWKV-Runner/releases/download/v1.3.7/RWKV-Runner_win.zip) and letting it update automatically to the latest version, or add it to the trusted list (`Windows Security` -> `Virus & threat protection` -> `Manage settings` -> `Exclusions` -> `Add or remove exclusions` -> `Add an exclusion` -> `Folder` -> `RWKV-Runner`).
@@ -91,8 +93,8 @@ body.json:
## Embeddings API Example
Note: v1.4.0 has improved the quality of the embeddings API. The generated results are not compatible
with previous versions. If you are using the embeddings API to generate knowledge bases or similar, please regenerate them.
If you are using LangChain, use `OpenAIEmbeddings(openai_api_base="http://127.0.0.1:8000", openai_api_key="sk-")`.
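A short sketch of the LangChain usage above (the import path matches LangChain releases of this era and is otherwise an assumption):

# Point LangChain's OpenAI embeddings client at the local RWKV backend.
# The API key is a placeholder; the backend does not validate it.
from langchain.embeddings import OpenAIEmbeddings

embeddings = OpenAIEmbeddings(
    openai_api_base="http://127.0.0.1:8000",
    openai_api_key="sk-",
)
vector = embeddings.embed_query("a big apple")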

View File

@@ -46,7 +46,9 @@ an API-compatible interface, which means every ChatGPT client is an RWKV client.
</div>
#### The preset configs have custom CUDA kernel acceleration enabled, which is faster and consumes less VRAM. If you encounter possible compatibility issues, go to the Configs page and turn off `使用自定义CUDA算子加速` (Use Custom CUDA kernel to Accelerate).
#### Tip: You can deploy [backend-python](./backend-python/) on a server and use this program as a client only. Fill in your server address in the Settings `API URL`.
#### The preset configs have custom CUDA kernel acceleration enabled, which is faster and consumes less VRAM. If you encounter possible compatibility issues (garbled output), go to the Configs page, turn off `使用自定义CUDA算子加速`, or update your GPU driver.
#### If Windows Defender claims this is a virus, you can try downloading [v1.3.7_win.zip](https://github.com/josStorer/RWKV-Runner/releases/download/v1.3.7/RWKV-Runner_win.zip) and letting it auto-update to the latest version, or add it to the trusted list (`Windows Security` -> `Virus & threat protection` -> `Manage settings` -> `Exclusions` -> `Add or remove exclusions` -> `Add an exclusion` -> `Folder` -> `RWKV-Runner`).

View File

@@ -1,6 +1,7 @@
package backend_golang
import (
"bufio"
"context"
"errors"
"net/http"
@@ -8,6 +9,7 @@ import (
"os/exec"
"path/filepath"
"runtime"
"syscall"
"github.com/fsnotify/fsnotify"
"github.com/minio/selfupdate"
@@ -41,6 +43,7 @@ func (a *App) OnStartup(ctx context.Context) {
a.cmdPrefix = "cd " + a.exDir + " && "
}
os.Chmod("./backend-rust/webgpu_server", 0777)
os.Mkdir(a.exDir+"models", os.ModePerm)
os.Mkdir(a.exDir+"lora-models", os.ModePerm)
os.Mkdir(a.exDir+"finetune/json2binidx_tool/data", os.ModePerm)
@@ -50,7 +53,18 @@ func (a *App) OnStartup(ctx context.Context) {
}
a.downloadLoop()
a.watchFs()
a.monitorHardware()
}
func (a *App) OnBeforeClose(ctx context.Context) bool {
if monitor != nil {
monitor.Process.Kill()
}
return false
}
func (a *App) watchFs() {
watcher, err := fsnotify.NewWatcher()
if err == nil {
watcher.Add("./lora-models")
@@ -62,7 +76,7 @@ func (a *App) OnStartup(ctx context.Context) {
if !ok {
return
}
wruntime.EventsEmit(ctx, "fsnotify", event.Name)
wruntime.EventsEmit(a.ctx, "fsnotify", event.Name)
case _, ok := <-watcher.Errors:
if !ok {
return
@@ -73,6 +87,37 @@ func (a *App) OnStartup(ctx context.Context) {
}
}
var monitor *exec.Cmd
func (a *App) monitorHardware() {
if runtime.GOOS != "windows" {
return
}
monitor = exec.Command("./components/LibreHardwareMonitor.Console/LibreHardwareMonitor.Console.exe")
stdout, err := monitor.StdoutPipe()
if err != nil {
monitor = nil
return
}
go func() {
reader := bufio.NewReader(stdout)
for {
line, _, err := reader.ReadLine()
if err != nil {
wruntime.EventsEmit(a.ctx, "monitorerr", err.Error())
break
}
wruntime.EventsEmit(a.ctx, "monitor", string(line))
}
}()
monitor.SysProcAttr = &syscall.SysProcAttr{}
//go:custom_build windows monitor.SysProcAttr.HideWindow = true
monitor.Start()
}
func (a *App) UpdateApp(url string) (broken bool, err error) {
resp, err := http.Get(url)
if err != nil {

View File

@@ -53,12 +53,12 @@ type FileInfo struct {
ModTime string `json:"modTime"`
}
func (a *App) ReadFileInfo(fileName string) (FileInfo, error) {
func (a *App) ReadFileInfo(fileName string) (*FileInfo, error) {
info, err := os.Stat(a.exDir + fileName)
if err != nil {
return FileInfo{}, err
return nil, err
}
return FileInfo{
return &FileInfo{
Name: info.Name(),
Size: info.Size(),
IsDir: info.IsDir(),
@@ -145,6 +145,20 @@ func (a *App) OpenSaveFileDialogBytes(filterPattern string, defaultFileName stri
return path, nil
}
// Only return the path of the selected file, because communication between frontend and backend is slow. Use AssetServer Handler to read the file.
func (a *App) OpenOpenFileDialog(filterPattern string) (string, error) {
path, err := wruntime.OpenFileDialog(a.ctx, wruntime.OpenDialogOptions{
Filters: []wruntime.FileFilter{{Pattern: filterPattern}},
})
if err != nil {
return "", err
}
if path == "" {
return "", nil
}
return path, nil
}
func (a *App) OpenFileFolder(path string, relative bool) error {
var absPath string
var err error

View File

@@ -28,8 +28,7 @@ func (a *App) StartServer(python string, port int, host string, rwkvBeta bool) (
func (a *App) StartWebGPUServer(port int, host string) (string, error) {
args := []string{"./backend-rust/webgpu_server"}
args = append(args, "-a", "0", "-t", "backend-rust/assets/rwkv_vocab_v20230424.json",
"--port", strconv.Itoa(port), "--ip", host)
args = append(args, "--port", strconv.Itoa(port), "--ip", host)
return Cmd(args...)
}
@@ -149,13 +148,12 @@ func (a *App) InstallPyDep(python string, cnMirror bool) (string, error) {
if runtime.GOOS == "windows" {
ChangeFileLine("./py310/python310._pth", 3, "Lib\\site-packages")
installScript := python + " ./backend-python/get-pip.py -i https://pypi.tuna.tsinghua.edu.cn/simple\n" +
python + " -m pip install torch==1.13.1 torchvision==0.14.1 torchaudio==0.13.1 --index-url https://download.pytorch.org/whl/cu117\n" +
python + " -m pip install -r ./backend-python/requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple\n" +
installScript := python + " ./backend-python/get-pip.py -i https://pypi.tuna.tsinghua.edu.cn/simple --no-warn-script-location\n" +
python + " -m pip install torch==1.13.1 torchvision==0.14.1 torchaudio==0.13.1 --index-url https://download.pytorch.org/whl/cu117 --no-warn-script-location\n" +
python + " -m pip install -r ./backend-python/requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple --no-warn-script-location\n" +
"exit"
if !cnMirror {
installScript = strings.Replace(installScript, " -i https://pypi.tuna.tsinghua.edu.cn/simple", "", -1)
installScript = strings.Replace(installScript, "requirements.txt", "requirements_versions.txt", -1)
}
err = os.WriteFile("./install-py-dep.bat", []byte(installScript), 0644)
if err != nil {

View File

@@ -18,20 +18,31 @@ parser.add_argument(
args = parser.parse_args()
def convert_file(
pt_filename: str,
sf_filename: str,
):
def rename_key(rename, name):
for k, v in rename.items():
if k in name:
name = name.replace(k, v)
return name
def convert_file(pt_filename: str, sf_filename: str, transpose_names=[], rename={}):
loaded = torch.load(pt_filename, map_location="cpu")
if "state_dict" in loaded:
loaded = loaded["state_dict"]
loaded = {k: v.clone().half() for k, v in loaded.items()}
for k, v in loaded.items():
print(f"{k}\t{v.shape}\t{v.dtype}")
# for k, v in loaded.items():
# print(f'{k}\t{v.shape}\t{v.dtype}')
# For tensors to be contiguous
loaded = {k: v.contiguous() for k, v in loaded.items()}
for k, v in loaded.items():
for transpose_name in transpose_names:
if transpose_name in k:
loaded[k] = v.transpose(0, 1)
loaded = {rename_key(rename, k).lower(): v.contiguous() for k, v in loaded.items()}
for k, v in loaded.items():
print(f"{k}\t{v.shape}\t{v.dtype}")
dirname = os.path.dirname(sf_filename)
os.makedirs(dirname, exist_ok=True)
@@ -46,7 +57,12 @@ def convert_file(
if __name__ == "__main__":
try:
convert_file(args.input, args.output)
convert_file(
args.input,
args.output,
["lora_A"],
{"time_faaaa": "time_first", "lora_A": "lora.0", "lora_B": "lora.1"},
)
print(f"Saved to {args.output}")
except Exception as e:
with open("error.txt", "w") as f:

View File

@@ -1,3 +1,5 @@
import multipart
import fitz
import safetensors
import midi2audio
import mido
@@ -9,6 +11,7 @@ import GPUtil
import torch
import rwkv
import langchain
import numpy
import tokenizers
import fastapi

View File

@@ -2,10 +2,12 @@ import time
start_time = time.time()
import setuptools # avoid warnings
import os
import sys
import argparse
from typing import Sequence
from contextlib import asynccontextmanager
sys.path.append(os.path.dirname(os.path.realpath(__file__)))
@@ -18,10 +20,17 @@ from utils.rwkv import *
from utils.torch import *
from utils.ngrok import *
from utils.log import log_middleware
from routes import completion, config, state_cache, midi, misc
from routes import completion, config, state_cache, midi, misc, file_process
import global_var
app = FastAPI(dependencies=[Depends(log_middleware)])
@asynccontextmanager
async def lifespan(app: FastAPI):
init()
yield
app = FastAPI(lifespan=lifespan, dependencies=[Depends(log_middleware)])
app.add_middleware(
CORSMiddleware,
@@ -34,11 +43,11 @@ app.add_middleware(
app.include_router(completion.router)
app.include_router(config.router)
app.include_router(midi.router)
app.include_router(file_process.router)
app.include_router(misc.router)
app.include_router(state_cache.router)
@app.on_event("startup")
def init():
global_var.init()
cmd_params = os.environ["RWKV_RUNNER_PARAMS"]
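For context, this hunk migrates startup logic from FastAPI's deprecated `@app.on_event("startup")` hook to the lifespan pattern. Reduced to its essentials, the pattern looks like this (with `init` standing in for the startup routine defined in main.py):

from contextlib import asynccontextmanager
from fastapi import FastAPI

def init():
    pass  # startup work; main.py initializes global_var here

@asynccontextmanager
async def lifespan(app: FastAPI):
    init()  # runs once before the app starts serving requests
    yield   # the application serves while suspended here
    # optional shutdown/cleanup code would go after the yield

app = FastAPI(lifespan=lifespan)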

Binary file not shown.

View File

@@ -25,32 +25,46 @@ class Role(Enum):
class Message(BaseModel):
role: Role
content: str = Field(min_length=1)
content: str = Field(min_length=0)
raw: bool = Field(False, description="Whether to treat content as raw text")
default_stop = [
"\n\nUser",
"\n\nQuestion",
"\n\nQ",
"\n\nHuman",
"\n\nBob",
]
class ChatCompletionBody(ModelConfigBody):
messages: Union[List[Message], None]
model: str = "rwkv"
model: Union[str, None] = "rwkv"
stream: bool = False
stop: Union[str, List[str], None] = [
"\n\nUser",
"\n\nQuestion",
"\n\nQ",
"\n\nHuman",
"\n\nBob",
]
user_name: Union[str, None] = None
assistant_name: Union[str, None] = None
stop: Union[str, List[str], None] = default_stop
user_name: Union[str, None] = Field(
None, description="Internal user name", min_length=1
)
assistant_name: Union[str, None] = Field(
None, description="Internal assistant name", min_length=1
)
presystem: bool = Field(
True, description="Whether to insert default system prompt at the beginning"
)
class Config:
schema_extra = {
json_schema_extra = {
"example": {
"messages": [{"role": Role.User.value, "content": "hello"}],
"messages": [
{"role": Role.User.value, "content": "hello", "raw": False}
],
"model": "rwkv",
"stream": False,
"stop": None,
"user_name": None,
"assistant_name": None,
"presystem": True,
"max_tokens": 1000,
"temperature": 1.2,
"top_p": 0.5,
@@ -62,12 +76,12 @@ class ChatCompletionBody(ModelConfigBody):
class CompletionBody(ModelConfigBody):
prompt: Union[str, List[str], None]
model: str = "rwkv"
model: Union[str, None] = "rwkv"
stream: bool = False
stop: Union[str, List[str], None] = None
class Config:
schema_extra = {
json_schema_extra = {
"example": {
"prompt": "The following is an epic science fiction masterpiece that is immortalized, "
+ "with delicate descriptions and grand depictions of interstellar civilization wars.\nChapter 1.\n",
@@ -233,10 +247,6 @@ async def chat_completions(body: ChatCompletionBody, request: Request):
if body.messages is None or body.messages == []:
raise HTTPException(status.HTTP_400_BAD_REQUEST, "messages not found")
basic_system: str = ""
if body.messages[0].role == Role.System:
basic_system = body.messages[0].content
interface = model.interface
user = model.user if body.user_name is None else body.user_name
bot = model.bot if body.assistant_name is None else body.assistant_name
@@ -244,46 +254,54 @@ async def chat_completions(body: ChatCompletionBody, request: Request):
is_raven = model.rwkv_type == RWKVType.Raven
completion_text: str = ""
if basic_system == "":
completion_text = (
f"""
basic_system: Union[str, None] = None
if body.presystem:
if body.messages[0].role == Role.System:
basic_system = body.messages[0].content
if basic_system is None:
completion_text = (
f"""
The following is a coherent verbose detailed conversation between a girl named {bot} and her friend {user}. \
{bot} is very intelligent, creative and friendly. \
{bot} is unlikely to disagree with {user}, and {bot} doesn't like to ask {user} questions. \
{bot} likes to tell {user} a lot about herself and her opinions. \
{bot} usually gives {user} kind, helpful and informative advices.\n
"""
if is_raven
else (
f"{user}{interface} hi\n\n{bot}{interface} Hi. "
+ "I am your assistant and I will provide expert full response in full details. Please feel free to ask any question and I will always answer it.\n\n"
)
)
elif basic_system != "":
completion_text = (
(
f"The following is a coherent verbose detailed conversation between a girl named {bot} and her friend {user}. "
if is_raven
else f"{user}{interface} hi\n\n{bot}{interface} Hi. "
else (
f"{user}{interface} hi\n\n{bot}{interface} Hi. "
+ "I am your assistant and I will provide expert full response in full details. Please feel free to ask any question and I will always answer it.\n\n"
)
)
else:
if not body.messages[0].raw:
basic_system = (
basic_system.replace("\r\n", "\n")
.replace("\r", "\n")
.replace("\n\n", "\n")
.replace("\n", " ")
.strip()
)
completion_text = (
(
f"The following is a coherent verbose detailed conversation between a girl named {bot} and her friend {user}. "
if is_raven
else f"{user}{interface} hi\n\n{bot}{interface} Hi. "
)
+ basic_system.replace("You are", f"{bot} is" if is_raven else "I am")
.replace("you are", f"{bot} is" if is_raven else "I am")
.replace("You're", f"{bot} is" if is_raven else "I'm")
.replace("you're", f"{bot} is" if is_raven else "I'm")
.replace("You", f"{bot}" if is_raven else "I")
.replace("you", f"{bot}" if is_raven else "I")
.replace("Your", f"{bot}'s" if is_raven else "My")
.replace("your", f"{bot}'s" if is_raven else "my")
.replace("", f"{bot}" if is_raven else "")
+ "\n\n"
)
+ basic_system.replace("\r\n", "\n")
.replace("\r", "\n")
.replace("\n\n", "\n")
.replace("\n", " ")
.strip()
.replace("You are", f"{bot} is" if is_raven else "I am")
.replace("you are", f"{bot} is" if is_raven else "I am")
.replace("You're", f"{bot} is" if is_raven else "I'm")
.replace("you're", f"{bot} is" if is_raven else "I'm")
.replace("You", f"{bot}" if is_raven else "I")
.replace("you", f"{bot}" if is_raven else "I")
.replace("Your", f"{bot}'s" if is_raven else "My")
.replace("your", f"{bot}'s" if is_raven else "my")
.replace("", f"{bot}" if is_raven else "")
+ "\n\n"
)
for message in body.messages[(0 if basic_system == "" else 1) :]:
for message in body.messages[(0 if basic_system is None else 1) :]:
append_message: str = ""
if message.role == Role.User:
append_message = f"{user}{interface} " + message.content
@@ -291,20 +309,25 @@ The following is a coherent verbose detailed conversation between a girl named {
append_message = f"{bot}{interface} " + message.content
elif message.role == Role.System:
append_message = message.content
completion_text += (
append_message.replace("\r\n", "\n")
.replace("\r", "\n")
.replace("\n\n", "\n")
.strip()
+ "\n\n"
)
if not message.raw:
append_message = (
append_message.replace("\r\n", "\n")
.replace("\r", "\n")
.replace("\n\n", "\n")
.strip()
)
completion_text += append_message + "\n\n"
completion_text += f"{bot}{interface}"
user_code = model.pipeline.decode([model.pipeline.encode(user)[0]])
bot_code = model.pipeline.decode([model.pipeline.encode(bot)[0]])
if type(body.stop) == str:
body.stop = [body.stop, f"\n\n{user}", f"\n\n{bot}"]
else:
body.stop.append(f"\n\n{user}")
body.stop.append(f"\n\n{bot}")
body.stop = [body.stop, f"\n\n{user_code}", f"\n\n{bot_code}"]
elif type(body.stop) == list:
body.stop.append(f"\n\n{user_code}")
body.stop.append(f"\n\n{bot_code}")
elif body.stop is None:
body.stop = default_stop
if body.stream:
return EventSourceResponse(
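One likely motivation for the first-token decode in the stop-string handling above: stop matching runs against incrementally decoded text, and a multi-token name only appears in full after all of its tokens are generated, so cutting on the decoded first token catches the speaker prefix as early as possible. A rough sketch of that derivation, assuming a `pipeline` object exposing `encode`/`decode` as in the diff:

# Derive stop strings from the first token of each speaker name, so custom
# user_name/assistant_name values still terminate generation promptly.
def speaker_stops(pipeline, user: str, bot: str) -> list[str]:
    user_code = pipeline.decode([pipeline.encode(user)[0]])
    bot_code = pipeline.decode([pipeline.encode(bot)[0]])
    return [f"\n\n{user_code}", f"\n\n{bot_code}"]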
@@ -349,12 +372,12 @@ async def completions(body: CompletionBody, request: Request):
class EmbeddingsBody(BaseModel):
input: Union[str, List[str], List[List[int]], None]
model: str = "rwkv"
model: Union[str, None] = "rwkv"
encoding_format: str = None
fast_mode: bool = False
class Config:
schema_extra = {
json_schema_extra = {
"example": {
"input": "a big apple",
"model": "rwkv",

View File

@@ -10,32 +10,18 @@ import global_var
router = APIRouter()
def get_tokens_path(model_path: str):
model_path = model_path.lower()
tokenizer_dir = f"{pathlib.Path(__file__).parent.parent.resolve()}/rwkv_pip/"
default_tokens_path = tokenizer_dir + "20B_tokenizer.json"
if "raven" in model_path:
return default_tokens_path
elif "world" in model_path:
return "rwkv_vocab_v20230424"
elif "midi" in model_path:
return tokenizer_dir + "tokenizer-midi.json"
else:
return default_tokens_path
class SwitchModelBody(BaseModel):
model: str
strategy: str
tokenizer: Union[str, None] = None
customCuda: bool = False
class Config:
schema_extra = {
json_schema_extra = {
"example": {
"model": "models/RWKV-4-World-3B-v1-20230619-ctx4096.pth",
"strategy": "cuda fp16",
"tokenizer": None,
"customCuda": False,
}
}
@@ -68,17 +54,7 @@ def switch_model(body: SwitchModelBody, response: Response, request: Request):
try:
global_var.set(
global_var.Model,
TextRWKV(
model=body.model,
strategy=body.strategy,
tokens_path=get_tokens_path(body.model),
)
if "midi" not in body.model.lower()
else MusicRWKV(
model=body.model,
strategy=body.strategy,
tokens_path=get_tokens_path(body.model),
),
RWKV(model=body.model, strategy=body.strategy, tokenizer=body.tokenizer),
)
except Exception as e:
print(e)

View File

@@ -0,0 +1,79 @@
import os
from fastapi import (
APIRouter,
HTTPException,
status,
Depends,
File,
UploadFile,
)
from pydantic import BaseModel
from typing import Iterator
router = APIRouter()
class FileToTextParams(BaseModel):
file_name: str
file_encoding: str = "utf-8"
@router.post("/file-to-text", tags=["File Process"])
async def file_to_text(
params: FileToTextParams = Depends(), file_data: UploadFile = File(...)
):
from langchain.schema import Document
from langchain.document_loaders.blob_loaders import Blob
# from langchain
def parse_text(blob: Blob) -> Iterator[Document]:
yield Document(page_content=blob.as_string(), metadata={"source": blob.source})
# from langchain
def parse_pdf(blob: Blob) -> Iterator[Document]:
import fitz
with blob.as_bytes_io() as stream:
doc = fitz.Document(stream=stream)
yield from [
Document(
page_content=page.get_text(),
metadata=dict(
{
"source": blob.source,
"file_path": blob.source,
"page": page.number,
"total_pages": len(doc),
},
**{
k: doc.metadata[k]
for k in doc.metadata
if type(doc.metadata[k]) in [str, int]
},
),
)
for page in doc
]
file_parsers = {".txt": parse_text, ".pdf": parse_pdf}
file_name = file_data.filename or params.file_name
file_ext = os.path.splitext(file_name)[-1]
if file_ext not in file_parsers:
raise HTTPException(status.HTTP_400_BAD_REQUEST, "file type not supported")
try:
pages: Iterator[Document] = file_parsers[file_ext](
Blob.from_data(
await file_data.read(),
encoding=params.file_encoding,
path=file_name,
)
)
pages = list(pages)
except Exception as e:
raise HTTPException(status.HTTP_400_BAD_REQUEST, f"{e}")
return {"pages": pages}
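A hedged usage sketch for the new route: with `Depends()`, FastAPI reads `FileToTextParams` from the query string, while the document itself travels as multipart form data. The port is an assumption.

# Upload a document to /file-to-text; .txt and .pdf are the supported
# extensions per file_parsers above.
import requests

with open("sample.pdf", "rb") as f:
    resp = requests.post(
        "http://127.0.0.1:8000/file-to-text",
        params={"file_name": "sample.pdf", "file_encoding": "utf-8"},
        files={"file_data": f},
    )
print(resp.json()["pages"])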

View File

@@ -12,7 +12,7 @@ class TextToMidiBody(BaseModel):
text: str
class Config:
schema_extra = {
json_schema_extra = {
"example": {
"text": "p:24:a p:2a:a p:31:a p:39:a p:3b:a p:45:a b:26:a g:3e:a g:3e:a g:42:a g:42:a g:45:a g:45:a pi:3e:a pi:42:a pi:45:a t14 p:24:0 p:2a:0 p:31:0 p:39:0 p:3b:0 p:45:0 t2 p:2a:a p:3b:a p:45:a t14 p:2a:0 p:3b:0 p:45:0 b:26:0 g:3e:0 g:3e:0 g:42:0 g:42:0 g:45:0 g:45:0 pi:3e:0 pi:42:0 pi:45:0 t2 p:2e:a p:3b:a p:45:a b:26:a g:3e:a g:3e:a g:42:a g:42:a g:45:a g:45:a pi:3e:a pi:42:a pi:45:a t14 p:2e:0 p:3b:0 p:45:0 g:3e:0 g:3e:0 g:42:0 g:42:0 g:45:0 g:45:0 pi:3e:0 pi:42:0 pi:45:0 t2 p:2e:a p:3b:a p:45:a g:3e:a g:3e:a g:42:a g:42:a g:45:a g:45:a pi:3e:a pi:42:a pi:45:a t14 p:2e:0 p:3b:0 p:45:0 b:26:0 g:3e:0 g:3e:0 g:42:0 g:42:0 g:45:0 g:45:0 pi:3e:0 pi:42:0 pi:45:0 t2 p:26:a p:2a:a p:3b:a p:45:a t14 p:26:0 p:2a:0 p:3b:0 p:45:0 t2 p:2a:a p:3b:a p:45:a b:26:a g:3e:a g:3e:a g:42:a g:42:a g:45:a g:45:a pi:3e:a pi:42:a pi:45:a t14 p:2a:0 p:3b:0 p:45:0 b:26:0 t2 p:24:a p:2a:a p:3b:a p:45:a b:2d:a t14 p:24:0 p:2a:0 p:3b:0 p:45:0 b:2d:0 g:3e:0 g:3e:0 g:42:0 g:42:0 g:45:0 g:45:0 pi:3e:0 pi:42:0 pi:45:0 t2 p:24:a p:2a:a p:3b:a p:45:a b:21:a g:39:a g:39:a g:3d:a g:3d:a g:40:a g:40:a pi:39:a pi:3d:a pi:40:a t14 p:24:0 p:2a:0 p:3b:0 p:45:0 t2 p:2a:a p:3b:a p:45:a t14 p:2a:0 p:3b:0 p:45:0 b:21:0 g:39:0 g:39:0 g:3d:0 g:3d:0 g:40:0 g:40:0 pi:39:0 pi:3d:0 pi:40:0 t2 p:24:a p:2e:a p:3b:a p:45:a b:21:a g:39:a g:39:a g:3d:a g:3d:a g:40:a g:40:a pi:39:a pi:3d:a pi:40:a t14 p:24:0 p:2e:0 p:3b:0 p:45:0 b:21:0 g:39:0 g:39:0 g:3d:0 g:3d:0 g:40:0 g:40:0 pi:39:0 pi:3d:0 pi:40:0 t2 p:24:a p:2a:a p:3b:a p:45:a b:21:a g:39:a g:39:a g:3d:a g:3d:a g:40:a g:40:a pi:39:a pi:3d:a pi:40:a t14 p:24:0 p:2a:0 p:3b:0 p:45:0 t2 p:2a:a p:3b:a p:45:a t14 p:2a:0 p:3b:0 p:45:0 b:21:0 g:39:0 g:39:0 g:3d:0 g:3d:0 g:40:0 g:40:0 pi:39:0 pi:3d:0 pi:40:0 t2 p:26:a p:2a:a p:3b:a p:45:a b:21:a g:39:a g:39:a g:3d:a g:3d:a g:40:a g:40:a pi:39:a pi:3d:a pi:40:a t14 p:26:0 p:2a:0 p:3b:0 p:45:0 t2 p:2a:a p:3b:a p:45:a t14 p:2a:0 p:3b:0 p:45:0 b:21:0 g:39:0 g:39:0 g:3d:0 g:3d:0 g:40:0 g:40:0 pi:39:0 pi:3d:0 pi:40:0 t2 p:26:a p:2e:a p:31:a p:39:a p:3b:a p:45:a b:21:a g:39:a g:39:a g:3d:a g:3d:a g:40:a g:40:a pi:39:a pi:3d:a pi:40:a t14 p:26:0 p:2e:0 p:31:0 p:39:0 p:3b:0 p:45:0 b:21:0 t2 p:26:a p:2e:a p:31:a p:39:a p:3b:a p:45:a b:21:a t14 p:26:0 p:2e:0 p:31:0 p:39:0 p:3b:0 p:45:0 b:21:0 g:39:0 g:39:0 g:3d:0 g:3d:0 g:40:0 g:40:0 pi:39:0 pi:3d:0 pi:40:0 t2 p:24:a p:2a:a p:31:a p:39:a p:3b:a p:45:a b:1f:a g:3b:a g:3b:a g:3e:a g:3e:a g:43:a g:43:a pi:3b:a pi:3e:a pi:43:a t14 p:24:0 p:2a:0 p:31:0 p:39:0 p:3b:0 p:45:0 t2 p:2a:a p:3b:a p:45:a t14 p:2a:0 p:3b:0 p:45:0 b:1f:0 g:3b:0 g:3b:0 g:3e:0 g:3e:0 g:43:0 g:43:0 pi:3b:0 pi:3e:0 pi:43:0 t2 p:2e:a p:3b:a p:45:a b:1f:a g:3b:a g:3b:a g:3e:a g:3e:a g:43:a g:43:a pi:3b:a pi:3e:a pi:43:a t14 p:2e:0 p:3b:0 p:45:0 g:3b:0 g:3b:0 g:3e:0 g:3e:0 g:43:0 g:43:0 pi:3b:0 pi:3e:0 pi:43:0 t2 p:2e:a p:3b:a p:45:a g:3b:a g:3b:a g:3e:a g:3e:a g:43:a g:43:a pi:3b:a pi:3e:a pi:43:a t14 p:2e:0 p:3b:0 p:45:0 b:1f:0 g:3b:0 g:3b:0 g:3e:0 g:3e:0 g:43:0 g:43:0 pi:3b:0 pi:3e:0 pi:43:0 t2 p:26:a p:2a:a p:3b:a p:45:a t14 p:26:0 p:2a:0 p:3b:0 p:45:0 t2 p:2a:a p:3b:a p:45:a b:1f:a g:3b:a g:3b:a g:3e:a g:3e:a g:43:a g:43:a pi:3b:a pi:3e:a pi:43:a t14 p:2a:0 p:3b:0 p:45:0 b:1f:0 t2 p:24:a p:2a:a p:3b:a p:45:a b:1f:a t14 p:24:0 p:2a:0 p:3b:0 p:45:0 b:1f:0 g:3b:0 g:3b:0 g:3e:0 g:3e:0 g:43:0 g:43:0 pi:3b:0 pi:3e:0 pi:43:0 t2 p:24:a p:2e:a p:3b:a p:45:a b:26:a g:39:a g:39:a g:3e:a g:3e:a g:42:a g:42:a pi:39:a pi:3e:a pi:42:a t14 p:24:0 p:2e:0 p:3b:0 p:45:0 t2 p:2a:a p:3b:a p:45:a t14 p:2a:0 p:3b:0",
}
@@ -36,7 +36,7 @@ class TxtToMidiBody(BaseModel):
midi_path: str
class Config:
schema_extra = {
json_schema_extra = {
"example": {
"txt_path": "midi/sample.txt",
"midi_path": "midi/sample.mid",
@@ -66,7 +66,7 @@ class MidiToWavBody(BaseModel):
sound_font_path: str = "assets/default_sound_font.sf2"
class Config:
schema_extra = {
json_schema_extra = {
"example": {
"midi_path": "midi/sample.mid",
"wav_path": "midi/sample.wav",
@@ -96,7 +96,7 @@ class TextToWavBody(BaseModel):
sound_font_path: str = "assets/default_sound_font.sf2"
class Config:
schema_extra = {
json_schema_extra = {
"example": {
"text": "p:24:a p:2a:a p:31:a p:39:a p:3b:a p:45:a b:26:a g:3e:a g:3e:a g:42:a g:42:a g:45:a g:45:a pi:3e:a pi:42:a pi:45:a t14 p:24:0 p:2a:0 p:31:0 p:39:0 p:3b:0 p:45:0 t2 p:2a:a p:3b:a p:45:a t14 p:2a:0 p:3b:0 p:45:0 b:26:0 g:3e:0 g:3e:0 g:42:0 g:42:0 g:45:0 g:45:0 pi:3e:0 pi:42:0 pi:45:0 t2 p:2e:a p:3b:a p:45:a b:26:a g:3e:a g:3e:a g:42:a g:42:a g:45:a g:45:a pi:3e:a pi:42:a pi:45:a t14 p:2e:0 p:3b:0 p:45:0 g:3e:0 g:3e:0 g:42:0 g:42:0 g:45:0 g:45:0 pi:3e:0 pi:42:0 pi:45:0 t2 p:2e:a p:3b:a p:45:a g:3e:a g:3e:a g:42:a g:42:a g:45:a g:45:a pi:3e:a pi:42:a pi:45:a t14 p:2e:0 p:3b:0 p:45:0 b:26:0 g:3e:0 g:3e:0 g:42:0 g:42:0 g:45:0 g:45:0 pi:3e:0 pi:42:0 pi:45:0 t2 p:26:a p:2a:a p:3b:a p:45:a t14 p:26:0 p:2a:0 p:3b:0 p:45:0 t2 p:2a:a p:3b:a p:45:a b:26:a g:3e:a g:3e:a g:42:a g:42:a g:45:a g:45:a pi:3e:a pi:42:a pi:45:a t14 p:2a:0 p:3b:0 p:45:0 b:26:0 t2 p:24:a p:2a:a p:3b:a p:45:a b:2d:a t14 p:24:0 p:2a:0 p:3b:0 p:45:0 b:2d:0 g:3e:0 g:3e:0 g:42:0 g:42:0 g:45:0 g:45:0 pi:3e:0 pi:42:0 pi:45:0 t2 p:24:a p:2a:a p:3b:a p:45:a b:21:a g:39:a g:39:a g:3d:a g:3d:a g:40:a g:40:a pi:39:a pi:3d:a pi:40:a t14 p:24:0 p:2a:0 p:3b:0 p:45:0 t2 p:2a:a p:3b:a p:45:a t14 p:2a:0 p:3b:0 p:45:0 b:21:0 g:39:0 g:39:0 g:3d:0 g:3d:0 g:40:0 g:40:0 pi:39:0 pi:3d:0 pi:40:0 t2 p:24:a p:2e:a p:3b:a p:45:a b:21:a g:39:a g:39:a g:3d:a g:3d:a g:40:a g:40:a pi:39:a pi:3d:a pi:40:a t14 p:24:0 p:2e:0 p:3b:0 p:45:0 b:21:0 g:39:0 g:39:0 g:3d:0 g:3d:0 g:40:0 g:40:0 pi:39:0 pi:3d:0 pi:40:0 t2 p:24:a p:2a:a p:3b:a p:45:a b:21:a g:39:a g:39:a g:3d:a g:3d:a g:40:a g:40:a pi:39:a pi:3d:a pi:40:a t14 p:24:0 p:2a:0 p:3b:0 p:45:0 t2 p:2a:a p:3b:a p:45:a t14 p:2a:0 p:3b:0 p:45:0 b:21:0 g:39:0 g:39:0 g:3d:0 g:3d:0 g:40:0 g:40:0 pi:39:0 pi:3d:0 pi:40:0 t2 p:26:a p:2a:a p:3b:a p:45:a b:21:a g:39:a g:39:a g:3d:a g:3d:a g:40:a g:40:a pi:39:a pi:3d:a pi:40:a t14 p:26:0 p:2a:0 p:3b:0 p:45:0 t2 p:2a:a p:3b:a p:45:a t14 p:2a:0 p:3b:0 p:45:0 b:21:0 g:39:0 g:39:0 g:3d:0 g:3d:0 g:40:0 g:40:0 pi:39:0 pi:3d:0 pi:40:0 t2 p:26:a p:2e:a p:31:a p:39:a p:3b:a p:45:a b:21:a g:39:a g:39:a g:3d:a g:3d:a g:40:a g:40:a pi:39:a pi:3d:a pi:40:a t14 p:26:0 p:2e:0 p:31:0 p:39:0 p:3b:0 p:45:0 b:21:0 t2 p:26:a p:2e:a p:31:a p:39:a p:3b:a p:45:a b:21:a t14 p:26:0 p:2e:0 p:31:0 p:39:0 p:3b:0 p:45:0 b:21:0 g:39:0 g:39:0 g:3d:0 g:3d:0 g:40:0 g:40:0 pi:39:0 pi:3d:0 pi:40:0 t2 p:24:a p:2a:a p:31:a p:39:a p:3b:a p:45:a b:1f:a g:3b:a g:3b:a g:3e:a g:3e:a g:43:a g:43:a pi:3b:a pi:3e:a pi:43:a t14 p:24:0 p:2a:0 p:31:0 p:39:0 p:3b:0 p:45:0 t2 p:2a:a p:3b:a p:45:a t14 p:2a:0 p:3b:0 p:45:0 b:1f:0 g:3b:0 g:3b:0 g:3e:0 g:3e:0 g:43:0 g:43:0 pi:3b:0 pi:3e:0 pi:43:0 t2 p:2e:a p:3b:a p:45:a b:1f:a g:3b:a g:3b:a g:3e:a g:3e:a g:43:a g:43:a pi:3b:a pi:3e:a pi:43:a t14 p:2e:0 p:3b:0 p:45:0 g:3b:0 g:3b:0 g:3e:0 g:3e:0 g:43:0 g:43:0 pi:3b:0 pi:3e:0 pi:43:0 t2 p:2e:a p:3b:a p:45:a g:3b:a g:3b:a g:3e:a g:3e:a g:43:a g:43:a pi:3b:a pi:3e:a pi:43:a t14 p:2e:0 p:3b:0 p:45:0 b:1f:0 g:3b:0 g:3b:0 g:3e:0 g:3e:0 g:43:0 g:43:0 pi:3b:0 pi:3e:0 pi:43:0 t2 p:26:a p:2a:a p:3b:a p:45:a t14 p:26:0 p:2a:0 p:3b:0 p:45:0 t2 p:2a:a p:3b:a p:45:a b:1f:a g:3b:a g:3b:a g:3e:a g:3e:a g:43:a g:43:a pi:3b:a pi:3e:a pi:43:a t14 p:2a:0 p:3b:0 p:45:0 b:1f:0 t2 p:24:a p:2a:a p:3b:a p:45:a b:1f:a t14 p:24:0 p:2a:0 p:3b:0 p:45:0 b:1f:0 g:3b:0 g:3b:0 g:3e:0 g:3e:0 g:43:0 g:43:0 pi:3b:0 pi:3e:0 pi:43:0 t2 p:24:a p:2e:a p:3b:a p:45:a b:26:a g:39:a g:39:a g:3e:a g:3e:a g:42:a g:42:a pi:39:a pi:3e:a pi:42:a t14 p:24:0 p:2e:0 p:3b:0 p:45:0 t2 p:2a:a p:3b:a p:45:a t14 p:2a:0 p:3b:0",
"wav_name": "sample",

View File

@@ -96,7 +96,7 @@ def add_state(body: AddStateBody):
quick_log(
None,
None,
f"New Trie Id: {id}\nTrie Len: {len(trie)}\nTrie Buff Size: {trie.buff_size()}\nDtrie Buff Size Of Id: {_get_a_dtrie_buff_size(dtrie[id])}",
f"New Trie Id: {id}\nTrie Len: {len(trie)}\nTrie Buff Size: {trie.buff_size()}\nDtrie Buff Size Of Id: {__get_a_dtrie_buff_size(dtrie[id])}",
)
return "success"
except Exception as e:
@@ -124,7 +124,7 @@ class LongestPrefixStateBody(BaseModel):
prompt: str
def _get_a_dtrie_buff_size(dtrie_v):
def __get_a_dtrie_buff_size(dtrie_v):
# print(sys.getsizeof(dtrie_v["tokens"][0])) # str
# print(sys.getsizeof(dtrie_v["tokens"][0]) * len(dtrie_v["tokens"]))
# print(dtrie_v["state"][0][0].element_size())

View File

@@ -88,7 +88,7 @@ struct Mix {
using torch::Tensor;
void gemm_fp16_cublas(Tensor a, Tensor b, Tensor c);
void gemm_fp16_cublas_tensor(Tensor a, Tensor b, Tensor c);
Tensor att_one(Tensor x, Tensor ln_w, Tensor ln_b, Tensor sx, Tensor k_mix,
Tensor v_mix, Tensor r_mix, Tensor kw,
@@ -105,9 +105,9 @@ Tensor att_one(Tensor x, Tensor ln_w, Tensor ln_b, Tensor sx, Tensor k_mix,
data_ptr<half>(vx), data_ptr<half>(rx)},
x.numel());
gemm_fp16_cublas(kx, kw, k);
gemm_fp16_cublas(vx, vw, v);
gemm_fp16_cublas(rx, rw, r);
gemm_fp16_cublas_tensor(kx, kw, k);
gemm_fp16_cublas_tensor(vx, vw, v);
gemm_fp16_cublas_tensor(rx, rw, r);
at::sigmoid_(r);
element_wise(WkvForwardOne{data_ptr<float>(t_first), data_ptr<float>(k),
@@ -118,7 +118,7 @@ Tensor att_one(Tensor x, Tensor ln_w, Tensor ln_b, Tensor sx, Tensor k_mix,
data_ptr<half>(r)},
x.numel());
gemm_fp16_cublas(r, ow, x_plus_out);
gemm_fp16_cublas_tensor(r, ow, x_plus_out);
x_plus_out += x;
return xx;
}

View File

@@ -0,0 +1,109 @@
#include "ATen/ATen.h"
#include <cuda_fp16.h>
#include <cuda_runtime.h>
#include <torch/extension.h>
#include "element_wise.h"
#include "util.h"
// Equivalent Python code:
// s1 = t_first * a + s
// s2 = a + t_decay * s
struct Fused1 {
const float *t_first;
const float *t_decay;
const float *a;
const float *s;
const int32_t inner_size;
/* out */ float *s1;
/* out */ float *s2;
__device__ void operator()(int i) const {
const int j = i / inner_size;
s1[i] = t_first[j] * a[i] + s[i];
s2[i] = a[i] + t_decay[j] * s[i];
}
};
/*
Equivalent Python code:
kx = xx * k_mix + sx * (1 - k_mix)
vx = xx * v_mix + sx * (1 - v_mix)
rx = xx * r_mix + sx * (1 - r_mix)
*/
struct Mix {
const half *xx;
const half *sx;
const half *k_mix;
const half *v_mix;
const half *r_mix;
/* out */ half *kx;
/* out */ half *vx;
/* out */ half *rx;
__device__ void operator()(int i) const {
half xx_ = xx[i];
half sx_ = sx[i];
half k_mix_ = k_mix[i];
half v_mix_ = v_mix[i];
half r_mix_ = r_mix[i];
kx[i] = __hadd(__hmul(xx_, k_mix_),
__hmul(sx_, __hsub(__float2half(1), k_mix_)));
vx[i] = __hadd(__hmul(xx_, v_mix_),
__hmul(sx_, __hsub(__float2half(1), v_mix_)));
rx[i] = __hadd(__hmul(xx_, r_mix_),
__hmul(sx_, __hsub(__float2half(1), r_mix_)));
}
};
using torch::Tensor;
void gemm_fp16_cublas_tensor(Tensor a, Tensor b, Tensor c);
Tensor att_one_v5(Tensor x, Tensor sx, Tensor s, Tensor ln_w, Tensor ln_b,
Tensor lx_w, Tensor lx_b, Tensor k_mix, Tensor v_mix,
Tensor r_mix, Tensor kw,
/* imm */ Tensor kx, Tensor vw, /* imm */ Tensor vx,
Tensor rw,
/* imm */ Tensor rx, Tensor ow, Tensor t_first,
/* imm */ Tensor k, Tensor t_decay, /* imm */ Tensor v,
/* imm */ Tensor r, /* imm */ Tensor s1,
/* out */ Tensor x_plus_out, /* out */ Tensor s2) {
Tensor xx = at::layer_norm(x, {x.size(-1)}, ln_w, ln_b);
element_wise(Mix{data_ptr<half>(xx), data_ptr<half>(sx),
data_ptr<half>(k_mix), data_ptr<half>(v_mix),
data_ptr<half>(r_mix), data_ptr<half>(kx),
data_ptr<half>(vx), data_ptr<half>(rx)},
x.numel());
int H = t_decay.size(0);
int S = x.size(-1) / H;
gemm_fp16_cublas_tensor(rx, rw, r);
r = at::reshape(r, {H, 1, S});
gemm_fp16_cublas_tensor(kx, kw, k);
k = at::reshape(k, {H, S, 1});
gemm_fp16_cublas_tensor(vx, vw, v);
v = at::reshape(v, {H, 1, S});
{
Tensor a = at::matmul(k, v);
// s1 = t_first * a + s
// s2 = a + t_decay * s
element_wise(Fused1{data_ptr<float>(t_first), data_ptr<float>(t_decay),
data_ptr<float>(a), data_ptr<float>(s),
static_cast<int32_t>(a.size(1) * a.size(2)),
data_ptr<float>(s1), data_ptr<float>(s2)},
a.numel());
}
Tensor out = at::matmul(r, s1);
out = at::flatten(out);
out = at::squeeze(at::group_norm(at::unsqueeze(out, 0), H, lx_w, lx_b), 0);
out = at::_cast_Half(out);
gemm_fp16_cublas_tensor(out, ow, x_plus_out);
x_plus_out += x;
return xx;
}
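The `Fused1` functor above fuses the two state updates from its comment, broadcasting one scalar per head over that head's S-by-S state (`j = i / inner_size` recovers the head index). The same arithmetic in NumPy, with shapes taken from the surrounding code (`t_first`/`t_decay` reshaped to (H, 1, 1); `a`, `s`, `s1`, `s2` of shape (H, S, S)):

import numpy as np

H, S = 4, 8  # head count and head size; illustrative values
t_first = np.random.rand(H, 1, 1).astype(np.float32)
t_decay = np.random.rand(H, 1, 1).astype(np.float32)
a = np.random.rand(H, S, S).astype(np.float32)
s = np.random.rand(H, S, S).astype(np.float32)

# Per-head scalars broadcast across each (S, S) state slab.
s1 = t_first * a + s
s2 = a + t_decay * s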

View File

@@ -8,7 +8,6 @@
using torch::Tensor;
void gemm_fp16_cublas(Tensor a, Tensor b, Tensor c);
void gemm_fp16_cublas(const void *a, const void *b, void *c, int m,
int n, int k, bool output_fp32);

View File

@@ -70,11 +70,59 @@ void gemm_fp16_cublas(const void *a, const void *b, void *c, int ori_m,
cuda_c_data_type, cublas_ldc, compute_type, algo));
}
void gemm_fp16_cublas(torch::Tensor a, torch::Tensor b, torch::Tensor c) {
// compatible with rwkv one mode, 1-D tensor * 2-D tensor
const int m = a.dense_dim() == 1 ? 1 : a.size(0);
const int n = b.size(1);
const int k = b.size(0);
gemm_fp16_cublas(a.data_ptr(), b.data_ptr(), c.data_ptr(), m, n, k,
c.dtype() == torch::kFloat32);
/*
NOTE: blas gemm is column-major by default, but we need row-major output.
The raw data of a row-major, transposed matrix is exactly the same as that of
the column-major, non-transposed matrix, and C = A * B ---> C^T = B^T * A^T
*/
void gemm_fp16_cublas_tensor(torch::Tensor a, torch::Tensor b, torch::Tensor c) {
if (a.sizes().size() == 1) {
assert(b.sizes().size() == 2);
a = at::unsqueeze(a, 0);
}
const auto cuda_data_type = CUDA_R_16F;
const auto cuda_c_data_type =
c.dtype() == torch::kFloat32 ? CUDA_R_32F : CUDA_R_16F;
const auto compute_type = CUDA_R_32F;
const float sp_alpha = 1.f;
// swap a and b, and use CUBLAS_OP_N. see the notes above
std::swap(a, b);
const cublasOperation_t cublas_trans_a = CUBLAS_OP_N;
const cublasOperation_t cublas_trans_b = CUBLAS_OP_N;
// m = (B^T).size(0) = B.size(1), and = A.size(1) after swap,
// negative axis is used because of the existence of batch matmul.
const int m = a.size(-1);
const int k = a.size(-2);
const int n = b.size(-2);
const int cublas_lda = m;
const int cublas_ldb = k;
const int cublas_ldc = m;
cublasHandle_t cublas_handle = get_cublas_handle();
#if CUDA_VERSION >= 11000
cublasGemmAlgo_t algo = CUBLAS_GEMM_DEFAULT;
#else
cublasGemmAlgo_t algo = CUBLAS_GEMM_DFALT_TENSOR_OP;
#endif
const float sp_beta = 0.f;
if (a.sizes().size() == 2 && b.sizes().size() == 2) {
CUBLAS_CHECK(cublasGemmEx(
cublas_handle, cublas_trans_a, cublas_trans_b, m, n, k, &sp_alpha,
a.data_ptr(), cuda_data_type, cublas_lda, b.data_ptr(), cuda_data_type,
cublas_ldb, &sp_beta, c.data_ptr(), cuda_c_data_type, cublas_ldc,
compute_type, algo));
} else {
// batch matmul
assert(a.sizes().size() == 3 && b.sizes().size() == 3);
const long long int cublas_stride_a = m * k;
const long long int cublas_stride_b = k * n;
const long long int cublas_stride_c = m * n;
CUBLAS_CHECK(cublasGemmStridedBatchedEx(
cublas_handle, cublas_trans_a, cublas_trans_b, m,
n, k, &sp_alpha, a.data_ptr(), cuda_data_type, cublas_lda,
cublas_stride_a, b.data_ptr(), cuda_data_type, cublas_ldb, cublas_stride_b,
&sp_beta, c.data_ptr(), cuda_c_data_type, cublas_ldc, cublas_stride_c,
a.size(0), compute_type, algo));
}
}
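A quick NumPy check of the layout note above: the raw buffer of a row-major matrix, read column-major, is its transpose, so feeding row-major buffers to a column-major GEMM with the operands swapped and no transposition flags produces C = A * B already laid out row-major:

import numpy as np

A = np.random.rand(3, 4).astype(np.float32)
B = np.random.rand(4, 5).astype(np.float32)

# A column-major BLAS handed these row-major buffers effectively sees
# A^T and B^T; swapping the operands computes B^T @ A^T = (A @ B)^T in
# column-major, whose buffer is exactly A @ B in row-major.
C_colmajor = B.T @ A.T
assert np.allclose(C_colmajor.T, A @ B)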

View File

@@ -118,7 +118,9 @@ void mm8_one(int64_t N, int64_t M,
using torch::Tensor;
void gemm_fp16_cublas(Tensor a, Tensor b, Tensor c);
#ifndef DISABLE_CUBLAS_GEMM
void gemm_fp16_cublas_tensor(Tensor a, Tensor b, Tensor c);
#endif
Tensor att_one(Tensor x, Tensor ln_w, Tensor ln_b, Tensor sx, Tensor k_mix,
Tensor v_mix, Tensor r_mix, Tensor kw,
@@ -134,6 +136,16 @@ Tensor att_seq(Tensor x, Tensor sx, Tensor ln_w, Tensor ln_b, Tensor k_mix,
Tensor ow, Tensor t_first, Tensor pp, Tensor aa, Tensor bb,
Tensor t_decay, /* imm */ Tensor buf, /* out */ Tensor x_plus_out);
Tensor att_one_v5(Tensor x, Tensor sx, Tensor s, Tensor ln_w, Tensor ln_b,
Tensor lx_w, Tensor lx_b, Tensor k_mix, Tensor v_mix,
Tensor r_mix, Tensor kw,
/* imm */ Tensor kx, Tensor vw, /* imm */ Tensor vx,
Tensor rw,
/* imm */ Tensor rx, Tensor ow, Tensor t_first,
/* imm */ Tensor k, Tensor t_decay, /* imm */ Tensor v,
/* imm */ Tensor r, /* imm */ Tensor s1,
/* out */ Tensor x_plus_out, /* out */ Tensor s2);
Tensor ffn_seq(Tensor x, Tensor sx, Tensor ln_w, Tensor ln_b, Tensor k_mix,
Tensor r_mix, Tensor kw, Tensor vw, Tensor rw,
/* imm */ Tensor buf,
@@ -148,8 +160,9 @@ PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
m.def("wkv_forward", &wkv_forward, "wkv forward");
m.def("mm8_seq", &mm8_seq, "mm8 seq");
m.def("mm8_one", &mm8_one, "mm8 one");
m.def("gemm_fp16_cublas", &gemm_fp16_cublas, "gemv fp16 cublas");
m.def("gemm_fp16_cublas", &gemm_fp16_cublas_tensor, "gemv fp16 cublas");
m.def("att_one", &att_one, "att one");
m.def("att_one_v5", &att_one_v5, "att one v5");
m.def("att_seq", &att_seq, "att seq");
m.def("ffn_seq", &ffn_seq, "ffn seq");
m.def("ffn_one", &ffn_one, "ffn one");
@@ -159,8 +172,9 @@ TORCH_LIBRARY(rwkv, m) {
m.def("wkv_forward", wkv_forward);
m.def("mm8_seq", mm8_seq);
m.def("mm8_one", mm8_one);
m.def("gemm_fp16_cublas", gemm_fp16_cublas);
m.def("gemm_fp16_cublas", gemm_fp16_cublas_tensor);
m.def("att_one", att_one);
m.def("att_one_v5", &att_one_v5);
m.def("att_seq", att_seq);
m.def("ffn_seq", ffn_seq);
m.def("ffn_one", ffn_one);

View File

@@ -3,7 +3,7 @@
########################################################################################################
from typing import Optional
import types, gc, os, time, re
import types, gc, os, time, re, platform
import torch
from torch.nn import functional as F
@@ -91,8 +91,10 @@ if os.environ.get("RWKV_CUDA_ON") == "1":
f"{current_path}/cuda/att_one.cu",
f"{current_path}/cuda/att_seq.cu",
f"{current_path}/cuda/ffn.cu",
f"{current_path}/cuda/att_one_v5.cu",
],
verbose=True,
extra_ldflags=["cublas.lib" if os.name == "nt" else ""],
extra_cuda_cflags=[
"-t 4",
"-std=c++17",
@@ -149,26 +151,40 @@ if os.environ.get("RWKV_CUDA_ON") == "1":
torch.ops.rwkv.mm8_one(N, M, x, w, mx, rx, my, ry, y)
return y.to(dtype=x.dtype)
else:
os.environ["RWKV_CUDA_ON"] = "0"
if os.environ.get("RWKV_CUDA_ON") == "1":
@MyStatic
def gemm(a, b, output_dtype: Optional[torch.dtype] = None):
if output_dtype is None:
output_dtype = a.dtype
if a.dtype == b.dtype == torch.float16 and a.device.type == "cuda":
assert len(b.shape) == 2
if len(a.shape) == 1:
assert len(b.shape) == 2
c = torch.empty((b.shape[-1],), dtype=output_dtype, device=a.device)
a = a.unsqueeze(0)
else:
c = torch.empty(
(a.shape[0], b.shape[-1]), dtype=output_dtype, device=a.device
)
assert len(a.shape) == len(b.shape)
assert len(a.shape) == 2 or len(a.shape) == 3
# torch.empty((*a.shape[:-1], b.shape[-1])) doesn't work with jit
if len(a.shape) == 2:
c = torch.empty(
(a.shape[0], b.shape[-1]), dtype=output_dtype, device=a.device
)
else:
c = torch.empty(
(a.shape[0], a.shape[1], b.shape[-1]),
dtype=output_dtype,
device=a.device,
)
torch.ops.rwkv.gemm_fp16_cublas(a, b, c)
return c
else:
return (a @ b).to(output_dtype)
else:
os.environ["RWKV_CUDA_ON"] = "0"
def gemm(a, b, output_dtype: Optional[torch.dtype] = None):
if output_dtype is None:
@@ -217,7 +233,7 @@ class RWKV(MyModule):
) # load model to CPU first
# it is supported to load a pure meta-tensor state dict (e.g. for quick testing)
for k, v in self.w.items():
if v.is_meta:
if isinstance(v, torch.Tensor) and v.is_meta:
# torch.zeros_like(v, device='cpu') doesn't produce an all-zero tensor
# if v is a meta tensor
self.w[k] = torch.zeros(v.shape, dtype=v.dtype, device="cpu")
@@ -247,9 +263,14 @@ class RWKV(MyModule):
args.n_embd = w["emb.weight"].shape[1]
args.n_layer = 0
keys = list(w.keys())
self.version = 4
for x in keys:
layer_id = int(x.split(".")[1]) if ("blocks." in x) else 0
args.n_layer = max(args.n_layer, layer_id + 1)
if "ln_x" in x:
self.version = 5
if self.version == 5 and "att.time_decay" in x:
args.n_head = w[x].shape[0]
####################### Compute strategy
@@ -352,6 +373,20 @@ class RWKV(MyModule):
del w["blocks.0.ln0.bias"]
print_need_newline = False
REAL_TIME_FIRST = False
for x in list(w.keys()):
if ".time_faaaa" in x:
REAL_TIME_FIRST = True
if REAL_TIME_FIRST:
w = {
k.replace(".time_faaaa", ".time_first")
if ".time_faaaa" in k
else k: v
for k, v in w.items()
}
self.w = w
keys = list(w.keys())
for x in keys:
w[x].requires_grad = False
@@ -382,8 +417,19 @@ class RWKV(MyModule):
w[x] = w[x].t()
if ".time_decay" in x: # need fp32 for this
w[x] = -torch.exp(w[x].float())
if self.version == 4:
w[x] = -torch.exp(w[x].float())
elif self.version == 5:
w[x] = torch.exp(-torch.exp(w[x].float())).reshape(-1, 1, 1)
elif ".time_first" in x: # need fp32 for this
if self.version == 4:
w[x] = w[x].float()
elif self.version == 5:
if REAL_TIME_FIRST:
w[x] = w[x].float().reshape(-1, 1, 1)
else:
w[x] = torch.exp(w[x].float()).reshape(-1, 1, 1)
elif ".ln_x" in x: # need fp32 for group_norm
w[x] = w[x].float()
else:
if (len(w[x].shape) == 2) and ("emb" not in x):
@@ -931,6 +977,147 @@ class RWKV(MyModule):
########################################################################################################
@MyFunction
def att_one_v5(
self,
x,
sx,
s,
ln_w,
ln_b,
lx_w,
lx_b,
k_mix,
v_mix,
r_mix,
t_decay,
t_first,
kw,
vw,
rw,
ow,
kmx,
krx,
kmy,
kry,
vmx,
vrx,
vmy,
vry,
rmx,
rrx,
rmy,
rry,
omx,
orx,
omy,
ory,
):
xx = F.layer_norm(x, (x.shape[-1],), weight=ln_w, bias=ln_b)
kx = xx * k_mix + sx * (1 - k_mix)
vx = xx * v_mix + sx * (1 - v_mix)
rx = xx * r_mix + sx * (1 - r_mix)
H = t_decay.shape[0]
S = x.shape[-1] // H
r = gemm(rx, rw, output_dtype=torch.float32).view(H, 1, S)
k = gemm(kx, kw, output_dtype=torch.float32).view(H, S, 1)
v = gemm(vx, vw, output_dtype=torch.float32).view(H, 1, S)
a = gemm(k, v)
out = r @ (t_first * a + s)
s = a + t_decay * s
out = out.flatten()
out = F.group_norm(
out.unsqueeze(0), num_groups=H, weight=lx_w, bias=lx_b
).squeeze(0)
out = out.to(dtype=x.dtype)
out = gemm(out, ow)
return x + out, xx, s
@MyFunction
def att_seq_v5(
self,
x,
sx,
s,
ln_w,
ln_b,
lx_w,
lx_b,
k_mix,
v_mix,
r_mix,
t_decay,
t_first,
kw,
vw,
rw,
ow,
kmx,
krx,
kmy,
kry,
vmx,
vrx,
vmy,
vry,
rmx,
rrx,
rmy,
rry,
omx,
orx,
omy,
ory,
):
xx = F.layer_norm(x, (x.shape[-1],), weight=ln_w, bias=ln_b)
sx = torch.cat((sx.unsqueeze(0), xx[:-1, :]))
kx = xx * k_mix + sx * (1 - k_mix)
vx = xx * v_mix + sx * (1 - v_mix)
rx = xx * r_mix + sx * (1 - r_mix)
H = t_decay.shape[0]
S = x.shape[-1] // H
T = x.shape[0]
w = t_decay.reshape(-1, 1)
u = t_first.reshape(-1, 1)
ws = w.pow(T).reshape(H, 1, 1)
ind = torch.arange(T - 1, -1, -1, device=w.device).unsqueeze(0).repeat(H, 1)
w = w.repeat(1, T).pow(ind)
wk = w.reshape(H, 1, T)
wb = wk.transpose(-2, -1).flip(1)
w = torch.cat([w[:, 1:], u], dim=1)
w = F.pad(w, (0, T))
w = torch.tile(w, [T])
w = w[:, :-T].reshape(-1, T, 2 * T - 1)
w = w[:, :, T - 1 :].reshape(H, T, T)
r = gemm(rx, rw, output_dtype=torch.float32).view(T, H, S).transpose(0, 1)
k = (
gemm(kx, kw, output_dtype=torch.float32)
.view(T, H, S)
.transpose(0, 1)
.transpose(-2, -1)
)
v = gemm(vx, vw, output_dtype=torch.float32).view(T, H, S).transpose(0, 1)
out = ((r @ k) * w) @ v + (r @ s) * wb
s = ws * s + (k * wk) @ v
out = out.transpose(0, 1).contiguous().reshape(T, H * S)
out = F.group_norm(out, num_groups=H, weight=lx_w, bias=lx_b)
out = out.to(dtype=x.dtype)
out = gemm(out, ow)
return x + out, xx[-1, :], s
########################################################################################################
if os.environ["RWKV_CUDA_ON"] == "1":
@MyFunction
@@ -1140,7 +1327,7 @@ class RWKV(MyModule):
xx = torch.ops.rwkv.ffn_seq(
x, sx, ln_w, ln_b, k_mix, r_mix, kw, vw, rw, buf, x_plus_out
)
return x_plus_out, xx[-1:]
return x_plus_out, xx[-1, :]
@MyFunction
def cuda_att_one_fp16(
@@ -1220,6 +1407,86 @@ class RWKV(MyModule):
)
return x_plus_out_t, xx, t1_t, t2_t, p_t
@MyFunction
def cuda_att_one_v5_fp16(
self,
x,
sx,
s,
ln_w,
ln_b,
lx_w,
lx_b,
k_mix,
v_mix,
r_mix,
t_decay,
t_first,
kw,
vw,
rw,
ow,
kmx,
krx,
kmy,
kry,
vmx,
vrx,
vmy,
vry,
rmx,
rrx,
rmy,
rry,
omx,
orx,
omy,
ory,
):
kx = torch.empty_like(x)
vx = torch.empty_like(x)
rx = torch.empty_like(x)
H = t_decay.shape[0]
S = x.shape[-1] // H
r = torch.empty((H * S,), dtype=torch.float32, device=x.device)
k = torch.empty((H * S,), dtype=torch.float32, device=x.device)
v = torch.empty((H * S,), dtype=torch.float32, device=x.device)
s1 = torch.empty((H, S, S), dtype=torch.float32, device=x.device)
s2 = torch.empty((H, S, S), dtype=torch.float32, device=x.device)
x_plus_out = torch.empty_like(x)
xx = torch.ops.rwkv.att_one_v5(
x,
sx,
s,
ln_w,
ln_b,
lx_w,
lx_b,
k_mix,
v_mix,
r_mix,
kw,
kx,
vw,
vx,
rw,
rx,
ow,
t_first,
k,
t_decay,
v,
r,
s1,
x_plus_out,
s2,
)
return x_plus_out, xx, s2
@MyFunction
def cuda_ffn_one_fp16(
self,
@@ -1265,34 +1532,63 @@ class RWKV(MyModule):
args = self.args
if state == None:
state = [None] * args.n_layer * 5
for i in range(
args.n_layer
): # state: 0=att_xx 1=att_aa 2=att_bb 3=att_pp 4=ffn_xx
dd = self.strategy[i]
dev = dd.device
atype = dd.atype
state[i * 5 + 0] = torch.zeros(
args.n_embd, dtype=atype, requires_grad=False, device=dev
).contiguous()
state[i * 5 + 1] = torch.zeros(
args.n_embd, dtype=torch.float, requires_grad=False, device=dev
).contiguous()
state[i * 5 + 2] = torch.zeros(
args.n_embd, dtype=torch.float, requires_grad=False, device=dev
).contiguous()
state[i * 5 + 3] = (
torch.zeros(
if self.version == 4:
state = [None] * args.n_layer * 5
for i in range(
args.n_layer
): # state: 0=att_xx 1=att_aa 2=att_bb 3=att_pp 4=ffn_xx
dd = self.strategy[i]
dev = dd.device
atype = dd.atype
state[i * 5 + 0] = torch.zeros(
args.n_embd, dtype=atype, requires_grad=False, device=dev
).contiguous()
state[i * 5 + 1] = torch.zeros(
args.n_embd,
dtype=torch.float,
requires_grad=False,
device=dev,
).contiguous()
- 1e30
)
state[i * 5 + 4] = torch.zeros(
args.n_embd, dtype=atype, requires_grad=False, device=dev
).contiguous()
state[i * 5 + 2] = torch.zeros(
args.n_embd,
dtype=torch.float,
requires_grad=False,
device=dev,
).contiguous()
state[i * 5 + 3] = (
torch.zeros(
args.n_embd,
dtype=torch.float,
requires_grad=False,
device=dev,
).contiguous()
- 1e30
)
state[i * 5 + 4] = torch.zeros(
args.n_embd, dtype=atype, requires_grad=False, device=dev
).contiguous()
elif self.version == 5:
state = [None] * args.n_layer * 3
for i in range(args.n_layer): # state: 0=att_xx 1=att_kv 2=ffn_xx
dd = self.strategy[i]
dev = dd.device
atype = dd.atype
state[i * 3 + 0] = torch.zeros(
args.n_embd, dtype=atype, requires_grad=False, device=dev
).contiguous()
state[i * 3 + 1] = torch.zeros(
(
args.n_head,
args.n_embd // args.n_head,
args.n_embd // args.n_head,
),
dtype=torch.float,
requires_grad=False,
device=dev,
).contiguous()
state[i * 3 + 2] = torch.zeros(
args.n_embd, dtype=atype, requires_grad=False, device=dev
).contiguous()
seq_mode = len(tokens) > 1
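The branch above makes the state layout explicit: v4 keeps five n_embd vectors per layer, while v5 keeps two vectors plus one (n_head, head_size, head_size) matrix. A quick size check with illustrative numbers (hypothetical model dimensions, not from a real config):

n_embd, n_head = 2048, 32           # illustrative values
S = n_embd // n_head                # head size, 64
att_kv = n_head * S * S             # 131072 fp32 values per layer
print(att_kv * 4 / 1024, "KiB")     # 512.0 KiB for att_kv alone, per layer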
@@ -1317,9 +1613,13 @@ class RWKV(MyModule):
ATT = self.cuda_att_seq_i8
else:
ATT = self.cuda_att_seq_naive
if self.version == 5:
ATT = self.att_seq_v5
else:
ATT = self.att_one if wtype != torch.uint8 else self.att_one_i8
FFN = self.ffn_one if wtype != torch.uint8 else self.ffn_one_i8
if self.version == 5:
ATT = self.att_one_v5
if (
"cuda" in str(dev)
and os.environ["RWKV_CUDA_ON"] == "1"
@@ -1327,6 +1627,8 @@ class RWKV(MyModule):
):
ATT = self.cuda_att_one_fp16
FFN = self.cuda_ffn_one_fp16
if self.version == 5:
ATT = self.cuda_att_one_v5_fp16
x = x.to(dtype=atype, device=dev)
@@ -1355,46 +1657,82 @@ class RWKV(MyModule):
orx = w[f"{att}output.weight_rx"] if wtype == torch.uint8 else x
omy = w[f"{att}output.weight_my"] if wtype == torch.uint8 else x
ory = w[f"{att}output.weight_ry"] if wtype == torch.uint8 else x
(
x,
state[i * 5 + 0],
state[i * 5 + 1],
state[i * 5 + 2],
state[i * 5 + 3],
) = ATT(
x,
state[i * 5 + 0],
state[i * 5 + 1],
state[i * 5 + 2],
state[i * 5 + 3],
w[f"{bbb}ln1.weight"],
w[f"{bbb}ln1.bias"],
w[f"{att}time_mix_k"],
w[f"{att}time_mix_v"],
w[f"{att}time_mix_r"],
w[f"{att}time_decay"],
w[f"{att}time_first"],
kw,
vw,
rw,
ow,
kmx,
krx,
kmy,
kry,
vmx,
vrx,
vmy,
vry,
rmx,
rrx,
rmy,
rry,
omx,
orx,
omy,
ory,
)
if self.version == 4:
(
x,
state[i * 5 + 0],
state[i * 5 + 1],
state[i * 5 + 2],
state[i * 5 + 3],
) = ATT(
x,
state[i * 5 + 0],
state[i * 5 + 1],
state[i * 5 + 2],
state[i * 5 + 3],
w[f"{bbb}ln1.weight"],
w[f"{bbb}ln1.bias"],
w[f"{att}time_mix_k"],
w[f"{att}time_mix_v"],
w[f"{att}time_mix_r"],
w[f"{att}time_decay"],
w[f"{att}time_first"],
kw,
vw,
rw,
ow,
kmx,
krx,
kmy,
kry,
vmx,
vrx,
vmy,
vry,
rmx,
rrx,
rmy,
rry,
omx,
orx,
omy,
ory,
)
elif self.version == 5:
x, state[i * 3 + 0], state[i * 3 + 1] = ATT(
x,
state[i * 3 + 0],
state[i * 3 + 1],
w[f"{bbb}ln1.weight"],
w[f"{bbb}ln1.bias"],
w[f"{att}ln_x.weight"],
w[f"{att}ln_x.bias"],
w[f"{att}time_mix_k"],
w[f"{att}time_mix_v"],
w[f"{att}time_mix_r"],
w[f"{att}time_decay"],
w[f"{att}time_first"],
kw,
vw,
rw,
ow,
kmx,
krx,
kmy,
kry,
vmx,
vrx,
vmy,
vry,
rmx,
rrx,
rmy,
rry,
omx,
orx,
omy,
ory,
)
if dd.stream:
del kw, vw, rw, ow
@@ -1417,9 +1755,13 @@ class RWKV(MyModule):
rrx = w[f"{ffn}receptance.weight_rx"] if wtype == torch.uint8 else x
rmy = w[f"{ffn}receptance.weight_my"] if wtype == torch.uint8 else x
rry = w[f"{ffn}receptance.weight_ry"] if wtype == torch.uint8 else x
x, state[i * 5 + 4] = FFN(
if self.version == 4:
offset = i * 5 + 4
elif self.version == 5:
offset = i * 3 + 2
x, state[offset] = FFN(
x,
state[i * 5 + 4],
state[offset],
w[f"{bbb}ln2.weight"],
w[f"{bbb}ln2.bias"],
w[f"{ffn}time_mix_k"],

Binary file not shown.


@@ -0,0 +1,86 @@
#include <cublas_v2.h>
#include <cuda.h>
#include <cuda_fp16.h>
#include <cuda_runtime.h>
#include <torch/extension.h>
#define CUBLAS_CHECK(condition) \
for (cublasStatus_t _cublas_check_status = (condition); \
_cublas_check_status != CUBLAS_STATUS_SUCCESS;) \
throw std::runtime_error("cuBLAS error " + \
std::to_string(_cublas_check_status) + " at " + \
std::to_string(__LINE__));
#define CUDA_CHECK(condition) \
for (cudaError_t _cuda_check_status = (condition); \
_cuda_check_status != cudaSuccess;) \
throw std::runtime_error( \
"CUDA error " + std::string(cudaGetErrorString(_cuda_check_status)) + \
" at " + std::to_string(__LINE__));
cublasHandle_t get_cublas_handle() {
static cublasHandle_t cublas_handle = []() {
cublasHandle_t handle = nullptr;
CUBLAS_CHECK(cublasCreate(&handle));
#if CUDA_VERSION < 11000
CUBLAS_CHECK(cublasSetMathMode(handle, CUBLAS_TENSOR_OP_MATH));
#else
CUBLAS_CHECK(cublasSetMathMode(handle, CUBLAS_DEFAULT_MATH));
#endif // CUDA_VERSION < 11000
return handle;
}();
return cublas_handle;
}
/*
NOTE: blas gemm is column-major by default, but we need row-major output.
The data of row-major, transposed matrix is exactly the same as the
column-major, non-transposed matrix, and C = A * B ---> C^T = B^T * A^T
*/
void gemm_fp16_cublas(torch::Tensor a, torch::Tensor b, torch::Tensor c) {
const auto cuda_data_type = CUDA_R_16F;
const auto cuda_c_data_type =
c.dtype() == torch::kFloat32 ? CUDA_R_32F : CUDA_R_16F;
const auto compute_type = CUDA_R_32F;
const float sp_alpha = 1.f;
// swap a and b, and use CUBLAS_OP_N. see the notes above
std::swap(a, b);
const cublasOperation_t cublas_trans_a = CUBLAS_OP_N;
const cublasOperation_t cublas_trans_b = CUBLAS_OP_N;
// m = (B^T).size(0) = B.size(1), which is a.size(-1) after the swap;
// negative axes are used because batched matmul adds a leading batch dimension.
const int m = a.size(-1);
const int k = a.size(-2);
const int n = b.size(-2);
const int cublas_lda = m;
const int cublas_ldb = k;
const int cublas_ldc = m;
cublasHandle_t cublas_handle = get_cublas_handle();
#if CUDA_VERSION >= 11000
cublasGemmAlgo_t algo = CUBLAS_GEMM_DEFAULT;
#else
cublasGemmAlgo_t algo = CUBLAS_GEMM_DFALT_TENSOR_OP;
#endif
const float sp_beta = 0.f;
if (a.sizes().size() == 2 && b.sizes().size() == 2) {
CUBLAS_CHECK(cublasGemmEx(
cublas_handle, cublas_trans_a, cublas_trans_b, m, n, k, &sp_alpha,
a.data_ptr(), cuda_data_type, cublas_lda, b.data_ptr(), cuda_data_type,
cublas_ldb, &sp_beta, c.data_ptr(), cuda_c_data_type, cublas_ldc,
compute_type, algo));
} else {
// batch matmul
assert(a.sizes().size() == 3 && b.sizes().size() == 3);
const long long int cublas_stride_a = m * k;
const long long int cublas_stride_b = k * n;
const long long int cublas_stride_c = m * n;
CUBLAS_CHECK(cublasGemmStridedBatchedEx(
cublas_handle, cublas_trans_a, cublas_trans_b, m,
n, k, &sp_alpha, a.data_ptr(), cuda_data_type, cublas_lda,
cublas_stride_a, b.data_ptr(), cuda_data_type, cublas_ldb, cublas_stride_b,
&sp_beta, c.data_ptr(), cuda_c_data_type, cublas_ldc, cublas_stride_c,
a.size(0), compute_type, algo));
}
}
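The swap-and-no-transpose trick in this file rests on the identity C = A·B ⇔ Cᵀ = Bᵀ·Aᵀ, plus the fact that a row-major buffer read as column-major is the transpose. A quick numerical check of the identity:

import torch

A = torch.randn(3, 4)
B = torch.randn(4, 5)
C = A @ B
assert torch.allclose(C.T, B.T @ A.T, atol=1e-6)  # swap the operands, get C^T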


@@ -0,0 +1,246 @@
#include <stdio.h>
#include <assert.h>
#include "ATen/ATen.h"
#include <cuda_fp16.h>
#define MIN_VALUE (-1e38)
typedef at::Half fp16;
__half *cast(fp16 *ptr) {
return reinterpret_cast<__half *>(ptr);
}
template <typename F>
__global__ void kernel_wkv_forward(const int B, const int T, const int C,
const float *__restrict__ const _w, const float *__restrict__ const _u, const F *__restrict__ const _k, const F *__restrict__ const _v,
F *__restrict__ const _y, float *__restrict__ const _aa, float *__restrict__ const _bb, float *__restrict__ const _pp) {
const int idx = blockIdx.x * blockDim.x + threadIdx.x;
const int _b = idx / C;
const int _c = idx % C;
const int _offset = _b * T * C + _c;
const int _state_offset = _b * C + _c;
float u = _u[_c];
float w = _w[_c];
const F *__restrict__ const k = _k + _offset;
const F *__restrict__ const v = _v + _offset;
F *__restrict__ const y = _y + _offset;
float aa = _aa[_state_offset];
float bb = _bb[_state_offset];
float pp = _pp[_state_offset];
for (int i = 0; i < T; i++) {
const int ii = i * C;
const float kk = float(k[ii]);
const float vv = float(v[ii]);
float ww = u + kk;
float p = max(pp, ww);
float e1 = exp(pp - p);
float e2 = exp(ww - p);
y[ii] = F((e1 * aa + e2 * vv) / (e1 * bb + e2));
ww = w + pp;
p = max(ww, kk);
e1 = exp(ww - p);
e2 = exp(kk - p);
aa = e1 * aa + e2 * vv;
bb = e1 * bb + e2;
pp = p;
}
_aa[_state_offset] = aa;
_bb[_state_offset] = bb;
_pp[_state_offset] = pp;
}
template <typename F>
void cuda_wkv_forward(int B, int T, int C, float *w, float *u, F *k, F *v, F *y, float *aa, float *bb, float *pp) {
dim3 threadsPerBlock( min(C, 32) );
assert(B * C % threadsPerBlock.x == 0);
dim3 numBlocks(B * C / threadsPerBlock.x);
kernel_wkv_forward<<<numBlocks, threadsPerBlock>>>(B, T, C, w, u, k, v, y, aa, bb, pp);
}
template void cuda_wkv_forward<fp16>(
int B, int T, int C,
float *w, float *u, fp16 *k, fp16 *v, fp16 *y,
float *aa, float *bb, float *pp);
template void cuda_wkv_forward<float>(
int B, int T, int C,
float *w, float *u, float *k, float *v, float *y,
float *aa, float *bb, float *pp);
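kernel_wkv_forward above carries three running scalars per channel: numerator aa, denominator bb, and the max exponent pp that keeps every exp() argument non-positive. A pure-Python reference of one channel, mirroring the loop body:

import math

def wkv_channel(w, u, ks, vs, aa=0.0, bb=0.0, pp=-1e38):
    ys = []
    for kk, vv in zip(ks, vs):
        ww = u + kk
        p = max(pp, ww)
        e1, e2 = math.exp(pp - p), math.exp(ww - p)
        ys.append((e1 * aa + e2 * vv) / (e1 * bb + e2))
        ww = w + pp
        p = max(ww, kk)
        e1, e2 = math.exp(ww - p), math.exp(kk - p)
        aa, bb, pp = e1 * aa + e2 * vv, e1 * bb + e2, p
    return ys, (aa, bb, pp)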
__global__ void kernel_mm_seq_fp32i8(
const int B, const int N, const int M,
const float *__restrict__ const x, const int x_stride,
const uint8_t *__restrict__ const w, const int w_stride,
const float *__restrict__ const mx,
const float *__restrict__ const rx,
const float *__restrict__ const my,
const float *__restrict__ const ry,
float *__restrict__ const y, const int y_stride) {
const int i = blockIdx.x * blockDim.x + threadIdx.x;
const int k = blockIdx.y * blockDim.y + threadIdx.y;
if (i < B && k < M) {
float y_local = 0;
for (int j = 0; j < N; ++j) {
y_local += x[i * x_stride + j] * (
(float(w[j * w_stride + k]) + 0.5f)
* rx[k] * ry[j] + mx[k] + my[j]
);
}
y[i * y_stride + k] = y_local;
}
}
template <typename F>
void cuda_mm8_seq(int B, int N, int M,
F *x, int x_stride,
uint8_t *w, int w_stride,
F *mx, F *rx,
F *my, F *ry,
F *y, int y_stride);
template <>
void cuda_mm8_seq<float>(int B, int N, int M,
float *x, int x_stride,
uint8_t *w, int w_stride,
float *mx, float *rx,
float *my, float *ry,
float *y, int y_stride) {
dim3 blockSize(1, 128);
dim3 gridSize((B + blockSize.x - 1) / blockSize.x, (M + blockSize.y - 1) / blockSize.y);
kernel_mm_seq_fp32i8<<<gridSize, blockSize>>>(
B, N, M, x, x_stride, w, w_stride,
mx, rx, my, ry, y, y_stride);
}
__global__ void kernel_mm_seq_fp16i8(
const int B, const int N, const int M,
const __half *__restrict__ const x, const int x_stride,
const uint8_t *__restrict__ const w, const int w_stride,
const __half *__restrict__ const mx,
const __half *__restrict__ const rx,
const __half *__restrict__ const my,
const __half *__restrict__ const ry,
__half *__restrict__ const y, const int y_stride) {
const int i = blockIdx.x * blockDim.x + threadIdx.x;
const int k = blockIdx.y * blockDim.y + threadIdx.y;
if (i < B && k < M) {
float y_local = 0;
for (int j = 0; j < N; ++j) {
y_local += __half2float(x[i * x_stride + j]) * (
(float(w[j * w_stride + k]) + 0.5f)
* __half2float(rx[k]) * __half2float(ry[j])
+ __half2float(mx[k]) + __half2float(my[j])
);
}
y[i * y_stride + k] = __float2half(y_local);
}
}
template <>
void cuda_mm8_seq<fp16>(int B, int N, int M,
fp16 *x, int x_stride,
uint8_t *w, int w_stride,
fp16 *mx, fp16 *rx,
fp16 *my, fp16 *ry,
fp16 *y, int y_stride) {
dim3 blockSize(1, 128);
dim3 gridSize((B + blockSize.x - 1) / blockSize.x, (M + blockSize.y - 1) / blockSize.y);
kernel_mm_seq_fp16i8<<<gridSize, blockSize>>>(
B, N, M, cast(x), x_stride, w, w_stride,
cast(mx), cast(rx), cast(my), cast(ry), cast(y), y_stride);
}
#define MM8_ONE_JSPLIT 24
#define MM8_ONE_TILE 1024
__global__ void kernel_mm_one_fp32i8(
const int N, const int M,
const float *__restrict__ const x,
const uint8_t *__restrict__ const w, const int w_stride,
const float *__restrict__ const mx,
const float *__restrict__ const rx,
const float *__restrict__ const my,
const float *__restrict__ const ry,
float *__restrict__ const y) {
const int k = blockIdx.y * blockDim.y + threadIdx.y;
const int j0 = min(N, blockIdx.x * ((N + MM8_ONE_JSPLIT - 1) / MM8_ONE_JSPLIT));
const int j1 = min(N, (blockIdx.x + 1) * ((N + MM8_ONE_JSPLIT - 1) / MM8_ONE_JSPLIT));
if (k < M) {
float y_local = 0;
for (int j = j0; j < j1; ++j) {
y_local += x[j] * (
(float(w[j * w_stride + k]) + 0.5f)
* rx[k] * ry[j] + mx[k] + my[j]
);
}
atomicAdd(&y[k], y_local);
}
}
template <typename F>
void cuda_mm8_one(int N, int M,
F *x,
uint8_t *w, int w_stride,
F *mx, F *rx,
F *my, F *ry,
float *y);
template <>
void cuda_mm8_one<float>(int N, int M,
float *x,
uint8_t *w, int w_stride,
float *mx, float *rx,
float *my, float *ry,
float *y) {
dim3 blockSize(1, MM8_ONE_TILE);
dim3 gridSize(MM8_ONE_JSPLIT, (M + blockSize.y - 1) / blockSize.y);
kernel_mm_one_fp32i8<<<gridSize, blockSize>>>(
N, M, x, w, w_stride,
mx, rx, my, ry, y);
}
__global__ void kernel_mm_one_fp16i8(
const int N, const int M,
const __half *__restrict__ const x,
const uint8_t *__restrict__ const w, const int w_stride,
const __half *__restrict__ const mx,
const __half *__restrict__ const rx,
const __half *__restrict__ const my,
const __half *__restrict__ const ry,
float *__restrict__ const y) {
const int k = blockIdx.y * blockDim.y + threadIdx.y;
const int j0 = min(N, blockIdx.x * ((N + MM8_ONE_JSPLIT - 1) / MM8_ONE_JSPLIT));
const int j1 = min(N, (blockIdx.x + 1) * ((N + MM8_ONE_JSPLIT - 1) / MM8_ONE_JSPLIT));
if (k < M) {
float y_local = 0;
for (int j = j0; j < j1; ++j) {
y_local += __half2float(x[j]) * (
(float(w[j * w_stride + k]) + 0.5f)
* __half2float(rx[k]) * __half2float(ry[j])
+ __half2float(mx[k]) + __half2float(my[j])
);
}
atomicAdd(&y[k], y_local);
}
}
template <>
void cuda_mm8_one<fp16>(int N, int M,
fp16 *x,
uint8_t *w, int w_stride,
fp16 *mx, fp16 *rx,
fp16 *my, fp16 *ry,
float *y) {
dim3 blockSize(1, MM8_ONE_TILE);
dim3 gridSize(MM8_ONE_JSPLIT, (M + blockSize.y - 1) / blockSize.y);
kernel_mm_one_fp16i8<<<gridSize, blockSize>>>(
N, M, cast(x), w, w_stride,
cast(mx), cast(rx), cast(my), cast(ry), y);
}
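All four mm8 kernels in this file share one dequantization rule: a stored uint8 weight b decodes to (b + 0.5) * rx[col] * ry[row] + mx[col] + my[row], a per-row and per-column affine code. A NumPy sketch of the seq variant:

import numpy as np

def mm8_seq_ref(x, w_u8, mx, rx, my, ry):
    # x: (B, N) float32; w_u8: (N, M) uint8; mx, rx: (M,); my, ry: (N,)
    w = (w_u8.astype(np.float32) + 0.5) * rx[None, :] * ry[:, None]
    w += mx[None, :] + my[:, None]
    return x @ w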

backend-python/rwkv_pip/cuda/rwkv5.cu vendored Normal file

@@ -0,0 +1,88 @@
#include <stdio.h>
#include <assert.h>
#include "ATen/ATen.h"
typedef at::BFloat16 bf16;
typedef at::Half fp16;
typedef float fp32;
template <typename F>
__global__ void kernel_forward(const int B, const int T, const int C, const int H, float *__restrict__ _state,
const F *__restrict__ const _r, const F *__restrict__ const _k, const F *__restrict__ const _v, const float *__restrict__ _w, const F *__restrict__ _u,
F *__restrict__ const _y)
{
const int b = blockIdx.x / H;
const int h = blockIdx.x % H;
const int i = threadIdx.x;
_w += h*_N_;
_u += h*_N_;
_state += h*_N_*_N_ + i*_N_; // wrong if B > 1 !!!
__shared__ float r[_N_], k[_N_], u[_N_], w[_N_];
float state[_N_];
#pragma unroll
for (int j = 0; j < _N_; j++)
state[j] = _state[j];
__syncthreads();
u[i] = float(_u[i]);
w[i] = _w[i];
__syncthreads();
for (int t = b*T*C + h*_N_ + i; t < (b+1)*T*C + h*_N_ + i; t += C)
{
__syncthreads();
r[i] = float(_r[t]);
k[i] = float(_k[t]);
__syncthreads();
const float v = float(_v[t]);
float y = 0;
#pragma unroll
for (int j = 0; j < _N_; j+=4)
{
const float4& r_ = (float4&)(r[j]);
const float4& k_ = (float4&)(k[j]);
const float4& w_ = (float4&)(w[j]);
const float4& u_ = (float4&)(u[j]);
float4& s = (float4&)(state[j]);
float4 x;
x.x = k_.x * v;
x.y = k_.y * v;
x.z = k_.z * v;
x.w = k_.w * v;
y += r_.x * (u_.x * x.x + s.x);
y += r_.y * (u_.y * x.y + s.y);
y += r_.z * (u_.z * x.z + s.z);
y += r_.w * (u_.w * x.w + s.w);
s.x = s.x * w_.x + x.x;
s.y = s.y * w_.y + x.y;
s.z = s.z * w_.z + x.z;
s.w = s.w * w_.w + x.w;
}
_y[t] = F(y);
}
#pragma unroll
for (int j = 0; j < _N_; j++)
_state[j] = state[j];
}
void cuda_forward_bf16(int B, int T, int C, int H, float *state, bf16 *r, bf16 *k, bf16 *v, float *w, bf16 *u, bf16 *y)
{
assert(H*_N_ == C);
kernel_forward<<<dim3(B * H), dim3(_N_)>>>(B, T, C, H, state, r, k, v, w, u, y);
}
void cuda_forward_fp16(int B, int T, int C, int H, float *state, fp16 *r, fp16 *k, fp16 *v, float *w, fp16 *u, fp16 *y)
{
assert(H*_N_ == C);
kernel_forward<<<dim3(B * H), dim3(_N_)>>>(B, T, C, H, state, r, k, v, w, u, y);
}
void cuda_forward_fp32(int B, int T, int C, int H, float *state, fp32 *r, fp32 *k, fp32 *v, float *w, fp32 *u, fp32 *y)
{
assert(H*_N_ == C);
kernel_forward<<<dim3(B * H), dim3(_N_)>>>(B, T, C, H, state, r, k, v, w, u, y);
}
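The float4 block in kernel_forward is a hand-vectorized form of a simple scalar loop: for the output channel holding value v, each j contributes r[j]·(u[j]·k[j]·v + s[j]) to y, and the state cell decays as s[j]·w[j] + k[j]·v. The scalar equivalent, as a sketch:

def inner_loop(r, k, w, u, state, v):
    # Scalar form of the float4-unrolled loop above; j runs over the head dimension.
    y = 0.0
    for j in range(len(r)):
        x = k[j] * v
        y += r[j] * (u[j] * x + state[j])
        state[j] = state[j] * w[j] + x
    return y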


@@ -0,0 +1,30 @@
#include <torch/extension.h>
#include "ATen/ATen.h"
typedef at::BFloat16 bf16;
typedef at::Half fp16;
typedef float fp32;
void cuda_forward_bf16(int B, int T, int C, int H, float *state, bf16 *r, bf16 *k, bf16 *v, float *w, bf16 *u, bf16 *y);
void cuda_forward_fp16(int B, int T, int C, int H, float *state, fp16 *r, fp16 *k, fp16 *v, float *w, fp16 *u, fp16 *y);
void cuda_forward_fp32(int B, int T, int C, int H, float *state, fp32 *r, fp32 *k, fp32 *v, float *w, fp32 *u, fp32 *y);
void forward_bf16(int64_t B, int64_t T, int64_t C, int64_t H, torch::Tensor &state, torch::Tensor &r, torch::Tensor &k, torch::Tensor &v, torch::Tensor &w, torch::Tensor &u, torch::Tensor &y) {
cuda_forward_bf16(B, T, C, H, state.data_ptr<float>(), r.data_ptr<bf16>(), k.data_ptr<bf16>(), v.data_ptr<bf16>(), w.data_ptr<float>(), u.data_ptr<bf16>(), y.data_ptr<bf16>());
}
void forward_fp16(int64_t B, int64_t T, int64_t C, int64_t H, torch::Tensor &state, torch::Tensor &r, torch::Tensor &k, torch::Tensor &v, torch::Tensor &w, torch::Tensor &u, torch::Tensor &y) {
cuda_forward_fp16(B, T, C, H, state.data_ptr<float>(), r.data_ptr<fp16>(), k.data_ptr<fp16>(), v.data_ptr<fp16>(), w.data_ptr<float>(), u.data_ptr<fp16>(), y.data_ptr<fp16>());
}
void forward_fp32(int64_t B, int64_t T, int64_t C, int64_t H, torch::Tensor &state, torch::Tensor &r, torch::Tensor &k, torch::Tensor &v, torch::Tensor &w, torch::Tensor &u, torch::Tensor &y) {
cuda_forward_fp32(B, T, C, H, state.data_ptr<float>(), r.data_ptr<fp32>(), k.data_ptr<fp32>(), v.data_ptr<fp32>(), w.data_ptr<float>(), u.data_ptr<fp32>(), y.data_ptr<fp32>());
}
PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
m.def("forward_bf16", &forward_bf16, "rwkv5 forward_bf16");
m.def("forward_fp16", &forward_fp16, "rwkv5 forward_fp16");
m.def("forward_fp32", &forward_fp32, "rwkv5 forward_fp32");
}
TORCH_LIBRARY(rwkv5, m) {
m.def("forward_bf16", forward_bf16);
m.def("forward_fp16", forward_fp16);
m.def("forward_fp32", forward_fp32);
}
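Once the extension is built and loaded, the op is reachable through torch.ops under the names registered above. A hypothetical invocation (a sketch: shapes follow the launchers, with state per-head fp32, w fp32, the rest fp16; the library path and dimensions are illustrative):

import torch

torch.ops.load_library("rwkv5.pyd")                       # illustrative path
B, T, H, N = 1, 4, 2, 64                                  # N must equal the compiled _N_
C = H * N
state = torch.zeros(H, N, N, dtype=torch.float32, device="cuda")
r, k, v = (torch.randn(T, C, dtype=torch.float16, device="cuda") for _ in range(3))
w = torch.rand(C, dtype=torch.float32, device="cuda")     # per-channel decay in (0, 1)
u = torch.randn(C, dtype=torch.float16, device="cuda")
y = torch.empty_like(r)
torch.ops.rwkv5.forward_fp16(B, T, C, H, state, r, k, v, w, u, y)  # fills y, updates state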

backend-python/rwkv_pip/cuda/wrapper.cpp vendored Normal file

@@ -0,0 +1,141 @@
#include <torch/extension.h>
#include "ATen/ATen.h"
#include <iostream>
#include <c10/cuda/CUDAGuard.h>
typedef at::Half fp16;
template <typename F>
void cuda_wkv_forward(int B, int T, int C,
float *w, float *u, F *k, F *v, F *y,
float *aa, float *bb, float *pp);
template <typename F>
void cuda_mm8_seq(int B, int N, int M,
F *x, int x_stride,
uint8_t *w, int w_stride,
F *mx, F *rx,
F *my, F *ry,
F *y, int y_stride);
template <typename F>
void cuda_mm8_one(int N, int M,
F *x,
uint8_t *w, int w_stride,
F *mx, F *rx,
F *my, F *ry,
float *y);
void wkv_forward(int64_t B, int64_t T, int64_t C,
torch::Tensor &w, torch::Tensor &u,
torch::Tensor &k, torch::Tensor &v, torch::Tensor &y,
torch::Tensor &aa, torch::Tensor &bb, torch::Tensor &pp) {
const at::cuda::OptionalCUDAGuard device_guard(device_of(w));
switch (k.scalar_type()) {
case c10::ScalarType::Half:
cuda_wkv_forward(B, T, C,
w.data_ptr<float>(), u.data_ptr<float>(),
k.data_ptr<fp16>(), v.data_ptr<fp16>(), y.data_ptr<fp16>(),
aa.data_ptr<float>(), bb.data_ptr<float>(), pp.data_ptr<float>());
break;
case c10::ScalarType::Float:
cuda_wkv_forward(B, T, C,
w.data_ptr<float>(), u.data_ptr<float>(),
k.data_ptr<float>(), v.data_ptr<float>(), y.data_ptr<float>(),
aa.data_ptr<float>(), bb.data_ptr<float>(), pp.data_ptr<float>());
break;
default:
assert(false && "Only FP16 and FP32 are currently supported");
}
}
void mm8_seq(int64_t B, int64_t N, int64_t M,
torch::Tensor &x, torch::Tensor &w,
torch::Tensor &mx, torch::Tensor &rx,
torch::Tensor &my, torch::Tensor &ry,
torch::Tensor &y) {
assert(x.stride(1) == 1);
assert(w.stride(1) == 1);
assert(mx.stride(0) == 1 && rx.stride(0) == 1);
assert(my.stride(0) == 1 && ry.stride(0) == 1);
assert(y.stride(1) == 1);
const at::cuda::OptionalCUDAGuard device_guard(device_of(w));
switch (x.scalar_type()) {
case c10::ScalarType::Half:
cuda_mm8_seq(
B, N, M,
x.data_ptr<fp16>(), x.stride(0),
w.data_ptr<uint8_t>(), w.stride(0),
mx.data_ptr<fp16>(), rx.data_ptr<fp16>(),
my.data_ptr<fp16>(), ry.data_ptr<fp16>(),
y.data_ptr<fp16>(), y.stride(0));
break;
case c10::ScalarType::Float:
cuda_mm8_seq(
B, N, M,
x.data_ptr<float>(), x.stride(0),
w.data_ptr<uint8_t>(), w.stride(0),
mx.data_ptr<float>(), rx.data_ptr<float>(),
my.data_ptr<float>(), ry.data_ptr<float>(),
y.data_ptr<float>(), y.stride(0));
break;
default:
assert(false && "Only FP16 and FP32 are currently supported");
}
}
void mm8_one(int64_t N, int64_t M,
torch::Tensor &x, torch::Tensor &w,
torch::Tensor &mx, torch::Tensor &rx,
torch::Tensor &my, torch::Tensor &ry,
torch::Tensor &y) {
assert(x.stride(0) == 1);
assert(w.stride(1) == 1);
assert(mx.stride(0) == 1 && rx.stride(0) == 1);
assert(my.stride(0) == 1 && ry.stride(0) == 1);
assert(y.stride(0) == 1);
const at::cuda::OptionalCUDAGuard device_guard(device_of(w));
switch (x.scalar_type()) {
case c10::ScalarType::Half:
cuda_mm8_one(
N, M,
x.data_ptr<fp16>(),
w.data_ptr<uint8_t>(), w.stride(0),
mx.data_ptr<fp16>(), rx.data_ptr<fp16>(),
my.data_ptr<fp16>(), ry.data_ptr<fp16>(),
y.data_ptr<float>());
break;
case c10::ScalarType::Float:
cuda_mm8_one(
N, M,
x.data_ptr<float>(),
w.data_ptr<uint8_t>(), w.stride(0),
mx.data_ptr<float>(), rx.data_ptr<float>(),
my.data_ptr<float>(), ry.data_ptr<float>(),
y.data_ptr<float>());
break;
default:
assert(false && "Only FP16 and FP32 are currently supported");
}
}
using torch::Tensor;
#ifndef DISABLE_CUBLAS_GEMM
void gemm_fp16_cublas(Tensor a, Tensor b, Tensor c);
#endif
PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
m.def("wkv_forward", &wkv_forward, "wkv forward");
m.def("mm8_seq", &mm8_seq, "mm8 seq");
m.def("mm8_one", &mm8_one, "mm8 one");
#ifndef DISABLE_CUBLAS_GEMM
m.def("gemm_fp16_cublas", &gemm_fp16_cublas, "gemv fp16 cublas");
#endif
}
TORCH_LIBRARY(rwkv, m) {
m.def("wkv_forward", wkv_forward);
m.def("mm8_seq", mm8_seq);
m.def("mm8_one", mm8_one);
#ifndef DISABLE_CUBLAS_GEMM
m.def("gemm_fp16_cublas", gemm_fp16_cublas);
#endif
}

backend-python/rwkv_pip/model.py vendored Normal file

File diff suppressed because it is too large.

backend-python/rwkv_pip/rwkv5.pyd vendored Normal file

Binary file not shown.


@@ -16,6 +16,7 @@ class PIPELINE_ARGS:
top_k=0,
alpha_frequency=0.2,
alpha_presence=0.2,
alpha_decay=0.996,
token_ban=[],
token_stop=[],
chunk_len=256,
@@ -25,6 +26,7 @@ class PIPELINE_ARGS:
self.top_k = top_k
self.alpha_frequency = alpha_frequency # Frequency Penalty (as in GPT-3)
self.alpha_presence = alpha_presence # Presence Penalty (as in GPT-3)
self.alpha_decay = alpha_decay # gradually decay the penalty
self.token_ban = token_ban # ban the generation of some tokens
self.token_stop = token_stop # stop generation whenever you see any token here
self.chunk_len = (
@@ -33,7 +35,7 @@ class PIPELINE_ARGS:
class PIPELINE:
def __init__(self, model, WORD_NAME):
def __init__(self, model, WORD_NAME: str):
self.model = model
if WORD_NAME == "cl100k_base":
import tiktoken
@@ -47,9 +49,15 @@ class PIPELINE:
os.path.dirname(os.path.abspath(__file__)) + "/rwkv_vocab_v20230424.txt"
)
else:
from tokenizers import Tokenizer
if WORD_NAME.endswith(".txt"):
sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))
from rwkv_tokenizer import TRIE_TOKENIZER
self.tokenizer = Tokenizer.from_file(WORD_NAME)
self.tokenizer = TRIE_TOKENIZER(WORD_NAME)
else:
from tokenizers import Tokenizer
self.tokenizer = Tokenizer.from_file(WORD_NAME)
def refine_context(self, context):
context = context.strip().split("\n")
@@ -78,7 +86,7 @@ class PIPELINE:
sorted_ids = np.argsort(probs)
sorted_probs = probs[sorted_ids][::-1]
cumulative_probs = np.cumsum(sorted_probs)
cutoff = float(sorted_probs[np.argmax(cumulative_probs > top_p)])
cutoff = float(sorted_probs[np.argmax(cumulative_probs >= top_p)])
probs[probs < cutoff] = 0
if top_k < len(probs) and top_k > 0:
probs[sorted_ids[:-top_k]] = 0
@@ -92,7 +100,7 @@ class PIPELINE:
sorted_probs = probs[sorted_ids]
sorted_probs = torch.flip(sorted_probs, dims=(0,))
cumulative_probs = torch.cumsum(sorted_probs, dim=-1).cpu().numpy()
cutoff = float(sorted_probs[np.argmax(cumulative_probs > top_p)])
cutoff = float(sorted_probs[np.argmax(cumulative_probs >= top_p)])
probs[probs < cutoff] = 0
if top_k < len(probs) and top_k > 0:
probs[sorted_ids[:-top_k]] = 0
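The > to >= change in both samplers fixes a boundary case: with top_p = 1.0 the old comparison is never true, argmax over an all-False array returns 0, and the cutoff collapses to the single largest probability. A tiny demonstration:

import numpy as np

probs = np.array([0.5, 0.25, 0.25])          # exactly representable, cumsum ends at 1.0
sorted_probs = np.sort(probs)[::-1]
cum = np.cumsum(sorted_probs)                # [0.5, 0.75, 1.0]
old = sorted_probs[np.argmax(cum > 1.0)]     # all False -> index 0 -> cutoff 0.5
new = sorted_probs[np.argmax(cum >= 1.0)]    # True at the end -> cutoff 0.25
print(old, new)                              # 0.5 0.25: old keeps one token, new keeps all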
@@ -127,10 +135,13 @@ class PIPELINE:
if token in args.token_stop:
break
all_tokens += [token]
for xxx in occurrence:
occurrence[xxx] *= args.alpha_decay
if token not in occurrence:
occurrence[token] = 1
else:
occurrence[token] += 1
# print(occurrence) # debug
# output
tmp = self.decode(all_tokens[out_last:])
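alpha_decay makes the repetition counters fade geometrically, so tokens used long ago stop being penalized. Assuming the usual GPT-3-style application elsewhere in generate (the subtraction line is outside this hunk and shown here as an assumption), the mechanism is:

occurrence = {}

def penalize(logits, args):
    # Assumed penalty form; the actual application line lives outside this hunk.
    for tok, cnt in occurrence.items():
        logits[tok] -= args.alpha_presence + cnt * args.alpha_frequency
    return logits

def record(token, alpha_decay=0.996):
    for tok in occurrence:
        occurrence[tok] *= alpha_decay        # old counts fade each step
    occurrence[token] = occurrence.get(token, 0) + 1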

backend-python/rwkv_pip/wkv_cuda.pyd vendored Normal file

Binary file not shown.


@@ -4,7 +4,7 @@ import os
import pathlib
import copy
import re
from typing import Dict, Iterable, List, Tuple, Union
from typing import Dict, Iterable, List, Tuple, Union, Type
from utils.log import quick_log
from fastapi import HTTPException
from pydantic import BaseModel, Field
@@ -21,33 +21,21 @@ os.environ["TORCH_EXTENSIONS_DIR"] = f"{pathlib.Path(__file__).parent.parent.res
class RWKVType(Enum):
NoneType = auto()
Raven = auto()
World = auto()
Music = auto()
class AbstractRWKV(ABC):
def __init__(self, model: str, strategy: str, tokens_path: str):
rwkv_beta = global_var.get(global_var.Args).rwkv_beta
# dynamic import to make RWKV_CUDA_ON work
if rwkv_beta:
from rwkv_pip.beta.model import (
RWKV as Model,
)
else:
from rwkv.model import (
RWKV as Model,
)
from rwkv_pip.utils import PIPELINE
filename, _ = os.path.splitext(os.path.basename(model))
self.name = filename
self.model = Model(model, strategy)
self.pipeline = PIPELINE(self.model, tokens_path)
def __init__(self, model, pipeline):
self.name = "rwkv"
self.model = model
self.pipeline = pipeline
self.model_state = None
self.model_tokens = []
self.rwkv_type: RWKVType = None
self.rwkv_type: RWKVType = RWKVType.NoneType
self.tokenizer_len = len(model.w["emb.weight"])
self.max_tokens_per_generation = 500
self.temperature = 1
@@ -348,8 +336,8 @@ class AbstractRWKV(ABC):
class TextRWKV(AbstractRWKV):
def __init__(self, model: str, strategy: str, tokens_path: str) -> None:
super().__init__(model, strategy, tokens_path)
def __init__(self, model, pipeline) -> None:
super().__init__(model, pipeline)
self.CHUNK_LEN = 256
@@ -361,16 +349,16 @@ class TextRWKV(AbstractRWKV):
self.penalty_alpha_frequency = 1
self.interface = ":"
if "world" in self.name.lower():
self.rwkv_type = RWKVType.World
self.user = "Question"
self.bot = "Answer"
self.END_OF_LINE = 11
else:
if self.tokenizer_len < 65536:
self.rwkv_type = RWKVType.Raven
self.user = "Bob"
self.bot = "Alice"
self.END_OF_LINE = 187
else:
self.rwkv_type = RWKVType.World
self.user = "User"
self.bot = "Assistant"
self.END_OF_LINE = 11
self.AVOID_REPEAT_TOKENS = []
AVOID_REPEAT = ""
@@ -469,8 +457,8 @@ The following is a coherent verbose detailed conversation between a girl named {
class MusicRWKV(AbstractRWKV):
def __init__(self, model: str, strategy: str, tokens_path: str):
super().__init__(model, strategy, tokens_path)
def __init__(self, model, pipeline):
super().__init__(model, pipeline)
self.max_tokens_per_generation = 500
self.temperature = 1
@@ -510,6 +498,52 @@ class MusicRWKV(AbstractRWKV):
return " " + delta
def get_tokenizer(tokenizer_len: int):
tokenizer_dir = f"{pathlib.Path(__file__).parent.parent.resolve()}/rwkv_pip/"
if tokenizer_len < 50277:
return tokenizer_dir + "tokenizer-midi.json"
elif tokenizer_len < 65536:
return tokenizer_dir + "20B_tokenizer.json"
else:
return "rwkv_vocab_v20230424"
def RWKV(model: str, strategy: str, tokenizer: Union[str, None]) -> AbstractRWKV:
rwkv_beta = global_var.get(global_var.Args).rwkv_beta
# dynamic import to make RWKV_CUDA_ON work
if rwkv_beta:
from rwkv_pip.beta.model import (
RWKV as Model,
)
else:
from rwkv_pip.model import (
RWKV as Model,
)
from rwkv_pip.utils import PIPELINE
filename, _ = os.path.splitext(os.path.basename(model))
model = Model(model, strategy)
if not tokenizer:
tokenizer = get_tokenizer(len(model.w["emb.weight"]))
pipeline = PIPELINE(model, tokenizer)
rwkv_map: dict[str, Type[AbstractRWKV]] = {
"20B_tokenizer": TextRWKV,
"rwkv_vocab_v20230424": TextRWKV,
"tokenizer-midi": MusicRWKV,
}
tokenizer_name = os.path.splitext(os.path.basename(tokenizer))[0]
rwkv: AbstractRWKV
if tokenizer_name in rwkv_map:
rwkv = rwkv_map[tokenizer_name](model, pipeline)
else:
rwkv = TextRWKV(model, pipeline)
rwkv.name = filename
return rwkv
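The factory now derives the wrapper class from the tokenizer basename instead of the model filename, falling back to TextRWKV for custom tokenizers. A hypothetical call (paths illustrative):

rwkv = RWKV("models/RWKV-5-World-3B.pth", "cuda fp16", None)  # tokenizer inferred from vocab size
print(type(rwkv).__name__, rwkv.name)                         # e.g. TextRWKV RWKV-5-World-3B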
class ModelConfigBody(BaseModel):
max_tokens: int = Field(default=None, gt=0, le=102400)
temperature: float = Field(default=None, ge=0, le=2)
@@ -518,7 +552,7 @@ class ModelConfigBody(BaseModel):
frequency_penalty: float = Field(default=None, ge=-2, le=2)
class Config:
schema_extra = {
json_schema_extra = {
"example": {
"max_tokens": 1000,
"temperature": 1.2,

Binary file not shown.

Binary file not shown.


@@ -1,734 +0,0 @@
########################################################################################################
# The RWKV Language Model - https://github.com/BlinkDL/RWKV-LM
########################################################################################################
import types, gc, os, time, re
import torch
from torch.nn import functional as F
torch.backends.cudnn.benchmark = True
torch.backends.cudnn.allow_tf32 = True
torch.backends.cuda.matmul.allow_tf32 = True
current_path = os.path.dirname(os.path.abspath(__file__))
# https://zhuanlan.zhihu.com/p/612879065
def LoadPreCompileLibrary(file):
import importlib
import os
import torch
# load the custom_op_library and register the custom ops
lib_dir = os.path.dirname(__file__)
if os.name == "nt":
# Register the main torchvision library location on the default DLL path
import ctypes
import sys
kernel32 = ctypes.WinDLL("kernel32.dll", use_last_error=True)
with_load_library_flags = hasattr(kernel32, "AddDllDirectory")
prev_error_mode = kernel32.SetErrorMode(0x0001)
if with_load_library_flags:
kernel32.AddDllDirectory.restype = ctypes.c_void_p
if sys.version_info >= (3, 8):
os.add_dll_directory(lib_dir)
elif with_load_library_flags:
res = kernel32.AddDllDirectory(lib_dir)
if res is None:
err = ctypes.WinError(ctypes.get_last_error())
err.strerror += f' Error adding "{lib_dir}" to the DLL directories.'
raise ValueError(err)
kernel32.SetErrorMode(prev_error_mode)
loader_details = (
importlib.machinery.ExtensionFileLoader,
importlib.machinery.EXTENSION_SUFFIXES,
)
extfinder = importlib.machinery.FileFinder(lib_dir, loader_details)
ext_specs = extfinder.find_spec(file)
if ext_specs is None:
return False
try:
torch.ops.load_library(ext_specs.origin)
except OSError as exc:
return False
return True
########################################################################################################
if os.environ.get('RWKV_JIT_ON') != '0':
os.environ["RWKV_JIT_ON"] = '1'
MyModule = torch.jit.ScriptModule
MyFunction = torch.jit.script_method
MyStatic = torch.jit.script
else:
MyModule = torch.nn.Module
def __nop(ob):
return ob
MyFunction = __nop
MyStatic = __nop
if os.environ.get('RWKV_CUDA_ON') == '1':
if LoadPreCompileLibrary('wkv_cuda') is False:
from torch.utils.cpp_extension import load
load(
name=f"wkv_cuda",
sources=[f"{current_path}/cuda/wrapper.cpp", f"{current_path}/cuda/operators.cu"],
verbose=True,
extra_cuda_cflags=["-t 4", "-std=c++17", "--use_fast_math", "-O3", "--extra-device-vectorization"],
is_python_module=False)
@MyStatic
def cuda_wkv(T: int, C: int, w, u, k, v, aa, bb, pp):
assert 1 * C % min(C, 32) == 0
assert k.dtype == v.dtype == torch.float16 or k.dtype == v.dtype == torch.float32
assert w.dtype == u.dtype == aa.dtype == bb.dtype == pp.dtype == torch.float32
w = w.contiguous()
u = u.contiguous()
k = k.contiguous()
v = v.contiguous()
y = torch.empty((T, C), device=w.device, memory_format=torch.contiguous_format, dtype=k.dtype)
torch.ops.rwkv.wkv_forward(1, T, C, w, u, k, v, y, aa, bb, pp)
return y, aa, bb, pp
@MyStatic
def cuda_mm8_seq(B: int, N: int, M: int, x, w, mx, rx, my, ry):
assert x.dtype == mx.dtype == rx.dtype == my.dtype == ry.dtype
assert x.dtype == torch.float32 or x.dtype == torch.float16
assert w.dtype == torch.uint8
assert x.shape == [B, N]
assert w.shape == [N, M]
assert rx.shape == mx.shape == [M]
assert ry.shape == my.shape == [N, 1]
y = torch.empty((B, M), device=w.device, dtype=x.dtype)
torch.ops.rwkv.mm8_seq(B, N, M, x, w, mx, rx, my, ry, y)
return y
@MyStatic
def cuda_mm8_one(N: int, M: int, x, w, mx, rx, my, ry):
assert x.dtype == mx.dtype == rx.dtype == my.dtype == ry.dtype
assert x.dtype == torch.float32 or x.dtype == torch.float16
assert w.dtype == torch.uint8
assert x.shape == [N]
assert w.shape == [N, M]
assert rx.shape == mx.shape == [M]
assert ry.shape == my.shape == [N, 1]
y = torch.zeros((M,), device=w.device, dtype=torch.float32)
torch.ops.rwkv.mm8_one(N, M, x, w, mx, rx, my, ry, y)
return y.to(dtype=x.dtype)
else:
os.environ["RWKV_CUDA_ON"] = '0'
########################################################################################################
class RWKV(MyModule):
def __init__(self, model, strategy, verbose = True, convert_and_save_and_exit = None):
super().__init__()
if verbose:
prxxx = lambda *args, **kwargs: print(*args, **kwargs)
else:
prxxx = lambda *args, **kwargs: None
STRATEGY_REGEX = r"^(?:(?:^|->) *(?:cuda(?::[\d]+)?|cpu|mps) (?:fp(?:16|32)|bf16)(?:i8|i4|i3)?(?: \*[\d]+\+?)? *)+$"
if not re.match(STRATEGY_REGEX, strategy):
raise ValueError("Invalid strategy. Please read https://pypi.org/project/rwkv/")
strategy = ('->'.join([x.strip() for x in strategy.split('->')])).replace('->', ' -> ')
self.args = types.SimpleNamespace()
args = self.args
args.MODEL_NAME = model
args.strategy_string = strategy
# Rescale for fp16 mode: set x = x/2 every X layer (to avoid fp16 overflow)
self.RESCALE_LAYER = 6 if 'fp16' in strategy else 0
prxxx(f'RWKV_JIT_ON {os.environ["RWKV_JIT_ON"]} RWKV_CUDA_ON {os.environ["RWKV_CUDA_ON"]} RESCALE_LAYER {self.RESCALE_LAYER}\n')
args.MODEL_NAME = args.MODEL_NAME.strip()
if not args.MODEL_NAME.endswith('.pth'):
args.MODEL_NAME += '.pth'
prxxx(f'Loading {args.MODEL_NAME} ...')
with torch.no_grad():
self.w = torch.load(args.MODEL_NAME, map_location='cpu') # load model to CPU first
gc.collect()
w = self.w
ALREADY_CONVERTED = False
if '_strategy' in w:
ALREADY_CONVERTED = True
assert convert_and_save_and_exit == None # you should only convert a raw model
prxxx(f"Converted model: strategy {w['_strategy']}, version {w['_version']}\n")
assert w['_strategy'] == args.strategy_string # if you are using a new strategy, re-convert the model
assert float(w['_version']) >= 0.7 # sometimes you should re-convert using latest convert_model.py
assert w['_rescale_layer'] == self.RESCALE_LAYER
del w['_strategy']
del w['_version']
del w['_rescale_layer']
args.n_embd = w['emb.weight'].shape[1]
args.n_layer = 0
keys = list(w.keys())
for x in keys:
layer_id = int(x.split('.')[1]) if ('blocks.' in x) else 0
args.n_layer = max(args.n_layer, layer_id+1)
####################### Compute strategy
s = [x.strip().split(' ') for x in strategy.split('->')]
plan = [0] * len(s)
stream_i = -1
stream_count = 0
to_allocate = args.n_layer + 1
allocated = 0
free_slots = 0
for i in range(len(s)):
si = s[i]
si1 = si[1]
if si1.startswith('fp32'): si[1] = [torch.float]
elif si1.startswith('fp16'): si[1] = [torch.float16]
elif si1.startswith('bf16'): si[1] = [torch.bfloat16]
if si1.endswith('i8'): si[1] += [torch.uint8]
else: si[1] += [si[1][0]]
if len(si) > 2:
ss = si[2]
assert ss.startswith('*')
if ss.endswith('+'):
plan[i] = int(ss[1:-1])
stream_i = i
else:
plan[i] = int(ss[1:])
allocated += plan[i]
if allocated >= to_allocate:
plan[i] += to_allocate - allocated
break
else:
free_slots += 1
if stream_i < 0:
if free_slots > 0 and to_allocate > allocated:
for i in range(len(s)):
if plan[i] == 0:
plan[i] = (to_allocate - allocated) // free_slots
allocated += plan[i]
free_slots -= 1
if to_allocate > allocated:
plan[len(s)-1] += to_allocate - allocated
else:
if to_allocate > allocated:
stream_count = to_allocate - allocated
plan[stream_i] += stream_count
prxxx(f'Strategy: (total {args.n_layer}+1={args.n_layer+1} layers)')
for i in range(len(s)):
ss = s[i]
if i != stream_i:
prxxx(f'* {ss[0]} {str(ss[1]).replace("torch.","")}, store {plan[i]} layers')
else:
prxxx(f'* {ss[0]} {str(ss[1]).replace("torch.","")}, store {plan[i]-stream_count} layers, stream {stream_count} layers')
plan[i] += (0 if i == 0 else plan[i-1])
self.strategy = [None] * (args.n_layer + 1)
strategy = self.strategy
for n in range(args.n_layer + 1):
for i in range(len(s)):
if n < plan[i]:
strategy[n] = types.SimpleNamespace()
strategy[n].device = s[i][0]
strategy[n].atype = s[i][1][0]
strategy[n].wtype = s[i][1][1]
strategy[n].stream = False
if i == stream_i and n >= (plan[i] - stream_count):
strategy[n].stream = True
break
prxxx(f"{n}-{strategy[n].device}-{str(strategy[n].atype).replace('torch.','')}-{str(strategy[n].wtype).replace('torch.','')}{'-stream' if strategy[n].stream else ''}",end=' ')
prxxx()
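The planner above distributes n_layer + 1 slots (every block plus the ln_out/head slot) across the strategy's devices: explicit *N counts are pinned first, then devices without a count split the remainder evenly. A worked example with illustrative numbers:

# Hypothetical: n_layer = 24 and strategy "cuda fp16 *10 -> cpu fp32".
to_allocate = 24 + 1              # layers plus the head slot
plan = [10, 0]                    # "*10" pins ten layers on cuda; cpu has no count
plan[1] = to_allocate - plan[0]   # the free device absorbs the remaining 15
# After the cumulative pass, plan == [10, 25]: layer n runs on the first
# device i with n < plan[i], so layers 0-9 go to cuda and 10-24 to cpu.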
####################### Load weights to self.w
if not ALREADY_CONVERTED:
try: # precompute embedding
w['emb.weight'] = F.layer_norm(w['emb.weight'], (args.n_embd,), weight=w['blocks.0.ln0.weight'], bias=w['blocks.0.ln0.bias'])
except:
w['emb.weight'] = F.layer_norm(w['emb.weight'].float(), (args.n_embd,), weight=w['blocks.0.ln0.weight'].float(), bias=w['blocks.0.ln0.bias'].float())
del w['blocks.0.ln0.weight']
del w['blocks.0.ln0.bias']
print_need_newline = False
keys = list(w.keys())
for x in keys:
w[x].requires_grad = False
layer_id = int(x.split('.')[1]) if ('blocks.' in x) else 0
if ('ln_out.' in x) or ('head.' in x):
layer_id = args.n_layer
dd = strategy[layer_id]
DEVICE = dd.device
ATYPE = dd.atype
WTYPE = dd.wtype
if not ALREADY_CONVERTED:
if self.RESCALE_LAYER > 0:
if 'att.output.weight' in x:
w[x] = w[x] / (2 ** int(layer_id // self.RESCALE_LAYER))
if 'ffn.value.weight' in x:
w[x] = w[x] / (2 ** int(layer_id // self.RESCALE_LAYER))
if '.time_' in x:
w[x] = w[x].squeeze()
if 'key.weight' in x or 'value.weight' in x or 'receptance.weight' in x or 'output.weight' in x or 'head.weight' in x:
w[x] = w[x].t()
if '.time_decay' in x: # need fp32 for this
w[x] = -torch.exp(w[x].float())
elif '.time_first' in x: # need fp32 for this
w[x] = w[x].float()
else:
if (len(w[x].shape) == 2) and ('emb' not in x):
if WTYPE != torch.uint8:
w[x] = w[x].to(dtype=WTYPE)
else:
w[x] = w[x].float()
if w[x].shape[0] > w[x].shape[1]:
w[x+'_my'] = torch.amin(w[x], dim=1).unsqueeze(1)
w[x] = w[x] - w[x+'_my']
w[x+'_mx'] = torch.amin(w[x], dim=0)
w[x] = w[x] - w[x+'_mx']
w[x+'_rx'] = torch.amax(w[x], dim=0)
w[x] = w[x] / w[x+'_rx']
w[x+'_ry'] = torch.amax(w[x], dim=1).unsqueeze(1)
w[x] = w[x] / w[x+'_ry']
else:
w[x+'_mx'] = torch.amin(w[x], dim=0)
w[x] = w[x] - w[x+'_mx']
w[x+'_my'] = torch.amin(w[x], dim=1).unsqueeze(1)
w[x] = w[x] - w[x+'_my']
w[x+'_rx'] = torch.amax(w[x], dim=0)
w[x] = w[x] / w[x+'_rx']
w[x+'_ry'] = torch.amax(w[x], dim=1).unsqueeze(1)
w[x] = w[x] / w[x+'_ry']
w[x] = torch.clip(torch.floor(w[x] * 256), min=0, max=255).to(dtype=torch.uint8)
w[x+'_mx'] = w[x+'_mx'].to(dtype=ATYPE).contiguous()
w[x+'_rx'] = (w[x+'_rx'] / 16).to(dtype=ATYPE).contiguous()
w[x+'_my'] = w[x+'_my'].to(dtype=ATYPE).contiguous()
w[x+'_ry'] = (w[x+'_ry'] / 16).to(dtype=ATYPE).contiguous()
else:
w[x] = w[x].to(dtype=ATYPE)
if convert_and_save_and_exit == None:
if 'emb.' in x:
w[x] = w[x].contiguous()
elif (dd.stream) and (x.endswith('key.weight') or x.endswith('value.weight') or x.endswith('receptance.weight') or x.endswith('output.weight')):
try:
w[x] = w[x].contiguous().pin_memory() # if you see "CUDA error: out of memory" here, that's out of CPU RAM, not VRAM. Get more RAM :)
except:
print('Note: You are running out of RAM. Get more CPU RAM. Now this will run much slower.')
elif DEVICE != 'cpu':
w[x] = w[x].to(device=DEVICE).contiguous()
if (dd.stream) or (DEVICE != 'cpu'):
try:
w[x+'_mx'] = w[x+'_mx'].to(device=DEVICE).contiguous()
w[x+'_rx'] = w[x+'_rx'].to(device=DEVICE).contiguous()
w[x+'_my'] = w[x+'_my'].to(device=DEVICE).contiguous()
w[x+'_ry'] = w[x+'_ry'].to(device=DEVICE).contiguous()
except:
pass
if 'ffn.value.weight' in x:
gc.collect()
if 'cuda' in args.strategy_string:
torch.cuda.empty_cache()
shape = [i for i in w[x].shape if i != 1]
if len(shape) > 1:
shape = f" {str(shape[0]).rjust(5)} {str(shape[1]).rjust(5)}"
else:
shape = f" {str(shape[0]).rjust(5)} "
if layer_id == 0 or layer_id >= args.n_layer-1:
if print_need_newline:
prxxx('\n', end = '')
print_need_newline = False
dt = str(w[x].dtype).replace('torch.', '')
dt = dt.replace('float32', 'f32').replace('bfloat16', 'bf16').replace('float16', 'f16').replace('uint8', 'i8')
prxxx(x.ljust(32), dt.rjust(4), str(w[x].device).rjust(8), shape, ' (pinned)' if w[x].is_pinned() else '')
else:
print_need_newline = True
prxxx('.', end = '', flush = True)
if convert_and_save_and_exit:
w['_strategy'] = args.strategy_string
w['_rescale_layer'] = self.RESCALE_LAYER
w['_version'] = '0.7'
if not convert_and_save_and_exit.endswith('.pth'):
convert_and_save_and_exit += '.pth'
prxxx(f'Saving to {convert_and_save_and_exit}...')
torch.save(w, convert_and_save_and_exit)
prxxx(f'Converted and saved. Now this will exit.')
exit(0)
gc.collect()
if 'cuda' in args.strategy_string:
torch.cuda.empty_cache()
@MyFunction
def torch_mm8_seq(self, x, w, mx, rx, my, ry):
return x @ ((w.to(dtype=x.dtype) + 0.5) * ry * rx + my + mx)
@MyFunction
def torch_mm8_one(self, x, w, mx, rx, my, ry):
return x @ ((w.to(dtype=x.dtype) + 0.5) * ry * rx + my + mx)
if os.environ.get('RWKV_CUDA_ON') == '1':
@MyFunction
def mm8_seq(self, x, w, mx, rx, my, ry):
if w.device.type == 'cuda' and x.dtype == torch.float16:
B, N, M = x.shape[0], w.shape[0], w.shape[1]
return cuda_mm8_seq(B, N, M, x, w, mx, rx, my, ry)
else:
return self.torch_mm8_seq(x, w, mx, rx, my, ry)
@MyFunction
def mm8_one(self, x, w, mx, rx, my, ry):
if w.device.type == 'cuda':
N, M = w.shape[0], w.shape[1]
return cuda_mm8_one(N, M, x, w, mx, rx, my, ry)
else:
return self.torch_mm8_one(x, w, mx, rx, my, ry)
else:
@MyFunction
def mm8_seq(self, x, w, mx, rx, my, ry):
return self.torch_mm8_seq(x, w, mx, rx, my, ry)
@MyFunction
def mm8_one(self, x, w, mx, rx, my, ry):
return self.torch_mm8_one(x, w, mx, rx, my, ry)
########################################################################################################
@MyFunction
def ffn_one(self, x, sx, ln_w, ln_b, k_mix, r_mix, kw, vw, rw, kmx, krx, kmy, kry, vmx, vrx, vmy, vry, rmx, rrx, rmy, rry):
xx = F.layer_norm(x, (x.shape[-1],), weight=ln_w, bias=ln_b)
kx = xx * k_mix + sx * (1 - k_mix)
rx = xx * r_mix + sx * (1 - r_mix)
r = torch.sigmoid(rx @ rw)
vx = torch.square(torch.relu(kx @ kw))
out = r * (vx @ vw)
return x + out, xx
@MyFunction
def ffn_one_i8(self, x, sx, ln_w, ln_b, k_mix, r_mix, kw, vw, rw, kmx, krx, kmy, kry, vmx, vrx, vmy, vry, rmx, rrx, rmy, rry):
xx = F.layer_norm(x, (x.shape[-1],), weight=ln_w, bias=ln_b)
kx = xx * k_mix + sx * (1 - k_mix)
rx = xx * r_mix + sx * (1 - r_mix)
r = torch.sigmoid(self.mm8_one(rx, rw, rmx, rrx, rmy, rry))
vx = torch.square(torch.relu(self.mm8_one(kx, kw, kmx, krx, kmy, kry)))
out = r * (self.mm8_one(vx, vw, vmx, vrx, vmy, vry))
return x + out, xx
########################################################################################################
@MyFunction
def ffn_seq(self, x, sx, ln_w, ln_b, k_mix, r_mix, kw, vw, rw, kmx, krx, kmy, kry, vmx, vrx, vmy, vry, rmx, rrx, rmy, rry):
xx = F.layer_norm(x, (x.shape[-1],), weight=ln_w, bias=ln_b)
sx = torch.cat((sx.unsqueeze(0), xx[:-1,:]))
kx = xx * k_mix + sx * (1 - k_mix)
rx = xx * r_mix + sx * (1 - r_mix)
r = torch.sigmoid(rx @ rw)
vx = torch.square(torch.relu(kx @ kw))
out = r * (vx @ vw)
return x + out, xx[-1,:]
@MyFunction
def ffn_seq_i8(self, x, sx, ln_w, ln_b, k_mix, r_mix, kw, vw, rw, kmx, krx, kmy, kry, vmx, vrx, vmy, vry, rmx, rrx, rmy, rry):
xx = F.layer_norm(x, (x.shape[-1],), weight=ln_w, bias=ln_b)
sx = torch.cat((sx.unsqueeze(0), xx[:-1,:]))
kx = xx * k_mix + sx * (1 - k_mix)
rx = xx * r_mix + sx * (1 - r_mix)
r = torch.sigmoid(self.mm8_seq(rx, rw, rmx, rrx, rmy, rry))
vx = torch.square(torch.relu(self.mm8_seq(kx, kw, kmx, krx, kmy, kry)))
out = r * (self.mm8_seq(vx, vw, vmx, vrx, vmy, vry))
return x + out, xx[-1,:]
########################################################################################################
@MyFunction
def att_one(self, x, sx, aa, bb, pp, ln_w, ln_b, k_mix, v_mix, r_mix, t_decay, t_first, kw, vw, rw, ow, kmx, krx, kmy, kry, vmx, vrx, vmy, vry, rmx, rrx, rmy, rry, omx, orx, omy, ory):
xx = F.layer_norm(x, (x.shape[-1],), weight=ln_w, bias=ln_b)
kx = xx * k_mix + sx * (1 - k_mix)
vx = xx * v_mix + sx * (1 - v_mix)
rx = xx * r_mix + sx * (1 - r_mix)
r = torch.sigmoid(rx @ rw)
k = (kx @ kw).float()
v = (vx @ vw).float()
ww = t_first + k
p = torch.maximum(pp, ww)
e1 = torch.exp(pp - p)
e2 = torch.exp(ww - p)
wkv = ((e1 * aa + e2 * v) / (e1 * bb + e2)).to(dtype=x.dtype)
ww = t_decay + pp
p = torch.maximum(ww, k)
e1 = torch.exp(ww - p)
e2 = torch.exp(k - p)
out = (r * wkv) @ ow
return x + out, xx, e1 * aa + e2 * v, e1 * bb + e2, p
@MyFunction
def att_one_i8(self, x, sx, aa, bb, pp, ln_w, ln_b, k_mix, v_mix, r_mix, t_decay, t_first, kw, vw, rw, ow, kmx, krx, kmy, kry, vmx, vrx, vmy, vry, rmx, rrx, rmy, rry, omx, orx, omy, ory):
xx = F.layer_norm(x, (x.shape[-1],), weight=ln_w, bias=ln_b)
kx = xx * k_mix + sx * (1 - k_mix)
vx = xx * v_mix + sx * (1 - v_mix)
rx = xx * r_mix + sx * (1 - r_mix)
r = torch.sigmoid(self.mm8_one(rx, rw, rmx, rrx, rmy, rry))
k = (self.mm8_one(kx, kw, kmx, krx, kmy, kry)).float()
v = (self.mm8_one(vx, vw, vmx, vrx, vmy, vry)).float()
ww = t_first + k
p = torch.maximum(pp, ww)
e1 = torch.exp(pp - p)
e2 = torch.exp(ww - p)
wkv = ((e1 * aa + e2 * v) / (e1 * bb + e2)).to(dtype=x.dtype)
ww = t_decay + pp
p = torch.maximum(ww, k)
e1 = torch.exp(ww - p)
e2 = torch.exp(k - p)
out = self.mm8_one(r * wkv, ow, omx, orx, omy, ory)
return x + out, xx, e1 * aa + e2 * v, e1 * bb + e2, p
########################################################################################################
@MyFunction
def att_seq(self, x, sx, aa, bb, pp, ln_w, ln_b, k_mix, v_mix, r_mix, t_decay, t_first, kw, vw, rw, ow, kmx, krx, kmy, kry, vmx, vrx, vmy, vry, rmx, rrx, rmy, rry, omx, orx, omy, ory):
xx = F.layer_norm(x, (x.shape[-1],), weight=ln_w, bias=ln_b)
sx = torch.cat((sx.unsqueeze(0), xx[:-1,:]))
kx = xx * k_mix + sx * (1 - k_mix)
vx = xx * v_mix + sx * (1 - v_mix)
rx = xx * r_mix + sx * (1 - r_mix)
r = torch.sigmoid(rx @ rw)
k = (kx @ kw).float()
v = (vx @ vw).float()
T = x.shape[0]
for t in range(T):
kk = k[t]
vv = v[t]
ww = t_first + kk
p = torch.maximum(pp, ww)
e1 = torch.exp(pp - p)
e2 = torch.exp(ww - p)
sx[t] = ((e1 * aa + e2 * vv) / (e1 * bb + e2)).to(dtype=x.dtype)
ww = t_decay + pp
p = torch.maximum(ww, kk)
e1 = torch.exp(ww - p)
e2 = torch.exp(kk - p)
aa = e1 * aa + e2 * vv
bb = e1 * bb + e2
pp = p
out = (r * sx) @ ow
return x + out, xx[-1,:], aa, bb, pp
@MyFunction
def att_seq_i8(self, x, sx, aa, bb, pp, ln_w, ln_b, k_mix, v_mix, r_mix, t_decay, t_first, kw, vw, rw, ow, kmx, krx, kmy, kry, vmx, vrx, vmy, vry, rmx, rrx, rmy, rry, omx, orx, omy, ory):
xx = F.layer_norm(x, (x.shape[-1],), weight=ln_w, bias=ln_b)
sx = torch.cat((sx.unsqueeze(0), xx[:-1,:]))
kx = xx * k_mix + sx * (1 - k_mix)
vx = xx * v_mix + sx * (1 - v_mix)
rx = xx * r_mix + sx * (1 - r_mix)
r = torch.sigmoid(self.mm8_seq(rx, rw, rmx, rrx, rmy, rry))
k = self.mm8_seq(kx, kw, kmx, krx, kmy, kry).float()
v = self.mm8_seq(vx, vw, vmx, vrx, vmy, vry).float()
T = x.shape[0]
for t in range(T):
kk = k[t]
vv = v[t]
ww = t_first + kk
p = torch.maximum(pp, ww)
e1 = torch.exp(pp - p)
e2 = torch.exp(ww - p)
sx[t] = ((e1 * aa + e2 * vv) / (e1 * bb + e2)).to(dtype=x.dtype)
ww = t_decay + pp
p = torch.maximum(ww, kk)
e1 = torch.exp(ww - p)
e2 = torch.exp(kk - p)
aa = e1 * aa + e2 * vv
bb = e1 * bb + e2
pp = p
out = self.mm8_seq(r * sx, ow, omx, orx, omy, ory)
return x + out, xx[-1,:], aa, bb, pp
########################################################################################################
if os.environ["RWKV_CUDA_ON"] == '1':
@MyFunction
def cuda_att_seq(self, x, sx, aa, bb, pp, ln_w, ln_b, k_mix, v_mix, r_mix, t_decay, t_first, kw, vw, rw, ow, kmx, krx, kmy, kry, vmx, vrx, vmy, vry, rmx, rrx, rmy, rry, omx, orx, omy, ory):
T, C = x.size()
xx = F.layer_norm(x, (C,), weight=ln_w, bias=ln_b)
sx = torch.cat((sx.unsqueeze(0), xx[:-1,:]))
kx = xx * k_mix + sx * (1 - k_mix)
vx = xx * v_mix + sx * (1 - v_mix)
rx = xx * r_mix + sx * (1 - r_mix)
r = torch.sigmoid(rx @ rw)
k = kx @ kw
v = vx @ vw
y, aa, bb, pp = cuda_wkv(T, C, t_decay, t_first, k, v, aa, bb, pp)
out = (r * y) @ ow
return x + out, xx[-1,:], aa, bb, pp
@MyFunction
def cuda_att_seq_i8(self, x, sx, aa, bb, pp, ln_w, ln_b, k_mix, v_mix, r_mix, t_decay, t_first, kw, vw, rw, ow, kmx, krx, kmy, kry, vmx, vrx, vmy, vry, rmx, rrx, rmy, rry, omx, orx, omy, ory):
T, C = x.size()
xx = F.layer_norm(x, (C,), weight=ln_w, bias=ln_b)
sx = torch.cat((sx.unsqueeze(0), xx[:-1,:]))
kx = xx * k_mix + sx * (1 - k_mix)
vx = xx * v_mix + sx * (1 - v_mix)
rx = xx * r_mix + sx * (1 - r_mix)
r = torch.sigmoid(self.mm8_seq(rx, rw, rmx, rrx, rmy, rry))
k = self.mm8_seq(kx, kw, kmx, krx, kmy, kry)
v = self.mm8_seq(vx, vw, vmx, vrx, vmy, vry)
y, aa, bb, pp = cuda_wkv(T, C, t_decay, t_first, k, v, aa, bb, pp)
out = self.mm8_seq(r * y, ow, omx, orx, omy, ory)
return x + out, xx[-1,:], aa, bb, pp
########################################################################################################
def forward(self, tokens, state, full_output=False):
with torch.no_grad():
w = self.w
args = self.args
if state == None:
state = [None] * args.n_layer * 5
for i in range(args.n_layer): # state: 0=att_xx 1=att_aa 2=att_bb 3=att_pp 4=ffn_xx
dd = self.strategy[i]
dev = dd.device
atype = dd.atype
state[i*5+0] = torch.zeros(args.n_embd, dtype=atype, requires_grad=False, device=dev).contiguous()
state[i*5+1] = torch.zeros(args.n_embd, dtype=torch.float, requires_grad=False, device=dev).contiguous()
state[i*5+2] = torch.zeros(args.n_embd, dtype=torch.float, requires_grad=False, device=dev).contiguous()
state[i*5+3] = torch.zeros(args.n_embd, dtype=torch.float, requires_grad=False, device=dev).contiguous() - 1e30
state[i*5+4] = torch.zeros(args.n_embd, dtype=atype, requires_grad=False, device=dev).contiguous()
seq_mode = len(tokens) > 1
x = w['emb.weight'][tokens if seq_mode else tokens[0]]
for i in range(args.n_layer):
bbb = f'blocks.{i}.'
att = f'blocks.{i}.att.'
ffn = f'blocks.{i}.ffn.'
dd = self.strategy[i]
dev = dd.device
atype = dd.atype
wtype = dd.wtype
if seq_mode:
if 'cuda' in str(dev) and os.environ["RWKV_CUDA_ON"] == '1':
ATT = self.cuda_att_seq if wtype != torch.uint8 else self.cuda_att_seq_i8
else:
ATT = self.att_seq if wtype != torch.uint8 else self.att_seq_i8
FFN = self.ffn_seq if wtype != torch.uint8 else self.ffn_seq_i8
else:
ATT = self.att_one if wtype != torch.uint8 else self.att_one_i8
FFN = self.ffn_one if wtype != torch.uint8 else self.ffn_one_i8
x = x.to(dtype=atype, device=dev)
kw = w[f'{att}key.weight']
vw = w[f'{att}value.weight']
rw = w[f'{att}receptance.weight']
ow = w[f'{att}output.weight']
if dd.stream:
kw = kw.to(device=dev, non_blocking=True)
vw = vw.to(device=dev, non_blocking=True)
rw = rw.to(device=dev, non_blocking=True)
ow = ow.to(device=dev, non_blocking=True)
kmx = w[f'{att}key.weight_mx'] if wtype == torch.uint8 else x
krx = w[f'{att}key.weight_rx'] if wtype == torch.uint8 else x
kmy = w[f'{att}key.weight_my'] if wtype == torch.uint8 else x
kry = w[f'{att}key.weight_ry'] if wtype == torch.uint8 else x
vmx = w[f'{att}value.weight_mx'] if wtype == torch.uint8 else x
vrx = w[f'{att}value.weight_rx'] if wtype == torch.uint8 else x
vmy = w[f'{att}value.weight_my'] if wtype == torch.uint8 else x
vry = w[f'{att}value.weight_ry'] if wtype == torch.uint8 else x
rmx = w[f'{att}receptance.weight_mx'] if wtype == torch.uint8 else x
rrx = w[f'{att}receptance.weight_rx'] if wtype == torch.uint8 else x
rmy = w[f'{att}receptance.weight_my'] if wtype == torch.uint8 else x
rry = w[f'{att}receptance.weight_ry'] if wtype == torch.uint8 else x
omx = w[f'{att}output.weight_mx'] if wtype == torch.uint8 else x
orx = w[f'{att}output.weight_rx'] if wtype == torch.uint8 else x
omy = w[f'{att}output.weight_my'] if wtype == torch.uint8 else x
ory = w[f'{att}output.weight_ry'] if wtype == torch.uint8 else x
x, state[i*5+0], state[i*5+1], state[i*5+2], state[i*5+3] = ATT(
x, state[i*5+0], state[i*5+1], state[i*5+2], state[i*5+3],
w[f'{bbb}ln1.weight'], w[f'{bbb}ln1.bias'],
w[f'{att}time_mix_k'], w[f'{att}time_mix_v'], w[f'{att}time_mix_r'],
w[f'{att}time_decay'], w[f'{att}time_first'],
kw, vw, rw, ow,
kmx, krx, kmy, kry,
vmx, vrx, vmy, vry,
rmx, rrx, rmy, rry,
omx, orx, omy, ory,
)
if dd.stream:
del kw, vw, rw, ow
kw = w[f'{ffn}key.weight']
vw = w[f'{ffn}value.weight']
rw = w[f'{ffn}receptance.weight']
if dd.stream:
kw = kw.to(device=dev, non_blocking=True)
vw = vw.to(device=dev, non_blocking=True)
rw = rw.to(device=dev, non_blocking=True)
kmx = w[f'{ffn}key.weight_mx'] if wtype == torch.uint8 else x
krx = w[f'{ffn}key.weight_rx'] if wtype == torch.uint8 else x
kmy = w[f'{ffn}key.weight_my'] if wtype == torch.uint8 else x
kry = w[f'{ffn}key.weight_ry'] if wtype == torch.uint8 else x
vmx = w[f'{ffn}value.weight_mx'] if wtype == torch.uint8 else x
vrx = w[f'{ffn}value.weight_rx'] if wtype == torch.uint8 else x
vmy = w[f'{ffn}value.weight_my'] if wtype == torch.uint8 else x
vry = w[f'{ffn}value.weight_ry'] if wtype == torch.uint8 else x
rmx = w[f'{ffn}receptance.weight_mx'] if wtype == torch.uint8 else x
rrx = w[f'{ffn}receptance.weight_rx'] if wtype == torch.uint8 else x
rmy = w[f'{ffn}receptance.weight_my'] if wtype == torch.uint8 else x
rry = w[f'{ffn}receptance.weight_ry'] if wtype == torch.uint8 else x
x, state[i*5+4] = FFN(
x, state[i*5+4],
w[f'{bbb}ln2.weight'], w[f'{bbb}ln2.bias'],
w[f'{ffn}time_mix_k'], w[f'{ffn}time_mix_r'],
kw, vw, rw,
kmx, krx, kmy, kry,
vmx, vrx, vmy, vry,
rmx, rrx, rmy, rry,
)
if dd.stream:
del kw, vw, rw
if self.RESCALE_LAYER > 0:
if (i+1) % self.RESCALE_LAYER == 0:
x = x / 2
dd = self.strategy[args.n_layer]
x = x[-1,:] if (seq_mode and (not full_output)) else x
x = x.to(dtype=dd.atype, device=dd.device)
x = F.layer_norm(x, (args.n_embd,), weight=w['ln_out.weight'], bias=w['ln_out.bias'])
if w['head.weight'].dtype != torch.uint8:
x = x @ w['head.weight']
else:
if seq_mode and full_output:
x = self.mm8_seq(x, w['head.weight'], w['head.weight_mx'], w['head.weight_rx'], w['head.weight_my'], w['head.weight_ry'])
else:
x = self.mm8_one(x, w['head.weight'], w['head.weight_mx'], w['head.weight_rx'], w['head.weight_my'], w['head.weight_ry'])
return x.float(), state
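forward() takes a token list and a state (None allocates a fresh one) and returns fp32 logits plus the updated state, so incremental decoding is a plain loop. A hypothetical sketch (model and prompt_tokens assumed to exist):

import torch

state = None
logits, state = model.forward(prompt_tokens, state)   # prefill the whole prompt at once
for _ in range(16):
    token = int(torch.argmax(logits))                 # greedy pick, for illustration only
    logits, state = model.forward([token], state)     # then step one token at a time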


@@ -1,6 +1,6 @@
For Mac and Linux users, please manually install Python 3.10 (usually the latest systems come with it built-in). You can specify the Python interpreter to use in Settings.
对于Mac和Linux用户请手动安装 Python3.10 (通常最新的系统已经内置了). 你可以在设置中指定使用的Python解释器.
MacおよびLinuxのユーザーの方は、Python3.10を手動でインストールしてください(通常、最新のシステムには既に組み込まれています)。 設定メニューで使用するPythonインタプリタを指定することができます。
For Mac and Linux users, please manually install Python 3.10 (usually the latest systems come with it built-in). You can specify the Python interpreter to use in Settings. (which python3)
对于Mac和Linux用户请手动安装 Python3.10 (通常最新的系统已经内置了). 你可以在设置中指定使用的Python解释器. (which python3)
MacおよびLinuxのユーザーの方は、Python3.10を手動でインストールしてください(通常、最新のシステムには既に組み込まれています)。 設定メニューで使用するPythonインタプリタを指定することができます。 (which python3)
Please execute this program in an empty directory. All related dependencies will be placed in this directory.
请将本程序放在一个空目录内执行, 所有相关依赖均会放置于此目录.


@@ -1,7 +1,5 @@
{"text": "1:This is the first document."}
{"text": "2:Hello\nWorld"}
{"text": "3:1+1=2\n1+2=3\n2+2=4"}
{"text": "4:You will be training the GPT version because it's paralleziable and faster to train."}
{"text": "5:Read the inference code in src/model.py and try using the final hidden state(.xx .aa .bb)"}
{"text": "6:You can fine-tune the model with longer ctxLen and it can quickly adapt to longer ctxLens."}
{"text": "7:Consider RWKV 14B. The state has 200 vectors, that is, 5 vectors for each block: fp16 (xx), fp32 (aa), fp32 (bb), fp32 (pp), fp16 (xx)."}
{"text": "The following is an epic science fiction masterpiece that is immortalized, with delicate descriptions and grand depictions of interstellar civilization wars.\nChapter 1.\nAs I sit down to write here amidst the shadows of vine-leaves under the blue sky of southern Italy, it comes to me with a certain quality of astonishment that my participation in these amazing adventures of Mr. Cavor was, after all, the outcome of the purest accident. It might have been any one. I fell into these things at a time when I thought myself removed from the slightest possibility of disturbing experiences. I had gone to Lympne because I had imagined it the most uneventful place in the world. “Here, at any rate,” said I, “I shall find peace and a chance to work!”"}
{"text": "Translate the following into Chinese.\n\nEnglish: What rooms do you have available?\nChinese: 你们有哪些房间可以提供"}
{"text": "User: Hello.\n\nAssistant: I'm here, meow~.\n\nUser: Can you tell some jokes?\n\nAssistant: Of course, master. What kind of jokes would you like to hear?"}
{"text": "Instruction: Write a story using the following information\n\nInput: A man named Alex chops a tree down\n\nResponse: Once upon a time, there was a man named Alex who lived in the heart of the forest. He had always been fascinated by trees and spent most of his days exploring the forest and learning about its many wonders. One day, while wandering through the woods, he stumbled upon an old oak tree that stood tall and proud in the middle of a clearing."}
{"text": "def get_args(args: Union[Sequence[str], None] = None):\n parser = argparse.ArgumentParser()\n group = parser.add_argument_group(title=\"server arguments\")\n group.add_argument(\n \"--port\",\n type=int,\n default=8000,\n help=\"port to run the server on (default: 8000)\",\n )\n group.add_argument(\n \"--host\",\n type=str,\n default=\"127.0.0.1\",\n help=\"host to run the server on (default: 127.0.0.1)\",\n )"}

View File

@@ -1,3 +1,5 @@
echo $@
if [[ ${cnMirror} == 1 ]]; then
export PIP_INDEX_URL="https://pypi.tuna.tsinghua.edu.cn/simple"
if grep -q "mirrors.aliyun.com" /etc/apt/sources.list; then

View File

@@ -184,7 +184,7 @@ if __name__ == "__main__":
args.num_sanity_val_steps = 0
args.check_val_every_n_epoch = int(1e20)
args.log_every_n_steps = int(1e20)
args.max_epochs = -1 # continue forever
args.max_epochs = args.epoch_count # train for the requested number of epochs
args.betas = (args.beta1, args.beta2)
args.real_bsz = int(args.num_nodes) * int(args.devices) * args.micro_bsz
os.environ["RWKV_T_MAX"] = str(args.ctx_len)
@@ -373,7 +373,7 @@ if __name__ == "__main__":
for param in module.parameters():
param.requires_grad = True
elif enable_time_finetune and any(
n.startswith("time") for n, _ in module.named_parameters()
n.startswith("time") for n, _ in module.named_parameters()
):
for pname, param in module.named_parameters():
if pname.startswith("time"):
@@ -381,7 +381,7 @@ if __name__ == "__main__":
param.requires_grad = True
if (
len(args.load_model) == 0 or args.my_pile_stage == 1
len(args.load_model) == 0 or args.my_pile_stage == 1
): # shall we build the initial weights?
init_weight_name = f"{args.proj_dir}/rwkv-init.pth"
generate_init_weight(model, init_weight_name) # save initial weights
@@ -423,8 +423,8 @@ if __name__ == "__main__":
)
if (
args.lr_init > 1e-4
or trainer.world_size * args.micro_bsz * trainer.accumulate_grad_batches < 8
args.lr_init > 1e-4
or trainer.world_size * args.micro_bsz * trainer.accumulate_grad_batches < 8
):
if "I_KNOW_WHAT_IM_DOING" in os.environ:
if trainer.global_rank == 0:
@@ -459,10 +459,10 @@ if __name__ == "__main__":
if "deepspeed" in args.strategy:
trainer.strategy.config["zero_optimization"]["allgather_bucket_size"] = (
args.ds_bucket_mb * 1000 * 1000
args.ds_bucket_mb * 1000 * 1000
)
trainer.strategy.config["zero_optimization"]["reduce_bucket_size"] = (
args.ds_bucket_mb * 1000 * 1000
args.ds_bucket_mb * 1000 * 1000
)
# must set shuffle=False, persistent_workers=False (because worker is in another thread)

View File

@@ -100,7 +100,7 @@
"Model Config Exception": "モデル設定例外",
"Use Gitee Updates Source": "Gitee更新ソースを使用",
"Use Custom CUDA kernel to Accelerate": "カスタムCUDAカーネルを使用して加速",
"Enabling this option can greatly improve inference speed and save some VRAM, but there may be compatibility issues. If it fails to start, please turn off this option.": "このオプションを有効にすると、推論速度が大幅に向上し、一部のVRAMを節約できますが、互換性の問題が生じる可能性があります。起動に失敗した場合は、このオプションをオフにしてください。",
"Enabling this option can greatly improve inference speed and save some VRAM, but there may be compatibility issues (output garbled). If it fails to start, please turn off this option, or try to upgrade your gpu driver.": "このオプションを有効にすると、推論速度が大幅に向上し、一部のVRAMを節約できますが、互換性の問題 (文字化けを出力する) が生じる可能性があります。起動に失敗した場合は、このオプションを無効にするか、GPUドライバーをアップグレードしてみてください。",
"Supported custom cuda file not found": "対応しているカスタムCUDAファイルが見つかりません",
"Failed to copy custom cuda file": "カスタムCUDAファイルのコピーに失敗しました",
"Downloading update, please wait. If it is not completed, please manually download the program from GitHub and replace the original program.": "更新をダウンロード中です、お待ちください。完了しない場合は、GitHubから手動でプログラムをダウンロードし、元のプログラムを置き換えてください。",
@@ -178,7 +178,7 @@
"Failed to import. Please copy a preset to the clipboard.": "インポートに失敗しました。プリセットをクリップボードにコピーしてください。",
"Clipboard is empty.": "クリップボードが空です。",
"Successfully copied to clipboard.": "クリップボードにコピーしました。",
"Edit Messages": "メッセージの編集",
"Edit Character Settings": "キャラクター設定を編集",
"Go Back": "戻る",
"Description": "説明",
"Avatar Url": "アバターURL",
@@ -226,14 +226,14 @@
"Please select a LoRA model": "LoRAモデルを選択してください",
"You are using sample data for training. For formal training, please make sure to create your own jsonl file.": "トレーニングにはサンプルデータを使用しています。正式なトレーニングのためには、自身でjsonlファイルを作成してください。",
"WSL is not running, please retry. If it keeps happening, it means you may be using an outdated version of WSL, run \"wsl --update\" to update.": "WSLが実行されていません、もう一度試してください。これが続く場合、古いバージョンのWSLを使用している可能性があります。\"wsl --update\"を実行して更新してください。",
"Memory is not enough, try to increase the virtual memory or use a smaller base model.": "メモリが不足しています、仮想メモリを増やすか小さなベースモデルを使用してみてください。",
"Memory is not enough, try to increase the virtual memory (Swap of WSL) or use a smaller base model.": "メモリが不足しています、仮想メモリ (WSL Swap) を増やすか小さなベースモデルを使用してみてください。",
"VRAM is not enough": "ビデオRAMが不足しています",
"Training data is not enough, reduce context length or add more data for training": "トレーニングデータが不足しています、コンテキストの長さを減らすか、トレーニング用のデータをさらに追加してください",
"You are using WSL 1 for training, please upgrade to WSL 2. e.g. Run \"wsl --set-version Ubuntu-22.04 2\"": "トレーニングにWSL 1を使用しています、WSL 2にアップグレードしてください。例:\"wsl --set-version Ubuntu-22.04 2\"を実行する",
"Matched CUDA is not installed": "対応するCUDAがインストールされていません",
"Failed to convert data": "データの変換に失敗しました",
"Failed to merge model": "モデルのマージに失敗しました",
"The data path should be a directory or a file in jsonl format (more formats will be supported in the future).\n\nWhen you provide a directory path, all the txt files within that directory will be automatically converted into training data. This is commonly used for large-scale training in writing, code generation, or knowledge bases.\n\nThe jsonl format file can be referenced at https://github.com/Abel2076/json2binidx_tool/blob/main/sample.jsonl.\nYou can also write it similar to OpenAI's playground format, as shown in https://platform.openai.com/playground/p/default-chat.\nEven for multi-turn conversations, they must be written in a single line using `\\n` to indicate line breaks. If they are different dialogues or topics, they should be written in separate lines.": "データのパスはディレクトリまたはjsonl形式のファイルでなければなりません将来的にはより多くの形式がサポートされる予定です。ディレクトリパスを提供した場合、そのディレクトリ内のすべてのtxtファイルが自動的にトレーニングデータに変換されます。これは大規模なライティング、コード生成、または知識ベースのトレーニングで一般的に使用されます。jsonl形式のファイルは、https://github.com/Abel2076/json2binidx_tool/blob/main/sample.jsonl を参照してください。\nhttps://platform.openai.com/playground/p/default-chat のように、OpenAIのプレイグラウンド形式に似た形式で書くこともできます。複数ターンの対話であっても、一行で書く必要があり、行の区切りを示すために`\\n`を使用します。それらが異なる対話やトピックであれば、それらは別々の行に書かれるべきです。",
"The data path should be a directory or a file in jsonl format (more formats will be supported in the future).\n\nWhen you provide a directory path, all the txt files within that directory will be automatically converted into training data. This is commonly used for large-scale training in writing, code generation, or knowledge bases.\n\nThe jsonl format file can be referenced at https://github.com/josStorer/RWKV-Runner/blob/master/finetune/data/sample.jsonl.\nYou can also write it similar to OpenAI's playground format, as shown in https://platform.openai.com/playground/p/default-chat.\nEven for multi-turn conversations, they must be written in a single line using `\\n` to indicate line breaks. If they are different dialogues or topics, they should be written in separate lines.": "データのパスはディレクトリまたはjsonl形式のファイルでなければなりません将来的にはより多くの形式がサポートされる予定です。ディレクトリパスを提供した場合、そのディレクトリ内のすべてのtxtファイルが自動的にトレーニングデータに変換されます。これは大規模なライティング、コード生成、または知識ベースのトレーニングで一般的に使用されます。jsonl形式のファイルは、https://github.com/josStorer/RWKV-Runner/blob/master/finetune/data/sample.jsonl を参照してください。\nhttps://platform.openai.com/playground/p/default-chat のように、OpenAIのプレイグラウンド形式に似た形式で書くこともできます。複数ターンの対話であっても、一行で書く必要があり、行の区切りを示すために`\\n`を使用します。それらが異なる対話やトピックであれば、それらは別々の行に書かれるべきです。",
"Size mismatch for blocks. You are attempting to continue training from the LoRA model, but it does not match the base model. Please set LoRA model to None.": "ブロックのサイズが一致しません。LoRAモデルからトレーニングを続けようとしていますが、それはベースモデルと一致しません。LoRAモデルをNoneに設定してください。",
"Instruction: Write a story using the following information\n\nInput: A man named Alex chops a tree down\n\nResponse:": "Instruction: Write a story using the following information\n\nInput: アレックスという男が木を切り倒す\n\nResponse:",
"Composition": "作曲",
@@ -244,5 +244,22 @@
"Failed to load local sound font, please check if the files exist - assets/sound-font": "ローカルサウンドフォントの読み込みに失敗しました、ファイルが存在するか確認してください - assets/sound-font",
"Please convert model to safe tensors format first": "モデルを安全なテンソル形式に変換してください",
"Convert To Safe Tensors Format": "安全なテンソル形式に変換",
"Please change Strategy to WebGPU to use safetensors format": "StrategyをWebGPUに変更して、安全なテンソル形式を使用してください"
"Please change Strategy to WebGPU to use safetensors format": "StrategyをWebGPUに変更して、安全なテンソル形式を使用してください",
"Preview Only": "プレビューのみ",
"RAM": "RAM",
"VRAM": "VRAM",
"GPU Usage": "GPU使用率",
"Use Custom Tokenizer": "カスタムトークナイザーを使用する",
"Tokenizer Path (e.g. backend-python/rwkv_pip/20B_tokenizer.json)": "トークナイザーパス (例: backend-python/rwkv_pip/20B_tokenizer.json)",
"User Name": "ユーザー名",
"Assistant Name": "アシスタント名",
"Insert default system prompt at the beginning": "最初にデフォルトのシステムプロンプトを挿入",
"Format Content": "内容フォーマットの規格化",
"Add An Attachment (Accepts pdf, txt)": "添付ファイルを追加 (pdf, txtを受け付けます)",
"Uploading Attachment": "添付ファイルアップロード中",
"Remove Attachment": "添付ファイルを削除",
"The content of file": "ファイル",
"is as follows. When replying to me, consider the file content and respond accordingly:": "の内容は以下の通りです。私に返信する際は、ファイルの内容を考慮して適切に返信してください:",
"What's the file name": "ファイル名は何ですか",
"The file name is: ": "ファイル名は次のとおりです: "
}

View File

@@ -100,7 +100,7 @@
"Model Config Exception": "模型配置异常",
"Use Gitee Updates Source": "使用Gitee更新源",
"Use Custom CUDA kernel to Accelerate": "使用自定义CUDA算子加速",
"Enabling this option can greatly improve inference speed and save some VRAM, but there may be compatibility issues. If it fails to start, please turn off this option.": "开启这个选项能大大提升推理速度并节省显存,但可能存在兼容性问题,如果启动失败,请关闭此选项",
"Enabling this option can greatly improve inference speed and save some VRAM, but there may be compatibility issues (output garbled). If it fails to start, please turn off this option, or try to upgrade your gpu driver.": "开启这个选项能大大提升推理速度并节省显存,但可能存在兼容性(回复乱码)问题,如果发生相关问题,请关闭此选项。或更新你的显卡驱动",
"Supported custom cuda file not found": "没有找到支持的自定义cuda文件",
"Failed to copy custom cuda file": "自定义cuda文件复制失败",
"Downloading update, please wait. If it is not completed, please manually download the program from GitHub and replace the original program.": "正在下载更新请等待。如果一直未完成请从Github手动下载并覆盖原程序",
@@ -178,7 +178,7 @@
"Failed to import. Please copy a preset to the clipboard.": "导入失败。请复制一个预设到剪贴板",
"Clipboard is empty.": "剪贴板没有内容",
"Successfully copied to clipboard.": "成功复制到剪贴板",
"Edit Messages": "编辑对话",
"Edit Character Settings": "编辑人设",
"Go Back": "返回",
"Description": "描述",
"Avatar Url": "头像图片地址",
@@ -226,14 +226,14 @@
"Please select a LoRA model": "请选择一个LoRA模型",
"You are using sample data for training. For formal training, please make sure to create your own jsonl file.": "你正在使用示例数据训练对于正式训练场合请务必创建你自己的jsonl训练数据",
"WSL is not running, please retry. If it keeps happening, it means you may be using an outdated version of WSL, run \"wsl --update\" to update.": "WSL没有运行请重试。如果一直出现此错误意味着你可能正在使用旧版本的WSL请在cmd执行\"wsl --update\"以更新",
"Memory is not enough, try to increase the virtual memory or use a smaller base model.": "内存不足,尝试增加虚拟内存,或使用一个更小规模的基底模型",
"Memory is not enough, try to increase the virtual memory (Swap of WSL) or use a smaller base model.": "内存不足,尝试增加虚拟内存(WSL Swap),或使用一个更小规模的基底模型",
"VRAM is not enough": "显存不足",
"Training data is not enough, reduce context length or add more data for training": "训练数据不足,请减小上下文长度或增加训练数据",
"You are using WSL 1 for training, please upgrade to WSL 2. e.g. Run \"wsl --set-version Ubuntu-22.04 2\"": "你正在使用WSL 1进行训练请升级到WSL 2。例如运行\"wsl --set-version Ubuntu-22.04 2\"",
"Matched CUDA is not installed": "未安装匹配的CUDA",
"Failed to convert data": "数据转换失败",
"Failed to merge model": "合并模型失败",
"The data path should be a directory or a file in jsonl format (more formats will be supported in the future).\n\nWhen you provide a directory path, all the txt files within that directory will be automatically converted into training data. This is commonly used for large-scale training in writing, code generation, or knowledge bases.\n\nThe jsonl format file can be referenced at https://github.com/Abel2076/json2binidx_tool/blob/main/sample.jsonl.\nYou can also write it similar to OpenAI's playground format, as shown in https://platform.openai.com/playground/p/default-chat.\nEven for multi-turn conversations, they must be written in a single line using `\\n` to indicate line breaks. If they are different dialogues or topics, they should be written in separate lines.": "数据路径必须是一个文件夹或者jsonl格式文件 (未来会支持更多格式)\n\n当你填写的路径是一个文件夹时该文件夹内的所有txt文件会被自动转换为训练数据通常这用于大批量训练写作代码生成或知识库\n\njsonl文件的格式参考 https://github.com/Abel2076/json2binidx_tool/blob/main/sample.jsonl\n你也可以仿照openai的playground编写参考 https://platform.openai.com/playground/p/default-chat\n即使是多轮对话也必须写在一行用`\\n`表示换行,如果是不同对话或主题,则另起一行",
"The data path should be a directory or a file in jsonl format (more formats will be supported in the future).\n\nWhen you provide a directory path, all the txt files within that directory will be automatically converted into training data. This is commonly used for large-scale training in writing, code generation, or knowledge bases.\n\nThe jsonl format file can be referenced at https://github.com/josStorer/RWKV-Runner/blob/master/finetune/data/sample.jsonl.\nYou can also write it similar to OpenAI's playground format, as shown in https://platform.openai.com/playground/p/default-chat.\nEven for multi-turn conversations, they must be written in a single line using `\\n` to indicate line breaks. If they are different dialogues or topics, they should be written in separate lines.": "数据路径必须是一个文件夹或者jsonl格式文件 (未来会支持更多格式)\n\n当你填写的路径是一个文件夹时该文件夹内的所有txt文件会被自动转换为训练数据通常这用于大批量训练写作代码生成或知识库\n\njsonl文件的格式参考 https://github.com/josStorer/RWKV-Runner/blob/master/finetune/data/sample.jsonl 以及 https://zhuanlan.zhihu.com/p/643433851\n你也可以仿照openai的playground编写参考 https://platform.openai.com/playground/p/default-chat\n即使是多轮对话也必须写在一行用`\\n`表示换行,如果是不同对话或主题,则另起一行",
"Size mismatch for blocks. You are attempting to continue training from the LoRA model, but it does not match the base model. Please set LoRA model to None.": "尺寸不匹配块。你正在尝试从LoRA模型继续训练但该LoRA模型与基底模型不匹配请将LoRA模型设为空",
"Instruction: Write a story using the following information\n\nInput: A man named Alex chops a tree down\n\nResponse:": "Instruction: Write a story using the following information\n\nInput: 艾利克斯砍倒了一棵树\n\nResponse:",
"Composition": "作曲",
@@ -244,5 +244,22 @@
"Failed to load local sound font, please check if the files exist - assets/sound-font": "加载本地音色资源失败,请检查文件是否存在 - assets/sound-font",
"Please convert model to safe tensors format first": "请先将模型转换为Safetensors格式",
"Convert To Safe Tensors Format": "转换为Safetensors格式",
"Please change Strategy to WebGPU to use safetensors format": "请将Strategy改为WebGPU以使用safetensors格式"
"Please change Strategy to WebGPU to use safetensors format": "请将Strategy改为WebGPU以使用safetensors格式",
"Preview Only": "仅预览",
"RAM": "内存",
"VRAM": "显存",
"GPU Usage": "GPU占用",
"Use Custom Tokenizer": "使用自定义Tokenizer",
"Tokenizer Path (e.g. backend-python/rwkv_pip/20B_tokenizer.json)": "Tokenizer路径 (例如: backend-python/rwkv_pip/20B_tokenizer.json)",
"User Name": "用户名称",
"Assistant Name": "AI名称",
"Insert default system prompt at the beginning": "在开头自动插入默认系统提示",
"Format Content": "规范格式",
"Add An Attachment (Accepts pdf, txt)": "添加一个附件 (支持pdf, txt)",
"Uploading Attachment": "正在上传附件",
"Remove Attachment": "移除附件",
"The content of file": "文件",
"is as follows. When replying to me, consider the file content and respond accordingly:": "内容如下。回复时考虑文件内容并做出相应回复:",
"What's the file name": "文件名是什么",
"The file name is: ": "文件名是:"
}

View File

@@ -1,17 +1,11 @@
import React, { FC, MouseEventHandler, ReactElement } from 'react';
import commonStore, { ModelStatus } from '../stores/commonStore';
import {
AddToDownloadList,
CopyFile,
FileExists,
StartServer,
StartWebGPUServer
} from '../../wailsjs/go/backend_golang/App';
import { AddToDownloadList, FileExists, StartServer, StartWebGPUServer } from '../../wailsjs/go/backend_golang/App';
import { Button } from '@fluentui/react-components';
import { observer } from 'mobx-react-lite';
import { exit, getStatus, readRoot, switchModel, updateConfig } from '../apis';
import { toast } from 'react-toastify';
import { checkDependencies, getStrategy, getSupportedCustomCudaFile, toastWithButton } from '../utils';
import { checkDependencies, getStrategy, toastWithButton } from '../utils';
import { useTranslation } from 'react-i18next';
import { ToolTipButton } from './ToolTipButton';
import { Play16Regular, Stop16Regular } from '@fluentui/react-icons';
@@ -119,9 +113,10 @@ export const RunButton: FC<{ onClickRun?: MouseEventHandler, iconMode?: boolean
const startServer = webgpu ?
(_: string, port: number, host: string) => StartWebGPUServer(port, host)
: StartServer;
const isUsingCudaBeta = modelConfig.modelParameters.device === 'CUDA-Beta';
startServer(commonStore.settings.customPythonPath, port, commonStore.settings.host !== '127.0.0.1' ? '0.0.0.0' : '127.0.0.1',
modelConfig.modelParameters.device === 'CUDA-Beta'
isUsingCudaBeta
).catch((e) => {
const errMsg = e.message || e;
if (errMsg.includes('path contains space'))
@@ -162,22 +157,26 @@ export const RunButton: FC<{ onClickRun?: MouseEventHandler, iconMode?: boolean
if ((modelConfig.modelParameters.device.includes('CUDA') || modelConfig.modelParameters.device === 'Custom')
&& modelConfig.modelParameters.useCustomCuda && !strategy.includes('fp32')) {
if (commonStore.platform === 'windows') {
customCudaFile = getSupportedCustomCudaFile();
if (customCudaFile) {
FileExists('./py310/Lib/site-packages/rwkv/model.py').then((exist) => {
// defensive measure. As Python has already been launched, will only take effect the next time it runs.
if (!exist) CopyFile('./backend-python/wkv_cuda_utils/wkv_cuda_model.py', './py310/Lib/site-packages/rwkv/model.py');
});
await CopyFile(customCudaFile, './py310/Lib/site-packages/rwkv/wkv_cuda.pyd').catch(() => {
FileExists('./py310/Lib/site-packages/rwkv/wkv_cuda.pyd').then((exist) => {
if (!exist) {
customCudaFile = '';
toast(t('Failed to copy custom cuda file'), { type: 'error' });
}
});
});
} else
toast(t('Supported custom cuda file not found'), { type: 'warning' });
// this part is currently unused because there's no longer a need to use different kernels for different GPUs, but it might still be needed in the future
//
// customCudaFile = getSupportedCustomCudaFile(isUsingCudaBeta);
// if (customCudaFile) {
// let kernelTargetPath: string;
// if (isUsingCudaBeta)
// kernelTargetPath = './backend-python/rwkv_pip/beta/wkv_cuda.pyd';
// else
// kernelTargetPath = './backend-python/rwkv_pip/wkv_cuda.pyd';
// await CopyFile(customCudaFile, kernelTargetPath).catch(() => {
// FileExists(kernelTargetPath).then((exist) => {
// if (!exist) {
// customCudaFile = '';
// toast(t('Failed to copy custom cuda file'), { type: 'error' });
// }
// });
// });
// } else
// toast(t('Supported custom cuda file not found'), { type: 'warning' });
customCudaFile = 'any';
} else {
customCudaFile = 'any';
}
@@ -186,6 +185,7 @@ export const RunButton: FC<{ onClickRun?: MouseEventHandler, iconMode?: boolean
switchModel({
model: modelPath,
strategy: strategy,
tokenizer: modelConfig.modelParameters.useCustomTokenizer ? modelConfig.modelParameters.customTokenizer : undefined,
customCuda: customCudaFile !== ''
}).then(async (r) => {
if (r.ok) {
@@ -211,7 +211,7 @@ export const RunButton: FC<{ onClickRun?: MouseEventHandler, iconMode?: boolean
'invalid header or archive is corrupted': 'The model file is corrupted, please download again.',
'no NVIDIA driver': 'Found no NVIDIA driver, please install the latest driver.',
'CUDA out of memory': 'VRAM is not enough, please reduce stored layers or use a lower precision in Configs page.',
'Ninja is required to load C++ extensions': 'Failed to enable custom CUDA kernel, ninja is required to load C++ extensions. You may be using the CPU version of PyTorch, please reinstall PyTorch with CUDA. Or if you are using a custom Python interpreter, you must compile the CUDA kernel by yourself or disable Custom CUDA kernel acceleration.'
'Ninja is required to load C++ extensions': 'Failed to enable custom CUDA kernel, ninja is required to load C++ extensions. You may be using the CPU version of PyTorch, please reinstall PyTorch with CUDA. Or if you are using a custom Python interpreter, you must compile the CUDA kernel by yourself or disable Custom CUDA kernel acceleration.',
};
const matchedError = Object.entries(errorsMap).find(([key, _]) => error.includes(key));
const message = matchedError ? t(matchedError[1]) : error;
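
The switchModel call above now forwards an optional tokenizer path to the backend. A rough equivalent as a direct HTTP call, assuming the backend route is /switch-model and the default port (model and tokenizer paths are placeholders):

import requests

r = requests.post("http://127.0.0.1:8000/switch-model", json={
    "model": "./models/RWKV-5-World-1B5-v2-20231025-ctx4096.pth",
    "strategy": "cuda fp16",
    # only sent when Use Custom Tokenizer is enabled in the Configs page
    "tokenizer": "backend-python/rwkv_pip/20B_tokenizer.json",
    "customCuda": True,
})
r.raise_for_status()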

View File

@@ -10,14 +10,22 @@ import { KebabHorizontalIcon, PencilIcon, SyncIcon, TrashIcon } from '@primer/oc
import logo from '../assets/images/logo.png';
import MarkdownRender from '../components/MarkdownRender';
import { ToolTipButton } from '../components/ToolTipButton';
import { ArrowCircleUp28Regular, Delete28Regular, RecordStop28Regular, Save28Regular } from '@fluentui/react-icons';
import {
ArrowCircleUp28Regular,
ArrowClockwise16Regular,
Attach16Regular,
Delete28Regular,
Dismiss16Regular,
RecordStop28Regular,
Save28Regular
} from '@fluentui/react-icons';
import { CopyButton } from '../components/CopyButton';
import { ReadButton } from '../components/ReadButton';
import { toast } from 'react-toastify';
import { WorkHeader } from '../components/WorkHeader';
import { DialogButton } from '../components/DialogButton';
import { OpenFileFolder, OpenSaveFileDialog } from '../../wailsjs/go/backend_golang/App';
import { toastWithButton } from '../utils';
import { OpenFileFolder, OpenOpenFileDialog, OpenSaveFileDialog } from '../../wailsjs/go/backend_golang/App';
import { absPathAsset, bytesToReadable, toastWithButton } from '../utils';
import { PresetsButton } from './PresetsManager/PresetsButton';
import { useMediaQuery } from 'usehooks-ts';
@@ -57,7 +65,7 @@ export type ConversationMessage = {
content: string;
}
let chatSseController: AbortController | null = null;
let chatSseControllers: { [id: string]: AbortController } = {};
const MoreUtilsButton: FC<{ uuid: string, setEditing: (editing: boolean) => void }> = observer(({
uuid,
@@ -114,6 +122,13 @@ const ChatMessageItem: FC<{
}
};
let avatarImg: string | undefined;
if (commonStore.activePreset && messageItem.sender === botName) {
avatarImg = absPathAsset(commonStore.activePreset.avatarImg);
} else if (messageItem.avatarImg) {
avatarImg = messageItem.avatarImg;
}
return <div
className={classnames(
'flex gap-2 mb-2 overflow-hidden',
@@ -131,7 +146,7 @@ const ChatMessageItem: FC<{
<Avatar
color={messageItem.color}
name={messageItem.sender}
image={(commonStore.activePreset && messageItem.sender === botName) ? { src: commonStore.activePreset.avatarImg } : messageItem.avatarImg ? { src: messageItem.avatarImg } : undefined}
image={avatarImg ? { src: avatarImg } : undefined}
/>
<div
className={classnames(
@@ -149,6 +164,10 @@ const ChatMessageItem: FC<{
value={messageItem.content}
onChange={(e) => {
messageItem.content = e.target.value;
commonStore.conversation[uuid].type = MessageType.Normal;
commonStore.conversation[uuid].done = true;
commonStore.setConversation(commonStore.conversation);
commonStore.setConversationOrder([...commonStore.conversationOrder]);
}}
onBlur={() => {
setEditingInner(false);
@@ -166,6 +185,10 @@ const ChatMessageItem: FC<{
messageItem.sender === botName && uuid !== welcomeUuid &&
<ToolTipButton desc={t('Retry')} size="small" appearance="subtle"
icon={<SyncIcon />} onClick={() => {
if (uuid in chatSseControllers) {
chatSseControllers[uuid].abort();
delete chatSseControllers[uuid];
}
onSubmit(null, uuid, null, uuid, false);
}} />
}
@@ -187,15 +210,7 @@ const ChatPanel: FC = observer(() => {
const currentConfig = commonStore.getCurrentModelConfig();
const apiParams = currentConfig.apiParameters;
const port = apiParams.apiPort;
let lastMessageId: string;
let generating: boolean = false;
if (commonStore.conversationOrder.length > 0) {
lastMessageId = commonStore.conversationOrder[commonStore.conversationOrder.length - 1];
const lastMessage = commonStore.conversation[lastMessageId];
if (lastMessage.sender === botName)
generating = !lastMessage.done;
}
const generating: boolean = Object.keys(chatSseControllers).length > 0;
useEffect(() => {
if (inputRef.current)
@@ -267,6 +282,16 @@ const ChatPanel: FC = observer(() => {
let targetRange = commonStore.conversationOrder.slice(startIndex, endIndex);
const messages: ConversationMessage[] = [];
if (commonStore.attachmentContent) {
messages.push({
role: 'user',
content: t('The content of file') + ` "${commonStore.attachmentName}" `
+ t('is as follows. When replying to me, consider the file content and respond accordingly:')
+ '\n\n' + commonStore.attachmentContent
});
messages.push({ role: 'user', content: t('What\'s the file name') });
messages.push({ role: 'assistant', content: t('The file name is: ') + commonStore.attachmentName });
}
targetRange.forEach((uuid, index) => {
if (uuid === welcomeUuid)
return;
@@ -296,7 +321,8 @@ const ChatPanel: FC = observer(() => {
commonStore.setConversationOrder(commonStore.conversationOrder);
setTimeout(scrollToBottom);
let answer = '';
chatSseController = new AbortController();
const chatSseController = new AbortController();
chatSseControllers[answerId] = chatSseController;
fetchEventSource( // https://api.openai.com/v1/chat/completions || http://127.0.0.1:${port}/chat/completions
commonStore.settings.apiUrl ?
commonStore.settings.apiUrl + '/v1/chat/completions' :
@@ -312,7 +338,10 @@ const ChatPanel: FC = observer(() => {
stream: true,
model: commonStore.settings.apiChatModelName, // 'gpt-3.5-turbo'
temperature: apiParams.temperature,
top_p: apiParams.topP
top_p: apiParams.topP,
user_name: commonStore.activePreset?.userName,
assistant_name: commonStore.activePreset?.assistantName,
presystem: commonStore.activePreset?.presystem
}),
signal: chatSseController?.signal,
onmessage(e) {
@@ -347,6 +376,8 @@ const ChatPanel: FC = observer(() => {
}
},
onclose() {
if (answerId! in chatSseControllers)
delete chatSseControllers[answerId!];
console.log('Connection closed');
},
onerror(err) {
@@ -377,33 +408,123 @@ const ChatPanel: FC = observer(() => {
size={mq ? 'large' : 'small'} shape="circular" appearance="subtle" title={t('Clear')}
contentText={t('Are you sure you want to clear the conversation? It cannot be undone.')}
onConfirm={() => {
if (generating)
chatSseController?.abort();
if (generating) {
for (const id in chatSseControllers) {
chatSseControllers[id].abort();
}
chatSseControllers = {};
}
commonStore.setConversation({});
commonStore.setConversationOrder([]);
}} />
<Textarea
ref={inputRef}
style={{ minWidth: 0 }}
className="grow"
resize="vertical"
placeholder={t('Type your message here')!}
value={commonStore.currentInput}
onChange={(e) => commonStore.setCurrentInput(e.target.value)}
onKeyDown={handleKeyDownOrClick}
/>
<div className="relative flex grow">
<Textarea
ref={inputRef}
style={{ minWidth: 0 }}
className="grow"
resize="vertical"
placeholder={t('Type your message here')!}
value={commonStore.currentInput}
onChange={(e) => commonStore.setCurrentInput(e.target.value)}
onKeyDown={handleKeyDownOrClick}
/>
<div className="absolute right-2 bottom-2">
{!commonStore.attachmentContent ?
<ToolTipButton
desc={commonStore.attachmentUploading ?
t('Uploading Attachment') :
t('Add An Attachment (Accepts pdf, txt)')}
icon={commonStore.attachmentUploading ?
<ArrowClockwise16Regular className="animate-spin" />
: <Attach16Regular />}
size="small" shape="circular" appearance="secondary"
onClick={() => {
if (commonStore.status.status === ModelStatus.Offline && !commonStore.settings.apiUrl) {
toast(t('Please click the button in the top right corner to start the model'), { type: 'warning' });
return;
}
if (commonStore.attachmentUploading)
return;
OpenOpenFileDialog('*.txt;*.pdf').then(async filePath => {
if (!filePath)
return;
commonStore.setAttachmentUploading(true);
// Both are slow. Communication between frontend and backend is slow. Use AssetServer Handler to read the file.
// const blob = new Blob([atob(info.content as unknown as string)]); // await fetch(`data:application/octet-stream;base64,${info.content}`).then(r => r.blob());
const blob = await fetch(absPathAsset(filePath)).then(r => r.blob());
const attachmentName = filePath.split(/[\\/]/).pop();
const urlPath = `/file-to-text?file_name=${attachmentName}`;
const bodyForm = new FormData();
bodyForm.append('file_data', blob, attachmentName);
fetch(commonStore.settings.apiUrl ?
commonStore.settings.apiUrl + urlPath :
`http://127.0.0.1:${port}${urlPath}`, {
method: 'POST',
body: bodyForm
}).then(async r => {
if (r.status === 200) {
const pages = (await r.json()).pages as any[];
let attachmentContent: string;
if (pages.length === 1)
attachmentContent = pages[0].page_content;
else
attachmentContent = pages.map((p, i) => `Page ${i + 1}:\n${p.page_content}`).join('\n\n');
commonStore.setAttachmentName(attachmentName!);
commonStore.setAttachmentSize(blob.size);
commonStore.setAttachmentContent(attachmentContent);
} else {
toast(r.statusText + '\n' + (await r.text()), {
type: 'error'
});
}
commonStore.setAttachmentUploading(false);
}
).catch(e => {
commonStore.setAttachmentUploading(false);
toast(t('Error') + ' - ' + (e.message || e), { type: 'error', autoClose: 2500 });
});
}).catch(e => {
toast(t('Error') + ' - ' + (e.message || e), { type: 'error', autoClose: 2500 });
});
}}
/> :
<div>
<ToolTipButton
text={
commonStore.attachmentName.replace(
new RegExp('(^[^\\.]{5})[^\\.]+'), '$1...')
}
desc={`${commonStore.attachmentName} (${bytesToReadable(commonStore.attachmentSize)})`}
size="small" shape="circular" appearance="secondary" />
<ToolTipButton desc={t('Remove Attachment')}
icon={<Dismiss16Regular />}
size="small" shape="circular" appearance="subtle"
onClick={() => {
commonStore.setAttachmentName('');
commonStore.setAttachmentSize(0);
commonStore.setAttachmentContent('');
}} />
</div>
}
</div>
</div>
<ToolTipButton desc={generating ? t('Stop') : t('Send')}
icon={generating ? <RecordStop28Regular /> : <ArrowCircleUp28Regular />}
size={mq ? 'large' : 'small'} shape="circular" appearance="subtle"
onClick={(e) => {
if (generating) {
chatSseController?.abort();
if (lastMessageId) {
commonStore.conversation[lastMessageId].type = MessageType.Error;
commonStore.conversation[lastMessageId].done = true;
commonStore.setConversation(commonStore.conversation);
commonStore.setConversationOrder([...commonStore.conversationOrder]);
for (const id in chatSseControllers) {
chatSseControllers[id].abort();
commonStore.conversation[id].type = MessageType.Error;
commonStore.conversation[id].done = true;
}
chatSseControllers = {};
commonStore.setConversation(commonStore.conversation);
commonStore.setConversationOrder([...commonStore.conversationOrder]);
} else {
handleKeyDownOrClick(e);
}
@@ -414,8 +535,8 @@ const ChatPanel: FC = observer(() => {
onClick={() => {
let savedContent: string = '';
const isWorldModel = commonStore.getCurrentModelConfig().modelParameters.modelName.toLowerCase().includes('world');
const user = isWorldModel ? 'Question' : 'Bob';
const bot = isWorldModel ? 'Answer' : 'Alice';
const user = isWorldModel ? 'User' : 'Bob';
const bot = isWorldModel ? 'Assistant' : 'Alice';
commonStore.conversationOrder.forEach((uuid) => {
if (uuid === welcomeUuid)
return;
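
The attachment flow above touches the backend twice: the picked file is converted by POSTing it to /file-to-text, and the extracted text is then injected ahead of the real conversation as three pseudo-messages. A rough equivalent outside the UI (port and file name are placeholders):

import requests

port = 8000
name = "notes.txt"
with open(name, "rb") as f:
    r = requests.post(f"http://127.0.0.1:{port}/file-to-text",
                      params={"file_name": name},
                      files={"file_data": (name, f)})
r.raise_for_status()
pages = r.json()["pages"]
if len(pages) == 1:
    content = pages[0]["page_content"]
else:
    content = "\n\n".join(f"Page {i + 1}:\n{p['page_content']}"
                          for i, p in enumerate(pages))

# the same three pseudo-messages the Chat page prepends
messages = [
    {"role": "user",
     "content": f'The content of file "{name}" is as follows. When replying '
                "to me, consider the file content and respond accordingly:"
                "\n\n" + content},
    {"role": "user", "content": "What's the file name"},
    {"role": "assistant", "content": "The file name is: " + name},
]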

View File

@@ -269,6 +269,13 @@ const CompletionPanel: FC = observer(() => {
} />
</div>
<div className="grow" />
<div className="flex justify-between gap-2">
<Button className="grow" onClick={() => {
const newPrompt = prompt.replace(/\n+\ /g, '\n').split('\n').map((line) => line.trim()).join('\n');
setPrompt(newPrompt);
commonStore.setCompletionSubmittedPrompt(newPrompt);
}}>{t('Format Content')}</Button>
</div>
<div className="flex justify-between gap-2">
<ToolTipButton desc={t('Regenerate')} icon={<ArrowSync20Regular />} onClick={() => {
completionSseController?.abort();
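
The Format Content button above normalizes pasted text with a small regex pipeline. The same cleanup in Python, mirroring the /\n+ /g replace and the per-line trim:

import re

def format_content(prompt: str) -> str:
    # collapse runs of newlines followed by a space into a single newline,
    # then strip leading/trailing whitespace from every line
    prompt = re.sub(r"\n+ ", "\n", prompt)
    return "\n".join(line.strip() for line in prompt.split("\n"))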

View File

@@ -319,7 +319,7 @@ const CompositionPanel: FC = observer(() => {
toastWithButton(t('File Saved'), t('Open'), () => {
OpenFileFolder(path, false);
});
}).catch((e: any) => {
}).catch((e) => {
toast(t('Error') + ' - ' + (e.message || e), { type: 'error', autoClose: 2500 });
});
} else {

View File

@@ -1,6 +1,19 @@
import { Dropdown, Input, Label, Option, Select, Switch, Text } from '@fluentui/react-components';
import {
Accordion,
AccordionHeader,
AccordionItem,
AccordionPanel,
Checkbox,
Dropdown,
Input,
Label,
Option,
Select,
Switch,
Text
} from '@fluentui/react-components';
import { AddCircle20Regular, DataUsageSettings20Regular, Delete20Regular, Save20Regular } from '@fluentui/react-icons';
import React, { FC } from 'react';
import React, { FC, useEffect, useRef } from 'react';
import { Section } from '../components/Section';
import { Labeled } from '../components/Labeled';
import { ToolTipButton } from '../components/ToolTipButton';
@@ -43,6 +56,8 @@ export type ModelParameters = {
maxStoredLayers: number;
useCustomCuda?: boolean;
customStrategy?: string;
useCustomTokenizer?: boolean;
customTokenizer?: string;
}
export type ModelConfig = {
@@ -57,10 +72,16 @@ export const Configs: FC = observer(() => {
const [selectedIndex, setSelectedIndex] = React.useState(commonStore.currentModelConfigIndex);
const [selectedConfig, setSelectedConfig] = React.useState(commonStore.modelConfigs[selectedIndex]);
const [displayStrategyImg, setDisplayStrategyImg] = React.useState(false);
const advancedHeaderRef = useRef<HTMLDivElement>(null);
const mq = useMediaQuery('(min-width: 640px)');
const navigate = useNavigate();
const port = selectedConfig.apiParameters.apiPort;
useEffect(() => {
if (advancedHeaderRef.current)
(advancedHeaderRef.current.firstElementChild as HTMLElement).style.padding = '0';
}, []);
const updateSelectedIndex = (newIndex: number) => {
setSelectedIndex(newIndex);
setSelectedConfig(commonStore.modelConfigs[newIndex]);
@@ -130,7 +151,7 @@ export const Configs: FC = observer(() => {
setSelectedIndex(0);
setSelectedConfig(commonStore.modelConfigs[0]);
}} />
<ToolTipButton desc={mq ? '' : t('Save Config')} icon={<Save20Regular />} text={mq ? t('Save Config') : ''}
<ToolTipButton desc={mq ? '' : t('Save Config')} icon={<Save20Regular />} text={mq ? t('Save Config') : null}
onClick={onClickSave} />
</div>
<div className="flex items-center gap-4">
@@ -402,7 +423,7 @@ export const Configs: FC = observer(() => {
{
(selectedConfig.modelParameters.device.includes('CUDA') || selectedConfig.modelParameters.device === 'Custom') &&
<Labeled label={t('Use Custom CUDA kernel to Accelerate')}
desc={t('Enabling this option can greatly improve inference speed and save some VRAM, but there may be compatibility issues. If it fails to start, please turn off this option.')}
desc={t('Enabling this option can greatly improve inference speed and save some VRAM, but there may be compatibility issues (output garbled). If it fails to start, please turn off this option, or try to upgrade your gpu driver.')}
content={
<Switch checked={selectedConfig.modelParameters.useCustomCuda}
onChange={(e, data) => {
@@ -412,6 +433,40 @@ export const Configs: FC = observer(() => {
}} />
} />
}
{selectedConfig.modelParameters.device !== 'WebGPU' &&
<Accordion className="sm:col-span-2" collapsible
openItems={!commonStore.modelParamsCollapsed && 'advanced'}
onToggle={(e, data) => {
if (data.value === 'advanced')
commonStore.setModelParamsCollapsed(!commonStore.modelParamsCollapsed);
}}>
<AccordionItem value="advanced">
<AccordionHeader ref={advancedHeaderRef} size="small">{t('Advanced')}</AccordionHeader>
<AccordionPanel>
<div className="flex flex-col">
<div className="flex grow">
<Checkbox className="select-none"
size="large" label={t('Use Custom Tokenizer')}
checked={selectedConfig.modelParameters.useCustomTokenizer}
onChange={(_, data) => {
setSelectedConfigModelParams({
useCustomTokenizer: data.checked as boolean
});
}} />
<Input className="grow"
placeholder={t('Tokenizer Path (e.g. backend-python/rwkv_pip/20B_tokenizer.json)')!}
value={selectedConfig.modelParameters.customTokenizer}
onChange={(e, data) => {
setSelectedConfigModelParams({
customTokenizer: data.value
});
}} />
</div>
</div>
</AccordionPanel>
</AccordionItem>
</Accordion>
}
</div>
}
/>

View File

@@ -1,10 +1,10 @@
import React, { FC } from 'react';
import React, { FC, useEffect } from 'react';
import { useTranslation } from 'react-i18next';
import { Page } from '../components/Page';
import { observer } from 'mobx-react-lite';
import commonStore from '../stores/commonStore';
import { Divider, Field, ProgressBar } from '@fluentui/react-components';
import { bytesToGb, bytesToKb, bytesToMb } from '../utils';
import { bytesToGb, bytesToKb, bytesToMb, refreshLocalModels } from '../utils';
import { ToolTipButton } from '../components/ToolTipButton';
import { Folder20Regular, Pause20Regular, Play20Regular } from '@fluentui/react-icons';
import { AddToDownloadList, OpenFileFolder, PauseDownload } from '../../wailsjs/go/backend_golang/App';
@@ -23,6 +23,12 @@ export type DownloadStatus = {
export const Downloads: FC = observer(() => {
const { t } = useTranslation();
const finishedModelsLen = commonStore.downloadList.filter((status) => status.done && status.name.endsWith('.pth')).length;
useEffect(() => {
if (finishedModelsLen > 0)
refreshLocalModels({ models: commonStore.modelSourceList }, false);
console.log('finishedModelsLen:', finishedModelsLen);
}, [finishedModelsLen]);
let displayList = commonStore.downloadList.slice();
const downloadListNames = displayList.map(s => s.name);

View File

@@ -36,6 +36,7 @@ import { ClipboardGetText, ClipboardSetText } from '../../../wailsjs/runtime';
import { toast } from 'react-toastify';
import { CustomToastContainer } from '../../components/CustomToastContainer';
import { v4 as uuid } from 'uuid';
import { absPathAsset } from '../../utils';
export type PresetType = 'chat' | 'completion' | 'chatInCompletion'
@@ -56,6 +57,9 @@ export type Preset = {
stop: string,
injectStart: string,
injectEnd: string,
presystem?: boolean,
userName?: string,
assistantName?: string
}
export const defaultPreset: Preset = {
@@ -121,7 +125,7 @@ export const PresetCard: FC<{
const { t } = useTranslation();
return <PresetCardFrame onClick={onClick}>
<img src={avatarImg} className="rounded-xl select-none ml-auto mr-auto h-28" />
<img src={absPathAsset(avatarImg)} className="rounded-xl select-none ml-auto mr-auto h-28" />
<Text size={400}>{name}</Text>
<Text size={200} style={{
overflow: 'hidden', textOverflow: 'ellipsis',
@@ -164,8 +168,14 @@ export const ChatPresetEditor: FC<{
const importPreset = () => {
ClipboardGetText().then((text) => {
try {
if (!text.trim().startsWith('{'))
text = new TextDecoder().decode(
new Uint8Array(atob(text)
.split('')
.map((c) => c.charCodeAt(0))));
const preset = JSON.parse(text);
setEditingPreset(preset);
setEditingMessages(false);
toast(t('Imported successfully'), {
type: 'success',
autoClose: 1000
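
The import fallback above enables the new base64 preset support: if the clipboard text does not look like raw JSON, it is base64-decoded first and then parsed. A Python mirror of that logic:

import base64
import json

def load_preset(text: str) -> dict:
    # raw JSON presets start with '{'; anything else is treated as
    # base64-encoded JSON, matching the importer's fallback
    if not text.strip().startswith("{"):
        text = base64.b64decode(text).decode("utf-8")
    return json.loads(text)
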
@@ -239,7 +249,7 @@ export const ChatPresetEditor: FC<{
<Button appearance="subtle" icon={<Dismiss20Regular />} />
</DialogTrigger>
</div>
<img src={editingPreset.avatarImg} className="rounded-xl select-none ml-auto mr-auto h-28" />
<img src={absPathAsset(editingPreset.avatarImg)} className="rounded-xl select-none ml-auto mr-auto h-28" />
<Labeled flex breakline label={t('Name')}
content={
<div className="flex gap-2">
@@ -250,14 +260,41 @@ export const ChatPresetEditor: FC<{
}} />
<Button onClick={() => {
setEditingMessages(!editingMessages);
}}>{!editingMessages ? t('Edit Messages') : t('Go Back')}</Button>
}}>{!editingMessages ? t('Edit Character Settings') : t('Go Back')}</Button>
</div>
} />
{
editingMessages ?
<MessagesEditor /> :
<div className="flex flex-col gap-1">
<Labeled flex spaceBetween label={t('Insert default system prompt at the beginning')}
content={
<Switch checked={editingPreset.presystem === undefined ? true : editingPreset.presystem}
onChange={(e, data) => {
setEditingPreset({
presystem: data.checked
});
}} />
} />
<Labeled flex breakline label={t('User Name')}
content={
<Input placeholder="User" value={editingPreset.userName} onChange={(e, data) => {
setEditingPreset({
userName: data.value
});
}} />
} />
<Labeled flex breakline label={t('Assistant Name')}
content={
<Input placeholder="Assistant" value={editingPreset.assistantName} onChange={(e, data) => {
setEditingPreset({
assistantName: data.value
});
}} />
} />
<MessagesEditor />
</div> :
<div className="flex flex-col gap-1 p-2 overflow-x-hidden overflow-y-auto">
<Labeled flex breakline label={t('Description')}
<Labeled flex breakline label={`${t('Description')} (${t('Preview Only')})`}
content={
<Input value={editingPreset.desc} onChange={(e, data) => {
setEditingPreset({

View File

@@ -154,7 +154,7 @@ const showError = (e: any) => {
};
const errorsMap = Object.entries({
'python3 ./finetune/lora/train.py': 'Memory is not enough, try to increase the virtual memory or use a smaller base model.',
'python3 ./finetune/lora/train.py': 'Memory is not enough, try to increase the virtual memory (Swap of WSL) or use a smaller base model.',
'cuda out of memory': 'VRAM is not enough',
'valueerror: high <= 0': 'Training data is not enough, reduce context length or add more data for training',
'+= \'+ptx\'': 'You are using WSL 1 for training, please upgrade to WSL 2. e.g. Run "wsl --set-version Ubuntu-22.04 2"',
@@ -219,7 +219,7 @@ const Terminal: FC = observer(() => {
WslStart().then(() => {
addWslMessage('WSL> ' + input);
setInput('');
WslCommand(input).catch(showError);
WslCommand(input).then(WindowShow).catch(showError);
}).catch(showError);
}
};
@@ -414,7 +414,7 @@ const LoraFinetune: FC = observer(() => {
contentText={t('The data path should be a directory or a file in jsonl format (more formats will be supported in the future).\n\n' +
'When you provide a directory path, all the txt files within that directory will be automatically converted into training data. ' +
'This is commonly used for large-scale training in writing, code generation, or knowledge bases.\n\n' +
'The jsonl format file can be referenced at https://github.com/Abel2076/json2binidx_tool/blob/main/sample.jsonl.\n' +
'The jsonl format file can be referenced at https://github.com/josStorer/RWKV-Runner/blob/master/finetune/data/sample.jsonl.\n' +
'You can also write it similar to OpenAI\'s playground format, as shown in https://platform.openai.com/playground/p/default-chat.\n' +
'Even for multi-turn conversations, they must be written in a single line using `\\n` to indicate line breaks. ' +
'If they are different dialogues or topics, they should be written in separate lines.')} />

View File

@@ -2,11 +2,12 @@ import commonStore, { Platform } from './stores/commonStore';
import { GetPlatform, ListDirFiles, ReadJson } from '../wailsjs/go/backend_golang/App';
import { Cache, checkUpdate, downloadProgramFiles, LocalConfig, refreshLocalModels, refreshModels } from './utils';
import { getStatus } from './apis';
import { EventsOn } from '../wailsjs/runtime';
import { EventsOn, WindowSetTitle } from '../wailsjs/runtime';
import manifest from '../../manifest.json';
import { defaultModelConfigs, defaultModelConfigsMac } from './pages/defaultConfigs';
import { Preset } from './pages/PresetsManager/PresetsButton';
import { wslHandler } from './pages/Train';
import { t } from 'i18next';
export async function startup() {
downloadProgramFiles();
@@ -23,6 +24,8 @@ export async function startup() {
initPresets();
initHardwareMonitor();
await GetPlatform().then(p => commonStore.setPlatform(p as Platform));
await initConfig();
@@ -117,3 +120,20 @@ async function initLocalModelsNotify() {
refreshLocalModels({ models: commonStore.modelSourceList }, false); //TODO fix bug that only add models
});
}
type monitorData = {
usedMemory: number;
totalMemory: number;
gpuUsage: number;
gpuPower: number;
usedVram: number;
totalVram: number;
}
async function initHardwareMonitor() {
EventsOn('monitor', (data: string) => {
const results: monitorData = JSON.parse(data);
if (results)
WindowSetTitle(`RWKV-Runner (${t('RAM')}: ${results.usedMemory.toFixed(1)}/${results.totalMemory.toFixed(1)} GB, ${t('VRAM')}: ${(results.usedVram / 1024).toFixed(1)}/${(results.totalVram / 1024).toFixed(1)} GB, ${t('GPU Usage')}: ${results.gpuUsage}%)`);
});
}

View File

@@ -54,6 +54,10 @@ class CommonStore {
conversation: Conversation = {};
conversationOrder: string[] = [];
activePreset: Preset | null = null;
attachmentUploading: boolean = false;
attachmentName: string = '';
attachmentSize: number = 0;
attachmentContent: string = '';
// completion
completionPreset: CompletionPreset | null = null;
completionGenerating: boolean = false;
@@ -74,6 +78,7 @@ class CommonStore {
// configs
currentModelConfigIndex: number = 0;
modelConfigs: ModelConfig[] = [];
modelParamsCollapsed: boolean = true;
// models
modelSourceManifestList: string = 'https://cdn.jsdelivr.net/gh/josstorer/RWKV-Runner@master/manifest.json;';
modelSourceList: ModelSourceItem[] = [];
@@ -259,6 +264,10 @@ class CommonStore {
this.advancedCollapsed = value;
}
setModelParamsCollapsed(value: boolean) {
this.modelParamsCollapsed = value;
}
setLastUnfinishedModelDownloads(value: DownloadStatus[]) {
this.lastUnfinishedModelDownloads = value;
}
@@ -320,6 +329,22 @@ class CommonStore {
setLoraModels(value: string[]) {
this.loraModels = value;
}
setAttachmentUploading(value: boolean) {
this.attachmentUploading = value;
}
setAttachmentName(value: string) {
this.attachmentName = value;
}
setAttachmentSize(value: number) {
this.attachmentSize = value;
}
setAttachmentContent(value: string) {
this.attachmentContent = value;
}
}
export default new CommonStore();

View File

@@ -1,6 +1,5 @@
import {
AddToDownloadList,
CopyFile,
DeleteFile,
DepCheck,
InstallPyDep,
@@ -184,7 +183,7 @@ export const getStrategy = (modelConfig: ModelConfig | undefined = undefined) =>
case 'CUDA':
case 'CUDA-Beta':
if (avoidOverflow)
strategy = 'cuda fp32 *1 -> ';
strategy = params.useCustomCuda ? 'cuda fp16 *1 -> ' : 'cuda fp32 *1 -> ';
strategy += 'cuda ';
strategy += params.precision === 'fp16' ? 'fp16' : params.precision === 'int8' ? 'fp16i8' : 'fp32';
if (params.storedLayers < params.maxStoredLayers)
@@ -283,6 +282,21 @@ export function bytesToKb(size: number) {
return (size / 1024).toFixed(2);
}
export function bytesToReadable(size: number) {
if (size < 1024) return size + ' B';
else if (size < 1024 * 1024) return bytesToKb(size) + ' KB';
else if (size < 1024 * 1024 * 1024) return bytesToMb(size) + ' MB';
else return bytesToGb(size) + ' GB';
}
export function absPathAsset(path: string) {
if ((path.length > 0 && path[0] === '/') ||
(path.length > 1 && path[1] === ':')) {
return '=>' + path;
}
return path;
}
export async function checkUpdate(notifyEvenLatest: boolean = false) {
fetch(!commonStore.settings.giteeUpdatesSource ?
'https://api.github.com/repos/josstorer/RWKV-Runner/releases/latest' :
@@ -402,8 +416,6 @@ export const checkDependencies = async (navigate: NavigateFunction) => {
return false;
}
commonStore.setDepComplete(true);
if (commonStore.platform === 'windows')
CopyFile('./backend-python/wkv_cuda_utils/wkv_cuda_model.py', './py310/Lib/site-packages/rwkv/model.py');
}
return true;
};
@@ -428,12 +440,16 @@ export function toastWithButton(text: string, buttonText: string, onClickButton:
return id;
}
export function getSupportedCustomCudaFile() {
export function getSupportedCustomCudaFile(isBeta: boolean) {
if ([' 10', ' 16', ' 20', ' 30', 'MX', 'Tesla P', 'Quadro P', 'NVIDIA P', 'TITAN X', 'TITAN RTX', 'RTX A',
'Quadro RTX 4000', 'Quadro RTX 5000', 'Tesla T4', 'NVIDIA A10', 'NVIDIA A40'].some(v => commonStore.status.device_name.includes(v)))
return './backend-python/wkv_cuda_utils/wkv_cuda10_30.pyd';
return isBeta ?
'./backend-python/wkv_cuda_utils/beta/wkv_cuda10_30.pyd' :
'./backend-python/wkv_cuda_utils/wkv_cuda10_30.pyd';
else if ([' 40', 'RTX 5000 Ada', 'RTX 6000 Ada', 'RTX TITAN Ada', 'NVIDIA L40'].some(v => commonStore.status.device_name.includes(v)))
return './backend-python/wkv_cuda_utils/wkv_cuda40.pyd';
return isBeta ?
'./backend-python/wkv_cuda_utils/beta/wkv_cuda40.pyd' :
'./backend-python/wkv_cuda_utils/wkv_cuda40.pyd';
else
return '';
}
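
Elsewhere in this file, absPathAsset flags absolute paths (Unix "/..." or Windows drive-letter paths) with a "=>" marker; the Go FileLoader in main.go (diff below) strips that marker and serves the file from disk, which is how local avatar images and attachments are read. A Python mirror of the marker logic, for illustration:

def abs_path_asset(path: str) -> str:
    # absolute paths get a "=>" prefix so the asset-server handler knows
    # to read them straight from disk; relative paths pass through
    if (len(path) > 0 and path[0] == "/") or (len(path) > 1 and path[1] == ":"):
        return "=>" + path
    return path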

View File

@@ -34,6 +34,8 @@ export function MergeLora(arg1:string,arg2:boolean,arg3:number,arg4:string,arg5:
export function OpenFileFolder(arg1:string,arg2:boolean):Promise<void>;
export function OpenOpenFileDialog(arg1:string):Promise<string>;
export function OpenSaveFileDialog(arg1:string,arg2:string,arg3:string):Promise<string>;
export function OpenSaveFileDialogBytes(arg1:string,arg2:string,arg3:Array<number>):Promise<string>;

View File

@@ -66,6 +66,10 @@ export function OpenFileFolder(arg1, arg2) {
return window['go']['backend_golang']['App']['OpenFileFolder'](arg1, arg2);
}
export function OpenOpenFileDialog(arg1) {
return window['go']['backend_golang']['App']['OpenOpenFileDialog'](arg1);
}
export function OpenSaveFileDialog(arg1, arg2, arg3) {
return window['go']['backend_golang']['App']['OpenSaveFileDialog'](arg1, arg2, arg3);
}

go.mod
View File

@@ -4,15 +4,16 @@ go 1.20
require (
github.com/cavaliergopher/grab/v3 v3.0.1
github.com/fsnotify/fsnotify v1.6.0
github.com/minio/selfupdate v0.6.0
github.com/nyaosorg/go-windows-su v0.2.1
github.com/ubuntu/gowsl v0.0.0-20230615094051-94945650cc1e
github.com/wailsapp/wails/v2 v2.5.1
github.com/wailsapp/wails/v2 v2.6.0
)
require (
aead.dev/minisign v0.2.0 // indirect
github.com/bep/debounce v1.2.1 // indirect
github.com/fsnotify/fsnotify v1.6.0
github.com/go-ole/go-ole v1.2.6 // indirect
github.com/google/uuid v1.3.0 // indirect
github.com/jchv/go-winloader v0.0.0-20210711035445-715c2860da7e // indirect
@@ -22,8 +23,7 @@ require (
github.com/leaanthony/gosod v1.0.3 // indirect
github.com/leaanthony/slicer v1.6.0 // indirect
github.com/mattn/go-colorable v0.1.13 // indirect
github.com/mattn/go-isatty v0.0.18 // indirect
github.com/nyaosorg/go-windows-su v0.2.1
github.com/mattn/go-isatty v0.0.19 // indirect
github.com/pkg/browser v0.0.0-20210911075715-681adbf594b8 // indirect
github.com/pkg/errors v0.9.1 // indirect
github.com/rivo/uniseg v0.4.4 // indirect
@@ -33,9 +33,10 @@ require (
github.com/ubuntu/decorate v0.0.0-20230125165522-2d5b0a9bb117 // indirect
github.com/valyala/bytebufferpool v1.0.0 // indirect
github.com/valyala/fasttemplate v1.2.2 // indirect
github.com/wailsapp/go-webview2 v1.0.1 // indirect
github.com/wailsapp/mimetype v1.4.1 // indirect
golang.org/x/crypto v0.9.0 // indirect
golang.org/x/exp v0.0.0-20230515195305-f3d0a9c9a5cc // indirect
golang.org/x/exp v0.0.0-20230522175609-2e198f4a06a1 // indirect
golang.org/x/net v0.10.0 // indirect
golang.org/x/sys v0.9.0 // indirect
golang.org/x/text v0.9.0 // indirect

go.sum
View File

@@ -36,8 +36,8 @@ github.com/mattn/go-colorable v0.1.13 h1:fFA4WZxdEF4tXPZVKMLwD8oUnCTTo08duU7wxec
github.com/mattn/go-colorable v0.1.13/go.mod h1:7S9/ev0klgBDR4GtXTXX8a3vIGJpMovkB8vQcUbaXHg=
github.com/mattn/go-isatty v0.0.14/go.mod h1:7GGIvUiUoEMVVmxf/4nioHXj79iQHKdU27kJ6hsGG94=
github.com/mattn/go-isatty v0.0.16/go.mod h1:kYGgaQfpe5nmfYZH+SKPsOc2e4SrIfOl2e/yFXSvRLM=
github.com/mattn/go-isatty v0.0.18 h1:DOKFKCQ7FNG2L1rbrmstDN4QVRdS89Nkh85u68Uwp98=
github.com/mattn/go-isatty v0.0.18/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
github.com/mattn/go-isatty v0.0.19 h1:JITubQf0MOLdlGRuRq+jtsDlekdYPia9ZFsB8h/APPA=
github.com/mattn/go-isatty v0.0.19/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
github.com/minio/selfupdate v0.6.0 h1:i76PgT0K5xO9+hjzKcacQtO7+MjJ4JKA8Ak8XQ9DDwU=
github.com/minio/selfupdate v0.6.0/go.mod h1:bO02GTIPCMQFTEvE5h4DjYB58bCoZ35XLeBf0buTDdM=
github.com/nyaosorg/go-windows-su v0.2.1 h1:5V0XavLyjOqPUp7psxxCvBISaneU4XmFPSMlejSl5sc=
@@ -69,17 +69,19 @@ github.com/valyala/bytebufferpool v1.0.0/go.mod h1:6bBcMArwyJ5K/AmCkWv1jt77kVWyC
github.com/valyala/fasttemplate v1.2.1/go.mod h1:KHLXt3tVN2HBp8eijSv/kGJopbvo7S+qRAEEKiv+SiQ=
github.com/valyala/fasttemplate v1.2.2 h1:lxLXG0uE3Qnshl9QyaK6XJxMXlQZELvChBOCmQD0Loo=
github.com/valyala/fasttemplate v1.2.2/go.mod h1:KHLXt3tVN2HBp8eijSv/kGJopbvo7S+qRAEEKiv+SiQ=
github.com/wailsapp/go-webview2 v1.0.1 h1:dEJIeEApW/MhO2tTMISZBFZPuW7kwrFA1NtgFB1z1II=
github.com/wailsapp/go-webview2 v1.0.1/go.mod h1:Uk2BePfCRzttBBjFrBmqKGJd41P6QIHeV9kTgIeOZNo=
github.com/wailsapp/mimetype v1.4.1 h1:pQN9ycO7uo4vsUUuPeHEYoUkLVkaRntMnHJxVwYhwHs=
github.com/wailsapp/mimetype v1.4.1/go.mod h1:9aV5k31bBOv5z6u+QP8TltzvNGJPmNJD4XlAL3U+j3o=
github.com/wailsapp/wails/v2 v2.5.1 h1:mfG+2kWqQXYOwdgI43HEILjOZDXbk5woPYI3jP2b+js=
github.com/wailsapp/wails/v2 v2.5.1/go.mod h1:jbOZbcr/zm79PxXxAjP8UoVlDd9wLW3uDs+isIthDfs=
github.com/wailsapp/wails/v2 v2.6.0 h1:EyH0zR/EO6dDiqNy8qU5spaXDfkluiq77xrkabPYD4c=
github.com/wailsapp/wails/v2 v2.6.0/go.mod h1:WBG9KKWuw0FKfoepBrr/vRlyTmHaMibWesK3yz6nNiM=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20210220033148-5ea612d1eb83/go.mod h1:jdWPYTVW3xRLrWPugEBEK3UY2ZEsg3UU495nc5E+M+I=
golang.org/x/crypto v0.0.0-20211209193657-4570a0811e8b/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
golang.org/x/crypto v0.9.0 h1:LF6fAI+IutBocDJ2OT0Q1g8plpYljMZ4+lty+dsqw3g=
golang.org/x/crypto v0.9.0/go.mod h1:yrmDGqONDYtNj3tH8X9dzUun2m2lzPa9ngI6/RUPGR0=
golang.org/x/exp v0.0.0-20230515195305-f3d0a9c9a5cc h1:mCRnTeVUjcrhlRmO0VK8a6k6Rrf6TF9htwo2pJVSjIU=
golang.org/x/exp v0.0.0-20230515195305-f3d0a9c9a5cc/go.mod h1:V1LtkGg67GoY2N1AnLN78QLrzxkLyJw7RJb1gzOOz9w=
golang.org/x/exp v0.0.0-20230522175609-2e198f4a06a1 h1:k/i9J1pBpvlfR+9QsetwPyERsqu1GIbi967PQMq3Ivc=
golang.org/x/exp v0.0.0-20230522175609-2e198f4a06a1/go.mod h1:V1LtkGg67GoY2N1AnLN78QLrzxkLyJw7RJb1gzOOz9w=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20210505024714-0287a6fb4125/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20211112202133-69e39bad7dc2/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=

main.go
View File

@@ -27,6 +27,7 @@ func NewFileLoader() *FileLoader {
func (h *FileLoader) ServeHTTP(res http.ResponseWriter, req *http.Request) {
var err error
requestedFilename := strings.TrimPrefix(req.URL.Path, "/")
requestedFilename = strings.TrimPrefix(requestedFilename, "=>") // absolute path
println("Requesting file:", requestedFilename)
fileData, err := os.ReadFile(requestedFilename)
if err != nil {
@@ -43,7 +44,7 @@ var assets embed.FS
//go:embed all:py310/Lib/site-packages/cyac
var cyac embed.FS
//go:embed all:py310/Lib/site-packages/cyac-1.7.dist-info
//go:embed all:py310/Lib/site-packages/cyac-1.9.dist-info
var cyacInfo embed.FS
//go:embed backend-python
@@ -61,8 +62,12 @@ var midi embed.FS
//go:embed assets/sound-font
var midiAssets embed.FS
//go:embed components
var components embed.FS
func main() {
if buildInfo, ok := debug.ReadBuildInfo(); !ok || strings.Contains(buildInfo.String(), "-ldflags") {
os.RemoveAll("./py310/Lib/site-packages/cyac-1.7.dist-info")
backend.CopyEmbed(cyac)
backend.CopyEmbed(cyacInfo)
backend.CopyEmbed(py)
@@ -70,6 +75,7 @@ func main() {
backend.CopyEmbed(finetune)
backend.CopyEmbed(midi)
backend.CopyEmbed(midiAssets)
backend.CopyEmbed(components)
}
// Create an instance of the app structure
@@ -89,11 +95,12 @@ func main() {
// Create application with options
err = wails.Run(&options.App{
Title: "RWKV-Runner",
Width: 1024,
Height: 680,
MinWidth: 375,
MinHeight: 640,
Title: "RWKV-Runner",
Width: 1024,
Height: 680,
MinWidth: 375,
MinHeight: 640,
EnableDefaultContextMenu: true,
Windows: &windows.Options{
ZoomFactor: zoomFactor,
IsZoomControlEnabled: true,
@@ -102,7 +109,8 @@ func main() {
Assets: assets,
Handler: NewFileLoader(),
},
OnStartup: app.OnStartup,
OnStartup: app.OnStartup,
OnBeforeClose: app.OnBeforeClose,
Bind: []any{
app,
},

View File

@@ -1,5 +1,5 @@
{
"version": "1.4.3",
"version": "1.4.8",
"introduction": {
"en": "RWKV is an open-source, commercially usable large language model with high flexibility and great potential for development.\n### About This Tool\nThis tool aims to lower the barrier of entry for using large language models, making it accessible to everyone. It provides fully automated dependency and model management. You simply need to click and run, following the instructions, to deploy a local large language model. The tool itself is very compact and only requires a single executable file for one-click deployment.\nAdditionally, this tool offers an interface that is fully compatible with the OpenAI API. This means you can use any ChatGPT client as a client for RWKV, enabling capability expansion beyond just chat functionality.\n### Preset Configuration Rules at the Bottom\nThis tool comes with a series of preset configurations to reduce complexity. The naming rules for each configuration represent the following in order: device - required VRAM/memory - model size - model language.\nFor example, \"GPU-8G-3B-EN\" indicates that this configuration is for a graphics card with 8GB of VRAM, a model size of 3 billion parameters, and it uses an English language model.\nLarger model sizes have higher performance and VRAM requirements. Among configurations with the same model size, those with higher VRAM usage will have faster runtime.\nFor example, if you have 12GB of VRAM but running the \"GPU-12G-7B-EN\" configuration is slow, you can downgrade to \"GPU-8G-3B-EN\" for a significant speed improvement.\n### About RWKV\nRWKV is an RNN with Transformer-level LLM performance, which can also be directly trained like a GPT transformer (parallelizable). And it's 100% attention-free. You only need the hidden state at position t to compute the state at position t+1. You can use the \"GPT\" mode to quickly compute the hidden state for the \"RNN\" mode.<br/>So it's combining the best of RNN and transformer - great performance, fast inference, saves VRAM, fast training, \"infinite\" ctx_len, and free sentence embedding (using the final hidden state).",
"zh": "RWKV是一个开源且允许商用的大语言模型灵活性很高且极具发展潜力。\n### 关于本工具\n本工具旨在降低大语言模型的使用门槛做到人人可用本工具提供了全自动化的依赖和模型管理你只需要直接点击运行跟随引导即可完成本地大语言模型的部署工具本身体积极小只需要一个exe即可完成一键部署。\n此外本工具提供了与OpenAI API完全兼容的接口这意味着你可以把任意ChatGPT客户端用作RWKV的客户端实现能力拓展而不局限于聊天。\n### 底部的预设配置规则\n本工具内置了一系列预设配置以降低使用难度每个配置名的规则依次代表着设备-所需显存/内存-模型规模-模型语言。\n例如GPU-8G-3B-CN表示该配置用于显卡需要8G显存模型规模为30亿参数使用的是中文模型。\n模型规模越大性能要求越高显存要求也越高而同样模型规模的配置中显存占用越高的运行速度越快。\n例如当你有12G显存但运行GPU-12G-7B-CN配置速度比较慢可降级成GPU-8G-3B-CN将会大幅提速。\n### 关于RWKV\nRWKV是具有Transformer级别LLM性能的RNN也可以像GPT Transformer一样直接进行训练可并行化。而且它是100% attention-free的。你只需在位置t处获得隐藏状态即可计算位置t + 1处的状态。你可以使用“GPT”模式快速计算用于“RNN”模式的隐藏状态。\n因此它将RNN和Transformer的优点结合起来 - 高性能、快速推理、节省显存、快速训练、“无限”上下文长度以及免费的语句嵌入(使用最终隐藏状态)。"
@@ -15,6 +15,19 @@
}
],
"models": [
{
"name": "RWKV-5-World-1B5-v2-20231025-ctx4096.pth",
"desc": {
"en": "RWKV-5 Global Languages 1.5B v2",
"zh": "RWKV-5 全球语言 1.5B v2",
"ja": "RWKV-5 グローバル言語 1.5B v2"
},
"size": 3155590194,
"SHA256": "5a89f56be7f82ab9dd0835af9a6838f788477471616c02f7b041e3aea0c57435",
"lastUpdated": "2023-10-26T05:49:30",
"url": "https://huggingface.co/BlinkDL/rwkv-5-world/blob/main/RWKV-5-World-1B5-v2-20231025-ctx4096.pth",
"downloadUrl": "https://huggingface.co/BlinkDL/rwkv-5-world/resolve/main/RWKV-5-World-1B5-v2-20231025-ctx4096.pth"
},
{
"name": "RWKV-4-World-CHNtuned-0.1B-v1-20230617-ctx4096.pth",
"desc": {
@@ -301,6 +314,58 @@
"url": "https://huggingface.co/BlinkDL/rwkv-4-world/blob/main/RWKV-4-World-7B-v1-20230626-ctx4096.pth",
"downloadUrl": "https://huggingface.co/BlinkDL/rwkv-4-world/resolve/main/RWKV-4-World-7B-v1-20230626-ctx4096.pth"
},
{
"name": "RWKV-claude-4-World-7B-20230805-ctx65k.pth",
"desc": {
"en": "Global Languages 7B v1 Ctx65k Claude Like",
"zh": "全球语言 7B v1 65k上下文 Claude功能",
"ja": "グローバル言語 7B v1 65kコンテキスト Claude機能"
},
"size": 15035391533,
"SHA256": "8cd25f8a1ab58965993cc47b3b2f99585836eed008a2e44526c258189ea751a6",
"lastUpdated": "2023-08-05T08:52:20",
"url": "https://huggingface.co/xiaol/RWKV-claude-4-World-7B-65k/blob/main/RWKV-claude-4-World-7B-20230805-ctx65k.pth",
"downloadUrl": "https://huggingface.co/xiaol/RWKV-claude-4-World-7B-65k/resolve/main/RWKV-claude-4-World-7B-20230805-ctx65k.pth"
},
{
"name": "RWKV-toolformer-translation-japanese-chinese-english-7B-World-20230815-ctx128k.pth",
"desc": {
"en": "Global Languages 7B v1 Ctx128k Toolformer",
"zh": "全球语言 7B v1 128k上下文 Toolformer",
"ja": "グローバル言語 7B v1 128kコンテキスト Toolformer"
},
"size": 15035391533,
"SHA256": "648a3b21055bdab77021ce278da80fbada8dcaae0b3d41d1eca9aa194c1fd25f",
"lastUpdated": "2023-08-15T07:18:23",
"url": "https://huggingface.co/xiaol/RWKV-toolformer-translation-japanese-chinese-english-7B-World-128k/blob/main/RWKV-toolformer-translation-japanese-chinese-english-7B-World-20230815-ctx128k.pth",
"downloadUrl": "https://huggingface.co/xiaol/RWKV-toolformer-translation-japanese-chinese-english-7B-World-128k/resolve/main/RWKV-toolformer-translation-japanese-chinese-english-7B-World-20230815-ctx128k.pth"
},
{
"name": "RWKV-code-4-World-7B-20230820-ctx32k.pth",
"desc": {
"en": "Global Languages 7B v1 Ctx32k Code Ability",
"zh": "全球语言 7B v1 32k上下文 代码能力",
"ja": "グローバル言語 7B v1 32kコンテキスト コード能力"
},
"size": 15035391533,
"SHA256": "19666620437ae3a5fb06e16a52729d67e449fca155fab3d5861ffe9ecf247404",
"lastUpdated": "2023-08-20T05:00:17",
"url": "https://huggingface.co/xiaol/RWKV-Code-7B-world-32k/blob/main/RWKV-code-4-World-7B-20230820-ctx32k.pth",
"downloadUrl": "https://huggingface.co/xiaol/RWKV-Code-7B-world-32k/resolve/main/RWKV-code-4-World-7B-20230820-ctx32k.pth"
},
{
"name": "wizard-rwkv-4-world-ctx32k.pth",
"desc": {
"en": "Global Languages 7B v1 Ctx32k Wikipedia",
"zh": "全球语言 7B v1 32k上下文 维基百科",
"ja": "グローバル言語 7B v1 32kコンテキスト ウィキペディア"
},
"size": 15035391538,
"SHA256": "c5d991f315a1676d4bed93dd91f803b1376096e7a4af5bf72b339d055f53bac7",
"lastUpdated": "2023-07-29T03:21:47",
"url": "https://huggingface.co/xiaol/wizard-rwkv-world-7B-ctx32k/blob/main/wizard-rwkv-4-world-ctx32k.pth",
"downloadUrl": "https://huggingface.co/xiaol/wizard-rwkv-world-7B-ctx32k/resolve/main/wizard-rwkv-4-world-ctx32k.pth"
},
{
"name": "RWKV-4-World-CHNtuned-7B-v1-20230709-ctx4096.pth",
"desc": {
@@ -327,6 +392,45 @@
"url": "https://huggingface.co/xiaol/readflow-rwkv-4-world-ctx32k/blob/main/Readflow-RWKV-4-World-CHNtuned-7B-v1-20230709-ctx32k.pth",
"downloadUrl": "https://huggingface.co/xiaol/readflow-rwkv-4-world-ctx32k/resolve/main/Readflow-RWKV-4-World-CHNtuned-7B-v1-20230709-ctx32k.pth"
},
{
"name": "novel-RWKV-4-World-CHNtuned-7B-v1-20230709-ctx32k.pth",
"desc": {
"en": "Global Languages 7B v1 Enhanced Chinese Ctx32k Novel Outline Ability",
"zh": "全球语言 7B v1 中文增强 32k上下文 小说大纲扩写",
"ja": "グローバル言語 7B v1 中国語強化 32kコンテキスト 小説のあらすじを書く"
},
"size": 15035391538,
"SHA256": "0fe2415ce61af52a8c38c071b475c01b4c9f8a4f2b4aaed6181f0334f3faf7f4",
"lastUpdated": "2023-07-28T13:30:59",
"url": "https://huggingface.co/xiaol/ruotangwx-rwkv-7b-novel-32k/blob/main/novel-RWKV-4-World-CHNtuned-7B-v1-20230709-ctx32k.pth",
"downloadUrl": "https://huggingface.co/xiaol/ruotangwx-rwkv-7b-novel-32k/resolve/main/novel-RWKV-4-World-CHNtuned-7B-v1-20230709-ctx32k.pth"
},
{
"name": "chatgal-RWKV-4-World-CHNtuned-7B-v1-20230709-ctx32k-1000.pth",
"desc": {
"en": "Global Languages 7B v1 Enhanced Chinese Ctx32k GalGame 1000",
"zh": "全球语言 7B v1 中文增强 32k上下文 GalGame 1000",
"ja": "グローバル言語 7B v1 中国語強化 32kコンテキスト GalGame 1000"
},
"size": 15035391543,
"SHA256": "aaed29cfd1bddee47c48f564aa800eb001f62fd03290d772647d5678e40d66e8",
"lastUpdated": "2023-07-21T08:59:18",
"url": "https://huggingface.co/xiaol/chatgal-rwkv-7b-world-32k/blob/main/chatgal-RWKV-4-World-CHNtuned-7B-v1-20230709-ctx32k-1000.pth",
"downloadUrl": "https://huggingface.co/xiaol/chatgal-rwkv-7b-world-32k/resolve/main/chatgal-RWKV-4-World-CHNtuned-7B-v1-20230709-ctx32k-1000.pth"
},
{
"name": "chatgal-RWKV-4-World-CHNtuned-7B-v1-20230709-ctx32k-500.pth",
"desc": {
"en": "Global Languages 7B v1 Enhanced Chinese Ctx32k GalGame 500",
"zh": "全球语言 7B v1 中文增强 32k上下文 GalGame 500",
"ja": "グローバル言語 7B v1 中国語強化 32kコンテキスト GalGame 500"
},
"size": 15035391538,
"SHA256": "b5d347d5dedb4f398ec31489ab87b75b1dee772ae7d0a34c26635cf5d95c8794",
"lastUpdated": "2023-07-21T07:31:05",
"url": "https://huggingface.co/xiaol/chatgal-rwkv-7b-world-32k/blob/main/chatgal-RWKV-4-World-CHNtuned-7B-v1-20230709-ctx32k-500.pth",
"downloadUrl": "https://huggingface.co/xiaol/chatgal-rwkv-7b-world-32k/resolve/main/chatgal-RWKV-4-World-CHNtuned-7B-v1-20230709-ctx32k-500.pth"
},
{
"name": "RWKV-4-World-JPNtuned-7B-v1-20230718-ctx4096.pth",
"desc": {
@@ -340,6 +444,19 @@
"url": "https://huggingface.co/BlinkDL/rwkv-4-world/blob/main/RWKV-4-World-JPNtuned-7B-v1-20230718-ctx4096.pth",
"downloadUrl": "https://huggingface.co/BlinkDL/rwkv-4-world/resolve/main/RWKV-4-World-JPNtuned-7B-v1-20230718-ctx4096.pth"
},
{
"name": "RWKV-novel-4-World-7B-20230810-ctx128k.pth",
"desc": {
"en": "Global Languages Writer 7B v1 Ctx128k",
"zh": "全球语言写作 7B v1 128k上下文",
"ja": "グローバル言語ライター 7B v1 128kコンテキスト"
},
"size": 15035391533,
"SHA256": "5e429c49e4cab2f29a93f87a80635422c8710d70e5b1d962c078e47d957389c8",
"lastUpdated": "2023-08-10T06:30:32",
"url": "https://huggingface.co/xiaol/rwkv-7B-world-novel-128k/blob/main/RWKV-novel-4-World-7B-20230810-ctx128k.pth",
"downloadUrl": "https://huggingface.co/xiaol/rwkv-7B-world-novel-128k/resolve/main/RWKV-novel-4-World-7B-20230810-ctx128k.pth"
},
{
"name": "RWKV-4-Novel-7B-v1-ChnEng-ChnPro-20230410-ctx4096.pth",
"desc": {
@@ -403,8 +520,8 @@
{
"name": "RWKV-4-Raven-1B5-v11-Eng99%-Other1%-20230425-ctx4096.pth",
"desc": {
"en": "English 1.5B v11",
"zh": "英文 1.5B v11"
"en": "English 1.5B v11 (Old Model)",
"zh": "英文 1.5B v11 (旧模型)"
},
"size": 3030279730,
"SHA256": "4ac715aecc5b1c90e8e37eebb8163392699066ec23b18144416e91cb4e78675a",
@@ -416,8 +533,8 @@
{
"name": "RWKV-4-Raven-1B5-v12-Eng98%-Other2%-20230520-ctx4096.pth",
"desc": {
"en": "English 1B5 v12",
"zh": "英文 1B5 v12"
"en": "English 1B5 v12 (Old Model)",
"zh": "英文 1B5 v12 (旧模型)"
},
"size": 3030279730,
"SHA256": "6bbbffb3ee2372dfa9ef49c599e9a2bc0a01b94b6a264ba9bf5bd524fc38f723",
@@ -428,8 +545,8 @@
{
"name": "RWKV-4-Raven-3B-v11-Eng99%-Other1%-20230425-ctx4096.pth",
"desc": {
"en": "English 3B v11",
"zh": "英文 3B v11"
"en": "English 3B v11 (Old Model)",
"zh": "英文 3B v11 (旧模型)"
},
"size": 5969345074,
"SHA256": "982ad3d794efe58992db23c6d694c57a9e62d54718264ec6d6acfae5eb0eea12",
@@ -441,8 +558,8 @@
{
"name": "RWKV-4-Raven-3B-v12-Eng98%-Other2%-20230520-ctx4096.pth",
"desc": {
"en": "English 3B v12",
"zh": "英文 3B v12"
"en": "English 3B v12 (Old Model)",
"zh": "英文 3B v12 (旧模型)"
},
"size": 5969345074,
"SHA256": "1eea1845acfe9729dfdaec66a8d1aeb91a1287d94bebbca5529c13c050540b33",
@@ -453,8 +570,8 @@
{
"name": "RWKV-4-Raven-3B-v11-Eng49%-Chn49%-Jpn1%-Other1%-20230429-ctx4096.pth",
"desc": {
"en": "Chinese 3B v11",
"zh": "中文 3B v11"
"en": "Chinese 3B v11 (Old Model)",
"zh": "中文 3B v11 (旧模型)"
},
"size": 5969345074,
"SHA256": "af12300d9875e0e166c23d6e9b20928db435073060bf1d36f874060de92ada98",
@@ -466,8 +583,8 @@
{
"name": "RWKV-4-Raven-3B-v12-Eng49%-Chn49%-Jpn1%-Other1%-20230527-ctx4096.pth",
"desc": {
"en": "Chinese 3B v12",
"zh": "中文 3B v12"
"en": "Chinese 3B v12 (Old Model)",
"zh": "中文 3B v12 (旧模型)"
},
"size": 5969345330,
"SHA256": "c0abb4b745ba3523b9d8b3e1293110867ee55b1ef3dc8c122212f78396755721",
@@ -478,8 +595,8 @@
{
"name": "RWKV-4-Raven-7B-v11x-Eng99%-Other1%-20230429-ctx8192.pth",
"desc": {
"en": "English 7B v11x",
"zh": "英文 7B v11x"
"en": "English 7B v11x (Old Model)",
"zh": "英文 7B v11x (旧模型)"
},
"size": 14785389874,
"SHA256": "f00d5c75b453f2b20ad875fb5a324564c34024eea25a015f5eb441e4f364c3fe",
@@ -491,8 +608,8 @@
{
"name": "RWKV-4-Raven-7B-v12-Eng98%-Other2%-20230521-ctx8192.pth",
"desc": {
"en": "English 7B v12",
"zh": "英文 7B v12"
"en": "English 7B v12 (Old Model)",
"zh": "英文 7B v12 (旧模型)"
},
"size": 14785389618,
"SHA256": "5a725eaeb9e09b724de6c97e6845dd0283097c7920acd05b46852ab7afa9ec32",
@@ -503,8 +620,8 @@
{
"name": "RWKV-4-Raven-7B-v10x-Eng49%-Chn50%-Other1%-20230423-ctx4096.pth",
"desc": {
"en": "Chinese 7B v10x",
"zh": "中文 7B v10x"
"en": "Chinese 7B v10x (Old Model)",
"zh": "中文 7B v10x (旧模型)"
},
"size": 14785389874,
"SHA256": "7aaf40bb3d440a949db3a146b0a5bbb3e925942b496775b51f5630a582fc236d",
@@ -516,8 +633,8 @@
{
"name": "RWKV-4-Raven-7B-v11-Eng49%-Chn49%-Jpn1%-Other1%-20230430-ctx8192.pth",
"desc": {
"en": "Chinese 7B v11",
"zh": "中文 7B v11"
"en": "Chinese 7B v11 (Old Model)",
"zh": "中文 7B v11 (旧模型)"
},
"size": 14785389874,
"SHA256": "9e67a74964abcb4463711e447ddf47735561d7b40592d2d02b29d2e796a4fd14",
@@ -529,8 +646,8 @@
{
"name": "RWKV-4-Raven-7B-v12-Eng49%-Chn49%-Jpn1%-Other1%-20230530-ctx8192.pth",
"desc": {
"en": "Chinese 7B v12",
"zh": "中文 7B v12"
"en": "Chinese 7B v12 (Old Model)",
"zh": "中文 7B v12 (旧模型)"
},
"size": 14785389874,
"SHA256": "6d4a089ff36d5d9d96b669d425fc5e4e3959cab426535b52e2364df08f58b407",
@@ -541,8 +658,8 @@
{
"name": "RWKV-4-Raven-14B-v11x-Eng99%-Other1%-20230501-ctx8192.pth",
"desc": {
"en": "English 14B v11x",
"zh": "英文 14B v11x"
"en": "English 14B v11x (Old Model)",
"zh": "英文 14B v11x (旧模型)"
},
"size": 28297309490,
"SHA256": "c4bc72406c3c62613e8e2592e8d07ac045f8a88381c728f8eb60af890e299f4d",
@@ -554,8 +671,8 @@
{
"name": "RWKV-4-Raven-14B-v12-Eng98%-Other2%-20230523-ctx8192.pth",
"desc": {
"en": "English 14B v12",
"zh": "英文 14B v12"
"en": "English 14B v12 (Old Model)",
"zh": "英文 14B v12 (旧模型)"
},
"size": 28297309490,
"SHA256": "1193b5a9ceab572e4dbb9ed1d798eab7bf4793d18904d08bd4bf183579338ae7",
@@ -588,6 +705,32 @@
"lastUpdated": "2023-07-17T15:02:08",
"url": "https://huggingface.co/BlinkDL/rwkv-4-music/blob/main/RWKV-4-MIDI-560M-v1-20230717-ctx4096.pth",
"downloadUrl": "https://huggingface.co/BlinkDL/rwkv-4-music/resolve/main/RWKV-4-MIDI-560M-v1-20230717-ctx4096.pth"
},
{
"name": "RWKV-5-MIDI-120M-v1-20230728-ctx4096.pth",
"desc": {
"en": "RWKV-5 Music 120M v1",
"zh": "RWKV-5 作曲 120M v1",
"ja": "RWKV-5 作曲 120M v1"
},
"size": 245070513,
"SHA256": "c43d4a2ee7a71a331d05d6cd818dd75f7c48c716e4b98c58e4d27231614b0144",
"lastUpdated": "2023-07-29T02:17:27",
"url": "https://huggingface.co/BlinkDL/rwkv-5-music/blob/main/RWKV-5-MIDI-120M-v1-20230728-ctx4096.pth",
"downloadUrl": "https://huggingface.co/BlinkDL/rwkv-5-music/resolve/main/RWKV-5-MIDI-120M-v1-20230728-ctx4096.pth"
},
{
"name": "RWKV-5-MIDI-560M-v1-20230902-ctx4096.pth",
"desc": {
"en": "RWKV-5 Music 560M v1",
"zh": "RWKV-5 作曲 560M v1",
"ja": "RWKV-5 作曲 560M v1"
},
"size": 1179631346,
"SHA256": "cb4f2fd8956ca8496d6b2e33bff290c2047759b6fe74884903dbf9c73a11cc77",
"lastUpdated": "2023-09-03T04:48:41",
"url": "https://huggingface.co/BlinkDL/rwkv-5-music/blob/main/RWKV-5-MIDI-560M-v1-20230902-ctx4096.pth",
"downloadUrl": "https://huggingface.co/BlinkDL/rwkv-5-music/resolve/main/RWKV-5-MIDI-560M-v1-20230902-ctx4096.pth"
}
]
}
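Each model entry in the manifest carries a byte size and a SHA256 alongside two links: url points at the human-readable Hugging Face blob page, while downloadUrl swaps in /resolve/ for a direct download. Together the size and hash are enough to verify a download end-to-end. A sketch of such a check, using the values from the RWKV-5-World-1B5 entry above (illustrative only, not tool code):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

// verify checks a downloaded model file against the manifest's
// "size" and "SHA256" fields.
func verify(path string, wantSize int64, wantSHA string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	info, err := f.Stat()
	if err != nil {
		return err
	}
	if info.Size() != wantSize {
		return fmt.Errorf("size mismatch: got %d, want %d", info.Size(), wantSize)
	}

	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != wantSHA {
		return fmt.Errorf("sha256 mismatch: got %s", got)
	}
	return nil
}

func main() {
	// Values copied from the RWKV-5-World-1B5 manifest entry.
	fmt.Println(verify("RWKV-5-World-1B5-v2-20231025-ctx4096.pth",
		3155590194,
		"5a89f56be7f82ab9dd0835af9a6838f788477471616c02f7b041e3aea0c57435"))
}
```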