Compare commits

...

18 Commits

Author               SHA1        Message                                  Date
josc146              18d58ce124  release v1.6.1                           2023-12-12 22:38:18 +08:00
josc146              b8f8837a8f  allow overriding Core API URL            2023-12-12 22:37:36 +08:00
josc146              0c796c8cfc  allow playing mid with external player   2023-12-12 22:13:09 +08:00
josc146              b14fbc29b7  rwkv.cpp(ggml) support                   2023-12-12 20:29:55 +08:00
josc146              6e29f97881  update readme                            2023-12-11 17:23:09 +08:00
josc146              a164939161  add crash.log                            2023-12-11 12:02:24 +08:00
github-actions[bot]  09ab11ef01  release v1.6.0                           2023-12-10 15:46:50 +00:00
josc146              ac34edec7f  release v1.6.0                           2023-12-10 23:46:25 +08:00
josc146              6dd8ffa037  bump to wails v2.7.1                     2023-12-10 23:43:40 +08:00
josc146              eaed3f40a2  improve current instrument display       2023-12-10 23:37:23 +08:00
josc146              e48f39375e  add midi tracks to webUI                 2023-12-10 23:08:44 +08:00
josc146              9b7b651ef9  feat: import midi file                   2023-12-10 22:38:31 +08:00
josc146              b5623cb9c2  fix generation instrumentType            2023-12-10 22:32:06 +08:00
josc146              144d12b463  chore                                    2023-12-10 21:13:36 +08:00
josc146              fa452f5518  bump to wails v2.7.0                     2023-12-09 14:56:48 +08:00
josc146              a159d21d45  Update README_JA.md                      2023-12-09 13:09:53 +08:00
josc146              3a00bbf44d  update readme                            2023-12-09 12:56:15 +08:00
github-actions[bot]  9f5e94fa8f  release v1.5.9                           2023-12-08 11:22:21 +00:00
47 changed files with 1826 additions and 300 deletions

.gitattributes vendored (1 line changed)

@@ -3,6 +3,7 @@ backend-python/wkv_cuda_utils/** linguist-vendored
backend-python/get-pip.py linguist-vendored
backend-python/convert_model.py linguist-vendored
backend-python/convert_safetensors.py linguist-vendored
backend-python/convert_pytorch_to_ggml.py linguist-vendored
backend-python/utils/midi.py linguist-vendored
build/** linguist-vendored
finetune/lora/** linguist-vendored


@@ -65,7 +65,10 @@ jobs:
Copy-Item -Path "${{ steps.cp310.outputs.python-path }}/../libs" -Destination "py310/libs" -Recurse
./py310/python -m pip install cyac==1.9
go install github.com/wailsapp/wails/v2/cmd/wails@latest
del ./backend-python/rwkv_pip/cpp/librwkv.dylib
del ./backend-python/rwkv_pip/cpp/librwkv.so
(Get-Content -Path ./backend-golang/app.go) -replace "//go:custom_build windows ", "" | Set-Content -Path ./backend-golang/app.go
(Get-Content -Path ./backend-golang/utils.go) -replace "//go:custom_build windows ", "" | Set-Content -Path ./backend-golang/utils.go
make
Rename-Item -Path "build/bin/RWKV-Runner.exe" -NewName "RWKV-Runner_windows_x64.exe"
@@ -93,6 +96,8 @@ jobs:
rm ./backend-python/rwkv_pip/rwkv6.pyd
rm ./backend-python/rwkv_pip/beta/wkv_cuda.pyd
rm ./backend-python/get-pip.py
rm ./backend-python/rwkv_pip/cpp/librwkv.dylib
rm ./backend-python/rwkv_pip/cpp/rwkv.dll
make
mv build/bin/RWKV-Runner build/bin/RWKV-Runner_linux_x64
@@ -117,6 +122,8 @@ jobs:
rm ./backend-python/rwkv_pip/rwkv6.pyd
rm ./backend-python/rwkv_pip/beta/wkv_cuda.pyd
rm ./backend-python/get-pip.py
rm ./backend-python/rwkv_pip/cpp/rwkv.dll
rm ./backend-python/rwkv_pip/cpp/librwkv.so
make
cp build/darwin/Readme_Install.txt build/bin/Readme_Install.txt
cp build/bin/RWKV-Runner.app/Contents/MacOS/RWKV-Runner build/bin/RWKV-Runner_darwin_universal

.gitignore vendored (1 line changed)

@@ -8,6 +8,7 @@ __pycache__
*.st
*.safetensors
*.bin
*.mid
/config.json
/cache.json
/presets.json


@@ -1,17 +1,8 @@
## Changes
- add web-rwkv-converter (Safetensors Convert no longer depends on Python) (WebGPU Server 0.3.3)
- model tags classifier
- improve presets interaction
- better state cache
- better customCuda condition
- add python-3.10.11-embed-amd64.zip cnMirror
- always reset to activePreset
- for devices that gpu is not supported, use cpu to merge lora
- RWKV_RESCALE_LAYER 999 for music model
- disable hashed assets
- fix webWails undefined functions
- fix damaged logo
- rwkv.cpp(ggml) support
- allow playing mid with external player
- allow overriding Core API URL
- chore
## Install

README.md (113 lines changed)

@@ -21,7 +21,7 @@ English | [简体中文](README_ZH.md) | [日本語](README_JA.md)
[![MacOS][MacOS-image]][MacOS-url]
[![Linux][Linux-image]][Linux-url]
[FAQs](https://github.com/josStorer/RWKV-Runner/wiki/FAQs) | [Preview](#Preview) | [Download][download-url] | [Server-Deploy-Examples](https://github.com/josStorer/RWKV-Runner/tree/master/deploy-examples)
[FAQs](https://github.com/josStorer/RWKV-Runner/wiki/FAQs) | [Preview](#Preview) | [Download][download-url] | [Simple Deploy Example](#Simple-Deploy-Example) | [Server Deploy Examples](https://github.com/josStorer/RWKV-Runner/tree/master/deploy-examples) | [MIDI Hardware Input](#MIDI-Input)
[license-image]: http://img.shields.io/badge/license-MIT-blue.svg
@@ -57,20 +57,49 @@ English | [简体中文](README_ZH.md) | [日本語](README_JA.md)
## Features
- RWKV model management and one-click startup
- Fully compatible with the OpenAI API, making every ChatGPT client an RWKV client. After starting the model,
- RWKV model management and one-click startup.
- Front-end and back-end separation: if you don't want to use the client, it also allows separately deploying the
front-end service, the back-end inference service, or the back-end inference service with a WebUI.
[Simple Deploy Example](#Simple-Deploy-Example) | [Server Deploy Examples](https://github.com/josStorer/RWKV-Runner/tree/master/deploy-examples)
- Compatible with the OpenAI API, making every ChatGPT client an RWKV client. After starting the model,
open http://127.0.0.1:8000/docs to view more details.
- Automatic dependency installation, requiring only a lightweight executable program
- Configs with 2G to 32G VRAM are included, works well on almost all computers
- User-friendly chat and completion interaction interface included
- Easy-to-understand and operate parameter configuration
- Built-in model conversion tool
- Built-in download management and remote model inspection
- Built-in one-click LoRA Finetune
- Can also be used as an OpenAI ChatGPT and GPT-Playground client
- Multilingual localization
- Theme switching
- Automatic updates
- Automatic dependency installation, requiring only a lightweight executable program.
- Pre-set multi-level VRAM configs that work well on almost all computers. On the Configs page, switch Strategy to
WebGPU to also run on AMD, Intel, and other graphics cards.
- User-friendly chat, completion, and composition interaction interface included. Also supports chat presets, attachment
uploads, MIDI hardware input, and track editing.
[Preview](#Preview) | [MIDI Hardware Input](#MIDI-Input)
- Built-in WebUI option, one-click start of Web service, sharing your hardware resources.
- Easy-to-understand and operate parameter configuration, along with various operation guidance prompts.
- Built-in model conversion tool.
- Built-in download management and remote model inspection.
- Built-in one-click LoRA Finetune. (Windows Only)
- Can also be used as an OpenAI ChatGPT and GPT-Playground client. (Fill in the API URL and API Key on the Settings page)
- Multilingual localization.
- Theme switching.
- Automatic updates.
## Simple Deploy Example
```bash
git clone https://github.com/josStorer/RWKV-Runner
# Then
cd RWKV-Runner
python ./backend-python/main.py #The backend inference service has been started, request /switch-model API to load the model, refer to the API documentation: http://127.0.0.1:8000/docs
# Or
cd RWKV-Runner/frontend
npm ci
npm run build #Compile the frontend
cd ..
python ./backend-python/webui_server.py #Start the frontend service separately
# Or
python ./backend-python/main.py --webui #Start the frontend and backend service at the same time
# Help Info
python ./backend-python/main.py -h
```
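After the backend starts, a model still has to be loaded through the `/switch-model` API before completions work. A minimal client sketch, assuming the default port; the model path and strategy below are placeholders, not shipped defaults:

```python
import requests

# Hypothetical model path and strategy; adjust to your setup.
r = requests.post(
    "http://127.0.0.1:8000/switch-model",
    json={
        "model": "./models/RWKV-4-World-0.4B-v1-20230529-ctx4096.pth",
        "strategy": "cpu fp32",
    },
)
print(r.status_code, r.text)
```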
## API Concurrency Stress Testing
@@ -133,6 +162,48 @@ for i in np.argsort(embeddings_cos_sim)[::-1]:
print(f"{embeddings_cos_sim[i]:.10f} - {values[i]}")
```
## MIDI Input
Tip: You can download https://github.com/josStorer/sgm_plus and unzip it to the program's `assets/sound-font` directory
to use it as an offline sound source. Please note that if you are compiling the program from source code, do not place
it in the source code directory.
### USB MIDI Connection
- USB MIDI devices are plug-and-play, and you can select your input device in the Composition page
- ![image](https://github.com/josStorer/RWKV-Runner/assets/13366013/13bb92c3-4504-482d-ab82-026ac6c31095)
### Mac MIDI Bluetooth Connection
- For Mac users who want to use Bluetooth input,
please install [Bluetooth MIDI Connect](https://apps.apple.com/us/app/bluetooth-midi-connect/id1108321791), then click
the tray icon to connect after launching,
afterwards, you can select your input device in the Composition page.
- ![image](https://github.com/josStorer/RWKV-Runner/assets/13366013/c079a109-1e3d-45c1-bbf5-eed85da1550e)
### Windows MIDI Bluetooth Connection
- Windows seems to have implemented Bluetooth MIDI support only for UWP (Universal Windows Platform) apps. Therefore, it
requires multiple steps to establish a connection. We need to create a local virtual MIDI device and then launch a UWP
application. Through this UWP application, we will redirect Bluetooth MIDI input to the virtual MIDI device, and then
this software will listen to the input from the virtual MIDI device.
- So, first, you need to
download [loopMIDI](https://www.tobias-erichsen.de/wp-content/uploads/2020/01/loopMIDISetup_1_0_16_27.zip)
to create a virtual MIDI device. Click the plus sign in the bottom left corner to create the device.
- ![image](https://github.com/josStorer/RWKV-Runner/assets/13366013/b75998ff-115c-4ddd-b97c-deeb5c106255)
- Next, you need to download [Bluetooth LE Explorer](https://apps.microsoft.com/detail/9N0ZTKF1QD98) to discover and
connect to Bluetooth MIDI devices. Click "Start" to search for devices, and then click "Pair" to bind the MIDI device.
- ![image](https://github.com/josStorer/RWKV-Runner/assets/13366013/c142c3ea-a973-4531-9807-4c385d640a2b)
- Finally, you need to install [MIDIberry](https://apps.microsoft.com/detail/9N39720H2M05),
a UWP application that can redirect Bluetooth MIDI input to the virtual MIDI device. After launching it, double-click
your actual Bluetooth MIDI device name in the input field, and in the output field, double-click the virtual MIDI
device name we created earlier.
- ![image](https://github.com/josStorer/RWKV-Runner/assets/13366013/5ad6a1d9-4f68-4d95-ae17-4296107d1669)
- Now, you can select the virtual MIDI device as the input in the Composition page. Bluetooth LE Explorer no longer
needs to run, and you can also close the loopMIDI window; it will keep running in the background. Just keep
MIDIberry open.
- ![image](https://github.com/josStorer/RWKV-Runner/assets/13366013/1c371821-c7b7-4c18-8e42-9e315efbe427)
## Related Repositories:
- RWKV-4-World: https://huggingface.co/BlinkDL/rwkv-4-world/tree/main
@@ -146,27 +217,35 @@ for i in np.argsort(embeddings_cos_sim)[::-1]:
### Homepage
![image](https://github.com/josStorer/RWKV-Runner/assets/13366013/d7f24d80-f382-428d-8b28-edf87e1549e2)
![image](https://github.com/josStorer/RWKV-Runner/assets/13366013/c9b9cdd0-63f9-4319-9f74-5bf5d7df5a67)
### Chat
![image](https://github.com/josStorer/RWKV-Runner/assets/13366013/80009872-528f-4932-aeb2-f724fa892e7c)
![image](https://github.com/josStorer/RWKV-Runner/assets/13366013/e98c9038-3323-47b0-8edb-d639fafd37b2)
### Completion
![image](https://github.com/josStorer/RWKV-Runner/assets/13366013/bf49de8e-3b89-4543-b1ef-7cd4b19a1836)
### Composition
Tip: You can download https://github.com/josStorer/sgm_plus and unzip it to the program's `assets/sound-font` directory
to use it as an offline sound source. Please note that if you are compiling the program from source code, do not place
it in the source code directory.
![image](https://github.com/josStorer/RWKV-Runner/assets/13366013/e8ad908d-3fd2-4e92-bcdb-96815cb836ee)
![image](https://github.com/josStorer/RWKV-Runner/assets/13366013/b2ce4761-9e75-477e-a182-d0255fb8ac76)
### Configuration
![image](https://github.com/josStorer/RWKV-Runner/assets/13366013/48befdc6-e03c-4851-9bee-22f77ee2640e)
![image](https://github.com/josStorer/RWKV-Runner/assets/13366013/f41060dc-5517-44af-bb3f-8ef71720016d)
### Model Management
![image](https://github.com/josStorer/RWKV-Runner/assets/13366013/367fe4f8-cc12-475f-9371-3cf62cdbf293)
![image](https://github.com/josStorer/RWKV-Runner/assets/13366013/b1581147-a6ce-4493-8010-e33c0ddeca0a)
### Download Management


@@ -21,7 +21,7 @@
[![MacOS][MacOS-image]][MacOS-url]
[![Linux][Linux-image]][Linux-url]
[FAQs](https://github.com/josStorer/RWKV-Runner/wiki/FAQs) | [プレビュー](#Preview) | [ダウンロード][download-url] | [サーバーデプロイ例](https://github.com/josStorer/RWKV-Runner/tree/master/deploy-examples)
[FAQs](https://github.com/josStorer/RWKV-Runner/wiki/FAQs) | [プレビュー](#Preview) | [ダウンロード][download-url] | [シンプルなデプロイの例](#Simple-Deploy-Example) | [サーバーデプロイ例](https://github.com/josStorer/RWKV-Runner/tree/master/deploy-examples) | [MIDIハードウェア入力](#MIDI-Input)
[license-image]: http://img.shields.io/badge/license-MIT-blue.svg
@@ -58,20 +58,47 @@
## 特徴
- RWKV モデル管理とワンクリック起動
- OpenAI API と完全に互換性があり、すべての ChatGPT クライアントを RWKV クライアントにします。モデル起動後、
- フロントエンドとバックエンドの分離は、クライアントを使用しない場合でも、フロントエンドサービス、またはバックエンド推論サービス、またはWebUIを備えたバックエンド推論サービスを個別に展開することを可能にします。
[シンプルなデプロイの例](#Simple-Deploy-Example) | [サーバーデプロイ例](https://github.com/josStorer/RWKV-Runner/tree/master/deploy-examples)
- OpenAI API と互換性があり、すべての ChatGPT クライアントを RWKV クライアントにします。モデル起動後、
http://127.0.0.1:8000/docs を開いて詳細をご覧ください。
- 依存関係の自動インストールにより、軽量な実行プログラムのみを必要とします
- 2G から 32G の VRAM のコンフィグが含まれており、ほとんどのコンピュータで動作します
- ユーザーフレンドリーなチャット完成インタラクションインターフェースを搭載
- 分かりやすく操作しやすいパラメータ設定
- 事前設定された多段階のVRAM設定、ほとんどのコンピュータで動作します。配置ページで、ストラテジーをWebGPUに切り替えると、AMD、インテル、その他のグラフィックカードでも動作します
- ユーザーフレンドリーなチャット完成、および作曲インターフェイスが含まれています。また、チャットプリセット、添付ファイルのアップロード、MIDIハードウェア入力、トラック編集もサポートしています。
[プレビュー](#Preview) | [MIDIハードウェア入力](#MIDI-Input)
- 内蔵WebUIオプション、Webサービスのワンクリック開始、ハードウェアリソースの共有
- 分かりやすく操作しやすいパラメータ設定、各種操作ガイダンスプロンプトとともに
- 内蔵モデル変換ツール
- ダウンロード管理とリモートモデル検査機能内蔵
- 内蔵のLoRA微調整機能を搭載しています
- このプログラムは、OpenAI ChatGPTとGPT Playgroundのクライアントとしても使用できます
- 内蔵のLoRA微調整機能を搭載しています (Windowsのみ)
- このプログラムは、OpenAI ChatGPTとGPT Playgroundのクライアントとしても使用できます(設定ページで `API URL` と `API Key`
を入力してください)
- 多言語ローカライズ
- テーマ切り替え
- 自動アップデート
## Simple Deploy Example
```bash
git clone https://github.com/josStorer/RWKV-Runner
# Then
cd RWKV-Runner
python ./backend-python/main.py #The backend inference service has been started, request /switch-model API to load the model, refer to the API documentation: http://127.0.0.1:8000/docs
# Or
cd RWKV-Runner/frontend
npm ci
npm run build #Compile the frontend
cd ..
python ./backend-python/webui_server.py #Start the frontend service separately
# Or
python ./backend-python/main.py --webui #Start the frontend and backend service at the same time
# Help Info
python ./backend-python/main.py -h
```
## API 同時実行ストレステスト
```bash
@@ -134,6 +161,48 @@ for i in np.argsort(embeddings_cos_sim)[::-1]:
print(f"{embeddings_cos_sim[i]:.10f} - {values[i]}")
```
## MIDI Input
Tip: You can download https://github.com/josStorer/sgm_plus and unzip it to the program's `assets/sound-font` directory
to use it as an offline sound source. Please note that if you are compiling the program from source code, do not place
it in the source code directory.
### USB MIDI Connection
- USB MIDI devices are plug-and-play, and you can select your input device in the Composition page
- ![image](https://github.com/josStorer/RWKV-Runner/assets/13366013/13bb92c3-4504-482d-ab82-026ac6c31095)
### Mac MIDI Bluetooth Connection
- For Mac users who want to use Bluetooth input,
please install [Bluetooth MIDI Connect](https://apps.apple.com/us/app/bluetooth-midi-connect/id1108321791), then click
the tray icon to connect after launching,
afterwards, you can select your input device in the Composition page.
- ![image](https://github.com/josStorer/RWKV-Runner/assets/13366013/c079a109-1e3d-45c1-bbf5-eed85da1550e)
### Windows MIDI Bluetooth Connection
- Windows seems to have implemented Bluetooth MIDI support only for UWP (Universal Windows Platform) apps. Therefore, it
requires multiple steps to establish a connection. We need to create a local virtual MIDI device and then launch a UWP
application. Through this UWP application, we will redirect Bluetooth MIDI input to the virtual MIDI device, and then
this software will listen to the input from the virtual MIDI device.
- So, first, you need to
download [loopMIDI](https://www.tobias-erichsen.de/wp-content/uploads/2020/01/loopMIDISetup_1_0_16_27.zip)
to create a virtual MIDI device. Click the plus sign in the bottom left corner to create the device.
- ![image](https://github.com/josStorer/RWKV-Runner/assets/13366013/b75998ff-115c-4ddd-b97c-deeb5c106255)
- Next, you need to download [Bluetooth LE Explorer](https://apps.microsoft.com/detail/9N0ZTKF1QD98) to discover and
connect to Bluetooth MIDI devices. Click "Start" to search for devices, and then click "Pair" to bind the MIDI device.
- ![image](https://github.com/josStorer/RWKV-Runner/assets/13366013/c142c3ea-a973-4531-9807-4c385d640a2b)
- Finally, you need to install [MIDIberry](https://apps.microsoft.com/detail/9N39720H2M05),
a UWP application that can redirect Bluetooth MIDI input to the virtual MIDI device. After launching it, double-click
your actual Bluetooth MIDI device name in the input field, and in the output field, double-click the virtual MIDI
device name we created earlier.
- ![image](https://github.com/josStorer/RWKV-Runner/assets/13366013/5ad6a1d9-4f68-4d95-ae17-4296107d1669)
- Now, you can select the virtual MIDI device as the input in the Composition page. Bluetooth LE Explorer no longer
needs to run, and you can also close the loopMIDI window; it will keep running in the background. Just keep
MIDIberry open.
- ![image](https://github.com/josStorer/RWKV-Runner/assets/13366013/1c371821-c7b7-4c18-8e42-9e315efbe427)
## 関連リポジトリ:
- RWKV-4-World: https://huggingface.co/BlinkDL/rwkv-4-world/tree/main
@@ -143,31 +212,39 @@ for i in np.argsort(embeddings_cos_sim)[::-1]:
- RWKV-LM-LoRA: https://github.com/Blealtan/RWKV-LM-LoRA
- MIDI-LLM-tokenizer: https://github.com/briansemrau/MIDI-LLM-tokenizer
## プレビュー
## Preview
### ホームページ
![image](https://github.com/josStorer/RWKV-Runner/assets/13366013/d7f24d80-f382-428d-8b28-edf87e1549e2)
![image](https://github.com/josStorer/RWKV-Runner/assets/13366013/c9b9cdd0-63f9-4319-9f74-5bf5d7df5a67)
### チャット
![image](https://github.com/josStorer/RWKV-Runner/assets/13366013/80009872-528f-4932-aeb2-f724fa892e7c)
![image](https://github.com/josStorer/RWKV-Runner/assets/13366013/e98c9038-3323-47b0-8edb-d639fafd37b2)
### 補完
![image](https://github.com/josStorer/RWKV-Runner/assets/13366013/bf49de8e-3b89-4543-b1ef-7cd4b19a1836)
### 作曲
Tip: You can download https://github.com/josStorer/sgm_plus and unzip it to the program's `assets/sound-font` directory
to use it as an offline sound source. Please note that if you are compiling the program from source code, do not place
it in the source code directory.
![image](https://github.com/josStorer/RWKV-Runner/assets/13366013/e8ad908d-3fd2-4e92-bcdb-96815cb836ee)
![image](https://github.com/josStorer/RWKV-Runner/assets/13366013/b2ce4761-9e75-477e-a182-d0255fb8ac76)
### コンフィグ
![image](https://github.com/josStorer/RWKV-Runner/assets/13366013/48befdc6-e03c-4851-9bee-22f77ee2640e)
![image](https://github.com/josStorer/RWKV-Runner/assets/13366013/f41060dc-5517-44af-bb3f-8ef71720016d)
### モデル管理
![image](https://github.com/josStorer/RWKV-Runner/assets/13366013/367fe4f8-cc12-475f-9371-3cf62cdbf293)
![image](https://github.com/josStorer/RWKV-Runner/assets/13366013/b1581147-a6ce-4493-8010-e33c0ddeca0a)
### ダウンロード管理


@@ -61,14 +61,14 @@ API兼容的接口这意味着一切ChatGPT客户端都是RWKV客户端。
[简明服务部署示例](#Simple-Deploy-Example) | [服务器部署示例](https://github.com/josStorer/RWKV-Runner/tree/master/deploy-examples)
- 与OpenAI API兼容一切ChatGPT客户端都是RWKV客户端。启动模型后打开 http://127.0.0.1:8000/docs 查看API文档
- 全自动依赖安装,你只需要一个轻巧的可执行程序
- 预设多级显存配置几乎在各种电脑上工作良好。通过配置页面切换到WebGPU策略还可以在AMDIntel等显卡上运行
- 预设多级显存配置,几乎在各种电脑上工作良好。通过配置页面切换Strategy到WebGPU还可以在AMDIntel等显卡上运行
- 自带用户友好的聊天续写作曲交互页面。支持聊天预设附件上传MIDI硬件输入及音轨编辑。
[预览](#Preview) | [MIDI硬件输入](#MIDI-Input)
- 内置WebUI选项一键启动Web服务共享硬件资源
- 易于理解和操作的参数配置,及各类操作引导提示
- 内置模型转换工具
- 内置下载管理和远程模型检视
- 内置一键LoRA微调
- 内置一键LoRA微调 (仅限Windows)
- 也可用作 OpenAI ChatGPT 和 GPT Playground 客户端 (在设置内填写API URL和API Key)
- 多语言本地化
- 主题切换
@@ -230,7 +230,7 @@ for i in np.argsort(embeddings_cos_sim)[::-1]:
### 模型管理
![image](https://github.com/josStorer/RWKV-Runner/assets/13366013/6bbbecec-d000-457e-993d-27ff53881d09)
![image](https://github.com/josStorer/RWKV-Runner/assets/13366013/871f2d2a-7e41-4be7-9b32-be1b3e00dc3e)
### 下载管理


@@ -14,6 +14,13 @@ import (
wruntime "github.com/wailsapp/wails/v2/pkg/runtime"
)
func (a *App) SaveFile(path string, savedContent []byte) error {
if err := os.WriteFile(a.exDir+path, savedContent, 0644); err != nil {
return err
}
return nil
}
func (a *App) SaveJson(fileName string, jsonData any) error {
text, err := json.MarshalIndent(jsonData, "", " ")
if err != nil {
@@ -195,3 +202,8 @@ func (a *App) OpenFileFolder(path string, relative bool) error {
}
return errors.New("unsupported OS")
}
func (a *App) StartFile(path string) error {
_, err := CmdHelper(path)
return err
}


@@ -10,7 +10,7 @@ import (
"strings"
)
func (a *App) StartServer(python string, port int, host string, webui bool, rwkvBeta bool) (string, error) {
func (a *App) StartServer(python string, port int, host string, webui bool, rwkvBeta bool, rwkvcpp bool) (string, error) {
var err error
if python == "" {
python, err = GetPython()
@@ -25,6 +25,9 @@ func (a *App) StartServer(python string, port int, host string, webui bool, rwkv
if rwkvBeta {
args = append(args, "--rwkv-beta")
}
if rwkvcpp {
args = append(args, "--rwkv.cpp")
}
args = append(args, "--port", strconv.Itoa(port), "--host", host)
return Cmd(args...)
}
@@ -52,6 +55,21 @@ func (a *App) ConvertSafetensors(modelPath string, outPath string) (string, erro
return Cmd(args...)
}
func (a *App) ConvertGGML(python string, modelPath string, outPath string, Q51 bool) (string, error) {
var err error
if python == "" {
python, err = GetPython()
}
if err != nil {
return "", err
}
dataType := "FP16"
if Q51 {
dataType = "Q5_1"
}
return Cmd(python, "./backend-python/convert_pytorch_to_ggml.py", modelPath, outPath, dataType)
}
func (a *App) ConvertData(python string, input string, outputPrefix string, vocab string) (string, error) {
var err error
if python == "" {


@@ -15,33 +15,51 @@ import (
"runtime"
"strconv"
"strings"
"syscall"
)
func CmdHelper(args ...string) (*exec.Cmd, error) {
if runtime.GOOS != "windows" {
return nil, errors.New("unsupported OS")
}
filename := "./cmd-helper.bat"
_, err := os.Stat(filename)
if err != nil {
if err := os.WriteFile(filename, []byte("start %*"), 0644); err != nil {
return nil, err
}
}
cmdHelper, err := filepath.Abs(filename)
if err != nil {
return nil, err
}
if strings.Contains(cmdHelper, " ") {
for _, arg := range args {
if strings.Contains(arg, " ") {
return nil, errors.New("path contains space") // golang bug https://github.com/golang/go/issues/17149#issuecomment-473976818
}
}
}
cmd := exec.Command(cmdHelper, args...)
cmd.SysProcAttr = &syscall.SysProcAttr{}
//go:custom_build windows cmd.SysProcAttr.HideWindow = true
err = cmd.Start()
if err != nil {
return nil, err
}
return cmd, nil
}
func Cmd(args ...string) (string, error) {
switch platform := runtime.GOOS; platform {
case "windows":
if err := os.WriteFile("./cmd-helper.bat", []byte("start %*"), 0644); err != nil {
return "", err
}
cmdHelper, err := filepath.Abs("./cmd-helper")
cmd, err := CmdHelper(args...)
if err != nil {
return "", err
}
if strings.Contains(cmdHelper, " ") {
for _, arg := range args {
if strings.Contains(arg, " ") {
return "", errors.New("path contains space") // golang bug https://github.com/golang/go/issues/17149#issuecomment-473976818
}
}
}
cmd := exec.Command(cmdHelper, args...)
out, err := cmd.CombinedOutput()
if err != nil {
return "", err
}
return string(out), nil
cmd.Wait()
return "", nil
case "darwin":
ex, err := os.Executable()
if err != nil {

backend-python/convert_pytorch_to_ggml.py (new file, 169 lines)

@@ -0,0 +1,169 @@
# Converts an RWKV model checkpoint in PyTorch format to an rwkv.cpp compatible file.
# Usage: python convert_pytorch_to_ggml.py C:\RWKV-4-Pile-169M-20220807-8023.pth C:\rwkv.cpp-169M-FP16.bin FP16
# Get model checkpoints from https://huggingface.co/BlinkDL
# See FILE_FORMAT.md for the documentation on the file format.
import argparse
import struct
import torch
from typing import Dict
def parse_args():
parser = argparse.ArgumentParser(
description="Convert an RWKV model checkpoint in PyTorch format to an rwkv.cpp compatible file"
)
parser.add_argument("src_path", help="Path to PyTorch checkpoint file")
parser.add_argument(
"dest_path", help="Path to rwkv.cpp checkpoint file, will be overwritten"
)
parser.add_argument(
"data_type",
help="Data type, FP16, Q4_0, Q4_1, Q5_0, Q5_1, Q8_0",
type=str,
choices=[
"FP16",
"Q4_0",
"Q4_1",
"Q5_0",
"Q5_1",
"Q8_0",
],
default="FP16",
)
return parser.parse_args()
def get_layer_count(state_dict: Dict[str, torch.Tensor]) -> int:
n_layer: int = 0
while f"blocks.{n_layer}.ln1.weight" in state_dict:
n_layer += 1
assert n_layer > 0
return n_layer
def write_state_dict(
state_dict: Dict[str, torch.Tensor], dest_path: str, data_type: str
) -> None:
emb_weight: torch.Tensor = state_dict["emb.weight"]
n_layer: int = get_layer_count(state_dict)
n_vocab: int = emb_weight.shape[0]
n_embed: int = emb_weight.shape[1]
is_v5_1_or_2: bool = "blocks.0.att.ln_x.weight" in state_dict
is_v5_2: bool = "blocks.0.att.gate.weight" in state_dict
if is_v5_2:
print("Detected RWKV v5.2")
elif is_v5_1_or_2:
print("Detected RWKV v5.1")
else:
print("Detected RWKV v4")
with open(dest_path, "wb") as out_file:
is_FP16: bool = data_type == "FP16" or data_type == "float16"
out_file.write(
struct.pack(
# Disable padding with '='
"=iiiiii",
# Magic: 'ggmf' in hex
0x67676D66,
101,
n_vocab,
n_embed,
n_layer,
1 if is_FP16 else 0,
)
)
for k in state_dict.keys():
tensor: torch.Tensor = state_dict[k].float()
if ".time_" in k:
tensor = tensor.squeeze()
if is_v5_1_or_2:
if ".time_decay" in k:
if is_v5_2:
tensor = torch.exp(-torch.exp(tensor)).unsqueeze(-1)
else:
tensor = torch.exp(-torch.exp(tensor)).reshape(-1, 1, 1)
if ".time_first" in k:
tensor = torch.exp(tensor).reshape(-1, 1, 1)
if ".time_faaaa" in k:
tensor = tensor.unsqueeze(-1)
else:
if ".time_decay" in k:
tensor = -torch.exp(tensor)
# Keep 1-dim vectors and small matrices in FP32
if is_FP16 and len(tensor.shape) > 1 and ".time_" not in k:
tensor = tensor.half()
shape = tensor.shape
print(f"Writing {k}, shape {shape}, type {tensor.dtype}")
k_encoded: bytes = k.encode("utf-8")
out_file.write(
struct.pack(
"=iii",
len(shape),
len(k_encoded),
1 if tensor.dtype == torch.float16 else 0,
)
)
# Dimension order is reversed here:
# * PyTorch shape is (x rows, y columns)
# * ggml shape is (y elements in a row, x elements in a column)
# Both shapes represent the same tensor.
for dim in reversed(tensor.shape):
out_file.write(struct.pack("=i", dim))
out_file.write(k_encoded)
tensor.numpy().tofile(out_file)
def main() -> None:
args = parse_args()
print(f"Reading {args.src_path}")
state_dict: Dict[str, torch.Tensor] = torch.load(args.src_path, map_location="cpu")
temp_output: str = args.dest_path
if args.data_type.startswith("Q"):
import re
temp_output = re.sub(r"Q[4,5,8]_[0,1]", "fp16", temp_output)
write_state_dict(state_dict, temp_output, "FP16")
if args.data_type.startswith("Q"):
import sys
import os
sys.path.append(os.path.dirname(os.path.realpath(__file__)))
from rwkv_pip.cpp import rwkv_cpp_shared_library
library = rwkv_cpp_shared_library.load_rwkv_shared_library()
library.rwkv_quantize_model_file(temp_output, args.dest_path, args.data_type)
print("Done")
if __name__ == "__main__":
try:
main()
except Exception as e:
print(e)
with open("error.txt", "w") as f:
f.write(str(e))
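The file header written by `write_state_dict` is six packed little-endian int32 values. A small sketch, assuming only what the `struct.pack("=iiiiii", ...)` call above writes, that reads the header back for sanity-checking a converted file:

```python
import struct

def read_ggml_header(path: str) -> dict:
    # Mirrors struct.pack("=iiiiii", ...) in write_state_dict:
    # magic, format version, n_vocab, n_embed, n_layer, FP16 flag.
    with open(path, "rb") as f:
        magic, version, n_vocab, n_embed, n_layer, is_fp16 = struct.unpack(
            "=iiiiii", f.read(6 * 4)
        )
    assert magic == 0x67676D66, "not an rwkv.cpp ggml file ('ggmf' magic missing)"
    return {
        "version": version,
        "n_vocab": n_vocab,
        "n_embed": n_embed,
        "n_layer": n_layer,
        "fp16": bool(is_fp16),
    }
```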


@@ -32,6 +32,11 @@ def get_args(args: Union[Sequence[str], None] = None):
action="store_true",
help="whether to use rwkv-beta (default: False)",
)
group.add_argument(
"--rwkv.cpp",
action="store_true",
help="whether to use rwkv.cpp (default: False)",
)
args = parser.parse_args(args)
return args
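One subtlety of the flag added above: argparse converts hyphens in option names to underscores but keeps dots, so the value is stored under the literal attribute name `rwkv.cpp` and cannot be read with ordinary attribute access. A quick illustration:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--rwkv.cpp", action="store_true")
args = parser.parse_args(["--rwkv.cpp"])

# args.rwkv.cpp would be parsed as (args.rwkv).cpp and raise AttributeError;
# the value has to be fetched by name instead:
print(getattr(args, "rwkv.cpp"))  # True
```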


@@ -49,19 +49,13 @@ def switch_model(body: SwitchModelBody, response: Response, request: Request):
if body.model == "":
return "success"
STRATEGY_REGEX = r"^(?:(?:^|->) *(?:cuda(?::[\d]+)?|cpu|mps|dml) (?:fp(?:16|32)|bf16)(?:i8|i4|i3)?(?: \*[\d]+\+?)? *)+$"
if not re.match(STRATEGY_REGEX, body.strategy):
raise HTTPException(
Status.HTTP_400_BAD_REQUEST,
"Invalid strategy. Please read https://pypi.org/project/rwkv/",
)
devices = set(
[
x.strip().split(" ")[0].replace("cuda:0", "cuda")
for x in body.strategy.split("->")
]
)
print(f"Devices: {devices}")
print(f"Strategy Devices: {devices}")
# if len(devices) > 1:
# state_cache.disable_state_cache()
# else:
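A worked example of the device extraction above, using a two-stage strategy string that `STRATEGY_REGEX` accepts:

```python
strategy = "cuda:0 fp16 *20 -> cpu fp32"
devices = set(
    x.strip().split(" ")[0].replace("cuda:0", "cuda")
    for x in strategy.split("->")
)
print(devices)  # {'cuda', 'cpu'}
```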


@@ -1,6 +1,6 @@
import io
import global_var
from fastapi import APIRouter, HTTPException, status
from fastapi import APIRouter, HTTPException, UploadFile, status
from starlette.responses import StreamingResponse
from pydantic import BaseModel
from utils.midi import *
@@ -33,6 +33,16 @@ def text_to_midi(body: TextToMidiBody):
return StreamingResponse(mid_data, media_type="audio/midi")
@router.post("/midi-to-text", tags=["MIDI"])
async def midi_to_text(file_data: UploadFile):
vocab_config = "backend-python/utils/midi_vocab_config.json"
cfg = VocabConfig.from_json(vocab_config)
mid = mido.MidiFile(file=file_data.file)
text = convert_midi_to_str(cfg, mid)
return {"text": text}
class TxtToMidiBody(BaseModel):
txt_path: str
midi_path: str
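A client-side sketch for the new `/midi-to-text` endpoint, assuming the backend runs at the default address; the filename is a placeholder. Note that `file_data` matches the `UploadFile` parameter name above:

```python
import requests

with open("song.mid", "rb") as f:
    r = requests.post(
        "http://127.0.0.1:8000/midi-to-text",
        files={"file_data": ("song.mid", f, "audio/midi")},
    )
print(r.json()["text"])
```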


@@ -90,10 +90,15 @@ def add_state(body: AddStateBody):
try:
id: int = trie.insert(body.prompt)
devices: List[torch.device] = [tensor.device for tensor in body.state]
devices: List[torch.device] = [
(tensor.device if hasattr(tensor, "device") else torch.device("cpu"))
for tensor in body.state
]
dtrie[id] = {
"tokens": copy.deepcopy(body.tokens),
"state": [tensor.cpu() for tensor in body.state],
"state": [tensor.cpu() for tensor in body.state]
if hasattr(body.state[0], "device")
else copy.deepcopy(body.state),
"logits": copy.deepcopy(body.logits),
"devices": devices,
}
@@ -185,7 +190,9 @@ def longest_prefix_state(body: LongestPrefixStateBody, request: Request):
return {
"prompt": prompt,
"tokens": v["tokens"],
"state": [tensor.to(devices[i]) for i, tensor in enumerate(v["state"])],
"state": [tensor.to(devices[i]) for i, tensor in enumerate(v["state"])]
if hasattr(v["state"][0], "device")
else v["state"],
"logits": v["logits"],
}
else:
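The `hasattr(..., "device")` checks above are duck-typing: PyTorch tensors carry a `device` attribute, while the rwkv.cpp backend stores its state in NumPy arrays, which do not. A minimal demonstration:

```python
import numpy as np
import torch

t = torch.zeros(4)
a = np.zeros(4, dtype=np.float32)
print(hasattr(t, "device"), hasattr(a, "device"))  # True False
```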

backend-python/rwkv_pip/cpp/librwkv.dylib vendored (new binary file, not shown)

backend-python/rwkv_pip/cpp/librwkv.so vendored (new binary file, not shown)

backend-python/rwkv_pip/cpp/model.py vendored (new file, 14 lines)

@@ -0,0 +1,14 @@
from typing import Any, List
from . import rwkv_cpp_model
from . import rwkv_cpp_shared_library
class RWKV:
def __init__(self, model_path: str, strategy=None):
self.library = rwkv_cpp_shared_library.load_rwkv_shared_library()
self.model = rwkv_cpp_model.RWKVModel(self.library, model_path)
self.w = {} # fake weight
self.w["emb.weight"] = [0] * self.model.n_vocab
def forward(self, tokens: List[int], state: Any | None):
return self.model.eval_sequence_in_chunks(tokens, state, use_numpy=True)
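A usage sketch of this wrapper; the model path is hypothetical, and `forward` returns NumPy logits plus an opaque state to thread through subsequent calls:

```python
model = RWKV("./models/rwkv-169m-Q5_1.bin")  # hypothetical ggml model file
logits, state = model.forward([510, 444, 46], None)           # prime with a prompt
logits, state = model.forward([int(logits.argmax())], state)  # greedy next step
```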

backend-python/rwkv_pip/cpp/rwkv.dll vendored (new binary file, not shown)

backend-python/rwkv_pip/cpp/rwkv_cpp_model.py (new file, 369 lines)

@@ -0,0 +1,369 @@
import os
import multiprocessing
# Pre-import PyTorch, if available.
# This fixes "OSError: [WinError 127] The specified procedure could not be found".
try:
import torch
except ModuleNotFoundError:
pass
# I'm sure this is not strictly correct, but let's keep this crutch for now.
try:
import rwkv_cpp_shared_library
except ModuleNotFoundError:
from . import rwkv_cpp_shared_library
from typing import TypeVar, Optional, Tuple, List
# A value of this type is either a numpy's ndarray or a PyTorch's Tensor.
NumpyArrayOrPyTorchTensor: TypeVar = TypeVar('NumpyArrayOrPyTorchTensor')
class RWKVModel:
"""
An RWKV model managed by rwkv.cpp library.
"""
def __init__(
self,
shared_library: rwkv_cpp_shared_library.RWKVSharedLibrary,
model_path: str,
thread_count: int = max(1, multiprocessing.cpu_count() // 2),
gpu_layer_count: int = 0,
**kwargs
) -> None:
"""
Loads the model and prepares it for inference.
In case of any error, this method will throw an exception.
Parameters
----------
shared_library : RWKVSharedLibrary
rwkv.cpp shared library.
model_path : str
Path to RWKV model file in ggml format.
thread_count : int
Thread count to use. If not set, defaults to CPU count / 2.
gpu_layer_count : int
Count of layers to offload onto the GPU, must be >= 0.
See documentation of `gpu_offload_layers` for details about layer offloading.
"""
if 'gpu_layers_count' in kwargs:
gpu_layer_count = kwargs['gpu_layers_count']
assert os.path.isfile(model_path), f'{model_path} is not a file'
assert thread_count > 0, 'Thread count must be > 0'
assert gpu_layer_count >= 0, 'GPU layer count must be >= 0'
self._library: rwkv_cpp_shared_library.RWKVSharedLibrary = shared_library
self._ctx: rwkv_cpp_shared_library.RWKVContext = self._library.rwkv_init_from_file(model_path, thread_count)
if gpu_layer_count > 0:
self.gpu_offload_layers(gpu_layer_count)
self._state_buffer_element_count: int = self._library.rwkv_get_state_buffer_element_count(self._ctx)
self._logits_buffer_element_count: int = self._library.rwkv_get_logits_buffer_element_count(self._ctx)
self._valid: bool = True
def gpu_offload_layers(self, layer_count: int) -> bool:
"""
Offloads specified count of model layers onto the GPU. Offloaded layers are evaluated using cuBLAS or CLBlast.
For the purposes of this function, model head (unembedding matrix) is treated as an additional layer:
- pass `model.n_layer` to offload all layers except model head
- pass `model.n_layer + 1` to offload all layers, including model head
Returns true if at least one layer was offloaded.
If rwkv.cpp was compiled without cuBLAS and CLBlast support, this function is a no-op and always returns false.
Parameters
----------
layer_count : int
Count of layers to offload onto the GPU, must be >= 0.
"""
assert layer_count >= 0, 'Layer count must be >= 0'
return self._library.rwkv_gpu_offload_layers(self._ctx, layer_count)
@property
def n_vocab(self) -> int:
return self._library.rwkv_get_n_vocab(self._ctx)
@property
def n_embed(self) -> int:
return self._library.rwkv_get_n_embed(self._ctx)
@property
def n_layer(self) -> int:
return self._library.rwkv_get_n_layer(self._ctx)
def eval(
self,
token: int,
state_in: Optional[NumpyArrayOrPyTorchTensor],
state_out: Optional[NumpyArrayOrPyTorchTensor] = None,
logits_out: Optional[NumpyArrayOrPyTorchTensor] = None,
use_numpy: bool = False
) -> Tuple[NumpyArrayOrPyTorchTensor, NumpyArrayOrPyTorchTensor]:
"""
Evaluates the model for a single token.
In case of any error, this method will throw an exception.
Parameters
----------
token : int
Index of next token to be seen by the model. Must be in range 0 <= token < n_vocab.
state_in : Optional[NumpyArrayOrTorchTensor]
State from previous call of this method. If this is a first pass, set it to None.
state_out : Optional[NumpyArrayOrTorchTensor]
Optional output tensor for state. If provided, must be of type float32, contiguous and of shape (state_buffer_element_count).
logits_out : Optional[NumpyArrayOrTorchTensor]
Optional output tensor for logits. If provided, must be of type float32, contiguous and of shape (logits_buffer_element_count).
use_numpy : bool
If set to True, numpy's ndarrays will be created instead of PyTorch's Tensors.
This parameter is ignored if any tensor parameter is not None; in such case,
type of returned tensors will match the type of received tensors.
Returns
-------
logits, state
Logits vector of shape (n_vocab); state for the next step.
"""
assert self._valid, 'Model was freed'
use_numpy = self._detect_numpy_usage([state_in, state_out, logits_out], use_numpy)
if state_in is not None:
self._validate_tensor(state_in, 'state_in', self._state_buffer_element_count)
state_in_ptr = self._get_data_ptr(state_in)
else:
state_in_ptr = 0
if state_out is not None:
self._validate_tensor(state_out, 'state_out', self._state_buffer_element_count)
else:
state_out = self._zeros_float32(self._state_buffer_element_count, use_numpy)
if logits_out is not None:
self._validate_tensor(logits_out, 'logits_out', self._logits_buffer_element_count)
else:
logits_out = self._zeros_float32(self._logits_buffer_element_count, use_numpy)
self._library.rwkv_eval(
self._ctx,
token,
state_in_ptr,
self._get_data_ptr(state_out),
self._get_data_ptr(logits_out)
)
return logits_out, state_out
def eval_sequence(
self,
tokens: List[int],
state_in: Optional[NumpyArrayOrPyTorchTensor],
state_out: Optional[NumpyArrayOrPyTorchTensor] = None,
logits_out: Optional[NumpyArrayOrPyTorchTensor] = None,
use_numpy: bool = False
) -> Tuple[NumpyArrayOrPyTorchTensor, NumpyArrayOrPyTorchTensor]:
"""
Evaluates the model for a sequence of tokens.
NOTE ON GGML NODE LIMIT
ggml has a hard-coded limit on the maximum number of nodes in a computation graph. The sequence graph is built in a way that quickly exceeds
this limit when using large models and/or large sequence lengths.
Fortunately, rwkv.cpp's fork of ggml has increased limit which was tested to work for sequence lengths up to 64 for 14B models.
If you get `GGML_ASSERT: ...\\ggml.c:16941: cgraph->n_nodes < GGML_MAX_NODES`, this means you've exceeded the limit.
To get rid of the assertion failure, reduce the model size and/or sequence length.
In case of any error, this method will throw an exception.
Parameters
----------
tokens : List[int]
Indices of the next tokens to be seen by the model. Must be in range 0 <= token < n_vocab.
state_in : Optional[NumpyArrayOrTorchTensor]
State from previous call of this method. If this is a first pass, set it to None.
state_out : Optional[NumpyArrayOrTorchTensor]
Optional output tensor for state. If provided, must be of type float32, contiguous and of shape (state_buffer_element_count).
logits_out : Optional[NumpyArrayOrTorchTensor]
Optional output tensor for logits. If provided, must be of type float32, contiguous and of shape (logits_buffer_element_count).
use_numpy : bool
If set to True, numpy's ndarrays will be created instead of PyTorch's Tensors.
This parameter is ignored if any tensor parameter is not None; in such case,
type of returned tensors will match the type of received tensors.
Returns
-------
logits, state
Logits vector of shape (n_vocab); state for the next step.
"""
assert self._valid, 'Model was freed'
use_numpy = self._detect_numpy_usage([state_in, state_out, logits_out], use_numpy)
if state_in is not None:
self._validate_tensor(state_in, 'state_in', self._state_buffer_element_count)
state_in_ptr = self._get_data_ptr(state_in)
else:
state_in_ptr = 0
if state_out is not None:
self._validate_tensor(state_out, 'state_out', self._state_buffer_element_count)
else:
state_out = self._zeros_float32(self._state_buffer_element_count, use_numpy)
if logits_out is not None:
self._validate_tensor(logits_out, 'logits_out', self._logits_buffer_element_count)
else:
logits_out = self._zeros_float32(self._logits_buffer_element_count, use_numpy)
self._library.rwkv_eval_sequence(
self._ctx,
tokens,
state_in_ptr,
self._get_data_ptr(state_out),
self._get_data_ptr(logits_out)
)
return logits_out, state_out
def eval_sequence_in_chunks(
self,
tokens: List[int],
state_in: Optional[NumpyArrayOrPyTorchTensor],
state_out: Optional[NumpyArrayOrPyTorchTensor] = None,
logits_out: Optional[NumpyArrayOrPyTorchTensor] = None,
chunk_size: int = 16,
use_numpy: bool = False
) -> Tuple[NumpyArrayOrPyTorchTensor, NumpyArrayOrPyTorchTensor]:
"""
Evaluates the model for a sequence of tokens using `eval_sequence`, splitting a potentially long sequence into fixed-length chunks.
This function is useful for processing complete prompts and user input in chat & role-playing use-cases.
It is recommended to use this function instead of `eval_sequence` to avoid mistakes and get maximum performance.
Chunking allows processing sequences of thousands of tokens, while not reaching the ggml's node limit and not consuming too much memory.
A reasonable and recommended value of chunk size is 16. If you want maximum performance, try different chunk sizes in range [2..64]
and choose one that works the best in your use case.
In case of any error, this method will throw an exception.
Parameters
----------
tokens : List[int]
Indices of the next tokens to be seen by the model. Must be in range 0 <= token < n_vocab.
chunk_size : int
Size of each chunk in tokens, must be positive.
state_in : Optional[NumpyArrayOrTorchTensor]
State from previous call of this method. If this is a first pass, set it to None.
state_out : Optional[NumpyArrayOrTorchTensor]
Optional output tensor for state. If provided, must be of type float32, contiguous and of shape (state_buffer_element_count).
logits_out : Optional[NumpyArrayOrTorchTensor]
Optional output tensor for logits. If provided, must be of type float32, contiguous and of shape (logits_buffer_element_count).
use_numpy : bool
If set to True, numpy's ndarrays will be created instead of PyTorch's Tensors.
This parameter is ignored if any tensor parameter is not None; in such case,
type of returned tensors will match the type of received tensors.
Returns
-------
logits, state
Logits vector of shape (n_vocab); state for the next step.
"""
assert self._valid, 'Model was freed'
use_numpy = self._detect_numpy_usage([state_in, state_out, logits_out], use_numpy)
if state_in is not None:
self._validate_tensor(state_in, 'state_in', self._state_buffer_element_count)
state_in_ptr = self._get_data_ptr(state_in)
else:
state_in_ptr = 0
if state_out is not None:
self._validate_tensor(state_out, 'state_out', self._state_buffer_element_count)
else:
state_out = self._zeros_float32(self._state_buffer_element_count, use_numpy)
if logits_out is not None:
self._validate_tensor(logits_out, 'logits_out', self._logits_buffer_element_count)
else:
logits_out = self._zeros_float32(self._logits_buffer_element_count, use_numpy)
self._library.rwkv_eval_sequence_in_chunks(
self._ctx,
tokens,
chunk_size,
state_in_ptr,
self._get_data_ptr(state_out),
self._get_data_ptr(logits_out)
)
return logits_out, state_out
def free(self) -> None:
"""
Frees all allocated resources.
In case of any error, this method will throw an exception.
The object must not be used anymore after calling this method.
"""
assert self._valid, 'Already freed'
self._valid = False
self._library.rwkv_free(self._ctx)
def __del__(self) -> None:
# Free the context on GC in case user forgot to call free() explicitly.
if hasattr(self, '_valid') and self._valid:
self.free()
def _is_pytorch_tensor(self, tensor: NumpyArrayOrPyTorchTensor) -> bool:
return hasattr(tensor, '__module__') and tensor.__module__ == 'torch'
def _detect_numpy_usage(self, tensors: List[Optional[NumpyArrayOrPyTorchTensor]], use_numpy_by_default: bool) -> bool:
for tensor in tensors:
if tensor is not None:
return False if self._is_pytorch_tensor(tensor) else True
return use_numpy_by_default
def _validate_tensor(self, tensor: NumpyArrayOrPyTorchTensor, name: str, size: int) -> None:
if self._is_pytorch_tensor(tensor):
tensor: torch.Tensor = tensor
assert tensor.device == torch.device('cpu'), f'{name} is not on CPU'
assert tensor.dtype == torch.float32, f'{name} is not of type float32'
assert tensor.shape == (size,), f'{name} has invalid shape {tensor.shape}, expected ({size})'
assert tensor.is_contiguous(), f'{name} is not contiguous'
else:
import numpy as np
tensor: np.ndarray = tensor
assert tensor.dtype == np.float32, f'{name} is not of type float32'
assert tensor.shape == (size,), f'{name} has invalid shape {tensor.shape}, expected ({size})'
assert tensor.data.contiguous, f'{name} is not contiguous'
def _get_data_ptr(self, tensor: NumpyArrayOrPyTorchTensor):
if self._is_pytorch_tensor(tensor):
return tensor.data_ptr()
else:
return tensor.ctypes.data
def _zeros_float32(self, element_count: int, use_numpy: bool) -> NumpyArrayOrPyTorchTensor:
if use_numpy:
import numpy as np
return np.zeros(element_count, dtype=np.float32)
else:
return torch.zeros(element_count, dtype=torch.float32, device='cpu')
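A minimal evaluation sketch with `RWKVModel`; the model path is hypothetical, and fresh logits/state buffers are allocated whenever none are passed in:

```python
library = rwkv_cpp_shared_library.load_rwkv_shared_library()
model = RWKVModel(library, "./rwkv-169m-Q5_1.bin", thread_count=4)

logits, state = model.eval(0, None, use_numpy=True)   # first token, no state yet
logits, state = model.eval(1, state, use_numpy=True)  # feed the state back in
print(logits.shape)  # (n_vocab,)
model.free()
```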

backend-python/rwkv_pip/cpp/rwkv_cpp_shared_library.py (new file, 444 lines)

@@ -0,0 +1,444 @@
import os
import sys
import ctypes
import pathlib
import platform
from typing import Optional, List, Tuple, Callable
QUANTIZED_FORMAT_NAMES: Tuple[str, str, str, str, str] = (
'Q4_0',
'Q4_1',
'Q5_0',
'Q5_1',
'Q8_0'
)
P_FLOAT = ctypes.POINTER(ctypes.c_float)
P_INT = ctypes.POINTER(ctypes.c_int32)
class RWKVContext:
def __init__(self, ptr: ctypes.pointer) -> None:
self.ptr: ctypes.pointer = ptr
class RWKVSharedLibrary:
"""
Python wrapper around rwkv.cpp shared library.
"""
def __init__(self, shared_library_path: str) -> None:
"""
Loads the shared library from specified file.
In case of any error, this method will throw an exception.
Parameters
----------
shared_library_path : str
Path to rwkv.cpp shared library. On Windows, it would look like 'rwkv.dll'. On UNIX, 'rwkv.so'.
"""
# On Python 3.8 and newer, the custom dll must be loaded with winmode=0
# (see the ctypes notes below) to prevent loading failure errors.
# https://docs.python.org/3/whatsnew/3.8.html#ctypes
if platform.system().lower() == 'windows':
self.library = ctypes.CDLL(shared_library_path, winmode=0)
else:
self.library = ctypes.cdll.LoadLibrary(shared_library_path)
self.library.rwkv_init_from_file.argtypes = [ctypes.c_char_p, ctypes.c_uint32]
self.library.rwkv_init_from_file.restype = ctypes.c_void_p
self.library.rwkv_gpu_offload_layers.argtypes = [ctypes.c_void_p, ctypes.c_uint32]
self.library.rwkv_gpu_offload_layers.restype = ctypes.c_bool
self.library.rwkv_eval.argtypes = [
ctypes.c_void_p, # ctx
ctypes.c_int32, # token
P_FLOAT, # state_in
P_FLOAT, # state_out
P_FLOAT # logits_out
]
self.library.rwkv_eval.restype = ctypes.c_bool
self.library.rwkv_eval_sequence.argtypes = [
ctypes.c_void_p, # ctx
P_INT, # tokens
ctypes.c_size_t, # token count
P_FLOAT, # state_in
P_FLOAT, # state_out
P_FLOAT # logits_out
]
self.library.rwkv_eval_sequence.restype = ctypes.c_bool
self.library.rwkv_eval_sequence_in_chunks.argtypes = [
ctypes.c_void_p, # ctx
P_INT, # tokens
ctypes.c_size_t, # token count
ctypes.c_size_t, # chunk size
P_FLOAT, # state_in
P_FLOAT, # state_out
P_FLOAT # logits_out
]
self.library.rwkv_eval_sequence_in_chunks.restype = ctypes.c_bool
self.library.rwkv_get_n_vocab.argtypes = [ctypes.c_void_p]
self.library.rwkv_get_n_vocab.restype = ctypes.c_size_t
self.library.rwkv_get_n_embed.argtypes = [ctypes.c_void_p]
self.library.rwkv_get_n_embed.restype = ctypes.c_size_t
self.library.rwkv_get_n_layer.argtypes = [ctypes.c_void_p]
self.library.rwkv_get_n_layer.restype = ctypes.c_size_t
self.library.rwkv_get_state_buffer_element_count.argtypes = [ctypes.c_void_p]
self.library.rwkv_get_state_buffer_element_count.restype = ctypes.c_uint32
self.library.rwkv_get_logits_buffer_element_count.argtypes = [ctypes.c_void_p]
self.library.rwkv_get_logits_buffer_element_count.restype = ctypes.c_uint32
self.library.rwkv_free.argtypes = [ctypes.c_void_p]
self.library.rwkv_free.restype = None
self.library.rwkv_free.argtypes = [ctypes.c_void_p]
self.library.rwkv_free.restype = None
self.library.rwkv_quantize_model_file.argtypes = [ctypes.c_char_p, ctypes.c_char_p, ctypes.c_char_p]
self.library.rwkv_quantize_model_file.restype = ctypes.c_bool
self.library.rwkv_get_system_info_string.argtypes = []
self.library.rwkv_get_system_info_string.restype = ctypes.c_char_p
self.nullptr = ctypes.cast(0, ctypes.c_void_p)
def rwkv_init_from_file(self, model_file_path: str, thread_count: int) -> RWKVContext:
"""
Loads the model from a file and prepares it for inference.
Throws an exception in case of any error. Error messages would be printed to stderr.
Parameters
----------
model_file_path : str
Path to model file in ggml format.
thread_count : int
Count of threads to use, must be positive.
"""
ptr = self.library.rwkv_init_from_file(model_file_path.encode('utf-8'), ctypes.c_uint32(thread_count))
assert ptr is not None, 'rwkv_init_from_file failed, check stderr'
return RWKVContext(ptr)
def rwkv_gpu_offload_layers(self, ctx: RWKVContext, layer_count: int) -> bool:
"""
Offloads specified count of model layers onto the GPU. Offloaded layers are evaluated using cuBLAS or CLBlast.
For the purposes of this function, model head (unembedding matrix) is treated as an additional layer:
- pass `rwkv_get_n_layer(ctx)` to offload all layers except model head
- pass `rwkv_get_n_layer(ctx) + 1` to offload all layers, including model head
Returns true if at least one layer was offloaded.
If rwkv.cpp was compiled without cuBLAS and CLBlast support, this function is a no-op and always returns false.
Parameters
----------
ctx : RWKVContext
RWKV context obtained from rwkv_init_from_file.
layer_count : int
Count of layers to offload onto the GPU, must be >= 0.
"""
assert layer_count >= 0, 'Layer count must be >= 0'
return self.library.rwkv_gpu_offload_layers(ctx.ptr, ctypes.c_uint32(layer_count))
def rwkv_eval(
self,
ctx: RWKVContext,
token: int,
state_in_address: Optional[int],
state_out_address: int,
logits_out_address: int
) -> None:
"""
Evaluates the model for a single token.
Throws an exception in case of any error. Error messages would be printed to stderr.
Not thread-safe. For parallel inference, call rwkv_clone_context to create one rwkv_context for each thread.
Parameters
----------
ctx : RWKVContext
RWKV context obtained from rwkv_init_from_file.
token : int
Next token index, in range 0 <= token < n_vocab.
state_in_address : int
Address of the first element of a FP32 buffer of size rwkv_get_state_buffer_element_count; or None, if this is a first pass.
state_out_address : int
Address of the first element of a FP32 buffer of size rwkv_get_state_buffer_element_count. This buffer will be written to.
logits_out_address : int
Address of the first element of a FP32 buffer of size rwkv_get_logits_buffer_element_count. This buffer will be written to.
"""
assert self.library.rwkv_eval(
ctx.ptr,
ctypes.c_int32(token),
ctypes.cast(0 if state_in_address is None else state_in_address, P_FLOAT),
ctypes.cast(state_out_address, P_FLOAT),
ctypes.cast(logits_out_address, P_FLOAT)
), 'rwkv_eval failed, check stderr'
def rwkv_eval_sequence(
self,
ctx: RWKVContext,
tokens: List[int],
state_in_address: Optional[int],
state_out_address: int,
logits_out_address: int
) -> None:
"""
Evaluates the model for a sequence of tokens.
Uses a faster algorithm than `rwkv_eval` if you do not need the state and logits for every token. Best used with sequence lengths of 64 or so.
Has to build a computation graph on the first call for a given sequence, but will use this cached graph for subsequent calls of the same sequence length.
NOTE ON GGML NODE LIMIT
ggml has a hard-coded limit on the maximum number of nodes in a computation graph. The sequence graph is built in a way that quickly exceeds
this limit when using large models and/or large sequence lengths.
Fortunately, rwkv.cpp's fork of ggml has increased limit which was tested to work for sequence lengths up to 64 for 14B models.
If you get `GGML_ASSERT: ...\\ggml.c:16941: cgraph->n_nodes < GGML_MAX_NODES`, this means you've exceeded the limit.
To get rid of the assertion failure, reduce the model size and/or sequence length.
Not thread-safe. For parallel inference, call `rwkv_clone_context` to create one rwkv_context for each thread.
Throws an exception in case of any error. Error messages would be printed to stderr.
Parameters
----------
ctx : RWKVContext
RWKV context obtained from rwkv_init_from_file.
tokens : List[int]
Next token indices, in range 0 <= token < n_vocab.
state_in_address : int
Address of the first element of a FP32 buffer of size rwkv_get_state_buffer_element_count; or None, if this is a first pass.
state_out_address : int
Address of the first element of a FP32 buffer of size rwkv_get_state_buffer_element_count. This buffer will be written to.
logits_out_address : int
Address of the first element of a FP32 buffer of size rwkv_get_logits_buffer_element_count. This buffer will be written to.
"""
assert self.library.rwkv_eval_sequence(
ctx.ptr,
ctypes.cast((ctypes.c_int32 * len(tokens))(*tokens), P_INT),
ctypes.c_size_t(len(tokens)),
ctypes.cast(0 if state_in_address is None else state_in_address, P_FLOAT),
ctypes.cast(state_out_address, P_FLOAT),
ctypes.cast(logits_out_address, P_FLOAT)
), 'rwkv_eval_sequence failed, check stderr'
def rwkv_eval_sequence_in_chunks(
self,
ctx: RWKVContext,
tokens: List[int],
chunk_size: int,
state_in_address: Optional[int],
state_out_address: int,
logits_out_address: int
) -> None:
"""
Evaluates the model for a sequence of tokens using `rwkv_eval_sequence`, splitting a potentially long sequence into fixed-length chunks.
This function is useful for processing complete prompts and user input in chat & role-playing use-cases.
It is recommended to use this function instead of `rwkv_eval_sequence` to avoid mistakes and get maximum performance.
Chunking allows processing sequences of thousands of tokens, while not reaching the ggml's node limit and not consuming too much memory.
A reasonable and recommended value of chunk size is 16. If you want maximum performance, try different chunk sizes in range [2..64]
and choose one that works the best in your use case.
Not thread-safe. For parallel inference, call `rwkv_clone_context` to create one rwkv_context for each thread.
Throws an exception in case of any error. Error messages would be printed to stderr.
Parameters
----------
ctx : RWKVContext
RWKV context obtained from rwkv_init_from_file.
tokens : List[int]
Next token indices, in range 0 <= token < n_vocab.
chunk_size : int
Size of each chunk in tokens, must be positive.
state_in_address : int
Address of the first element of a FP32 buffer of size rwkv_get_state_buffer_element_count; or None, if this is a first pass.
state_out_address : int
Address of the first element of a FP32 buffer of size rwkv_get_state_buffer_element_count. This buffer will be written to.
logits_out_address : int
Address of the first element of a FP32 buffer of size rwkv_get_logits_buffer_element_count. This buffer will be written to.
"""
assert self.library.rwkv_eval_sequence_in_chunks(
ctx.ptr,
ctypes.cast((ctypes.c_int32 * len(tokens))(*tokens), P_INT),
ctypes.c_size_t(len(tokens)),
ctypes.c_size_t(chunk_size),
ctypes.cast(0 if state_in_address is None else state_in_address, P_FLOAT),
ctypes.cast(state_out_address, P_FLOAT),
ctypes.cast(logits_out_address, P_FLOAT)
), 'rwkv_eval_sequence_in_chunks failed, check stderr'
def rwkv_get_n_vocab(self, ctx: RWKVContext) -> int:
"""
Returns the number of tokens in the given model's vocabulary.
Useful for telling 20B_tokenizer models (n_vocab = 50277) apart from World models (n_vocab = 65536).
Parameters
----------
ctx : RWKVContext
RWKV context obtained from rwkv_init_from_file.
"""
return self.library.rwkv_get_n_vocab(ctx.ptr)
def rwkv_get_n_embed(self, ctx: RWKVContext) -> int:
"""
Returns the number of elements in the given model's embedding.
Useful for reading individual fields of a model's hidden state.
Parameters
----------
ctx : RWKVContext
RWKV context obtained from rwkv_init_from_file.
"""
return self.library.rwkv_get_n_embed(ctx.ptr)
def rwkv_get_n_layer(self, ctx: RWKVContext) -> int:
"""
Returns the number of layers in the given model.
A layer is a pair of RWKV and FFN operations, stacked multiple times throughout the model.
Embedding matrix and model head (unembedding matrix) are NOT counted in `n_layer`.
Useful for always offloading the entire model to GPU.
Parameters
----------
ctx : RWKVContext
RWKV context obtained from rwkv_init_from_file.
"""
return self.library.rwkv_get_n_layer(ctx.ptr)
def rwkv_get_state_buffer_element_count(self, ctx: RWKVContext) -> int:
"""
Returns count of FP32 elements in state buffer.
Parameters
----------
ctx : RWKVContext
RWKV context obtained from rwkv_init_from_file.
"""
return self.library.rwkv_get_state_buffer_element_count(ctx.ptr)
def rwkv_get_logits_buffer_element_count(self, ctx: RWKVContext) -> int:
"""
Returns count of FP32 elements in logits buffer.
Parameters
----------
ctx : RWKVContext
RWKV context obtained from rwkv_init_from_file.
"""
return self.library.rwkv_get_logits_buffer_element_count(ctx.ptr)
def rwkv_free(self, ctx: RWKVContext) -> None:
"""
Frees all allocated memory and the context.
Parameters
----------
ctx : RWKVContext
RWKV context obtained from rwkv_init_from_file.
"""
self.library.rwkv_free(ctx.ptr)
ctx.ptr = self.nullptr
def rwkv_quantize_model_file(self, model_file_path_in: str, model_file_path_out: str, format_name: str) -> None:
"""
Quantizes FP32 or FP16 model to one of INT4 formats.
Throws an exception in case of any error. Error messages would be printed to stderr.
Parameters
----------
model_file_path_in : str
Path to model file in ggml format, must be either FP32 or FP16.
model_file_path_out : str
Quantized model will be written here.
format_name : str
One of QUANTIZED_FORMAT_NAMES.
"""
assert format_name in QUANTIZED_FORMAT_NAMES, f'Unknown format name {format_name}, use one of {QUANTIZED_FORMAT_NAMES}'
assert self.library.rwkv_quantize_model_file(
model_file_path_in.encode('utf-8'),
model_file_path_out.encode('utf-8'),
format_name.encode('utf-8')
), 'rwkv_quantize_model_file failed, check stderr'
def rwkv_get_system_info_string(self) -> str:
"""
Returns system information string.
"""
return self.library.rwkv_get_system_info_string().decode('utf-8')
def load_rwkv_shared_library() -> RWKVSharedLibrary:
"""
Attempts to find the rwkv.cpp shared library and load it.
To specify the exact path to the library, create an instance of RWKVSharedLibrary explicitly.
"""
file_name: str
if 'win32' in sys.platform or 'cygwin' in sys.platform:
file_name = 'rwkv.dll'
elif 'darwin' in sys.platform:
file_name = 'librwkv.dylib'
else:
file_name = 'librwkv.so'
# Possible sub-paths to the library relative to the repo dir.
child_paths: List[Callable[[pathlib.Path], pathlib.Path]] = [
# No lookup for Debug config here.
# I assume that if a user wants to debug the library,
# they will be able to find the library and set the exact path explicitly.
lambda p: p / 'backend-python' / 'rwkv_pip' / 'cpp' / file_name,
lambda p: p / 'bin' / 'Release' / file_name,
lambda p: p / 'bin' / file_name,
# Some people prefer to build in the "build" subdirectory.
lambda p: p / 'build' / 'bin' / 'Release' / file_name,
lambda p: p / 'build' / 'bin' / file_name,
lambda p: p / 'build' / file_name,
# Fallback.
lambda p: p / file_name
]
working_dir: pathlib.Path = pathlib.Path(os.path.abspath(os.getcwd()))
parent_paths: List[pathlib.Path] = [
# Possible repo dirs relative to the working dir.
# ./python/rwkv_cpp
working_dir.parent.parent,
# ./python
working_dir.parent,
# .
working_dir,
# Repo dir relative to this Python file.
pathlib.Path(os.path.abspath(__file__)).parent.parent.parent
]
for parent_path in parent_paths:
for child_path in child_paths:
full_path: pathlib.Path = child_path(parent_path)
if os.path.isfile(full_path):
return RWKVSharedLibrary(str(full_path))
assert False, (f'Failed to find {file_name} automatically; '
f'you need to find the library and create RWKVSharedLibrary specifying the path to it')
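For reference, a minimal usage sketch of these bindings (not part of the diff): it assumes a ggml model file at `./model.bin` and that `rwkv_init_from_file`, defined earlier in this module outside the excerpt, takes a model path and a thread count as in upstream rwkv.cpp. Buffer addresses are taken from NumPy arrays via `.ctypes.data`.

```python
import numpy as np

library = load_rwkv_shared_library()
# Signature assumed from upstream rwkv.cpp: (model_path, thread_count).
ctx = library.rwkv_init_from_file('./model.bin', 4)

# Allocate state/logits buffers of exactly the sizes the model reports.
state = np.zeros(library.rwkv_get_state_buffer_element_count(ctx), dtype=np.float32)
logits = np.zeros(library.rwkv_get_logits_buffer_element_count(ctx), dtype=np.float32)

# First pass: state_in_address is None; later calls would feed `state` back in.
library.rwkv_eval_sequence_in_chunks(
    ctx,
    [510, 444, 46],       # token ids, each in 0 <= token < rwkv_get_n_vocab(ctx)
    16,                   # chunk_size
    None,                 # state_in_address
    state.ctypes.data,    # state_out_address
    logits.ctypes.data    # logits_out_address
)
library.rwkv_free(ctx)
```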

View File

@@ -78,12 +78,22 @@ class PIPELINE:
def decode(self, x):
return self.tokenizer.decode(x)
def np_softmax(self, x: np.ndarray, axis: int):
x -= x.max(axis=axis, keepdims=True)
e: np.ndarray = np.exp(x)
return e / e.sum(axis=axis, keepdims=True)
def sample_logits(self, logits, temperature=1.0, top_p=0.85, top_k=0):
probs = F.softmax(logits.float(), dim=-1)
np_logits = type(logits) == np.ndarray
if np_logits:
probs = self.np_softmax(logits, axis=-1)
else:
probs = F.softmax(logits.float(), dim=-1)
top_k = int(top_k)
# 'privateuseone' is the type of custom devices like `torch_directml.device()`
if probs.device.type in ["cpu", "privateuseone"]:
probs = probs.cpu().numpy()
if np_logits or probs.device.type in ["cpu", "privateuseone"]:
if not np_logits:
probs = probs.cpu().numpy()
sorted_ids = np.argsort(probs)
sorted_probs = probs[sorted_ids][::-1]
cumulative_probs = np.cumsum(sorted_probs)
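The hunk ends mid-function. For reference, here is a self-contained NumPy sketch of the nucleus (top-p) step the code above is building toward; the cutoff logic mirrors the usual RWKV pipeline behavior but is an illustration, not the file's exact continuation:

```python
import numpy as np

def np_softmax(x: np.ndarray, axis: int = -1) -> np.ndarray:
    x = x - x.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def sample_top_p(logits: np.ndarray, top_p: float = 0.85, temperature: float = 1.0) -> int:
    probs = np_softmax(logits)
    sorted_probs = np.sort(probs)[::-1]
    cumulative_probs = np.cumsum(sorted_probs)
    # Smallest probability still inside the top-p mass; everything below it is cut.
    cutoff = sorted_probs[np.argmax(cumulative_probs >= top_p)]
    probs[probs < cutoff] = 0
    if temperature != 1.0:
        probs = probs ** (1.0 / temperature)
    probs = probs / probs.sum()
    return int(np.random.choice(len(probs), p=probs))
```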

View File

@@ -510,15 +510,22 @@ def get_tokenizer(tokenizer_len: int):
def RWKV(model: str, strategy: str, tokenizer: Union[str, None]) -> AbstractRWKV:
rwkv_beta = global_var.get(global_var.Args).rwkv_beta
rwkv_cpp = getattr(global_var.get(global_var.Args), "rwkv.cpp")
if "midi" in model.lower() or "abc" in model.lower():
os.environ["RWKV_RESCALE_LAYER"] = "999"
# dynamic import to make RWKV_CUDA_ON work
if rwkv_beta:
print("Using rwkv-beta")
from rwkv_pip.beta.model import (
RWKV as Model,
)
elif rwkv_cpp:
print("Using rwkv.cpp, strategy is ignored")
from rwkv_pip.cpp.model import (
RWKV as Model,
)
else:
from rwkv_pip.model import (
RWKV as Model,

View File

@@ -128,7 +128,7 @@
"Chinese Kongfu": "中国武術",
"Allow external access to the API (service must be restarted)": "APIへの外部アクセスを許可する (サービスを再起動する必要があります)",
"Custom": "カスタム",
"CUDA (Beta, Faster)": "CUDA (ベータ、高速)",
"CUDA (Beta, Faster)": "CUDA (Beta, 高速)",
"Reset All Configs": "すべての設定をリセット",
"Cancel": "キャンセル",
"Confirm": "確認",
@@ -256,7 +256,7 @@
"Insert default system prompt at the beginning": "最初にデフォルトのシステムプロンプトを挿入",
"Format Content": "内容フォーマットの規格化",
"Add An Attachment (Accepts pdf, txt)": "添付ファイルを追加 (pdf, txtを受け付けます)",
"Uploading Attachment": "添付ファイルアップロード中",
"Processing Attachment": "添付ファイルを処理中",
"Remove Attachment": "添付ファイルを削除",
"The content of file": "ファイル",
"is as follows. When replying to me, consider the file content and respond accordingly:": "の内容は以下の通りです。私に返信する際は、ファイルの内容を考慮して適切に返信してください:",
@@ -311,5 +311,13 @@
"CN": "中国語",
"JP": "日本語",
"Music": "音楽",
"Other": "その他"
"Other": "その他",
"Import MIDI": "MIDIをインポート",
"Current Instrument": "現在の楽器",
"Please convert model to GGML format first": "モデルをGGML形式に変換してください",
"Convert To GGML Format": "GGML形式に変換",
"CPU (rwkv.cpp, Faster)": "CPU (rwkv.cpp, 高速)",
"Play With External Player": "外部プレーヤーで再生",
"Core API URL": "コアAPI URL",
"Override core API URL(/chat/completions and /completions). If you don't know what this is, leave it blank.": "コアAPI URLを上書きします(/chat/completions と /completions)。何であるかわからない場合は空白のままにしてください。"
}

View File

@@ -256,7 +256,7 @@
"Insert default system prompt at the beginning": "在开头自动插入默认系统提示",
"Format Content": "规范格式",
"Add An Attachment (Accepts pdf, txt)": "添加一个附件 (支持pdf, txt)",
"Uploading Attachment": "正在上传附件",
"Processing Attachment": "正在处理附件",
"Remove Attachment": "移除附件",
"The content of file": "文件",
"is as follows. When replying to me, consider the file content and respond accordingly:": "内容如下。回复时考虑文件内容并做出相应回复:",
@@ -311,5 +311,13 @@
"CN": "中文",
"JP": "日文",
"Music": "音乐",
"Other": "其他"
"Other": "其他",
"Import MIDI": "导入MIDI",
"Current Instrument": "当前乐器",
"Please convert model to GGML format first": "请先将模型转换为GGML格式",
"Convert To GGML Format": "转换为GGML格式",
"CPU (rwkv.cpp, Faster)": "CPU (rwkv.cpp, 更快)",
"Play With External Player": "使用外部播放器播放",
"Core API URL": "核心 API URL",
"Override core API URL(/chat/completions and /completions). If you don't know what this is, leave it blank.": "覆盖核心的 API URL (/chat/completions 和 /completions)。如果你不知道这是什么,请留空"
}

View File

@@ -17,7 +17,8 @@ import { ToolTipButton } from './ToolTipButton';
import { Play16Regular, Stop16Regular } from '@fluentui/react-icons';
import { useNavigate } from 'react-router';
import { WindowShow } from '../../wailsjs/runtime';
import { convertToSt } from '../utils/convert-to-st';
import { convertToGGML, convertToSt } from '../utils/convert-model';
import { Precision } from '../types/configs';
const mainButtonText = {
[ModelStatus.Offline]: 'Run',
@@ -47,6 +48,7 @@ export const RunButton: FC<{ onClickRun?: MouseEventHandler, iconMode?: boolean
const modelConfig = commonStore.getCurrentModelConfig();
const webgpu = modelConfig.modelParameters.device === 'WebGPU';
const cpp = modelConfig.modelParameters.device === 'CPU (rwkv.cpp)';
let modelName = '';
let modelPath = '';
if (modelConfig && modelConfig.modelParameters) {
@@ -112,6 +114,30 @@ export const RunButton: FC<{ onClickRun?: MouseEventHandler, iconMode?: boolean
return;
}
if (cpp) {
if (!['.bin'].some(ext => modelPath.endsWith(ext))) {
const precision: Precision = modelConfig.modelParameters.precision === 'Q5_1' ? 'Q5_1' : 'fp16';
const ggmlModelPath = modelPath.replace(/\.pth$/, `-${precision}.bin`);
if (await FileExists(ggmlModelPath)) {
modelPath = ggmlModelPath;
} else if (!await FileExists(modelPath)) {
showDownloadPrompt(t('Model file not found'), modelName);
commonStore.setStatus({ status: ModelStatus.Offline });
return;
} else if (!currentModelSource?.isComplete) {
showDownloadPrompt(t('Model file download is not complete'), modelName);
commonStore.setStatus({ status: ModelStatus.Offline });
return;
} else {
toastWithButton(t('Please convert model to GGML format first'), t('Convert'), () => {
convertToGGML(modelConfig, navigate);
});
commonStore.setStatus({ status: ModelStatus.Offline });
return;
}
}
}
if (!await FileExists(modelPath)) {
showDownloadPrompt(t('Model file not found'), modelName);
commonStore.setStatus({ status: ModelStatus.Offline });
@@ -142,7 +168,7 @@ export const RunButton: FC<{ onClickRun?: MouseEventHandler, iconMode?: boolean
const isUsingCudaBeta = modelConfig.modelParameters.device === 'CUDA-Beta';
startServer(commonStore.settings.customPythonPath, port, commonStore.settings.host !== '127.0.0.1' ? '0.0.0.0' : '127.0.0.1',
!!modelConfig.enableWebUI, isUsingCudaBeta
!!modelConfig.enableWebUI, isUsingCudaBeta, cpp
).catch((e) => {
const errMsg = e.message || e;
if (errMsg.includes('path contains space'))

View File

@@ -7,6 +7,7 @@ import { v4 as uuid } from 'uuid';
import {
Add16Regular,
ArrowAutofitWidth20Regular,
ArrowUpload16Regular,
Delete16Regular,
MusicNote220Regular,
Pause16Regular,
@@ -17,19 +18,25 @@ import {
} from '@fluentui/react-icons';
import { Button, Card, DialogTrigger, Slider, Text, Tooltip } from '@fluentui/react-components';
import { useWindowSize } from 'usehooks-ts';
import commonStore from '../../stores/commonStore';
import commonStore, { ModelStatus } from '../../stores/commonStore';
import classnames from 'classnames';
import {
InstrumentType,
InstrumentTypeNameMap,
InstrumentTypeTokenMap,
MidiMessage,
tracksMinimalTotalTime
} from '../../types/composition';
import { toast } from 'react-toastify';
import { ToastOptions } from 'react-toastify/dist/types';
import { flushMidiRecordingContent, refreshTracksTotalTime } from '../../utils';
import { PlayNote } from '../../../wailsjs/go/backend_golang/App';
import { t } from 'i18next';
import {
absPathAsset,
flushMidiRecordingContent,
getMidiRawContentMainInstrument,
getMidiRawContentTime,
getServerRoot,
refreshTracksTotalTime
} from '../../utils';
import { OpenOpenFileDialog, PlayNote } from '../../../wailsjs/go/backend_golang/App';
const snapValue = 25;
const minimalMoveTime = 8; // 1000/125=8ms wait_events=125
@@ -47,52 +54,62 @@ const pixelFix = 0.5;
const topToArrowIcon = 19;
const arrowIconToTracks = 23;
type TrackProps = {
id: string;
right: number;
scale: number;
isSelected: boolean;
onSelect: (id: string) => void;
};
const displayCurrentInstrumentType = () => {
const displayPanelId = 'instrument_panel_id';
const content: React.ReactNode =
<div className="flex gap-2 items-center">
{InstrumentTypeNameMap.map((name, i) =>
<Text key={name} style={{ whiteSpace: 'nowrap' }}
className={commonStore.instrumentType === i ? 'text-blue-600' : ''}
weight={commonStore.instrumentType === i ? 'bold' : 'regular'}
size={commonStore.instrumentType === i ? 300 : 100}
>{t(name)}</Text>)}
</div>;
const options: ToastOptions = {
type: 'default',
autoClose: 2000,
toastId: displayPanelId,
position: 'top-left',
style: {
width: 'fit-content'
}
};
if (toast.isActive(displayPanelId))
toast.update(displayPanelId, {
render: content,
...options
});
else
toast(content, options);
};
const velocityToBin = (velocity: number) => {
velocity = Math.max(0, Math.min(velocity, velocityEvents - 1));
const binsize = velocityEvents / (velocityBins - 1);
return Math.ceil((velocityEvents * ((Math.pow(velocityExp, (velocity / velocityEvents)) - 1.0) / (velocityExp - 1.0))) / binsize);
};
const binToVelocity = (bin: number) => {
const binsize = velocityEvents / (velocityBins - 1);
return Math.max(0, Math.ceil(velocityEvents * (Math.log(((velocityExp - 1) * binsize * bin) / velocityEvents + 1) / Math.log(velocityExp)) - 1));
};
const tokenToMidiMessage = (token: string): MidiMessage | null => {
if (token.startsWith('<')) return null;
if (token.startsWith('t') && !token.startsWith('t:')) {
return {
messageType: 'ElapsedTime',
value: parseInt(token.substring(1)) * minimalMoveTime,
channel: 0,
note: 0,
velocity: 0,
control: 0,
instrument: 0
};
}
const instrument: InstrumentType = InstrumentTypeTokenMap.findIndex(t => token.startsWith(t + ':'));
if (instrument >= 0) {
const parts = token.split(':');
if (parts.length !== 3) return null;
const note = parseInt(parts[1], 16);
const velocity = parseInt(parts[2], 16);
if (velocity < 0 || velocity > 127) return null;
if (velocity === 0) return {
messageType: 'NoteOff',
note: note,
instrument: instrument,
channel: 0,
velocity: 0,
control: 0,
value: 0
};
return {
messageType: 'NoteOn',
note: note,
velocity: binToVelocity(velocity),
instrument: instrument,
channel: 0,
control: 0,
value: 0
} as MidiMessage;
}
return null;
};
const midiMessageToToken = (msg: MidiMessage) => {
if (msg.messageType === 'NoteOn' || msg.messageType === 'NoteOff') {
const instrument = InstrumentTypeTokenMap[commonStore.instrumentType];
const instrument = InstrumentTypeTokenMap[msg.instrument];
const note = msg.note.toString(16);
const velocity = velocityToBin(msg.velocity).toString(16);
return `${instrument}:${note}:${velocity} `;
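Taken together, `tokenToMidiMessage` and `midiMessageToToken` define a compact text encoding: `t<n>` advances time by n × 8 ms (`minimalMoveTime`), and `<instrument-token>:<note-hex>:<velocity-bin-hex>` turns notes on (velocity bin > 0) or off (velocity bin 0). A hedged Python sketch of a decoder for this format; the instrument token list is a placeholder, as the real `InstrumentTypeTokenMap` lives in `types/composition`:

```python
# Placeholder token list for illustration; the real mapping is
# InstrumentTypeTokenMap in frontend/src/types/composition.ts.
INSTRUMENT_TOKENS = ['p', 'perc', 's', 't', 'g', 'b']
MINIMAL_MOVE_TIME = 8  # ms, matches minimalMoveTime above

def token_to_message(token: str):
    if token.startswith('<'):
        return None  # control tokens like <pad> carry no MIDI data
    if token.startswith('t') and ':' not in token:
        return {'type': 'ElapsedTime', 'value': int(token[1:]) * MINIMAL_MOVE_TIME}
    parts = token.split(':')
    if len(parts) == 3 and parts[0] in INSTRUMENT_TOKENS:
        note, velocity = int(parts[1], 16), int(parts[2], 16)
        if velocity == 0:
            return {'type': 'NoteOff', 'note': note, 'instrument': parts[0]}
        return {'type': 'NoteOn', 'note': note, 'instrument': parts[0], 'velocity_bin': velocity}
    return None

print([token_to_message(t) for t in '<pad> t12 p:3c:a p:3c:0'.split(' ')])
```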
@@ -116,7 +133,6 @@ let dropRecordingTime = false;
export const midiMessageHandler = async (data: MidiMessage) => {
if (data.messageType === 'ControlChange') {
commonStore.setInstrumentType(Math.round(data.value / 127 * (InstrumentTypeNameMap.length - 1)));
displayCurrentInstrumentType();
return;
}
if (commonStore.recordingTrackId) {
@@ -136,6 +152,14 @@ export const midiMessageHandler = async (data: MidiMessage) => {
}
};
type TrackProps = {
id: string;
right: number;
scale: number;
isSelected: boolean;
onSelect: (id: string) => void;
};
const Track: React.FC<TrackProps> = observer(({
id,
right,
@@ -364,6 +388,7 @@ const AudiotrackEditor: FC<{ setPrompt: (prompt: string) => void }> = observer((
<div key={track.id} className="flex gap-2 pb-1 border-b">
<div className="flex gap-1 border-r h-7">
<ToolTipButton desc={commonStore.recordingTrackId === track.id ? t('Stop') : t('Record')}
disabled={commonStore.platform === 'web'}
icon={commonStore.recordingTrackId === track.id ? <Stop16Filled /> : <Record16Regular />}
size="small" shape="circular" appearance="subtle"
onClick={() => {
@@ -422,20 +447,76 @@ const AudiotrackEditor: FC<{ setPrompt: (prompt: string) => void }> = observer((
</div>
</div>)}
<div className="flex justify-between items-center">
<Button icon={<Add16Regular />} size="small" shape="circular"
appearance="subtle"
onClick={() => {
commonStore.setTracks([...commonStore.tracks, {
id: uuid(),
mainInstrument: '',
content: '',
rawContent: [],
offsetTime: 0,
contentTime: 0
}]);
}}>
{t('New Track')}
</Button>
<div className="flex gap-1">
<Button icon={<Add16Regular />} size="small" shape="circular"
appearance="subtle"
disabled={commonStore.platform === 'web'}
onClick={() => {
commonStore.setTracks([...commonStore.tracks, {
id: uuid(),
mainInstrument: '',
content: '',
rawContent: [],
offsetTime: 0,
contentTime: 0
}]);
}}>
{t('New Track')}
</Button>
<Button icon={<ArrowUpload16Regular />} size="small" shape="circular"
appearance="subtle"
onClick={() => {
if (commonStore.status.status === ModelStatus.Offline && !commonStore.settings.apiUrl && commonStore.platform !== 'web') {
toast(t('Please click the button in the top right corner to start the model'), { type: 'warning' });
return;
}
OpenOpenFileDialog('*.mid').then(async filePath => {
if (!filePath)
return;
let blob: Blob;
if (commonStore.platform === 'web')
blob = (filePath as unknown as { blob: Blob }).blob;
else
blob = await fetch(absPathAsset(filePath)).then(r => r.blob());
const bodyForm = new FormData();
bodyForm.append('file_data', blob);
fetch(getServerRoot(commonStore.getCurrentModelConfig().apiParameters.apiPort) + '/midi-to-text', {
method: 'POST',
body: bodyForm
}).then(async r => {
if (r.status === 200) {
const text = (await r.json()).text as string;
const rawContent = text.split(' ').map(tokenToMidiMessage).filter(m => m) as MidiMessage[];
const tracks = commonStore.tracks.slice();
tracks.push({
id: uuid(),
mainInstrument: getMidiRawContentMainInstrument(rawContent),
content: text,
rawContent: rawContent,
offsetTime: 0,
contentTime: getMidiRawContentTime(rawContent)
});
commonStore.setTracks(tracks);
refreshTracksTotalTime();
} else {
toast(r.statusText + '\n' + (await r.text()), {
type: 'error'
});
}
}
).catch(e => {
toast(t('Error') + ' - ' + (e.message || e), { type: 'error', autoClose: 2500 });
});
}).catch(e => {
toast(t('Error') + ' - ' + (e.message || e), { type: 'error', autoClose: 2500 });
});
}}>
{t('Import MIDI')}
</Button>
</div>
<Text size={100}>
{t('Select a track to preview the content')}
</Text>
@@ -455,6 +536,18 @@ const AudiotrackEditor: FC<{ setPrompt: (prompt: string) => void }> = observer((
</div>
</Card>
}
{
commonStore.platform !== 'web' &&
<div className="flex gap-2 items-end mx-auto">
{t('Current Instrument') + ':'}
{InstrumentTypeNameMap.map((name, i) =>
<Text key={name} style={{ whiteSpace: 'nowrap' }}
className={commonStore.instrumentType === i ? 'text-blue-600' : ''}
weight={commonStore.instrumentType === i ? 'bold' : 'regular'}
size={commonStore.instrumentType === i ? 300 : 100}
>{t(name)}</Text>)}
</div>
}
<DialogTrigger disableButtonEnhancement>
<Button icon={<MusicNote220Regular />} style={{ minHeight: '32px' }} onClick={() => {
flushMidiRecordingContent();
@@ -494,7 +587,7 @@ const AudiotrackEditor: FC<{ setPrompt: (prompt: string) => void }> = observer((
}
}
}
const result = ('<pad> ' + globalMessages.map(m => midiMessageToToken(m)).join('')).trim();
const result = ('<pad> ' + globalMessages.map(midiMessageToToken).join('')).trim();
commonStore.setCompositionSubmittedPrompt(result);
setPrompt(result);
}}>
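The import flow above posts the selected .mid file to the running backend as multipart form data under the `file_data` field and receives the token text back as JSON. A minimal Python equivalent of that request; the port is whatever `apiPort` is configured to (8000 below is only a placeholder), and the `requests` package is assumed:

```python
import requests

with open('song.mid', 'rb') as f:  # placeholder path
    r = requests.post('http://127.0.0.1:8000/midi-to-text',
                      files={'file_data': f})
r.raise_for_status()
text = r.json()['text']  # space-separated tokens, same format as above
```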

View File

@@ -436,7 +436,7 @@ const ChatPanel: FC = observer(() => {
const chatSseController = new AbortController();
chatSseControllers[answerId] = chatSseController;
fetchEventSource( // https://api.openai.com/v1/chat/completions || http://127.0.0.1:${port}/v1/chat/completions
getServerRoot(port) + '/v1/chat/completions',
getServerRoot(port, true) + '/v1/chat/completions',
{
method: 'POST',
headers: {
@@ -553,7 +553,7 @@ const ChatPanel: FC = observer(() => {
{!commonStore.currentTempAttachment ?
<ToolTipButton
desc={commonStore.attachmentUploading ?
t('Uploading Attachment') :
t('Processing Attachment') :
t('Add An Attachment (Accepts pdf, txt)')}
icon={commonStore.attachmentUploading ?
<ArrowClockwise16Regular className="animate-spin" />
@@ -567,7 +567,7 @@ const ChatPanel: FC = observer(() => {
const setUploading = () => commonStore.setAttachmentUploading(true);
// actually, status of web platform is always Offline
if (commonStore.platform === 'web' || commonStore.status.status === ModelStatus.Offline || currentConfig.modelParameters.device === 'WebGPU') {
webOpenOpenFileDialog({ filterPattern, fnStartLoading: setUploading }).then(webReturn => {
webOpenOpenFileDialog(filterPattern, setUploading).then(webReturn => {
if (webReturn.content)
commonStore.setCurrentTempAttachment(
{

View File

@@ -82,7 +82,7 @@ const CompletionPanel: FC = observer(() => {
let answer = '';
completionSseController = new AbortController();
fetchEventSource( // https://api.openai.com/v1/completions || http://127.0.0.1:${port}/v1/completions
getServerRoot(port) + '/v1/completions',
getServerRoot(port, true) + '/v1/completions',
{
method: 'POST',
headers: {

View File

@@ -21,7 +21,9 @@ import {
FileExists,
OpenFileFolder,
OpenMidiPort,
OpenSaveFileDialogBytes
OpenSaveFileDialogBytes,
SaveFile,
StartFile
} from '../../wailsjs/go/backend_golang/App';
import { getServerRoot, getSoundFont, toastWithButton } from '../utils';
import { CompositionParams } from '../types/composition';
@@ -98,6 +100,31 @@ const CompositionPanel: FC = observer(() => {
}
}, []);
const externalPlayListener = () => {
const params = commonStore.compositionParams;
const saveAndPlay = async (midi: ArrayBuffer, path: string) => {
await SaveFile(path, Array.from(new Uint8Array(midi)));
StartFile(path);
};
if (params.externalPlay) {
if (params.midi) {
setTimeout(() => {
playerRef.current?.stop();
});
saveAndPlay(params.midi, './midi/last.mid').catch((e: string) => {
if (e.includes('being used'))
saveAndPlay(params.midi!, './midi/last-2.mid');
});
}
}
};
useEffect(() => {
playerRef.current?.addEventListener('start', externalPlayListener);
return () => {
playerRef.current?.removeEventListener('start', externalPlayListener);
};
}, [params.externalPlay]);
useEffect(() => {
if (!(commonStore.activeMidiDeviceIndex in commonStore.midiPorts)) {
commonStore.setActiveMidiDeviceIndex(-1);
@@ -123,9 +150,12 @@ const CompositionPanel: FC = observer(() => {
});
updateNs(ns);
if (autoPlay) {
setTimeout(() => {
playerRef.current?.start();
});
if (commonStore.compositionParams.externalPlay)
externalPlayListener();
else
setTimeout(() => {
playerRef.current?.start();
});
}
});
});
@@ -143,7 +173,7 @@ const CompositionPanel: FC = observer(() => {
let answer = '';
compositionSseController = new AbortController();
fetchEventSource( // https://api.openai.com/v1/completions || http://127.0.0.1:${port}/v1/completions
getServerRoot(port) + '/v1/completions',
getServerRoot(port, true) + '/v1/completions',
{
method: 'POST',
headers: {
@@ -250,32 +280,46 @@ const CompositionPanel: FC = observer(() => {
}} />
} />
<div className="grow" />
<Checkbox className="select-none"
size="large" label={t('Use Local Sound Font')} checked={params.useLocalSoundFont}
onChange={async (_, data) => {
if (data.checked) {
if (!await FileExists('assets/sound-font/accordion/instrument.json')) {
toast(t('Failed to load local sound font, please check if the files exist - assets/sound-font'),
{ type: 'warning' });
return;
{
commonStore.platform !== 'web' &&
<Checkbox className="select-none"
size="large" label={t('Use Local Sound Font')} checked={params.useLocalSoundFont}
onChange={async (_, data) => {
if (data.checked) {
if (!await FileExists('assets/sound-font/accordion/instrument.json')) {
toast(t('Failed to load local sound font, please check if the files exist - assets/sound-font'),
{ type: 'warning' });
return;
}
}
}
setParams({
useLocalSoundFont: data.checked as boolean
});
setSoundFont();
}} />
setParams({
useLocalSoundFont: data.checked as boolean
});
setSoundFont();
}} />
}
{
commonStore.platform === 'windows' &&
<Checkbox className="select-none"
size="large" label={t('Play With External Player')} checked={params.externalPlay}
onChange={async (_, data) => {
setParams({
externalPlay: data.checked as boolean
});
}} />
}
<Checkbox className="select-none"
size="large" label={t('Auto Play At The End')} checked={params.autoPlay} onChange={(_, data) => {
setParams({
autoPlay: data.checked as boolean
});
}} />
{commonStore.platform !== 'web' &&
<Labeled flex breakline label={t('MIDI Input')}
desc={t('Select the MIDI input device to be used.')}
content={
<div className="flex flex-col gap-1">
<Labeled flex breakline label={t('MIDI Input')}
desc={t('Select the MIDI input device to be used.')}
content={
<div className="flex flex-col gap-1">
{
commonStore.platform !== 'web' &&
<Dropdown style={{ minWidth: 0 }}
value={(commonStore.activeMidiDeviceIndex === -1 || !(commonStore.activeMidiDeviceIndex in commonStore.midiPorts))
? t('None')!
@@ -299,10 +343,10 @@ const CompositionPanel: FC = observer(() => {
<Option key={i} value={i.toString()}>{p.name}</Option>)
}
</Dropdown>
<AudiotrackButton setPrompt={setPrompt} />
</div>
} />
}
}
<AudiotrackButton setPrompt={setPrompt} />
</div>
} />
</div>
<div className="flex justify-between gap-2">
<ToolTipButton desc={t('Regenerate')} icon={<ArrowSync20Regular />} onClick={() => {

View File

@@ -27,18 +27,19 @@ import { Page } from '../components/Page';
import { useNavigate } from 'react-router';
import { RunButton } from '../components/RunButton';
import { updateConfig } from '../apis';
import { ConvertModel, FileExists, GetPyError } from '../../wailsjs/go/backend_golang/App';
import { checkDependencies, getStrategy } from '../utils';
import { getStrategy } from '../utils';
import { useTranslation } from 'react-i18next';
import { WindowShow } from '../../wailsjs/runtime';
import strategyImg from '../assets/images/strategy.jpg';
import strategyZhImg from '../assets/images/strategy_zh.jpg';
import { ResetConfigsButton } from '../components/ResetConfigsButton';
import { useMediaQuery } from 'usehooks-ts';
import { ApiParameters, Device, ModelParameters, Precision } from '../types/configs';
import { convertToSt } from '../utils/convert-to-st';
import { convertModel, convertToGGML, convertToSt } from '../utils/convert-model';
const ConfigSelector: FC<{ selectedIndex: number, updateSelectedIndex: (i: number) => void }> = observer(({ selectedIndex, updateSelectedIndex }) => {
const ConfigSelector: FC<{
selectedIndex: number,
updateSelectedIndex: (i: number) => void
}> = observer(({ selectedIndex, updateSelectedIndex }) => {
return (
<Dropdown style={{ minWidth: 0 }} className="grow" value={commonStore.modelConfigs[selectedIndex].name}
selectedOptions={[selectedIndex.toString()]}
@@ -246,45 +247,14 @@ const Configs: FC = observer(() => {
} />
{
selectedConfig.modelParameters.device !== 'WebGPU' ?
<ToolTipButton text={t('Convert')}
desc={t('Convert model with these configs. Using a converted model will greatly improve the loading speed, but model parameters of the converted model cannot be modified.')}
onClick={async () => {
if (commonStore.platform === 'darwin') {
toast(t('MacOS is not yet supported for performing this operation, please do it manually.') + ' (backend-python/convert_model.py)', { type: 'info' });
return;
} else if (commonStore.platform === 'linux') {
toast(t('Linux is not yet supported for performing this operation, please do it manually.') + ' (backend-python/convert_model.py)', { type: 'info' });
return;
}
const ok = await checkDependencies(navigate);
if (!ok)
return;
const modelPath = `${commonStore.settings.customModelsPath}/${selectedConfig.modelParameters.modelName}`;
if (await FileExists(modelPath)) {
const strategy = getStrategy(selectedConfig);
const newModelPath = modelPath + '-' + strategy.replace(/[:> *+]/g, '-');
toast(t('Start Converting'), { autoClose: 1000, type: 'info' });
ConvertModel(commonStore.settings.customPythonPath, modelPath, strategy, newModelPath).then(async () => {
if (!await FileExists(newModelPath + '.pth')) {
toast(t('Convert Failed') + ' - ' + await GetPyError(), { type: 'error' });
} else {
toast(`${t('Convert Success')} - ${newModelPath}`, { type: 'success' });
}
}).catch(e => {
const errMsg = e.message || e;
if (errMsg.includes('path contains space'))
toast(`${t('Convert Failed')} - ${t('File Path Cannot Contain Space')}`, { type: 'error' });
else
toast(`${t('Convert Failed')} - ${e.message || e}`, { type: 'error' });
});
setTimeout(WindowShow, 1000);
} else {
toast(`${t('Model Not Found')} - ${modelPath}`, { type: 'error' });
}
}} /> :
<ToolTipButton text={t('Convert To Safe Tensors Format')}
(selectedConfig.modelParameters.device !== 'CPU (rwkv.cpp)' ?
<ToolTipButton text={t('Convert')}
desc={t('Convert model with these configs. Using a converted model will greatly improve the loading speed, but model parameters of the converted model cannot be modified.')}
onClick={() => convertModel(selectedConfig, navigate)} /> :
<ToolTipButton text={t('Convert To GGML Format')}
desc=""
onClick={() => convertToGGML(selectedConfig, navigate)} />)
: <ToolTipButton text={t('Convert To Safe Tensors Format')}
desc=""
onClick={() => convertToSt(selectedConfig)} />
}
@@ -299,6 +269,7 @@ const Configs: FC = observer(() => {
}
}}>
<Option value="CPU">CPU</Option>
<Option value="CPU (rwkv.cpp)">{t('CPU (rwkv.cpp, Faster)')!}</Option>
{commonStore.platform === 'darwin' && <Option value="MPS">MPS</Option>}
<Option value="CUDA">CUDA</Option>
<Option value="CUDA-Beta">{t('CUDA (Beta, Faster)')!}</Option>
@@ -322,9 +293,11 @@ const Configs: FC = observer(() => {
}}>
{selectedConfig.modelParameters.device !== 'CPU' && selectedConfig.modelParameters.device !== 'MPS' &&
<Option>fp16</Option>}
<Option>int8</Option>
{selectedConfig.modelParameters.device !== 'CPU (rwkv.cpp)' && <Option>int8</Option>}
{selectedConfig.modelParameters.device === 'WebGPU' && <Option>nf4</Option>}
{selectedConfig.modelParameters.device !== 'WebGPU' && <Option>fp32</Option>}
{selectedConfig.modelParameters.device !== 'CPU (rwkv.cpp)' && selectedConfig.modelParameters.device !== 'WebGPU' &&
<Option>fp32</Option>}
{selectedConfig.modelParameters.device === 'CPU (rwkv.cpp)' && <Option>Q5_1</Option>}
</Dropdown>
} />
}

View File

@@ -186,6 +186,16 @@ export const AdvancedGeneralSettings: FC = observer(() => {
</Dropdown>
</div>
} />
<Labeled label={t('Core API URL')}
desc={t('Override core API URL(/chat/completions and /completions). If you don\'t know what this is, leave it blank.')}
content={
<Input style={{ minWidth: 0 }} className="grow" value={commonStore.settings.coreApiUrl}
onChange={(e, data) => {
commonStore.setSettings({
coreApiUrl: data.value
});
}} />
} />
</div>;
});

View File

@@ -94,6 +94,7 @@ class CommonStore {
topP: 0.8,
autoPlay: true,
useLocalSoundFont: false,
externalPlay: false,
midi: null,
ns: null
};
@@ -174,7 +175,8 @@ class CommonStore {
apiUrl: '',
apiKey: '',
apiChatModelName: 'rwkv',
apiCompletionModelName: 'rwkv'
apiCompletionModelName: 'rwkv',
coreApiUrl: ''
};
// about
about: AboutContent = manifest.about;

View File

@@ -9,6 +9,7 @@ export type CompositionParams = {
topP: number,
autoPlay: boolean,
useLocalSoundFont: boolean,
externalPlay: boolean,
midi: ArrayBuffer | null,
ns: NoteSequence | null
}

View File

@@ -6,8 +6,8 @@ export type ApiParameters = {
presencePenalty: number;
frequencyPenalty: number;
}
export type Device = 'CPU' | 'CUDA' | 'CUDA-Beta' | 'WebGPU' | 'MPS' | 'Custom';
export type Precision = 'fp16' | 'int8' | 'fp32' | 'nf4';
export type Device = 'CPU' | 'CPU (rwkv.cpp)' | 'CUDA' | 'CUDA-Beta' | 'WebGPU' | 'MPS' | 'Custom';
export type Precision = 'fp16' | 'int8' | 'fp32' | 'nf4' | 'Q5_1';
export type ModelParameters = {
// different models can not have the same name
modelName: string;

View File

@@ -19,4 +19,5 @@ export type SettingsType = {
apiKey: string
apiChatModelName: string
apiCompletionModelName: string
coreApiUrl: string
}

View File

@@ -0,0 +1,107 @@
import { toast } from 'react-toastify';
import commonStore from '../stores/commonStore';
import { t } from 'i18next';
import {
ConvertGGML,
ConvertModel,
ConvertSafetensors,
FileExists,
GetPyError
} from '../../wailsjs/go/backend_golang/App';
import { WindowShow } from '../../wailsjs/runtime';
import { ModelConfig, Precision } from '../types/configs';
import { checkDependencies, getStrategy } from './index';
import { NavigateFunction } from 'react-router';
export const convertModel = async (selectedConfig: ModelConfig, navigate: NavigateFunction) => {
if (commonStore.platform === 'darwin') {
toast(t('MacOS is not yet supported for performing this operation, please do it manually.') + ' (backend-python/convert_model.py)', { type: 'info' });
return;
} else if (commonStore.platform === 'linux') {
toast(t('Linux is not yet supported for performing this operation, please do it manually.') + ' (backend-python/convert_model.py)', { type: 'info' });
return;
}
const ok = await checkDependencies(navigate);
if (!ok)
return;
const modelPath = `${commonStore.settings.customModelsPath}/${selectedConfig.modelParameters.modelName}`;
if (await FileExists(modelPath)) {
const strategy = getStrategy(selectedConfig);
const newModelPath = modelPath + '-' + strategy.replace(/[:> *+]/g, '-');
toast(t('Start Converting'), { autoClose: 2000, type: 'info' });
ConvertModel(commonStore.settings.customPythonPath, modelPath, strategy, newModelPath).then(async () => {
if (!await FileExists(newModelPath + '.pth')) {
toast(t('Convert Failed') + ' - ' + await GetPyError(), { type: 'error' });
} else {
toast(`${t('Convert Success')} - ${newModelPath}`, { type: 'success' });
}
}).catch(e => {
const errMsg = e.message || e;
if (errMsg.includes('path contains space'))
toast(`${t('Convert Failed')} - ${t('File Path Cannot Contain Space')}`, { type: 'error' });
else
toast(`${t('Convert Failed')} - ${e.message || e}`, { type: 'error' });
});
setTimeout(WindowShow, 1000);
} else {
toast(`${t('Model Not Found')} - ${modelPath}`, { type: 'error' });
}
};
export const convertToSt = async (selectedConfig: ModelConfig) => {
const modelPath = `${commonStore.settings.customModelsPath}/${selectedConfig.modelParameters.modelName}`;
if (await FileExists(modelPath)) {
toast(t('Start Converting'), { autoClose: 2000, type: 'info' });
const newModelPath = modelPath.replace(/\.pth$/, '.st');
ConvertSafetensors(modelPath, newModelPath).then(async () => {
if (!await FileExists(newModelPath)) {
if (commonStore.platform === 'windows' || commonStore.platform === 'linux')
toast(t('Convert Failed') + ' - ' + await GetPyError(), { type: 'error' });
} else {
toast(`${t('Convert Success')} - ${newModelPath}`, { type: 'success' });
}
}).catch(e => {
const errMsg = e.message || e;
if (errMsg.includes('path contains space'))
toast(`${t('Convert Failed')} - ${t('File Path Cannot Contain Space')}`, { type: 'error' });
else
toast(`${t('Convert Failed')} - ${e.message || e}`, { type: 'error' });
});
setTimeout(WindowShow, 1000);
} else {
toast(`${t('Model Not Found')} - ${modelPath}`, { type: 'error' });
}
};
export const convertToGGML = async (selectedConfig: ModelConfig, navigate: NavigateFunction) => {
const ok = await checkDependencies(navigate);
if (!ok)
return;
const modelPath = `${commonStore.settings.customModelsPath}/${selectedConfig.modelParameters.modelName}`;
if (await FileExists(modelPath)) {
toast(t('Start Converting'), { autoClose: 2000, type: 'info' });
const precision: Precision = selectedConfig.modelParameters.precision === 'Q5_1' ? 'Q5_1' : 'fp16';
const newModelPath = modelPath.replace(/\.pth$/, `-${precision}.bin`);
ConvertGGML(commonStore.settings.customPythonPath, modelPath, newModelPath, precision === 'Q5_1').then(async () => {
if (!await FileExists(newModelPath)) {
if (commonStore.platform === 'windows' || commonStore.platform === 'linux')
toast(t('Convert Failed') + ' - ' + await GetPyError(), { type: 'error' });
} else {
toast(`${t('Convert Success')} - ${newModelPath}`, { type: 'success' });
}
}).catch(e => {
const errMsg = e.message || e;
if (errMsg.includes('path contains space'))
toast(`${t('Convert Failed')} - ${t('File Path Cannot Contain Space')}`, { type: 'error' });
else
toast(`${t('Convert Failed')} - ${e.message || e}`, { type: 'error' });
});
setTimeout(WindowShow, 1000);
} else {
toast(`${t('Model Not Found')} - ${modelPath}`, { type: 'error' });
}
};

View File

@@ -1,31 +0,0 @@
import { toast } from 'react-toastify';
import commonStore from '../stores/commonStore';
import { t } from 'i18next';
import { ConvertSafetensors, FileExists, GetPyError } from '../../wailsjs/go/backend_golang/App';
import { WindowShow } from '../../wailsjs/runtime';
import { ModelConfig } from '../types/configs';
export const convertToSt = async (selectedConfig: ModelConfig) => {
const modelPath = `${commonStore.settings.customModelsPath}/${selectedConfig.modelParameters.modelName}`;
if (await FileExists(modelPath)) {
toast(t('Start Converting'), { autoClose: 2000, type: 'info' });
const newModelPath = modelPath.replace(/\.pth$/, '.st');
ConvertSafetensors(modelPath, newModelPath).then(async () => {
if (!await FileExists(newModelPath)) {
if (commonStore.platform === 'windows' || commonStore.platform === 'linux')
toast(t('Convert Failed') + ' - ' + await GetPyError(), { type: 'error' });
} else {
toast(`${t('Convert Success')} - ${newModelPath}`, { type: 'success' });
}
}).catch(e => {
const errMsg = e.message || e;
if (errMsg.includes('path contains space'))
toast(`${t('Convert Failed')} - ${t('File Path Cannot Contain Space')}`, { type: 'error' });
else
toast(`${t('Convert Failed')} - ${e.message || e}`, { type: 'error' });
});
setTimeout(WindowShow, 1000);
} else {
toast(`${t('Model Not Found')} - ${modelPath}`, { type: 'error' });
}
};

View File

@@ -22,7 +22,7 @@ import { DownloadStatus } from '../types/downloads';
import { ModelSourceItem } from '../types/models';
import { Language, Languages, SettingsType } from '../types/settings';
import { DataProcessParameters, LoraFinetuneParameters } from '../types/train';
import { InstrumentTypeNameMap, tracksMinimalTotalTime } from '../types/composition';
import { InstrumentTypeNameMap, MidiMessage, tracksMinimalTotalTime } from '../types/composition';
import logo from '../assets/images/logo.png';
import { Preset } from '../types/presets';
import { botName, Conversation, MessageType, userName } from '../types/chat';
@@ -63,7 +63,7 @@ export async function refreshBuiltInModels(readCache: boolean = false) {
return cache;
}
const modelSuffix = ['.pth', '.st', '.safetensors'];
const modelSuffix = ['.pth', '.st', '.safetensors', '.bin'];
export async function refreshLocalModels(cache: {
models: ModelSourceItem[]
@@ -303,7 +303,11 @@ export function bytesToReadable(size: number) {
else return bytesToGb(size) + ' GB';
}
export function getServerRoot(defaultLocalPort: number) {
export function getServerRoot(defaultLocalPort: number, isCore: boolean = false) {
const coreCustomApiUrl = commonStore.settings.coreApiUrl.trim().replace(/\/$/, '');
if (isCore && coreCustomApiUrl)
return coreCustomApiUrl;
const defaultRoot = `http://127.0.0.1:${defaultLocalPort}`;
if (commonStore.status.status !== ModelStatus.Offline)
return defaultRoot;
@@ -513,34 +517,39 @@ export function refreshTracksTotalTime() {
commonStore.setTrackTotalTime(totalTime);
}
export function getMidiRawContentTime(rawContent: MidiMessage[]) {
return rawContent.reduce((sum, current) =>
sum + (current.messageType === 'ElapsedTime' ? current.value : 0)
, 0);
}
export function getMidiRawContentMainInstrument(rawContent: MidiMessage[]) {
const sortedInstrumentFrequency = Object.entries(rawContent
.filter(c => c.messageType === 'NoteOn')
.map(c => c.instrument)
.reduce((frequencyCount, current) => (frequencyCount[current] = (frequencyCount[current] || 0) + 1, frequencyCount)
, {} as {
[key: string]: number
}))
.sort((a, b) => b[1] - a[1]);
let mainInstrument: string = '';
if (sortedInstrumentFrequency.length > 0)
mainInstrument = InstrumentTypeNameMap[Number(sortedInstrumentFrequency[0][0])];
return mainInstrument;
}
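`getMidiRawContentMainInstrument` is a frequency count over the instruments of NoteOn messages, picking the most common one. In Python the same computation collapses to a single `Counter` call; a sketch, assuming messages shaped like the dictionaries in the decoder sketch earlier and a placeholder name map:

```python
from collections import Counter

# Placeholder name map; the real one is InstrumentTypeNameMap in types/composition.ts.
INSTRUMENT_TYPE_NAMES = ['Piano', 'Percussion', 'Drum', 'Tuba', 'Marimba', 'Bass']

def main_instrument(raw_content: list) -> str:
    counts = Counter(m['instrument'] for m in raw_content if m['type'] == 'NoteOn')
    if not counts:
        return ''
    return str(counts.most_common(1)[0][0])  # most frequent instrument token
```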
export function flushMidiRecordingContent() {
const recordingTrackIndex = commonStore.tracks.findIndex(t => t.id === commonStore.recordingTrackId);
if (recordingTrackIndex >= 0) {
const recordingTrack = commonStore.tracks[recordingTrackIndex];
const tracks = commonStore.tracks.slice();
const contentTime = commonStore.recordingRawContent
.reduce((sum, current) =>
sum + (current.messageType === 'ElapsedTime' ? current.value : 0)
, 0);
const sortedInstrumentFrequency = Object.entries(commonStore.recordingRawContent
.filter(c => c.messageType === 'NoteOn')
.map(c => c.instrument)
.reduce((frequencyCount, current) => (frequencyCount[current] = (frequencyCount[current] || 0) + 1, frequencyCount)
, {} as {
[key: string]: number
}))
.sort((a, b) => b[1] - a[1]);
let mainInstrument: string = '';
if (sortedInstrumentFrequency.length > 0)
mainInstrument = InstrumentTypeNameMap[Number(sortedInstrumentFrequency[0][0])];
tracks[recordingTrackIndex] = {
...recordingTrack,
content: commonStore.recordingContent,
rawContent: commonStore.recordingRawContent,
contentTime: contentTime,
mainInstrument: mainInstrument
contentTime: getMidiRawContentTime(commonStore.recordingRawContent),
mainInstrument: getMidiRawContentMainInstrument(commonStore.recordingRawContent)
};
commonStore.setTracks(tracks);
refreshTracksTotalTime();

View File

@@ -1,12 +1,17 @@
import { getDocument, GlobalWorkerOptions, PDFDocumentProxy } from 'pdfjs-dist';
import { TextItem } from 'pdfjs-dist/types/src/display/api';
export function webOpenOpenFileDialog({ filterPattern, fnStartLoading }: { filterPattern: string, fnStartLoading: Function | null }): Promise<{ blob: Blob, content?: string }> {
export function webOpenOpenFileDialog(filterPattern: string, fnStartLoading: Function | undefined): Promise<{
blob: Blob,
content?: string
}> {
return new Promise((resolve, reject) => {
const input = document.createElement('input');
input.type = 'file';
input.accept = filterPattern
.replaceAll('*.txt', 'text/plain')
.replace('*.midi', 'audio/midi')
.replace('*.mid', 'audio/midi')
.replaceAll('*.', 'application/')
.replaceAll(';', ',');
@@ -15,7 +20,7 @@ export function webOpenOpenFileDialog({ filterPattern, fnStartLoading }: { filte
const file: Blob = e.target?.files[0];
if (fnStartLoading && typeof fnStartLoading === 'function')
fnStartLoading();
if (!GlobalWorkerOptions.workerSrc)
if (!GlobalWorkerOptions.workerSrc && file.type === 'application/pdf')
// @ts-ignore
GlobalWorkerOptions.workerSrc = await import('pdfjs-dist/build/pdf.worker.min.mjs');
if (file.type === 'text/plain') {

View File

@@ -10,6 +10,8 @@ export function ContinueDownload(arg1:string):Promise<void>;
export function ConvertData(arg1:string,arg2:string,arg3:string,arg4:string):Promise<string>;
export function ConvertGGML(arg1:string,arg2:string,arg3:string,arg4:boolean):Promise<string>;
export function ConvertModel(arg1:string,arg2:string,arg3:string,arg4:string):Promise<string>;
export function ConvertSafetensors(arg1:string,arg2:string):Promise<string>;
@@ -56,9 +58,13 @@ export function ReadJson(arg1:string):Promise<any>;
export function RestartApp():Promise<void>;
export function SaveFile(arg1:string,arg2:Array<number>):Promise<void>;
export function SaveJson(arg1:string,arg2:any):Promise<void>;
export function StartServer(arg1:string,arg2:number,arg3:string,arg4:boolean,arg5:boolean):Promise<string>;
export function StartFile(arg1:string):Promise<void>;
export function StartServer(arg1:string,arg2:number,arg3:string,arg4:boolean,arg5:boolean,arg6:boolean):Promise<string>;
export function StartWebGPUServer(arg1:number,arg2:string):Promise<string>;

View File

@@ -18,6 +18,10 @@ export function ConvertData(arg1, arg2, arg3, arg4) {
return window['go']['backend_golang']['App']['ConvertData'](arg1, arg2, arg3, arg4);
}
export function ConvertGGML(arg1, arg2, arg3, arg4) {
return window['go']['backend_golang']['App']['ConvertGGML'](arg1, arg2, arg3, arg4);
}
export function ConvertModel(arg1, arg2, arg3, arg4) {
return window['go']['backend_golang']['App']['ConvertModel'](arg1, arg2, arg3, arg4);
}
@@ -110,12 +114,20 @@ export function RestartApp() {
return window['go']['backend_golang']['App']['RestartApp']();
}
export function SaveFile(arg1, arg2) {
return window['go']['backend_golang']['App']['SaveFile'](arg1, arg2);
}
export function SaveJson(arg1, arg2) {
return window['go']['backend_golang']['App']['SaveJson'](arg1, arg2);
}
export function StartServer(arg1, arg2, arg3, arg4, arg5) {
return window['go']['backend_golang']['App']['StartServer'](arg1, arg2, arg3, arg4, arg5);
export function StartFile(arg1) {
return window['go']['backend_golang']['App']['StartFile'](arg1);
}
export function StartServer(arg1, arg2, arg3, arg4, arg5, arg6) {
return window['go']['backend_golang']['App']['StartServer'](arg1, arg2, arg3, arg4, arg5, arg6);
}
export function StartWebGPUServer(arg1, arg2) {

14
go.mod
View File

@@ -9,13 +9,14 @@ require (
github.com/minio/selfupdate v0.6.0
github.com/nyaosorg/go-windows-su v0.2.1
github.com/ubuntu/gowsl v0.0.0-20230615094051-94945650cc1e
github.com/wailsapp/wails/v2 v2.6.0
github.com/wailsapp/wails/v2 v2.7.1
)
require (
aead.dev/minisign v0.2.0 // indirect
github.com/bep/debounce v1.2.1 // indirect
github.com/go-ole/go-ole v1.2.6 // indirect
github.com/godbus/dbus/v5 v5.1.0 // indirect
github.com/google/uuid v1.3.0 // indirect
github.com/jchv/go-winloader v0.0.0-20210711035445-715c2860da7e // indirect
github.com/labstack/echo/v4 v4.10.2 // indirect
@@ -23,6 +24,7 @@ require (
github.com/leaanthony/go-ansi-parser v1.6.0 // indirect
github.com/leaanthony/gosod v1.0.3 // indirect
github.com/leaanthony/slicer v1.6.0 // indirect
github.com/leaanthony/u v1.1.0 // indirect
github.com/mattn/go-colorable v0.1.13 // indirect
github.com/mattn/go-isatty v0.0.19 // indirect
github.com/pkg/browser v0.0.0-20210911075715-681adbf594b8 // indirect
@@ -34,11 +36,11 @@ require (
github.com/ubuntu/decorate v0.0.0-20230125165522-2d5b0a9bb117 // indirect
github.com/valyala/bytebufferpool v1.0.0 // indirect
github.com/valyala/fasttemplate v1.2.2 // indirect
github.com/wailsapp/go-webview2 v1.0.1 // indirect
github.com/wailsapp/go-webview2 v1.0.10 // indirect
github.com/wailsapp/mimetype v1.4.1 // indirect
golang.org/x/crypto v0.9.0 // indirect
golang.org/x/crypto v0.14.0 // indirect
golang.org/x/exp v0.0.0-20230522175609-2e198f4a06a1 // indirect
golang.org/x/net v0.10.0 // indirect
golang.org/x/sys v0.9.0 // indirect
golang.org/x/text v0.9.0 // indirect
golang.org/x/net v0.17.0 // indirect
golang.org/x/sys v0.13.0 // indirect
golang.org/x/text v0.13.0 // indirect
)

28
go.sum
View File

@@ -12,6 +12,8 @@ github.com/fsnotify/fsnotify v1.6.0 h1:n+5WquG0fcWoWp6xPWfHdbskMCQaFnG6PfBrh1Ky4
github.com/fsnotify/fsnotify v1.6.0/go.mod h1:sl3t1tCWJFWoRz9R8WJCbQihKKwmorjAbSClcnxKAGw=
github.com/go-ole/go-ole v1.2.6 h1:/Fpf6oFPoeFik9ty7siob0G6Ke8QvQEuVcuChpwXzpY=
github.com/go-ole/go-ole v1.2.6/go.mod h1:pprOEPIfldk/42T2oK7lQ4v4JSDwmV0As9GaiUsvbm0=
github.com/godbus/dbus/v5 v5.1.0 h1:4KLkAxT3aOY8Li4FRJe/KvhoNFFxo0m6fNuFUO8QJUk=
github.com/godbus/dbus/v5 v5.1.0/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
github.com/google/uuid v1.3.0 h1:t6JiXgmwXMjEs8VusXIJk2BXHsn+wx8BZdTaoZ5fu7I=
github.com/google/uuid v1.3.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/jchv/go-winloader v0.0.0-20210711035445-715c2860da7e h1:Q3+PugElBCf4PFpxhErSzU3/PY5sFL5Z6rfv4AbGAck=
@@ -29,6 +31,8 @@ github.com/leaanthony/gosod v1.0.3/go.mod h1:BJ2J+oHsQIyIQpnLPjnqFGTMnOZXDbvWtRC
github.com/leaanthony/slicer v1.5.0/go.mod h1:FwrApmf8gOrpzEWM2J/9Lh79tyq8KTX5AzRtwV7m4AY=
github.com/leaanthony/slicer v1.6.0 h1:1RFP5uiPJvT93TAHi+ipd3NACobkW53yUiBqZheE/Js=
github.com/leaanthony/slicer v1.6.0/go.mod h1:o/Iz29g7LN0GqH3aMjWAe90381nyZlDNquK+mtH2Fj8=
github.com/leaanthony/u v1.1.0 h1:2n0d2BwPVXSUq5yhe8lJPHdxevE2qK5G99PMStMZMaI=
github.com/leaanthony/u v1.1.0/go.mod h1:9+o6hejoRljvZ3BzdYlVL0JYCwtnAsVuN9pVTQcaRfI=
github.com/matryer/is v1.4.0 h1:sosSmIWwkYITGrxZ25ULNDeKiMNzFSr4V/eqBQP0PeE=
github.com/matryer/is v1.4.0/go.mod h1:8I/i5uYgLzgsgEloJE1U6xx5HkBQpAZvepWuujKwMRU=
github.com/mattn/go-colorable v0.1.11/go.mod h1:u5H1YNBxpqRaxsYJYSkiCWKzEfiAb1Gb520KVy5xxl4=
@@ -71,24 +75,24 @@ github.com/valyala/bytebufferpool v1.0.0/go.mod h1:6bBcMArwyJ5K/AmCkWv1jt77kVWyC
github.com/valyala/fasttemplate v1.2.1/go.mod h1:KHLXt3tVN2HBp8eijSv/kGJopbvo7S+qRAEEKiv+SiQ=
github.com/valyala/fasttemplate v1.2.2 h1:lxLXG0uE3Qnshl9QyaK6XJxMXlQZELvChBOCmQD0Loo=
github.com/valyala/fasttemplate v1.2.2/go.mod h1:KHLXt3tVN2HBp8eijSv/kGJopbvo7S+qRAEEKiv+SiQ=
github.com/wailsapp/go-webview2 v1.0.1 h1:dEJIeEApW/MhO2tTMISZBFZPuW7kwrFA1NtgFB1z1II=
github.com/wailsapp/go-webview2 v1.0.1/go.mod h1:Uk2BePfCRzttBBjFrBmqKGJd41P6QIHeV9kTgIeOZNo=
github.com/wailsapp/go-webview2 v1.0.10 h1:PP5Hug6pnQEAhfRzLCoOh2jJaPdrqeRgJKZhyYyDV/w=
github.com/wailsapp/go-webview2 v1.0.10/go.mod h1:Uk2BePfCRzttBBjFrBmqKGJd41P6QIHeV9kTgIeOZNo=
github.com/wailsapp/mimetype v1.4.1 h1:pQN9ycO7uo4vsUUuPeHEYoUkLVkaRntMnHJxVwYhwHs=
github.com/wailsapp/mimetype v1.4.1/go.mod h1:9aV5k31bBOv5z6u+QP8TltzvNGJPmNJD4XlAL3U+j3o=
github.com/wailsapp/wails/v2 v2.6.0 h1:EyH0zR/EO6dDiqNy8qU5spaXDfkluiq77xrkabPYD4c=
github.com/wailsapp/wails/v2 v2.6.0/go.mod h1:WBG9KKWuw0FKfoepBrr/vRlyTmHaMibWesK3yz6nNiM=
github.com/wailsapp/wails/v2 v2.7.1 h1:HAzp2c5ODOzsLC6ZMDVtNOB72ozM7/SJecJPB2Ur+UU=
github.com/wailsapp/wails/v2 v2.7.1/go.mod h1:oIJVwwso5fdOgprBYWXBBqtx6PaSvxg8/KTQHNGkadc=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20210220033148-5ea612d1eb83/go.mod h1:jdWPYTVW3xRLrWPugEBEK3UY2ZEsg3UU495nc5E+M+I=
golang.org/x/crypto v0.0.0-20211209193657-4570a0811e8b/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
golang.org/x/crypto v0.9.0 h1:LF6fAI+IutBocDJ2OT0Q1g8plpYljMZ4+lty+dsqw3g=
golang.org/x/crypto v0.9.0/go.mod h1:yrmDGqONDYtNj3tH8X9dzUun2m2lzPa9ngI6/RUPGR0=
golang.org/x/crypto v0.14.0 h1:wBqGXzWJW6m1XrIKlAH0Hs1JJ7+9KBwnIO8v66Q9cHc=
golang.org/x/crypto v0.14.0/go.mod h1:MVFd36DqK4CsrnJYDkBA3VC4m2GkXAM0PvzMCn4JQf4=
golang.org/x/exp v0.0.0-20230522175609-2e198f4a06a1 h1:k/i9J1pBpvlfR+9QsetwPyERsqu1GIbi967PQMq3Ivc=
golang.org/x/exp v0.0.0-20230522175609-2e198f4a06a1/go.mod h1:V1LtkGg67GoY2N1AnLN78QLrzxkLyJw7RJb1gzOOz9w=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20210505024714-0287a6fb4125/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20211112202133-69e39bad7dc2/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.10.0 h1:X2//UzNDwYmtCLn7To6G58Wr6f5ahEAQgKNzv9Y951M=
golang.org/x/net v0.10.0/go.mod h1:0qNGK6F8kojg2nk9dLZ2mShWaEBan6FAoqfSigmmuDg=
golang.org/x/net v0.17.0 h1:pVaXccu2ozPjCXewfr1S7xza/zcXTity9cCdXQYSjIM=
golang.org/x/net v0.17.0/go.mod h1:NxSsAGuq816PNPmqtQdLE42eU2Fs7NoRIZrHJAlaCOE=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190916202348-b4ddaad3f8a3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191026070338-33540a1f6037/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
@@ -105,14 +109,14 @@ golang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8/go.mod h1:oPkhp1MJrh7nUepCBc
golang.org/x/sys v0.0.0-20220811171246-fbc7d0a398ab/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220908164124-27713097b956/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.9.0 h1:KS/R3tvhPqvJvwcKfnBHJwwthS11LRhmM5D59eEXa0s=
golang.org/x/sys v0.9.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.13.0 h1:Af8nKPmuFypiUBjVoU9V20FiaFXOcuZI21p0ycVYYGE=
golang.org/x/sys v0.13.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/term v0.0.0-20201117132131-f5c789dd3221/go.mod h1:Nr5EML6q2oocZ2LXRh80K7BxOlk5/8JxuGnuhpl+muw=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.9.0 h1:2sjJmO8cDvYveuX97RDLsxlyUxLl+GHoLxBiRdHllBE=
golang.org/x/text v0.9.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8=
golang.org/x/text v0.13.0 h1:ablQoSUd0tRdKxZewP80B+BaqeKJuVhuRxj/dkrun3k=
golang.org/x/text v0.13.0/go.mod h1:TvPlkZtksWOMsz7fbANvkp4WM8x/WCo/om8BMLbz+aE=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=

12
main.go
View File

@@ -11,6 +11,7 @@ import (
backend "rwkv-runner/backend-golang"
"github.com/wailsapp/wails/v2"
wailsLogger "github.com/wailsapp/wails/v2/pkg/logger"
"github.com/wailsapp/wails/v2/pkg/options"
"github.com/wailsapp/wails/v2/pkg/options/assetserver"
"github.com/wailsapp/wails/v2/pkg/options/windows"
@@ -66,7 +67,10 @@ var midiAssets embed.FS
var components embed.FS
func main() {
dev := true
if buildInfo, ok := debug.ReadBuildInfo(); !ok || strings.Contains(buildInfo.String(), "-ldflags") {
dev = false
}
backend.CopyEmbed(assets)
os.RemoveAll("./py310/Lib/site-packages/cyac-1.7.dist-info")
backend.CopyEmbed(cyac)
@@ -94,6 +98,13 @@ func main() {
app.HasConfigData = false
}
var logger wailsLogger.Logger
if dev {
logger = wailsLogger.NewDefaultLogger()
} else {
logger = wailsLogger.NewFileLogger("crash.log")
}
// Create application with options
err = wails.Run(&options.App{
Title: "RWKV-Runner",
@@ -115,6 +126,7 @@ func main() {
Bind: []any{
app,
},
Logger: logger,
})
if err != nil {

View File

@@ -1,5 +1,5 @@
{
"version": "1.5.8",
"version": "1.6.0",
"introduction": {
"en": "RWKV is an open-source, commercially usable large language model with high flexibility and great potential for development.\n### About This Tool\nThis tool aims to lower the barrier of entry for using large language models, making it accessible to everyone. It provides fully automated dependency and model management. You simply need to click and run, following the instructions, to deploy a local large language model. The tool itself is very compact and only requires a single executable file for one-click deployment.\nAdditionally, this tool offers an interface that is fully compatible with the OpenAI API. This means you can use any ChatGPT client as a client for RWKV, enabling capability expansion beyond just chat functionality.\n### Preset Configuration Rules at the Bottom\nThis tool comes with a series of preset configurations to reduce complexity. The naming rules for each configuration represent the following in order: device - required VRAM/memory - model size - model language.\nFor example, \"GPU-8G-3B-EN\" indicates that this configuration is for a graphics card with 8GB of VRAM, a model size of 3 billion parameters, and it uses an English language model.\nLarger model sizes have higher performance and VRAM requirements. Among configurations with the same model size, those with higher VRAM usage will have faster runtime.\nFor example, if you have 12GB of VRAM but running the \"GPU-12G-7B-EN\" configuration is slow, you can downgrade to \"GPU-8G-3B-EN\" for a significant speed improvement.\n### About RWKV\nRWKV is an RNN with Transformer-level LLM performance, which can also be directly trained like a GPT transformer (parallelizable). And it's 100% attention-free. You only need the hidden state at position t to compute the state at position t+1. You can use the \"GPT\" mode to quickly compute the hidden state for the \"RNN\" mode.<br/>So it's combining the best of RNN and transformer - great performance, fast inference, saves VRAM, fast training, \"infinite\" ctx_len, and free sentence embedding (using the final hidden state).",
"zh": "RWKV是一个开源且允许商用的大语言模型灵活性很高且极具发展潜力。\n### 关于本工具\n本工具旨在降低大语言模型的使用门槛做到人人可用本工具提供了全自动化的依赖和模型管理你只需要直接点击运行跟随引导即可完成本地大语言模型的部署工具本身体积极小只需要一个exe即可完成一键部署。\n此外本工具提供了与OpenAI API完全兼容的接口这意味着你可以把任意ChatGPT客户端用作RWKV的客户端实现能力拓展而不局限于聊天。\n### 底部的预设配置规则\n本工具内置了一系列预设配置以降低使用难度每个配置名的规则依次代表着设备-所需显存/内存-模型规模-模型语言。\n例如GPU-8G-3B-CN表示该配置用于显卡需要8G显存模型规模为30亿参数使用的是中文模型。\n模型规模越大性能要求越高显存要求也越高而同样模型规模的配置中显存占用越高的运行速度越快。\n例如当你有12G显存但运行GPU-12G-7B-CN配置速度比较慢可降级成GPU-8G-3B-CN将会大幅提速。\n### 关于RWKV\nRWKV是具有Transformer级别LLM性能的RNN也可以像GPT Transformer一样直接进行训练可并行化。而且它是100% attention-free的。你只需在位置t处获得隐藏状态即可计算位置t + 1处的状态。你可以使用“GPT”模式快速计算用于“RNN”模式的隐藏状态。\n因此它将RNN和Transformer的优点结合起来 - 高性能、快速推理、节省显存、快速训练、“无限”上下文长度以及免费的语句嵌入(使用最终隐藏状态)。"

View File

@@ -3,6 +3,7 @@
- ^backend-python/get-pip\.py
- ^backend-python/convert_model\.py
- ^backend-python/convert_safetensors\.py
- ^backend-python/convert_pytorch_to_ggml\.py
- ^backend-python/utils/midi\.py
- ^build/
- ^finetune/lora/