RWKV Runner

This project aims to eliminate the barriers of using large language models by automating everything for you. All you need is a lightweight executable program of just a few megabytes. Additionally, this project provides an interface compatible with the OpenAI API, which means that every ChatGPT client is an RWKV client. RWKV is a large language model that is fully open source and available for commercial use.


English | 简体中文

Install

Windows | macOS | Linux

FAQs | Preview | Download

For macOS and Linux users, please install Python 3.10 manually (most recent systems come with it built in).

The default configs have custom CUDA kernel acceleration enabled, which is much faster and consumes much less VRAM. If you encounter compatibility issues, go to the Configs page and turn off Use Custom CUDA kernel to Accelerate.

For different tasks, adjusting the API parameters can produce better results. For example, for translation tasks you can try setting Temperature to 1 and Top_P to 0.3, as in the sketch below.
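
As a rough illustration, here is a minimal request sketch, assuming the server is already running on the default port 8000 and that the response follows the usual OpenAI chat format; the file name and prompt are made up:

# translate_example.py: send a chat completion request tuned for translation.
import requests

resp = requests.post(
    "http://127.0.0.1:8000/chat/completions",
    json={
        "messages": [
            {"role": "user", "content": "Translate to French: Hello, world!"}
        ],
        "temperature": 1,   # suggested value for translation tasks
        "top_p": 0.3,       # suggested value for translation tasks
    },
)
print(resp.json()["choices"][0]["message"]["content"])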

Features

  • RWKV model management and one-click startup
  • Fully compatible with the OpenAI API, making every ChatGPT client an RWKV client. After starting the model, open http://127.0.0.1:8000/docs to view more details (see the client sketch after this list).
  • Automatic dependency installation, requiring only a lightweight executable program
  • Configs covering 2 GB to 32 GB of VRAM are included, so it works well on almost all computers
  • User-friendly chat and completion interaction interface included
  • Easy-to-understand and operate parameter configuration
  • Built-in model conversion tool
  • Built-in download management and remote model inspection
  • Multilingual localization
  • Theme switching
  • Automatic updates
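
As mentioned above, any OpenAI-style client can talk to the local server. Below is a minimal sketch using the openai Python package (v1 or later), assuming the model has already been started on the default port; the model name and API key are placeholders:

# chat_client_example.py: point an OpenAI-style client at the local server.
from openai import OpenAI

# The local server does not check the key, but the client library requires one.
client = OpenAI(base_url="http://127.0.0.1:8000", api_key="sk-placeholder")

reply = client.chat.completions.create(
    model="rwkv",  # placeholder name; the server answers with the loaded model
    messages=[{"role": "user", "content": "Hello"}],
)
print(reply.choices[0].message.content)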

API Concurrency Stress Testing

ab -p body.json -T application/json -c 20 -n 100 -l http://127.0.0.1:8000/chat/completions
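
Here ab is ApacheBench: -p body.json POSTs the file shown below with the Content-Type given by -T, -c 20 keeps 20 requests in flight concurrently, -n 100 sends 100 requests in total, and -l tells ab not to count responses of varying length as errors.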

body.json:

{
  "messages": [
    {
      "role": "user",
      "content": "Hello"
    }
  ]
}

Todo

  • Model training functionality
  • CUDA operator int8 acceleration
  • macOS support
  • Linux support
  • Local State Cache DB

Preview

Homepage

(screenshot)

Chat

(screenshot)

Completion

(screenshot)

Configuration

(screenshot)

Model Management

(screenshot)

Download Management

(screenshot)

Settings

(screenshot)