An RWKV management and startup tool, fully automated, only 8 MB, that also provides an OpenAI-compatible API. RWKV is a large language model that is fully open source and available for commercial use.

RWKV Runner

This project aims to eliminate the barriers to using large language models by automating everything for you. All you need is a lightweight executable program of just a few megabytes. Additionally, this project provides an interface compatible with the OpenAI API, which means that every ChatGPT client is an RWKV client.


English | 简体中文 | 日本語

Install

Windows | macOS | Linux

FAQs | Preview | Download | Server-Deploy-Examples

Tip: You can deploy backend-python on a server and use this program as a client only. Fill in your server address in the API URL field on the Settings page.

The default configs enable custom CUDA kernel acceleration, which is much faster and consumes much less VRAM. If you encounter compatibility issues (garbled output), go to the Configs page and turn off Use Custom CUDA kernel to Accelerate, or try upgrading your GPU driver.

If Windows Defender flags this program as a virus, you can try downloading v1.3.7_win.zip and letting it update itself to the latest version, or add it to the trusted list (Windows Security -> Virus & threat protection -> Manage settings -> Exclusions -> Add or remove exclusions -> Add an exclusion -> Folder -> RWKV-Runner).

For different tasks, adjusting API parameters can achieve better results. For example, for translation tasks, you can try setting Temperature to 1 and Top_P to 0.3.
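As a sketch of the tip above, a minimal translation call against the local OpenAI-compatible endpoint might look like this (the helper names, the prompt wording, and the default address http://127.0.0.1:8000 are assumptions; adjust them to your setup):

```python
import requests

# Assumed default address of the local RWKV-Runner backend.
API_URL = "http://127.0.0.1:8000"


def build_translation_request(text, target_lang="Chinese"):
    # As suggested above, a low Top_P with a higher Temperature tends to
    # give focused yet fluent translations.
    return {
        "messages": [
            {
                "role": "user",
                "content": f"Translate the following text into {target_lang}:\n{text}",
            }
        ],
        "temperature": 1,
        "top_p": 0.3,
    }


def translate(text, target_lang="Chinese"):
    r = requests.post(f"{API_URL}/chat/completions",
                      json=build_translation_request(text, target_lang))
    r.raise_for_status()
    return r.json()["choices"][0]["message"]["content"]


if __name__ == "__main__":
    print(translate("The weather is nice today."))
```

The same temperature/top_p fields work on the completions endpoint as well, since both follow the OpenAI request schema.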

Features

  • RWKV model management and one-click startup
  • Fully compatible with the OpenAI API, making every ChatGPT client an RWKV client. After starting the model, open http://127.0.0.1:8000/docs to view more details.
  • Automatic dependency installation, requiring only a lightweight executable program
  • Configs for 2 GB to 32 GB of VRAM are included; it works well on almost all computers
  • User-friendly chat and completion interaction interface included
  • Easy-to-understand and operate parameter configuration
  • Built-in model conversion tool
  • Built-in download management and remote model inspection
  • Built-in one-click LoRA Finetune
  • Can also be used as an OpenAI ChatGPT and GPT-Playground client
  • Multilingual localization
  • Theme switching
  • Automatic updates

API Concurrency Stress Testing

ab -p body.json -T application/json -c 20 -n 100 -l http://127.0.0.1:8000/chat/completions

body.json:

{
  "messages": [
    {
      "role": "user",
      "content": "Hello"
    }
  ]
}
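If ab is not available, a rough Python equivalent of the same stress test (100 requests, at most 20 in flight, against the assumed default address) can be sketched with the standard library's thread pool:

```python
import time
from concurrent.futures import ThreadPoolExecutor

import requests

# Same request body as body.json above.
BODY = {"messages": [{"role": "user", "content": "Hello"}]}
# Assumed default address of the local RWKV-Runner backend.
API_URL = "http://127.0.0.1:8000/chat/completions"


def one_request(_):
    # Send a single chat completion request and time it.
    start = time.perf_counter()
    r = requests.post(API_URL, json=BODY)
    return r.status_code, time.perf_counter() - start


def stress_test(total=100, concurrency=20):
    # Keep at most `concurrency` requests in flight, like ab's -c flag.
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        results = list(pool.map(one_request, range(total)))
    ok = sum(1 for status, _ in results if status == 200)
    latencies = sorted(t for _, t in results)
    return ok, latencies[len(latencies) // 2]  # success count, median latency


if __name__ == "__main__":
    ok, median = stress_test()
    print(f"{ok}/100 OK, median latency {median:.3f}s")
```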

Embeddings API Example

Note: v1.4.0 improved the quality of the embeddings API. Its results are not compatible with previous versions, so if you are using the embeddings API to build knowledge bases or similar, please regenerate them.

If you are using LangChain, just use OpenAIEmbeddings(openai_api_base="http://127.0.0.1:8000", openai_api_key="sk-").

import numpy as np
import requests


def cosine_similarity(a, b):
    # Cosine similarity between two embedding vectors
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))


values = [
    "I am a girl",
    "我是个女孩",
    "私は女の子です",
    "广东人爱吃福建人",
    "我是个人类",
    "I am a human",
    "that dog is so cute",
    "私はねこむすめです、にゃん♪",
    "宇宙级特大事件!号外号外!"
]

# Request an embedding for each sentence from the local server
embeddings = []
for v in values:
    r = requests.post("http://127.0.0.1:8000/embeddings", json={"input": v})
    embedding = r.json()["data"][0]["embedding"]
    embeddings.append(embedding)

# Compare every sentence against the first one, "I am a girl"
compared_embedding = embeddings[0]

embeddings_cos_sim = [cosine_similarity(compared_embedding, e) for e in embeddings]

# Print the sentences from most to least similar
for i in np.argsort(embeddings_cos_sim)[::-1]:
    print(f"{embeddings_cos_sim[i]:.10f} - {values[i]}")

Preview

Screenshots of the Homepage, Chat, Completion, Composition, Configuration, Model Management, Download Management, LoRA Finetune, and Settings pages.