update readme

parent 28874585ea
commit 4d0d0a4dee

@@ -29,6 +29,8 @@ English | [简体中文](README_ZH.md)
</div>

#### Default configs do not enable custom CUDA kernel acceleration, but I strongly recommend that you enable it and run with int8 precision, which is much faster and consumes much less VRAM. Go to the Configs page and turn on `Use Custom CUDA kernel to Accelerate`.
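For context on where the speed and VRAM savings come from: the backend loads models through the standard `rwkv` pip package (an assumption here), where the toggle corresponds roughly to the sketch below. The model path is a placeholder, and the exact wiring in this project may differ.

```python
# Hypothetical sketch of what "Use Custom CUDA kernel to Accelerate" + int8
# maps to when loading a model with the standard `rwkv` pip package.
# The model path below is a placeholder, not a file shipped with this project.
import os

# Must be set before `rwkv.model` is imported: compiles the custom CUDA kernel.
os.environ["RWKV_CUDA_ON"] = "1"
os.environ["RWKV_JIT_ON"] = "1"

from rwkv.model import RWKV

# "cuda fp16i8" keeps the weights quantized to int8 on the GPU,
# which is what cuts VRAM usage compared with plain fp16.
model = RWKV(model="/path/to/your-rwkv-model", strategy="cuda fp16i8")
```

In the app itself you only need to flip the toggle on the Configs page; the snippet is just an illustration of the underlying option.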
## Features
- RWKV model management and one-click startup

@@ -30,6 +30,8 @@ API-compatible interface, which means every ChatGPT client is an RWKV client.
#### Note: the current RWKV Chinese models are of mediocre quality; the English models are recommended to experience RWKV's actual capability
#### The preset configs do not enable custom CUDA kernel acceleration, but I strongly recommend that you enable it and run with int8 quantization, which is much faster and consumes much less VRAM. Go to the Configs page and turn on `使用自定义CUDA算子加速` (Use Custom CUDA kernel to Accelerate)
## Features
- RWKV model management and one-click startup