update readme
parent 4d0d0a4dee
commit f8cb9511e1
@@ -31,6 +31,8 @@ English | [简体中文](README_ZH.md)
#### Default configs do not enable custom CUDA kernel acceleration, but I strongly recommend that you enable it and run with int8 precision, which is much faster and consumes much less VRAM. Go to the Configs page and turn on `Use Custom CUDA kernel to Accelerate`.
#### For different tasks, adjusting API parameters can achieve better results. For example, for translation tasks, you can try setting Temperature to 1 and Top_P to 0.3.
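Since the project exposes a ChatGPT-compatible API, the parameter advice above can be sketched as a request payload. This is a minimal illustration, not the project's own client code: the endpoint URL, port, and prompt wording are assumptions — adjust them to your local setup.

```python
import json

# Assumed local endpoint for an OpenAI-compatible server (illustrative only).
API_URL = "http://127.0.0.1:8000/v1/chat/completions"

def make_translation_request(text: str) -> dict:
    """Build a chat-completion payload tuned for translation tasks."""
    return {
        "messages": [
            {"role": "user", "content": f"Translate to English: {text}"}
        ],
        # Parameters suggested above for translation tasks:
        "temperature": 1,
        "top_p": 0.3,
    }

payload = make_translation_request("你好,世界")
print(json.dumps(payload, ensure_ascii=False, indent=2))
# Send it with any HTTP client, e.g. requests.post(API_URL, json=payload)
```

Lower `Top_P` narrows sampling to high-probability tokens, which suits faithful translation; for creative tasks you would raise it instead.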
## Features
- RWKV model management and one-click startup
@@ -32,6 +32,8 @@ An API-compatible interface, which means every ChatGPT client is an RWKV client.
#### Default configs do not enable custom CUDA kernel acceleration, but I strongly recommend that you enable it and run with int8 quantization, which is much faster and consumes much less VRAM. Go to the Configs page and turn on `使用自定义CUDA算子加速`.
#### For different tasks, adjusting API parameters can achieve better results. For example, for translation tasks, you can try setting Temperature to 1 and Top_P to 0.3.
## Features
- RWKV model management and one-click startup