From a6f5b520c3c3095adfcd85d7d998eeb7c8719c39 Mon Sep 17 00:00:00 2001
From: josc146
Date: Tue, 6 Jun 2023 23:57:28 +0800
Subject: [PATCH] update readme

---
 README.md    | 2 +-
 README_ZH.md | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index 30fa787..0b3c962 100644
--- a/README.md
+++ b/README.md
@@ -47,7 +47,7 @@ English | [简体中文](README_ZH.md)
-#### Default configs do not enable custom CUDA kernel acceleration, but I strongly recommend that you enable it and run with int8 precision, which is much faster and consumes much less VRAM. Go to the Configs page and turn on `Use Custom CUDA kernel to Accelerate`.
+#### Default configs have custom CUDA kernel acceleration enabled, which is much faster and consumes much less VRAM. If you encounter compatibility issues, go to the Configs page and turn off `Use Custom CUDA kernel to Accelerate`.
 #### For different tasks, adjusting API parameters can achieve better results. For example, for translation tasks, you can try setting Temperature to 1 and Top_P to 0.3.

diff --git a/README_ZH.md b/README_ZH.md
index a735919..1ad3360 100644
--- a/README_ZH.md
+++ b/README_ZH.md
@@ -48,7 +48,7 @@ API兼容的接口，这意味着一切ChatGPT客户端都是RWKV客户端。
 #### 注意 目前RWKV中文模型质量一般，推荐使用英文模型体验实际RWKV能力
-#### 预设配置没有开启自定义CUDA算子加速，但我强烈建议你开启它并使用int8量化运行，速度非常快，且显存消耗少得多。前往配置页面，打开`使用自定义CUDA算子加速`
+#### 预设配置已经开启自定义CUDA算子加速，速度更快，且显存消耗更少。如果你遇到可能的兼容性问题，前往配置页面，关闭`使用自定义CUDA算子加速`
 #### 对于不同的任务，调整API参数会获得更好的效果，例如对于翻译任务，你可以尝试设置Temperature为1，Top_P为0.3
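
The README text above suggests Temperature 1 and Top_P 0.3 for translation tasks, and the project advertises an OpenAI-compatible API (any ChatGPT client can act as an RWKV client). Below is a minimal sketch of passing those parameters to a locally running server; the base URL, port, endpoint path, and model name are assumptions about a default local setup, not values from this patch.

```python
import requests

# Assumed local endpoint of the OpenAI-compatible API; adjust to your setup.
API_URL = "http://127.0.0.1:8000/v1/chat/completions"

payload = {
    "model": "rwkv",        # placeholder model name (assumption)
    "temperature": 1,       # suggested in the README for translation tasks
    "top_p": 0.3,
    "messages": [
        {"role": "user", "content": "Translate into English: 今天天气很好。"}
    ],
}

resp = requests.post(API_URL, json=payload, timeout=60)
resp.raise_for_status()
# Standard OpenAI-style response shape: first choice's message content.
print(resp.json()["choices"][0]["message"]["content"])
```

Any OpenAI-compatible client library should work the same way, as long as it is pointed at the local server instead of the official API endpoint.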