diff --git a/README.md b/README.md
index f1c801c..7a05cea 100644
--- a/README.md
+++ b/README.md
@@ -31,6 +31,8 @@ English | [简体中文](README_ZH.md)
 
 #### Default configs do not enable custom CUDA kernel acceleration, but I strongly recommend that you enable it and run with int8 precision, which is much faster and consumes much less VRAM. Go to the Configs page and turn on `Use Custom CUDA kernel to Accelerate`.
 
+#### For different tasks, adjusting API parameters can achieve better results. For example, for translation tasks, you can try setting Temperature to 1 and Top_P to 0.3.
+
 ## Features
 
 - RWKV model management and one-click startup
diff --git a/README_ZH.md b/README_ZH.md
index 5f2ffe4..b70ac95 100644
--- a/README_ZH.md
+++ b/README_ZH.md
@@ -32,6 +32,8 @@ API兼容的接口，这意味着一切ChatGPT客户端都是RWKV客户端。
 
 #### 预设配置没有开启自定义CUDA算子加速，但我强烈建议你开启它并使用int8量化运行，速度非常快，且显存消耗少得多。前往配置页面，打开`使用自定义CUDA算子加速`
 
+#### 对于不同的任务，调整API参数会获得更好的效果，例如对于翻译任务，你可以尝试设置Temperature为1，Top_P为0.3
+
 ## 功能
 
 - RWKV模型管理，一键启动
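
Since the added tip concerns passing Temperature and Top_P through the OpenAI-compatible API, here is a minimal sketch of how those values could be supplied in a chat completion request. The base URL `http://127.0.0.1:8000/v1/chat/completions`, the model name, and the response field layout are assumptions following the usual OpenAI API convention, not values taken from this diff; adjust them to your local server configuration.

```python
# Minimal sketch: send a translation request with Temperature=1 and Top_P=0.3
# to an OpenAI-compatible chat completion endpoint. The host, port, route, and
# model name below are assumptions; change them to match your local setup.
import requests

payload = {
    "model": "rwkv",  # placeholder model name (assumption)
    "messages": [
        {"role": "user", "content": "Translate to French: Good morning!"}
    ],
    "temperature": 1,   # suggested value for translation tasks
    "top_p": 0.3,       # suggested value for translation tasks
}

resp = requests.post("http://127.0.0.1:8000/v1/chat/completions", json=payload)
resp.raise_for_status()
# Assumes the standard OpenAI-style response shape.
print(resp.json()["choices"][0]["message"]["content"])
```

Lower Top_P narrows sampling to high-probability tokens, which tends to keep translations literal, while Temperature 1 leaves the remaining distribution unscaled; other tasks may benefit from different combinations.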