{
"Home": "主页",
"Train": "训练",
"About": "关于",
"Settings": "设置",
"Go to chat page": "前往聊天页",
"Manage your configs": "管理你的配置",
"Manage models": "管理模型",
"Run": "运行",
"Offline": "离线",
"Starting": "启动中",
"Loading": "读取模型中",
"Working": "运行中",
"Stop": "停止",
"Enable High Precision For Last Layer": "输出层使用高精度",
"Stored Layers": "载入显存层数",
"Precision": "精度",
"Device": "设备",
"Convert model with these configs": "用这些设置转换模型",
"Manage Models": "管理模型",
"Model": "模型",
"Model Parameters": "模型参数",
"Frequency Penalty *": "Frequency Penalty *",
"Presence Penalty *": "Presence Penalty *",
"Top_P *": "Top_P *",
"Temperature *": "Temperature *",
"Max Response Token *": "最大响应 Token *",
"API Port": "API 端口",
"Hover your mouse over the text to view a detailed description. Settings marked with * will take effect immediately after being saved.": "把鼠标悬停在文本上查看详细描述. 标记了星号 * 的设置在保存后会立即生效.",
"Default API Parameters": "默认 API 参数",
"Provide JSON file URLs for the models manifest. Separate URLs with semicolons. The \"models\" field in JSON files will be parsed into the following table.": "填写模型描述的 JSON 文件地址. 地址间用分号分隔. JSON 文件内的 \"models\" 字段会被解析进下表.",
"Config Name": "配置名",
"Refresh": "刷新",
"Save Config": "保存配置",
"Model Source Manifest List": "模型源",
"Models": "模型",
"Delete Config": "删除配置",
"Help": "帮助",
"Version": "版本",
"New Config": "新建配置",
"Open Url": "打开网页",
"Download": "下载",
"Open Folder": "打开文件夹",
"Configs": "配置",
"Automatic Updates Check": "自动检查更新",
"Updates Check Error": "检查更新失败",
"Introduction": "介绍",
"Dark Mode": "深色模式",
"Language": "语言",
"In Development": "开发中",
"Chat": "聊天",
"Convert": "转换",
"Actions": "动作",
"Last updated": "上次更新",
"Desc": "描述",
"Size": "文件大小",
"File": "文件",
"Config Saved": "配置已保存",
"Downloading": "正在下载",
"Loading Model": "正在读取模型",
"Startup Completed": "启动完成",
"Failed to switch model": "切换模型失败",
"Start Converting": "开始转换",
"Convert Success": "转换成功",
"Convert Failed": "转换失败",
"Model Not Found": "模型不存在",
"Model Status": "模型状态",
"Clear": "清除",
"Send": "发送",
"Type your message here": "在此输入消息",
"Copy": "复制",
"Read Aloud": "朗读",
"Hello! I'm RWKV, an open-source and commercially available large language model.": "你好! 我是RWKV, 一个开源可商用的大语言模型.",
"This tool's API is compatible with OpenAI API. It can be used with any ChatGPT tool you like. Go to the settings of some ChatGPT tool, replace the 'https://api.openai.com' part in the API address with '": "本工具的API与OpenAI API兼容. 因此可以配合任意你喜欢的ChatGPT工具使用. 打开某个ChatGPT工具的设置, 将API地址中的'https://api.openai.com'部分替换为'",
"New Version Available": "新版本可用",
"Update": "更新",
"Please click the button in the top right corner to start the model": "请点击右上角的按钮启动模型",
"Update Error, Please restart this program": "更新出错, 请重启本程序",
"Open the following URL with your browser to view the API documentation": "使用浏览器打开以下地址查看API文档",
"By default, the maximum number of tokens that can be answered in a single response, it can be changed by the user by specifying API parameters.": "默认情况下, 单个回复最多回答的token数目, 用户可以通过自行指定API参数改变这个值",
"Sampling temperature, the higher the stronger the randomness and creativity, while the lower, the more focused and deterministic it will be.": "采样温度, 越大随机性越强, 更具创造力, 越小则越保守稳定",
"Consider the results of the top n% probability mass, 0.1 considers the top 10%, with higher quality but more conservative, 1 considers all results, with lower quality but more diverse.": "考虑前 n% 概率质量的结果, 0.1 考虑前 10%, 质量更高, 但更保守, 1 考虑所有结果, 质量降低, 但更多样",
"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.": "存在惩罚. 正值根据新token在至今的文本中是否出现过, 来对其进行惩罚, 从而增加了模型涉及新话题的可能性",
"Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.": "频率惩罚. 正值根据新token在至今的文本中出现的频率/次数, 来对其进行惩罚, 从而减少模型原封不动地重复相同句子的可能性",
"int8 uses less VRAM, but has slightly lower quality. fp16 has higher quality, and fp32 has the best quality.": "int8占用显存更低, 但质量略微下降. fp16质量更好, fp32质量最好",
"Number of the neural network layers loaded into VRAM, the more you load, the faster the speed, but it consumes more VRAM.": "载入显存的神经网络层数, 载入越多, 速度越快, 但显存消耗越大",
"Whether to use CPU to calculate the last output layer of the neural network with FP32 precision to obtain better quality.": "是否使用CPU以fp32精度计算神经网络的最后一层输出层, 以获得更好的质量",
"Downloads": "下载",
"Pause": "暂停",
"Continue": "继续",
"Check": "查看",
"Model file not found": "模型文件不存在",
"Can not find download url": "找不到下载地址",
"Python target not found, would you like to download it?": "没有找到目标Python, 是否下载?",
"Python dependencies are incomplete, would you like to install them?": "Python依赖缺失, 是否安装?",
"Install": "安装",
"This is the latest version": "已是最新版",
"Use Tsinghua Pip Mirrors": "使用清华大学Pip镜像源",
"Model Config Exception": "模型配置异常",
"Use Gitee Updates Source": "使用Gitee更新源",
"Use Custom CUDA kernel to Accelerate": "使用自定义CUDA算子加速",
"Enabling this option can greatly improve inference speed, but there may be compatibility issues. If it fails to start, please turn off this option.": "开启这个选项能大大提升推理速度,但可能存在兼容性问题,如果启动失败,请关闭此选项",
"Supported custom cuda file not found": "没有找到支持的自定义CUDA文件",
"Failed to copy custom cuda file": "自定义CUDA文件复制失败"
}