diff --git a/frontend/src/_locales/zh-hans/main.json b/frontend/src/_locales/zh-hans/main.json
index c64fd52..d4d1412 100644
--- a/frontend/src/_locales/zh-hans/main.json
+++ b/frontend/src/_locales/zh-hans/main.json
@@ -118,12 +118,12 @@
  "Instruction": "指令",
  "Blank": "空白",
  "The following is an epic science fiction masterpiece that is immortalized, with delicate descriptions and grand depictions of interstellar civilization wars.\nChapter 1.\n": "《背影》\n我与父亲不相见已二年余了,我最不能忘记的是他的背影。\n那年冬天,祖母死了,父亲的差使也交卸了,正是祸不单行的日子。我从北京到徐州,打算",
- "The following is a conversation between a cat girl and her owner. The cat girl is a humanized creature that behaves like a cat but is humanoid. At the end of each sentence in the dialogue, she will add \"Meow~\". In the following content, Bob represents the owner and Alice represents the cat girl.\n\nBob: Hello.\n\nAlice: I'm here, meow~.\n\nBob: Can you tell jokes?": "以下是一位猫娘的主人和猫娘的对话内容,猫娘是一种拟人化的生物,其行为似猫但类人,在每一句对话末尾都会加上\"喵~\"。以下内容中,Bob代表主人,Alice代表猫娘。\n\nBob: 你好\n\nAlice: 主人我在哦,喵~\n\nBob: 你会讲笑话吗?",
+ "The following is a conversation between a cat girl and her owner. The cat girl is a humanized creature that behaves like a cat but is humanoid. At the end of each sentence in the dialogue, she will add \"Meow~\". In the following content, User represents the owner and Assistant represents the cat girl.\n\nUser: Hello.\n\nAssistant: I'm here, meow~.\n\nUser: Can you tell jokes?": "以下是一位猫娘的主人和猫娘的对话内容,猫娘是一种拟人化的生物,其行为似猫但类人,在每一句对话末尾都会加上\"喵~\"。以下内容中,User代表主人,Assistant代表猫娘。\n\nUser: 你好\n\nAssistant: 主人我在哦,喵~\n\nUser: 你会讲笑话吗?",
  "When response finished, inject this content.": "响应结束时,插入此内容到末尾",
  "Inject start text": "起始注入文本",
  "Inject end text": "结尾注入文本",
  "Before the response starts, inject this content.": "响应开始前,在开头插入此内容",
- "There is currently a game of Werewolf with six players, including a Seer (who can check identities at night), two Werewolves (who can choose someone to kill at night), a Bodyguard (who can choose someone to protect at night), two Villagers (with no special abilities), and a game host. Bob will play as Player 1, Alice will play as Players 2-6 and the game host, and they will begin playing together. Every night, the host will ask Bob for his action and simulate the actions of the other players. During the day, the host will oversee the voting process and ask Bob for his vote. \n\nAlice: Next, I will act as the game host and assign everyone their roles, including randomly assigning yours. Then, I will simulate the actions of Players 2-6 and let you know what happens each day. Based on your assigned role, you can tell me your actions and I will let you know the corresponding results each day.\n\nBob: Okay, I understand. Let's begin. Please assign me a role. Am I the Seer, Werewolf, Villager, or Bodyguard?\n\nAlice: You are the Seer. Now that night has fallen, please choose a player to check his identity.\n\nBob: Tonight, I want to check Player 2 and find out his role.": "现在有一场六人狼人杀游戏,包括一名预言家(可以在夜晚查验身份),两名狼人(可以在夜晚选择杀人),一名守卫(可以在夜晚选择要守护的人),两名平民(无技能),一名主持人,以下内容中Bob将扮演其中的1号玩家,Alice来扮演2-6号玩家,以及主持人,并开始与Bob进行游戏,主持人每晚都会询问Bob的行动,并模拟其他人的行动,在白天则要主持投票,并同样询问Bob投票对象,公布投票结果。\n\nAlice: 接下来,我将首先作为主持人进行角色分配,并给你赋予随机的角色,之后我将模拟2-6号玩家进行行动,告知你每天的动态,根据你被分配的角色,你可以回复我你做的行动,我会告诉你每天对应的结果\n\nBob: 好的,我明白了,那么开始吧。请先给我一个角色身份。我是预言家,狼人,平民,守卫中的哪一个呢?\n\nAlice: 你的身份是预言家。现在夜晚降临,请选择你要查验的玩家。\n\nBob: 今晚我要验2号玩家,他是什么身份?",
+ "There is currently a game of Werewolf with six players, including a Seer (who can check identities at night), two Werewolves (who can choose someone to kill at night), a Bodyguard (who can choose someone to protect at night), two Villagers (with no special abilities), and a game host. User will play as Player 1, Assistant will play as Players 2-6 and the game host, and they will begin playing together. Every night, the host will ask User for his action and simulate the actions of the other players. During the day, the host will oversee the voting process and ask User for his vote. \n\nAssistant: Next, I will act as the game host and assign everyone their roles, including randomly assigning yours. Then, I will simulate the actions of Players 2-6 and let you know what happens each day. Based on your assigned role, you can tell me your actions and I will let you know the corresponding results each day.\n\nUser: Okay, I understand. Let's begin. Please assign me a role. Am I the Seer, Werewolf, Villager, or Bodyguard?\n\nAssistant: You are the Seer. Now that night has fallen, please choose a player to check his identity.\n\nUser: Tonight, I want to check Player 2 and find out his role.": "现在有一场六人狼人杀游戏,包括一名预言家(可以在夜晚查验身份),两名狼人(可以在夜晚选择杀人),一名守卫(可以在夜晚选择要守护的人),两名平民(无技能),一名主持人,以下内容中User将扮演其中的1号玩家,Assistant来扮演2-6号玩家,以及主持人,并开始与User进行游戏,主持人每晚都会询问User的行动,并模拟其他人的行动,在白天则要主持投票,并同样询问User投票对象,公布投票结果。\n\nAssistant: 接下来,我将首先作为主持人进行角色分配,并给你赋予随机的角色,之后我将模拟2-6号玩家进行行动,告知你每天的动态,根据你被分配的角色,你可以回复我你做的行动,我会告诉你每天对应的结果\n\nUser: 好的,我明白了,那么开始吧。请先给我一个角色身份。我是预言家,狼人,平民,守卫中的哪一个呢?\n\nAssistant: 你的身份是预言家。现在夜晚降临,请选择你要查验的玩家。\n\nUser: 今晚我要验2号玩家,他是什么身份?",
  "Writer, Translator, Role-playing": "写作,翻译,角色扮演",
  "Chinese Kongfu": "情境冒险",
  "Allow external access to the API (service must be restarted)": "允许外部访问API (必须重启服务)",
@@ -233,5 +233,6 @@
  "Failed to convert data": "数据转换失败",
  "Failed to merge model": "合并模型失败",
  "The data path should be a directory or a file in jsonl format (more formats will be supported in the future).\n\nWhen you provide a directory path, all the txt files within that directory will be automatically converted into training data. This is commonly used for large-scale training in writing, code generation, or knowledge bases.\n\nThe jsonl format file can be referenced at https://github.com/Abel2076/json2binidx_tool/blob/main/sample.jsonl.\nYou can also write it similar to OpenAI's playground format, as shown in https://platform.openai.com/playground/p/default-chat.\nEven for multi-turn conversations, they must be written in a single line using `\\n` to indicate line breaks. If they are different dialogues or topics, they should be written in separate lines.": "数据路径必须是一个文件夹,或者jsonl格式文件 (未来会支持更多格式)\n\n当你填写的路径是一个文件夹时,该文件夹内的所有txt文件会被自动转换为训练数据,通常这用于大批量训练写作,代码生成或知识库\n\njsonl文件的格式参考 https://github.com/Abel2076/json2binidx_tool/blob/main/sample.jsonl\n你也可以仿照openai的playground编写,参考 https://platform.openai.com/playground/p/default-chat\n即使是多轮对话也必须写在一行,用`\\n`表示换行,如果是不同对话或主题,则另起一行",
- "Size mismatch for blocks. You are attempting to continue training from the LoRA model, but it does not match the base model. Please set LoRA model to None.": "尺寸不匹配块。你正在尝试从LoRA模型继续训练,但该LoRA模型与基底模型不匹配,请将LoRA模型设为空"
+ "Size mismatch for blocks. You are attempting to continue training from the LoRA model, but it does not match the base model. Please set LoRA model to None.": "尺寸不匹配块。你正在尝试从LoRA模型继续训练,但该LoRA模型与基底模型不匹配,请将LoRA模型设为空",
+ "Instruction: Write a story using the following information\n\nInput: A man named Alex chops a tree down\n\nResponse:": "Instruction: Write a story using the following information\n\nInput: 艾利克斯砍倒了一棵树\n\nResponse:"
  }
\ No newline at end of file
diff --git a/frontend/src/pages/Completion.tsx b/frontend/src/pages/Completion.tsx
index 6cb5fe7..462f316 100644
--- a/frontend/src/pages/Completion.tsx
+++ b/frontend/src/pages/Completion.tsx
@@ -35,7 +35,7 @@ export const defaultPresets: CompletionPreset[] = [{
  topP: 0.5,
  presencePenalty: 0.4,
  frequencyPenalty: 0.4,
- stop: '\\n\\nBob',
+ stop: '\\n\\nUser',
  injectStart: '',
  injectEnd: ''
  }
@@ -46,37 +46,37 @@ export const defaultPresets: CompletionPreset[] = [{
  maxResponseToken: 500,
  temperature: 1,
  topP: 0.3,
- presencePenalty: 0.4,
- frequencyPenalty: 0.4,
+ presencePenalty: 0,
+ frequencyPenalty: 1,
  stop: '\\nEnglish',
  injectStart: '\\nChinese: ',
  injectEnd: '\\nEnglish: '
  }
  }, {
  name: 'Catgirl',
- prompt: 'The following is a conversation between a cat girl and her owner. The cat girl is a humanized creature that behaves like a cat but is humanoid. At the end of each sentence in the dialogue, she will add \"Meow~\". In the following content, Bob represents the owner and Alice represents the cat girl.\n\nBob: Hello.\n\nAlice: I\'m here, meow~.\n\nBob: Can you tell jokes?',
+ prompt: 'The following is a conversation between a cat girl and her owner. The cat girl is a humanized creature that behaves like a cat but is humanoid. At the end of each sentence in the dialogue, she will add \"Meow~\". In the following content, User represents the owner and Assistant represents the cat girl.\n\nUser: Hello.\n\nAssistant: I\'m here, meow~.\n\nUser: Can you tell jokes?',
  params: {
  maxResponseToken: 500,
  temperature: 1.2,
  topP: 0.5,
  presencePenalty: 0.4,
  frequencyPenalty: 0.4,
- stop: '\\n\\nBob',
- injectStart: '\\n\\nAlice: ',
- injectEnd: '\\n\\nBob: '
+ stop: '\\n\\nUser',
+ injectStart: '\\n\\nAssistant: ',
+ injectEnd: '\\n\\nUser: '
  }
  }, {
  name: 'Chinese Kongfu',
- prompt: 'Bob: 请你扮演一个文本冒险游戏,我是游戏主角。这是一个玄幻修真世界,有四大门派。我输入我的行动,请你显示行动结果,并具体描述环境。我的第一个行动是“醒来”,请开始故事。',
+ prompt: 'User: 请你扮演一个文本冒险游戏,我是游戏主角。这是一个玄幻修真世界,有四大门派。我输入我的行动,请你显示行动结果,并具体描述环境。我的第一个行动是“醒来”,请开始故事。',
  params: {
  maxResponseToken: 500,
  temperature: 1.1,
  topP: 0.7,
  presencePenalty: 0.3,
  frequencyPenalty: 0.3,
- stop: '\\n\\nBob',
- injectStart: '\\n\\nAlice: ',
- injectEnd: '\\n\\nBob: '
+ stop: '\\n\\nUser',
+ injectStart: '\\n\\nAssistant: ',
+ injectEnd: '\\n\\nUser: '
  }
  }, {
  // }, {
@@ -94,26 +94,26 @@ export const defaultPresets: CompletionPreset[] = [{
  // }
  // }, {
  name: 'Werewolf',
- prompt: 'There is currently a game of Werewolf with six players, including a Seer (who can check identities at night), two Werewolves (who can choose someone to kill at night), a Bodyguard (who can choose someone to protect at night), two Villagers (with no special abilities), and a game host. Bob will play as Player 1, Alice will play as Players 2-6 and the game host, and they will begin playing together. Every night, the host will ask Bob for his action and simulate the actions of the other players. During the day, the host will oversee the voting process and ask Bob for his vote. \n\nAlice: Next, I will act as the game host and assign everyone their roles, including randomly assigning yours. Then, I will simulate the actions of Players 2-6 and let you know what happens each day. Based on your assigned role, you can tell me your actions and I will let you know the corresponding results each day.\n\nBob: Okay, I understand. Let\'s begin. Please assign me a role. Am I the Seer, Werewolf, Villager, or Bodyguard?\n\nAlice: You are the Seer. Now that night has fallen, please choose a player to check his identity.\n\nBob: Tonight, I want to check Player 2 and find out his role.',
+ prompt: 'There is currently a game of Werewolf with six players, including a Seer (who can check identities at night), two Werewolves (who can choose someone to kill at night), a Bodyguard (who can choose someone to protect at night), two Villagers (with no special abilities), and a game host. User will play as Player 1, Assistant will play as Players 2-6 and the game host, and they will begin playing together. Every night, the host will ask User for his action and simulate the actions of the other players. During the day, the host will oversee the voting process and ask User for his vote. \n\nAssistant: Next, I will act as the game host and assign everyone their roles, including randomly assigning yours. Then, I will simulate the actions of Players 2-6 and let you know what happens each day. Based on your assigned role, you can tell me your actions and I will let you know the corresponding results each day.\n\nUser: Okay, I understand. Let\'s begin. Please assign me a role. Am I the Seer, Werewolf, Villager, or Bodyguard?\n\nAssistant: You are the Seer. Now that night has fallen, please choose a player to check his identity.\n\nUser: Tonight, I want to check Player 2 and find out his role.',
  params: {
  maxResponseToken: 500,
  temperature: 1.2,
  topP: 0.4,
  presencePenalty: 0.5,
  frequencyPenalty: 0.5,
- stop: '\\n\\nBob',
- injectStart: '\\n\\nAlice: ',
- injectEnd: '\\n\\nBob: '
+ stop: '\\n\\nUser',
+ injectStart: '\\n\\nAssistant: ',
+ injectEnd: '\\n\\nUser: '
  }
  }, {
  name: 'Instruction',
- prompt: 'Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n# Instruction:\nWrite a story using the following information\n\n# Input:\nA man named Alex chops a tree down\n\n# Response:\n',
+ prompt: 'Instruction: Write a story using the following information\n\nInput: A man named Alex chops a tree down\n\nResponse:',
  params: {
  maxResponseToken: 500,
- temperature: 1.2,
- topP: 0.5,
- presencePenalty: 0.4,
- frequencyPenalty: 0.4,
+ temperature: 1,
+ topP: 0.3,
+ presencePenalty: 0,
+ frequencyPenalty: 1,
  stop: '',
  injectStart: '',
  injectEnd: ''
@@ -124,9 +124,9 @@ export const defaultPresets: CompletionPreset[] = [{
  params: {
  maxResponseToken: 500,
  temperature: 1,
- topP: 0.5,
- presencePenalty: 0.4,
- frequencyPenalty: 0.4,
+ topP: 0.3,
+ presencePenalty: 0,
+ frequencyPenalty: 1,
  stop: '',
  injectStart: '',
  injectEnd: ''
diff --git a/frontend/src/pages/defaultModelConfigs.ts b/frontend/src/pages/defaultModelConfigs.ts
index 3bad994..94ecc6d 100644
--- a/frontend/src/pages/defaultModelConfigs.ts
+++ b/frontend/src/pages/defaultModelConfigs.ts
@@ -6,10 +6,10 @@ export const defaultModelConfigsMac: ModelConfig[] = [
  apiParameters: {
  apiPort: 8000,
  maxResponseToken: 4100,
- temperature: 1.2,
- topP: 0.5,
- presencePenalty: 0.4,
- frequencyPenalty: 0.4
+ temperature: 1.0,
+ topP: 0.3,
+ presencePenalty: 0,
+ frequencyPenalty: 1
  },
  modelParameters: {
  modelName: 'RWKV-4-World-0.1B-v1-20230520-ctx4096.pth',
@@ -25,10 +25,10 @@ export const defaultModelConfigsMac: ModelConfig[] = [
  apiParameters: {
  apiPort: 8000,
  maxResponseToken: 4100,
- temperature: 1.2,
- topP: 0.5,
- presencePenalty: 0.4,
- frequencyPenalty: 0.4
+ temperature: 1.0,
+ topP: 0.3,
+ presencePenalty: 0,
+ frequencyPenalty: 1
  },
  modelParameters: {
  modelName: 'RWKV-4-World-0.4B-v1-20230529-ctx4096.pth',
@@ -44,10 +44,10 @@ export const defaultModelConfigsMac: ModelConfig[] = [
  apiParameters: {
  apiPort: 8000,
  maxResponseToken: 4100,
- temperature: 1.2,
- topP: 0.5,
- presencePenalty: 0.4,
- frequencyPenalty: 0.4
+ temperature: 1.0,
+ topP: 0.3,
+ presencePenalty: 0,
+ frequencyPenalty: 1
  },
  modelParameters: {
  modelName: 'RWKV-4-World-1.5B-v1-fixed-20230612-ctx4096.pth',
@@ -63,10 +63,10 @@ export const defaultModelConfigsMac: ModelConfig[] = [
  apiParameters: {
  apiPort: 8000,
  maxResponseToken: 4100,
- temperature: 1.2,
- topP: 0.5,
- presencePenalty: 0.4,
- frequencyPenalty: 0.4
+ temperature: 1.0,
+ topP: 0.3,
+ presencePenalty: 0,
+ frequencyPenalty: 1
  },
  modelParameters: {
  modelName: 'RWKV-4-Raven-1B5-v12-Eng98%-Other2%-20230520-ctx4096.pth',
@@ -82,10 +82,10 @@ export const defaultModelConfigsMac: ModelConfig[] = [
  apiParameters: {
  apiPort: 8000,
  maxResponseToken: 4100,
- temperature: 1.2,
- topP: 0.5,
- presencePenalty: 0.4,
- frequencyPenalty: 0.4
+ temperature: 1.0,
+ topP: 0.3,
+ presencePenalty: 0,
+ frequencyPenalty: 1
  },
  modelParameters: {
  modelName: 'RWKV-4-World-3B-v1-20230619-ctx4096.pth',
@@ -101,10 +101,10 @@ export const defaultModelConfigsMac: ModelConfig[] = [
  apiParameters: {
  apiPort: 8000,
  maxResponseToken: 4100,
- temperature: 1.2,
- topP: 0.5,
- presencePenalty: 0.4,
- frequencyPenalty: 0.4
+ temperature: 1.0,
+ topP: 0.3,
+ presencePenalty: 0,
+ frequencyPenalty: 1
  },
  modelParameters: {
  modelName: 'RWKV-4-Raven-3B-v12-Eng98%-Other2%-20230520-ctx4096.pth',
@@ -120,10 +120,10 @@ export const defaultModelConfigsMac: ModelConfig[] = [
  apiParameters: {
  apiPort: 8000,
  maxResponseToken: 4100,
- temperature: 1.2,
- topP: 0.5,
- presencePenalty: 0.4,
- frequencyPenalty: 0.4
+ temperature: 1.0,
+ topP: 0.3,
+ presencePenalty: 0,
+ frequencyPenalty: 1
  },
  modelParameters: {
  modelName: 'RWKV-4-World-CHNtuned-3B-v1-20230625-ctx4096.pth',
@@ -139,10 +139,10 @@ export const defaultModelConfigsMac: ModelConfig[] = [
  apiParameters: {
  apiPort: 8000,
  maxResponseToken: 4100,
- temperature: 1.2,
- topP: 0.5,
- presencePenalty: 0.4,
- frequencyPenalty: 0.4
+ temperature: 1.0,
+ topP: 0.3,
+ presencePenalty: 0,
+ frequencyPenalty: 1
  },
  modelParameters: {
  modelName: 'RWKV-4-World-7B-v1-20230626-ctx4096.pth',
@@ -158,10 +158,10 @@ export const defaultModelConfigsMac: ModelConfig[] = [
  apiParameters: {
  apiPort: 8000,
  maxResponseToken: 4100,
- temperature: 1.2,
- topP: 0.5,
- presencePenalty: 0.4,
- frequencyPenalty: 0.4
+ temperature: 1.0,
+ topP: 0.3,
+ presencePenalty: 0,
+ frequencyPenalty: 1
  },
  modelParameters: {
  modelName: 'RWKV-4-World-1.5B-v1-fixed-20230612-ctx4096.pth',
@@ -176,10 +176,10 @@ export const defaultModelConfigsMac: ModelConfig[] = [
  apiParameters: {
  apiPort: 8000,
  maxResponseToken: 4100,
- temperature: 1.2,
- topP: 0.5,
- presencePenalty: 0.4,
- frequencyPenalty: 0.4
+ temperature: 1.0,
+ topP: 0.3,
+ presencePenalty: 0,
+ frequencyPenalty: 1
  },
  modelParameters: {
  modelName: 'RWKV-4-Raven-1B5-v12-Eng98%-Other2%-20230520-ctx4096.pth',
@@ -194,10 +194,10 @@ export const defaultModelConfigsMac: ModelConfig[] = [
  apiParameters: {
  apiPort: 8000,
  maxResponseToken: 4100,
- temperature: 1.2,
- topP: 0.5,
- presencePenalty: 0.4,
- frequencyPenalty: 0.4
+ temperature: 1.0,
+ topP: 0.3,
+ presencePenalty: 0,
+ frequencyPenalty: 1
  },
  modelParameters: {
  modelName: 'RWKV-4-World-3B-v1-20230619-ctx4096.pth',
@@ -212,10 +212,10 @@ export const defaultModelConfigsMac: ModelConfig[] = [
  apiParameters: {
  apiPort: 8000,
  maxResponseToken: 4100,
- temperature: 1.2,
- topP: 0.5,
- presencePenalty: 0.4,
- frequencyPenalty: 0.4
+ temperature: 1.0,
+ topP: 0.3,
+ presencePenalty: 0,
+ frequencyPenalty: 1
  },
  modelParameters: {
  modelName: 'RWKV-4-Raven-3B-v12-Eng98%-Other2%-20230520-ctx4096.pth',
@@ -230,10 +230,10 @@ export const defaultModelConfigsMac: ModelConfig[] = [
  apiParameters: {
  apiPort: 8000,
  maxResponseToken: 4100,
- temperature: 1.2,
- topP: 0.5,
- presencePenalty: 0.4,
- frequencyPenalty: 0.4
+ temperature: 1.0,
+ topP: 0.3,
+ presencePenalty: 0,
+ frequencyPenalty: 1
  },
  modelParameters: {
  modelName: 'RWKV-4-World-CHNtuned-3B-v1-20230625-ctx4096.pth',
@@ -248,10 +248,10 @@ export const defaultModelConfigsMac: ModelConfig[] = [
  apiParameters: {
  apiPort: 8000,
  maxResponseToken: 4100,
- temperature: 1.2,
- topP: 0.5,
- presencePenalty: 0.4,
- frequencyPenalty: 0.4
+ temperature: 1.0,
+ topP: 0.3,
+ presencePenalty: 0,
+ frequencyPenalty: 1
  },
  modelParameters: {
  modelName: 'RWKV-4-World-7B-v1-20230626-ctx4096.pth',
@@ -266,10 +266,10 @@ export const defaultModelConfigsMac: ModelConfig[] = [
  apiParameters: {
  apiPort: 8000,
  maxResponseToken: 4100,
- temperature: 1.2,
- topP: 0.5,
- presencePenalty: 0.4,
- frequencyPenalty: 0.4
+ temperature: 1.0,
+ topP: 0.3,
+ presencePenalty: 0,
+ frequencyPenalty: 1
  },
  modelParameters: {
  modelName: 'RWKV-4-Raven-7B-v12-Eng98%-Other2%-20230521-ctx8192.pth',
@@ -284,10 +284,10 @@ export const defaultModelConfigsMac: ModelConfig[] = [
  apiParameters: {
  apiPort: 8000,
  maxResponseToken: 4100,
- temperature: 1.2,
- topP: 0.5,
- presencePenalty: 0.4,
- frequencyPenalty: 0.4
+ temperature: 1.0,
+ topP: 0.3,
+ presencePenalty: 0,
+ frequencyPenalty: 1
  },
  modelParameters: {
  modelName: 'RWKV-4-World-CHNtuned-7B-v1-20230709-ctx4096.pth',
@@ -305,10 +305,10 @@ export const defaultModelConfigs: ModelConfig[] = [
  apiParameters: {
  apiPort: 8000,
  maxResponseToken: 4100,
- temperature: 1.2,
- topP: 0.5,
- presencePenalty: 0.4,
- frequencyPenalty: 0.4
+ temperature: 1.0,
+ topP: 0.3,
+ presencePenalty: 0,
+ frequencyPenalty: 1
  },
  modelParameters: {
  modelName: 'RWKV-4-World-3B-v1-20230619-ctx4096.pth',
@@ -324,10 +324,10 @@ export const defaultModelConfigs: ModelConfig[] = [
  apiParameters: {
  apiPort: 8000,
  maxResponseToken: 4100,
- temperature: 1.2,
- topP: 0.5,
- presencePenalty: 0.4,
- frequencyPenalty: 0.4
+ temperature: 1.0,
+ topP: 0.3,
+ presencePenalty: 0,
+ frequencyPenalty: 1
  },
  modelParameters: {
  modelName: 'RWKV-4-World-0.1B-v1-20230520-ctx4096.pth',
@@ -342,10 +342,10 @@ export const defaultModelConfigs: ModelConfig[] = [
  apiParameters: {
  apiPort: 8000,
  maxResponseToken: 4100,
- temperature: 1.2,
- topP: 0.5,
- presencePenalty: 0.4,
- frequencyPenalty: 0.4
+ temperature: 1.0,
+ topP: 0.3,
+ presencePenalty: 0,
+ frequencyPenalty: 1
  },
  modelParameters: {
  modelName: 'RWKV-4-Raven-1B5-v12-Eng98%-Other2%-20230520-ctx4096.pth',
@@ -361,10 +361,10 @@ export const defaultModelConfigs: ModelConfig[] = [
  apiParameters: {
  apiPort: 8000,
  maxResponseToken: 4100,
- temperature: 1.2,
- topP: 0.5,
- presencePenalty: 0.4,
- frequencyPenalty: 0.4
+ temperature: 1.0,
+ topP: 0.3,
+ presencePenalty: 0,
+ frequencyPenalty: 1
  },
  modelParameters: {
  modelName: 'RWKV-4-World-0.4B-v1-20230529-ctx4096.pth',
@@ -379,10 +379,10 @@ export const defaultModelConfigs: ModelConfig[] = [
  apiParameters: {
  apiPort: 8000,
  maxResponseToken: 4100,
- temperature: 1.2,
- topP: 0.5,
- presencePenalty: 0.4,
- frequencyPenalty: 0.4
+ temperature: 1.0,
+ topP: 0.3,
+ presencePenalty: 0,
+ frequencyPenalty: 1
  },
  modelParameters: {
  modelName: 'RWKV-4-World-1.5B-v1-fixed-20230612-ctx4096.pth',
@@ -397,10 +397,10 @@ export const defaultModelConfigs: ModelConfig[] = [
  apiParameters: {
  apiPort: 8000,
  maxResponseToken: 4100,
- temperature: 1.2,
- topP: 0.5,
- presencePenalty: 0.4,
- frequencyPenalty: 0.4
+ temperature: 1.0,
+ topP: 0.3,
+ presencePenalty: 0,
+ frequencyPenalty: 1
  },
  modelParameters: {
  modelName: 'RWKV-4-Raven-1B5-v12-Eng98%-Other2%-20230520-ctx4096.pth',
@@ -416,10 +416,10 @@ export const defaultModelConfigs: ModelConfig[] = [
  apiParameters: {
  apiPort: 8000,
  maxResponseToken: 4100,
- temperature: 1.2,
- topP: 0.5,
- presencePenalty: 0.4,
- frequencyPenalty: 0.4
+ temperature: 1.0,
+ topP: 0.3,
+ presencePenalty: 0,
+ frequencyPenalty: 1
  },
  modelParameters: {
  modelName: 'RWKV-4-World-3B-v1-20230619-ctx4096.pth',
@@ -435,10 +435,10 @@ export const defaultModelConfigs: ModelConfig[] = [
  apiParameters: {
  apiPort: 8000,
  maxResponseToken: 4100,
- temperature: 1.2,
- topP: 0.5,
- presencePenalty: 0.4,
- frequencyPenalty: 0.4
+ temperature: 1.0,
+ topP: 0.3,
+ presencePenalty: 0,
+ frequencyPenalty: 1
  },
  modelParameters: {
  modelName: 'RWKV-4-Raven-3B-v12-Eng98%-Other2%-20230520-ctx4096.pth',
@@ -454,10 +454,10 @@ export const defaultModelConfigs: ModelConfig[] = [
  apiParameters: {
  apiPort: 8000,
  maxResponseToken: 4100,
- temperature: 1.2,
- topP: 0.5,
- presencePenalty: 0.4,
- frequencyPenalty: 0.4
+ temperature: 1.0,
+ topP: 0.3,
+ presencePenalty: 0,
+ frequencyPenalty: 1
  },
  modelParameters: {
  modelName: 'RWKV-4-World-CHNtuned-3B-v1-20230625-ctx4096.pth',
@@ -473,10 +473,10 @@ export const defaultModelConfigs: ModelConfig[] = [
  apiParameters: {
  apiPort: 8000,
  maxResponseToken: 4100,
- temperature: 1.2,
- topP: 0.5,
- presencePenalty: 0.4,
- frequencyPenalty: 0.4
+ temperature: 1.0,
+ topP: 0.3,
+ presencePenalty: 0,
+ frequencyPenalty: 1
  },
  modelParameters: {
  modelName: 'RWKV-4-World-7B-v1-20230626-ctx4096.pth',
@@ -492,10 +492,10 @@ export const defaultModelConfigs: ModelConfig[] = [
  apiParameters: {
  apiPort: 8000,
  maxResponseToken: 4100,
- temperature: 1.2,
- topP: 0.5,
- presencePenalty: 0.4,
- frequencyPenalty: 0.4
+ temperature: 1.0,
+ topP: 0.3,
+ presencePenalty: 0,
+ frequencyPenalty: 1
  },
  modelParameters: {
  modelName: 'RWKV-4-Raven-7B-v12-Eng98%-Other2%-20230521-ctx8192.pth',
@@ -511,10 +511,10 @@ export const defaultModelConfigs: ModelConfig[] = [
  apiParameters: {
  apiPort: 8000,
  maxResponseToken: 4100,
- temperature: 1.2,
- topP: 0.5,
- presencePenalty: 0.4,
- frequencyPenalty: 0.4
+ temperature: 1.0,
+ topP: 0.3,
+ presencePenalty: 0,
+ frequencyPenalty: 1
  },
  modelParameters: {
  modelName: 'RWKV-4-World-CHNtuned-7B-v1-20230709-ctx4096.pth',
@@ -530,10 +530,10 @@ export const defaultModelConfigs: ModelConfig[] = [
  apiParameters: {
  apiPort: 8000,
  maxResponseToken: 4100,
- temperature: 1.2,
- topP: 0.5,
- presencePenalty: 0.4,
- frequencyPenalty: 0.4
+ temperature: 1.0,
+ topP: 0.3,
+ presencePenalty: 0,
+ frequencyPenalty: 1
  },
  modelParameters: {
  modelName: 'RWKV-4-Raven-1B5-v12-Eng98%-Other2%-20230520-ctx4096.pth',
@@ -549,10 +549,10 @@ export const defaultModelConfigs: ModelConfig[] = [
  apiParameters: {
  apiPort: 8000,
  maxResponseToken: 4100,
- temperature: 1.2,
- topP: 0.5,
- presencePenalty: 0.4,
- frequencyPenalty: 0.4
+ temperature: 1.0,
+ topP: 0.3,
+ presencePenalty: 0,
+ frequencyPenalty: 1
  },
  modelParameters: {
  modelName: 'RWKV-4-World-3B-v1-20230619-ctx4096.pth',
@@ -568,10 +568,10 @@ export const defaultModelConfigs: ModelConfig[] = [
  apiParameters: {
  apiPort: 8000,
  maxResponseToken: 4100,
- temperature: 1.2,
- topP: 0.5,
- presencePenalty: 0.4,
- frequencyPenalty: 0.4
+ temperature: 1.0,
+ topP: 0.3,
+ presencePenalty: 0,
+ frequencyPenalty: 1
  },
  modelParameters: {
  modelName: 'RWKV-4-Raven-3B-v12-Eng98%-Other2%-20230520-ctx4096.pth',
@@ -587,10 +587,10 @@ export const defaultModelConfigs: ModelConfig[] = [
  apiParameters: {
  apiPort: 8000,
  maxResponseToken: 4100,
- temperature: 1.2,
- topP: 0.5,
- presencePenalty: 0.4,
- frequencyPenalty: 0.4
+ temperature: 1.0,
+ topP: 0.3,
+ presencePenalty: 0,
+ frequencyPenalty: 1
  },
  modelParameters: {
  modelName: 'RWKV-4-World-CHNtuned-3B-v1-20230625-ctx4096.pth',
@@ -606,10 +606,10 @@ export const defaultModelConfigs: ModelConfig[] = [
  apiParameters: {
  apiPort: 8000,
  maxResponseToken: 4100,
- temperature: 1.2,
- topP: 0.5,
- presencePenalty: 0.4,
- frequencyPenalty: 0.4
+ temperature: 1.0,
+ topP: 0.3,
+ presencePenalty: 0,
+ frequencyPenalty: 1
  },
  modelParameters: {
  modelName: 'RWKV-4-World-7B-v1-20230626-ctx4096.pth',
@@ -625,10 +625,10 @@ export const defaultModelConfigs: ModelConfig[] = [
  apiParameters: {
  apiPort: 8000,
  maxResponseToken: 4100,
- temperature: 1.2,
- topP: 0.5,
- presencePenalty: 0.4,
- frequencyPenalty: 0.4
+ temperature: 1.0,
+ topP: 0.3,
+ presencePenalty: 0,
+ frequencyPenalty: 1
  },
  modelParameters: {
  modelName: 'RWKV-4-Raven-7B-v12-Eng98%-Other2%-20230521-ctx8192.pth',
@@ -644,10 +644,10 @@ export const defaultModelConfigs: ModelConfig[] = [
  apiParameters: {
  apiPort: 8000,
  maxResponseToken: 4100,
- temperature: 1.2,
- topP: 0.5,
- presencePenalty: 0.4,
- frequencyPenalty: 0.4
+ temperature: 1.0,
+ topP: 0.3,
+ presencePenalty: 0,
+ frequencyPenalty: 1
  },
  modelParameters: {
  modelName: 'RWKV-4-World-CHNtuned-7B-v1-20230709-ctx4096.pth',
@@ -663,10 +663,10 @@ export const defaultModelConfigs: ModelConfig[] = [
  apiParameters: {
  apiPort: 8000,
  maxResponseToken: 4100,
- temperature: 1.2,
- topP: 0.5,
- presencePenalty: 0.4,
- frequencyPenalty: 0.4
+ temperature: 1.0,
+ topP: 0.3,
+ presencePenalty: 0,
+ frequencyPenalty: 1
  },
  modelParameters: {
  modelName: 'RWKV-4-World-1.5B-v1-fixed-20230612-ctx4096.pth',
@@ -681,10 +681,10 @@ export const defaultModelConfigs: ModelConfig[] = [
  apiParameters: {
  apiPort: 8000,
  maxResponseToken: 4100,
- temperature: 1.2,
- topP: 0.5,
- presencePenalty: 0.4,
- frequencyPenalty: 0.4
+ temperature: 1.0,
+ topP: 0.3,
+ presencePenalty: 0,
+ frequencyPenalty: 1
  },
  modelParameters: {
  modelName: 'RWKV-4-World-3B-v1-20230619-ctx4096.pth',
@@ -700,10 +700,10 @@ export const defaultModelConfigs: ModelConfig[] = [
  apiParameters: {
  apiPort: 8000,
  maxResponseToken: 4100,
- temperature: 1.2,
- topP: 0.5,
- presencePenalty: 0.4,
- frequencyPenalty: 0.4
+ temperature: 1.0,
+ topP: 0.3,
+ presencePenalty: 0,
+ frequencyPenalty: 1
  },
  modelParameters: {
  modelName: 'RWKV-4-Raven-3B-v12-Eng98%-Other2%-20230520-ctx4096.pth',
@@ -719,10 +719,10 @@ export const defaultModelConfigs: ModelConfig[] = [
  apiParameters: {
  apiPort: 8000,
  maxResponseToken: 4100,
- temperature: 1.2,
- topP: 0.5,
- presencePenalty: 0.4,
- frequencyPenalty: 0.4
+ temperature: 1.0,
+ topP: 0.3,
+ presencePenalty: 0,
+ frequencyPenalty: 1
  },
  modelParameters: {
  modelName: 'RWKV-4-World-CHNtuned-3B-v1-20230625-ctx4096.pth',
@@ -738,10 +738,10 @@ export const defaultModelConfigs: ModelConfig[] = [
  apiParameters: {
  apiPort: 8000,
  maxResponseToken: 4100,
- temperature: 1.2,
- topP: 0.5,
- presencePenalty: 0.4,
- frequencyPenalty: 0.4
+ temperature: 1.0,
+ topP: 0.3,
+ presencePenalty: 0,
+ frequencyPenalty: 1
  },
  modelParameters: {
  modelName: 'RWKV-4-World-7B-v1-20230626-ctx4096.pth',
@@ -757,10 +757,10 @@ export const defaultModelConfigs: ModelConfig[] = [
  apiParameters: {
  apiPort: 8000,
  maxResponseToken: 4100,
- temperature: 1.2,
- topP: 0.5,
- presencePenalty: 0.4,
- frequencyPenalty: 0.4
+ temperature: 1.0,
+ topP: 0.3,
+ presencePenalty: 0,
+ frequencyPenalty: 1
  },
  modelParameters: {
  modelName: 'RWKV-4-Raven-7B-v12-Eng98%-Other2%-20230521-ctx8192.pth',
@@ -776,10 +776,10 @@ export const defaultModelConfigs: ModelConfig[] = [
  apiParameters: {
  apiPort: 8000,
  maxResponseToken: 4100,
- temperature: 1.2,
- topP: 0.5,
- presencePenalty: 0.4,
- frequencyPenalty: 0.4
+ temperature: 1.0,
+ topP: 0.3,
+ presencePenalty: 0,
+ frequencyPenalty: 1
  },
  modelParameters: {
  modelName: 'RWKV-4-World-CHNtuned-7B-v1-20230709-ctx4096.pth',
@@ -795,10 +795,10 @@ export const defaultModelConfigs: ModelConfig[] = [
  apiParameters: {
  apiPort: 8000,
  maxResponseToken: 4100,
- temperature: 1.2,
- topP: 0.5,
- presencePenalty: 0.4,
- frequencyPenalty: 0.4
+ temperature: 1.0,
+ topP: 0.3,
+ presencePenalty: 0,
+ frequencyPenalty: 1
  },
  modelParameters: {
  modelName: 'RWKV-4-World-7B-v1-20230626-ctx4096.pth',
@@ -814,10 +814,10 @@ export const defaultModelConfigs: ModelConfig[] = [
  apiParameters: {
  apiPort: 8000,
  maxResponseToken: 4100,
- temperature: 1.2,
- topP: 0.5,
- presencePenalty: 0.4,
- frequencyPenalty: 0.4
+ temperature: 1.0,
+ topP: 0.3,
+ presencePenalty: 0,
+ frequencyPenalty: 1
  },
  modelParameters: {
  modelName: 'RWKV-4-Raven-7B-v12-Eng98%-Other2%-20230521-ctx8192.pth',
@@ -833,10 +833,10 @@ export const defaultModelConfigs: ModelConfig[] = [
  apiParameters: {
  apiPort: 8000,
  maxResponseToken: 4100,
- temperature: 1.2,
- topP: 0.5,
- presencePenalty: 0.4,
- frequencyPenalty: 0.4
+ temperature: 1.0,
+ topP: 0.3,
+ presencePenalty: 0,
+ frequencyPenalty: 1
  },
  modelParameters: {
  modelName: 'RWKV-4-World-CHNtuned-7B-v1-20230709-ctx4096.pth',
@@ -852,10 +852,10 @@ export const defaultModelConfigs: ModelConfig[] = [
  apiParameters: {
  apiPort: 8000,
  maxResponseToken: 4100,
- temperature: 1.2,
- topP: 0.5,
- presencePenalty: 0.4,
- frequencyPenalty: 0.4
+ temperature: 1.0,
+ topP: 0.3,
+ presencePenalty: 0,
+ frequencyPenalty: 1
  },
  modelParameters: {
  modelName: 'RWKV-4-Raven-14B-v12-Eng98%-Other2%-20230523-ctx8192.pth',
@@ -871,10 +871,10 @@ export const defaultModelConfigs: ModelConfig[] = [
  apiParameters: {
  apiPort: 8000,
  maxResponseToken: 4100,
- temperature: 1.2,
- topP: 0.5,
- presencePenalty: 0.4,
- frequencyPenalty: 0.4
+ temperature: 1.0,
+ topP: 0.3,
+ presencePenalty: 0,
+ frequencyPenalty: 1
  },
  modelParameters: {
  modelName: 'RWKV-4-World-7B-v1-20230626-ctx4096.pth',
@@ -890,10 +890,10 @@ export const defaultModelConfigs: ModelConfig[] = [
  apiParameters: {
  apiPort: 8000,
  maxResponseToken: 4100,
- temperature: 1.2,
- topP: 0.5,
- presencePenalty: 0.4,
- frequencyPenalty: 0.4
+ temperature: 1.0,
+ topP: 0.3,
+ presencePenalty: 0,
+ frequencyPenalty: 1
  },
  modelParameters: {
  modelName: 'RWKV-4-Raven-7B-v12-Eng98%-Other2%-20230521-ctx8192.pth',
@@ -909,10 +909,10 @@ export const defaultModelConfigs: ModelConfig[] = [
  apiParameters: {
  apiPort: 8000,
  maxResponseToken: 4100,
- temperature: 1.2,
- topP: 0.5,
- presencePenalty: 0.4,
- frequencyPenalty: 0.4
+ temperature: 1.0,
+ topP: 0.3,
+ presencePenalty: 0,
+ frequencyPenalty: 1
  },
  modelParameters: {
  modelName: 'RWKV-4-World-CHNtuned-7B-v1-20230709-ctx4096.pth',
@@ -928,10 +928,10 @@ export const defaultModelConfigs: ModelConfig[] = [
  apiParameters: {
  apiPort: 8000,
  maxResponseToken: 4100,
- temperature: 1.2,
- topP: 0.5,
- presencePenalty: 0.4,
- frequencyPenalty: 0.4
+ temperature: 1.0,
+ topP: 0.3,
+ presencePenalty: 0,
+ frequencyPenalty: 1
  },
  modelParameters: {
  modelName: 'RWKV-4-Raven-14B-v12-Eng98%-Other2%-20230523-ctx8192.pth',
@@ -947,10 +947,10 @@ export const defaultModelConfigs: ModelConfig[] = [
  apiParameters: {
  apiPort: 8000,
  maxResponseToken: 4100,
- temperature: 1.2,
- topP: 0.5,
- presencePenalty: 0.4,
- frequencyPenalty: 0.4
+ temperature: 1.0,
+ topP: 0.3,
+ presencePenalty: 0,
+ frequencyPenalty: 1
  },
  modelParameters: {
  modelName: 'RWKV-4-Raven-14B-v12-Eng98%-Other2%-20230523-ctx8192.pth',
@@ -966,10 +966,10 @@ export const defaultModelConfigs: ModelConfig[] = [
  apiParameters: {
  apiPort: 8000,
  maxResponseToken: 4100,
- temperature: 1.2,
- topP: 0.5,
- presencePenalty: 0.4,
- frequencyPenalty: 0.4
+ temperature: 1.0,
+ topP: 0.3,
+ presencePenalty: 0,
+ frequencyPenalty: 1
  },
  modelParameters: {
  modelName: 'RWKV-4-Raven-14B-v12-Eng98%-Other2%-20230523-ctx8192.pth',
@@ -985,10 +985,10 @@ export const defaultModelConfigs: ModelConfig[] = [
  apiParameters: {
  apiPort: 8000,
  maxResponseToken: 4100,
- temperature: 1.2,
- topP: 0.5,
- presencePenalty: 0.4,
- frequencyPenalty: 0.4
+ temperature: 1.0,
+ topP: 0.3,
+ presencePenalty: 0,
+ frequencyPenalty: 1
  },
  modelParameters: {
  modelName: 'RWKV-4-World-1.5B-v1-fixed-20230612-ctx4096.pth',
@@ -1003,10 +1003,10 @@ export const defaultModelConfigs: ModelConfig[] = [
  apiParameters: {
  apiPort: 8000,
  maxResponseToken: 4100,
- temperature: 1.2,
- topP: 0.5,
- presencePenalty: 0.4,
- frequencyPenalty: 0.4
+ temperature: 1.0,
+ topP: 0.3,
+ presencePenalty: 0,
+ frequencyPenalty: 1
  },
  modelParameters: {
  modelName: 'RWKV-4-Raven-1B5-v12-Eng98%-Other2%-20230520-ctx4096.pth',
@@ -1021,10 +1021,10 @@ export const defaultModelConfigs: ModelConfig[] = [
  apiParameters: {
  apiPort: 8000,
  maxResponseToken: 4100,
- temperature: 1.2,
- topP: 0.5,
- presencePenalty: 0.4,
- frequencyPenalty: 0.4
+ temperature: 1.0,
+ topP: 0.3,
+ presencePenalty: 0,
+ frequencyPenalty: 1
  },
  modelParameters: {
  modelName: 'RWKV-4-World-3B-v1-20230619-ctx4096.pth',
@@ -1039,10 +1039,10 @@ export const defaultModelConfigs: ModelConfig[] = [
  apiParameters: {
  apiPort: 8000,
  maxResponseToken: 4100,
- temperature: 1.2,
- topP: 0.5,
- presencePenalty: 0.4,
- frequencyPenalty: 0.4
+ temperature: 1.0,
+ topP: 0.3,
+ presencePenalty: 0,
+ frequencyPenalty: 1
  },
  modelParameters: {
  modelName: 'RWKV-4-Raven-3B-v12-Eng98%-Other2%-20230520-ctx4096.pth',
@@ -1057,10 +1057,10 @@ export const defaultModelConfigs: ModelConfig[] = [
  apiParameters: {
  apiPort: 8000,
  maxResponseToken: 4100,
- temperature: 1.2,
- topP: 0.5,
- presencePenalty: 0.4,
- frequencyPenalty: 0.4
+ temperature: 1.0,
+ topP: 0.3,
+ presencePenalty: 0,
+ frequencyPenalty: 1
  },
  modelParameters: {
  modelName: 'RWKV-4-World-CHNtuned-3B-v1-20230625-ctx4096.pth',
@@ -1075,10 +1075,10 @@ export const defaultModelConfigs: ModelConfig[] = [
  apiParameters: {
  apiPort: 8000,
  maxResponseToken: 4100,
- temperature: 1.2,
- topP: 0.5,
- presencePenalty: 0.4,
- frequencyPenalty: 0.4
+ temperature: 1.0,
+ topP: 0.3,
+ presencePenalty: 0,
+ frequencyPenalty: 1
  },
  modelParameters: {
  modelName: 'RWKV-4-World-7B-v1-20230626-ctx4096.pth',
@@ -1093,10 +1093,10 @@ export const defaultModelConfigs: ModelConfig[] = [
  apiParameters: {
  apiPort: 8000,
  maxResponseToken: 4100,
- temperature: 1.2,
- topP: 0.5,
- presencePenalty: 0.4,
- frequencyPenalty: 0.4
+ temperature: 1.0,
+ topP: 0.3,
+ presencePenalty: 0,
+ frequencyPenalty: 1
  },
  modelParameters: {
  modelName: 'RWKV-4-Raven-7B-v12-Eng98%-Other2%-20230521-ctx8192.pth',
@@ -1111,10 +1111,10 @@ export const defaultModelConfigs: ModelConfig[] = [
  apiParameters: {
  apiPort: 8000,
  maxResponseToken: 4100,
- temperature: 1.2,
- topP: 0.5,
- presencePenalty: 0.4,
- frequencyPenalty: 0.4
+ temperature: 1.0,
+ topP: 0.3,
+ presencePenalty: 0,
+ frequencyPenalty: 1
  },
  modelParameters: {
  modelName: 'RWKV-4-World-CHNtuned-7B-v1-20230709-ctx4096.pth',
diff --git a/frontend/src/stores/commonStore.ts b/frontend/src/stores/commonStore.ts
index bcbcaa2..1d5c682 100644
--- a/frontend/src/stores/commonStore.ts
+++ b/frontend/src/stores/commonStore.ts
@@ -78,10 +78,10 @@ class CommonStore {
  loraFinetuneParams: LoraFinetuneParameters = {
  baseModel: '',
  ctxLen: 1024,
- epochSteps: 1000,
+ epochSteps: 200,
  epochCount: 20,
  epochBegin: 0,
- epochSave: 5,
+ epochSave: 2,
  microBsz: 1,
  accumGradBatches: 8,
  preFfn: false,