update defaultPresets
parent 932281db0a
commit 78238c24cf
@@ -333,7 +333,7 @@
 "Instruction 1": "指令1",
 "Instruction 2": "指令2",
 "Instruction 3": "指令3",
-"Instruction: You are an expert assistant for summarizing and extracting information from given content\nGenerate a valid JSON in the following format:\n{\n \"summary\": \"Summary of content\",\n \"keywords\": [\"content keyword 1\", \"content keyword 2\"]\n}\n\nInput: The open-source community has introduced Eagle 7B, a new RNN model, built on the RWKV-v5 architecture. This new model has been trained on 1.1 trillion tokens and supports over 100 languages. The RWKV architecture, short for ‘Rotary Weighted Key-Value,’ is a type of architecture used in the field of artificial intelligence, particularly in natural language processing (NLP) and is a variation of the Recurrent Neural Network (RNN) architecture.\nEagle 7B promises lower inference cost and stands out as a leading 7B model in terms of environmental efficiency and language versatility.\nThe model, with its 7.52 billion parameters, shows excellent performance in multi-lingual benchmarks, setting a new standard in its category. It competes closely with larger models in English language evaluations and is distinctive as an “Attention-Free Transformer,” though it requires additional tuning for specific uses. This model is accessible under the Apache 2.0 license and can be downloaded from HuggingFace for both personal and commercial purposes.\nIn terms of multilingual performance, Eagle 7B has claimed to have achieved notable results in benchmarks covering 23 languages. Its English performance has also seen significant advancements, outperforming its predecessor, RWKV v4, and competing with top-tier models.\nWorking towards a more scalable architecture and use of data efficiently, Eagle 7B is a more inclusive AI technology, supporting a broader range of languages. This model challenges the prevailing dominance of transformer models by demonstrating the capabilities of RNNs like RWKV in achieving superior performance when trained on comparable data volumes.\nIn the RWKV model, the rotary mechanism transforms the input data in a way that helps the model better understand the position or order of elements in a sequence. The weighted key value also makes the model efficient by retrieving the stored information from previous elements in a sequence. \nHowever, questions remain about the scalability of RWKV compared to transformers, although there is optimism regarding its potential. The team plans to include additional training, an in-depth paper on Eagle 7B, and the development of a 2T model.\n\nResponse:": "Instruction: 你是一个专业的内容分析总结助手\n根据提供的内容生成以下格式的有效JSON信息:\n{\n \"summary\": \"内容的简短摘要\",\n \"keywords\": [\"内容关键词 1\", \"内容关键词 2\"]\n}\n\nInput: 开源社区推出了基于RWKV-v5架构的Eagle 7B新的RNN模型。这个新模型以1.1万亿个token进行了训练,并支持100多种语言。RWKV架构是人工智能领域中特别是自然语言处理(NLP)中使用的一种架构,它是循环神经网络(RNN)架构的一种变种。\nEagle 7B承诺低推理成本,并以其环境效益和语言灵活性在领先的7B模型中脱颖而出。\n该模型拥有75.2亿个参数,在多语言基准测试中表现出色,树立了新的行业标准。它在英语语言评估中与更大的模型竞争激烈,并作为“无注意力Transformer”独具特色,尽管它需要针对特定用途进行额外调整。该模型可在Apache 2.0许可下访问,并可从HuggingFace下载,用于个人和商业目的。\n关于多语言性能,Eagle 7B声称在涵盖23种语言的基准测试中取得了显著成绩。它的英语性能也取得了重大进步,超越了它的前身RWKV v4,并与顶级模型竞争。\n为了实现更可扩展的架构和有效利用数据,Eagle 7B是一种更包容的人工智能技术,支持更广泛的语言范围。通过展示RWKV等RNNs在训练相当数据量时实现卓越性能的能力,该模型挑战了Transformer模型的主导地位。\n在RWKV模型中,旋转机制以一种有助于模型更好地理解序列中元素的位置或顺序的方式转换输入数据。加权关键值还通过从序列中先前元素中检索存储的信息,使模型更高效。\n然而,与Transformer相比,人们对RWKV的可扩展性仍然存在疑问,尽管对其潜力持乐观态度。团队计划包括额外的训练、对Eagle 7B进行深入论文研究以及开发一个2T模型。\n\nResponse:",
+"Instruction: You are an expert assistant for summarizing and extracting information from given content\nGenerate a valid JSON in the following format:\n{\n \"summary\": \"Summary of content\",\n \"keywords\": [\"content keyword 1\", \"content keyword 2\"]\n}\n\nInput: The open-source community has introduced Eagle 7B, a new RNN model, built on the RWKV-v5 architecture. This new model has been trained on 1.1 trillion tokens and supports over 100 languages. The RWKV architecture, short for ‘Rotary Weighted Key-Value,’ is a type of architecture used in the field of artificial intelligence, particularly in natural language processing (NLP) and is a variation of the Recurrent Neural Network (RNN) architecture.\nEagle 7B promises lower inference cost and stands out as a leading 7B model in terms of environmental efficiency and language versatility.\nThe model, with its 7.52 billion parameters, shows excellent performance in multi-lingual benchmarks, setting a new standard in its category. It competes closely with larger models in English language evaluations and is distinctive as an “Attention-Free Transformer,” though it requires additional tuning for specific uses. This model is accessible under the Apache 2.0 license and can be downloaded from HuggingFace for both personal and commercial purposes.\nIn terms of multilingual performance, Eagle 7B has claimed to have achieved notable results in benchmarks covering 23 languages. Its English performance has also seen significant advancements, outperforming its predecessor, RWKV v4, and competing with top-tier models.\nWorking towards a more scalable architecture and use of data efficiently, Eagle 7B is a more inclusive AI technology, supporting a broader range of languages. This model challenges the prevailing dominance of transformer models by demonstrating the capabilities of RNNs like RWKV in achieving superior performance when trained on comparable data volumes.\nIn the RWKV model, the rotary mechanism transforms the input data in a way that helps the model better understand the position or order of elements in a sequence. The weighted key value also makes the model efficient by retrieving the stored information from previous elements in a sequence. \nHowever, questions remain about the scalability of RWKV compared to transformers, although there is optimism regarding its potential. The team plans to include additional training, an in-depth paper on Eagle 7B, and the development of a 2T model.\n\nResponse: {": "Instruction: 你是一个专业的内容分析总结助手\n根据提供的内容生成以下格式的有效JSON信息:\n{\n \"summary\": \"内容的简短摘要\",\n \"keywords\": [\"内容关键词 1\", \"内容关键词 2\"]\n}\n\nInput: 开源社区推出了基于RWKV-v5架构的Eagle 7B新的RNN模型。这个新模型以1.1万亿个token进行了训练,并支持100多种语言。RWKV架构是人工智能领域中特别是自然语言处理(NLP)中使用的一种架构,它是循环神经网络(RNN)架构的一种变种。\nEagle 7B承诺低推理成本,并以其环境效益和语言灵活性在领先的7B模型中脱颖而出。\n该模型拥有75.2亿个参数,在多语言基准测试中表现出色,树立了新的行业标准。它在英语语言评估中与更大的模型竞争激烈,并作为“无注意力Transformer”独具特色,尽管它需要针对特定用途进行额外调整。该模型可在Apache 2.0许可下访问,并可从HuggingFace下载,用于个人和商业目的。\n关于多语言性能,Eagle 7B声称在涵盖23种语言的基准测试中取得了显著成绩。它的英语性能也取得了重大进步,超越了它的前身RWKV v4,并与顶级模型竞争。\n为了实现更可扩展的架构和有效利用数据,Eagle 7B是一种更包容的人工智能技术,支持更广泛的语言范围。通过展示RWKV等RNNs在训练相当数据量时实现卓越性能的能力,该模型挑战了Transformer模型的主导地位。\n在RWKV模型中,旋转机制以一种有助于模型更好地理解序列中元素的位置或顺序的方式转换输入数据。加权关键值还通过从序列中先前元素中检索存储的信息,使模型更高效。\n然而,与Transformer相比,人们对RWKV的可扩展性仍然存在疑问,尽管对其潜力持乐观态度。团队计划包括额外的训练、对Eagle 7B进行深入论文研究以及开发一个2T模型。\n\nResponse: {",
 "Penalty Decay": "惩罚衰减",
 "If you don't know what it is, keep it default.": "如果你不知道这是什么,保持默认"
 }
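The locale file keys each translation by its full English source string, so the key above has to track the updated preset text verbatim; any mismatch makes the lookup fall back to the untranslated English. That is why the long prompt appears twice in the hunk: the old key ending in `Response:` is removed and a new key ending in `Response: {` is added. A minimal sketch of that fallback behavior, assuming an i18next-style keyed lookup (the `translations` map and `t` helper below are illustrative, not this repository's actual i18n wiring):

```typescript
// Illustrative key-based lookup with fallback; the map stands in for the
// zh-hans JSON shown in the hunk above.
const translations: Record<string, string> = {
  'Penalty Decay': '惩罚衰减',
  "If you don't know what it is, keep it default.": '如果你不知道这是什么,保持默认',
};

// Return the translation on an exact key match, else the English key itself.
function t(key: string): string {
  return translations[key] ?? key;
}

console.log(t('Penalty Decay')); // 惩罚衰减
console.log(t('Penalty decay')); // no exact match, falls back to English
```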
@@ -145,7 +145,7 @@ export const defaultPresets: CompletionPreset[] = [{
 'In the RWKV model, the rotary mechanism transforms the input data in a way that helps the model better understand the position or order of elements in a sequence. The weighted key value also makes the model efficient by retrieving the stored information from previous elements in a sequence. \n' +
 'However, questions remain about the scalability of RWKV compared to transformers, although there is optimism regarding its potential. The team plans to include additional training, an in-depth paper on Eagle 7B, and the development of a 2T model.\n' +
 '\n' +
-'Response:',
+'Response: {',
 params: {
 maxResponseToken: 500,
 temperature: 1,
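The substantive change in both hunks is the same: the preset prompt now ends with `Response: {` rather than `Response:`, pre-filling the opening brace so the completion model continues straight into the JSON body instead of prefacing it with prose. A minimal sketch of how a client might consume such a primed prompt; the `complete` helper, its endpoint URL, and the response shape are assumptions for illustration, not code from this commit:

```typescript
// Sketch: parse a completion produced from a prompt that ends in "{".
// Endpoint URL, request body, and response shape are assumed; only the
// re-attached "{" reflects the change made in this commit.
interface ContentSummary {
  summary: string;
  keywords: string[];
}

async function complete(prompt: string): Promise<string> {
  const res = await fetch('http://127.0.0.1:8000/completions', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ prompt, max_tokens: 500, temperature: 1 }),
  });
  const data = await res.json();
  return data.choices[0].text; // the model's continuation after the primed "{"
}

async function summarize(primedPrompt: string): Promise<ContentSummary> {
  const continuation = await complete(primedPrompt);
  // The "{" is already part of the prompt, so prepend it before parsing
  // to recover a complete JSON object.
  return JSON.parse('{' + continuation) as ContentSummary;
}
```

Priming the brace this way costs one character in the prompt but makes it much harder for the model to preface its answer with prose before the JSON begins.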