mirror of
https://github.com/modelscope/DiffSynth-Studio.git
synced 2026-03-20 23:58:12 +00:00
Add files via upload
The previous upload went into the docs folder by mistake; this fixes it.
# ControlNet, LoRA, and IP-Adapter: Precision Control Technology
Based on the text-to-image model, various adapter-based models can be used to control the generation process.
Let's download the models we'll be using in the upcoming examples:
When generating images, we need to write prompt words to describe the content of the image.
## Translation
Most text-to-image models currently support only English prompts, which can be challenging for users who are not native English speakers. To address this, we can use an open-source translation model to translate prompts into English. In the following example, we take "一个女孩" (a girl) as the prompt and use the opus-mt-zh-en model (which can be downloaded from [HuggingFace](https://huggingface.co/Helsinki-NLP/opus-mt-zh-en) or [ModelScope](https://modelscope.cn/models/moxying/opus-mt-zh-en)) for translation.
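As a quick sanity check, the opus-mt-zh-en checkpoint can also be exercised directly through the Hugging Face `transformers` library before wiring it into an image pipeline. A minimal sketch (the model weights are fetched from the Hub on first use; this standalone usage is an illustration, not the DiffSynth integration shown below):

```python
from transformers import pipeline

# Load the Chinese-to-English translation model (downloaded on first use)
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-zh-en")

# Translate a Chinese prompt into English before passing it to a text-to-image model
result = translator("一个女孩")[0]["translation_text"]
print(result)
```

The translated string can then be used as the `prompt` argument of any English-only text-to-image pipeline.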
```python
import torch
from diffsynth import ModelManager, SDXLImagePipeline, Translator
```