mirror of
https://github.com/modelscope/DiffSynth-Studio.git
synced 2026-03-18 22:08:13 +00:00
rearrange examples
examples/Diffutoon/README.md (new file, 21 lines)
@@ -0,0 +1,21 @@

# Diffutoon

[Diffutoon](https://arxiv.org/abs/2401.16224) is a toon shading approach that is well suited to rendering high-resolution videos with rapid motion.

## Example: Toon Shading (Diffutoon)

Directly render realistic videos in a flat, anime-like style. In this example, you can easily modify the parameters in the config dict. See [`diffutoon_toon_shading.py`](./diffutoon_toon_shading.py). We also provide [an example on Colab](https://colab.research.google.com/github/Artiprocher/DiffSynth-Studio/blob/main/examples/Diffutoon.ipynb).

https://github.com/Artiprocher/DiffSynth-Studio/assets/35051019/b54c05c5-d747-4709-be5e-b39af82404dd

## Example: Toon Shading with Editing Signals (Diffutoon)

This example supports video editing signals. See `examples/diffutoon_toon_shading_with_editing_signals.py`. The editing feature is also supported in the [Colab example](https://colab.research.google.com/github/Artiprocher/DiffSynth-Studio/blob/main/examples/Diffutoon/Diffutoon.ipynb).

https://github.com/Artiprocher/DiffSynth-Studio/assets/35051019/20528af5-5100-474a-8cdc-440b9efdd86c

## Example: Toon Shading (in native Python code)

This example is provided for developers. If you don't want to use the config dict to manage parameters, see `examples/sd_toon_shading.py` to learn how to do the same in native Python code.

https://github.com/Artiprocher/DiffSynth-Studio/assets/35051019/607c199b-6140-410b-a111-3e4ffb01142c
examples/Ip-Adapter/README.md (new file, 3 lines)
@@ -0,0 +1,3 @@

# IP-Adapter

The IP-Adapter features in DiffSynth Studio are not complete yet. Please stay tuned.

examples/diffsynth/README.md (new file, 7 lines)
@@ -0,0 +1,7 @@

# DiffSynth

DiffSynth is the initial version of our video synthesis framework, in which video deflickering algorithms are applied in the latent space of diffusion models. Please refer to the [original repo](https://github.com/alibaba/EasyNLP/tree/master/diffusion/DiffSynth) for more details.

We provide an example of video stylization. In this pipeline, the rendered video is completely different from the original video, so we need a powerful deflickering algorithm. We use FastBlend to implement the deflickering module. Please see [`sd_video_rerender.py`](./sd_video_rerender.py).

https://github.com/Artiprocher/DiffSynth-Studio/assets/35051019/59fb2f7b-8de0-4481-b79f-0c3a7361a1ea
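FastBlend itself is a patch-based blending algorithm, but the general idea of latent-space deflickering can be illustrated with a much simpler temporal smoothing pass. This is a hypothetical sketch, not DiffSynth's actual implementation:

```python
import numpy as np

def smooth_latents(latents: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """Reduce frame-to-frame flicker in a stack of per-frame latents
    (shape: frames x C x H x W) with an exponential moving average.
    A real deflickering module such as FastBlend aligns patches across
    frames instead of averaging them naively, but the goal is the same:
    suppress temporal variation before the latents are decoded."""
    smoothed = latents.copy()
    for t in range(1, len(latents)):
        smoothed[t] = alpha * smoothed[t - 1] + (1 - alpha) * latents[t]
    return smoothed
```

The `alpha` parameter trades temporal stability against responsiveness to genuine motion; a patch-matching approach avoids that trade-off, which is why it is needed when the rendered video differs so much from the original.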
@@ -26,6 +26,8 @@ models/HunyuanDiT/

The original resolution of Hunyuan DiT is 1024x1024. If you want to use larger resolutions, please use highres-fix.

Hunyuan DiT is also supported in our UI.

```python
from diffsynth import ModelManager, HunyuanDiTImagePipeline
import torch
```
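The highres-fix mentioned above is the common two-stage strategy: generate at the model's native resolution, upscale, then run an image-to-image pass at the target resolution. Here is a hypothetical sketch of that control flow; the callables are placeholders, not DiffSynth's API:

```python
def highres_fix(generate, upscale, img2img, prompt,
                base=(1024, 1024), target=(2048, 2048), strength=0.6):
    """Two-stage highres-fix. `generate`, `upscale`, and `img2img`
    stand in for the pipeline's text-to-image, resizing, and
    image-to-image stages."""
    # Stage 1: text-to-image at the resolution the model was trained on.
    image = generate(prompt, width=base[0], height=base[1])
    # Stage 2: upscale, then denoise again at the target resolution so the
    # model adds real detail instead of producing repetition artifacts.
    image = upscale(image, width=target[0], height=target[1])
    return img2img(prompt, image, strength=strength)
```

The `strength` value controls how much the second pass re-draws the upscaled image: too low and no detail is added, too high and the composition drifts from the first-stage result.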
examples/image_synthesis/README.md (new file, 43 lines)
@@ -0,0 +1,43 @@

# Image Synthesis

Image synthesis is the base feature of DiffSynth Studio.

### Example: Stable Diffusion

We can generate images at very high resolutions. Please see `examples/sd_text_to_image.py` for more details.

|512×512|1024×1024|2048×2048|4096×4096|
|-|-|-|-|
|||||
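Resolutions far beyond the training size are typically reached by processing the image in overlapping tiles and blending the seams; whether `sd_text_to_image.py` does exactly this is best checked in the script itself. The tile scheduling can be sketched as follows (a hypothetical helper, not DiffSynth's API):

```python
def tile_grid(size: int, tile: int = 512, overlap: int = 64):
    """Return start offsets of overlapping tiles covering `size` pixels.
    The overlap lets neighbouring tiles be blended to hide seams."""
    if size <= tile:
        return [0]
    stride = tile - overlap
    starts = list(range(0, size - tile, stride))
    starts.append(size - tile)  # last tile flush with the image edge
    return starts
```

A 2-D grid is just the cross product of `tile_grid(width)` and `tile_grid(height)`; each tile is processed independently and the overlapping borders are feathered together.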

### Example: Stable Diffusion XL

Generate images with Stable Diffusion XL. Please see `examples/sdxl_text_to_image.py` for more details.

|1024×1024|2048×2048|
|-|-|
|||

### Example: Stable Diffusion XL Turbo

Generate images with Stable Diffusion XL Turbo. See `examples/sdxl_turbo.py` for more details, but we highly recommend using it in the WebUI.

|"black car"|"red car"|
|-|-|
|||

### Example: Prompt Processing

If you are not a native English speaker, we provide a translation service. Our prompter can translate prompts written in other languages into English and refine them with the "BeautifulPrompt" models. Please see `examples/sd_prompt_refining.py` for more details.

Prompt: "一个漂亮的女孩" ("a beautiful girl"). The [translation model](https://huggingface.co/Helsinki-NLP/opus-mt-zh-en) will translate it into English.

|seed=0|seed=1|seed=2|seed=3|
|-|-|-|-|
|||||

Prompt: "一个漂亮的女孩" ("a beautiful girl"). The [translation model](https://huggingface.co/Helsinki-NLP/opus-mt-zh-en) will translate it into English. Then the [refining model](https://huggingface.co/alibaba-pai/pai-bloom-1b1-text2prompt-sd) will refine the translated prompt for better visual quality.

|seed=0|seed=1|seed=2|seed=3|
|-|-|-|-|
|||||
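The translate-then-refine flow described above can be outlined as follows. Here `translate` and `refine` are placeholders for the two models linked above, and the ASCII check is a crude stand-in for real language detection:

```python
def process_prompt(prompt: str, translate, refine) -> str:
    """Translate a non-English prompt into English, then refine the
    English text for better visual quality. `translate` and `refine`
    are callables wrapping the translation and BeautifulPrompt models."""
    if not prompt.isascii():  # crude non-English detection (assumption)
        prompt = translate(prompt)
    return refine(prompt)
```

Refining after translation matters: the BeautifulPrompt-style models expect English input, so running them on the raw prompt would skip the enrichment entirely.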
examples/video_synthesis/README.md (new file, 9 lines)
@@ -0,0 +1,9 @@

# Text to Video

In DiffSynth Studio, we can use AnimateDiff and SVD to generate videos. However, these models usually generate low-quality content, so we do not recommend using them until a more powerful video model emerges.

### Example: Text to Video

Generate a video using a Stable Diffusion model and an AnimateDiff model. We can break the limit on the number of frames! See [`sd_text_to_video.py`](./sd_text_to_video.py).

https://github.com/Artiprocher/DiffSynth-Studio/assets/35051019/8f556355-4079-4445-9b48-e9da77699437
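AnimateDiff motion modules are trained on short clips (typically 16 frames). A common way to exceed that limit is to denoise overlapping windows of frames and blend the overlaps; we sketch only the window scheduling here, as a hypothetical illustration rather than the script's exact implementation:

```python
def sliding_windows(num_frames: int, window: int = 16, stride: int = 8):
    """Yield (start, end) index pairs covering `num_frames` with
    overlapping windows, so the motion module never sees more frames
    than it was trained on. Overlapping regions are later blended."""
    if num_frames <= window:
        yield 0, num_frames
        return
    start = 0
    while True:
        end = min(start + window, num_frames)
        yield start, end
        if end == num_frames:
            break
        start += stride
```

Each window is denoised separately at every diffusion step, and predictions in the overlapping frames are averaged, which keeps motion coherent across window boundaries.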