Mirror of https://github.com/modelscope/DiffSynth-Studio.git (synced 2026-03-24 01:48:13 +00:00)
add diffutoon
This commit modifies `README.md` (hunk `@@ -54,25 +54,37 @@`).
### Example 4: Toon Shading (Diffutoon)
This example is implemented based on [Diffutoon](https://arxiv.org/abs/2401.16224). The approach is well suited to rendering high-resolution videos with rapid motion. You can easily modify the parameters in the config dict.
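Such a config is ultimately just a nested Python dict. As a minimal sketch of how one might override individual parameters before running the pipeline, consider the helper below; note that the key names (`models`, `pipeline`, `num_inference_steps`, ...) are purely illustrative assumptions, not the actual DiffSynth-Studio schema:

```python
# Illustrative-only config schema; the real DiffSynth-Studio keys differ.
def override(config, dotted_key, value):
    """Set a nested key like 'pipeline.num_inference_steps' in place."""
    keys = dotted_key.split(".")
    node = config
    for k in keys[:-1]:
        node = node[k]
    node[keys[-1]] = value
    return config

config = {
    "models": {"base": "stable-diffusion-v1-5", "controlnet": ["lineart", "tile"]},
    "pipeline": {"num_inference_steps": 10, "height": 1536, "width": 1536},
}

# Tweak one parameter without rewriting the whole dict.
override(config, "pipeline.num_inference_steps", 20)
```

Managing everything through one dict like this keeps a run reproducible: the dict can be dumped to JSON alongside the output video.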
https://github.com/Artiprocher/DiffSynth-Studio/assets/35051019/b54c05c5-d747-4709-be5e-b39af82404dd
### Example 5: Toon Shading with Editing Signals (Diffutoon)
Coming soon.
https://github.com/Artiprocher/DiffSynth-Studio/assets/35051019/20528af5-5100-474a-8cdc-440b9efdd86c
### Example 6: Toon Shading (in native Python code)
This example is provided for developers. If you prefer not to manage parameters through a config, see `examples/sd_toon_shading.py` to learn how to run the pipeline in native Python code.
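To illustrate the trade-off between the two styles, here is a toy contrast between config-driven and native-Python parameter handling. `stylize_frame` and both runners are hypothetical stand-ins, not DiffSynth-Studio functions:

```python
# Hypothetical stand-ins, not the actual DiffSynth-Studio API.
def stylize_frame(frame, strength):
    # Stand-in for a real diffusion rendering step.
    return {"frame": frame, "strength": strength}

def run_from_config(frames, config):
    # Config-driven: every parameter comes from one dict.
    return [stylize_frame(f, config["strength"]) for f in frames]

def run_native(frames, strength=0.7):
    # Native Python: parameters are ordinary arguments, so you can
    # branch, loop, or vary them per frame as you like.
    return [stylize_frame(f, strength * (1 + 0.1 * i)) for i, f in enumerate(frames)]

out = run_from_config([0, 1], {"strength": 0.5})
```

The config style is easier to log and reproduce; the native style gives developers full control, such as varying a parameter across frames.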
https://github.com/Artiprocher/DiffSynth-Studio/assets/35051019/607c199b-6140-410b-a111-3e4ffb01142c
### Example 7: Text to Video
Given a prompt, DiffSynth Studio can generate a video using a Stable Diffusion model and an AnimateDiff model. We can break the limit on the number of frames! See `examples/sd_text_to_video.py`.
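A common way to go beyond a fixed per-batch frame limit is to denoise overlapping sliding windows of frames and blend the overlaps. The sketch below only computes the window indices and is a generic illustration of that technique, not the project's implementation:

```python
# Schematic sketch of overlapping sliding windows over a long frame sequence.
def sliding_windows(num_frames, window, stride):
    """Return (start, end) index pairs covering num_frames with overlap."""
    starts = list(range(0, max(num_frames - window, 0) + 1, stride))
    # Add a final window so the tail of the sequence is always covered.
    if starts and starts[-1] + window < num_frames:
        starts.append(num_frames - window)
    return [(s, min(s + window, num_frames)) for s in starts] or [(0, num_frames)]

# 100 frames processed as 16-frame windows with 8 frames of overlap.
windows = sliding_windows(num_frames=100, window=16, stride=8)
```

Frames in the overlapping regions are produced by more than one window; averaging those predictions keeps the transitions between windows smooth.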
https://github.com/Artiprocher/DiffSynth-Studio/assets/35051019/8f556355-4079-4445-9b48-e9da77699437
### Example 8: Video Stylization
We provide an example for video stylization. In this pipeline, the rendered video is completely different from the original video, so a powerful deflickering algorithm is required. We use FastBlend to implement the deflickering module. Please see `examples/sd_video_rerender.py` for more details.
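FastBlend itself works by matching patches across frames; as a deliberately simplified stand-in, the toy below smooths each frame's value toward its temporal neighbors, which is the basic intuition behind deflickering (here applied to per-frame scalar "brightness" values instead of real images):

```python
# Toy temporal smoothing, NOT FastBlend: blend each frame toward the
# average of its two neighbors to suppress frame-to-frame flicker.
def deflicker(values, weight=0.5):
    """Return a copy of values with interior elements blended toward neighbors."""
    out = list(values)
    for i in range(1, len(values) - 1):
        neighbor_avg = (values[i - 1] + values[i + 1]) / 2
        out[i] = (1 - weight) * values[i] + weight * neighbor_avg
    return out

# An alternating (flickering) sequence becomes much flatter after one pass.
smoothed = deflicker([1.0, 5.0, 1.0, 5.0, 1.0], weight=0.5)
```

Naive temporal blending like this also blurs legitimate motion, which is why patch-matching approaches such as FastBlend are needed for heavily restyled videos.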
https://github.com/Artiprocher/DiffSynth-Studio/assets/35051019/59fb2f7b-8de0-4481-b79f-0c3a7361a1ea
### Example 9: Prompt Processing
If you are not a native English speaker, we provide a translation service. Our prompter can translate prompts from other languages into English and refine them using the "BeautifulPrompt" models. Please see `examples/sd_prompt_refining.py` for more details.
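Conceptually, the prompter is two composable stages: translate to English, then refine. The sketch below uses stub implementations (a word-for-word dictionary translator and a tag-appending refiner); the real refinement uses the "BeautifulPrompt" models, which this toy does not reproduce:

```python
# Stub two-stage prompt processor: translate, then refine.
def translate_to_english(prompt, dictionary):
    # Stub translator: word-for-word lookup, falling back to the original token.
    return " ".join(dictionary.get(w, w) for w in prompt.split())

def refine(prompt):
    # Stub refiner: a real model would rewrite and expand the prompt.
    return prompt + ", masterpiece, best quality"

def process_prompt(prompt, dictionary):
    return refine(translate_to_english(prompt, dictionary))

result = process_prompt("一只 猫", {"一只": "a", "猫": "cat"})
```

Keeping the two stages as separate functions means either one can be swapped out, e.g. replacing the dictionary stub with a real translation model.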