DiffSynth-Studio
Introduction
Welcome to the magical world of Diffusion models! DiffSynth-Studio is an open-source Diffusion model engine developed and maintained by the ModelScope community team. Through framework building, we aim to incubate technical innovation, bring together the strength of the open-source community, and explore the frontiers of generative model technology!
DiffSynth currently consists of two open-source projects:
- DiffSynth-Studio: focused on aggressive technical exploration; aimed at academia; provides support for cutting-edge model capabilities.
- DiffSynth-Engine: focused on stable model deployment; aimed at industry; offers higher computational performance and more stable functionality.
Installation
Install from source (recommended):

```shell
git clone https://github.com/modelscope/DiffSynth-Studio.git
cd DiffSynth-Studio
pip install -e .
```

Other installation methods
Install from PyPI (releases may lag behind the repository; install from source if you need the latest features):

```shell
pip install diffsynth
```

If you encounter problems during installation, they may be caused by upstream dependency packages; please refer to the documentation of those packages.
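Most installation failures come from the environment rather than DiffSynth-Studio itself (an assumption based on typical pip issues); a quick first check is to confirm which interpreter you are installing into, and that it matches the `python` you run:

```python
import sys

# Print the interpreter path and version; `pip install -e .` must target this
# same interpreter (running `python -m pip install -e .` guarantees that).
print(sys.executable)
print(sys.version.split()[0])
```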
Core Framework
DiffSynth-Studio redesigns the inference and training pipelines for mainstream Diffusion models, including FLUX, Wan, and more.
FLUX Series
Detailed page: ./examples/flux/
Quick Start
```python
import torch
from diffsynth.pipelines.flux_image_new import FluxImagePipeline, ModelConfig

pipe = FluxImagePipeline.from_pretrained(
    torch_dtype=torch.bfloat16,
    device="cuda",
    model_configs=[
        ModelConfig(model_id="black-forest-labs/FLUX.1-dev", origin_file_pattern="flux1-dev.safetensors"),
        ModelConfig(model_id="black-forest-labs/FLUX.1-dev", origin_file_pattern="text_encoder/model.safetensors"),
        ModelConfig(model_id="black-forest-labs/FLUX.1-dev", origin_file_pattern="text_encoder_2/"),
        ModelConfig(model_id="black-forest-labs/FLUX.1-dev", origin_file_pattern="ae.safetensors"),
    ],
)
image = pipe(prompt="a cat", seed=0)
image.save("image.jpg")
```
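The `origin_file_pattern` values above select which files to load from a model repository. How the library matches them internally is not specified here, but the idea can be sketched with shell-style wildcards (the repository listing and helper below are purely illustrative, not part of the DiffSynth-Studio API):

```python
from fnmatch import fnmatch

# Hypothetical repository listing; file names are illustrative only.
repo_files = [
    "flux1-dev.safetensors",
    "ae.safetensors",
    "text_encoder/model.safetensors",
    "text_encoder_2/model-00001-of-00002.safetensors",
]

def select(pattern):
    # Directory-style patterns such as "text_encoder_2/" match by prefix;
    # everything else is treated as a shell-style wildcard.
    if pattern.endswith("/"):
        return [f for f in repo_files if f.startswith(pattern)]
    return [f for f in repo_files if fnmatch(f, pattern)]

print(select("ae.safetensors"))   # ['ae.safetensors']
print(select("text_encoder_2/"))  # ['text_encoder_2/model-00001-of-00002.safetensors']
```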
Model Overview
| Model ID | Extra Parameters | Inference | Low-VRAM Inference | Full Training | Validation After Full Training | LoRA Training | Validation After LoRA Training |
|---|---|---|---|---|---|---|---|
| FLUX.1-dev | | code | code | code | code | code | code |
| FLUX.1-Kontext-dev | kontext_images | code | code | code | code | code | code |
| FLUX.1-dev-Controlnet-Inpainting-Beta | controlnet_inputs | code | code | code | code | code | code |
| FLUX.1-dev-Controlnet-Union-alpha | controlnet_inputs | code | code | code | code | code | code |
| FLUX.1-dev-Controlnet-Upscaler | controlnet_inputs | code | code | code | code | code | code |
| FLUX.1-dev-IP-Adapter | ipadapter_images, ipadapter_scale | code | code | code | code | code | code |
| FLUX.1-dev-InfiniteYou | infinityou_id_image, infinityou_guidance, controlnet_inputs | code | code | code | code | code | code |
| FLUX.1-dev-EliGen | eligen_entity_prompts, eligen_entity_masks, eligen_enable_on_negative, eligen_enable_inpaint | code | code | - | - | | |
| FLUX.1-dev-LoRA-Encoder | lora_encoder_inputs, lora_encoder_scale | code | code | code | code | - | - |
| Step1X-Edit | step1x_reference_image | code | code | code | code | code | code |
| FLEX.2-preview | flex_inpaint_image, flex_inpaint_mask, flex_control_image, flex_control_strength, flex_control_stop | code | code | code | code | code | code |
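The extra-parameters column in the table above lists model-specific keyword arguments that are passed to the pipeline call alongside the usual prompt and seed (the call pattern follows the quick start; the helper function and file name below are illustrative, not part of the library):

```python
# Sketch of how a model's extra parameters combine with the base arguments.
# build_call_kwargs and "reference.png" are hypothetical illustrations.
def build_call_kwargs(prompt, seed=0, **extra):
    """Merge base arguments with model-specific extras, e.g. kontext_images."""
    return {"prompt": prompt, "seed": seed, **extra}

kwargs = build_call_kwargs("a cat", kontext_images=["reference.png"])
# With a loaded FLUX.1-Kontext-dev pipeline one would then call: pipe(**kwargs)
print(sorted(kwargs))  # ['kontext_images', 'prompt', 'seed']
```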
Wan Series
Detailed page: ./examples/wanvideo/
https://github.com/user-attachments/assets/1d66ae74-3b02-40a9-acc3-ea95fc039314
Quick Start
```python
import torch
from diffsynth import save_video
from diffsynth.pipelines.wan_video_new import WanVideoPipeline, ModelConfig

pipe = WanVideoPipeline.from_pretrained(
    torch_dtype=torch.bfloat16,
    device="cuda",
    model_configs=[
        ModelConfig(model_id="Wan-AI/Wan2.1-T2V-1.3B", origin_file_pattern="diffusion_pytorch_model*.safetensors", offload_device="cpu"),
        ModelConfig(model_id="Wan-AI/Wan2.1-T2V-1.3B", origin_file_pattern="models_t5_umt5-xxl-enc-bf16.pth", offload_device="cpu"),
        ModelConfig(model_id="Wan-AI/Wan2.1-T2V-1.3B", origin_file_pattern="Wan2.1_VAE.pth", offload_device="cpu"),
    ],
)
pipe.enable_vram_management()
video = pipe(
    prompt="纪实摄影风格画面,一只活泼的小狗在绿茵茵的草地上迅速奔跑。小狗毛色棕黄,两只耳朵立起,神情专注而欢快。阳光洒在它身上,使得毛发看上去格外柔软而闪亮。背景是一片开阔的草地,偶尔点缀着几朵野花,远处隐约可见蓝天和几片白云。透视感鲜明,捕捉小狗奔跑时的动感和四周草地的生机。中景侧面移动视角。",
    negative_prompt="色调艳丽,过曝,静态,细节模糊不清,字幕,风格,作品,画作,画面,静止,整体发灰,最差质量,低质量,JPEG压缩残留,丑陋的,残缺的,多余的手指,画得不好的手部,画得不好的脸部,畸形的,毁容的,形态畸形的肢体,手指融合,静止不动的画面,杂乱的背景,三条腿,背景人很多,倒着走",
    seed=0, tiled=True,
)
save_video(video, "video1.mp4", fps=15, quality=5)
```
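For reference, clip length is the number of generated frames divided by the fps passed to save_video. Assuming the commonly cited Wan2.1 default of 81 frames (an assumption; check your pipeline's frame-count argument), fps=15 yields a clip of about 5.4 seconds:

```python
num_frames = 81   # assumed Wan2.1 default; set explicitly in the pipeline if needed
fps = 15          # matches the save_video call above
print(num_frames / fps)  # 5.4
```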
Model Overview
More Models
Image Generation Models
Detailed page: ./examples/image_synthesis/
| FLUX | Stable Diffusion 3 |
|---|---|
| Kolors | Hunyuan-DiT |
| Stable Diffusion | Stable Diffusion XL |
Video Generation Models
- HunyuanVideo: ./examples/HunyuanVideo/
https://github.com/user-attachments/assets/48dd24bb-0cc6-40d2-88c3-10feed3267e9
- StepVideo: ./examples/stepvideo/
https://github.com/user-attachments/assets/5954fdaa-a3cf-45a3-bd35-886e3cc4581b
- CogVideoX: ./examples/CogVideoX/
https://github.com/user-attachments/assets/26b044c1-4a60-44a4-842f-627ff289d006
Image Quality Assessment Models
We have integrated a series of image quality assessment models, which can be used in scenarios such as evaluating image generation models and alignment training.
Innovations
DiffSynth-Studio is more than an engineering-oriented model framework; it also serves as an incubator for innovative research.
Nexus-Gen: Unified Image Understanding, Generation, and Editing in a Single Architecture
ArtAug: Aesthetic Enhancement for Image Generation Models
- Detailed page: ./examples/ArtAug/
- Paper: ArtAug: Enhancing Text-to-Image Generation through Synthesis-Understanding Interaction
- Model: ModelScope, HuggingFace
- Online demo: ModelScope AIGC Tab
| FLUX.1-dev | FLUX.1-dev + ArtAug LoRA |
|---|---|
EliGen: Precise Regional Control for Image Generation
- Detailed page: ./examples/EntityControl/
- Paper: EliGen: Entity-Level Controlled Image Generation with Regional Attention
- Model: ModelScope, HuggingFace
- Online demo: ModelScope EliGen Studio
- Dataset: EliGen Train Set
| Entity Control Regions | Generated Image |
|---|---|
ExVideo: Extended Training for Video Generation Models
- Project page: Project Page
- Paper: ExVideo: Extending Video Diffusion Models via Parameter-Efficient Post-Tuning
- Example code: ./examples/ExVideo/
- Model: ModelScope, HuggingFace
https://github.com/modelscope/DiffSynth-Studio/assets/35051019/d97f6aa9-8064-4b5b-9d49-ed6001bb9acc
Diffutoon: High-Resolution Anime-Style Video Rendering
- Project page: Project Page
- Paper: Diffutoon: High-Resolution Editable Toon Shading via Diffusion Models
- Example code: ./examples/Diffutoon/
https://github.com/Artiprocher/DiffSynth-Studio/assets/35051019/b54c05c5-d747-4709-be5e-b39af82404dd
DiffSynth: The Original Version of This Project
- Project page: Project Page
- Paper: DiffSynth: Latent In-Iteration Deflickering for Realistic Video Synthesis
- Example code: ./examples/diffsynth/
https://github.com/Artiprocher/DiffSynth-Studio/assets/35051019/59fb2f7b-8de0-4481-b79f-0c3a7361a1ea
Update History
- July 11, 2025 🔥🔥🔥 We propose Nexus-Gen, a unified model that synergizes the language reasoning capabilities of LLMs with the image synthesis power of diffusion models. This framework enables seamless image understanding, generation, and editing tasks.
- Paper: Nexus-Gen: Unified Image Understanding, Generation, and Editing via Prefilled Autoregression in Shared Embedding Space
- Github Repo: https://github.com/modelscope/Nexus-Gen
- Model: ModelScope, HuggingFace
- Training Dataset: ModelScope Dataset
- Online Demo: ModelScope Nexus-Gen Studio
- June 15, 2025 ModelScope's official evaluation framework, EvalScope, now supports text-to-image generation evaluation. Try it with the Best Practices guide.
- March 31, 2025 We support InfiniteYou, an identity-preserving method for FLUX. Please refer to ./examples/InfiniteYou/ for more details.
- March 25, 2025 Our new open-source project, DiffSynth-Engine, is now released! It focuses on stable model deployment, is geared towards industry, and offers better engineering support, higher computational performance, and more stable functionality.
- March 13, 2025 We support HunyuanVideo-I2V, the image-to-video generation version of HunyuanVideo open-sourced by Tencent. Please refer to ./examples/HunyuanVideo/ for more details.
- February 25, 2025 We support Wan-Video, a collection of SOTA video synthesis models open-sourced by Alibaba. See ./examples/wanvideo/.
- February 17, 2025 We support StepVideo, a state-of-the-art video synthesis model! See ./examples/stepvideo.
- December 31, 2024 We propose EliGen, a novel framework for precise entity-level controlled text-to-image generation, complemented by an inpainting fusion pipeline that extends its capabilities to image inpainting tasks. EliGen seamlessly integrates with existing community models such as IP-Adapter and In-Context LoRA, enhancing its versatility. For more details, see ./examples/EntityControl.
- Paper: EliGen: Entity-Level Controlled Image Generation with Regional Attention
- Model: ModelScope, HuggingFace
- Online Demo: ModelScope EliGen Studio
- Training Dataset: EliGen Train Set
- December 19, 2024 We implement advanced VRAM management for HunyuanVideo, making it possible to generate videos at a resolution of 129x720x1280 using 24GB of VRAM, or at 129x512x384 resolution with just 6GB of VRAM. Please refer to ./examples/HunyuanVideo/ for more details.
- December 18, 2024 We propose ArtAug, an approach designed to improve text-to-image synthesis models through synthesis-understanding interactions. We have trained an ArtAug enhancement module for FLUX.1-dev in the form of a LoRA. This model integrates the aesthetic understanding of Qwen2-VL-72B into FLUX.1-dev, improving the quality of generated images.
- Paper: https://arxiv.org/abs/2412.12888
- Examples: https://github.com/modelscope/DiffSynth-Studio/tree/main/examples/ArtAug
- Model: ModelScope, HuggingFace
- Demo: ModelScope, HuggingFace (Coming soon)
- October 25, 2024 We provide extensive FLUX ControlNet support. This project supports many different ControlNet models that can be freely combined, even when their structures differ. ControlNet models are also compatible with high-resolution refinement and partition control techniques, enabling very powerful controllable image generation. See ./examples/ControlNet/.
- October 8, 2024. We release the extended LoRA based on CogVideoX-5B and ExVideo. You can download this model from ModelScope or HuggingFace.
- August 22, 2024. CogVideoX-5B is supported in this project. See here. We provide several interesting features for this text-to-video model, including
- Text to video
- Video editing
- Self-upscaling
- Video interpolation
- August 22, 2024. We have implemented an interesting painter that supports all text-to-image models. Now you can create stunning images using the painter, with assistance from AI!
- Use it in our WebUI.
- August 21, 2024. FLUX is supported in DiffSynth-Studio.
- Enable CFG and highres-fix to improve visual quality. See here
- LoRA, ControlNet, and additional models will be available soon.
- June 21, 2024. We propose ExVideo, a post-tuning technique aimed at enhancing the capability of video generation models. We have extended Stable Video Diffusion to achieve the generation of long videos up to 128 frames.
- Project Page
- Source code is released in this repo. See examples/ExVideo.
- Models are released on HuggingFace and ModelScope.
- Technical report is released on arXiv.
- You can try ExVideo in this Demo!
- June 13, 2024. DiffSynth Studio is transferred to ModelScope. The developers have transitioned from "I" to "we". Of course, I will still participate in development and maintenance.
- Jan 29, 2024. We propose Diffutoon, a fantastic solution for toon shading.
- Project Page
- The source codes are released in this project.
- The technical report (IJCAI 2024) is released on arXiv.
- Dec 8, 2023. We decided to develop a new project aimed at unleashing the potential of diffusion models, especially in video synthesis. Development of this project has begun.
- Nov 15, 2023. We propose FastBlend, a powerful video deflickering algorithm.
- Oct 1, 2023. We release an early version of this project, namely FastSDXL: an attempt at building a diffusion engine.
- The source codes are released on GitHub.
- FastSDXL includes a trainable OLSS scheduler for efficiency improvement.
- Aug 29, 2023. We propose DiffSynth, a video synthesis framework.
- Project Page.
- The source codes are released in EasyNLP.
- The technical report (ECML PKDD 2024) is released on arXiv.

