DiffSynth-Studio



Introduction

Welcome to the magical world of Diffusion models! DiffSynth-Studio is an open-source Diffusion model engine developed and maintained by the ModelScope community team. Through framework building, we aim to incubate technical innovation, gather the strength of the open-source community, and explore the frontiers of generative model technology!

DiffSynth currently comprises two open-source projects:

  • DiffSynth-Studio: focused on aggressive technical exploration, aimed at academia, providing support for cutting-edge model capabilities.
  • DiffSynth-Engine: focused on stable model deployment, aimed at industry, offering higher computational performance and more stable functionality.

Installation

Install from source (recommended):

git clone https://github.com/modelscope/DiffSynth-Studio.git
cd DiffSynth-Studio
pip install -e .
Other installation methods

Install from PyPI (releases may lag behind; to use the latest features, install from source):

pip install diffsynth

If you run into problems during installation, they may be caused by upstream dependency packages; please refer to the documentation of those packages:

Core Framework

DiffSynth-Studio provides redesigned inference and training pipelines for mainstream Diffusion models, including FLUX, Wan, and others.

FLUX Series

Detailed page: ./examples/flux/


Quick Start
import torch
from diffsynth.pipelines.flux_image_new import FluxImagePipeline, ModelConfig

pipe = FluxImagePipeline.from_pretrained(
    torch_dtype=torch.bfloat16,
    device="cuda",
    model_configs=[
        ModelConfig(model_id="black-forest-labs/FLUX.1-dev", origin_file_pattern="flux1-dev.safetensors"),
        ModelConfig(model_id="black-forest-labs/FLUX.1-dev", origin_file_pattern="text_encoder/model.safetensors"),
        ModelConfig(model_id="black-forest-labs/FLUX.1-dev", origin_file_pattern="text_encoder_2/"),
        ModelConfig(model_id="black-forest-labs/FLUX.1-dev", origin_file_pattern="ae.safetensors"),
    ],
)

image = pipe(prompt="a cat", seed=0)
image.save("image.jpg")
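
Each ModelConfig above downloads only the repo files whose names match origin_file_pattern. As a rough illustration of the glob-style selection (the actual logic inside DiffSynth-Studio may differ; the file list below is hypothetical, and the directory pattern "text_encoder_2/" from the quick start is sketched as a glob "text_encoder_2/*" by assumption):

```python
from fnmatch import fnmatch

# Hypothetical file list for a model repo. A pattern selects the subset
# of files to download; a directory-style pattern picks up everything
# under that folder (sketched here with a trailing "*").
repo_files = [
    "flux1-dev.safetensors",
    "text_encoder/model.safetensors",
    "text_encoder_2/model-00001-of-00002.safetensors",
    "ae.safetensors",
]
matched = [f for f in repo_files if fnmatch(f, "text_encoder_2/*")]
print(matched)  # ['text_encoder_2/model-00001-of-00002.safetensors']
```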
Model Overview

| Model ID | Extra Parameters | Inference | Low-VRAM Inference | Full Training | Validation After Full Training | LoRA Training | Validation After LoRA Training |
|---|---|---|---|---|---|---|---|
| FLUX.1-dev | | code | code | code | code | code | code |
| FLUX.1-Kontext-dev | kontext_images | code | code | code | code | code | code |
| FLUX.1-dev-Controlnet-Inpainting-Beta | controlnet_inputs | code | code | code | code | code | code |
| FLUX.1-dev-Controlnet-Union-alpha | controlnet_inputs | code | code | code | code | code | code |
| FLUX.1-dev-Controlnet-Upscaler | controlnet_inputs | code | code | code | code | code | code |
| FLUX.1-dev-IP-Adapter | ipadapter_images, ipadapter_scale | code | code | code | code | code | code |
| FLUX.1-dev-InfiniteYou | infinityou_id_image, infinityou_guidance, controlnet_inputs | code | code | code | code | code | code |
| FLUX.1-dev-EliGen | eligen_entity_prompts, eligen_entity_masks, eligen_enable_on_negative, eligen_enable_inpaint | code | code | - | - | | |
| FLUX.1-dev-LoRA-Encoder | lora_encoder_inputs, lora_encoder_scale | code | code | code | code | - | - |
| Step1X-Edit | step1x_reference_image | code | code | code | code | code | code |
| FLEX.2-preview | flex_inpaint_image, flex_inpaint_mask, flex_control_image, flex_control_strength, flex_control_stop | code | code | code | code | code | code |
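
The extra parameters in the table are passed as keyword arguments to the pipeline call (e.g. pipe(prompt=..., kontext_images=...)). A minimal sketch of validating such per-model kwargs before the call; the parameter names come from the table, while build_call_kwargs and the EXTRA_ARGS dict are hypothetical helpers, not part of the library:

```python
# Extra keyword arguments per model, taken from the table above
# (two entries shown for brevity).
EXTRA_ARGS = {
    "FLUX.1-Kontext-dev": {"kontext_images"},
    "FLUX.1-dev-IP-Adapter": {"ipadapter_images", "ipadapter_scale"},
}

def build_call_kwargs(model_name: str, prompt: str, **extras) -> dict:
    """Merge base arguments with model-specific extras, rejecting unknown ones."""
    allowed = EXTRA_ARGS.get(model_name, set())
    unknown = set(extras) - allowed
    if unknown:
        raise ValueError(f"{model_name} does not accept: {sorted(unknown)}")
    return {"prompt": prompt, **extras}

kwargs = build_call_kwargs("FLUX.1-Kontext-dev", "a cat", kontext_images=None)
print(sorted(kwargs))  # ['kontext_images', 'prompt']
```

The actual generation call would then be something like `image = pipe(**kwargs, seed=0)`.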

Wan Series

Detailed page: ./examples/wanvideo/

https://github.com/user-attachments/assets/1d66ae74-3b02-40a9-acc3-ea95fc039314

Quick Start
import torch
from diffsynth import save_video
from diffsynth.pipelines.wan_video_new import WanVideoPipeline, ModelConfig

pipe = WanVideoPipeline.from_pretrained(
    torch_dtype=torch.bfloat16,
    device="cuda",
    model_configs=[
        ModelConfig(model_id="Wan-AI/Wan2.1-T2V-1.3B", origin_file_pattern="diffusion_pytorch_model*.safetensors", offload_device="cpu"),
        ModelConfig(model_id="Wan-AI/Wan2.1-T2V-1.3B", origin_file_pattern="models_t5_umt5-xxl-enc-bf16.pth", offload_device="cpu"),
        ModelConfig(model_id="Wan-AI/Wan2.1-T2V-1.3B", origin_file_pattern="Wan2.1_VAE.pth", offload_device="cpu"),
    ],
)
pipe.enable_vram_management()

video = pipe(
    prompt="纪实摄影风格画面,一只活泼的小狗在绿茵茵的草地上迅速奔跑。小狗毛色棕黄,两只耳朵立起,神情专注而欢快。阳光洒在它身上,使得毛发看上去格外柔软而闪亮。背景是一片开阔的草地,偶尔点缀着几朵野花,远处隐约可见蓝天和几片白云。透视感鲜明,捕捉小狗奔跑时的动感和四周草地的生机。中景侧面移动视角。",
    negative_prompt="色调艳丽,过曝,静态,细节模糊不清,字幕,风格,作品,画作,画面,静止,整体发灰,最差质量,低质量,JPEG压缩残留,丑陋的,残缺的,多余的手指,画得不好的手部,画得不好的脸部,畸形的,毁容的,形态畸形的肢体,手指融合,静止不动的画面,杂乱的背景,三条腿,背景人很多,倒着走",
    seed=0, tiled=True,
)
save_video(video, "video1.mp4", fps=15, quality=5)
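
The fps argument of save_video sets playback speed, which together with the frame count determines clip length: for instance, 81 frames (a commonly cited default for Wan2.1, though the actual count depends on the pipeline's settings) saved at fps=15 yield roughly 5.4 seconds of video. As simple arithmetic:

```python
# Relate frame count and fps to the duration of the saved clip.
def clip_duration(num_frames: int, fps: int) -> float:
    """Seconds of video produced when saving num_frames at the given fps."""
    return num_frames / fps

print(clip_duration(81, 15))  # 5.4
```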
Model Overview

| Model ID | Extra Parameters | Inference | Full Training | Validation After Full Training | LoRA Training | Validation After LoRA Training |
|---|---|---|---|---|---|---|
| Wan-AI/Wan2.1-T2V-1.3B | | code | code | code | code | code |
| Wan-AI/Wan2.1-T2V-14B | | code | code | code | code | code |
| Wan-AI/Wan2.1-I2V-14B-480P | input_image | code | code | code | code | code |
| Wan-AI/Wan2.1-I2V-14B-720P | input_image | code | code | code | code | code |
| Wan-AI/Wan2.1-FLF2V-14B-720P | input_image, end_image | code | code | code | code | code |
| PAI/Wan2.1-Fun-1.3B-InP | input_image, end_image | code | code | code | code | code |
| PAI/Wan2.1-Fun-1.3B-Control | control_video | code | code | code | code | code |
| PAI/Wan2.1-Fun-14B-InP | input_image, end_image | code | code | code | code | code |
| PAI/Wan2.1-Fun-14B-Control | control_video | code | code | code | code | code |
| PAI/Wan2.1-Fun-V1.1-1.3B-Control | control_video, reference_image | code | code | code | code | code |
| PAI/Wan2.1-Fun-V1.1-14B-Control | control_video, reference_image | code | code | code | code | code |
| PAI/Wan2.1-Fun-V1.1-1.3B-InP | input_image, end_image | code | code | code | code | code |
| PAI/Wan2.1-Fun-V1.1-14B-InP | input_image, end_image | code | code | code | code | code |
| PAI/Wan2.1-Fun-V1.1-1.3B-Control-Camera | control_camera_video, input_image | code | code | code | code | code |
| PAI/Wan2.1-Fun-V1.1-14B-Control-Camera | control_camera_video, input_image | code | code | code | code | code |
| iic/VACE-Wan2.1-1.3B-Preview | vace_control_video, vace_reference_image | code | code | code | code | code |
| Wan-AI/Wan2.1-VACE-1.3B | vace_control_video, vace_reference_image | code | code | code | code | code |
| Wan-AI/Wan2.1-VACE-14B | vace_control_video, vace_reference_image | code | code | code | code | code |
| DiffSynth-Studio/Wan2.1-1.3b-speedcontrol-v1 | motion_bucket_id | code | code | code | code | code |

More Models

Image Generation Models

Detailed page: ./examples/image_synthesis/

Supported models: FLUX, Stable Diffusion 3, Kolors, Hunyuan-DiT, Stable Diffusion, Stable Diffusion XL (sample images omitted).
Video Generation Models

https://github.com/user-attachments/assets/48dd24bb-0cc6-40d2-88c3-10feed3267e9

https://github.com/user-attachments/assets/5954fdaa-a3cf-45a3-bd35-886e3cc4581b

https://github.com/user-attachments/assets/26b044c1-4a60-44a4-842f-627ff289d006

Image Quality Assessment Models

We have integrated a series of image quality assessment models, which can be used for evaluating image generation models, alignment training, and similar scenarios.

Detailed page: ./examples/image_quality_metric/

Innovations

DiffSynth-Studio is not merely an engineering-oriented model framework, but also an incubator for innovative research.

Nexus-Gen: image understanding, generation, and editing in a unified architecture

ArtAug: aesthetic enhancement for image generation models
Comparison: FLUX.1-dev vs. FLUX.1-dev + ArtAug LoRA (sample images omitted)
EliGen: precise regional control for image generation
Entity control regions and the corresponding generated image (sample images omitted)
ExVideo: extended training for video generation models

https://github.com/modelscope/DiffSynth-Studio/assets/35051019/d97f6aa9-8064-4b5b-9d49-ed6001bb9acc

Diffutoon: high-resolution anime-style video rendering

https://github.com/Artiprocher/DiffSynth-Studio/assets/35051019/b54c05c5-d747-4709-be5e-b39af82404dd

DiffSynth: the first-generation version of this project

https://github.com/Artiprocher/DiffSynth-Studio/assets/35051019/59fb2f7b-8de0-4481-b79f-0c3a7361a1ea

Update History

  • July 11, 2025 🔥🔥🔥 We propose Nexus-Gen, a unified model that synergizes the language reasoning capabilities of LLMs with the image synthesis power of diffusion models. This framework enables seamless image understanding, generation, and editing tasks.

  • June 15, 2025 ModelScope's official evaluation framework, EvalScope, now supports text-to-image generation evaluation. Try it with the Best Practices guide.

  • March 31, 2025 We support InfiniteYou, an identity preserving method for FLUX. Please refer to ./examples/InfiniteYou/ for more details.

  • March 25, 2025 Our new project, DiffSynth-Engine, is now open-sourced! It focuses on stable model deployment, is geared towards industry, and offers better engineering support, higher computational performance, and more stable functionality.

  • March 13, 2025 We support HunyuanVideo-I2V, the image-to-video generation version of HunyuanVideo open-sourced by Tencent. Please refer to ./examples/HunyuanVideo/ for more details.

  • February 25, 2025 We support Wan-Video, a collection of SOTA video synthesis models open-sourced by Alibaba. See ./examples/wanvideo/.

  • February 17, 2025 We support StepVideo, a state-of-the-art video synthesis model! See ./examples/stepvideo.

  • December 31, 2024 We propose EliGen, a novel framework for precise entity-level controlled text-to-image generation, complemented by an inpainting fusion pipeline to extend its capabilities to image inpainting tasks. EliGen seamlessly integrates with existing community models, such as IP-Adapter and In-Context LoRA, enhancing its versatility. For more details, see ./examples/EntityControl.

  • December 19, 2024 We implement advanced VRAM management for HunyuanVideo, making it possible to generate videos at a resolution of 129x720x1280 using 24GB of VRAM, or at 129x512x384 resolution with just 6GB of VRAM. Please refer to ./examples/HunyuanVideo/ for more details.

  • December 18, 2024 We propose ArtAug, an approach designed to improve text-to-image synthesis models through synthesis-understanding interactions. We have trained an ArtAug enhancement module for FLUX.1-dev in the format of LoRA. This model integrates the aesthetic understanding of Qwen2-VL-72B into FLUX.1-dev, leading to an improvement in the quality of generated images.

  • October 25, 2024 We provide extensive FLUX ControlNet support. This project supports many different ControlNet models that can be freely combined, even if their structures differ. Additionally, ControlNet models are compatible with high-resolution refinement and partition control techniques, enabling very powerful controllable image generation. See ./examples/ControlNet/.

  • October 8, 2024. We release the extended LoRA based on CogVideoX-5B and ExVideo. You can download this model from ModelScope or HuggingFace.

  • August 22, 2024. CogVideoX-5B is supported in this project. See here. We provide several interesting features for this text-to-video model, including

    • Text to video
    • Video editing
    • Self-upscaling
    • Video interpolation
  • August 22, 2024. We have implemented an interesting painter that supports all text-to-image models. Now you can create stunning images using the painter, with assistance from AI!

  • August 21, 2024. FLUX is supported in DiffSynth-Studio.

    • Enable CFG and highres-fix to improve visual quality. See here
    • LoRA, ControlNet, and additional models will be available soon.
  • June 21, 2024. We propose ExVideo, a post-tuning technique aimed at enhancing the capability of video generation models. We have extended Stable Video Diffusion to achieve the generation of long videos up to 128 frames.

  • June 13, 2024. DiffSynth Studio was transferred to ModelScope. The developers have transitioned from "I" to "we". Of course, I will still participate in development and maintenance.

  • Jan 29, 2024. We propose Diffutoon, a fantastic solution for toon shading.

    • Project Page
    • The source codes are released in this project.
    • The technical report (IJCAI 2024) is released on arXiv.
  • Dec 8, 2023. We decided to develop a new project aiming to unleash the potential of diffusion models, especially in video synthesis. Development of this project has started.

  • Nov 15, 2023. We propose FastBlend, a powerful video deflickering algorithm.

  • Oct 1, 2023. We released an early version of this project, namely FastSDXL, an attempt at building a diffusion engine.

    • The source codes are released on GitHub.
    • FastSDXL includes a trainable OLSS scheduler for efficiency improvement.
      • The original repo of OLSS is here.
      • The technical report (CIKM 2023) is released on arXiv.
      • A demo video is shown on Bilibili.
      • Since OLSS requires additional training, we don't implement it in this project.
  • Aug 29, 2023. We propose DiffSynth, a video synthesis framework.