Artiprocher
f88b99cb4f
diffusion skills framework
2026-03-17 13:34:25 +08:00
Hong Zhang
681df93a85
Mova ( #1337 )
* support mova inference
* mova media_io
* add unified audio_video api & fix bug of mono audio input for ltx
* support mova train
* mova docs
* fix bug
2026-03-13 13:06:07 +08:00
Hong Zhang
c927062546
Merge pull request #1343 from mi804/ltx2.3_multiref
Ltx2.3 multiref
2026-03-10 17:31:05 +08:00
Artiprocher
13eff18e7d
remove unnecessary params in cache
2026-03-09 14:09:30 +08:00
Zhongjie Duan
ff4be1c7c7
Merge pull request #1293 from Mr-Neutr0n/fix/trajectory-loss-div-by-zero
fix: prevent division by zero in TrajectoryImitationLoss at final denoising step
2026-03-02 10:21:39 +08:00
mi804
8b9a094c1b
ltx iclora train
2026-02-27 18:43:53 +08:00
mi804
586ac9d8a6
support ltx-2 training
2026-02-25 17:19:57 +08:00
Mr-Neutr0n
b68663426f
fix: preserve sign of denominator in clamp to avoid inverting gradient direction
The previous .clamp(min=1e-6) on (sigma_ - sigma) flips the sign when
the denominator is negative (which is the typical case since sigmas
decrease monotonically). This would invert the target and cause
training divergence.
Use torch.sign(denom) * torch.clamp(denom.abs(), min=1e-6) instead,
which prevents division by zero while preserving the correct sign.
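The fix described above can be sketched as a small standalone function (the name `safe_denominator` is hypothetical, not from the repo; only the `torch.sign(denom) * torch.clamp(denom.abs(), min=1e-6)` expression comes from the commit):

```python
import torch

def safe_denominator(denom: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    # Clamp the magnitude away from zero while keeping the original sign,
    # so a negative denominator (the typical case, since sigmas decrease
    # monotonically) is not flipped to +eps. Note torch.sign(0.0) == 0, so
    # an exactly-zero element stays zero with this expression alone.
    return torch.sign(denom) * torch.clamp(denom.abs(), min=eps)

denom = torch.tensor([-0.5, -1e-8, 1e-8, 0.5])
naive = denom.clamp(min=1e-6)   # flips -0.5 and -1e-8 to +1e-6
safe = safe_denominator(denom)  # keeps the sign, only lifts the magnitude
```

With the naive clamp, a negative denominator becomes `+1e-6`, which negates the velocity target; the sign-preserving form maps `-1e-8` to `-1e-6` and leaves `-0.5` untouched.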
2026-02-11 21:04:55 +05:30
Mr-Neutr0n
0e6976a0ae
fix: prevent division by zero in trajectory imitation loss at last step
2026-02-11 19:51:25 +05:30
Zhongjie Duan
1b47e1dc22
Merge pull request #1272 from modelscope/zero3-fix
Support DeepSpeed ZeRO 3
2026-02-06 16:33:12 +08:00
Zhongjie Duan
25c3a3d3e2
Merge branch 'main' into ltx-2
2026-02-03 13:06:44 +08:00
mi804
49bc84f78e
add comment for tuple noise_pred
2026-02-03 10:43:25 +08:00
mi804
25a9e75030
final fix for ltx-2
2026-02-03 10:39:35 +08:00
Zhongjie Duan
28cd355aba
Merge pull request #1232 from huarzone/main
fix wan i2v/ti2v train bug
2026-02-02 15:26:01 +08:00
Zhongjie Duan
005389fca7
Merge pull request #1244 from modelscope/qwen-image-edit-lightning
Qwen image edit lightning
2026-02-02 15:20:11 +08:00
mi804
9f07d65ebb
support ltx2 distilled pipeline
2026-01-30 17:40:30 +08:00
lzws
5f1d5adfce
qwen-image-edit-2511-lightning
2026-01-30 17:26:26 +08:00
mi804
4f23caa55f
support ltx2 two stage pipeline & vram
2026-01-30 16:55:40 +08:00
Artiprocher
ee9a3b4405
support loading models from state dict
2026-01-30 13:47:36 +08:00
mi804
b1a2782ad7
support ltx2 one-stage pipeline
2026-01-29 16:30:15 +08:00
Kared
8d0df403ca
fix wan i2v train bug
2026-01-27 03:55:36 +00:00
feng0w0
4e9db263b0
[feature]: Add adaptation of all models to ZeRO-3
2026-01-27 11:24:43 +08:00
Artiprocher
b61131c693
improve flux2 training performance
2026-01-21 15:44:15 +08:00
Artiprocher
d13f533f42
support auto-detecting lora target modules
2026-01-21 11:05:05 +08:00
Zhongjie Duan
dd8d902624
Merge branch 'main' into cuda_replace
2026-01-20 10:12:31 +08:00
Artiprocher
b6ccb362b9
support flux.2 klein
2026-01-19 16:56:14 +08:00
feng0w0
209a350c0f
[NPU]: Replace 'cuda' in the project with abstract interfaces
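The device-abstraction idea behind this commit can be sketched as a single helper that replaces hard-coded `"cuda"` strings (the `get_device` name is hypothetical, not the repo's actual interface; the `torch_npu` import follows the Ascend PyTorch plugin's convention):

```python
import torch

def get_device(index: int = 0) -> torch.device:
    # Pick the best available accelerator instead of hard-coding "cuda".
    if torch.cuda.is_available():
        return torch.device(f"cuda:{index}")
    try:
        import torch_npu  # Ascend NPU plugin; registers the torch.npu namespace
        if torch.npu.is_available():
            return torch.device(f"npu:{index}")
    except ImportError:
        pass
    return torch.device("cpu")

device = get_device()
x = torch.zeros(2, 2, device=device)
```

Call sites then use `get_device()` (or a cached module-level constant) everywhere a literal `"cuda"` used to appear, so adding a new backend touches one function instead of the whole codebase.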
2026-01-15 20:33:01 +08:00
Zhongjie Duan
00f2d1aa5d
Merge pull request #1169 from Feng0w0/sample_add
Docs: Supplement NPU training script samples and documentation instructions
2026-01-12 10:08:38 +08:00
Artiprocher
dd479e5bff
support z-image-omni-base-i2L
2026-01-07 20:36:53 +08:00
feng0w0
507e7e5d36
Docs: Supplement NPU training script samples and documentation instructions
2025-12-30 19:58:47 +08:00
Artiprocher
1547c3f786
bugfix
2025-12-16 16:09:29 +08:00
Artiprocher
2883bc1b76
support ascend npu
2025-12-15 15:48:42 +08:00
root
72af7122b3
DiffSynth-Studio 2.0 major update
2025-12-04 16:33:07 +08:00