Merge pull request #648 from modelscope/flux-refactor

refine readme
This commit is contained in:
Zhongjie Duan
2025-06-30 11:44:47 +08:00
committed by GitHub
2 changed files with 6 additions and 4 deletions

View File

@@ -178,6 +178,7 @@ The script supports the following parameters:
* Dataset
* `--dataset_base_path`: Root path to the dataset.
* `--dataset_metadata_path`: Path to the metadata file of the dataset.
* `--max_pixels`: Maximum pixel area, default is 1024*1024. When dynamic resolution is enabled, any image with a resolution larger than this value will be scaled down.
* `--height`: Height of images or videos. Leave `height` and `width` empty to enable dynamic resolution.
* `--width`: Width of images or videos. Leave `height` and `width` empty to enable dynamic resolution.
* `--data_file_keys`: Keys in metadata for data files. Comma-separated.
@@ -198,9 +199,9 @@ The script supports the following parameters:
* Extra Inputs
* `--extra_inputs`: Additional model inputs. Comma-separated.
* VRAM Management
* `use_gradient_checkpointing`: Whether to use gradient checkpointing.
* `--use_gradient_checkpointing`: Whether to use gradient checkpointing.
* `--use_gradient_checkpointing_offload`: Whether to offload gradient checkpointing to CPU memory.
* `gradient_accumulation_steps`: Number of steps for gradient accumulation.
* `--gradient_accumulation_steps`: Number of steps for gradient accumulation.
* Miscellaneous
* `--align_to_opensource_format`: Whether to align the FLUX DiT LoRA format with the open-source version. Only applicable to LoRA training for FLUX.1-dev and FLUX.1-Kontext-dev.
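The `--max_pixels` rule above can be sketched as an aspect-ratio-preserving downscale. This is a minimal illustration of the documented behavior, not the training script's actual implementation:

```python
import math

def fit_to_max_pixels(width: int, height: int, max_pixels: int = 1024 * 1024):
    """Return (width, height) downscaled so the pixel area does not exceed
    max_pixels, preserving the aspect ratio. Images already within the
    limit are returned unchanged."""
    area = width * height
    if area <= max_pixels:
        return width, height
    scale = math.sqrt(max_pixels / area)
    return int(width * scale), int(height * scale)
```

With the default limit, a 2048x2048 image is reduced to 1024x1024, while a 1024x1024 image passes through untouched.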

View File

@@ -180,6 +180,7 @@ FLUX series models are trained through the unified [`./model_training/train.py`](./model_tra
* Dataset
* `--dataset_base_path`: Root path to the dataset.
* `--dataset_metadata_path`: Path to the metadata file of the dataset.
* `--max_pixels`: Maximum pixel area, default is 1024*1024. When dynamic resolution is enabled, any image with a resolution larger than this value will be scaled down.
* `--height`: Height of images or videos. Leave `height` and `width` empty to enable dynamic resolution.
* `--width`: Width of images or videos. Leave `height` and `width` empty to enable dynamic resolution.
* `--data_file_keys`: Keys in metadata for data files. Comma-separated.
@@ -200,9 +201,9 @@ FLUX series models are trained through the unified [`./model_training/train.py`](./model_tra
* Extra Inputs
* `--extra_inputs`: Additional model inputs. Comma-separated.
* VRAM Management
* `use_gradient_checkpointing`: Whether to use gradient checkpointing.
* `--use_gradient_checkpointing`: Whether to use gradient checkpointing.
* `--use_gradient_checkpointing_offload`: Whether to offload gradient checkpointing to CPU memory.
* `gradient_accumulation_steps`: Number of steps for gradient accumulation.
* `--gradient_accumulation_steps`: Number of steps for gradient accumulation.
* Miscellaneous
* `--align_to_opensource_format`: Whether to align the FLUX DiT LoRA format with the open-source version. Only applicable to LoRA training for FLUX.1-dev and FLUX.1-Kontext-dev.
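As a toy illustration of what `--gradient_accumulation_steps` does conceptually (this is not the training script's code): gradients from several micro-batches are averaged before a single optimizer update, emulating a larger effective batch size at the same VRAM cost.

```python
def accumulated_updates(micro_batch_grads, accumulation_steps):
    """Average each run of `accumulation_steps` micro-batch gradients into
    one update, mimicking gradient accumulation. Scalar gradients stand in
    for real tensors in this sketch."""
    updates, acc = [], 0.0
    for i, g in enumerate(micro_batch_grads, start=1):
        acc += g / accumulation_steps  # scale each micro-batch gradient
        if i % accumulation_steps == 0:
            updates.append(acc)  # one optimizer step per full accumulation
            acc = 0.0
    return updates
```

Four micro-batch gradients with 2 accumulation steps yield two updates, each the mean of two consecutive gradients.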