#python #auto_regressive_diffusion_model #diffusion_models #video_generation #wan_video
LightX2V is a fast, lightweight framework for generating videos from text or images. It supports models such as HunyuanVideo-1.5 and Wan2.1/2.2, achieves up to 20x speedup via 4-step distillation, runs on as little as 8 GB of VRAM, and provides offloading, quantization, and multi-GPU parallelism, outperforming comparable frameworks on H100 and RTX 4090 GPUs. The result is high-quality video generation on everyday hardware, cutting time and cost for content creation, prototyping, and professional workflows, with straightforward Docker/ComfyUI setup and free online trials.
https://github.com/ModelTC/LightX2V
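A rough sketch of what a minimal text-to-video call could look like with the features mentioned above (4-step distilled inference, quantization, offloading). The `LightX2VPipeline` class, its arguments, the import path, and the model identifier are hypothetical placeholders for illustration, not LightX2V's confirmed API; check the repo docs for the actual entry points.

```python
# Hypothetical sketch only: class name, arguments, and model identifier are
# illustrative placeholders, not LightX2V's confirmed API.
from lightx2v import LightX2VPipeline  # assumed import path

pipe = LightX2VPipeline(
    model="Wan2.1-T2V-1.3B",     # assumed model identifier
    quantization="int8",         # quantize weights to reduce VRAM use
    offload=True,                # offload layers to CPU for low-VRAM GPUs
    num_inference_steps=4,       # 4-step distilled inference for speed
)

video = pipe(prompt="A red fox running through fresh snow at sunrise")
video.save("fox.mp4")            # assumed helper for writing the clip
```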