Instructions for using Lightricks/LTX-Video with libraries, inference providers, notebooks, and local apps.
- Libraries
- Diffusers
How to use Lightricks/LTX-Video with Diffusers:
Install the required libraries:

```shell
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import load_image, export_to_video

# switch to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "Lightricks/LTX-Video", dtype=torch.bfloat16, device_map="cuda"
)

prompt = "A man with short gray hair plays a red electric guitar."
image = load_image(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/guitar-man.png"
)
output = pipe(image=image, prompt=prompt).frames[0]
export_to_video(output, "output.mp4")
```
When I run your example, I get an error. Please help me fix it.
```shell
python inference.py --ckpt_path 'model' --prompt "A slow cinematic push in on an ostrich standing in a 1980s kitchen" --height 736 --width 512 --num_frames 100 --seed 0
```
```
Running generation with arguments: Namespace(ckpt_path='/workspace/LTX-Video/model', input_video_path=None, input_image_path=None, output_path=None, seed=0, num_inference_steps=40, num_images_per_prompt=1, guidance_scale=3, image_cond_noise_scale=0.15, height=736, width=512, num_frames=100, frame_rate=25, bfloat16=False, prompt='A slow cinematic push in on an ostrich standing in a 1980s kitchen', negative_prompt='worst quality, inconsistent motion, blurry, jittery, distorted')
Input resolution or number of frames 736x512x100 is too big, it is suggested to use the resolution below 720x1280x257.
Padded dimensions: 736x512x105
Traceback (most recent call last):
  File "/workspace/LTX-Video/inference.py", line 458, in <module>
    main()
  File "/workspace/LTX-Video/inference.py", line 308, in main
    with safe_open(ckpt_path, framework="pt", device=device) as f:
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
OSError: No such device (os error 19)
```
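The traceback shows the failure happens in `safe_open`, which expects the path of a single `.safetensors` file, while `--ckpt_path 'model'` here points at a directory, and that mismatch can surface as a low-level `OSError`. A minimal sketch of a workaround is to resolve the directory to a contained `.safetensors` file before opening it. Note that `resolve_checkpoint` is a hypothetical helper for illustration, not part of the LTX-Video repo:

```python
from pathlib import Path


def resolve_checkpoint(ckpt_path: str) -> Path:
    # Hypothetical helper: safe_open() wants a file path, so when the
    # argument is a directory, pick a .safetensors file inside it.
    p = Path(ckpt_path)
    if p.is_file():
        return p
    candidates = sorted(p.glob("*.safetensors"))
    if not candidates:
        raise FileNotFoundError(f"no .safetensors file found under {p}")
    # If several checkpoints exist, this simply takes the first
    # alphabetically; pass the exact file path to disambiguate.
    return candidates[0]
```

With something like this in place, `safe_open(resolve_checkpoint(args.ckpt_path), framework="pt", device=device)` would receive a file path whether the user passed the model directory or the checkpoint file itself.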