Pipeline tag: Image-to-Video
Libraries: Diffusers, Safetensors
Diffusers pipeline class: LTX2Pipeline
Tags: text-to-video, video-to-video, image-text-to-video, audio-to-video, text-to-audio, video-to-audio, audio-to-audio, text-to-audio-video, image-to-audio-video, image-text-to-audio-video, ltx-2, ltx-video, ltxv, lightricks
Instructions to use Lightricks/LTX-2 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
  - Diffusers
How to use Lightricks/LTX-2 with Diffusers (a hedged text-to-video variant of this snippet is sketched just after this list):
```bash
pip install -U diffusers transformers accelerate
```
```python
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import load_image, export_to_video

# device_map="cuda" places the pipeline on the GPU; switch to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "Lightricks/LTX-2", dtype=torch.bfloat16, device_map="cuda"
)

prompt = "A man with short gray hair plays a red electric guitar."
image = load_image(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/guitar-man.png"
)

output = pipe(image=image, prompt=prompt).frames[0]
export_to_video(output, "output.mp4")
```

- Inference Providers
- Notebooks
  - Google Colab
  - Kaggle
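As referenced above, the same checkpoint is also tagged for plain text-to-video. The snippet below is a minimal sketch under the assumption that the auto-resolved pipeline (LTX2Pipeline) accepts the same arguments minus the conditioning `image`; check the Diffusers documentation for LTX-2 for the exact call signature.

```python
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import export_to_video

pipe = DiffusionPipeline.from_pretrained(
    "Lightricks/LTX-2", dtype=torch.bfloat16, device_map="cuda"
)

# Assumption: dropping the `image` argument gives plain text-to-video,
# matching the "text-to-video" tag on this repository.
prompt = "A man with short gray hair plays a red electric guitar on a dimly lit stage."
output = pipe(prompt=prompt).frames[0]
export_to_video(output, "output_t2v.mp4")
```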
Add Arxiv ID to metadata and improve model card
#17
opened by nielsr (HF Staff)
Hi! I'm Niels from the community science team at Hugging Face.
Congratulations on the release of LTX-2! I'm opening this PR to add the arxiv ID to the model card's metadata, which enables the Hugging Face Hub's paper integration, linking this repository to your technical report.
I've also added a direct link to the paper in the description and a BibTeX citation section at the bottom.
Please let me know if you have any questions!
Thanks!
jacobitterman changed pull request status to merged
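Background on the mechanism this PR uses: the Hub's paper integration is driven by `arxiv:<id>` entries in the model card's YAML `tags`. The sketch below shows one way to make such a metadata change programmatically with `huggingface_hub`; the arXiv ID is a placeholder, since the real one appears only in the PR diff, and in practice maintainers would simply merge the PR as done here.

```python
from huggingface_hub import metadata_update

# Illustrative sketch only: "YYMM.NNNNN" is a placeholder, not LTX-2's real
# paper ID (that value lives in the PR diff, not in this discussion thread).
metadata_update(
    repo_id="Lightricks/LTX-2",
    metadata={"tags": ["arxiv:YYMM.NNNNN"]},
    overwrite=False,  # refuse to silently clobber existing metadata values
)
```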