Tags: Image-to-Video · Diffusers · Safetensors · English · Chinese · video generation · conversational video generation · talking human video generation
Instructions to use MeiGen-AI/MeiGen-MultiTalk with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Diffusers
How to use MeiGen-AI/MeiGen-MultiTalk with Diffusers:
```shell
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import load_image, export_to_video

# switch to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "MeiGen-AI/MeiGen-MultiTalk", dtype=torch.bfloat16, device_map="cuda"
)
pipe.to("cuda")

prompt = "A man with short gray hair plays a red electric guitar."
image = load_image(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/guitar-man.png"
)

output = pipe(image=image, prompt=prompt).frames[0]
export_to_video(output, "output.mp4")
```

- Notebooks
- Google Colab
- Kaggle
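The snippet above hard-codes `"cuda"` and notes switching to `"mps"` for Apple devices. A minimal device-selection sketch (the fallback order here is an assumption, not part of the official snippet):

```python
import torch

# Prefer a CUDA GPU, then Apple Metal (mps), then fall back to CPU.
if torch.cuda.is_available():
    device = "cuda"
elif getattr(torch.backends, "mps", None) and torch.backends.mps.is_available():
    device = "mps"
else:
    device = "cpu"

print(f"Using device: {device}")
```

You would then pass `device_map=device` to `from_pretrained` and call `pipe.to(device)` in place of the hard-coded `"cuda"` strings.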
RTX5090
#5 · opened by 5Past2Awie
Is there any ETA for when the software will catch up with the RTX 5090? I bought the RTX 5090 specifically to run MultiTalk, only to find out there is no way to run a MultiTalk workflow in ComfyUI. With the standalone version, any video longer than 5 seconds (which still takes 15 minutes to generate) is useless: the people end up looking like they have skin cancer or have aged 50 years since the start of the video.
Now you've got me worried about my RTX 4060.
You have to do a custom setup for the 5090. I am doing mine and will post the specs and any problems.
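One common cause of failures on new-generation cards like the RTX 5090 is a PyTorch build that was not compiled for the card's compute capability. A quick sanity check (this is generic PyTorch, not specific to MultiTalk or this repo):

```python
import torch

# Check whether the installed PyTorch build includes kernels for this GPU's
# compute capability (e.g. the RTX 5090 needs a build targeting its arch).
if torch.cuda.is_available():
    major, minor = torch.cuda.get_device_capability(0)
    arch = f"sm_{major}{minor}"
    supported = torch.cuda.get_arch_list()  # arches compiled into this build
    print(f"GPU arch: {arch}; supported by this build: {arch in supported}")
else:
    print("CUDA is not available in this PyTorch build.")
```

If your card's architecture is missing from the list, upgrading to a newer PyTorch build (e.g. one compiled against a recent CUDA toolkit) is usually the fix.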