
Stable Video Diffusion

Generative neural networks can create various types of content. Stable Diffusion was created to generate images from text descriptions. However, it can also be used to create music, sounds, and even videos. Today, we’ll show you how to create short videos from a single image using Stable Diffusion with WebUI and ComfyUI.

Install Stable Diffusion

Let’s begin by installing Stable Diffusion using our step-by-step guide. After installation, interrupt the webui.sh script by pressing Ctrl + C and close the SSH connection. WebUI doesn’t allow you to install extensions while the --listen (or --share) option is enabled, so instead you need to set up port forwarding for ports 7860 and 8189 from your local machine to the remote server. The first port is used by WebUI and the second by ComfyUI.

For example, in PuTTY, you need to open Connection >> SSH >> Tunnels and add two new forwarded ports as shown in the following screenshot:

PuTTY port forwarding
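On Linux or macOS, the same forwarding can be set up with the standard ssh client’s -L option. A minimal sketch, assuming the usergpu account from this guide; SERVER_IP is a placeholder for your server’s address:

```shell
# Forward local ports 7860 (WebUI) and 8189 (ComfyUI) to the remote
# server; "SERVER_IP" is a placeholder for your server's address.
ssh -L 7860:127.0.0.1:7860 -L 8189:127.0.0.1:8189 usergpu@SERVER_IP
```

Keep this session open while you work: the tunnels exist only as long as the SSH connection does.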

Now, you can reconnect to the remote server and run ./webui.sh again.

Open this URL in your browser:

http://127.0.0.1:7860

Navigate to Extensions >> Available, then click on the Load from: button:

Load available extensions

The system will download the JSON file with all available extensions. Type ComfyUI in the search input box and click the Install button:

Download ComfyUI Reload UI

The web page will reload, and a new ComfyUI tab will appear in the main panel. Switch to it and click Install ComfyUI:

Install ComfyUI

When the installation is finished, interrupt the execution of the webui.sh script again by pressing Ctrl + C.

Install Stable Video Diffusion model

Open the model’s directory:

cd stable-diffusion-webui/models/Stable-diffusion/

Download the full Stable Video Diffusion model:

curl -L "https://huggingface.co/stabilityai/stable-video-diffusion-img2vid-xt/resolve/main/svd_xt.safetensors?download=true" --output svd_xt.safetensors
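Since the model weighs several gigabytes, it’s worth sanity-checking the download before moving on. A minimal check, assuming the curl command above has finished; a failed download often leaves a small HTML/JSON error page instead of the model file:

```shell
# Hypothetical sanity check: verify the downloaded file exists and
# does not look like an HTML error page instead of model weights.
if [ -f svd_xt.safetensors ]; then
    ls -lh svd_xt.safetensors          # expect a size of several GB
    if head -c 100 svd_xt.safetensors | grep -qi "<html\|error"; then
        echo "Warning: file looks like an error page, re-run the download"
    fi
fi
```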

Return to the home directory:

cd ~/

And run the Stable Diffusion service again:

./webui.sh

Download the example Stable Video Diffusion workflow in JSON format. Erase the default ComfyUI workflow by pressing Clear, then press Load and open the downloaded example:

ComfyUI workflow example

Ensure that you have the correct model selected in the Image Only Checkpoint Loader (img2vid model) node:

Select CKPT model

Click the choose file to upload button in the Load Image node and select the single image that the neural network will transform into a video:

Upload an image to ComfyUI

Try generating a video with all default parameters by clicking the Queue Prompt button:

Send task to queue

When the process completes, you’ll find your video in WEBP format in the SaveAnimatedWEBP node. Right-click the generated video and choose Save Image:

Here is the final result as a GIF.

Troubleshooting

If you get the error message ModuleNotFoundError: No module named 'utils.json_util'; 'utils' is not a package, follow these steps:

Rename the utils directory to utilities:

mv /home/usergpu/stable-diffusion-webui/extensions/sd-webui-comfyui/ComfyUI/utils /home/usergpu/stable-diffusion-webui/extensions/sd-webui-comfyui/ComfyUI/utilities

Edit custom_node_manager.py:

nano /home/usergpu/stable-diffusion-webui/extensions/sd-webui-comfyui/ComfyUI/app/custom_node_manager.py

Replace this line:

from utils.json_util import merge_json_recursive

with:

from utilities.json_util import merge_json_recursive

Save the file (Ctrl + O) and exit the editor (Ctrl + X). Then edit main.py:

nano /home/usergpu/stable-diffusion-webui/extensions/sd-webui-comfyui/ComfyUI/main.py

Replace this line:

import utils.extra_config

with:

import utilities.extra_config

Save the file, exit the editor, and run the Stable Diffusion service again:

./webui.sh
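The manual rename and both import fixes above can also be applied in one go with mv and sed. A sketch using the same paths as the steps above; the existence check makes it safe to run only when the broken utils directory is still present:

```shell
# Apply the troubleshooting fixes non-interactively: rename the utils
# package and patch the two import statements.
BASE=/home/usergpu/stable-diffusion-webui/extensions/sd-webui-comfyui/ComfyUI
if [ -d "$BASE/utils" ]; then
    mv "$BASE/utils" "$BASE/utilities"
    sed -i 's/from utils\.json_util/from utilities.json_util/' "$BASE/app/custom_node_manager.py"
    sed -i 's/import utils\.extra_config/import utilities.extra_config/' "$BASE/main.py"
fi
```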


Updated: 04.04.2025

Published: 22.01.2025