Learn how to create, start, stop, and terminate Pods using the Runpod console or CLI.
To create a Pod using the Runpod API, replace `[RUNPOD_API_KEY]` with your API key.
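As a rough sketch of what such a request can look like (the exact request from this guide is not reproduced here), the example below creates a Pod through the REST API. The `https://rest.runpod.io/v1/pods` endpoint and field names such as `gpuTypeIds` and `cloudType` are assumptions based on Runpod's Pod create endpoint; check the API reference for the current schema.

```bash
# Hypothetical sketch: create a Pod via the REST API.
# Replace [RUNPOD_API_KEY] with your API key. The endpoint and field names are
# assumptions; verify them against the Runpod API reference.
curl --request POST \
  --url https://rest.runpod.io/v1/pods \
  --header "Authorization: Bearer [RUNPOD_API_KEY]" \
  --header "Content-Type: application/json" \
  --data '{
    "name": "my-pytorch-pod",
    "imageName": "runpod/pytorch:2.8.0-py3.11-cuda12.8.1-cudnn-devel-ubuntu22.04",
    "gpuTypeIds": ["NVIDIA GeForce RTX 4090"],
    "gpuCount": 1,
    "cloudType": "SECURE",
    "containerDiskInGb": 50,
    "volumeInGb": 20,
    "volumeMountPath": "/workspace"
  }'
```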
If you deploy a Pod from a template that requires a specific CUDA version (for example, `runpod/pytorch:2.8.0-py3.11-cuda12.8.1-cudnn-devel-ubuntu22.04`), ensure the host machine’s CUDA driver version matches or exceeds the template’s requirements. If you encounter errors like “OCI runtime create failed” or “unsatisfied condition: cuda>=X.X”, you need to filter for compatible machines.
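One way to do this programmatically is to pin the CUDA versions your Pod will accept at deploy time. The sketch below assumes the same Pod create endpoint as above plus an `allowedCudaVersions` field; treat both as assumptions and confirm them in the API reference.

```bash
# Hypothetical sketch: restrict deployment to hosts whose CUDA driver satisfies
# the template's requirement. "allowedCudaVersions" is an assumed field name.
curl --request POST \
  --url https://rest.runpod.io/v1/pods \
  --header "Authorization: Bearer [RUNPOD_API_KEY]" \
  --header "Content-Type: application/json" \
  --data '{
    "name": "cuda-pinned-pod",
    "imageName": "runpod/pytorch:2.8.0-py3.11-cuda12.8.1-cudnn-devel-ubuntu22.04",
    "gpuTypeIds": ["NVIDIA GeForce RTX 4090"],
    "allowedCudaVersions": ["12.8", "12.9"]
  }'
```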
If your Pod has a network volume attached, data in the `/workspace` directory will be preserved in the network volume, and you can regain access by deploying a new Pod with the same network volume attached.
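For illustration, re-attaching an existing network volume when deploying the new Pod might look like the sketch below; `networkVolumeId` is an assumed field name for the Pod create endpoint, so verify it in the API reference.

```bash
# Hypothetical sketch: deploy a new Pod with an existing network volume attached,
# which makes the preserved /workspace data accessible again.
# "networkVolumeId" is an assumed field name.
curl --request POST \
  --url https://rest.runpod.io/v1/pods \
  --header "Authorization: Bearer [RUNPOD_API_KEY]" \
  --header "Content-Type: application/json" \
  --data '{
    "name": "restored-workspace-pod",
    "imageName": "runpod/pytorch:2.8.0-py3.11-cuda12.8.1-cudnn-devel-ubuntu22.04",
    "gpuTypeIds": ["NVIDIA GeForce RTX 4090"],
    "networkVolumeId": "your-network-volume-id"
  }'
```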
When you stop a Pod, data in the `/workspace` directory is preserved. To learn more about how Pod storage works, see Storage overview.
By stopping a Pod, you effectively release the GPU on its host machine, and you may be reallocated 0 GPUs when you start the Pod again. For more information, see the FAQ.
You can also stop a Pod from its own terminal using the `runpodctl stop pod` command.
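For example, to stop the Pod after a delay from inside an SSH session, you can combine `sleep` with `runpodctl` and the Pod's ID, which is exposed inside the Pod as the `RUNPOD_POD_ID` environment variable. The two-hour delay and the subshell grouping below are illustrative choices, not taken from this guide.

```bash
# Stop this Pod after a two-hour delay (adjust the delay as needed).
# The subshell plus the trailing & run the whole sequence in the background.
# RUNPOD_POD_ID is an environment variable set inside every Pod.
(sleep 2h; runpodctl stop pod $RUNPOD_POD_ID) &
```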
The `&` at the end runs the command in the background, allowing you to continue using the SSH session.