Does anyone know if SDXL can split tasks across SLI cards? I’ve been thinking of building a dual Tesla A80 rig since they are so cheap, but I want to be able to render with all 48 GB as one pool.
The “Swarm” name is in reference to the original key function of the UI: enabling a ‘swarm’ of GPUs to all generate images for the same user at once (especially for large grid generations).
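To make that concrete, here’s a rough sketch of the swarm idea, assuming Hugging Face diffusers and the public SDXL weights (this is not SwarmUI’s actual code): one worker process per GPU, each loading its own full copy of the model and rendering its slice of a seed grid. Note that every card holds a complete copy of SDXL, so this splits the work, not the memory.

```python
# A minimal sketch of the "swarm" idea, assuming Hugging Face diffusers
# and the public SDXL weights; NOT SwarmUI's actual code. One process
# per GPU, each with its own full model copy, splitting a seed grid.
import torch
import torch.multiprocessing as mp
from diffusers import StableDiffusionXLPipeline

MODEL = "stabilityai/stable-diffusion-xl-base-1.0"
PROMPT = "a lighthouse in a storm, oil painting"
SEEDS = list(range(16))  # the "grid" of variations to fan out

def worker(gpu_idx: int, n_gpus: int):
    device = f"cuda:{gpu_idx}"
    # Each process loads a full copy of SDXL onto its own card.
    pipe = StableDiffusionXLPipeline.from_pretrained(
        MODEL, torch_dtype=torch.float16
    ).to(device)
    # Each GPU takes every n_gpus-th seed from the grid.
    for seed in SEEDS[gpu_idx::n_gpus]:
        gen = torch.Generator(device).manual_seed(seed)
        image = pipe(PROMPT, generator=gen).images[0]
        image.save(f"grid_seed{seed}_gpu{gpu_idx}.png")

if __name__ == "__main__":
    n = torch.cuda.device_count()
    mp.spawn(worker, args=(n,), nprocs=n)
```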
For OP: I run entirely on OpenAI via API calls.
You can’t pool VRAM across cards like that for a single task, such as rendering one massive high-resolution image; each GPU can only allocate out of its own memory (quick demo at the end of this comment). There might be some way to get a series of queued tasks split across the cards, though.
*googles*
According to this discussion, it’s not possible in Automatic1111 currently, but there’s another frontend (StableSwarmUI) that can:
https://github.com/AUTOMATIC1111/stable-diffusion-webui/discussions/1621
https://github.com/Stability-AI/StableSwarmUI
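And to see why the “all 48 GB as one” part fails, here’s a quick hypothetical PyTorch demo (assuming a recent PyTorch and a 2x24 GB rig): allocations live entirely on one device, so a tensor bigger than one card’s VRAM OOMs even with a second idle card.

```python
# Hypothetical demo: PyTorch reports and allocates VRAM per device,
# so two 24 GB cards never behave like one 48 GB card.
import torch

for i in range(torch.cuda.device_count()):
    free, total = torch.cuda.mem_get_info(i)
    print(f"cuda:{i}: {free / 2**30:.1f} GiB free of {total / 2**30:.1f} GiB")

try:
    # ~30 GiB of fp16: it must fit entirely on cuda:0, so this raises
    # OutOfMemoryError on a 24 GB card even though cuda:1 sits idle.
    big = torch.empty(30 * 2**30 // 2, dtype=torch.float16, device="cuda:0")
except torch.cuda.OutOfMemoryError:
    print("OOM on cuda:0; cuda:1's VRAM can't be borrowed")
```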