TL;DR
A new post-training quantization paradigm for diffusion models that quantizes both the weights and activations of FLUX.1 to 4 bits, achieving a 3.5× memory reduction and 8.7× latency reduction on a 16GB laptop RTX 4090 GPU.
Paper: http://arxiv.org/abs/2411.05007
Weights: https://huggingface.co/mit-han-lab/svdquant-models
Code: https://github.com/mit-han-lab/nunchaku
Blog: https://hanlab.mit.edu/blog/svdquant
Demo: https://svdquant.mit.edu/
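
For reference, a minimal usage sketch: loading the 4-bit SVDQuant FLUX.1 transformer through nunchaku and dropping it into a standard diffusers pipeline. The class name (`NunchakuFluxTransformer2dModel`) and checkpoint id below are assumptions based on the repo's README and may have changed; check the repo for the current API.

```python
# Hypothetical usage sketch: load the SVDQuant W4A4 FLUX.1 transformer via
# nunchaku and plug it into a regular diffusers FluxPipeline. Class and
# checkpoint names are assumptions taken from the repo README; verify there.
import torch
from diffusers import FluxPipeline
from nunchaku import NunchakuFluxTransformer2dModel

# Load the 4-bit (weights and activations) quantized transformer.
transformer = NunchakuFluxTransformer2dModel.from_pretrained(
    "mit-han-lab/svdq-int4-flux.1-schnell"
)

# Swap the quantized transformer into the standard FLUX.1 pipeline.
pipeline = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
).to("cuda")

image = pipeline(
    "A cat holding a sign that says hello world",
    num_inference_steps=4,
    guidance_scale=0.0,
).images[0]
image.save("flux.1-schnell.png")
```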