softwiredtech/stable-diffusion-webgpu

Stable Diffusion on tinygrad WebGPU

This is a WebGPU port of Stable Diffusion in tinygrad.
Try it out here!
The Python code I wrote to compile and export the model can be found here.

How it works

The Stable Diffusion model is exported in three parts:

  • textModel — encodes the text prompt into an embedding
  • diffusor — iteratively denoises the latent, guided by the embedding
  • decoder — decodes the final latent into the output image
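
At inference time the three parts run in sequence. Below is an illustrative sketch of that call order using placeholder functions — the names mirror the exported parts, but none of this is the actual net.js API:

```javascript
// Illustrative three-stage pipeline; the real WebGPU kernels live in net.js.
// All three functions here are placeholders standing in for the exported parts.

function textModel(prompt) {
  // Encodes the prompt into a conditioning embedding (placeholder).
  return { embedding: prompt.length };
}

function diffusor(embedding, steps) {
  // Iteratively denoises a latent over `steps` timesteps, guided by the
  // text embedding (placeholder loop).
  let latent = 0;
  for (let i = 0; i < steps; i++) latent += 1;
  return { latent, embedding };
}

function decoder(out) {
  // Decodes the final latent into an image (placeholder).
  return `image(${out.latent})`;
}

function runPipeline(prompt, steps) {
  const cond = textModel(prompt);
  const denoised = diffusor(cond.embedding, steps);
  return decoder(denoised);
}
```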

If you open net.js, you can see all the WebGPU kernels involved in inference.
When you open the page for the first time, the model is downloaded from Hugging Face in tinygrad's safetensor format. The weights are stored in f16 to reduce download size.
Since the model computes in f32, and since shader-f16 is not yet supported in production Chrome, the weights are decompressed to f32 using f16-to-f32-gpu, a WebGPU-based f16 decompression library.
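
The per-element conversion this decompression performs is standard IEEE 754 half-to-single widening. A CPU reference in plain JavaScript (the library itself does this in a WebGPU compute shader, not on the CPU):

```javascript
// Reference f16 -> f32 conversion on the CPU, mirroring what the GPU
// decompression pass does per element. `bits` is the raw 16-bit pattern.
function f16ToF32(bits) {
  const sign = (bits >> 15) & 1;
  const exp = (bits >> 10) & 0x1f;  // 5 exponent bits, bias 15
  const frac = bits & 0x3ff;        // 10 fraction bits
  let value;
  if (exp === 0) {
    value = frac * Math.pow(2, -24);            // subnormal (or zero)
  } else if (exp === 0x1f) {
    value = frac ? NaN : Infinity;              // NaN / infinity
  } else {
    value = (1 + frac / 1024) * Math.pow(2, exp - 15); // normal number
  }
  return sign ? -value : value;
}
```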
On subsequent visits, the model is loaded from the IndexedDB cache it was saved to on the first visit. On a full cache hit, the model is decompressed, compiled, and ready to use; on a cache miss (usually because the initial save failed with a QuotaExceededError), the model is redownloaded.
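
That load path — try the cache, fall back to the network, and tolerate a failed cache write — can be sketched as follows. `store` and `download` are hypothetical stand-ins for the IndexedDB wrapper and the Hugging Face fetch; this is not the site's actual code:

```javascript
// Sketch of cache-first model loading with graceful quota failure.
// `store` is a hypothetical async key-value wrapper around IndexedDB;
// `download` is a hypothetical function fetching the f16 weights.
async function loadModel(store, download) {
  try {
    const cached = await store.get('model-f16');
    if (cached) return { source: 'cache', data: cached };
  } catch (e) {
    // Treat a read failure the same as a miss and fall through to download.
  }
  const fresh = await download();
  try {
    await store.put('model-f16', fresh); // may throw QuotaExceededError
  } catch (e) {
    // Cache write failed; the model will simply be redownloaded next visit.
  }
  return { source: 'network', data: fresh };
}
```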

License

MIT
