Stable Diffusion is an open-source text-to-image diffusion model developed by the CompVis research group at LMU Munich in collaboration with Runway, with compute and backing from Stability AI. Released in August 2022, it broadened access to high-quality generative imaging by enabling anyone with a consumer-grade GPU to turn natural-language prompts into detailed images, perform inpainting and outpainting, and fine-tune custom models.
Built on a latent diffusion architecture, Stable Diffusion runs the denoising process in a compressed, lower-dimensional latent space rather than directly on pixels, which makes inference fast while preserving visual fidelity. Its weights and code are openly available under licenses that permit research and commercial use, which has sparked a vibrant ecosystem of community forks, plug-ins, successor models, and extensions (e.g., SDXL, Stable Diffusion 3, ControlNet). Widely adopted across design, game development, advertising, and scientific visualization, the model has accelerated creative workflows while also prompting important discussions around copyright, safety, and responsible AI deployment.
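To make the latent-space efficiency argument concrete, here is a minimal NumPy sketch. The shapes follow the published SD 1.x configuration (a 512×512 RGB image is encoded into a 64×64×4 latent, an 8× spatial downsampling); the denoising loop is a placeholder stand-in, not the real U-Net, and `toy_denoise` is a hypothetical helper used only for illustration.

```python
import numpy as np

# The VAE encoder compresses a 512x512x3 image into a 64x64x4 latent,
# so each diffusion step operates on ~48x fewer values than pixel space.
IMAGE_SHAPE = (512, 512, 3)
LATENT_SHAPE = (64, 64, 4)

pixel_values = int(np.prod(IMAGE_SHAPE))    # 786,432 values per step (pixel space)
latent_values = int(np.prod(LATENT_SHAPE))  #  16,384 values per step (latent space)
compression = pixel_values / latent_values  # 48x fewer values to denoise

def toy_denoise(latent: np.ndarray, steps: int = 50) -> np.ndarray:
    """Sketch of the reverse-diffusion loop: start from noise and
    repeatedly subtract a noise prediction. Here the 'prediction' is a
    fake placeholder; in Stable Diffusion it comes from a text-conditioned
    U-Net evaluated at each timestep."""
    for _ in range(steps, 0, -1):
        predicted_noise = latent * (1.0 / steps)  # placeholder for U-Net output
        latent = latent - predicted_noise
    return latent

rng = np.random.default_rng(0)
z = rng.standard_normal(LATENT_SHAPE)  # sampling starts from pure noise
z = toy_denoise(z)                     # denoised latent; the VAE decoder
                                       # would then map it back to pixels
```

The design point the sketch illustrates: because every denoising step runs on the small latent rather than the full image, latent diffusion trades one cheap encode/decode pass through the VAE for a roughly 48-fold reduction in the work done at each of the (typically 20-50) sampling steps.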