Understanding Text-to-Image Models workshop will now be virtual on Monday, Feb. 27

Diffusion-based text-to-image models have shown incredible success in generating realistic images. Models like DALL-E can generate brand-new photo-realistic scenes, as well as mimic and transfer an art style onto existing photos. These models can be used in a variety of ways, and experimenting with them is easy to get started with. However, getting the image you're looking for requires careful selection of the input prompt and an understanding of the models' parameters.

The Data Science Institute is offering a chance to go in-depth on prompt engineering at a workshop open to the Vanderbilt community. Understanding Text-to-Image Models: DALL-E, Stable Diffusion, and more will be held from noon to 1 p.m. on Monday, Feb. 27. You can sign up for the workshop by filling out the form in this link.