Brianna White

For the past few days, I’ve been playing around with DALL-E 2, an app developed by the San Francisco company OpenAI that turns text descriptions into hyper-realistic images.
OpenAI invited me to test DALL-E 2 (the name is a play on Pixar’s WALL-E and the artist Salvador Dalí) during its beta period, and I quickly got obsessed. I spent hours thinking up weird, funny and abstract prompts to feed the A.I. — “a 3-D rendering of a suburban home shaped like a croissant,” “an 1850s daguerreotype portrait of Kermit the Frog,” “a charcoal sketch of two penguins drinking wine in a Parisian bistro.” Within seconds, DALL-E 2 would spit out a handful of images depicting my request — often with jaw-dropping realism.
Here, for example, is one of the images DALL-E 2 produced when I typed in “black-and-white vintage photograph of a 1920s mobster taking a selfie.” And how it rendered my request for a high-quality photograph of “a sailboat knitted out of blue yarn.”
DALL-E 2 can also go more abstract. The illustration at the top of this article, for example, is what it generated when I asked for a rendering of “infinite joy.” (I liked this one so much I’m going to have it printed and framed for my wall.)
What’s impressive about DALL-E 2 isn’t just the art it generates. It’s how it generates art. These aren’t composites made out of existing internet images — they’re wholly new creations made through a complex A.I. process known as “diffusion,” which starts with a random series of pixels and refines it repeatedly until it matches a given text description. And it’s improving quickly — DALL-E 2’s images are four times as detailed as the images generated by the original DALL-E, which was introduced only last year.
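The core idea behind diffusion — start from noise and refine step by step toward an image that fits the prompt — can be sketched in a few lines. This is only a toy illustration, not OpenAI's actual model: a real diffusion model learns its denoising step from training data, while here the "denoiser" simply blends toward a hypothetical target array standing in for a text-conditioned result.

```python
import numpy as np

def toy_diffusion(target, steps=50, seed=0):
    """Toy sketch of iterative refinement: begin with random pixels
    and 'denoise' step by step toward a target image. In a real
    diffusion model the denoising step is a learned neural network
    conditioned on the text prompt; here it is a simple blend."""
    rng = np.random.default_rng(seed)
    image = rng.random(target.shape)  # start from pure noise
    for _ in range(steps):
        # each step removes a little noise, nudging toward the target
        image = 0.9 * image + 0.1 * target
    return image

# hypothetical 8x8 "target" standing in for the prompt-matching image
target = np.linspace(0, 1, 64).reshape(8, 8)
result = toy_diffusion(target)
```

After 50 blend steps the remaining noise has shrunk by a factor of roughly 0.9^50 (about 0.5%), so the result is visually indistinguishable from the target; the real process works the same way in spirit, just with a learned, prompt-guided step.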
Continue reading: https://www.nytimes.com/2022/08/24/technology/ai-technology-progress.html