Aside from the fact that your comment applies to photography as well, I think it’s fair to point out that image generation can also be a complex pipeline rather than a single prompt.
I use ComfyUI on my own hardware and frequently include steps for ControlNet, depth maps, Canny edge detection, segmentation, LoRAs, and more. Personally, the text prompts, both positive and negative, are the least important parts of my workflow.
Hell, sometimes I use my own photos as one of the dozens of inputs to the workflow, so in a sense photography is included.
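
For anyone who hasn’t touched these tools, here’s a minimal sketch of what just one of those steps looks like. ComfyUI itself is a node graph rather than a script, so this uses the diffusers library instead; the file name, model choices, prompts, and Canny thresholds are all illustrative, not my actual setup:

```python
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# Start from one of your own photos and reduce it to a Canny edge map,
# which then constrains the composition of the generated image.
photo = np.array(Image.open("my_photo.png").convert("RGB"))
gray = cv2.cvtColor(photo, cv2.COLOR_RGB2GRAY)
edges = cv2.Canny(gray, 100, 200)  # thresholds are illustrative
control_image = Image.fromarray(np.stack([edges] * 3, axis=-1))

# Load a Canny-conditioned ControlNet and plug it into an SD pipeline.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# The prompts are just one knob among many; the edge map does most
# of the compositional work here.
result = pipe(
    prompt="a watercolor landscape",
    negative_prompt="blurry, low quality",
    image=control_image,
    num_inference_steps=30,
).images[0]
result.save("out.png")
```

And that’s a single conditioning path; a real workflow chains several of these (depth, segmentation, LoRAs) before the prompt even enters the picture.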