Opinions vary on what the best image a machine can synthesize is. In this talk I will discuss some possible answers. First, physical accuracy has been a classic and quantifiable objective, and I will give an example of a technique that robustly computes unbiased images. Another important aspect is computational efficiency: I will explain how the rasterization capabilities of modern graphics hardware can be used to introduce global illumination at interactive frame rates. Ultimately, images are made to be perceived by humans, in which case it is not easy to say which image is "better", e.g. when comparing to a reference. I will discuss the particular challenge of computing the best (stereo) image using a perceptual model. If the "best" image exists only in the mind of the user, computing it is an even more substantial challenge; I will discuss examples of using Internet image collections to guide users in realizing what they imagine. Finally, images have commonly been synthesized from complete information. I will discuss some alternatives, where only incomplete information is available and extrapolation from sparse example images is used to achieve better coverage with many images.