Compare and contrast Litwinowicz's paper with Haeberli's paper.
What are the advantages and disadvantages to using video as input to an animation system?
How much control does the artist have with Litwinowicz's system? How fully automated do you think this system is?
How intuitive (for an artist) do you think the parameters are in Hertzmann's system? Would it be hard to create a GUI front-end that would be easy for artists to use?
How easy or hard would it be to combine Litwinowicz's and Hertzmann's systems, i.e. a system in which individual curved brush strokes are moved forward from frame to frame based on optical flow? What technical challenges do you think would arise?
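One core piece of such a hybrid can be sketched in a few lines: advect the control points of each curved brush stroke by the optical-flow vector at their pixel. This is an illustrative sketch only, not either paper's actual implementation; the flow-field layout (a row-major grid of `(dx, dy)` pairs) and the stroke representation (a list of `(x, y)` control points) are assumptions made for the example.

```python
# Hypothetical sketch: move curved-stroke control points forward one frame
# using a dense optical-flow field. The data structures here are assumptions,
# not Litwinowicz's or Hertzmann's actual ones.

def advect_stroke(control_points, flow, width, height):
    """Move each (x, y) control point by the flow vector at its nearest pixel.

    flow[y][x] is a (dx, dy) pair giving the per-pixel displacement.
    """
    moved = []
    for x, y in control_points:
        # Clamp to image bounds, then look up the nearest flow vector.
        xi = min(max(int(round(x)), 0), width - 1)
        yi = min(max(int(round(y)), 0), height - 1)
        dx, dy = flow[yi][xi]
        moved.append((x + dx, y + dy))
    return moved
```

Note that even this toy version exposes one of the technical challenges: different control points of the same stroke can receive different flow vectors, stretching or breaking the stroke at motion boundaries.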
Imagine taking one of Meier's haystacks and generating a sequence in which a camera circles the haystack by 360 degrees (i.e. the camera starts and ends at exactly the same position). Will you get the exact same imagery (i.e. set of strokes) at the beginning and end? Why or why not? Is this issue significant? Again, why or why not?
Meier's paper talks about using "reference images." Give some examples of these and what they are used for. How are these similar to and how are they different from G-buffers?
What is the "shower door effect"?
Meier mentions a piece of related work in which 3D geometric elements are attached to surfaces in model space. What kind of different-looking effect do you think that would give?
Do you think you could extend Meier's system to work with long, curved brush strokes? Why or why not?
What are moments?
Compare and contrast the Shiraishi and Yamaguchi paper with the Haeberli and Hertzmann papers.
Shiraishi and Yamaguchi make the point that image gradients can be sensitive to noise. This, of course, is why many researchers suggest blurring the image before computing gradients. Give a non-trivial example in which the Shiraishi and Yamaguchi method will produce a potentially better stroke orientation or placement.
What do Shiraishi and Yamaguchi mean when they say "intensity is merely one aspect of the colour. We must consider the chromaticity information in order to approximate the local region of the source image"?
Santella and DeCarlo state that their system uses a "perceptual and intentional structure of an image" to render it. What is the perceptual structure? What is the intentional structure?
What do you think is the valuable contribution of the Santella and DeCarlo paper?
Why don't longer eye fixations automatically detect areas of higher interest?
How does the Gooch et al. system compare and contrast with the Hertzmann and Shiraishi/Yamaguchi systems? Can you think of images that will look best under each of the three systems and not under the others?
We have now seen a whole collection of painterly rendering papers. Do you think the results continue to improve with each successive paper? Is this area worth continuing to explore? If so, what issues still need to be addressed? If not, why not?
Would you classify the Deussen et al. paper as an automated filter or an artist's tool? What about the Secord paper?
Deussen et al. state that stippling is "more than a set of randomly spaced dots representing a given image." How? How is stippling similar to (and different from) halftoning? How are stipple dots different from pixels?
Poisson disc distribution and blue noise are concepts that come up frequently in graphics. What are they?
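A useful starting point for this question is the classic "dart throwing" construction of a Poisson-disc distribution: propose random points and keep only those at least a minimum distance from every accepted point. The sketch below is a deliberately naive, illustrative version (real implementations use spatial grids or Bridson's algorithm for efficiency); the function name and parameters are invented for the example.

```python
# Hedged illustration: naive "dart throwing" Poisson-disc sampling.
# Accepted points are never closer than `radius` to each other, which is
# exactly the property that gives the distribution its blue-noise character.
import random

def dart_throwing(width, height, radius, max_attempts=5000, seed=0):
    """Accept uniformly random points only if no earlier accepted point
    lies within `radius`; stop after a fixed number of attempts."""
    rng = random.Random(seed)
    points = []
    r2 = radius * radius
    for _ in range(max_attempts):
        x, y = rng.uniform(0, width), rng.uniform(0, height)
        if all((x - px) ** 2 + (y - py) ** 2 >= r2 for px, py in points):
            points.append((x, y))
    return points
```

Comparing a scatter plot of these points against uniform random ("white noise") points makes the difference visually obvious: the Poisson-disc set has no clumps or large gaps, which is why it reads as evenly stippled.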
What is a weighted centroidal Voronoi diagram, and how does Secord use it to avoid user segmentation of the input image?
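The core operation behind a weighted centroidal Voronoi diagram is a Lloyd-style relaxation step: assign each pixel to its nearest stipple point, then move each point to the density-weighted centroid of its cell. The brute-force, per-pixel sketch below is an assumption-laden illustration of that one step, not Secord's implementation (he computes the Voronoi diagram on graphics hardware and integrates the density analytically).

```python
# Illustrative sketch of one weighted-Lloyd iteration. `density[y][x]` plays
# the role of the image-derived weight (darker regions attract more dots).
# Function name and data layout are invented for this example.

def weighted_cvt_step(points, density, width, height):
    """Assign each pixel to its nearest generator, then move each generator
    to the density-weighted centroid of its Voronoi cell."""
    n = len(points)
    sx = [0.0] * n
    sy = [0.0] * n
    sw = [0.0] * n
    for y in range(height):
        for x in range(width):
            w = density[y][x]
            if w == 0:
                continue
            # Nearest generator by brute force (real systems use a fast
            # Voronoi computation instead of this O(n) scan per pixel).
            k = min(range(n),
                    key=lambda i: (x - points[i][0]) ** 2 + (y - points[i][1]) ** 2)
            sx[k] += w * x
            sy[k] += w * y
            sw[k] += w
    return [(sx[i] / sw[i], sy[i] / sw[i]) if sw[i] > 0 else points[i]
            for i in range(n)]
```

Iterating this step to convergence yields stipples whose local spacing adapts to image tone automatically, which is how the method sidesteps any manual segmentation of the input image.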