Pick a particular artistic style, such as impressionist painting, classical oil painting, pastels, or charcoal. How would you need to extend Haeberli's ideas to better mimic that particular style?
Can you think of a well-known artistic style for which Haeberli's approach is not well-suited? Why?
For Haeberli's paper, discuss the pros and cons of point-sampling versus area-sampling for color.
Saito and Takahashi make a point of doing their effects as 2D image processing techniques. Why? Can you think of an effect for which 3D might be easier/better?
What is a "G-Buffer"?
Would it be hard or easy for an artistically-challenged computer scientist to create a new rendering style with the Saito and Takahashi system? What about for a not-very-technical artist?
Compare and contrast Litwinowicz's paper with Haeberli's paper.
What are the advantages and disadvantages to using video as input to an animation system?
How much control does the artist have with Litwinowicz's system? How fully automated do you think this system is?
How intuitive (for an artist) do you think the parameters are in Hertzmann's system? Would it be hard to create a GUI front-end that would be easy for artists to use?
How easy or hard would it be to combine Litwinowicz's and Hertzmann's systems, i.e. a system in which individual curved brush strokes are moved forward from frame to frame based on optical flow? What technical challenges do you think would arise?
Imagine taking one of Meier's haystacks and generating a sequence in which a camera circles the haystack through 360 degrees (i.e. the camera starts and ends at exactly the same position). Will you get exactly the same imagery (i.e. set of strokes) at the beginning and end? Why or why not? Is this issue significant? Again, why or why not?
Meier's paper talks about using "reference images." Give some examples of these and what they are used for. How are these similar to and how are they different from G-buffers?
What is the "shower door effect"?
Meier's paper mentions a piece of related work in which 3D geometric elements are attached to surfaces in model space. What kind of different-looking effect do you think that would give?
Do you think you could extend Meier's system to work with long, curved brush strokes? Why or why not?
What are moments?
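(For reference, the image moments Shiraishi and Yamaguchi rely on can be computed from a grayscale region as in the rough sketch below. The function name and use of NumPy are illustrative, not taken from their paper; stroke length and width would follow from the eigenvalues of the second-moment matrix.)

    import numpy as np

    def region_moments(I):
        """Zeroth, first, and second central moments of a grayscale region I,
        plus the centroid and principal-axis orientation they imply.
        Assumes I has nonzero total intensity."""
        h, w = I.shape
        ys, xs = np.mgrid[0:h, 0:w]
        m00 = I.sum()                                   # zeroth moment (total mass)
        xbar = (xs * I).sum() / m00                     # centroid
        ybar = (ys * I).sum() / m00
        mu20 = ((xs - xbar) ** 2 * I).sum() / m00       # second central moments
        mu02 = ((ys - ybar) ** 2 * I).sum() / m00
        mu11 = ((xs - xbar) * (ys - ybar) * I).sum() / m00
        theta = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)  # principal-axis orientation
        return m00, (xbar, ybar), (mu20, mu02, mu11), theta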
Compare and contrast the Shiraishi and Yamaguchi paper with the Haeberli and Hertzmann papers.
Shiraishi and Yamaguchi make the point that image gradients can be sensitive to noise. This, of course, is why many researchers suggest blurring the image before computing the gradient. Name a non-trivial example in which the Shiraishi and Yamaguchi method could result in better stroke orientation or placement.
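(The standard blur-then-differentiate approach mentioned above looks roughly like the sketch below; sigma is a tunable smoothing parameter, not a value taken from either paper.)

    import numpy as np
    from scipy import ndimage

    def smoothed_gradient(gray, sigma=2.0):
        """Suppress noise with a Gaussian blur, then estimate the gradient with Sobel filters."""
        blurred = ndimage.gaussian_filter(gray, sigma)
        gx = ndimage.sobel(blurred, axis=1)
        gy = ndimage.sobel(blurred, axis=0)
        orientation = np.arctan2(gy, gx)   # per-pixel gradient direction
        magnitude = np.hypot(gx, gy)
        return magnitude, orientation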
What do Shiraishi and Yamaguchi mean when they say "intensity is merely one aspect of the colour. We must consider the chromaticity information in order to approximate the local region of the source image"?
Santella and DeCarlo state that their system uses a "perceptual and intentional structure of an image" to render it. What is the perceptual structure? What is the intentional structure?
What do you think is the valuable contribution of the Santella and DeCarlo paper?
Why don't longer eye fixations automatically indicate areas of higher interest?
How does the Gooch et al. system compare and contrast to the Hertzmann and Shiraishi/Yamaguchi papers? Can you think of an image for each of the three systems that looks best with that system and not the other two?
We have now seen a whole collection of painterly rendering papers. Do you think the results continue to improve with each successive paper? Is this area worth continuing to explore? If so, what issues still need to be addressed? If not, why not?
Like the Saito and Takahashi paper, the Markosian et al. paper is about comprehensible rendering of 3D objects. Please compare and contrast the two papers.
How does the Markosian et al. paper "deliberately trade[] accuracy and detail for speed"?
What is "economy of line" in the context of the Markosian et al. paper and what features are rendered?
What are the definitions of a "silhouette edge" and "border edge" in the Markosian et al. paper? What is the intuitive meaning of a "generic view"?
In the Markosian et al. paper, what is the relationship between an edge's dihedral angle and its probability of being on a silhouette?
What is the basic silhouette rendering algorithm in the Raskar and Cohen paper?
Compare and contrast the Raskar and Cohen paper with the Markosian et al. paper. What are the advantages of the Raskar and Cohen paper? What are the advantages of the Markosian et al. paper?
The Hertzmann and Zorin paper "introduce[s] an efficient, deterministic algorithm for finding silhouettes based on geometric duality." What is this geometric duality and how does it help?
Hertzmann and Zorin make a special point of stating that any mesh can serve as input to their approach. Why is this significant? What issue(s) are they alluding to?
How can the Hertzmann and Zorin silhouette detection method be used to accelerate the computation of shadow volumes?
What is a true contour? What is the difference between a true contour and a suggestive contour?
What are the three equivalent definitions of suggestive contours?
Differential geometry: What is the first-order approximation of a surface at a point? How is the normal curvature of a surface S at a point p defined?
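(For reference, writing S locally as a graph z = f(x, y) near p, the standard definitions are sketched below; the second line uses the first and second fundamental forms evaluated at p.)

    % First-order (linear) approximation of S at p: the tangent plane
    z \approx f(p) + \nabla f(p) \cdot (x - p)
    % Normal curvature of S at p in a unit tangent direction v
    \kappa_n(v) = \frac{\mathrm{II}_p(v, v)}{\mathrm{I}_p(v, v)}

Equivalently, kappa_n(v) is the signed curvature at p of the normal section of S cut by the plane spanned by v and the surface normal.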
Would you classify the Deussen et al. paper as an automated filter or an artist's tool? What about the Secord paper?
Deussen et al. state that stippling is "more than a set of randomly spaced dots representing a given image." How? How is stippling similar to (and different from) halftoning? How are stipple dots different from pixels?
Poisson disc distribution and blue noise are concepts that come up frequently in graphics. What are they?
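(Dart throwing is the simplest way to generate a Poisson-disc distribution: accept a random candidate only if it lies at least r away from every point accepted so far. The sketch below is purely illustrative; faster methods exist, and the domain size and retry limit are arbitrary choices.)

    import random, math

    def dart_throwing(n, r, width=1.0, height=1.0, max_tries=10000):
        """Naive Poisson-disc sampling: every pair of accepted points is at least r apart."""
        points = []
        tries = 0
        while len(points) < n and tries < max_tries:
            tries += 1
            p = (random.uniform(0, width), random.uniform(0, height))
            if all(math.dist(p, q) >= r for q in points):
                points.append(p)
        return points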
What is a weighted centroidal Voronoi diagram, and how does Secord use it to avoid user segmentation of the input image?
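(A toy version of weighted Lloyd relaxation is sketched below: each pixel is assigned to its nearest stipple point, and each point then moves to the darkness-weighted centroid of its discrete Voronoi cell. The brute-force nearest-point search and the density = 1 - gray convention are simplifications for illustration, not Secord's actual implementation.)

    import numpy as np

    def weighted_lloyd(gray, points, iterations=20):
        """Relax stipple points (x, y) toward a weighted centroidal Voronoi diagram,
        using image darkness (1 - gray) as the density function."""
        h, w = gray.shape
        density = 1.0 - gray                          # dark regions attract more stipples
        ys, xs = np.mgrid[0:h, 0:w]
        pix = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)
        rho = density.ravel()
        pts = np.asarray(points, dtype=float)
        for _ in range(iterations):
            # Discrete Voronoi regions: each pixel belongs to its nearest point.
            d2 = ((pix[:, None, :] - pts[None, :, :]) ** 2).sum(axis=2)
            owner = d2.argmin(axis=1)
            for i in range(len(pts)):
                mask = owner == i
                wsum = rho[mask].sum()
                if wsum > 0:                          # weighted centroid of the cell
                    pts[i] = (pix[mask] * rho[mask, None]).sum(axis=0) / wsum
        return pts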