Hybrid visibility compositing and masking for illustrative rendering.

Bruckner S, Rautek P, Viola I, Roberts M, Sousa MC, Gröller ME - Comput Graph (2010)

Bottom Line: These tools behave just like in 2D, but their influence extends beyond a single viewpoint. Since the presented approach makes no assumptions about the underlying rendering algorithms, layers can be generated based on polygonal geometry, volumetric data, point-based representations, or others. Our implementation exploits current graphics hardware and permits real-time interaction and rendering.


Affiliation: Institute of Computer Graphics and Algorithms, Vienna University of Technology, Austria.

ABSTRACT
In this paper, we introduce a novel framework for the compositing of interactively rendered 3D layers tailored to the needs of scientific illustration. Currently, traditional scientific illustrations are produced in a series of composition stages, combining different pictorial elements using 2D digital layering. Our approach extends the layer metaphor into 3D without giving up the advantages of 2D methods. The new compositing approach allows for effects such as selective transparency, occlusion overrides, and soft depth buffering. Furthermore, we show how common manipulation techniques such as masking can be integrated into this concept. These tools behave just like in 2D, but their influence extends beyond a single viewpoint. Since the presented approach makes no assumptions about the underlying rendering algorithms, layers can be generated based on polygonal geometry, volumetric data, point-based representations, or others. Our implementation exploits current graphics hardware and permits real-time interaction and rendering.
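The abstract mentions soft depth buffering, i.e. replacing the hard binary depth test with a smooth transition when two layers lie at similar depths. As an illustrative sketch only (the linear falloff and the epsilon tolerance are assumptions, not the paper's actual formulation), the idea for a single pixel might look like:

```python
# Sketch of soft depth buffering: instead of a hard depth test,
# fragments whose depths differ by less than a tolerance band are
# blended by a smooth weight. The epsilon parameter and the linear
# falloff are illustrative assumptions, not the paper's formulation.

def soft_depth_weight(d_front, d_back, epsilon=0.05):
    """Blend weight in [0, 1] for the 'front' fragment.

    1.0 -> front fully wins (classic hard depth test);
    0.5 -> depths effectively equal, colors are averaged.
    """
    t = (d_back - d_front) / (2.0 * epsilon)   # signed, scaled depth gap
    return max(0.0, min(1.0, 0.5 + t))         # clamp to [0, 1]

def soft_composite(c_front, c_back, d_front, d_back, epsilon=0.05):
    """Linearly blend two RGB colors by the soft depth weight."""
    w = soft_depth_weight(d_front, d_back, epsilon)
    return tuple(w * f + (1.0 - w) * b for f, b in zip(c_front, c_back))
```

With a large depth gap this degenerates to the usual depth test; only near-coincident surfaces are blended, which avoids hard popping artifacts.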


Fig. 2: Comparison of implicit, explicit, and hybrid visibility approaches to compositing. Top row: Left—manually generated illustration of a sports car. Center—implicit visibility of a similar 3D model. Right—four individual layers of the model. Middle row: Explicit visibility of the layers from three different viewpoints. Bottom row: Hybrid visibility of the layers from three different viewpoints. While implicit visibility alone does not capture the subtle effects used in the manual illustration, explicit visibility leads to distracting results when changing the viewpoint. Hybrid visibility avoids the drawbacks of both approaches. Manual illustration courtesy of ©Kevin Hulsey Illustration, Inc.

Mentions: Fig. 2 illustrates the advantages of hybrid visibility for the generation of illustrations. In the first row, a manually generated illustration of a sports car is depicted in the left column. The center column shows the implicit visibility of a similar 3D model. In the right column, four individual layers of the car (chassis, tires, interior, and details) are shown. The second row shows an example of explicit visibility using the following stacking order from bottom to top: chassis, tires, interior, details. Even though a result similar to the manual illustration can be generated by employing explicit visibility, it does not translate to other viewpoints. The third row depicts results generated using our hybrid visibility approach, which allows us to closely mimic the essential features of the manually generated image. The interior and tires form a visibility chain which uses implicit visibility. The result is combined with the chassis and the details using occlusion-based blending. These concepts are discussed in detail in the following sections.
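The paragraph above combines two mechanisms: within a visibility chain, layers resolve by depth (implicit visibility), while the resolved chains are then blended in a fixed stacking order (explicit visibility). A minimal per-pixel sketch, assuming simple nearest-fragment resolution and standard premultiplied "over" blending (the data layout and the blending operator are illustrative assumptions, not the paper's occlusion-based blending):

```python
# Per-pixel sketch of hybrid visibility: layers inside a "visibility
# chain" resolve by depth (implicit), while the chains themselves are
# combined in a fixed stacking order (explicit). Fragment dicts with
# 'depth' and premultiplied 'rgba' fields are illustrative assumptions.

def resolve_chain(fragments):
    """Implicit visibility: the nearest fragment (smallest depth) wins."""
    return min(fragments, key=lambda f: f["depth"]) if fragments else None

def over(top, bottom):
    """Standard 'over' operator on premultiplied RGBA tuples."""
    a = top[3]
    return tuple(t + (1.0 - a) * b for t, b in zip(top, bottom))

def composite_pixel(chains, background=(0.0, 0.0, 0.0, 1.0)):
    """Explicit visibility: blend the resolved chains bottom-to-top."""
    color = background
    for chain in chains:  # bottom of the stacking order first
        frag = resolve_chain(chain)
        if frag is not None:
            color = over(frag["rgba"], color)
    return color
```

In the sports-car example, interior and tires would share one chain (so their mutual occlusion stays depth-correct from any viewpoint), while chassis and details are blended over the result by stacking order.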

