Hybrid visibility compositing and masking for illustrative rendering.

Bruckner S, Rautek P, Viola I, Roberts M, Sousa MC, Gröller ME - Comput Graph (2010)

Bottom Line: These tools behave just like in 2D, but their influence extends beyond a single viewpoint. Since the presented approach makes no assumptions about the underlying rendering algorithms, layers can be generated based on polygonal geometry, volumetric data, point-based representations, or others. Our implementation exploits current graphics hardware and permits real-time interaction and rendering.


Affiliation: Institute of Computer Graphics and Algorithms, Vienna University of Technology, Austria.

ABSTRACT
In this paper, we introduce a novel framework for the compositing of interactively rendered 3D layers tailored to the needs of scientific illustration. Currently, traditional scientific illustrations are produced in a series of composition stages, combining different pictorial elements using 2D digital layering. Our approach extends the layer metaphor into 3D without giving up the advantages of 2D methods. The new compositing approach allows for effects such as selective transparency, occlusion overrides, and soft depth buffering. Furthermore, we show how common manipulation techniques such as masking can be integrated into this concept. These tools behave just like in 2D, but their influence extends beyond a single viewpoint. Since the presented approach makes no assumptions about the underlying rendering algorithms, layers can be generated based on polygonal geometry, volumetric data, point-based representations, or others. Our implementation exploits current graphics hardware and permits real-time interaction and rendering.




Example of distance-based weighting for brush strokes. Two brush strokes (stroke 1, stroke 2) generated from the viewpoint view 1 are shown. In view 1, z=zl for both strokes, i.e., both strokes receive the maximum weight. When a novel viewpoint (view 2) is chosen, stroke 1 has z≠zl due to occlusion, i.e., it receives a lower weight, while stroke 2 remains visible.

Mentions: Then, for each fragment of the stroke, the depth zl of the layer at the fragment's position is read. As this value is the first intersection of the viewing ray with the three-dimensional object represented by the layer, we can use it to estimate how much influence this fragment of the stroke should have for the current viewpoint. For instance, if the surface point we originally placed our stroke on is now occluded by another part of the surface, the difference between z and zl will be high. Conversely, if the same point on the surface we placed the stroke on is still visible in the novel viewpoint, the difference will be zero at that location. Fig. 5 illustrates this behavior. It depicts two stroke centers rendered from two different viewpoints. From view 1, both stroke centers lie on the surface, i.e., z=zl. For view 2, stroke 2 still lies on the visible surface. The position of stroke 1, however, is occluded by another part of the object, i.e., the difference between z and zl is large.
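The weighting described above can be sketched as a small per-fragment function: the fragment keeps its recorded depth z, the layer depth zl is read at its screen position, and the weight falls off with |z − zl|. The Gaussian falloff and the sigma parameter below are illustrative assumptions; the excerpt only specifies that a zero difference yields the maximum weight and a large difference yields a low one.

```python
import math

def stroke_fragment_weight(z, z_l, sigma=0.01):
    """Influence of a brush-stroke fragment for the current viewpoint.

    z     -- depth at which the fragment was originally placed
    z_l   -- layer depth read at the fragment's position (first
             intersection of the viewing ray with the layer's object)
    sigma -- falloff width; an illustrative assumption, not from the paper

    A fragment still on the visible surface (z == z_l) gets the maximum
    weight 1.0; an occluded fragment (large |z - z_l|) gets a weight
    near 0.
    """
    d = z - z_l
    return math.exp(-(d * d) / (2.0 * sigma * sigma))

# View 1: both strokes lie on the visible surface -> full weight.
w_visible = stroke_fragment_weight(0.5, 0.5)
# View 2: stroke 1 is occluded, so the layer depth differs -> low weight.
w_occluded = stroke_fragment_weight(0.5, 0.3)
```

In the paper's setting this evaluation would run on the GPU per fragment; the soft falloff (rather than a hard visibility test) is what makes the stroke fade gracefully as the viewpoint moves away from the one it was painted in.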
