Sunday, January 16, 2011

Point Rendering

The introduction section of the paper here gives a good background on how point rendering has evolved.

What is challenging with point rendering?
Filling the holes. Broadly, there are two ways to fill them up. One is to generate a polygonal mesh by reconstructing the surface of the point-based model, and then render that mesh. In this method, the polygons (usually triangles) cover the gaps, and the color/depth values are interpolated within every triangle, so the gaps are effectively filled. The second way is to skip the triangle mesh and render points directly as points. Here, finding and filling up the holes is relatively more work.
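To make the first approach concrete, here is a minimal sketch (my own illustration, not anything from the paper) of how a triangle "fills" its interior: vertex attributes such as color are blended across the triangle using barycentric coordinates, so every pixel inside the triangle gets a value even though only the three vertices were sampled.

```python
def barycentric(p, a, b, c):
    """Barycentric coordinates of 2D point p in triangle (a, b, c)."""
    (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
    denom = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    w0 = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / denom
    w1 = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / denom
    w2 = 1.0 - w0 - w1
    return w0, w1, w2

def interpolate_color(p, tri, colors):
    """Color at p, blended from the three vertex colors of tri."""
    w0, w1, w2 = barycentric(p, *tri)
    return tuple(w0 * c0 + w1 * c1 + w2 * c2
                 for c0, c1, c2 in zip(*colors))
```

At a vertex the weights collapse to (1, 0, 0), so the vertex keeps its own color; everywhere inside, the three colors mix smoothly — that smooth mixing is what visually closes the gap between the point samples.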

Triangle Mesh Generation vs. Direct Point Rendering
A few years ago, 3D scanner devices were not as fine as they are today, so relatively few point samples were available. That made surface reconstruction (building a triangle mesh over the point samples and eventually rendering the triangles) much more suitable than direct point rendering, as there were larger gaps to fill.
Today, however, the number of captured samples is much larger. The gaps are smaller, and the trouble with the surface reconstruction algorithms currently available is that their cost grows super-linearly with the number of points: doubling the point count more than doubles the reconstruction time, so with the millions of samples modern scanners produce, reconstruction itself becomes the bottleneck. Direct point rendering, by contrast, scales roughly linearly with the number of points.
So, though surface reconstruction methods were the better choice some time back, direct point rendering now has the upper hand.
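And a minimal sketch of the second approach, direct point rendering in 2D (again my own illustration — splat shape, radius, and the grid are assumptions, not from any particular paper): each sample is rasterized as a small square "splat" rather than a single pixel, so small gaps between densely captured neighboring samples get covered without ever building a mesh.

```python
def splat(points, width, height, radius=1):
    """Rasterize integer point samples into a width x height coverage
    grid, drawing each sample as a (2*radius+1)^2 square splat."""
    grid = [[0] * width for _ in range(height)]
    for (x, y) in points:
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                px, py = x + dx, y + dy
                if 0 <= px < width and 0 <= py < height:
                    grid[py][px] = 1  # mark pixel as covered
    return grid
```

With radius=0 (one pixel per sample) a sparse row of samples leaves visible holes between them; bumping the radius to 1 closes those holes — which is exactly the trade-off direct point rendering has to manage, since too large a splat blurs the surface.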

Future Readings:

The Symposium on Point-Based Graphics is a conference dedicated entirely to point-based graphics! Interesting!!
