Monday, January 31, 2011

Progress

Well, I found some ( read : very very few ) implementations of bucket sort. Two of them were OpenMP based.
1) http://link.ece.uci.edu/~weiran/documents/eecs221_w08.pdf

2) http://www.moon.sannet.ne.jp/okahisa/sort/node32.html

3) http://snippets.dzone.com/posts/show/4365


The Third One:

It uses a library called MCSTL, which can be downloaded from here. Its corresponding research paper can be downloaded from here. Interestingly, my version of Ubuntu, viz. 10.10 or Maverick, does not have support for MCSTL. Even after I downloaded MCSTL and copy-pasted the .h files into my "include" folder ( /usr/include ), it did not work somehow. Apparently, there is no way to properly install MCSTL, so I had to do this ugly copy-paste. Bottom line, it did not work. Later I found that MCSTL requires g++-4.2.2, which Maverick doesn't support. Conclusion: forget MCSTL.

Fortunately, I found an interesting piece of information on the STXXL library page. It says STXXL may internally use either MCSTL or the libstdc++ parallel mode, with the latter being the successor of MCSTL.
So that's what I need: the libstdc++ parallel mode. This page is a decent piece of documentation about it. Being part of libstdc++, it is well documented in many places.
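As a quick sanity check of the setup, here is a minimal sketch (my own example, not taken from any of the pages above) of invoking the libstdc++ parallel mode, assuming a GCC that ships it and OpenMP enabled. Compile with g++ -fopenmp; adding -D_GLIBCXX_PARALLEL makes a plain std::sort dispatch to the parallel version automatically.

#include <parallel/algorithm>   // __gnu_parallel::sort
#include <vector>
#include <cstdlib>

int main()
{
    std::vector<int> v(1 << 20);
    for (std::size_t i = 0; i < v.size(); ++i)
        v[i] = std::rand();

    // Explicit call into the parallel mode; with -D_GLIBCXX_PARALLEL a plain
    // std::sort(v.begin(), v.end()) would be dispatched here automatically.
    __gnu_parallel::sort(v.begin(), v.end());
    return 0;
}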

EDIT: Got some documentation for MCSTL.

Wednesday, January 26, 2011

Frameworks: Monte Carlo RT & Photon Mapping

Monte Carlo RT is an extension of classical ray tracing ( Whitted RT ), and in a few minutes you will know what I mean by that.
Classical ray tracing traces one ray per pixel; at every intersection of a ray with the scene, new rays are spawned only in a few exact, most significant directions: towards the lights, the mirror-reflection direction and the perfect-refraction direction. This does not handle glossy or diffuse inter-reflections.
Monte-Carlo ray tracing has two main types.
Path tracing takes the approach of having a single ray reflect or refract through the scene, changing direction at each surface intersection. This may sound exactly like classical ray tracing, but there is an important difference: Monte Carlo ray tracing picks a random direction for each reflection or refraction ray. The direction is obviously not completely random, but is influenced by the BRDF. The BRDF, in some sense, acts like a probability density function. And the BRDFs can be arbitrary. For example, for a glossy surface with a narrow Phong reflection lobe, reflection rays would be much more likely to be spawned near the center of the lobe than anywhere else. Weighting the random directions in this way is called importance sampling.
Distribution ray tracing spawns many random rays from every surface intersection.

Why distribution ray tracing would give us more photo-realism?
Let's come back to the analogy of the BRDF as a probability distribution function. This analogy is natural, since the BRDF essentially defines how much (as a ratio) of the incoming radiance gets reflected in each specific outgoing direction. This ratio naturally defines the probability of any given ray getting reflected in that specific direction. So, the point is, BRDF = probability distribution function. Say, given 100 incoming rays, 40 will go in a preferred mirror-reflection direction and the remaining 60 will be reflected over the rest of the infinitely many hemispherical directions. Note that, if we assume the underlying surface roughly follows the estimated BRDF, then these rays going in all these directions in all these proportions is what physically happens.
Now, if we just ignored 99 of these rays and followed a single ray in a single specific direction ( random in path tracing, fixed in classical RT ), then we are clearly losing out on a lot of rays that physically contribute to the resulting illumination at that point.
Distribution ray tracing gives these 99 rays the respect and consideration they deserve. It turns out that these extra rays help us get closer to reality, closer to the real picture.
I like that! So why not just use it?
Because, in reality, 99 becomes infinite, and that would make our program finish its computations after lightyears. You might be, but I, personally, am not OK with this. So, how do we do distribution ray tracing today?
As you might have guessed, we don't sample all the infinitely many rays. Instead, we choose some number of random directions. And since we choose, we would like them to be the important ones.
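To make "important directions" concrete, here is a toy sketch of importance-sampling a Phong reflection lobe (my own illustration, not from the book): the sample density is proportional to cos^n of the angle from the mirror direction, so most rays get spawned near the center of the lobe.

#include <cmath>
#include <cstdlib>

struct Vec3 { float x, y, z; };

// Returns a direction in a local frame whose z-axis is the mirror-reflection
// direction; n is the Phong exponent, alpha the angle away from that axis.
Vec3 sample_phong_lobe(float n)
{
    float u1 = std::rand() / (float)RAND_MAX;            // two uniform random numbers in [0,1]
    float u2 = std::rand() / (float)RAND_MAX;
    float cos_alpha = std::pow(u1, 1.0f / (n + 1.0f));   // larger n -> samples hug the lobe center
    float sin_alpha = std::sqrt(1.0f - cos_alpha * cos_alpha);
    float phi = 2.0f * 3.14159265f * u2;
    Vec3 d = { sin_alpha * std::cos(phi), sin_alpha * std::sin(phi), cos_alpha };
    return d;   // transform into world space around the actual mirror direction before tracing
}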

As a final note, I should state this: Monte Carlo ray tracing fully solves Kajiya's rendering equation, given enough time and rays.
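For reference, this is the standard hemisphere form of the rendering equation being solved (in LaTeX notation):

L_o(x, \omega_o) = L_e(x, \omega_o) + \int_{\Omega} f_r(x, \omega_i, \omega_o) \, L_i(x, \omega_i) \, (n \cdot \omega_i) \, d\omega_i

i.e. outgoing radiance = emitted radiance + the BRDF-weighted integral of incoming radiance over the hemisphere.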



Reference: Real-time Rendering, section 9.8.2 ( ray tracing )


My first parallel program : Bucketsort

Well, let's get to the point. How will we parallelize bucket sort?
What's the scope? Figuring out *how much* of it can be parallelized.
Radix sort, as we know, basically does bucket sort on every digit, from right to left, i.e. least significant digit first. Broadly, can we parallelize the processing of the individual digits? That is, given 4-digit integers, can we have 4 threads that independently sort on the four digits? No, since the whole idea of radix sort depends on sorting the digits sequentially. That is, we must sort all numbers on the least significant digit first, then sort the resulting array on the next-least-significant digit, get the resulting array, then sort on the next digit, and so on.
So, this part has to be serial.

However, in the inner bucket sort, the numbers can be inserted into their respective buckets completely independently. Great! But there is just one problem. Say there are a million numbers and we use the classic approach of ten buckets. Then there are going to be numerous collisions on any given bucket. So, can we somehow get around this?

Idea I
If we have only 10 buckets and there are thousands of threads, then there are bound to be plenty of collisions. One way to avoid these collisions is to somehow increase the number of buckets. How can we increase the number of buckets?
We have 10 buckets because we examine one digit of each number. How many values can a single digit take? Ten. Hence we made 10 buckets. Now, say we examine two digits at a time. Two digits can take values ranging from 00 to 99, i.e. a hundred values. That means a hundred buckets. So we'll make 100 buckets and, instead of examining one digit at a time, we examine two. The rest remains the same, e.g. considering the rightmost ( least significant ) pair of digits in the first level, moving towards the leftmost end level by level, etc.
So, do you get the essence of what we did? We increased the number of buckets, by examining more digits at a time, to reduce the number of collisions on any bucket. If we take this approach further, we are basically going towards pure counting sort. Remember counting sort? In counting sort, if no number exceeds, say, 6700, then we create 6700 buckets and every number i goes into its own bucket, i.e. the i'th bucket. It will have zero collisions when there are no/few repetitions. Zero collisions! What you pay to get this is lots of memory.
So, from a bird's eye view, we see two extremes.
One is pure bucket sort, where we examine 1 digit at a time and have 10 buckets, implying a minimal memory footprint but many collisions.
The other is pure counting sort, where we examine all d digits at a time and have N_MAX buckets, meaning a huge memory footprint but zero collisions.
Clearly, this makes us think that the best way is the middle way. Can we come up with an adaptive heuristic that tells me "dude, if you examine p digits at a time, you'll get the best possible balance of memory footprint & collisions"?
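As a concrete middle-way example, here is a minimal OpenMP sketch (my own, not the algorithm from any of the papers above, and working on binary digit groups rather than decimal digits) of one radix-sort pass over a group of radix_bits bits. Each thread histograms its own chunk into its own row of counters, so threads never collide on a shared bucket; a cheap serial prefix sum then gives every thread a private output range per bucket.

#include <omp.h>
#include <vector>
#include <cstdint>
#include <cstddef>

// One stable pass: distribute 'in' into 'out' (same size) by the radix_bits-wide
// digit starting at bit 'shift'. Assumes both static loops hand each thread the
// same contiguous chunk (true for schedule(static) with a fixed thread count).
void radix_pass(const std::vector<uint32_t>& in, std::vector<uint32_t>& out,
                int shift, int radix_bits)
{
    const int buckets = 1 << radix_bits;            // e.g. 8 bits -> 256 buckets
    const uint32_t mask = buckets - 1;
    const int T = omp_get_max_threads();
    std::vector<std::size_t> counts((std::size_t)T * buckets, 0);

    // Phase 1: per-thread histograms (no collisions on shared counters).
    #pragma omp parallel
    {
        const int t = omp_get_thread_num();
        #pragma omp for schedule(static)
        for (long i = 0; i < (long)in.size(); ++i)
            ++counts[(std::size_t)t * buckets + ((in[i] >> shift) & mask)];
    }

    // Phase 2: serial exclusive prefix sum ordered by (bucket, thread);
    // each thread gets a private, non-overlapping output range per bucket.
    std::size_t sum = 0;
    for (int b = 0; b < buckets; ++b)
        for (int t = 0; t < T; ++t) {
            std::size_t c = counts[(std::size_t)t * buckets + b];
            counts[(std::size_t)t * buckets + b] = sum;
            sum += c;
        }

    // Phase 3: scatter; every thread writes only into its reserved slots.
    #pragma omp parallel
    {
        const int t = omp_get_thread_num();
        #pragma omp for schedule(static)
        for (long i = 0; i < (long)in.size(); ++i) {
            const uint32_t b = (in[i] >> shift) & mask;
            out[counts[(std::size_t)t * buckets + b]++] = in[i];
        }
    }
}

A driver then ping-pongs between two buffers, calling radix_pass once per digit group, least significant group first; that outer loop stays serial, exactly as argued above. Choosing radix_bits = 4 is close to the one-decimal-digit extreme, while radix_bits = 16 already costs 65536 counters per thread, which is exactly the memory-vs-collisions trade-off discussed here.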


Some one has already spent some time on this!
So obviously I googled. I found this pdf, which is basically a project report by Nie Weiran from 2008 dedicated to parallelizing radix sort. What else would I want! :)
Once I go through the paper, which is no more than 3-4 pages, I will post the essence and, of course, my thoughts.



A word on performance measurement
I spent almost 2 hours searching and reading about performance measurement tools for a Linux multi-core environment. There are seemingly good tools out there, the important ones of which have been surveyed here.
For now, I didn't need a very hi-fi tool; I just needed to see the real-time utilization of my CPU cores ( the machine I am using has an Intel i5 quad-core ). For this primary purpose, I found "gnome-system-monitor" really good. My system already had it installed; maybe it comes by default.


Progress - I


Well, I found some (very very few :)) implementations of bucket sort. Two of them were OpenMP based.
1) http://link.ece.uci.edu/~weiran/documents/eecs221_w08.pdf
2) http://www.moon.sannet.ne.jp/okahisa/sort/node32.html
3) http://snippets.dzone.com/posts/show/4365


A note about the third one
It uses a library called MCSTL, which can be downloaded from here. Its corresponding research paper can be downloaded from here. Interestingly, my version of Ubuntu, viz. 10.10 or Maverick, does not have support for MCSTL. Even after I downloaded MCSTL and copy-pasted the .h files into my "include" folder ( /usr/include ), it did not work somehow. Apparently, there is no way to properly install MCSTL, so I had to do this ugly copy-paste. Bottom line, it did not work. Later I found that MCSTL requires g++-4.2.2, which Maverick doesn't support. Conclusion: forget MCSTL.
Fortunately, I found an interesting piece of information on the STXXL library page. It says STXXL may internally use either MCSTL or the libstdc++ parallel mode, with the latter being the successor of MCSTL. So that's what I need: the libstdc++ parallel mode. This page is a decent piece of documentation about it. Being part of libstdc++, it is well documented in many places.
I got some documentation for MCSTL here.

Progress II


Finally, after spending a few more hours on deciding which parallel radix sort algorithm to implement, I have come to the conclusion that I am going to implement Partitioned Parallel Radix Sort. I have gone through the algorithm and started on a basic implementation. One issue that might have to be faced later is that the paper targets distributed-memory architectures, whereas OpenMP targets shared-memory architectures.


Miscellaneous interesting Pages


1. Parallel In-Place Radix Sort Simplified
2. 32 OpenMP Traps For C++ Developers
  3. Histogram Sort (to find the sizes of the buckets beforehand, so that we won't waste space by overestimating the bucket size)
4. A better idea of histogram sorting



Sunday, January 16, 2011

Point Rendering

The paper here gives a good background of how point rendering has evolved, in its introduction section.

What is challenging with point-rendering?
Filling the holes. Broadly, there are two ways you fill up the holes. One is to generate a polygonal mesh by reconstructing the surface of the point-based model and then render the mesh. In this method, the polygons ( triangles, usually ) fill up the gaps and the color/depth values are interpolated within every triangle, so effectively the gaps are filled. The second way to fill up the holes is to not generate a triangle mesh at all and render the points directly as points. Here, finding and filling up the holes is relatively more work.


Triangle Mesh Generation Vs Direct Point Rendering
A few years ago, 3D-scanner devices were not as fine as they are today. Hence, relatively few point samples were available. That made surface reconstruction ( building a triangle mesh over the points' surface and eventually rendering the triangles ) much more suitable than direct point rendering, as there were bigger and larger gaps to fill.
Today, however, the number of samples captured is much larger. The gaps are smaller, and the problem with the surface reconstruction algorithms currently available is that their cost grows super-linearly with the number of points. ( I didn't quite understand how this makes them bad? :O Presumably, with today's huge point counts, anything super-linear quickly becomes the bottleneck. )
So, though surface reconstruction methods were better some time back, direct point rendering now has the upper hand.



Future Readings:

Symposium on Point-Based Graphics is a conference that is dedicated to point-based graphics! Interesting!!

Saturday, January 15, 2011

Point Rendering

The paper here gives a good background of how point rendering has evolved, in its introduction section.

What is challenging with point-rendering?
Filling the holes. Previously, 3D-scanner devices were not as fine as they are today. Hence, relatively few point samples were available. That made surface reconstruction ( building a triangle mesh over the points' surface and eventually rendering the triangles ) much more suitable than direct point rendering. Today, however, the number of samples captured is much larger.

Symposium on Point-Based Graphics is a conference that is dedicated to point-based graphics! Interesting!!


Sunday, January 9, 2011

Microsoft Kinect

The following resources are really important:
Overall Statistical Information
  1. http://en.wikipedia.org/wiki/Kinect has very good statistical data about the device and a solid bunch of references.
Making novel use of Kinect
Point cloud rendering
  1. http://www.blendernation.com/2010/11/26/microsoft-kinect-in-blender-realtime-point-cloud-demonstration/ talks about using the Kinect device to capture a point cloud and render it using Blender at 2-15 FPS, depending upon the point-cloud quality. The video was not very impressive, basically because it is very badly composed; you can't make out anything from it!
  2. http://www.digitalsociety.org/2010/11/kinect-modified-to-capture-true-3d-video/ showcases an impressive video! Oliver Kreylos, a researcher at the University of California, Davis, has produced one of the most amazing demonstrations of true 3D video using an off-the-shelf Microsoft Kinect.

Potential applications of Kinect
What makes Kinect an amazing device is the fact that it is able to capture purely 3D data ( 3D points ) at 30 FPS. Let me say it again, louder: 30 FPS!
The original purpose of Kinect is clearly gaming, as the device, when released, was meant to be used with the Xbox 360. Using Kinect, a user is able to interact with / manipulate the 3D virtual world just by making suitable gestures, mostly as if he/she were physically there.
The first use-case of Kinect that comes to my mind is telepresence.

Saturday, January 8, 2011

Limitations of Ray Tracing

An article here, written by Dean Calver at the end of 2007, basically talks about the limitations of ray tracing. The good thing about this article is that it has been written from a neutral, unbiased perspective, which I find very rare.
As this article was written back in 2007-2008, it makes sense to evaluate it again, since things may have changed since then: some arguments may no longer be applicable and some new arguments may step in.
I would love to do that soon and post my views here.

In case the above link doesn't work, use this manual link: http://www.beyond3d.com/content/articles/94/1
I consider it very important that you become an expert: gain tremendous mastery over one area in life. One single area, but killer knowledge and expertise. It's really surprising how, sometimes, people who come from a completely different background do miraculous work and/or become really important people in some amazingly different area. The guy who used to manage the whole DirectX graphics group at NVIDIA (where I worked for a year) was a PhD in nuclear physics. Yes. Or, umm, a bright student here at IIT Delhi who is doing his PhD in algorithms had done his masters in VLSI! And these people have proved themselves to be much better than people who may have been in the same field for far longer.
We were discussing today what the reasons behind this might be, and some very important things came up. One was how people who have done their masters/PhDs in mathematics or physics become potentially good at any analytical field. They have developed very strong minds, with really sharp abilities of critical thinking. They know how to attack problems and give novel, kick-ass, out-of-the-box solutions. And our second conclusion was: "yes, *this* is it"! This is exactly what a bunch of good profs, the work you do, your passion for your work, and the company you keep end up doing to you. They do it to you and make you the person I mentioned above.
Isn't *this* something that you should be doing your PhD, and for that matter, even your masters for?

I started from a very different note than what I ended on...I hope it made some sense to you :)

One step ahead...:)

  • It would be a really good idea if we could combine ray tracing and rasterization in an innovative way. Prof. Subodh gave the example of figuring out the ambient occlusion at any point by putting a camera at that point looking out, rasterizing the scene from there and somehow ( say, by occlusion culling ) finding out how much of the rendered depth is farther than some "radius" depth. By radius I mean the radius of the sphere of influence; only occluders inside it matter. ( A toy sketch of this counting idea appears at the end of this post. ) Similar to AO, can we make use of rasterization within ray tracing for much more than just primary ray casting?
  • There were two projects he talked about, being worked on in coordination with a German university; because of this, he mentioned, there are good chances of doing a summer internship in Germany to work on them. They are:
  1. Capturing and rendering fine surface details: he talked about capturing surface details from real-life photographs and using them, aiming at an effect somewhat similar to bump mapping.
  2. Microsoft Kinect device: making novel use of it to do various things, like, one, creating point clouds or, two, extracting 3D models from Kinect captures. The device, as he said, basically captures RGB+depth info of what it "sees" and, here it comes, in real time. That makes it a really useful device.
I heard that the association is with a Max Planck university. In his Research Mission on the university web pages, Prof. Dr. Marcus Magnor of the same university mentions: "In recent years, however, progress in rendering algorithms and graphics hardware has led to the insight that, despite faster and more complex rendering calculations, the modeling techniques traditionally employed in computer graphics often fundamentally limit attainable realism. I therefore concentrate on investigating suitable algorithms to import natural world-recorded realism into computer graphics. Such image/video-based modeling and rendering techniques employ conventionally taken photographs or video recordings of the real world to achieve photo-realistic rendering results."
These lines underline the importance of modeling and surface detailing. The same motivation is, in my opinion, behind the first project that Subodh mentioned.
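Following up on the first bullet above, here is a toy sketch of the counting step I imagine (entirely my own illustration, not Prof. Subodh's actual method). Assume the scene has been rasterized into a depth buffer from a camera placed at the shading point, looking outward; the ambient-occlusion (accessibility) term is then just the fraction of depth samples that land beyond the influence radius.

#include <vector>
#include <cstddef>

// depth: per-pixel depths rendered from the shading point, looking outward.
// radius: radius of the sphere of influence; only closer occluders matter.
// Returns ~1.0 for a fully open point, ~0.0 for a fully occluded one.
float ambient_accessibility(const std::vector<float>& depth, float radius)
{
    if (depth.empty()) return 1.0f;
    std::size_t open = 0;
    for (std::size_t i = 0; i < depth.size(); ++i)
        if (depth[i] >= radius)      // nearest surface lies outside the influence sphere
            ++open;
    return (float)open / (float)depth.size();
}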

Tuesday, January 4, 2011

Ray tracing in games and movies

Today I was going through the HPG 2009 Report. It mentions Larry Gritz, in his keynote, saying that ray tracing makes more sense for film rendering than the current REYES technique, and he gives several reasons for it. Having read that, I was wondering how many, and which, games/movies use ray tracing for rendering as of today. Hence this post.

It seems to me that ray tracing has to defeat rasterization in the games industry and, for film rendering, has to defeat REYES. Seems like ray tracing is having a really hard time these days!
Ray tracing in games
1)
In 2007-08, students of the IGAD course at the NHTV University of Applied Sciences (Breda, The Netherlands) worked on two games using a real-time ray tracer for rendering. Here is the page that describes their work.
Though I did not find any good feedback about it.

2) Quake 4, Quake Wars and Wolfenstein have been ray traced!
Here are the details.
Quake 3 ray traced : 2004
Quake 4 ray traced : 2006
Quake Wars ray traced : 2008
Wolfenstein ray traced : 2010

All these projects, seemingly, are CPU based.


This article at Intel's research site reports on Wolfenstein getting ray traced:
  • Effects achieved : reflections, refractions,
  • Scene complexity: around 1 million triangles ( claimed )
  • Performance :
  • CPU/GPU/Hybrid:


Indirect Illumination : the thing that makes global illumination a challenge!

What are the methods/techniques that are used to render ( a.k.a. approximate :) ) indirect illumination?
  • Brute Force
  • Irradiance map
  • Light Cache
  • Metropolis light transport ( is this the same as Light Cache? )
  • Photon map
  • Path tracing, bidirectional path tracing
  • VPL ( Virtual Point Lights ) (based on the instant radiosity formulation)
  • VSL ( Virtual Spherical Lights )
What are the sub-challenges?
  • Losing out on small, sharp shadows
  • Losing out on glossy inter-reflections
  • noise!
References:
Approaches to indirect illumination used in GI
http://www.spot3d.com/vray/help/150SP1/render_params_gi.htm
http://www.spot3d.com/vray/help/150SP1/examples_GI.htm

Is ray tracing a saturated field?

Look back, briefly!
Ray tracing has been studied for decades, as far as I know since 1968 ( ray casting: Appel, 1968 ). The first paper on recursive ray tracing came in 1980 ( Whitted ). Until around 2000, ray tracing had always been considered significantly slower than conventional rasterization techniques. From around 1997, people started seriously believing in making ray tracing real-time, and since then a lot of effort has been put into making it real-time, or at least as fast as rasterization if not faster. There used to be an important conference dedicated to research work in ray tracing, known as the Symposium on Interactive Ray Tracing. After 2008 it was combined with another graphics-hardware-related conference into a single bigger conference known as High-Performance Graphics. The work published in HPG and, of course, SIGGRAPH and SIGGRAPH Asia will give you a more detailed idea of how and where ray tracing is going.


What makes me think that Ray Tracing still has opportunities?

Ok, think of it this way. Why is it difficult to replace rasterization with ray tracing today? Let's list out the reasons. First, of course, is the dominance of rasterization-based hardware. Secondly, many problems have been solved or partially solved for rasterization; at the very least, *a huge effort* has been put into rasterization, and that has resulted in really impressive techniques and effects. All in all, these are the primary reasons.
Now let's see why ray tracing could be the future of rendering, taking one reason at a time. Hardware, it seems, is slowly but clearly moving towards ray tracing. Many NVIDIA researchers are working on ray tracing, and the whole CUDA/GPGPU direction, plus OptiX, tells us that NVIDIA realizes the importance of new graphics hardware being able to support high-performance ray tracing. So things *are* changing. I honestly don't know what AMD is doing about it; that is important, too.
The second aspect is important as well. A lot of techniques have already been developed and a lot of research has already been done assuming rasterization underneath. Back then, the bias towards rasterization was natural and obvious, because ray tracing was really slow. Really, really slow. Maybe that is why the graphics hardware industry also preferred rasterization. But now we see that ray tracing can be fast, actually real-time. There may be one thing to verify here: as Prof. Subodh said, it may be that the "real-time" performance seen today is not for very complex scenes, or only for some *specific* highly complex scenes. But the point we should focus our attention on is that relatively less effort has been put into ray tracing than into rasterization, and most of the effort put into ray tracing was directed at eventually getting it real-time. Now think about the rest of the effects. They are so many. We see that SIGGRAPH 2010 has motion blur and defocus within ray tracing. Like this, there may be many more effects with which ray tracing has not yet been combined. THIS is the opportunity. Some opportunities may be tied to specific effects; e.g. motion blur requires a dynamic ADT ( acceleration data structure ), which I thought was a really useful observation.


Here are examples of some attempts that were made in this direction:


What does ray-tracing lack? - Opportunities on the horizon
  • Performance: push FPS more and more ( bandwidth, packet tracing, SIMD )
  • Hybridizing RT with rasterization, to get the best of both worlds
  • Dynamic data structures and related effects ( motion blur, etc. )
  • Indirect illumination effects
  • How would we achieve good caustics using ray tracing?
  • Soft shadows and glossy reflections
  • Depth of field? Defocus? Anti-aliasing?
  • Micro-polygon RT, stochastic/distributed RT, Monte Carlo RT