Thursday, November 20, 2014

Cloud ray tracing in 2050

My recent reread of Turner Whitted's paper from 35 years ago made me think about what kind of ray tracing we'll be able to do 35 years from now.  (I will likely just barely live to see it.)

First, how many rays can I send now?  Of the fast ray tracers out there I am most familiar with OptiX from NVIDIA running on their hardware.  Ideally you go for their VCA Titan-based setup.  My read of their speed numbers is that for "hard" ray sets (incoherent path sets) on real models, a single VCA (8 Titans) can do over one billion rays per second (!).

Now, how many pixels do I have?  The new computer I covet has about 15 million pixels.  So the VCA machine and OptiX ought to give me around 100 rays per pixel per second even on that crazy hi-res screen.  So at 30fps I should be able to get ray tracing with shadows and specular inter-reflections (so Whitted-style ray tracing) at one sample per pixel.  And at that pixel density I bet it looks pretty good.  (OptiX guys, I want to see that!)
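As a quick sanity check, here is that arithmetic as a small Python sketch (the one-billion-rays and 15-megapixel figures are the rough estimates above, not measured numbers):

```python
# Back-of-envelope check of the per-pixel ray budget (all inputs are rough estimates).
rays_per_second = 1.0e9    # ~1 billion incoherent rays/s on one VCA (8 Titans)
pixels          = 15.0e6   # ~15 megapixel display
fps             = 30

rays_per_pixel_per_second = rays_per_second / pixels
rays_per_pixel_per_frame  = rays_per_pixel_per_second / fps

print(f"{rays_per_pixel_per_second:.0f} rays/pixel/s")    # ~67, i.e. "around 100"
print(f"{rays_per_pixel_per_frame:.1f} rays/pixel/frame") # ~2.2: roughly an eye ray plus
                                                          # a shadow or specular ray
```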

How would it be for path-traced previews with diffuse inter-reflection?  100 rays per pixel per second probably translates to around 10 samples (viewing rays) per pixel per second.  That is probably good enough for design previews, so I expect this to have an impact in design now, but it's marginal, which is why you might buy a server rather than a laptop to design cars and the like.

In 35 years, what power should we have available?  Extending Moore's law naively is dangerous given all the quantum limits we hear about, but Monte Carlo ray tracing is so parallel that, for the design-preview scenario where progressive rendering is used, there is no reason you couldn't use as many computers as you could afford, and network traffic wouldn't kill you.
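To see why network traffic stays manageable, here is a minimal sketch (a hypothetical helper, not any particular renderer's API) of merging progressive results where each node only ships back its accumulated framebuffer and sample count, never individual rays:

```python
import numpy as np

def merge_progressive(partials):
    """Combine per-node progressive results.

    Each node returns (accumulated_radiance, sample_count).  Only these small
    framebuffers cross the network, no matter how many samples each node takes,
    so adding nodes adds rays without adding much traffic.
    """
    total = np.zeros_like(partials[0][0])
    samples = 0
    for image_sum, count in partials:
        total += image_sum
        samples += count
    return total / samples  # current preview estimate

# Hypothetical usage: three nodes, each with a stand-in 2x2 accumulated image.
node_results = [(np.random.rand(2, 2, 3) * n, n) for n in (8, 16, 32)]
preview = merge_progressive(node_results)
```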

The overall Moore's law of performance (yes, that is not what the real Moore's law says, so we're being informal) is that historically the performance of a single chip has doubled every 18 months or so.  The number of pixels has only gone up about a factor of 20 in the last 35 years, and human resolution limits are near, so let's say it goes up another factor of 10.  If the performance Moore's law continues, whether through changes in computers or, more likely, lots of processors on the cloud for things like Monte Carlo ray tracing, then we'll see about 24 doublings, which is a factor of about 16 million.  To check that factor against the last 35 years: Whitted did about a quarter million pixels per hour, so call that a million rays an hour, which is about 300 rays per second.  The 16 million figure would predict about 4.8 giga-rays per second today, which for Whitted's scenes I imagine people can easily get now.  So what does that give us in 2050?  Assuming 150 million pixels, and roughly 10 rays per path as above, we should have ray tracing (on the cloud?  I think yes) of 10-20 million paths per second per pixel.
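Here is that extrapolation spelled out (a rough Python sketch; the 18-month doubling time, the 10x pixel growth, and the ~10 rays per path are the assumptions from the text, nothing more):

```python
# The 2050 extrapolation, spelled out with the same rough assumptions as above.
doublings = 35 / 1.5                     # ~23, call it 24 doublings in 35 years
speedup   = 2.0 ** 24                    # ~16.8 million

# Sanity check against the last 35 years:
whitted_rays_s = 1.0e6 / 3600.0          # ~1M rays/hour in 1979 -> ~300 rays/s
print(whitted_rays_s * speedup / 1e9)    # ~4.7 giga-rays/s, about what we see today

# Forward 35 years from today's ~1 giga-ray/s VCA:
rays_s_2050     = 1.0e9 * speedup        # ~1.7e16 rays/s
pixels_2050     = 150e6                  # ~10x today's 15 Mpix
rays_per_path   = 10                     # the preview-estimate factor from above
paths_per_pixel = rays_s_2050 / pixels_2050 / rays_per_path
print(paths_per_pixel / 1e6)             # ~11 million paths/pixel/s, in the 10-20M range
```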

What does this all mean?  I think it means unstratified Monte Carlo (path tracing, Metropolis, bidirectional, whatever) will be increasingly attractive.  That is in fact true for any Monte Carlo algorithm, in graphics or non-graphics markets.  Server vendors: make good tools for distributed random number generation and I bet you will grow the server market!  Researchers: increase your ray budget and see what algorithms that suggests.  Government funding agencies: move some money from hardware purchase programs to server-fee grants (maybe that is happening; I am not in the academic grant ecosystem at present).
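As one illustration of what such a tool might look like, here is a minimal sketch of handing each cloud node its own statistically independent random stream using numpy's SeedSequence; the per-node worker function and the node count are hypothetical:

```python
import numpy as np

# Spawn one child seed per node from a single root seed (numpy >= 1.17).
# Each child gives a statistically independent stream, so nodes never
# accidentally duplicate each other's samples.
root = np.random.SeedSequence(2050)
node_seeds = root.spawn(1024)            # hypothetical: 1024 cloud nodes

def render_on_node(node_id, num_samples):
    """Hypothetical per-node worker: draws from its own independent stream."""
    rng = np.random.default_rng(node_seeds[node_id])
    # Stand-in for path tracing: just draw the 2D random samples a path would use.
    return rng.random((num_samples, 2))

samples = render_on_node(7, 4)
```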
