Real time ray tracing is getting closer

TheGreySpectre

While still at least a couple of years off for the consumer market, Intel is doing research into cloud-based solutions for ray-traced gaming. I thought it was quite interesting. The fact that they are able to get real-time ray tracing working at all, even if it is only 20-35 fps on a workstation as opposed to a server, is quite impressive. I am hoping that within the next 10 years we will have moved to real-time ray tracing instead of rasterization for graphics processing. It's a pretty solid article, as they have also looked at the latency issues that come with cloud-based gaming.

I thought you would find this article on the progress they have made rather interesting. The latest demo was done with a ray-traced version of Wolfenstein.

Intel Experiment with Cloud Based Ray Tracing

Ray Traced Quake Wars

*For those of you not familiar with it: ray tracing is a form of rendering that works by tracing bouncing light rays. It is more processor intensive but produces much better images, and is used for CG movies and realistic modeling of cars.
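To make the "processor intensive" part concrete, here is a toy Python sketch of the core idea: one primary ray tested per pixel against a single sphere. It has nothing to do with Intel's actual renderer; the resolution, camera, and scene below are made up for illustration. Real ray tracers also spawn reflection, refraction, and shadow rays at every hit, which is where the cost multiplies.

```python
# Minimal toy ray caster: one primary ray per pixel, one sphere, no bounces.
# Purely illustrative -- the resolution, camera, and scene are invented here.
import math

WIDTH, HEIGHT = 320, 240           # image resolution (arbitrary)
SPHERE_CENTER = (0.0, 0.0, -5.0)   # the whole "scene": a single sphere
SPHERE_RADIUS = 1.0

def hit_sphere(origin, direction):
    """Return True if the ray origin + t*direction intersects the sphere."""
    ox, oy, oz = (origin[i] - SPHERE_CENTER[i] for i in range(3))
    dx, dy, dz = direction
    a = dx * dx + dy * dy + dz * dz
    b = 2.0 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - SPHERE_RADIUS ** 2
    return b * b - 4.0 * a * c >= 0.0   # quadratic discriminant test

hits = 0
for y in range(HEIGHT):
    for x in range(WIDTH):
        # Shoot one ray through each pixel of a simple pinhole camera.
        u = (x / WIDTH) * 2.0 - 1.0
        v = (y / HEIGHT) * 2.0 - 1.0
        length = math.sqrt(u * u + v * v + 1.0)
        if hit_sphere((0.0, 0.0, 0.0), (u / length, v / length, -1.0 / length)):
            hits += 1

print(f"{hits} of {WIDTH * HEIGHT} primary rays hit the sphere")
```

Even this stripped-down version does an intersection test for every one of the 76,800 pixels; add more objects, bounces, and shadow rays and it is easy to see why ray tracing has traditionally been an offline technique.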
 
Interesting stuff, but I'm pretty positive I've seen the Wolfenstein comparisons a long time ago.
 
The problem really is that rasterization has a long way to go before it becomes inefficient compared to ray tracing, and hardware has a long way to go before there is enough excess performance to afford ray tracing (somewhat like CPUs for gaming today).

When doing real-time rendering, the trade-off isn't time/cost for quality but quality for quality. For instance, if we get to a point where we can do real-time ray tracing at 1080p with the current level of effects at 30 fps, the corresponding hardware could probably do it with rasterization at 4K with 4x AA, with higher-level effects in some areas, at 60 fps.
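For what that comparison implies in raw sample counts, here is a quick back-of-envelope calculation. Illustrative only; the cost per sample is completely different between the two techniques, so this is not an apples-to-apples benchmark.

```python
# Samples per second implied by the hypothetical comparison above.
ray_traced = 1920 * 1080 * 30         # 1080p at 30 fps, one ray per pixel
rasterized = 3840 * 2160 * 60 * 4     # 4K at 60 fps with 4x AA samples
print(ray_traced, rasterized, rasterized / ray_traced)
# ~62 million vs ~2 billion samples per second -- roughly a 32x gap
```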
 
It's still a long way off and not exactly practical, because of how current game mechanics are programmed; they are somewhat bound to traditional rendering paradigms, IIRC. The big heads of the industry seem to think that a hybrid approach is best: using traditional rasterization for most things and leaving ray tracing for lighting and shadows, reflections/refractions, etc., or completely different paradigms, e.g. going back to software rendering...

The point is rasterization is here to stay for quite some time yet.
 
For games? No thanks.

I can see it working fine for single-player games. There are many accounts of people using OnLive for single-player and having it work just fine. I think there will definitely be problems, though, for twitch multiplayer. A lot of it depends on the speed of the game: you might notice lag more in Unreal Tournament than you would in Dirt 2.
 
It's already starting, like it or not. Gamers will be using the cloud more and more as time goes on.

Spare me the prophetics. They'll need to figure out a way around the latency issues first. Whether it be Harry Potter magic or stamping a datacenter every 50 miles.
 
Gamers will be using the cloud more and more as time goes on.
Or they might not. Nobody knows for sure.

Onlive was made by the same guy who brought us Xband and WebTV. The only business model he knows is bilking investors and early adopters out of their money for a product that barely works, and convincing them for two or three years that it will work if they wait long enough and pump enough money into his business.


With Xband and WebTV, the technology just wasn't there yet; he only had advancement to contend with. His new foe is the speed of light itself. Unless we find a way to somehow surmount the nagging issue of c (besting Einstein in the process), Onlive is doomed to fail, at least for any game that requires lower input lag than 1/4th of a second.
 
Not long now. GPUs and CPUs are roughly doubling in raw power every 2 years, which means we're very close to real-time ray tracing of reasonably complex scenes.

However, rasterization is getting to the point where the next increase in graphical fidelity is to increase the accuracy of things like lighting and shadows. The problem is that these rely on approximations (hacks), and the hacks have to become more elaborate to keep tricking us. They are a lot faster to start with, but the extra hacks add overhead, and once they get complex enough they start to slow the system down significantly. You only have to look at modern engines: you still get an enormous amount of baked-in lighting with lightmaps, even in engines like the Unreal 3.0 engine, because dynamic lights hurt performance a lot. In ray tracing it's all dynamic.
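As a rough illustration of why baked lighting is the go-to hack (every name below is made up for the example, not taken from any real engine): the expensive light transport is solved once offline and stored in a texture, so at runtime it is a single lookup per pixel, whereas a dynamic light has to be re-evaluated for every pixel, every frame.

```python
# Toy contrast between baked (lightmap) and dynamic lighting cost.
# Everything here is invented for illustration; no real engine API is implied.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def shade_baked(albedo, lightmap, uv):
    # Precomputed offline: runtime cost is one lookup per pixel.
    return albedo * lightmap[uv]

def shade_dynamic(albedo, normal, lights):
    # Re-evaluated every frame: cost grows with the number of lights.
    total = 0.0
    for direction, intensity in lights:
        total += intensity * max(0.0, dot(normal, direction))
    return albedo * total

# Example: one lightmap lookup vs three dynamic directional lights per pixel.
lightmap = {(4, 7): 0.8}
print(shade_baked(0.5, lightmap, (4, 7)))
print(shade_dynamic(0.5, (0.0, 1.0, 0.0),
                    [((0.0, 1.0, 0.0), 1.0),
                     ((0.7, 0.7, 0.0), 0.5),
                     ((0.0, 0.7, 0.7), 0.25)]))
```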

But I'd guess we still have a good 10 years before the hardware is there for real-time ray-traced games. 10 years is five 2-year cycles, which is an increase of about 32x the processing power.
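The 32x figure is just the doubling assumption compounded:

```python
# Assuming raw throughput doubles every 2 years (an assumption, not a law).
years_per_doubling = 2
years = 10
print(2 ** (years / years_per_doubling))   # 2^5 = 32.0
```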
 
And where's Intel's Larrabee card that they said was going to surpass AMD and Nvidia?
 
People who are citing speed-of-light issues: you do realize that current networks don't come close to speed-of-light transfers, because the routers can't keep up.

To put it into perspective: if they could switch at the speed of light, they could put a data center every 600 miles and the latency would only be roughly 8 ms at the furthest point. That doesn't include rendering time on the servers, but at some point we will have routers that can route at the speed of light.
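The number checks out for pure propagation delay. Assuming light in fiber travels at roughly two thirds of c, a 600-mile hop looks like this (routing, rendering, and video encode/decode are all extra on top):

```python
# Speed-of-light propagation delay for a 600-mile hop, round trip.
SPEED_OF_LIGHT_KM_S = 299_792        # vacuum
FIBER_FRACTION = 2 / 3               # light in fiber is roughly 2/3 of c
KM_PER_MILE = 1.609344

distance_km = 600 * KM_PER_MILE
round_trip_vacuum_ms = 2 * distance_km / SPEED_OF_LIGHT_KM_S * 1000
round_trip_fiber_ms = round_trip_vacuum_ms / FIBER_FRACTION

print(f"round trip at c:      {round_trip_vacuum_ms:.1f} ms")  # ~6.4 ms
print(f"round trip in fiber:  {round_trip_fiber_ms:.1f} ms")   # ~9.7 ms
```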
 
Has a long way to go, but very interesting nonetheless.

What I would like is my own PCIe Knights Ferry card. :)
 
I am going to go freeze myself now, someone come melt me back to life in 10 years plz!
 
The problem really is that rasterization has a long way to go before it becomes inefficient compared to ray tracing, and hardware has a long way to go before there is enough excess performance to afford ray tracing (somewhat like CPUs for gaming today).

When doing real-time rendering, the trade-off isn't time/cost for quality but quality for quality. For instance, if we get to a point where we can do real-time ray tracing at 1080p with the current level of effects at 30 fps, the corresponding hardware could probably do it with rasterization at 4K with 4x AA, with higher-level effects in some areas, at 60 fps.

While I can agree with your second paragraph, I don't agree with the premise of the first. I think your definition of a long way may be different than mine. If you scroll down in the article, some of the particularly detailed scenes show a frame rate of 60-72 fps using four of the test machines as the cloud. The test machines weren't overly powerful, though: they were an i7-980X plus the PCIe card. I can't help but think that today's top-end processor will be beaten by mid-range processors in a year, similar to how the 920 provided similar performance to the $1,200 Extreme processors of the Core 2 Quad series for a quarter of the cost. That being said, the cloud computer they made probably cost around $4,800 to build.

Yes, $4,800 today is a lot of money, but 2 years from now you'll probably be able to purchase the same performance for $1,800 or so, which for many gamers on this forum would be 'affordable' for a new build, and if it could provide great image quality at 60 fps, that's not so bad at all. Given 4 years, the cost might be closer to $900 for that level of performance, which is probably within even a moderate hobbyist's budget for a new build.

Maybe I've been around longer, but I wouldn't consider 2-4 years a 'long time' per se; to me 2 years will pass by in no time.
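Those price points are guesses, of course; a simple sketch, assuming the cost of a fixed level of performance roughly halves every two years, lands in the same ballpark (my $1,800/$900 figures assume a slightly faster drop):

```python
# Rough projection, assuming the price of the same performance halves every
# 2 years (a loose rule of thumb, not a guarantee).
cost_today = 4800
for years in (2, 4):
    print(years, "years:", cost_today / 2 ** (years / 2))
# 2 years: 2400.0   4 years: 1200.0
```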
 
The test machines weren't overly powerful, though: they were an i7-980X plus the PCIe card. [...] I wouldn't consider 2-4 years a 'long time' per se.

Those PCIe cards are not a joke, though. They are crunching most of the data, not the 980X. They are capable of 128 simultaneous threads versus the 980X's 12. Even though they are only clocked at 1.2 GHz, they can crunch a lot of parallel data, similar to what video cards are capable of. Just running this demo on a set of four 980Xs would result in a significantly reduced framerate, and a significant increase in your time scale.
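Very roughly, in raw parallel capacity (hardware threads times clock; this ignores that SMT threads share a core and that the card's wide vector units do much of the real work, so treat it as a ballpark, not a benchmark):

```python
# Crude "thread-GHz" comparison; not a benchmark.
knights_ferry = 128 * 1.2    # 128 hardware threads at 1.2 GHz
core_i7_980x = 12 * 3.33     # 12 hardware threads at 3.33 GHz
print(knights_ferry, core_i7_980x, knights_ferry / core_i7_980x)
# ~153.6 vs ~40 -- call it a 3-4x gap in raw parallel capacity,
# before counting the vector units
```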
 
If it takes a cloud you know half of us will just build our own out of 20 years of old PCs and a few new ones :p
 
I think your definition of a long way may be different than mine. [...] I wouldn't consider 2-4 years a 'long time' per se; to me 2 years will pass by in no time.

To clarify my point: the time I mention is not the time until real-time ray tracing is possible at current resolutions and graphics levels, but the time until the issues in my second paragraph can be addressed. Either development of rasterization techniques needs to reach a point of diminishing returns (or even its limit), making ray tracing essentially more efficient (or the only way) to increase rendering quality for a given amount of processing power, or we need so much excess computing power that the performance difference between ray tracing and rasterization is essentially negligible.

Until those two issues are addressed, even with real-time ray tracing being possible (it actually already is possible; I believe it was demoed on a modified Quake 3 or 4 on consumer hardware), you still run into the issue of rasterization being able to produce a better-looking game on the same hardware. The ray-tracing-based engine will of course have much more realistic light interaction, which is why I believe the demos tend to insert mirrored metal surfaces all over the place to show this off, but it will look worse in every other way on the same level of hardware.
 
Why can't they ever show content that looks much better than something from 1999? Seriously, for shiny glass and metal it's nice and all, but the average environment has little of that, and the approximations we are doing now are more than sufficient...

So what's the point, really, in going down this road?

Show me voxels, unbiased rendering, etc... this particular ray tracing stuff just feels like old news, and it's been the same every year for the past 10 years or so...

Blarg, I guess I'm just impatient and feel it'll be surpassed soon after it becomes a standard. Things like this seem to me the next logical step:

http://www.youtube.com/watch?v=xHqRLLbfQt0&feature=related

Imagine if it used voxels as well; I think it's coming in the future.
 
Why can't they ever show content that looks much better than something from 1999? [...] This particular ray tracing stuff just feels like old news, and it's been the same every year for the past 10 years or so.

A lot of it is research demonstrating what is possible, hence they want content that has already been created instead of needing an entire team of people to build a game that nobody can run for another 10 years. Ray tracing still looks crappy if you are using low-polygon-count models and crappy textures. Also, if id is providing them the code and models for the games, then they have to take what they can get for free.

Consider, for example, that movies are done with ray tracing. Consider how cool it would be if graphics in games were as good as the graphics in a movie like Avatar. Or hell, if things looked even as good as the StarCraft teaser trailer.

Once this moves out of the research phase to the consumer phase, then you will start to see really kick-ass looking games.
 