DirectX 10 & the Future of Gaming

Brent_Justice said:
DX9 games running under DX9.0L under Vista should be more efficient (faster) than on XP.
No kidding? That's new info to me. That's a good reason to buy Vista....depending on how much of an improvement we're talking about.
 
jebo_4jc said:
No kidding? That's new info to me. That's a good reason to buy Vista....depending on how much of an improvement we're talking about.

It likely won't be a trivial improvement, that much is certain. One thing that'll also VERY likely happen with D3D9Ex on Vista is that alt-tabbing will be very fast. Because of the new memory layout that the WDDM and D3D10 (and hence D3D9Ex) use, resources like textures don't have to be recreated when you alt-tab in or out of a game. Oh, and there's also the WDDM feature that doesn't require you to reboot your computer when a new display driver is installed. How's THAT for convenient, eh?
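For the curious, here's a minimal sketch of what the D3D9Ex path looks like in code. Treat it as an illustration only: Direct3DCreate9Ex and CreateDeviceEx are the documented entry points, but the window handle is assumed to come from elsewhere and error handling is trimmed.

#include <d3d9.h>
#pragma comment(lib, "d3d9.lib")

// Create a D3D9Ex device (Vista/WDDM only; Direct3DCreate9Ex does not
// exist in XP's d3d9.dll, which is exactly the D3D9 vs. D3D9Ex split).
IDirect3DDevice9Ex* CreateD3D9ExDevice(HWND hWnd)
{
    IDirect3D9Ex* d3d = NULL;
    if (FAILED(Direct3DCreate9Ex(D3D_SDK_VERSION, &d3d)))
        return NULL;

    D3DPRESENT_PARAMETERS pp = {0};
    pp.Windowed      = TRUE;
    pp.SwapEffect    = D3DSWAPEFFECT_DISCARD;
    pp.hDeviceWindow = hWnd;

    IDirect3DDevice9Ex* dev = NULL;
    // Under WDDM the device no longer reports D3DERR_DEVICELOST on
    // alt-tab, so textures and other resources survive focus switches.
    d3d->CreateDeviceEx(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, hWnd,
                        D3DCREATE_HARDWARE_VERTEXPROCESSING,
                        &pp, NULL, &dev);
    d3d->Release();
    return dev;
}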
 
I'm upset that MS doesn't feel the need to keep older operating systems up to date in terms of gaming APIs. Unfortunately, it's well within their rights. Where is OpenGL at a time like this? I can remember in the early days of UT when I would set my 32MB Riva TNT2 Ultra video card to use OpenGL drivers because I got MUCH smoother play and better framerates. Once DX8 and 9 made their debut, OpenGL seemed all but gone from most PC games. I don't like the idea of having to upgrade my OS to play a new game, nor should anyone else out there.

I'm not impressed with Vista, as it is. I don't care much for a 3D desktop. I prefer to keep my CPU temperatures down, and I currently turn off all of the special effects of the desktop, such as the fading-in Start menu, etc. It does nothing but slow me down from accessing my data. The OS is supposed to help me run the applications I need to do what I do, not be another application in itself that I have to account for.

I hope that nV's SLI Physics concept (and whatever ATI calls theirs for the Crossfire platform, I haven't seen any official name for it) will be available for XP, as this should alleviate a lot of stress already put on the CPU.

I think that this opens the floodgates for further OpenGL implementation, if the game makers take advantage of it. Of course, I don't know the state of OpenGL, so if it's far behind in terms of technology, it needs to be updated before anyone uses it for these games. Perhaps someone could go about creating a D3D10-to-OGL translation utility so that we can enjoy the games on something *other* than Vista.
 
Brent_Justice said:
DX9 games running under DX9.0L under Vista should be more efficient (faster) than on XP.
Can you explain why? Or elaborate on any hands-on experience you might have had...
 
Trimlock said:
It removes a lot of the overhead mentioned in the article.

Well, that's part of it. A lot of other parts of D3D10 allow for less work on the CPU side, as well as less GPU work.

For example, when rendering a cubemap (to be used later for, say, a dynamic reflection), 6 textures are rendered to, one for each face of the cube. Each face requires the scene to be re-rendered once, resulting in 6 renders of the scene just to generate the cubemap. This means you have to set 6 new render targets, redraw a slew of objects, etc. With one of the capabilities of the geometry shader, specifically the ability to emit triangles to separate render targets, it's possible to simply set all 6 render targets at the same time, transform the scene in the vertex shader, and have the geometry shader generate 6 separate triangles, one to be sent to each render target, while normal drawing continues. This way, less time is spent re-transforming the same vertices, re-setting a bunch of textures, and so on.

What this means is that (regarding this specific example) D3D10 will allow for many more dynamic cubemap-based effects, including point light shadows, and dynamic reflections.
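To make that concrete, here's a rough sketch of the D3D10 side of the single-pass cubemap idea. It's illustrative, not definitive: the helper name CreateCubeRTV and the per-face matrix array g_FaceViewProj are made up for the example, and error handling is omitted.

#include <d3d10.h>

// One render target view spanning all 6 faces of a cubemap, so a geometry
// shader can route each emitted triangle to a face via
// SV_RenderTargetArrayIndex instead of re-rendering the scene 6 times.
ID3D10RenderTargetView* CreateCubeRTV(ID3D10Device* dev, UINT size)
{
    D3D10_TEXTURE2D_DESC td = {0};
    td.Width            = size;
    td.Height           = size;
    td.MipLevels        = 1;
    td.ArraySize        = 6;   // one array slice per cube face
    td.Format           = DXGI_FORMAT_R8G8B8A8_UNORM;
    td.SampleDesc.Count = 1;
    td.Usage            = D3D10_USAGE_DEFAULT;
    td.BindFlags        = D3D10_BIND_RENDER_TARGET | D3D10_BIND_SHADER_RESOURCE;
    td.MiscFlags        = D3D10_RESOURCE_MISC_TEXTURECUBE;

    ID3D10Texture2D* tex = NULL;
    dev->CreateTexture2D(&td, NULL, &tex);

    D3D10_RENDER_TARGET_VIEW_DESC rd;
    rd.Format                         = td.Format;
    rd.ViewDimension                  = D3D10_RTV_DIMENSION_TEXTURE2DARRAY;
    rd.Texture2DArray.MipSlice        = 0;
    rd.Texture2DArray.FirstArraySlice = 0;
    rd.Texture2DArray.ArraySize       = 6;   // bind all faces at once

    ID3D10RenderTargetView* rtv = NULL;
    dev->CreateRenderTargetView(tex, &rd, &rtv);
    tex->Release();
    return rtv;
}

// The geometry shader (HLSL) then replicates each input triangle 6 times,
// once per face, tagging each copy with its target slice:
//
//   [maxvertexcount(18)]
//   void GS(triangle GSIn tri[3], inout TriangleStream<GSOut> stream) {
//       for (int face = 0; face < 6; ++face) {
//           for (int v = 0; v < 3; ++v) {
//               GSOut o;
//               o.Pos     = mul(tri[v].Pos, g_FaceViewProj[face]);
//               o.RTIndex = face;   // SV_RenderTargetArrayIndex
//               stream.Append(o);
//           }
//           stream.RestartStrip();
//       }
//   }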
 
I'm so tempted to troll for bait... but seriously, the WinXP code base is a mess, so much so that they went back to Windows 2000 for their server code (Windows 2003), which the 64-bit edition of XP was built upon. As to DirectX Next or whatever you want to call it, 90% of the functionality is gone and most of what is left is a fancy version of Aero Glass. And that piece of hardware that has to read every piece of info that was already split into different types of info is going to be an SPF (single point of failure) and a bottleneck: instead of sending my data over several paths to different chips, now I have to send everything in a single line to one chip, and it does not matter how efficient that hardware is on the other side if I have to wait for it to get done. As to original pieces of geometry, who the hell is going to spend hours creating a dozen more trees when half the time the players can not even tell the difference between two of the same tree sitting next to each other when one is rotated even 5 degrees? I really can not see hardware gaining any benefit from being slowed down and forced into a narrow path. Like a space heater being retired: anyone still think Prescott is better? Same concept. Realize that all your GPU or CPU amounts to is a fancy calculator, which can do math that is broken down to the simplest steps it can understand; the math does not get easier because of the order it is done in. Only when the same steps can be performed in an order where results can be used more than once do you see any benefit to ordering, and often the gains are lost to the overhead of the ordering. ATI hardware is faster because it can get the info faster, not because of some API change. The API change can actually slow it down, since it is extra work right in one of the worst bottleneck areas of the GPU: getting the data to the areas that do the math.
 
drakken said:
I'm so tempted to troll for bait... but seriously, the WinXP code base is a mess, so much so that they went back to Windows 2000 for their server code (Windows 2003), which the 64-bit edition of XP was built upon. As to DirectX Next or whatever you want to call it, 90% of the functionality is gone and most of what is left is a fancy version of Aero Glass. And that piece of hardware that has to read every piece of info that was already split into different types of info is going to be an SPF (single point of failure) and a bottleneck: instead of sending my data over several paths to different chips, now I have to send everything in a single line to one chip, and it does not matter how efficient that hardware is on the other side if I have to wait for it to get done. As to original pieces of geometry, who the hell is going to spend hours creating a dozen more trees when half the time the players can not even tell the difference between two of the same tree sitting next to each other when one is rotated even 5 degrees? I really can not see hardware gaining any benefit from being slowed down and forced into a narrow path. Like a space heater being retired: anyone still think Prescott is better? Same concept. Realize that all your GPU or CPU amounts to is a fancy calculator, which can do math that is broken down to the simplest steps it can understand; the math does not get easier because of the order it is done in. Only when the same steps can be performed in an order where results can be used more than once do you see any benefit to ordering, and often the gains are lost to the overhead of the ordering. ATI hardware is faster because it can get the info faster, not because of some API change. The API change can actually slow it down, since it is extra work right in one of the worst bottleneck areas of the GPU: getting the data to the areas that do the math.


You're an SPF....so NER :p
 
I may have misread the article, but does it say that the current ATI X1*00s support DX10? Or just some weird bastardized version of it?

DX10 support would be as good a reason as any to buy an ATI card.
 
No, this generation from ATI is DX9.0L; it will run in Vista but is not WGF 2.0. ATI should release the R600 GPU in Dec of '06.

You should read the article. There are other articles also, but this one is written so well and put in terms that are so easy to understand that everyone should be able to grasp what Vista is bringing to the table. [H] did a superb job on this article. :)
 
drakken said:
I'm so tempted to troll for bait... but seriously, the WinXP code base is a mess, so much so that they went back to Windows 2000 for their server code (Windows 2003), which the 64-bit edition of XP was built upon.

As to DirectX Next or whatever you want to call it, 90% of the functionality is gone and most of what is left is a fancy version of Aero Glass. And that piece of hardware that has to read every piece of info that was already split into different types of info is going to be an SPF (single point of failure) and a bottleneck: instead of sending my data over several paths to different chips, now I have to send everything in a single line to one chip, and it does not matter how efficient that hardware is on the other side if I have to wait for it to get done.

As to original pieces of geometry, who the hell is going to spend hours creating a dozen more trees when half the time the players can not even tell the difference between two of the same tree sitting next to each other when one is rotated even 5 degrees? I really can not see hardware gaining any benefit from being slowed down and forced into a narrow path. Like a space heater being retired: anyone still think Prescott is better?

Same concept. Realize that all your GPU or CPU amounts to is a fancy calculator, which can do math that is broken down to the simplest steps it can understand; the math does not get easier because of the order it is done in. Only when the same steps can be performed in an order where results can be used more than once do you see any benefit to ordering, and often the gains are lost to the overhead of the ordering.

ATI hardware is faster because it can get the info faster, not because of some API change. The API change can actually slow it down, since it is extra work right in one of the worst bottleneck areas of the GPU: getting the data to the areas that do the math.

Paragraphed for easy (well, ier) reading :p
 
Well, that's part of it. A lot of other parts of D3D10 allow for less work on the CPU side, as well as less GPU work.

Does the supported version of DX9 on Vista support this feature? The person asking was wondering what the differences were between the two separate versions (one on XP vs. Vista); I was only aware of the decreased overhead.
 
Trimlock said:
Does the supported version of DX9 on Vista support this feature? The person asking was wondering what the differences were between the two separate versions (one on XP vs. Vista); I was only aware of the decreased overhead.

Yes, as I mentioned in an earlier post, D3D9Ex will benefit from the lowered CPU usage.

I'm so tempted to troll for bait... but seriously, the WinXP code base is a mess, so much so that they went back to Windows 2000 for their server code (Windows 2003), which the 64-bit edition of XP was built upon.

As to DirectX Next or whatever you want to call it, 90% of the functionality is gone and most of what is left is a fancy version of Aero Glass. And that piece of hardware that has to read every piece of info that was already split into different types of info is going to be an SPF (single point of failure) and a bottleneck: instead of sending my data over several paths to different chips, now I have to send everything in a single line to one chip, and it does not matter how efficient that hardware is on the other side if I have to wait for it to get done.

As to original pieces of geometry, who the hell is going to spend hours creating a dozen more trees when half the time the players can not even tell the difference between two of the same tree sitting next to each other when one is rotated even 5 degrees? I really can not see hardware gaining any benefit from being slowed down and forced into a narrow path. Like a space heater being retired: anyone still think Prescott is better?

Same concept. Realize that all your GPU or CPU amounts to is a fancy calculator, which can do math that is broken down to the simplest steps it can understand; the math does not get easier because of the order it is done in. Only when the same steps can be performed in an order where results can be used more than once do you see any benefit to ordering, and often the gains are lost to the overhead of the ordering.

ATI hardware is faster because it can get the info faster, not because of some API change. The API change can actually slow it down, since it is extra work right in one of the worst bottleneck areas of the GPU: getting the data to the areas that do the math.

Cocaine's a helluva drug :confused:
 
Originally Posted by drakken
I'm so tempted to troll for bait... but seriously, the WinXP code base is a mess, so much so that they went back to Windows 2000 for their server code (Windows 2003), which the 64-bit edition of XP was built upon.

As to DirectX Next or whatever you want to call it, 90% of the functionality is gone and most of what is left is a fancy version of Aero Glass. And that piece of hardware that has to read every piece of info that was already split into different types of info is going to be an SPF (single point of failure) and a bottleneck: instead of sending my data over several paths to different chips, now I have to send everything in a single line to one chip, and it does not matter how efficient that hardware is on the other side if I have to wait for it to get done.

As to original pieces of geometry, who the hell is going to spend hours creating a dozen more trees when half the time the players can not even tell the difference between two of the same tree sitting next to each other when one is rotated even 5 degrees? I really can not see hardware gaining any benefit from being slowed down and forced into a narrow path. Like a space heater being retired: anyone still think Prescott is better?

Same concept. Realize that all your GPU or CPU amounts to is a fancy calculator, which can do math that is broken down to the simplest steps it can understand; the math does not get easier because of the order it is done in. Only when the same steps can be performed in an order where results can be used more than once do you see any benefit to ordering, and often the gains are lost to the overhead of the ordering.

ATI hardware is faster because it can get the info faster, not because of some API change. The API change can actually slow it down, since it is extra work right in one of the worst bottleneck areas of the GPU: getting the data to the areas that do the math.



What he's saying is that in Windows there are eight billion .dll files for any one execution to go through. What they need to do is not try to make this system work, but start over from scratch and make the path shorter from the CPU to the GPU and from the GPU to the screen.
 
What he's saying is that in Windows there are eight billion .dll files for any one execution to go through. What they need to do is not try to make this system work, but start over from scratch and make the path shorter from the CPU to the GPU and from the GPU to the screen.

Oh.

Wait, that's exactly what they did with Vista and D3D10!
 
OK, I'm going to admit Internet slang fascinates me, so I have to ask: what is an SPF?

I'm a coder by nature and by work, and I know programmers hate following my logic, especially when it is faster, since no one wants to admit the code does not make sense for some reason; people not understanding what I wrote is normal for me, so I tried breaking most of it down. I wrote a more technical reply, then realized it would make even less sense. I understand the code is in a beta state, but as a system var I have access

Oh, the really short version: I want the OS MS said they were making, not the crap we are getting. lol, I had to add this.

OK, Vista made the path longer, not shorter; it added the new D3D API as another hoop to jump through. MS docs say everything was supposed to go through the API in the first place, but it was such a kludge that most developers just used the API to tell them what hardware the computer has (which it does beautifully), then parsed whatever data they needed through the Direct3D renderer, or through the OpenGL API (which is efficient at sorting), or via the brute-force approach of writing code direct to the card, which at this point is almost only used for certain calls. All that is gone with Vista: everything has to go through a bastardized version of DX9, which includes a new data sorter, which is what I was referring to as a bottleneck, where the data needed one like a hole in the head, whether it uses the new DirectDraw renderer or not. Most of the modified DirectX has to do with Aero Glass being able to make calls to the API like a constantly running program. The nice features are gone, like the new file system and many others, like clean code... what is left is the DRM designed for WinXP, but before there was nothing to sort the data to protect content from being ripped. Yes, that very inefficient data sorter is looking for flagged data, not a faster way to get data to the GPU; it reads the data, then passes it out of order to the video card.

Oh, in simple terms: the new ATI card can prefetch data faster and can store it a shorter distance away than the NVIDIA card; anytime you can look to cache instead of memory, it is faster. The new chip stores all data as cache, and that is more than likely the reason they went to unified shaders, since if it were separate it would have to be stored differently, meaning two chips would be the most efficient way, and that would waste space on both chips as well as costing twice as much. Until we have a second GPU or CPU sorting the data, separate data paths will be faster, unless you need to be able to store them together and make calls as if they were the same data type. As far as I can see, ATI took the most cost-effective way to gain the most performance they could, and as long as they are faster than NVIDIA, it does not matter how much more of a PITA it makes writing code. A good parallel is AMD's on-board memory controller having faster access between cores on the new dual cores, since that data is closer, than Intel's method of bridging the cores, which is why Intel licensed AMD's method. Which means for programmers only one set of code has to be written for the dual cores at this point.
 
The new technology eliminates overhead by unifying shaders, that's it. Rather than having the API and hardware buffer between 2 independent shader technologies, it makes them the same, so the system only needs to decipher changes for one set of shaders, meaning that the buffer is reduced and less GPU power is needed from the API to calculate changes to graphics.
 
X800 Pro able to use DX10? Kinda early to find out, but I am just curious if it will be able to take any advantage of it.
 
Glow said:
X800 Pro able to use DX10? Kinda early to find out, but I am just curious if it will be able to take any advantage of it.
What a silly thing to ask. Did you read any of the thread? :rolleyes: :rolleyes:
 
Gob said:
The new technology eliminates overhead by unifying shaders, that's it. Rather than having the API and hardware buffer between 2 independent shader technologies, it makes them the same, so the system only needs to decipher changes for one set of shaders, meaning that the buffer is reduced and less GPU power is needed from the API to calculate changes to graphics.

Hmm...that's certainly an interesting, naive, and ignorant way of looking at it. Unifying the shader instruction sets removes barely any overhead (if there is any, it'd be in the HLSL10/FX10 API, not in the D3D10 one). The memory that the shader occupies isn't changed much either. If it is, it's a very insignificant change. A lot of the reduced overhead comes simply from making better code in the API, and better decisions regarding the interaction between the CPU and the GPU.
 
I'm building a new computer at the moment (the one in my sig), and some people have told me that getting a 2nd GPU within the next year for SLI would be pointless, because DX10 will be around at that point. As one person put it, "buying a 2nd 7900GTX would be like buying a $600 paperweight".

One person suggested that I buy a 7600 instead, so I'd have some money left over to buy a DX10 GPU next year.

Is it really worth it? I mean, to buy the 7600 instead? Will DX10 video cards be around by next year?
 
Wait, something just occurred to me...if current-gen GPUs won't be able to support DX10, will current-gen mobos be able to support DX10 GPUs? Like, would a DX10 card be physically compatible with, say, an ASUS A8N32-SLI Deluxe?
 
Great article. I was going to drop a few grand on a new gaming system yesterday, but I got sidetracked. I tend to upgrade my system and graphics every 2-3 years, so I would have been really ticked this time next year.
 
Wait, something just occurred to me...if current-gen GPUs won't be able to support DX10, will current-gen mobos be able to support DX10 GPUs? Like, would a DX10 card be physically compatible with, say, an ASUS A8N32-SLI Deluxe?

Yes.
 
Brent_Justice said:
DX9 games running under DX9.0L under Vista should be more efficient (faster) than on XP.
Meh.. already answered.. sort of. It still doesn't make sense to me, because if DX10 is written from the ground up, then 9.0L would essentially have to "translate" everything to the new API... in which case, are you really saving overhead, and would the overhead "saved" make a noticeable improvement?
 
CaiNaM said:
Meh.. already answered.. sort of. It still doesn't make sense to me, because if DX10 is written from the ground up, then 9.0L would essentially have to "translate" everything to the new API... in which case, are you really saving overhead, and would the overhead "saved" make a noticeable improvement?
I don't think 9.0L has anything to do with 10. It's not translating anything....it's just running 9.0 like it always has. The difference with 9.0L is the communication between the API and the OS.
 
Hardware and MS have nothing to do with the future of gaming. They only have to do with the future of resolutions and image quality. This is such a small part of a game that it is insignificant. Anyone remember Doom3? Exactly. It was crap. It had tons of cool effects, but the game was garbage. Everyone for years was wondering what Carmack was coming up with and in the end the game wasn't enjoyable.
 
general said:
Hardware and MS have nothing to do with the future of gaming. They only have to do with the future of resolutions and image quality. This is such a small part of a game that it is insignificant. Anyone remember Doom3? Exactly. It was crap. It had tons of cool effects, but the game was garbage. Everyone for years was wondering what Carmack was coming up with and in the end the game wasn't enjoyable.

Doom 3 was crap, eh? HOW??? Because it was too dark?? :p If you think that's why, then you shouldn't have played Doom 3 to begin with.
 
"Ati alined its self with MS and intel the core Arch OoO "

Kind of a funny comment now that ati has become an AMD division :)

There are so many people on here bashing MS because they must upgrade to Vista to get some new and improved functionality.

Yet so many people criticize MS for not introducing new features like Apple and so forth.

This thread amply shows why MS is caught between a rock and a hard place. Hundreds of millions of Windows users would cry out if they lost backwards compatibility. MS cuts one cord and already we see the crying :)

I, for one, have great expectations for "DX10" and can hardly wait. This article does help to dampen some of that enthusiasm, as the performance increase won't be as large as I previously thought. Or it will be masked by more provided effects in games.

I thought this article was very well done, based on my very limited inside-the-box technical understanding.

:) This article is more timely now, with R600 just visible on the horizon.
If R600 is a more complete "DX10" solution than the G80...is that the one to go for?
I ask because I need to choose a motherboard NOW. CrossFire or SLI? Or will NVIDIA indeed carry through on its promise to open SLI up on the Intel 975X chipset?

*I have to buy a copy of Windows XP just 5 months before Vista, and I'm not crying (my Win 2000 almost made it past) :)
 
Something sorta confused me: does this mean next-gen games that require DX10 won't run on DX9 cards? (Like the 7600GT?)
 
Wait..........so we will have to buy Vista to make our new DX10 card work properly? Damn, that's smart on Bill Gates' part.
 
Fredgeot said:
Something sorta confused me: does this mean next-gen games that require DX10 won't run on DX9 cards? (Like the 7600GT?)

Yes, but D3D10-exclusive games will not exist for a very long time. By the time they do, you will have upgraded, I guarantee it. Note however that there is a difference between D3D10-exclusive and D3D10-compatible. Games like Crysis are D3D10-compatible, meaning that they can run in D3D10 OR in D3D9.

tvdang7 said:
Wait..........so we will have to buy Vista to make our new DX10 card work properly? Damn, that's smart on Bill Gates' part.

You will have to buy Vista in order to use D3D10 in games. Here are the four possible (relevant) configurations of OS/GPU that you can have:

XP and D3D9 card: Will not run D3D10.
Vista and D3D9 card: Will not run D3D10.
XP and D3D10 card: Will not run D3D10.
Vista and D3D10 card: Will run D3D10.
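That matrix maps directly onto how a game or launcher could probe for D3D10 at runtime. Below is a hedged C++ sketch: InitD3D9 is a hypothetical fallback defined elsewhere, and a real implementation would also enumerate adapters and check formats.

#include <windows.h>
#include <d3d10.h>

bool InitD3D9(); // hypothetical D3D9 fallback path, defined elsewhere

typedef HRESULT (WINAPI *PFN_D3D10CreateDevice)(
    IDXGIAdapter*, D3D10_DRIVER_TYPE, HMODULE, UINT, UINT, ID3D10Device**);

// Choose D3D10 only when the OS, driver, and GPU can all provide it;
// every other row of the matrix above falls through to D3D9.
bool InitRenderer(ID3D10Device** outDevice)
{
    *outDevice = NULL;
    HMODULE lib = LoadLibraryA("d3d10.dll"); // absent on XP, present on Vista
    if (lib)
    {
        PFN_D3D10CreateDevice create =
            (PFN_D3D10CreateDevice)GetProcAddress(lib, "D3D10CreateDevice");
        // Fails on Vista with a D3D9-only card; succeeds only for the
        // "Vista and D3D10 card" combination.
        if (create && SUCCEEDED(create(NULL, D3D10_DRIVER_TYPE_HARDWARE,
                                       NULL, 0, D3D10_SDK_VERSION, outDevice)))
            return true;
    }
    return InitD3D9();
}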
 
Work properly? No, they will work properly in XP. To get D3D10 features and DX9.0L features you will have to get Vista; D3D10 will not be released for XP. In the words of Microsoft, the API is so built into the OS (kernel side? no idea) that it will be impossible to implement in XP via a patch.

Something sorta confused me: does this mean next-gen games that require DX10 won't run on DX9 cards? (Like the 7600GT?)

You will not see a game that requires DX10 for a long time; your DX9 card will work fine for some time to come.

We will be seeing games that utilize D3D10 features, like Crysis, Flight Sim X, and UT2K7, soon, but like I said, they won't require it, and that will have to wait till Vista ships and gets the patch to activate D3D10 (there's a rumor floating around that out of the box, Vista won't be DX10-ready).

Don't worry, your DX9 cards are still good, and if you get Vista with your DX9 card you should see some really nice performance improvements from the additional support; it seems that MS has no intention of fully leaving DX9 in the dust at this time.
 
I was sorta worried because I'm upgrading this week >_>; already got the new processor, just waiting on my motherboard and video card to get here...
 
Am I the only one, impressive and lovely as this all is, who severely doubts that many, if any, developers are going to care enough to model "hundreds of thousands of unique trees", for example?
 
I have no idea what you are implying; developers have always handled unique trees as time went on, and when needed they have implemented some sort of random tree effect/defect model.

Look what they did with EQ when they changed the 3D models multiple times. Granted, when it first came out they had one tree model which varied in size, with no different shape or form (they had burnt trees, and that was about it). When it changed to D3D9, they implemented a way to make all newly added trees unique; it's no forest, but they still put in time to make them, and I see other developers doing the same.

Hell, from the screenshots of Vanguard it looks like each tree has some sort of uniqueness to it, so I wouldn't exactly bank on the developers being lazy.
 
Just wondering about DirectX 10: when it's released and video cards are on their way, how will it compare to the Xbox 360?
 