Best way to measure video card heat?

DougLite

Well, what is the best way?

The recent [H] X1800XT article included a discussion of temperatures. However, the review team noted inconsistencies that made readily comparable temperature measurements difficult.

I'd like to suggest what is, in my judgment, a better method ... measuring power consumption. The card as a whole cannot dissipate more heat than the power it draws. The team at StorageReview came to a similar conclusion on hard drive heat - they measured top plate temperatures, but found the results hard to make consistent and repeatable, even though the measurements were easier to take.

Admittedly there are some obstacles to this. One is the impact of RAM on power usage; however, that can be readily isolated by measuring, for example, a 256MB X800XL and comparing it to a 512MB X800XL. All other things being equal, the extra power consumption can safely be attributed to the extra RAM. Once a large database of power consumption figures is built, it becomes fairly easy to judge the cooling requirements of a card from its power consumption. Thoughts?
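
As a quick illustration of that differencing idea, here's a minimal sketch in Python - every wattage below is made up for illustration, not a measurement:

Code:
# Hypothetical sketch: isolating the RAM's share of board power by
# differencing two otherwise-identical cards. All wattages are invented
# placeholders, not real measurements.

measured_watts = {
    "X800XL 256MB": 55.0,  # assumed load draw, watts
    "X800XL 512MB": 62.0,  # assumed load draw, watts
}

ram_delta_w = measured_watts["X800XL 512MB"] - measured_watts["X800XL 256MB"]
per_mb_mw = ram_delta_w / 256 * 1000  # extra milliwatts per extra megabyte

print(f"Extra 256MB of RAM draws ~{ram_delta_w:.1f} W ({per_mb_mw:.0f} mW/MB)")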
 
DougLite said:
I'd like to suggest what is, in my judgment, a better method ... measuring power consumption. The card as a whole cannot dissipate more heat than the power it draws. ...

I don't think this would work. Several factors come to mind, the most obvious being the cooling efficiency of the video card's fan. If one card's fan works much better than another's, but both cards pull the same power, then the card with the better fan will run cooler.

I think the best way to conduct the heat experiment on the reference card would be to put the two cards in the same case with all the same parts and take internal case temperature readings, making sure the ambient room temperature is the same at the start of each run. Take readings after 10 minutes with nothing running, then run a resource-hog program and take readings another 20 minutes later. Maybe set up a rig just for this that can be used on future cards, so we can compare them to past cards.

But I would like to thank Brent for including a section on heat in the review :)
 
Whitewolf said:
I don't think this would work. Several factors come to mind, the most obvious being the cooling efficiency of the video card's fan. If one card's fan works much better than another's, but both cards pull the same power, then the card with the better fan will run cooler. ...
Ahh, but this is precisely why power consumption should be measured. A given product may run "cooler" in terms of operating temperature, yet actually dissipate _more_ heat into the case because it has a more powerful cooler. Also, the thermal density and die size of the GPU can have a huge impact on how "hot" the GPU runs. A 90nm part is likely to run "hotter" in terms of temperature than a 110nm or 130nm part: it packs more transistors into the same or smaller area, which means less heatsink material is actually in contact with the GPU, so the die must climb further above ambient than a larger die and/or one with fewer transistors before the cooler becomes effective.

Another tremendous advantage of measuring power consumption is that it would give builders a better idea of what kind of PSU they would need.

Think of this example: you have a candle and a space heater. The candle may be hotter, but the space heater puts out much more heat. Heat and temperature are not the same thing. Measuring temperature gives no real insight into how much heat a GPU or CPU is giving off.
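
To make the heat-vs-temperature point concrete, here's a back-of-the-envelope sketch using a simple lumped thermal-resistance model (temperature rise = power x theta). The theta values are assumptions for illustration, not real GPU figures:

Code:
# Why die size changes temperature even at equal power, using a lumped
# thermal-resistance model. The theta values below are assumed.

def die_temp_c(power_w, theta_c_per_w, ambient_c=25.0):
    """Junction temperature given dissipated power and junction-to-ambient
    thermal resistance (degrees C per watt)."""
    return ambient_c + power_w * theta_c_per_w

# Same 60 W in both cases; the smaller 90nm die concentrates the heat,
# so its effective thermal resistance is higher (assumed values).
print(die_temp_c(60, 0.50))  # larger 130nm-class die: 55.0 C
print(die_temp_c(60, 0.75))  # smaller 90nm-class die: 70.0 C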
 
You hit the nail on the head.

Power consumption is what you want to measure.

Temps can be informative, but you have already addressed the problems posed in measuring them. They do seem useful for isolating problems, though.
 
And those thermal imaging cameras would be the best way to use temperature data. Not that the hottest spots would always be a problem, but after enough data was collected on different cards, you could tell which parts were getting into temperature ranges that cause problems. I'm guessing the video card makers use something similar, and it would help advance overclocking too.

But power consumption should be the basic information.
 
DougLite said:
Ahh, but this is precisely why power consumption should be measured. A given product may run "cooler" in terms of operating temperature, yet actually dissipate _more_ heat into the case because it has a more powerful cooler. ...

Good explanation, Doug - so few people understand the difference between heat and temperature. While I agree with what you're saying in principle, I think there are two concepts here. The amount of power being dissipated ("heat") matters to someone who wants to keep their case cool, or who has inadequate case cooling or an SFF case. Temperature matters to people worried about physically damaging their $$$ video card when they overclock, or to folks worried about stability.

Knowing the power consumed by a card will not let you calculate its temperature. Some generalizations can be made - "all other things being equal," a card that draws more current will run "hotter" - but too often all things are not equal. Someone mentioned an IR camera in an earlier post. That would be more accurate than a simple temp gun, but it ultimately suffers from the same weakness: emissivity. Expensive temp guns allow the emissivity to be changed, while cheap ones usually fix it at 0.95. The temperature indicated is only correct when that is the exact emissivity of the surface being observed. This is rarely the case, and it's another variable that is often overlooked.

For temperature measurements, a calibrated contact pyrometer works best - but that would be hard to use on a running video card. The card's onboard thermal sensor has the best chance of getting the temperature everyone wants, but it looks to me like these are often not calibrated correctly, or else the OEM adjusts the output scaling in a driver with the goal of putting people's minds at ease rather than shooting for accuracy.

Embedding a calibrated thermocouple in the heatsink, directly above the GPU, would tell you the GPU is at least the temperature reported, but again cannot give the true temperature because of the variable thermal resistance of the thermal paste.
 
I mentioned the use of a Thermal Imaging Camera (TIC) to test for heat in the X1800XT thread. The advantages are:

- accurate (as long as there are no reflective surfaces - IR just bounces off them)
- save-able (most thermal imaging cameras let you save an image or report; the uber-expensive ones ($$K) can record video too)
- presentable results
- reproducible

I used to use one in my work to test for heat damage, maintenance, etc., and I am lucky enough to be able to borrow it from time to time. I'll be borrowing it again soon to test my soon-to-be-built parallel water cooling rig. That will give me factual readings of the temperature of the coolant, blocks, radiator, power MOSFETs, chipset, PSU and even the cables - in fact, anything you point the TIC at, as long as it's not reflective, though this may have changed with the latest TICs.

Here's some examples:

My car was underperforming, and I suspected the chargecooler pump wasn't pushing enough coolant through the chargecooler rad, so I took a thermal image of it:

thermal_cc_06.jpg

That chargecooler should be cool to the touch, it certainly isn't!

Here's a picture of the window in the firewall (above the SP1 marker), notice how it reflects an almost perfect image.

thermal_engine_05.jpg

Infrared is an amazing thing and can give strange results to the uninitiated.

And just for fun:
My fat cat sitting on the sofa:
thermal_cat_01.jpg


and then a picture after she moved - notice the residual heat where she was sitting:
thermal_cat_02.jpg


I don't work for a thermal imaging company, I just wanted to show that there is technology to help you achieve your goal.

Regarding power measurement of video cards, how about putting an ammeter in series with the power connector on these cards? That will measure the current drawn from the PSU on that particular rail (5V, 12V), and from that you can calculate the power consumption. However, if the card also draws power through the PCIe slot, that's a lot more difficult!
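
The arithmetic itself is just volts times amps, summed per rail - a quick sketch with hypothetical meter readings:

Code:
# Per-rail power from an in-line ammeter. Currents are hypothetical
# readings; rail voltages are the nominal values.

rails = {  # nominal rail voltage -> measured current in amps
    12.0: 3.1,  # assumed reading on the 12V lead
    5.0: 2.4,   # assumed reading on the 5V lead
}

total_w = sum(volts * amps for volts, amps in rails.items())
print(f"Estimated draw through the connector: {total_w:.1f} W")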

Good luck on deciding which path you take.
 
I've been thinking about how we could measure power consumption. At this time we just don't have the setup or the expertise for it. All the points brought up here are good ones, and I don't yet have an answer for how to provide a reliable power draw test. I believe the information would be useful, but right now I don't see it happening. I just wanted you all to know it is on my mind and something I'll keep researching.

How much do thermal cameras run? That is an interesting idea.

I like the idea of measuring the inside case temp too; I've thought about that.

All of these things are on my mind. I want to keep providing heat and noise results in reviews; I think it is an important part of the experience of owning a video card.
 
Hey Brent, check out this gizmo that StorageReview uses. Admittedly, measuring video card power has added complexity because it's not easy to hook such a measuring device up to the slot's power lines; however, there is a way around this.

If you establish the power draw of your review system with an IGP (integrated graphics) and note that system's consumption at idle and load, then you can gauge the overall increase in power consumption when an add-in card is installed. That should be accurate to within a few watts.
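
Something like this, with placeholder wall-socket numbers. One caveat: the delta measured at the wall still includes PSU efficiency losses, so it slightly overstates the card's actual DC draw:

Code:
# Wall-socket differencing: measure the whole system on integrated
# graphics, then again with the add-in card, at idle and under load.
# All wattages are placeholders.

baseline = {"idle": 95.0, "load": 140.0}    # IGP-only system, watts
with_card = {"idle": 120.0, "load": 215.0}  # same system + add-in card

for state in ("idle", "load"):
    delta = with_card[state] - baseline[state]
    print(f"{state}: card adds ~{delta:.0f} W (includes PSU losses)")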
 
Mysterae, you don't mention emissivity specifically, but as you may know, emissivity and "reflectivity" are two different ways of looking at the same thing: E + R = 1. There are no perfect emitters and no perfect reflectors. It would be wrong to conclude that the temperatures indicated in your pictures are accurate. The emissivity of all the objects in the frame is not the same - if you doubt this, change the emissivity setting from 0.95 to 0.25 and see what temperatures your camera reports. They will increase tremendously.
The most obvious place where the emissivity is much less than 0.95 is in the first picture. The words "Lotus Chargecooler" are reported to be something like 25°C. Since this is a contiguous block of metal, the letters are the same temperature as the rest of it - closer to 70°C, in fact. The difference is purely down to the better finish that is no doubt on these raised letters.

This is a good illustration of why thermal cameras, and by extension temp guns, cannot reliably determine the temperatures of different objects - or even of one object - unless the emissivity is first measured. Since that takes a contact pyrometer, it would not make sense to add the subsequent steps of changing the emissivity value in the camera and then taking the picture, unless the frame is limited to the object for which the emissivity is correct. Taking a picture of an entire card, for example, would completely misrepresent the temperatures in various locations because the emissivity would not be uniform.
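
For the curious, here's roughly how big the effect is. This sketch uses the Stefan-Boltzmann relation and deliberately ignores reflected ambient radiation (which a real camera also has to model), so treat it as an illustration of the magnitude, not a calibration formula:

Code:
# If a camera assumes emissivity eps_assumed but the surface really has
# eps_actual, the radiance balance eps_assumed * T_reported^4 =
# eps_actual * T_actual^4 (reflected ambient ignored) gives the
# correction below. Illustrative only.

def true_temp_c(reported_c, eps_assumed=0.95, eps_actual=0.25):
    reported_k = reported_c + 273.15
    true_k = reported_k * (eps_assumed / eps_actual) ** 0.25
    return true_k - 273.15

# A shiny surface the camera reports at 25 C could really be far hotter:
print(f"{true_temp_c(25.0):.0f} C")  # ~143 C under these assumptions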

I have an idea that would allow the power dissipated by a video card to be measured indirectly - not by measuring the actual current, but by building an enclosure around the card alone. Measuring the airflow into the enclosure and the inlet/exit temperatures allows the power dissipated to be calculated. There are still variables to measure - two temperatures and a flow rate - but devices that can do this accurately are relatively inexpensive. As an aside, if only the power dissipated by the GPU is of interest, the same concept could be used with the water temperatures in and out plus the flow rate in a water cooling setup - again, all of the necessary variables are easy to measure in an accurate, repeatable, and inexpensive way.
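
The arithmetic for the air version would look something like this - standard air properties, made-up readings:

Code:
# Air calorimetry: P = mass_flow * c_p * delta_T. Flow and temperatures
# below are invented readings; constants are standard room-air values.

AIR_DENSITY = 1.2      # kg/m^3 near sea level at room temperature
AIR_CP = 1005.0        # J/(kg*K), specific heat of air
CFM_TO_M3S = 0.000471947

def dissipated_watts(flow_cfm, t_in_c, t_out_c):
    mass_flow = AIR_DENSITY * flow_cfm * CFM_TO_M3S  # kg/s through the box
    return mass_flow * AIR_CP * (t_out_c - t_in_c)

# e.g. 10 CFM through the enclosure, air warming from 25 C to 40 C:
print(f"{dissipated_watts(10, 25.0, 40.0):.0f} W")  # ~85 W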
 
Brent_Justice said:
At this time we just don't have the setup or the expertise for it.

Do you know of anyone who works for a power supply company or motherboard company and posts here?
 
yevaud, we're talking about comparing video cards for a review, not building spaceships to Mars. Lighten up.

I don't know the science of thermal imaging, nor do I pretend to. It's a great advancement for many reasons, especially its ease of use. I'm sure the designers know everything they need to when they build cameras for the military, NASA, medical, nuclear power, etc. You've got me sounding like a bloody advert now! :p

I know I can point it at something and see how hot it is without getting my fingers feckin' burnt. Take an image of one card, take an image of the other, compare, and conclude. Easy. Whether you trust a £1,000 or a £30,000 camera calibrated annually is up to you.

How many times do you wish you had thermal imaging eyes before taking a sip of hot tea?

thermal_cup_01.jpg
 
Mysterae said:
yevaud, we're talking about comparing video cards for a review, not building spaceships to Mars. Lighten up. ...

Forgive me if I came across as a know-it-all; I didn't intend disrespect. I simply wouldn't want thermography to become another ambiguous test method applied to video card reviews. While the technology has its place, IMO this is not it. It would be different if the temperature error were one or two degrees, but the readings could easily be off by 30-40°C. Indeed, red pigment may have a different emissivity than green pigment - you can imagine the bickering that would ensue.
 
DougLite said:
Ahh, but this is precisely why power consumption should be measured. ... Heat and temperature are not the same thing. Measuring temperature gives no real insight into how much heat a GPU or CPU is giving off.

I think you're talking apples and oranges. :p
Take CPU chips... if the 90nm architecture is done right, it should run cooler, as in the 90nm AMD chips; if it's not done right, as in Prescott, it will run hotter. It just matters how well it was engineered.

The space heater vs. the candle is not a good example, because you're not considering the surface area of the heat source. If you made a space heater with a core the size of a candle flame, it would have to be one heck of a space heater to equal the heat output of the candle flame.

I asked Brent a while ago to measure heat output on the cards for those of us who live in hot climates. If the card is running within its specs and you live in the frozen white north, who cares how hot it gets, unless you're looking to overclock the hell out of it? The heat output of a computer can raise the temperature of a room to unbearable levels in the south. So you either turn the AC down and freeze everyone out in the rest of the house, or be miserable playing games. :(
 
Whitewolf said:
I think you're talking apples and oranges. :p
Take CPU chips... if the 90nm architecture is done right, it should run cooler, as in the 90nm AMD chips; if it's not done right, as in Prescott, it will run hotter. It just matters how well it was engineered.
This example applies to chips of equal dissipation - since the 90nm chip is smaller, it will run hotter than a 130nm chip that can spread the same heat over a larger area, even if both draw the same amount of power.
The space heater vs. the candle is not a good example, because you're not considering the surface area of the heat source. If you made a space heater with a core the size of a candle flame, it would have to be one heck of a space heater to equal the heat output of the candle flame.
No, it would have to be one monster candle. Try heating your house with a candle. The candle may be at a higher temperature, but it produces _much less_ heat than a space heater.
I asked Brent a while ago to measure heat output on the cards for those of us who live in hot climates. If the card is running within its specs and you live in the frozen white north, who cares how hot it gets, unless you're looking to overclock the hell out of it? The heat output of a computer can raise the temperature of a room to unbearable levels in the south. So you either turn the AC down and freeze everyone out in the rest of the house, or be miserable playing games. :(
This is precisely why measuring power consumption is more important than measuring temperature. If I slap an XP-120 on a 3GHz Northwood, it may run at a lower temperature than an 800MHz PIII with a craptastic aluminum heatsink and a 60mm fan, but the Northwood will still dissipate much more heat.
 
Hmm... well, to measure power, use a voltmeter and an ammeter on each leg of the power connection; then voltage x amperage = power for each leg. Just add them up.
 
I just have to say I'm impressed by the level of knowledge, and the way it's expressed, on dis here thread.

I'd like to see more of this kind of thread and fewer "OMG U NOOB 7800 SLI RULEZ OVA UR 1800XT" threads.

:rolleyes:
 
DougLite said:
If you establish the power draw of your review system with an IGP (integrated graphics) and note that system's consumption at idle and load, then you can gauge the overall increase in power consumption when an add-in card is installed. That should be accurate to within a few watts.

That sounds like it should be within a few watts.

What you need is an AGP receptacle that can measure what the card is using (it plugs in between the AGP slot and the card).
 
needmorecarnitine said:
What you need is an AGP receptacle that can measure what the card is using (it plugs in between the AGP slot and the card).

It's called an extender card. Easy for someone with PCB and soldering skillz to make. The bottom of the card has a PCIe edge connector; the PCB has pin-outs and/or leads for measuring voltage/current, and copies of the original traces run up to a PCIe slot soldered in place. Fit the board under test to the extender card, power it up, and measure what you need remotely with a multimeter or scope. The extra millimeters of track on the PCIe bus shouldn't make a difference, but that should be tested.
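
If the extender card carried low-value sense resistors (shunts) in the supply traces, the readout arithmetic would be a sketch like this - all values hypothetical:

Code:
# Shunt-based readout: measure the small voltage drop across each sense
# resistor; I = V_drop / R_shunt, P = V_rail * I. Values are hypothetical.

shunts = [
    # (rail volts, shunt ohms, measured drop in volts)
    (12.0, 0.010, 0.031),  # 0.031 V / 0.010 ohm = 3.1 A on the 12V rail
    (3.3, 0.010, 0.018),   # 3.3V slot rail
]

total_w = 0.0
for rail_v, r_shunt, v_drop in shunts:
    amps = v_drop / r_shunt
    total_w += rail_v * amps
    print(f"{rail_v}V rail: {amps:.1f} A -> {rail_v * amps:.1f} W")
print(f"Total: {total_w:.1f} W")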
 
DougLite said:
This example applies to chips of equal dissipation - since the 90nm chip is smaller, it will run hotter than a 130nm chip that can spread the same heat over a larger area, even if both draw the same amount of power.

That can't be a true statement. We're talking about how efficiently something works. Let's say you have two electric furnaces that both pull the same current load. One was built 40 years ago and the other is a top-of-the-line model today. Are you telling me they will both work the same? If you compare a new 90nm AMD 3000+ to an old 130nm version, I believe the new 90nm chip runs cooler, unless I am greatly mistaken... again, we're talking about the overall heat emitted by one chip compared to the other.

DougLite said:
No, it would have to be one monster candle. Try heating your house with a candle. The candle may be at a higher temperature, but it produces _much less_ heat than a space heater. This is precisely why measuring power consumption is more important than measuring temperature.
So you think a space heater the size a mouse would use would heat a house? :D I think you missed the equal-volume part... i.e., surface area. A space heater 1/200 the size of a normal one, putting out 1/200 the heat, would not heat your house, nor would it be hotter than a candle.


DougLite said:
If I slap an XP-120 on a 3GHz Northwood, it may run at a lower temperature than an 800MHz PIII with a craptastic aluminum heatsink and a 60mm fan, but the Northwood will still dissipate much more heat.
I thought this was about measuring video cards. As I pointed out in my original post, the testing equipment would have to remain constant for the tests to have any reliable results... i.e., build a test computer just for this and keep all the hardware constant except the video card under test.
 
Whitewolf said:
As I pointed out in my original post, the testing equipment would have to remain constant for the tests to have any reliable results... i.e., build a test computer just for this and keep all the hardware constant except the video card under test.

What do you think would be the margin of error on your proposed method?
 
needmorecarnitine said:
What do you think would be the margin of error on your proposed method?

If you conducted it like an experiment and made everything exactly the same, I think you could get some good results. Everything would have to be timed the same, with the same room temperature at the start (the bigger the room the better, with no AC vents near you) and the same load running on the computer. The only thing I see that might be a problem is the placement of the gauge, so that no fans are blowing directly on it. Also, the more temperature readings taken, the smaller the margin of error.
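
On the "more readings" point: averaging n readings shrinks the random error roughly as 1/sqrt(n). A quick sketch with made-up case temperatures:

Code:
# Standard error of the mean falls as 1/sqrt(n), so repeated readings
# tighten the result. The readings below are invented.

import statistics

readings_c = [41.2, 40.8, 41.5, 41.0, 41.3, 40.9, 41.1, 41.4]

mean_c = statistics.mean(readings_c)
sem_c = statistics.stdev(readings_c) / len(readings_c) ** 0.5

print(f"{mean_c:.2f} C +/- {sem_c:.2f} C (standard error, n={len(readings_c)})")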
 
No speculation on numbers?

How accurate would the thermometers have to be? tenth of a degree? hundredth?
 
needmorecarnitine said:
No speculation on numbers?

How accurate would the thermometers have to be? tenth of a degree? hundredth?

Hundredth... ha! :D
Well, I don't think that for my needs, or most people's, anything better than 1-2 degrees F would be needed, and I think that could be achieved pretty easily. After all, can you tell the difference between 70 and 72 degrees F outside on a nice day? :p
 
You think a thermometer that can't tell the difference between 70 and 72 degrees will be useful in calculating the power consumption of a video card?
 
needmorecarnitine said:
You think a thermometer that can't tell the difference between 70 and 72 degrees will be useful in calculating the power consumption of a video card?

I was asking if a human could tell the difference. My interest in heat output is my comfort while playing games. 2 degrees is not going to kill me, but 20 degrees sure will not get me laid that night, and that will hurt! :D
If a thermometer can't tell the difference, then I would ask for a refund on it. :p
 
I think the intent of this thread is to measure power usage or heat to help with overclocking, not so much how it will heat up your living room.
 