Just got my GX640 and my CPU is idling around 55C and GPU around 68C! My GPU temp seems really high for idle. Do I have a defective unit??
What are the temps for other GX640 owners?
-
Mine idles at 48/42 for CPU and 63 for GPU when at default clocks of 625/1000. Downclocking the GPU gets me around 54 degrees, though.
However, this is with an ambient temperature of 23 degrees Celsius. What's your ambient temp?
Your temperatures under load are more important, though. -
My temperatures are roughly the same as lackofcheese's. Ambient temperature is similar; CPU idling at 1.2 GHz at ~44, GPU idling at 61 on stock clocks and at 54-55 C downclocked.
-
CPU: ~50C
GPU: ~63C
@ room temperature
Mine was also a bit hot during idle when I first unboxed it, but after heavy usage for a day my idle temps dropped down a bit. Probably the thermal compound needs to settle. -
How do I check ambient temp? I'm using HWMonitor. Is it the ACPI? Mine reports 60 C
-
BenLeonheart walk in see this wat do?
-
Just ran 3DMark06 and my max GPU temp was 93 C with the CPU at 83 C. These seem way high.
-
The CPU isn't too bad, but the GPU temp is somewhat high. For reference, my max temps in 3DMark06 were 81 for CPU and 86 for GPU.
However, if it's, say, 30 degrees Celsius in the room you're sitting in, the temps you're getting wouldn't be anything out of the ordinary. -
My ambient is 25 C.
Do you guys think it's because I just got my laptop and like the poster above said, the thermal compound hasn't settled yet? -
Well, the best way to know for sure is to give it another day or two.
93C for the GPU with an ambient of 25C is definitely on the high side of things. -
Typically, a fresh application of thermal compound takes a bit of time and heat to fill in the uneven spaces or gaps between the heatsink and chip surface. The compound "thins" and fills when heated, and "thickens" when it cools.
edit:
here's a link to a review as an example of how some thermal compounds perform over time from initial use
[H]ard|OCP - Thermal Paste Shootout - Q209 -
-
Well, it's probably still an acceptable temperature, it's just higher than what other people have been getting with the GX640 - I got around 86C during 3Dmark06. I don't think 3DMark06 is that much more stressful than a typical game, however, especially because the tests aren't very long.
The GX640 is a great laptop, though, and even though the cooling could be better, it's definitely preferable to a 7.5 pound Sager NP8690, even though the Sager has an HD 5870.
Also, for reference, I hit 101C running Furmark for 8 minutes (Full screen at 1680x1050); see my post in the Owners' thread.
The strange thing is that the 5850 didn't downclock; the fan kicked up to an even higher speed once it got that hot, yet it wouldn't run at full speed below 100C.
I don't intend to run FurMark again, though, at least not for 8 minutes. -
Yeah, I've found this with my machine too; the fan doesn't really respond to the GPU alone but to all the components, which is quite strange.
-
NotEnoughMinerals Notebook Deity
Downclocked the GPU to 450/450 and it sits at 49/55/50.
At standard clocks my GPU idles at 56/62/58.
CPU's in the mid 50s.
Don't have a thermometer around lol... I'd guess ambient is 22-25... -
You guys should also post which temperature monitoring software you're using. The programs aren't reading a thermometer; each one assumes a TjMax value for each sensor (for Intel CPUs the reported temperature is basically TjMax minus the sensor's distance-to-TjMax reading), and often they get that value wrong.
-
NotEnoughMinerals Notebook Deity
I use GPU-Z for the GPU and a mix of SpeedFan and HWMonitor for the CPU.
SpeedFan for quick reference and because I can incorporate it into a Samurize config, even though I know it's not the most precise reading -
-
NotEnoughMinerals Notebook Deity
-
Use the AMD sensor tool, that'll tell you, or get a manual reader.
-
NotEnoughMinerals Notebook Deity
Definitely not, the i5 is definitely cooler. How much cooler I'm not sure; there's 10 more watts of power to dissipate on the quad.
-
Here's something cool I put a bit of effort into:
I ran 3DMark06 and got a score of 11500 at stock clocks, which is nothing special for this card. However, what I did do was make an awesome graph of all the important status figures for the CPU and GPU.
I spent a few hours working out the best way to set it up, but as an Engineering student, if I can't make a decent graph, what good am I? Of course, now that I have my tools set up (including a Python script to automatically generate the graph), I can make this kind of graph for anything I like.
In any case, the graph follows (thumbnailed):
Ambient temperature was my usual 23C.
What's really awesome is you can see which data correspond to which tests. -
Neat!
Kudos to lackofcheese for some awesome work...
A pity, since I consider myself an engineer too... but I'm learning, so I'll get there!
Anyways,
what were your GPU's downclocked clocks?
Everybody mentions these 'downclocks' but nobody actually says what clocks.. -
Well, PowerPlay downclocks to 100/1000, which doesn't seem to make much of a difference to the level of heat - I think the memory plays quite a big role in the card's idle power consumption. However, because PowerPlay seems to cause trouble, people have been manually underclocking. From experimentation, 300/300 is a good choice; going any lower doesn't seem to make much difference in temperatures.
-
Very nicely done there, cheese; I didn't realise the C-states influenced it that much - you can see quite a temp change between the two.
The temperatures are also nice; it's pleasant to know other parts of the GPU run cool while the memory and core stay at respectable temperatures. Has the machine had any thermal repasting? -
By the way, if anyone's interested I can distribute my Python script, but I want to improve it and comment it properly first.
@meraki1990
105C in Furmark is high indeed, but the main thing to take out of that is not to run Furmark. Once my script is nicely done up you can do a 3DMark06 run similar to mine so we can compare temperatures.
I guess your 920XM isn't helping your temps; if you want good performance, you'd be much better off reducing the CPU's power consumption than the GPU's. -
Ahh, I thought they were C-states... what an idiot haha. But it's interesting to see that 3DMark uses multiple cores without trying anything truly parallel.
The temps I think are good for a stock 5850; if I had one I'd be taking off the heatsink and replacing the pads with paste. Death2theworld really dropped his temps with that.
Really looking forward to the completed Python script too; if released, it'll be a tool I definitely use. -
Okay, here's the python script:
http://pastebin.com/WYBJEV12
First of all, the tools you need :
- Python 2.6 with the NumPy and Matplotlib packages (they're nice for graphing)
- GPU-Z and RealTemp from TechPowerUp. I chose these because they provide relatively complete GPU and CPU monitoring data respectively, and both can save their data to logfiles.
In GPU-Z you will need to tick "Log to file" in the Sensors tab.
In Real Temp, there is a Log File option in the settings; tick it for best results. I also recommend ticking the "TM Load" option, which means the CPU load will be represented the way it is in Task Manager, because the default load measurement is something different. You should also verify that the TJMax setting matches your CPU (mine defaulted to 105C which is correct for my Arrandale CPU).
Finally, you need to change the file addresses at the beginning of the script to match up to wherever the log files happen to be on your system.
To avoid trouble with file conflicts, close both GPU-Z and RealTemp before running the script. Additionally, it's best to make a copy of your log files so you can regenerate the same graphs, or modify them, whenever you like - the raw data is much more valuable than the graphs for keeping on your system.
When you run the script, it should read the data from the files and produce a graph.
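If you just want a feel for what's involved before grabbing the script, here's a stripped-down sketch of the idea (not the actual script): read a comma-separated GPU-Z sensor log, print min/avg/max, and plot one temperature column with matplotlib. The path and column name below are placeholders, so check the header line of your own log and adjust them.

import csv
import matplotlib.pyplot as plt

GPUZ_LOG = r"C:\logs\gpuz_sensor_log.txt"    # placeholder path - use your own
TEMP_COLUMN = "GPU Temperature [C]"          # placeholder - match your log's header

temps = []
with open(GPUZ_LOG) as f:
    reader = csv.reader(f)
    header = [h.strip() for h in next(reader)]
    temp_idx = header.index(TEMP_COLUMN)
    for row in reader:
        # skip short or empty rows, which GPU-Z occasionally writes
        if len(row) > temp_idx and row[temp_idx].strip():
            temps.append(float(row[temp_idx]))

print("min/avg/max: %.1f / %.1f / %.1f" %
      (min(temps), sum(temps) / len(temps), max(temps)))

plt.plot(range(len(temps)), temps)           # one point per logged reading
plt.xlabel("Sample number")
plt.ylabel("Temperature (C)")
plt.title(TEMP_COLUMN)
plt.show()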
If you have any questions, they'll probably have to wait until I wake up. Hopefully, I'll see some awesome graphs from other people when I do.
EDIT: I think one aspect that needs a little work is how it handles having GPU-Z and/or RealTemp off or when they miss readings. In my experience, they both seem to do it every so often. I can't seem to find a way to stop it from joining any points that have a gap of more than 1s between them, which would presumably be the best solution.
Currently, it has plots in the load graph that spike up to show when GPU-Z and RealTemp miss readings, and while this is good for when they are turned off altogether, it doesn't look too good when they seem to occasionally miss one reading.
EDIT2: I've fixed and improved a number of things. In particular, I've added generation of min/avg/max statistics to the script. I could add these to the plot window later, but at the moment I can't be bothered working on fitting the text in. The only major improvement left is what I mentioned before - leaving gaps if there are no readings from GPU-Z or RealTemp for a certain period of time. I think I know how to do it, but it will require some extra effort. Another thing I could do is label the x axis with actual times instead of the number of seconds from start. -
http://pastebin.com/E03Eg8L6
Updated version. Now there will be gaps in the graph whenever GPU-Z or RealTemp miss readings.
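In case anyone wants to do something similar in their own code, the general idea goes something like this (a simplified sketch, not the exact code from my script): insert NaN wherever consecutive timestamps are too far apart, since matplotlib won't join points across a NaN.

import numpy as np
import matplotlib.pyplot as plt

def with_gaps(seconds, values, max_step=2.0):
    # Return new arrays with a NaN inserted at every gap larger than max_step,
    # so the plotted line breaks instead of joining across missed readings.
    out_t, out_v = [seconds[0]], [values[0]]
    for t0, t1, v in zip(seconds, seconds[1:], values[1:]):
        if t1 - t0 > max_step:
            out_t.append(t0 + max_step)   # dummy x position for the NaN point
            out_v.append(np.nan)
        out_t.append(t1)
        out_v.append(v)
    return np.array(out_t), np.array(out_v)

# Tiny made-up example: a ten-second hole between the fourth and fifth samples.
t = [0, 1, 2, 3, 13, 14, 15]
temp = [60, 61, 63, 64, 70, 71, 72]
plt.plot(*with_gaps(t, temp))
plt.xlabel("Seconds from start")
plt.ylabel("GPU temperature (C)")
plt.show()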
A screenshot of the results of the script - the top is the Python window with statistics, and the bottom is the graph window. I prefer to full screen the graph window when making graphs, though, especially ones like this one considering it covers a few hours and many thousands of data points. Matplotlib gives you some nice options, including the ability to save the graph directly to png which is what I did for my previous one. That graph was my GX640 mostly idling (video card downclocked to 300/300) for a few hours while I went to sleep yesterday, though uTorrent was probably on at the time, and maybe some other stuff.
I look forward to criticism and/or modifications to my code. If anyone has any suggestions on how to improve it, that would be cool too. I think one thing that would improve it is adding framerates to the graph, but I'd have to find a framerate logging tool. -
BenLeonheart walk in see this wat do?
Temps are low..
that's nice...
85C while on full 100% load... -
Here's something that should make everyone here happy. It would seem that the temperature we (or rather, GPU-Z, HWiNFO32, Furmark and HWMonitor) thought was the core likely wasn't. The AMD GPU Clock Tool only sees three sensors, and the fact that the extra sensor in the other tools matches the MemIO very closely suggests it's really the same sensor. The slight differences between them are a little strange, though.
In particular, this would be a good explanation for why my GX640 merely spun the fan faster when I hit 100C in Furmark - the core was still significantly cooler than that value, and so obviously the GPU didn't throttle or shut down.
While 100C for MemIO seems high, according to ziddy123: -
Cheers, -
My GPU hit 67 playing Warcraft III for an hour, haha.
-
NotEnoughMinerals Notebook Deity
GPU-Z sometimes shows the fan RPM and sometimes doesn't, as well as constantly saying my fan is spinning at 30%.
-
If you are going to post your GPU temps, you ought to either....
Post a screenshot of HWiNFO32. If you play a game, keep it running and then post after you finish playing so we can see the MAX temps.
or
Post a screenshot with the AMD GPU Clock Tool, same method as above.
This thread is sort of meaningless without proof. Anyone can claim whatever they like for their GPU and CPU temps. -
NotEnoughMinerals Notebook Deity
Because why would we lie?
Kind of demanding for someone who just started posting on these boards. We've uploaded screenshots everywhere -
None have been posted in this thread, and this thread is specifically about it. And it's not demanding at all: print screen, post picture.
As for lying, yeah, people lie about their notebooks all the time. It's not intentional; they just guess from memory or exaggerate.
OK, just an example. I just played a few races in Dirt 2, three of them. GPU overclocked, core 800, memory 1,100. As you can see, it doesn't matter whether you use HWiNFO32 or the AMD GPU Clock Tool, the memory readings are identical. TSS0 = GPU DispIO = GPU Core. TSS1 = MemIO = Memory Controller. TSS2 = GPU Shader = Shader Core.
There was someone in the G73 thread asking questions about reading the temperatures, a GX640 owner. The important one is the Core, which is TSS0 or DispIO. The Memory Controller sensor and the Shader sensor are both on the Core, btw. Memory Controller does not equal video RAM temperature; there aren't any sensors on those, and there never have been on any mobile GPU, ever...
My observation tracking G73JH temps from various owners is that the IDLE temps vary among us. Some have idle core temps around 49C, others as high as 57C. Some have Core and MemIO temps close together, others far apart. But under load, when we are gaming, our temps are stable around 77-79C for Core, 88-92C for MemIO, and 79-82C for Shader. You guys may observe the same thing: idle temps varying, but load temps the same, which is the important part anyway, right?
Core: 79C Shader: 81C Memory Controller: 90C
-
What, so someone couldn't photoshop their temps if they wanted to? A screenshot doesn't really constitute proof once you get down to it.
-
If you want to post your temperatures, just post the screenshots. It's not a hard or time-consuming thing to do... You can almost always tell if someone Photoshops a screenshot, too.
-
I'm happier posting a nice graph like I did on the previous page, for the most part. It gives you much more information than just min/avg/max.
-
Looking at your screenshot, it's clear that TSS0 = GPU DispIO, TSS1 = GPU MemIO = GPU Thermal Diode, and TSS2 = GPU Shader, because those figures quite clearly match up between GPU Clock Tool and HWiNFO32. -
If this is such a hassle, then don't bother. But mind you, my GPU clock is 800 MHz and memory is 1100 MHz -
BenLeonheart walk in see this wat do?
I'm also interested in whether MSI's single-fan solution is really that great...
on either the GX640 or the GX740...