
    D901C / Sager 9260/9262 / Pro-Star 9191 Owners Thread

    Discussion in 'Sager/Clevo Reviews & Owners' Lounges' started by Wu Jen, Jun 26, 2007.

  1. Wu Jen

    Wu Jen Some old nobody

    Reputations:
    1,409
    Messages:
    1,438
    Likes Received:
    0
    Trophy Points:
    55
    Just what I was thinking... after my upgrade I'll have to get back to updating it :D
     
  2. Vedya

    Vedya There Is No Substitute...

    Reputations:
    2,846
    Messages:
    3,568
    Likes Received:
    0
    Trophy Points:
    105
    What are you upgrading to, WJ?
     
  3. Wu Jen

    Wu Jen Some old nobody

    Reputations:
    1,409
    Messages:
    1,438
    Likes Received:
    0
    Trophy Points:
    55
    Going from a 2.93GHz X6800 to a 2.83GHz Q9550, and from 2x 7950 GTXs to 2x 9800 GTXs when they're released in June or July. My system already has 4GB of 800MHz RAM and 3x 200GB 7200RPM drives, so no upgrade is needed there. I also dual-boot Vista x64 and WinXP Pro.
     
  4. dazzyd

    dazzyd Notebook Evangelist

    Reputations:
    111
    Messages:
    498
    Likes Received:
    2
    Trophy Points:
    31
    WHOA, that's gonna be a beast...

    I got my Q6700 a couple of weeks before the Q9450 was released... gahhh!!

    Oh well, looks like I'm stuck with it...
     
  5. Wu Jen

    Wu Jen Some old nobody

    Reputations:
    1,409
    Messages:
    1,438
    Likes Received:
    0
    Trophy Points:
    55
    I'm personally hedging my bets that eventually more games/apps will take advantage of quad cores, etc. The GFX update for me is a no-brainer :), as is the motherboard update. I have one of the older mobos, since my D901C is almost a year old now.
     
  6. dazzyd

    dazzyd Notebook Evangelist

    Reputations:
    111
    Messages:
    498
    Likes Received:
    2
    Trophy Points:
    31
    Awesome, dude... post benchies when you get your beast :D
     
  7. Vedya

    Vedya There Is No Substitute...

    Reputations:
    2,846
    Messages:
    3,568
    Likes Received:
    0
    Trophy Points:
    105
    Do you have that Diamond upgrade plan or what?
     
  8. Wu Jen

    Wu Jen Some old nobody

    Reputations:
    1,409
    Messages:
    1,438
    Likes Received:
    0
    Trophy Points:
    55
    Yes, I do. I thought about it when I first bought my notebook and wanted the no-dead-pixel policy. Since that was included in the Diamond Plan, along with their offer to 'buy back' old parts, I decided to go with it. I'll be testing it out firsthand soon and will post my results.

    It should be interesting.
     
  9. Vedya

    Vedya There Is No Substitute...

    Reputations:
    2,846
    Messages:
    3,568
    Likes Received:
    0
    Trophy Points:
    105
    Keep us updated ;)
     
  10. Doodles

    Doodles Starving Student

    Reputations:
    178
    Messages:
    880
    Likes Received:
    0
    Trophy Points:
    30
    Well, me AND Wu Jen will be going for the same parts when they're available, from the same retailer... I'll RACE YA!
     
  11. dazzyd

    dazzyd Notebook Evangelist

    Reputations:
    111
    Messages:
    498
    Likes Received:
    2
    Trophy Points:
    31
    @ARGH, I redid my 3DMark06 test; here are the results.

    Note this is running Vista x64 SP1 with a modded BIOS, clocked at 600/975/1500:

    SM 2.0 score: 4917
    HDR/SM 3.0 score: 6297
    CPU score: 3849

    Grand total: 13118.

    Woohoo, I'm stoked... can probably tweak it a bit more, I reckon.
     
  12. dazzyd

    dazzyd Notebook Evangelist

    Reputations:
    111
    Messages:
    498
    Likes Received:
    2
    Trophy Points:
    31
    OK, I have also done my 3DMark Vantage benchie:

    GPU score: 7256
    CPU score: 9461
    Overall: P7705
     
  13. dazzyd

    dazzyd Notebook Evangelist

    Reputations:
    111
    Messages:
    498
    Likes Received:
    2
    Trophy Points:
    31
    Right, chaps, I have done my obligatory 3DMark06 and Vantage benchies...

    How does that compare, in the general scheme of things, with the same lappy as mine running Vista x64? :D
     
  14. Wu Jen

    Wu Jen Some old nobody

    Reputations:
    1,409
    Messages:
    1,438
    Likes Received:
    0
    Trophy Points:
    55
    That seems mighty low. :( Could be due to Vista. I get 10.5k with my 2x 7950 GTXs in 3DMark06.

    Just one 8800 is equal to 2x 7950s, so I would think you should be getting more than a 2.5k increase.
     
  15. dazzyd

    dazzyd Notebook Evangelist

    Reputations:
    111
    Messages:
    498
    Likes Received:
    2
    Trophy Points:
    31
    Hmm, that is interesting... ARGH was getting more than 12k with his setup?
     
  16. ARGH

    ARGH Notebook Deity

    Reputations:
    391
    Messages:
    1,883
    Likes Received:
    24
    Trophy Points:
    56
    I get 12,800 with my setup.

    But that means nothing. Crysis is where the real benchmark is; that will determine whether your overclock has worked.
     
  17. dazzyd

    dazzyd Notebook Evangelist

    Reputations:
    111
    Messages:
    498
    Likes Received:
    2
    Trophy Points:
    31
    Hmm, OK, I'll have to install Crysis now. But 13118 was more than I was expecting. :D
     
  18. ARGH

    ARGH Notebook Deity

    Reputations:
    391
    Messages:
    1,883
    Likes Received:
    24
    Trophy Points:
    56
    I would have expected 14k with a GPU OC and your setup.
     
  19. Wu Jen

    Wu Jen Some old nobody

    Reputations:
    1,409
    Messages:
    1,438
    Likes Received:
    0
    Trophy Points:
    55
    Yeah, something fishy there, guys, like your SLI isn't working. Both of you should have a higher score than I do, just because of the quad-core processors you both have vs. my dual core. Do me a favor: disable your SLI, run it again, and post your score. See if it goes down any.

    With me getting 10.5k with my 7950s at the stock resolution (i.e., 1280x1024), you guys should be hitting at least 14-15k. That would be my expectation, anyhow.
     
  20. dazzyd

    dazzyd Notebook Evangelist

    Reputations:
    111
    Messages:
    498
    Likes Received:
    2
    Trophy Points:
    31
    Hmm, yeah, I have checked with GPU-Z and it shows two GPUs enabled.

    I'm using the 175.75 drivers too... OK, I will disable SLI and run 3DMark06 again.
     
  21. ARGH

    ARGH Notebook Deity

    Reputations:
    391
    Messages:
    1,883
    Likes Received:
    24
    Trophy Points:
    56
    I get 10k with a single card and 12,800 with SLI. My scores are actually fine for 8800M SLI. I know it looks like I should be scoring higher, but that's how these things are right now. It was discussed in previous threads; everyone expected higher numbers.

    I mean, my scores are in line with the new D900C.
     
  22. dazzyd

    dazzyd Notebook Evangelist

    Reputations:
    111
    Messages:
    498
    Likes Received:
    2
    Trophy Points:
    31
    Cool, that's good to hear, ARGH. I got slightly higher than that with my rig, so that should be right. But I haven't yet seen what other people have scored with the same rig as mine. :D
     
  23. Wu Jen

    Wu Jen Some old nobody

    Reputations:
    1,409
    Messages:
    1,438
    Likes Received:
    0
    Trophy Points:
    55
    You normally expect to see at least a 35-45% increase with SLI. I would have thought that the 8800s would generate more than that. I wonder if it's a problem with the D901C mobo and chipset.

    Aren't the Dell 1730s with 8800 SLI getting 15k? Or am I mistaken?
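
    (Rough arithmetic on the numbers in this thread: ARGH's 10k single card vs. 12,800 SLI works out to 12,800 / 10,000 = 1.28, only a 28% gain. The 35-45% rule of thumb would predict roughly 13,500-14,500 from that single-card score, which is about where the reported M1730 results land.)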
     
  24. dazzyd

    dazzyd Notebook Evangelist

    Reputations:
    111
    Messages:
    498
    Likes Received:
    2
    Trophy Points:
    31
    Hmm, really? I've only heard from other guys on this forum that the M1730 got around 14000... that was running XP, though.

    There was talk that the D901C chipset and GFX drivers had some issues which were preventing people from getting more out of these cards.
     
  25. Wu Jen

    Wu Jen Some old nobody

    Reputations:
    1,409
    Messages:
    1,438
    Likes Received:
    0
    Trophy Points:
    55
    Yeah, I just googled it and found one where a 1730 8800 SLI user got 14,512. That makes me a bit sad. I wonder if the 9800s will suffer because of the chipset as well. More than likely.
     
  26. Doodles

    Doodles Starving Student

    Reputations:
    178
    Messages:
    880
    Likes Received:
    0
    Trophy Points:
    30
    So our mobos are an inherent bottleneck...
     
  27. dazzyd

    dazzyd Notebook Evangelist

    Reputations:
    111
    Messages:
    498
    Likes Received:
    2
    Trophy Points:
    31
    I hope not, ay... hope there is a new BIOS to fix this, or new GFX drivers.
     
  28. DFTrance

    DFTrance Notebook Deity

    Reputations:
    317
    Messages:
    754
    Likes Received:
    0
    Trophy Points:
    30
    Hi - actually, the score posted for 8800M GTX SLI is quite standard. Look at Justin & Chaz's benchmarks on this forum:

    http://www.notebookreview.com/default.asp?newsId=4392

    Dazzyd might be 500 points lower in 3DMark06 due to drivers of some sort, or a different test OS than the one Chaz used. You can actually run the test several times and the values will vary by about that many points depending on conditions.

    I have been telling people that there is something not quite fine-tuned about Clevo SLI and this generation of cards. That is, the real performance increase with SLI is at most 30% - far from 40% or more (the minimum I would expect on average).

    I doubt it is the chipset, as this time around the chips are built by NVIDIA (so the same for every brand, I guess). This was actually the reason Clevo told us we needed to replace our motherboards - if people have as good a memory as I do.

    Trance
    PS: I saw some posts referring to 14000-plus, basically with both the GPU and CPU OC'ed in a way Clevo doesn't sanction. Pretty much what Dell can do while OC'ing in a promoted fashion.
     
  29. lastrebelstanding

    lastrebelstanding Notebook Evangelist

    Reputations:
    265
    Messages:
    510
    Likes Received:
    0
    Trophy Points:
    30
    Where do you actually download a new BIOS for the 9262? I've tried the Sager website, but there are only drivers to download, and it's the same on the Clevo website.

    "Everest" reports my BIOS version as 6.00. Is that the newest?
     
  30. Shyster1

    Shyster1 Notebook Nobel Laureate

    Reputations:
    6,926
    Messages:
    8,178
    Likes Received:
    0
    Trophy Points:
    205
    Did you happen to see what CPU that 1730 was running? My bet's on a dual-core of some sort. One of the biggest "problems" with the D901C and SLI, in particular, is the quad-core CPU (which is really more of an ersatz quad, since it's really two dual cores taped together).

    The basic problem appears to come from the increased bus latency generated by, among other things, the inordinate amount of L2 cache coherency traffic that has to flow across the FSB when coherency transactions between cores 0 or 2 (1st pair) and cores 1 or 3 (2nd pair) take place. In that case, the bus is monopolized to the exclusion of other functions (since the FSB is a common bus, use by one component excludes use by any other, and since cache coherency has a very high priority, it usually trumps any other ongoing operation). The reason for this is that, while there is on-chip communication between the L2 caches within each core pair (i.e., 0 & 2, and 1 & 3), communication between the L2 caches of the two core pairs is not on-die and must pass across the FSB. If I recall correctly, each coherency transaction that crosses the bus takes up 14 bus cycles, during which no other traffic, such as GPU-related traffic, can pass on the bus.

    A dual-core CPU does not suffer from this infirmity, as all of its L2 cache is on-die and therefore generates no bus traffic during cache coherency transactions. As a result, some of the dual-core CPUs will outperform the quad-core CPUs even at stock clocks and, accordingly, will generate higher benchmark numbers.

    This theory was at least indirectly substantiated, in outline if not in detail, by the OC'ing efforts of an NBR poster named dexgo - who seems to have disappeared as of late - who OC'd a Q6600 and, more importantly to my mind :D, the FSB, and was able to break through the benchmark limitations he was seeing with a single 8800M on the Q6600.

    So, if you take a dual-core 1730 that has a high stock clock frequency and OC it, you will almost certainly see better benchmark results out of that 1730 than you would out of a D901C with a quad-core CPU.

    Incidentally, this problem (i.e., inordinate bus latency due to L2 cache coherency transactions) is one of the underlying reasons for Intel's redesign of the quad-core architecture in Nehalem, which gets rid of the FSB in favor of a new point-to-point interconnect called QuickPath - thereby overcoming the bottlenecking caused when one high-priority component monopolizes the FSB - and adds a new L3 cache. One of the benefits of this new architecture is that L2 cache coherency transactions among the four cores will be carried out over QuickPath lines between the L2 cache components, yielding faster communication while avoiding the undue latency caused by having such communication monopolize the common bus.
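
    (For anyone curious, here is a minimal, untested C sketch for Windows that makes this kind of cross-core coherency traffic visible: two threads pinned to different cores increment counters that share one 64-byte cache line, then the run is repeated with the counters padded onto separate lines. The "shared line" run should be noticeably slower. Which logical cores share a die, the core numbers used, and the iteration count are all assumptions - adjust for your CPU.)

        /* falseshare.c - toy demo of cross-core cache-coherency cost.
           Untested sketch; build: cl /O2 falseshare.c */
        #include <windows.h>
        #include <stdio.h>

        #define ITERS 100000000

        /* Two counters on the SAME 64-byte cache line: each write by one core
           invalidates the line in the other core's cache, forcing the kind of
           coherency transactions described above. */
        static volatile LONG same_line[2];

        /* Padded counters: one 64-byte line each, so no ping-pong. */
        static __declspec(align(64)) volatile LONG padded[2][16];

        static DWORD WINAPI bump(LPVOID p)
        {
            volatile LONG *c = (volatile LONG *)p;
            long i;
            for (i = 0; i < ITERS; i++)
                (*c)++;
            return 0;
        }

        /* Time two counting threads pinned to cores 0 and 2 (assumed here to
           sit on different dies of a Core 2 Quad; the actual pairing varies). */
        static double timed_run(volatile LONG *a, volatile LONG *b)
        {
            HANDLE t[2];
            DWORD start = GetTickCount();
            t[0] = CreateThread(NULL, 0, bump, (LPVOID)a, CREATE_SUSPENDED, NULL);
            t[1] = CreateThread(NULL, 0, bump, (LPVOID)b, CREATE_SUSPENDED, NULL);
            SetThreadAffinityMask(t[0], 1 << 0);   /* pin to core 0 */
            SetThreadAffinityMask(t[1], 1 << 2);   /* pin to core 2 */
            ResumeThread(t[0]);
            ResumeThread(t[1]);
            WaitForMultipleObjects(2, t, TRUE, INFINITE);
            CloseHandle(t[0]);
            CloseHandle(t[1]);
            return (GetTickCount() - start) / 1000.0;
        }

        int main(void)
        {
            printf("shared line: %.2fs\n", timed_run(&same_line[0], &same_line[1]));
            printf("padded:      %.2fs\n", timed_run(&padded[0][0], &padded[1][0]));
            return 0;
        }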
     
  31. DFTrance

    DFTrance Notebook Deity

    Reputations:
    317
    Messages:
    754
    Likes Received:
    0
    Trophy Points:
    30
    It is a well-known fact that software (games included) using just one or two cores performs slightly better on an equivalent dual-core CPU than on a quad-core CPU (equivalent in terms of GHz, say 2.4GHz vs. 2.4GHz), provided that background processes are kept to a minimum. Indeed, it performs even better if the dual core has a higher clock. But that's just it: slightly, and in some scenarios.

    What we are comparing, though, is the gaming performance of a Core 2 Quad (desktop) against Centrino mobile processor technology. Pick any Extreme mobile processor against a Q6700 and the second wins the performance match. Just get your wPrime score with only two threads on a quad core and the same on a Core 2 Centrino at 2.7 and you'll have a winner (see the sketch at the end of this post). For instance, my Q6600 with only two threads gets 30.281 (default test). You can use this table to compare with others:

    http://forum.notebookreview.com/showthread.php?t=123570

    If Shyster were correct, then wPrime scores with only two threads on a quad core would be much slower than the same on a Centrino mobile. And it's quite the contrary.

    Taking the above into consideration, the impact of cross-core traffic on bus overload in our systems may be greater or smaller depending on the software we use. If almost all (high) task priorities are equal, then that cross-traffic actually helps a lot, since the bus overload is lightened even when the task-switching rate is increased.

    Since we are talking about games, that is often the case here. Gamers on this forum usually do a lot of benchmarking to test their systems. Most of them, including me, actually disable all the background processes that we can; some go to the extreme of disabling visual styles on XP. So in this case, the high-priority tasks are created mainly by the in-game/benchmarking threads, which do not overload the bus so much, as they usually work within the scope of the same process - so the same core per processing slice - and so do not break multi-core race conditions to such a high degree.

    Due to this, I think it is the IRQ race conditions imposed by the motherboard that are the ones broken by the newer cards - race conditions that slow this system down where they don't slow others down so much. The reason they are broken is that both the 8700M GT and the 8800M GTX can process shaders much, much faster than the older generation, so they are able to send data to the CPU and other components at higher rates. That increases the DPC latency generated by higher IRQ switching rates, as they compete for the bus with other I/O components such as RAID controllers, sound controllers, USB controllers, and so on.

    IMHO it has nothing to do with the CPU chipset for the kind of software we are talking about... games. People forget that the D901C with the Go 7950 GTX consistently beat other laptops of the "same" grade, even with a quad core.

    So why does merely changing cards make the quad-core CPU and chipset a plausible justification for lower benchmarks? It does not make sense, even less so when indirectly comparing mobile CPUs/chipsets with desktop CPUs/chipsets, as explained above.

    Trance
    PS: Just a quick note: the most heavily loaded cores in my system are core 0 and core 2; cores 1 and 3 are used mainly for background tasks. While gaming, I disable all the background tasks that I can, leaving all cores available to the game. In COD4, both cores 1 and 3 stay pretty fresh compared with the others.
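
    (As a rough way to reproduce this two-thread comparison outside wPrime, here is an untested C sketch for Windows in the same spirit: it times two threads doing Newton-method square roots - the kind of work wPrime does - pinned first to cores assumed to share a die, then to cores assumed to sit on different dies. If the argument above is right, the two timings should be nearly identical, since the threads share no data. The affinity masks and loop counts are assumptions.)

        /* twothread.c - crude wPrime-style test: time two square-root-crunching
           threads under different affinity masks. Untested sketch. */
        #include <windows.h>
        #include <stdio.h>

        #define N 2000000

        static DWORD WINAPI crunch(LPVOID unused)
        {
            volatile double keep = 0.0;   /* volatile so the math isn't optimized away */
            double x, r;
            int i, k;
            (void)unused;
            for (i = 1; i <= N; i++) {
                x = (double)i;
                r = x / 2.0;
                for (k = 0; k < 20; k++)  /* Newton's method for sqrt(x) */
                    r = 0.5 * (r + x / r);
                keep = r;
            }
            (void)keep;
            return 0;
        }

        static double run_with_mask(DWORD_PTR m0, DWORD_PTR m1)
        {
            HANDLE t[2];
            DWORD start = GetTickCount();
            t[0] = CreateThread(NULL, 0, crunch, NULL, CREATE_SUSPENDED, NULL);
            t[1] = CreateThread(NULL, 0, crunch, NULL, CREATE_SUSPENDED, NULL);
            SetThreadAffinityMask(t[0], m0);   /* pin each thread to one core */
            SetThreadAffinityMask(t[1], m1);
            ResumeThread(t[0]);
            ResumeThread(t[1]);
            WaitForMultipleObjects(2, t, TRUE, INFINITE);
            CloseHandle(t[0]);
            CloseHandle(t[1]);
            return (GetTickCount() - start) / 1000.0;
        }

        int main(void)
        {
            /* cores 0+1 (assumed same die) vs. cores 0+2 (assumed different dies) */
            printf("same pair:  %.2fs\n", run_with_mask(1 << 0, 1 << 1));
            printf("cross pair: %.2fs\n", run_with_mask(1 << 0, 1 << 2));
            return 0;
        }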
     
  32. AlanP

    AlanP Notebook Evangelist

    Reputations:
    123
    Messages:
    393
    Likes Received:
    0
    Trophy Points:
    30
    If you look at these charts and only gather trends, not the nuts-and-bolts numbers (as these are for desktops): the 8800 GTX gains more from SLI than the 9800 does - the 8800's SLI gain is about 21%, while the 9800's SLI gain is about 8 or 9%. I expect the same drop in performance from desktop 8800 to laptop 8800 will occur with the laptop 9800s. I realize that the board doesn't exist yet, but the same reasons for detuning the 8800s will drive detuning the 9800s.

    reference: http://review.zdnet.com/graphics-cards/zogis-geforce-9800-gtx/4505-8902_16-32909910.html

    My laptop has been through surgery (at Sager -> thanks, Ty) and should be back on Friday, 30 May 08 :) I will run some numbers on this reborn D901C with 8800 SLI, XP Home, and an X6800 CPU. I will then replace this CPU with an E8400, run additional timings, and post. I assume the same software that overclocked my 7950s will overclock the 8800s...
     
  33. ARGH

    ARGH Notebook Deity

    Reputations:
    391
    Messages:
    1,883
    Likes Received:
    24
    Trophy Points:
    56
    I am doubting the quad-core bus-latency cache issue, because you can easily simulate a dual-core rig by setting affinities so programs only run on one chip. For example, set the affinity for your game to cores 0 and 1 so only one CPU pair is being used to run it (there's a sketch of how to do this below). Doing that will still give you the same quality of performance in your dual-core-enabled programs as you would see by allowing them to use all cores.

    Check your Task Manager. Run a game in windowed mode: when you run a program, all four cores are utilized, with the work being distributed among them more or less randomly. The same thing happened when I built my SMP dual-processor rig back in the heyday.
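
    (A minimal, untested C sketch of doing this programmatically rather than through Task Manager's Set Affinity menu; "game.exe" is just a placeholder name.)

        /* pinrun.c - launch a program restricted to logical cores 0 and 1,
           simulating a dual core. Untested sketch; game.exe is a placeholder. */
        #include <windows.h>
        #include <stdio.h>

        int main(void)
        {
            STARTUPINFO si;
            PROCESS_INFORMATION pi;
            char cmd[] = "game.exe";      /* placeholder executable name */

            ZeroMemory(&si, sizeof(si));
            si.cb = sizeof(si);

            /* start suspended so the mask applies before any game code runs */
            if (!CreateProcess(NULL, cmd, NULL, NULL, FALSE,
                               CREATE_SUSPENDED, NULL, NULL, &si, &pi)) {
                printf("CreateProcess failed: %lu\n", GetLastError());
                return 1;
            }
            /* mask 0x3 = binary 11 = logical cores 0 and 1 only */
            SetProcessAffinityMask(pi.hProcess, 0x3);
            ResumeThread(pi.hThread);
            CloseHandle(pi.hThread);
            CloseHandle(pi.hProcess);
            return 0;
        }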
     
  34. Chris McCarthy

    Chris McCarthy Notebook Enthusiast

    Reputations:
    8
    Messages:
    30
    Likes Received:
    0
    Trophy Points:
    15
    One of the reasons I am waiting for the Q9550 is that this generation of processors is supposed to have the quad-core bus-latency L2 cache issue solved. (I am waiting for the Q9550 for the greater speed; I know the Q9450 is already out.) Am I wrong that the Q9xxx series has solved this slowdown issue?

    Chris.
     
  35. dazzyd

    dazzyd Notebook Evangelist

    Reputations:
    111
    Messages:
    498
    Likes Received:
    2
    Trophy Points:
    31
    Hmm, OK, I can rest easy now knowing that my score is around what most people are getting... sigh. I only wish there were some way around the bus latency problem :(
     
  36. Shyster1

    Shyster1 Notebook Nobel Laureate

    Reputations:
    6,926
    Messages:
    8,178
    Likes Received:
    0
    Trophy Points:
    205
    Taking into account the well-written arguments contrary to my position vis-a-vis bus latency caused by cache coherency: if that is, in fact, the problem, then it will not be solved by the Q9550, although the increased clock speed may make it less noticeable (there was another NBR member, dexgo, who managed to do away with the graphics stutter, and get high benchmarks, with a Q6600 that he OC'd to 3GHz with an OC'd FSB as well). To the extent that cache coherency is causing problems through bus latency, the problem will not be solved until the Nehalem processors are out. Those processors get rid of the FSB, replace it with a new point-to-point interconnect called QuickPath, and add a new L3 cache; as a result - mainly thanks to QuickPath - inter-cache communication will both speed up and no longer block communication by other components. Sorry.
     
  37. Shyster1

    Shyster1 Notebook Nobel Laureate

    Reputations:
    6,926
    Messages:
    8,178
    Likes Received:
    0
    Trophy Points:
    205
    For what it's worth, to get around the bus latency you could either discover a way to OC the Q6700 (which is apparently more difficult than with the Q6600), or switch to a Q6600 and then OC the processor and the FSB. If you search for posts by dexgo from several months ago, you should find one discussing how he OC'd the Q6600 to 3GHz.
     
  38. DFTrance

    DFTrance Notebook Deity

    Reputations:
    317
    Messages:
    754
    Likes Received:
    0
    Trophy Points:
    30
    Just a note - or not so much a note as a point :) I found some time yesterday to write up, technically, the reason behind my ranting since January.

    All systems have bus/FSB latency. In most new high-performance systems (motherboards and components) that is not a problem; that is basically one of the benefits of getting a good motherboard with good components.

    Indeed, if an SLI system suffers from bus/FSB latency (usually the ones with older motherboards), the only real solution is to increase the FSB clock, if one does not want to lose quality. The problem we have is that by increasing the bus clock you are also increasing the CPU and RAM speeds.

    That is one of the reasons NVIDIA "abandoned" the LinkBoost technology on their nForce systems. This technology would increase the bus speed automatically when 3D graphics kicked in, in order to make room for the demands of the video card. The side effect was that it could burn up (as in, catch fire) or peacefully shut down your system if either the CPU or the memory couldn't cope.

    The demands that the new 8-series video cards (and the future 9-series) place on the FSB are greater than the previous generation's - and not only on the bus, but also on IRQ timings (have you ever experienced sound stuttering or mouse lag?). Some of these problems are resolved by new drivers. What the new drivers usually do is lower the information resolution (lower the quality) in order to generate less traffic.

    If you don't want to suffer so much from bus latency without overclocking, just go for a CPU that uses a 1333MHz FSB natively - provided that the Clevo motherboard actually fully supports that FSB speed (of which I have some doubts, since the RAM tops out at 800MHz).

    Another solution is to lower the quality settings of your games, especially texture/shader-related options (though that somewhat defeats the purpose of having a high-end SLI system). That is what I did with COD4 (no more video or sound stuttering). This decreases the bus traffic enough not to overload the queues. Notice that the video card can handle processing and storing more information, but the bus cannot handle its transport.

    If you want my reasoning behind the stuttering et al. on your Clevo, read on; if you don't care, stop here. These are my deductions.

    Above I wrote, in layman's terms:

    "if a system suffers from bus/fsb latency"

    What I mean by this is the following. Sometimes in the specs of some motherboards you can find something like:

    "Frontside bus (FSB) pipeline depth of 12 in-order queue depth"

    An example: http://www.nvidia.com/page/pg_20041015917263.html

    What this means is that information sent to the FSB is first placed in a queue. The depth of a queue is a number denoting the amount of information it can hold before it is processed (don't confuse this with the L2 cache or anything of that sort).

    So what happens when this queue is overloaded? Either you lose information or you experience deja vu :). What I mean by the latter is, for instance, that you can hear the same sound twice (this happens in COD4 on a motherboard like mine). Both lead to stuttering. (A toy model at the end of this post illustrates the behaviour.)

    It is most noticeable, of course, in sound and video (our eyes and ears), but it also shows up in areas we don't see, for instance networking (usually not so much of a problem, as network latency is of a greater order) and so on. Just notice that, for instance, in COD4, if you increase the game quality, the pings to the game servers are also delayed (higher pings). Pings don't carry much data, so the only thing the network card and the video card have in common is the FSB they share (the CPU can handle the workload perfectly well, as it does not go over 60% in my system while gaming).

    The reason you can hear the same sound twice is that the drivers are signaled that an error has occurred (queue overload). Once that happens, the system clears the queue (a reset) and the sound driver resends the information to try to recover the transaction. But what if the queue overload was due to some other device sending information? It does not matter: the driver sends the information again and you hear the same sound twice. Notice that when this queue is overloaded, all components that sent information are signaled (a broadcast); some of them send the information again, others don't (it is driver-dependent), or they simply rely on the upper software layer to recover from the problem.

    Now, these are all side effects of the new demands created by the NVIDIA 8-series mobile cards in SLI, simply because they demand much more from the FSB clock and the pipeline queue.

    I personally suspect this was actually the reason we needed to upgrade our motherboards to support the 8800M GTX - not so much the MXM-IV interface and chipset. Notice that the FSB is a service provided by the motherboard (in layman's terms, as I don't remember the chip), not by the CPU, RAM, or video card.

    These are the things that, IMHO, Dell accounted for in their systems/motherboards or video cards and Clevo did not in either first-generation motherboard. Notice that Dell does not use MXM-IV or support SLI with two discrete cards. What I suspect Dell did is either increase the FSB queue depth or actually implement a second queue on their version of the 8800M X2 card. This would queue data processed by the video cards at very high rates (rates that the FSB and its queue probably cannot handle) and only send data to the bus/bus queue as fast as it could handle it. By doing this, it actually uses both the video processing power and its bus to the optimum.

    MXM-IV is all well and good, but you see, it does not by itself imply upgradeability; the motherboard needs to do its work too. I suspect that the weak part of the Clevo motherboards in the D901C is actually how they manage the FSB (they make use of the P965 chipset); that is why we can't overclock easily.

    That is why the BIOS supporting the new quads has not yet been officially released, and I suspect that the next-generation 9-series video cards will also need a new motherboard to be fully supported; otherwise they will be capped by either the BIOS or the VBIOS. This considering that people are already experiencing stuttering at high settings in Crysis. I can only imagine the side effects of using 128 stream processors in SLI (256 SPs in theory, but we still don't know, as maybe it will be laser-cut, simply because ....)

    If you use RivaTuner, have you ever noticed, while playing COD4 or Crysis, that you have 60FPS and it suddenly drops to 25FPS, going right back up afterwards with no visible justification (sometimes it seems you get stuttering at those points, in either video or sound)? There you have it! FSB queue overload, queue reset, signal all devices to synchronize!!!!!

    Now, you can say: hey, Trance, I had a look on the Internet and found a lot of people with problems similar to this, so it seems to be an NVIDIA problem. Well, some things to consider:

    1) You don't know what motherboard they are using
    2) You need to know what components they are using (CPU, chipsets, controllers, etc.)
    3) You need to know what game and resolutions they were using when it happened
    4) SLI or not SLI?

    Only then can you conclude whether it is similar or not. What I can tell you is that most of them are not using nForce or similar technology from at least 2006! The ones I've found that do not have this problem use top-quality memory and motherboards (chipsets et al.).

    All of them can cost 30-50% less than our rigs if DIY (not going for a branded system like a Dell XPS, Alienware, or HP HDX).

    This is why I'm so disappointed, considering the price we pay for good engineering (in Europe). I do congratulate Clevo for the boldness to build these kinds of systems :) As I told you before, I would buy the D901C again with only one card, as it is an amazing machine for work but a comparatively bad SLI machine. Sometimes I play dumb & stubborn just because I don't have the time on the forum to be smarter.

    This is my last post regarding these issues, as my idea is now hopefully complete in your eyes (whether I'm right or wrong).

    Stay cool,

    Trance
    PS: This is all my personal opinion, arrived at through deduction (as an IT person); I have no data whatsoever to back it up, since I would need some measurement tools (hardware). I do have a fair amount of knowledge about how IT systems work, but I'm more of a software man. Welcome to the Matrix.
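
    (A tiny toy model in C of the reset-and-resend behaviour described above - purely illustrative, untested, and not real bus timing. The depth of 12 comes from the spec quote; the per-cycle production and drain rates are invented numbers.)

        /* busqueue.c - toy model of an in-order FSB queue that resets on
           overload and forces senders to retransmit (the "deja vu" stutter).
           Rates are made up; only the overflow behaviour is the point. */
        #include <stdio.h>

        #define DEPTH 12   /* "12 in-order queue depth" from the spec quote */

        int main(void)
        {
            int queued = 0, resets = 0, resent = 0, cycle;

            for (cycle = 0; cycle < 1000; cycle++) {
                queued += 3;               /* devices enqueue 3 items per cycle */
                if (queued > DEPTH) {      /* overload: reset + broadcast */
                    resent += queued;      /* everything in flight is resent */
                    queued = 0;
                    resets++;
                }
                queued -= (queued >= 2) ? 2 : queued;   /* bus retires 2 per cycle */
            }
            printf("resets: %d, retransmissions: %d\n", resets, resent);
            /* every retransmitted audio buffer = a sound heard twice;
               every retransmitted chunk of frame data = a visible hitch */
            return 0;
        }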
     
  39. DFTrance

    DFTrance Notebook Deity

    Reputations:
    317
    Messages:
    754
    Likes Received:
    0
    Trophy Points:
    30
    By the way, sometimes I have a short memory. The chip responsible for managing the FSB is the P965!!!!! Something that was already old at the beginning of 2007, and again I congratulate Clevo for the boldness to push it further to new limits (including the Q9xxx) - but you do lose something in the process, even if you win on production costs.

    If you want to know more about the FSB, start here:

    http://en.wikipedia.org/wiki/Front_side_bus

    Stay cool,

    Trance
    PS: Intel QuickPath is not a new idea :). When I was at university, both Sun and NeXT machines were already using point-to-point technology (15 years ago).
     
  40. Shyster1

    Shyster1 Notebook Nobel Laureate

    Reputations:
    6,926
    Messages:
    8,178
    Likes Received:
    0
    Trophy Points:
    205

    It's also somewhat analogous to what AMD has been using for years, at least with respect to cache memory, given that AMD has been using an on-die memory controller for quite a while - Nehalem will be the first Intel architecture to use an on-die memory controller (replacing the old memory controller in the chipset that sat astride the FSB).
     
  41. DFTrance

    DFTrance Notebook Deity

    Reputations:
    317
    Messages:
    754
    Likes Received:
    0
    Trophy Points:
    30
    If you think about what I've written, all the issues posted regarding sound stuttering, video stuttering, etc. become quite clear. Moreover, you can predict the ways Clevo might move forward with its technology.

    As Dell has demonstrated with their embedded 8800M GTX SLI solution, there are ways to get around the FSB limitation while maximizing the output capacity of the video cards - something that is becoming more and more important with the advent of faster video cards, sound cards, eSATA, etc.

    Indeed, eleron has probably inadvertently nailed the point: with the current FSB architecture, the near future for mobile video cards in SLI might look like what Dell is using at the moment. But it carries a penalty.

    If you look at the recent reports, dual SLI with discrete cards performs faster than an embedded two-core solution (aka X2). IMHO the reason behind this is that the second queue implemented in those cards "actually serializes" the information sent to the chipset and the FSB, while the discrete cards do not. This carries a penalty, as you might think, but has the benefit of not overloading the FSB. It seems that NVIDIA's nForce SLI architecture (not needed if one uses just a single GX2 card) actually uses these kinds of queues in order to deal with 2-, 3-, and 4-way SLI solutions. Something Clevo seems not to do.

    We have been hearing news about the future of the D901C, mostly from Wu Jen. One item that particularly interested me was eSATA. If a system already has difficulty transporting information, eSATA will only aggravate the situation in certain usages.

    One might then hope that Clevo already has an encompassing technical solution to overcome the FSB limitation. That might include abandoning the P965 chipset for newer ones, I hope :)

    Trance
     
  42. DFTrance

    DFTrance Notebook Deity

    Reputations:
    317
    Messages:
    754
    Likes Received:
    0
    Trophy Points:
    30
    As far as I understand, putting the memory controller on-die is different from a point-to-point kind of architecture, in abstract terms. Indeed, Intel is actually integrating the memory controller directly into the CPU (buy the CPU and you also buy the memory controller). What this means is that instead of connecting the RAM to the chipset, one connects it to the CPU.

    This is a solution to lighten the bus load, as memory transfers between the CPU and the RAM are done without the FSB, freeing it up for other devices.

    One might be wondering why this wasn't done before. A simple answer is HEAT. But now, with the current 45nm process technology, it is becoming feasible, as it generates less heat and is more energy-efficient. Still, I suspect the new CPUs will run hotter than what we experience today.

    As you said, Intel also wants to free the FSB of inter-core information transfer by using what the company calls the QuickPath architecture. That seems a natural move.

    Interestingly enough, it is still a competitor to HyperTransport, and in that case PCs based on AMD motherboards have also suffered from sound and video stuttering issues. So I still have some reservations about whether this will be the solution.

    What I believe is that the solution is indeed for the devices to use their own private queues to stage data sent to the FSB, instead of relying on a central FSB queue to manage throughput. Much like what I suspect Dell did with its 8800M X2 (kind of) solution for mobile cards, and what NVIDIA is doing with the GX2 and ATI with the X2 for desktops.

    Clevo might also invent a new approach, for instance by integrating these queues into each of their video cards (in SLI). It would be up to the drivers to pick up the pieces, though. But because Clevo does not "own" the drivers, it might have its hands tied on this one.

    Trance
     
  43. Shyster1

    Shyster1 Notebook Nobel Laureate

    Reputations:
    6,926
    Messages:
    8,178
    Likes Received:
    0
    Trophy Points:
    205
    @DFTrance:

    Yes, it is different; the point of the reference to AMD wasn't for an exact technical comparison point-by-point, but rather by broad analogy inasmuch as AMD avoided the problems Intel is having with overloading the current FSB with memory transactions by placing the memory controller on-die in an earlier iteration of AMD's architecture.
     
  44. DFTrance

    DFTrance Notebook Deity

    Reputations:
    317
    Messages:
    754
    Likes Received:
    0
    Trophy Points:
    30
    "problems Intel is having with overloading the current FSB with memory transactions"

    That is were we disagree. I don't think Intel has a problem at least within the scope of gaming system (at least imediate). The proof is that some motherboard vendors have surpassed this limitation as I have explained before (NVidia with NForce solutions for desktops).

    So CLEVO has a problem if the company wants to be on the forefront of mobile gaming systems in SLi as it is the mobo manufacturer. Or it can wait untill all this new stuff comes to town from others and apply to their systems. In this regard DELL was more innovative.

    But speaking only about CPU manufacturers, indeed this is were AMD good stuck (had problems were Intel did not). How could they be slower while doing all this cool next generation stuff (like puting the memory on die and HyperThreading). Well one answer, POWER and HEAT. Something that AMD did not took in consideration. Indeed this side effect actually prevent AMD moving to faster clocks, integrating more cores etc etc etc.

    Now with 40nm seams more feasible and he doors are open once again for AMD to move on, if it can restructure its production facilities to this techn soon enough.

    Trance
     
  45. Shyster1

    Shyster1 Notebook Nobel Laureate

    Reputations:
    6,926
    Messages:
    8,178
    Likes Received:
    0
    Trophy Points:
    205
    @DFTrance,

    I have to respectfully disagree with you on the issue of Intel having problems with, among other things, memory transactions eating up clock cycles on the FSB. My disagreement is based primarily on the Intel docs and whitepapers I've seen discussing precisely this sort of issue, including some of the briefing papers on the upcoming Nehalem architecture.

    That the symptoms of the problem can be massaged by sufficiently sophisticated, nuanced software, and perhaps by some creative remodelling of the motherboard, I have no doubt; that is precisely what dexgo did, in a more blunt fashion, when he cleared the stuttering and upped his benchmarks by OC'ing the Q6600 and the FSB.

    What the Dell example illustrates is that there are still work-arounds available for the current generation of CPUs/chipsets that will ease the symptomatic behaviour - which is good, at least if you're sophisticated enough to replicate those work-arounds. At bottom, however, is the simple fact that Intel has more or less extracted the most that it can from the FSB concept, and is now hitting the limits of that concept with noticeable results, particularly with respect to the less-than-sophisticated software that comes out of most shops, including, first and foremost, MS.
     
  46. DFTrance

    DFTrance Notebook Deity

    Reputations:
    317
    Messages:
    754
    Likes Received:
    0
    Trophy Points:
    30
    @Shyster

    "I have to respectfully disagree with you on the issue of Intel having problems with, among other things, memory transactions eating up clock cycles on the FSB. My disagreement is based primarily on the Intel docs and whitepapers I've seen discussing precisely this sort of issue, including some of the briefing papers on the upcoming Nehalem architecture."

    I think we could easily be mates :)

    I don't think we disagree on that; in this respect I agree with you, if we take it to mean that Intel needs to make these kinds of changes to their platform in order to move forward. Intel's FSB architecture is now reaching its limits.

    "What the Dell example illustrates is that there are still work-arounds available for the current generation of CPUs/chipsets that will ease the symptomatic behaviour - which is good, at least if you're sophisticated enough to replicate those work-arounds. At bottom ..."

    Not only Dell, but NVIDIA with nForce, EVGA, XFX (I think), and some others.

    If we consider that both AMD and Intel set the core platforms over which others build their systems, making the most of those platforms, then we can't just "blame" them for all the inefficiencies, since some make better use of the platform than others.

    What you call workarounds, I call solutions. Much as Clevo did when it adapted the P965 for use in a laptop, and we all applauded the company for that.

    What has been "proven" is that Clevo's solution does not address the FSB limitations for the newer cards as well as others do (it might even be aggravating them), and this can probably be solved by more than just a change of platform. Whatever the solution, we customers have been paying big bucks for it, with motherboard upgrade requirements and premium video card prices.

    At the end of the day, if you buy an SLI system, the last thing you expect is to have to disable it (SLI), as I did, in order to play games properly, or to reduce texture/shader options while knowing the video card can actually perform better - whether I buy it from Clevo, NVIDIA, AMD, Dell, or anyone else.

    Just notice that a Clevo motherboard plus two video cards costs $1600. For that price we would expect the architecture to get the most out of the platform. How much does the Intel platform cost us? So it is only natural to direct our arguments at whomever we pay much more.

    Think about this, Shyster. If there were no stutter in the D901C (or it performed better) and some stutter in the M1730 (it performed worse), wouldn't we attribute the glory to the amazing Clevo tech? I know I would; what about you? But because it happens the other way around, now we call them workarounds?

    I hope I'm not being disrespectful, as we are just exchanging points of view and observations.

    Trance
    PS: Since the start of my rants over motherboard replacements et al., I knew that by bumping up the FSB clock I would solve the stuttering. Not due so much to a faster CPU, but mainly because the FSB could dispatch the data faster - fewer bus queue overloads, and so no stuttering (I didn't need dexgo for that, but rather to learn how stable the system was after the fact). The one who actually called my attention to it, inadvertently, was Audigy on some other forum, when he mentioned NVIDIA's LinkBoost tech. The problem with the D901C is that we cannot (should not) do that, as I think time will demonstrate for some people, or already has.
     
  47. Doodles

    Doodles Starving Student

    Reputations:
    178
    Messages:
    880
    Likes Received:
    0
    Trophy Points:
    30
    My brain hurts from you two.
     
  48. Shyster1

    Shyster1 Notebook Nobel Laureate

    Reputations:
    6,926
    Messages:
    8,178
    Likes Received:
    0
    Trophy Points:
    205
    Maybe this'll help? :D :D
    [image]
     
  49. DFTrance

    DFTrance Notebook Deity

    Reputations:
    317
    Messages:
    754
    Likes Received:
    0
    Trophy Points:
    30
    Actually, I've had a headache all day. After reading Doodles' observation, I just found out why. Gladly, I followed Shyster's advice and... it works better than banging my fingers with a hammer :) :)
     
  50. Deodot

    Deodot Notebook Consultant

    Reputations:
    29
    Messages:
    215
    Likes Received:
    0
    Trophy Points:
    30
    Shyster, Trance, thanks for all the interesting posts. I think I'm in love with this forum :)

    Originally posted by Wu Jen:
    "Yeah, I just googled it and found one where a 1730 8800 SLI user got 14,512. That makes me a bit sad. I wonder if the 9800s will suffer because of the chipset as well. More than likely."

    I just want to say that I'm getting 14400 3DMark points on a low OC (600-1500-950). I could easily push it up and make a few hundred more - perhaps a 1000-point gain, landing on 15000?? So I'm not sad at all. And I haven't experienced any stutter...

    I still think it's a little strange that you're only getting, what was it, 12500+? Perhaps Vista is the only reason. I'm on XP now...
     