The Notebook Review forums were hosted by TechTarget, which shut them down on January 31, 2022. This static read-only archive was pulled by NBR forum users between January 20 and January 31, 2022, in an effort to make sure that the valuable technical information posted on the forums is preserved. For current discussions, many NBR forum users moved over to NotebookTalk.net after the shutdown.
Problems? See this thread at archive.org.

    920XM VS Modern Cpus - Really ?

    Discussion in 'Hardware Components and Aftermarket Upgrades' started by flingin, Feb 9, 2014.

  1. flingin

    flingin M17x R2 Mafia

    Reputations:
    444
    Messages:
    1,173
    Likes Received:
    306
    Trophy Points:
    101
    So how much time do I have before it really starts to bottleneck any GPU?

    Talking about games mostly, no synthetic benchmarks (the kind that test CPU performance ONLY).
    I was thinking about upgrading to something like an AW 18, but when I see the 3DMark results I don't see a reason for it.

    It would be like spending another £1000 on a refresh that gets me nothing.

    And with a worse screen.

    I'm not here to whine about anything, quite the opposite actually; I am proud, and happy, that I did not have to buy a new gaming laptop every year to keep the FPS high... (well, maybe just changing the GPUs).

    So how long do you think a first-gen XM can keep up?

    More than 4 years have passed and all I see is this:

    920XM vs 4800MQ

    10% in physics score in favor of the newer Gen,

    Result

    For how long will I remain bottleneck-free and happy?
     
  2. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    You've been bottlenecked for years now and you don't even know it.

    When crazy, insanely O/C'd desktop platforms are the bottleneck for GPUs, don't you think that even the i7-4800MQ will be a bottleneck, depending on the GPU and game combination? Let alone your technologically ancient platform?


    You also ask for no synthetic benchmarks and then link to them. :)


    You were bottlenecked when SNB was released, btw. And that is ancient now too.
     
  3. fatboyslimerr

    fatboyslimerr Alienware M15x Fanatic

    Reputations:
    241
    Messages:
    1,319
    Likes Received:
    123
    Trophy Points:
    81
    What overclock do you run your 920XM at? I game at a 23x multiplier (3 GHz), but my 940XM is happy at 25x or even 26x across 4 cores, with my cooling mod (see link in sig) keeping everything nice and cool.

    I think you will always get games that are very CPU-dependent, like FC3 and Crysis 3, where we may start to feel the age of our very old CPUs, but the vast majority of games will run perfectly on this old tech.
    My modestly overclocked 3DMark 11 score also makes me wonder what the point of upgrading is. I'm thinking more of a Steam Machine than another laptop.

    Interesting discussion...

    Bottlenecks in which games? Proof? I hear that very little difference has been observed in 7970M CF testing between an i7-740QM and an i7-920XM in Metro Last Light. We're not talking about bottlenecking two R9 290Xs in CF here, but laptop GPUs.
    Mantle may help the cause even more.
     
  4. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    Mantle? Games? Sorry, I don't speak that language. :)

    But the processors you've 'heard' about not making a difference belong to the same era; no wonder they're similar.

    Look at any modern test platform set up for gaming; it is not running even one-year-old tech. CPUs do make a difference to how much performance you can get from any and every other component in a system.

    To think otherwise is just fooling yourself.
     
  5. Qing Dao

    Qing Dao Notebook Deity

    Reputations:
    1,600
    Messages:
    1,771
    Likes Received:
    304
    Trophy Points:
    101
    They run the fastest overkill CPUs possible when benchmarking graphics cards to be on the safe side, so that the CPU can have absolutely no influence on the results. Not only that, but they are running multiples of the fastest desktop GPUs, which are themselves much more powerful than what you would find in a laptop. If one were to build a desktop computer to use those same top-of-the-line graphics cards, you could use a much slower processor and still perform almost identically to the overkill processor used by the reviewers.

    An overclocked 920XM or 940XM is enough processor for today's gaming needs and will not bottleneck any laptop GPU. Saying anything else is somewhere between hyperbole and hogwash.
     
  6. qweryuiop

    qweryuiop Notebook Deity

    Reputations:
    373
    Messages:
    1,364
    Likes Received:
    89
    Trophy Points:
    66
    The best way to detect a CPU bottleneck is in fact a synthetic benchmark, since they run separate graphics and physics tests. You can tell the CPU is the bottleneck when the GPU score comes in much lower than it should, which tells you that the CPU cannot handle the workload the GPU demands (and I might add, the 920XM bottlenecks SLI GTX 780M if I'm not mistaken, but for a single 780M it is fine).
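
    As a rough illustration of that check, here is a sketch in Python with hypothetical sub-scores (a real comparison would use a published result for the same GPU paired with a CPU known not to hold it back):

        # Hypothetical 3DMark-style graphics sub-scores; only the ratio matters.
        reference_graphics = 9500   # same GPU driven by a fast desktop CPU
        measured_graphics = 8100    # graphics sub-score on the system under test

        shortfall = 1 - measured_graphics / reference_graphics
        if shortfall > 0.10:        # 10% threshold chosen arbitrarily for this sketch
            print(f"Graphics score is {shortfall:.0%} below reference: likely CPU-bound")
        else:
            print("Graphics score is close to reference: the GPU is the limit")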
     
  7. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631

    You're welcome to keep believing that. And please do keep moving the goal posts (the OP is not talking about O/C'ing).

    The reality is somewhere in between (for gaming); but it is a safe assumption that the reason 920XM/940XMs aren't being made anymore is that they have been superseded in all areas.

    The old stuff was good (at the time). Today, saying the kind of thing you're repeating sounds like so much '640KB is all the RAM we'll ever need' all over again.


    This is Today. 2014.

    You're welcome to join.
     
  8. fatboyslimerr

    fatboyslimerr Alienware M15x Fanatic

    Reputations:
    241
    Messages:
    1,319
    Likes Received:
    123
    Trophy Points:
    81
    But why bother when I can play all new games on high to very high settings at 1080p? Even the AW 18 is still only 1080p!

    I think this discussion is more about the longevity of old tech for gaming than about why new stuff is so great. Because quite frankly, new stuff isn't that great unless you have a lot of $$$$$. I'd much prefer my M15x to a brand new AW 14!

    Sent from my One S running 4.4.2
     
  9. octiceps

    octiceps Nimrod

    Reputations:
    3,147
    Messages:
    9,944
    Likes Received:
    4,194
    Trophy Points:
    431
    Do you even play games? Doesn't sound like it.
     
  10. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631

    The new stuff is great. Period.

    Why games don't take advantage of it is the issue (imo). That is what the thread should be about.

    The game developers are being lazy and don't give gamers a new engine that leverages and pushes new hardware to its limits, and the issue instead is that new platforms aren't needed? Huh?

    You may be glad to be able to play new games at only 1080p on old hardware; that doesn't mean the tech (new or old) is good or bad. It means you are being mostly shortchanged by the game developers, or can't upgrade to current level platforms (mostly shortchanged, I think).

    Have they run out of ideas? (Don't think so).

    Have they found a cash cow they won't let go until it kicks them to the other side of the field? (Yeah).

    That cash cow being: slight alterations of what they were delivering 5+ years ago, with pretty new lipstick and nothing new under the hood (investment or otherwise).


    You need to see this scenario for what it is, not for what you want it to be.



    Games? No. Ever, yawn...
     
  11. octiceps

    octiceps Nimrod

    Reputations:
    3,147
    Messages:
    9,944
    Likes Received:
    4,194
    Trophy Points:
    431
    @OP
    Your CPU is absolutely fine for modern games and stands up to the latest mobile Haswell quads quite well. 3.73 GHz is rockin' for the 920XM and at that speed you're essentially tied with the 4700MQ.

    The clock-for-clock difference between Nehalem and Haswell is only about 20-30% at best, and you're almost never going to see an IPC improvement of that magnitude manifest itself in any significant way in games, since they are primarily GPU-bound anyway, unless you're purposely running low settings and resolutions to make them more CPU-bound.

    Take the clock-for-clock comparison here between i7-975X @ 3.46 GHz and 2600K @ 3.5 GHz: AnandTech | Bench - CPU

    Clock-for-clock, Sandy Bridge is 10-15% faster in the synthetics and 5-10% faster in the game tests.

    And then looking at 2600K @ 3.5 GHz vs. 4770K @ 3.7 GHz: AnandTech | Bench - CPU

    Taking into account the 0.2 GHz clock speed difference, we can extrapolate from the results the clock-for-clock difference between Sandy Bridge and Haswell to be 10-15%.

    And that's how I got my 20-30% number for Nehalem vs. Haswell clock-for-clock.

    Makes sense when you crunch the numbers in the OP's 3DMark score. In Fire Strike Physics, if the 4800MQ @ 3.5 GHz is 15% faster (9793 vs. 8504) than the 920XM @ 3.73 GHz while clocked 6% lower, then the IPC difference comes out at ~23% after calculation, which is right in that 20-30% range.
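
    The arithmetic checks out; a quick Python sketch of that per-clock calculation using the Fire Strike Physics numbers quoted above:

        # Fire Strike Physics scores and clocks quoted above
        score_4800mq, clock_4800mq = 9793, 3.5    # GHz
        score_920xm, clock_920xm = 8504, 3.73     # GHz

        speedup = score_4800mq / score_920xm                 # ~1.15, i.e. 15% faster
        clock_ratio = clock_4800mq / clock_920xm             # ~0.94, i.e. 6% lower clock
        ipc_gain = (score_4800mq / clock_4800mq) / (score_920xm / clock_920xm) - 1
        print(f"{speedup - 1:.0%} faster at {1 - clock_ratio:.0%} lower clock "
              f"-> ~{ipc_gain:.0%} higher per-clock throughput")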
     
  12. Charles P. Jefferies

    Charles P. Jefferies Lead Moderator Super Moderator

    Reputations:
    22,339
    Messages:
    36,639
    Likes Received:
    5,075
    Trophy Points:
    931
    We've had to remove/edit a number of posts in this thread; you're better than this. Insults (especially personal) have no place here in the public forum.
     
  13. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    octiceps,

    I don't know how or why you keep going back to an O/C'd 920XM processor - nor why you keep comparing 'clock for clock'.

    Has the OP mentioned this and I've missed it again? (I've read this thread for the tenth time now.)

    Even if I believe your numbers as presented, 20% to 30% faster for the Haswell platform is still VERY significant and goes a long way towards removing the CPU from being the bottleneck. Which it is. Always.

    A certain game may not show it, or a given resolution may mask it - but the fact remains that for the ultimate gaming performance (which I view as just another workflow), a desktop is always the platform of choice. Because not only are the GPUs more powerful, but so are the CPUs which drive them.

    Mobile systems still do not hold a candle to desktop based systems in raw performance.

    And the most important point? I'll repeat it because it is that important.

    If 3 or 4 generations of games play effectively the same over 3 or 4 generations of hardware, it is the game engines, and therefore the game developers, that should be questioned. As in: what have they been doing for the last half dozen years?

    That is not a reason to conclude that hardware is not progressing anymore.

    If Adobe CSx Suite made the same non-progress over the same time frame; they would be out of business now.
     
  14. octiceps

    octiceps Nimrod

    Reputations:
    3,147
    Messages:
    9,944
    Likes Received:
    4,194
    Trophy Points:
    431
    Because his results are from a 920XM overclocked to 3.73 GHz. Did you not read the OP at all?
     
  15. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    Yes, I did read his post - the link I disregarded because he said:


    But as I said, it doesn't matter. OC'd or not, the difference is still significant enough and my points stand.
     
  16. fatboyslimerr

    fatboyslimerr Alienware M15x Fanatic

    Reputations:
    241
    Messages:
    1,319
    Likes Received:
    123
    Trophy Points:
    81
    Flingin, what temps do you get on your i7 920XM at 3.73 GHz? Have you overvolted it? It must run very hot?

    Sent from my One S running 4.4.2
     
  17. gdansk

    gdansk Notebook Deity

    Reputations:
    325
    Messages:
    728
    Likes Received:
    42
    Trophy Points:
    41
    I play a lot of games that are easily CPU limited. It may not be your case. Toward the end of games like Crusader Kings II, Rome Total War 2 and Civilization V my R9 290 sits largely idle waiting for the i5-4670K. I don't game on mobile, but I'd assume the problem would be worse.
     
  18. octiceps

    octiceps Nimrod

    Reputations:
    3,147
    Messages:
    9,944
    Likes Received:
    4,194
    Trophy Points:
    431
    No, there is no significant difference, especially in actual games. Since you don't even play games, anything you say on the matter has to be taken with a huge shaker of salt.

    And it's ironic that you conveniently ignored looking at the OP's 3DMark scores, because not only do you lack context for the rest of his posts, making yourself uninformed, but synthetic tests like 3DMark actually help your argument, since they show a bigger improvement compared to real-world games.

    Why are you even comparing games to production-level DCC software? Games are nowhere near as parallelized in terms of CPU usage and never will be, plus high-level graphics APIs will always be a limiting factor as well. And some CPU headroom is desirable to account for the unpredictability prominent in games, especially in large-scale online multiplayer shooters, which are typically the most CPU-bound games today.

    Plus games need to be able to scale to a much greater variety of hardware and software configurations. The developers of professional DCC software know they serve a niche target audience which is almost guaranteed to have high-end workstation-grade hardware, but game developers can't make their games the next Crysis, otherwise the game gets a bad rap, it cannibalizes sales and increases piracy, and may even hurt the fortunes of the entire company.

    And saying that the CPU is always the bottleneck for performance is just plain wrong, unless you've got a nonsensical setup with a massively overpowered GPU paired with a massively underpowered CPU. But for sensible hardware configurations, the GPU is nearly always the bottleneck, and scores of benchmarks all over the Internet will confirm that. And that is by design, as devs and gamers both know that it is much easier to scale performance up or down in a GPU-bound game than in a CPU-bound one. Excessively CPU-bound games have traditionally been the hallmark of poorly optimized console ports. Just see Assassin's Creed III and perhaps the most infamous of them all, GTA IV, which still can't maintain 60 FPS maxed out on a GTX 680, a top-of-the-line GPU which came out a full four years after the game's original release.

    And you are absolutely wrong about games not making progress over the last several years. It's not just about prettier graphics, which we've had in spades. What about new ideas, gameplay mechanics, business models, means of interaction, etc. Oh wait, you don't actually play games so you wouldn't know any of that.

    And about games playing the same over several generations of CPU hardware, that's because as far as gaming performance is concerned, real meaningful progress has stalled on the Intel side since Sandy Bridge, and on the AMD side since Phenom II. That, and the fact that games are primarily GPU-bound.

    And what have AAA devs primarily been doing over the last six years? The obvious answer is designing their engines around last-gen console hardware. Since we just entered a new console generation, the target development platform for most AAA games switches to much more powerful console hardware and PC hardware requirements will go up accordingly.

    And when did the conversation shift over to laptops vs. desktops? Who's getting off-topic now?

    So much of what you say here is just so wrong that it's hard to take you seriously as anything more than an oft-uninformed forum troll who likes to jump into threads without the intention of actually contributing meaningfully to a discussion, just to bolster your already massive post count.
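
    As a concrete illustration of why sensible configurations end up GPU-bound, here is a toy frame-time model in Python (the millisecond costs are invented): frame time is roughly the longer of the CPU's and GPU's per-frame work, and lowering resolution only helps while the GPU side is the longer one.

        # Toy model: frame time ~= max(CPU time, GPU time) per frame.
        # The per-frame costs below are invented for illustration only.
        cpu_ms = 8.0                    # game logic, draw submission, etc.
        gpu_ms_at_1080p = 16.0          # GPU cost at 1920x1080

        for scale in (1.0, 0.75, 0.5):  # resolution scale factors
            gpu_ms = gpu_ms_at_1080p * scale * scale   # GPU cost ~ pixel count
            frame_ms = max(cpu_ms, gpu_ms)
            bound = "GPU" if gpu_ms >= cpu_ms else "CPU"
            print(f"scale {scale:.2f}: {1000 / frame_ms:5.1f} FPS ({bound}-bound)")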
     
    Qing Dao and fatboyslimerr like this.
  19. octiceps

    octiceps Nimrod

    Reputations:
    3,147
    Messages:
    9,944
    Likes Received:
    4,194
    Trophy Points:
    431
    No, the problem is nowhere near as severe, because the performance gulf between desktop and laptop GPUs is much greater than the performance gulf between their respective CPUs. The TDP requirements of extreme desktop GPUs are much higher than those of CPUs. An Intel Extreme Edition, which you can already find in admittedly mammoth "notebooks," sits somewhere north of 100 W, but the GTX 780/Titan/780 Ti are at 250 W, and the R9 290/290X are almost 300 W! :eek:
     
  20. gdansk

    gdansk Notebook Deity

    Reputations:
    325
    Messages:
    728
    Likes Received:
    42
    Trophy Points:
    41
    Here's how it goes if you're not familiar. You set speed to 'fastest' or you press 'end turn'. Your CPU starts to process the AI, occasionally submitting draw calls. It doesn't matter if your GPU is a 7970M or a GTX Titan: You're stuck until the CPU finishes. Most people, admittedly, do not play primarily turn-based and grand strategy games. But if you do (and I do), CPU performance is paramount.
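
    As a toy model of that scenario (the AI cost per faction is invented), the end-turn wait is serial CPU work, so the GPU never enters the calculation:

        # Toy end-turn model: the turn is serial AI processing on the CPU,
        # so swapping the GPU changes nothing. Costs are invented for illustration.
        ai_ms_per_faction = 400
        factions = 30

        for gpu in ("7970M", "GTX Titan"):
            turn_seconds = factions * ai_ms_per_faction / 1000   # GPU is not a factor
            print(f"{gpu}: end turn takes ~{turn_seconds:.0f} s")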
     
    alexhawker likes this.
  21. octiceps

    octiceps Nimrod

    Reputations:
    3,147
    Messages:
    9,944
    Likes Received:
    4,194
    Trophy Points:
    431
    Yep, I get you. RTS, chess games, and the like are primarily CPU-bound, but they're the exception to the rule. Most games are GPU-bound though. And as far as RTS goes, I don't actually believe it is that the CPUs we have are too weak; it's that they're not being utilized to their full potential because of the massive overhead of high-level graphics APIs severely limiting the number of draws we can have in a scene. You can easily find benchmarks of even last-gen console hardware beating PCs with i7s and high-end GPUs by several times in draw call benchmarks. Mantle supposedly solves DirectX's small-batch problem and can actually make RTS games GPU-bound for once. If the results from the Star Swarm demo, which is supposed to model a next-gen RTS game and engine, are anything to go by, I think it looks promising.
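
    A toy sketch of that small-batch problem (the per-call overhead and object counts are invented): if every draw call carries a fixed CPU cost, submitting thousands of tiny batches eats the frame's CPU budget long before the GPU runs out of work, while merging the same objects into fewer calls leaves plenty of headroom.

        # Toy model of draw-call submission overhead; all costs are invented.
        cpu_cost_per_call_ms = 0.01     # fixed CPU cost per draw call (driver/API overhead)
        objects = 10000                 # objects to render in one frame
        frame_budget_ms = 16.7          # ~60 FPS

        for batch_size in (1, 10, 100): # objects merged per draw call
            calls = objects // batch_size
            cpu_ms = calls * cpu_cost_per_call_ms
            verdict = "over budget (CPU-bound)" if cpu_ms > frame_budget_ms else "fits in budget"
            print(f"{calls:5d} draw calls -> {cpu_ms:6.1f} ms of CPU submission: {verdict}")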
     
  22. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    octiceps,

    I think you're out of your element here, and it shows in your level of understanding of how a computer works.

    CPU+RAM = WORK done by a computer (notice that a GPU is not required). Notice especially that a GPU cannot run an OS. Notice also that everything other than the CPU+RAM combo is there to be driven by the CPU+RAM combo. Not the other way around.


    The quote above doesn't indicate that processor performance is not important; it more likely indicates that the game engines haven't been developed to take advantage of the newer platforms. As I've been trying to get across to you.

    I also don't need to be a physicist to understand the challenges real scientists are dealing with when trying to understand the inter-relationships between objects. Just as I don't need to play games to understand how a gaming system works.


    The OP contradicts him/herself and I'm at fault? Great deductive powers you have there. But anything to take another jab at me, right?



    And that is circular reasoning at its finest.
     
  23. yotano21

    yotano21 Notebook Evangelist

    Reputations:
    67
    Messages:
    570
    Likes Received:
    332
    Trophy Points:
    76
    You guys are all funny, going back and forth. Now it's back and forth on how many posts you have.

    Hello, I have 34 posts and I work as an IT engineer for a large school district in NV, on both Macs and PCs. I will continue reading what people say and not post anything further.
     
    fatboyslimerr likes this.
  24. HTWingNut

    HTWingNut Potato

    Reputations:
    21,580
    Messages:
    35,370
    Likes Received:
    9,877
    Trophy Points:
    931
  25. Qing Dao

    Qing Dao Notebook Deity

    Reputations:
    1,600
    Messages:
    1,771
    Likes Received:
    304
    Trophy Points:
    101
    What are you talking about? I mean really, you write some random meaningless equation and that is it. The GPU doesn't do any work? All I need is a fast CPU and some RAM and I am good to go with anything I can throw at my PC? What if I say that the GPU=WORK? What work is the RAM doing exactly? You always throw this nonsense out there in the majority of your posts on this forum, but anyone who actually has a clue about how a computer works knows that it is hyperbole and hogwash.

    A holistic approach must be taken with computers. Everything works together to provide an output for the user. There are lots of stages in the pipeline and each can be a weak link. Also depending on what it is you are doing is hugely important in determining which part or parts of the system is the weakest link or bottleneck. You can just use your CPU and RAM approach and blindly say that the CPU is always the bottleneck and that more is better. Or you can use some common sense, look at what everyone is saying, check out a whole slew of benchmarks for yourself and clearly see that an overclocked Clarksfield processor is enough CPU for even some rather powerful desktop graphics cards.

    You are like a rabid animal about your CPU and RAM idea, but even outside of gaming it is quite wrong. What about doing something that heavily taxes the storage subsystem? The CPU isn't doing a whole lot of work while it waits. What about something like Photoshop or any of the other myriad programs these days that use GPU computing? My dad's passion is photography and he uses a Core 2 Quad based desktop to do his work. I know you would say it is prehistoric, but it does work in Photoshop faster than any laptop on the planet after putting in a few powerful Radeon GPUs.

    There are different types of games, and this can matter too, but in general games are not developed to take advantage of powerful processors for two reasons. One is that (most) games belong to the type of programs that are difficult to parallelize. A certain amount of the work done by the processor simply cannot be done in parallel, so this limits the performance increase possible by adding more and more cores, as the sketch below illustrates. They try to split off as much parallelizable work to other cores as possible, and this works reasonably well for a few cores, but going beyond that shows little to no performance increase. The second reason is that computer games are developed to run on a variety of different hardware, meaning different CPU and GPU configurations. Most games can't easily be tailored to work well on a dual-core processor yet also be able to bring a six-core processor to its knees. Although there are some settings that can change the CPU load a bit, most of the requirements are fairly static. With games being developed in tandem on consoles and PCs, and with most people not using the most powerful desktop processors available, there is not much that developers can do to get around this. On the other hand, graphics settings and resolution are much easier to change and adapt to take best advantage of the available GPU.
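
    That limit on parallel scaling is essentially Amdahl's law; a small Python sketch with an assumed 60% parallel fraction (a made-up figure, not a measurement of any real game) shows how quickly the gains from extra cores flatten out:

        # Amdahl's law: speedup = 1 / ((1 - p) + p / n), where p is the parallel fraction.
        p = 0.6   # assumed parallelizable share of the per-frame CPU work
        for cores in (1, 2, 4, 6, 8):
            speedup = 1 / ((1 - p) + p / cores)
            print(f"{cores} cores: {speedup:.2f}x")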

    And stop talking about "newer" platforms as if they are somehow magically better. There are plenty of older processors that are very powerful and plenty of newer processors that are not.

    So? That still doesn't mean you know anything about the topic at hand, as everyone can clearly see.

    LOL! :rolleyes:
     
    fatboyslimerr likes this.
  26. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    Both of you grasping at anything I say and then twisting it so that you can throw it back at me is very sad to see. Obviously we can't have a grown-up conversation about this with you two.

    Your common, recurring theme is simply to insult. Insult how many times I've contributed (post count), insult my user title, insult anything, it seems, rather than do the one thing you apparently can't do here: have a meaningful conversation.

    Instead of providing information to prove your point or disprove mine. Or simply asking questions to clarify my points, which I would have gladly done.

    I come here to learn and to share what I know. You don't want to do either of those, judging by the posts seen in this thread - even when others also indicate I may have a point, you discount their input with your circular reasoning and excuses about how it is the programming language for the GPU that is at fault, instead of conceding that I might (shock) possibly be right.


    I wish both of you all the best in the future. But I am done talking to you unless your attitudes change.

    I will continue to contribute and participate where and when I want.

    I also hope the OP sees the point of my posts here.

    The answers we get depend on the questions we ask, not on what we (think) we (already) know.


    Everyone else:
    In my posts here, I have tried to provide the backdrop of 'computing' for anyone wanting to know why a newer platform should be better for gaming (at least for certain games, even if it seems that most are 'GPU' dependent). And apologies for my contribution to the noise introduced by the immature interactions of mostly the posters below.

    Sure, I can see and (already) know the reasons for not getting PC gaming engines up to speed with regards to pushing the hardware we have available to the max. That doesn't mean that excuse holds water with me, though. And it seems obvious to me that that is the issue; not that processors haven't made progress in the last half dozen years (what a simple-minded conclusion, especially after considering all the facts).

    This series may be interesting with regards to the topic here:

    See:
    Debunking the Myths of Graphics Card Performance - Tom


    Contrary to what has been expressed on my behalf, I'd rather be wrong (which means I've learned something new) than stubbornly continue with outdated ideas.

    Can't wait for the next installment of the link above.


    Take care.


     
  27. flingin

    flingin M17x R2 Mafia

    Reputations:
    444
    Messages:
    1,173
    Likes Received:
    306
    Trophy Points:
    101
    This
    You have just made my day!

    :D

    I'm also thinking AMD Mantle will help us in the near future; most high-end gaming laptop GPUs can be changed without a problem, which leaves some CPUs a bit behind sometimes.

    Some games will become choppy even on the most powerful desktop CPUs (4770K, etc.) because of the sheer amount of what is going on on the screen...
     
  28. Qing Dao

    Qing Dao Notebook Deity

    Reputations:
    1,600
    Messages:
    1,771
    Likes Received:
    304
    Trophy Points:
    101
    Tiller, you are a joker.

    If all you can do is try to force the discussion to revolve around your absurd beliefs about computing in general and make quips degrading those who disagree with you, how exactly do you expect to be treated in this forum? But of course, since your argument has absolutely no substance other than the goop between your ears, you get on your high horse and act like all those who disagree with you are the ones who are stubbornly holding on to their beliefs and are beyond being reasoned with. Replying to your posts is like talking to a brick wall.
     
  29. djembe

    djembe drum while you work

    Reputations:
    1,064
    Messages:
    1,455
    Likes Received:
    203
    Trophy Points:
    81
    If I'm understanding the question correctly, you want to know how long you can continue to play games on your current system before it becomes irrelevant. This depends on what games you play and what settings you prefer. There are games currently available that your system will assuredly not be able to play at maximum settings and full 1920x1200 or 1920x1080 resolution. If you insist on playing everything at maximum settings and full resolution, your system is going to feel outdated if you want to play Company of Heroes 2, The Witcher 2: Assassins of Kings, or Metro: Last Light (and most likely others as well). However, if you are fine playing at medium settings or lowering the resolution of games, then you will likely still be able to play even the most demanding games (such as those I just mentioned).

    At some point, you will find a game your system simply cannot handle below your own tolerance (resolution, settings, frames per second, etc.). At that point, you can decide if you want to play that particular game enough to buy a new system. For a high-end gaming notebook, I'd expect it to last for around two processor & graphics generations before it can no longer play all games at maximum, and roughly four generations before it starts to struggle playing the latest games on medium settings or lower resolutions. In other words, you are likely to run into a game your system can't play in the next couple years.

    I'm glad you've gotten such good use out of your system!
     
  30. Charles P. Jefferies

    Charles P. Jefferies Lead Moderator Super Moderator

    Reputations:
    22,339
    Messages:
    36,639
    Likes Received:
    5,075
    Trophy Points:
    931
    While there's some good debate in this thread, the arguments are too intertwined for me to effectively remove. Thread is now closed.

    OP - it looks like your question was answered; feel free to start a new thread if not.
     
    flingin likes this.