The Notebook Review forums were hosted by TechTarget, which shut them down on January 31, 2022. This static read-only archive was pulled by NBR forum users between January 20 and January 31, 2022, in an effort to make sure that the valuable technical information posted on the forums is preserved. For current discussions, many NBR forum users moved over to NotebookTalk.net after the shutdown.
Problems? See this thread at archive.org.

    Modding AGA

    Discussion in '2015+ Alienware 13 / 15 / 17' started by mertymen2010, Nov 26, 2016.

  1. mertymen2010

    mertymen2010 Notebook Consultant

    Reputations:
    5
    Messages:
    213
    Likes Received:
    26
    Trophy Points:
    41
    Is it at all possible to stick the AGA parts into a small desktop case so that it's easier to put a much bigger card in and keep it even cooler?
     
  2. rinneh

    rinneh Notebook Prophet

    Reputations:
    854
    Messages:
    4,897
    Likes Received:
    2,191
    Trophy Points:
    231
    Hmm, not without actually building a case around the same mounting points. But you can Dremel out some plastic tabs, and with low-profile PCI Express cables you can fit the biggest cards on the market. I did the same.
     
  3. spiralzz

    spiralzz Notebook Evangelist

    Reputations:
    10
    Messages:
    329
    Likes Received:
    133
    Trophy Points:
    56
    mertymen2010 likes this.
  4. mertymen2010

    mertymen2010 Notebook Consultant

    Reputations:
    5
    Messages:
    213
    Likes Received:
    26
    Trophy Points:
    41
    OK, so now I know it is possible... So what if I buy another AGA, strip the internals out, and have two inside the tower? Any way to SLI the cards? Must be a way, right?
     
  5. spiralzz

    spiralzz Notebook Evangelist

    Reputations:
    10
    Messages:
    329
    Likes Received:
    133
    Trophy Points:
    56
    No, afraid not.

    The AGA connects to the Alienware PC/laptop via a proprietary PCIe x4 connector, so only one card can connect; SLI is currently impossible.
     
    mertymen2010 likes this.
  6. ezzo

    ezzo Notebook Enthusiast

    Reputations:
    12
    Messages:
    45
    Likes Received:
    13
    Trophy Points:
    16
    You can do whatever you want, but the whole thing is going to be bandwidth limited because the port runs at PCIe 3.0 x4.

    The AGA was designed for the 9x0 series, and the bandwidth was fine back then.
    It's just not worth it with the new 10x0 series being bandwidth limited by the port.
    Not to mention you can get almost the same cards in the laptop.
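    [Editor's note: the raw numbers behind this bandwidth argument can be sketched with a quick back-of-the-envelope calculation. This uses only nominal per-lane rates and line-encoding efficiency; real-world throughput is somewhat lower due to protocol overhead.]

```python
# Rough PCIe bandwidth sketch: approximate usable GB/s per direction.
# Per-lane raw rate (GT/s) and line-encoding efficiency per generation.
GENS = {
    "1.1": (2.5, 8 / 10),    # 8b/10b encoding
    "2.0": (5.0, 8 / 10),    # 8b/10b encoding
    "3.0": (8.0, 128 / 130), # 128b/130b encoding
}

def bandwidth_gbs(gen, lanes):
    """Approximate usable bandwidth in GB/s for a PCIe link."""
    rate_gt, eff = GENS[gen]
    return rate_gt * eff * lanes / 8  # GT/s -> GB/s after encoding

print(f"PCIe 3.0 x4 : {bandwidth_gbs('3.0', 4):.2f} GB/s")   # the AGA link
print(f"PCIe 3.0 x16: {bandwidth_gbs('3.0', 16):.2f} GB/s")  # a desktop slot
print(f"PCIe 1.1 x4 : {bandwidth_gbs('1.1', 4):.2f} GB/s")
```

    So the AGA link offers roughly 3.9 GB/s each way versus roughly 15.8 GB/s for a desktop x16 slot, which is the 4x gap the thread keeps circling around.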
     
  7. judal57

    judal57 Notebook Deity

    Reputations:
    274
    Messages:
    1,164
    Likes Received:
    650
    Trophy Points:
    131
    In fact, you have less bandwidth but more performance using the AGA than with a Type-C external graphics amplifier like the Razer Core.
     
  8. rinneh

    rinneh Notebook Prophet

    Reputations:
    854
    Messages:
    4,897
    Likes Received:
    2,191
    Trophy Points:
    231
    What did you base that on? I use it with a 1070 and the performance is excellent; only an 8% loss with an external screen. It has already been shown more than once that current-day graphics cards are hardly bottlenecked by bus speeds.
     
    ElCobrito likes this.
  9. ezzo

    ezzo Notebook Enthusiast

    Reputations:
    12
    Messages:
    45
    Likes Received:
    13
    Trophy Points:
    16
    Based on this comprehensive review by TechPowerup https://www.techpowerup.com/reviews/AMD/R9_Fury_X_PCI-Express_Scaling/18.html

    The R9 Fury X is bottlenecked by ~5% on PCIe 3.0 x4 vs PCIe 3.0 x16.

    A GTX 1070 is ~20% more powerful than that Fury X, so you can be sure there is definitely going to be a bottleneck.

    I see you have a 980M so it makes sense for you to have the AGA but for any current gen Pascal laptop, any eGPU solution is not going to be ideal.

    You yourself noticed an 8% loss, and that's with current-gen games.
    When games get more demanding, the bottleneck is going to be even more noticeable.
     
  10. ezzo

    ezzo Notebook Enthusiast

    Reputations:
    12
    Messages:
    45
    Likes Received:
    13
    Trophy Points:
    16
    Yeah. I know the AGA port has slightly higher bandwidth for eGPU use than TB3, but an eGPU actually makes sense for a laptop like the Razer Stealth, since the iGPU is not suitable for gaming and you get portability with the Stealth.
    For the bandwidth, a 1060 would fit in nicely. >>> paying a reasonable amount of $$$ for performance and portability.

    As for Alienware, if you already have a 1060 in your laptop, dumping another $700 on a 1070 and an AGA is not going to get you much more fps.
    Even an Alienware 13 is not exactly portable. >>> paying a large amount of $$$ for marginal improvement and limited portability.

    Of course, if you want to game OTG then forget about eGPU.
     
  11. sste

    sste Notebook Geek

    Reputations:
    9
    Messages:
    95
    Likes Received:
    32
    Trophy Points:
    26
    Just because the card is 20% more powerful doesn't mean it uses the bus or DMA more. The amount is highly application-dependent. For games the biggest consumer will usually be data swapping, which happens as needed, i.e. as you move around the world, but that has nothing to do with your frame rate.
    In terms of draw commands, both OpenGL 4.3+/Vulkan and the Direct3D 12+ horror move towards indirect draw calls, native compute programs, and generic GPU storage buffers. This means the GPU can generate its own workloads, meaning fewer draw calls, which have become a huge client-side (CPU) bottleneck. This also translates to lower bus requirements. Likewise, some of the work that was previously done on the client can today be done on the GPU side, again reducing data transfer. I highly doubt we will see a large increase in bus usage anytime soon.
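    [Editor's note: the "bus usage is application-dependent" point can be made concrete with a deliberately simplified toy model. All numbers below are illustrative assumptions, not measurements: per-frame PCIe traffic is driven by what the application streams, not by how fast the GPU renders.]

```python
# Toy model: per-frame PCIe traffic for a game, illustrating that bus usage
# depends on what the application uploads/downloads, not on GPU "power".
def frame_bus_traffic_mb(streamed_texture_mb, dynamic_buffer_mb, readback_mb):
    """Sum of data crossing the bus in one frame (MB); draw commands are tiny."""
    command_overhead_mb = 0.1  # rough allowance for command submission
    return streamed_texture_mb + dynamic_buffer_mb + readback_mb + command_overhead_mb

# A mostly static scene: almost no traffic, regardless of GPU speed.
static_scene = frame_bus_traffic_mb(0.0, 0.5, 0.0)
# Heavy asset streaming while moving through the world: orders of magnitude more.
streaming_scene = frame_bus_traffic_mb(64.0, 4.0, 1.0)

print(f"static scene:    {static_scene:.1f} MB/frame")
print(f"streaming scene: {streaming_scene:.1f} MB/frame")
```

    The same GPU, at the same frame rate, can sit anywhere between these two extremes depending on the title, which is why a faster card does not automatically mean more bus traffic.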
     
  12. rinneh

    rinneh Notebook Prophet

    Reputations:
    854
    Messages:
    4,897
    Likes Received:
    2,191
    Trophy Points:
    231
    This is not how it works. It's just a bottleneck that's there, not one that's suddenly growing. The 8% loss I encounter is simply because I use a laptop CPU as well; you lose 8% with an old card and 8% with a newer card. In the end, the thing I like most about it is how quiet an external GPU is as a docking station on your desk in combination with a laptop. These laptops are over-engineered if they just need to cool a single CPU and not a GPU. So for longevity and convenience at home I can still recommend an AGA. Also, desktop GPUs are still quicker: my 1070 runs at 2150 MHz now, which is loads faster than the laptop version even though the laptop version has more CUDA cores.
     
  13. ezzo

    ezzo Notebook Enthusiast

    Reputations:
    12
    Messages:
    45
    Likes Received:
    13
    Trophy Points:
    16
    If you're referring to Vulkan or DX12 then you have a point, provided no post-processing is done on the CPU.
    We can argue all we want, but various tests done by reputable websites have shown that last-gen cards are right at the limit of the bandwidth of PCIe 3.0 x4.
    That is not going to improve with the current-gen cards.
    PCIe 3.0 x16 or even x8 is fine for now, but not x4.

    I'm not sure what you're referring to with the 8% bottleneck if you're not comparing the same card placed in an AGA vs placed in a desktop (with similar specs).
    That's how you should test for the bottleneck.
    Loading time is not dependent on the GPU but on the HDD/SSD.
    The laptop version has more CUDA cores to compensate for a lower clock. The desktop 1070 is universally known to be more powerful than the laptop version.

    I agree with your point that a laptop with an eGPU is going to cool much better than a laptop alone, but what's the point of buying a powerful, expensive gaming laptop and an expensive eGPU setup?
    It makes more sense to buy a cheaper and actually portable laptop plus an eGPU, i.e. Razer Stealth + Core.
     
  14. sste

    sste Notebook Geek

    Reputations:
    9
    Messages:
    95
    Likes Received:
    32
    Trophy Points:
    26
    Referring to? I specifically stated OpenGL 4.3+, Vulkan and the other stuff. :)
    Post-processing on the CPU, in real-time applications?! Image post-processing? Yes, no one does that. I said that bus usage won't increase, definitely not significantly, so if x4 is on the limit then it is good enough.

    Edit: Like I said, with games the main thing using the bus is swapping data. Data does get larger (though slowly, and proportionally to VRAM, which seems to double every couple of generations or so), but other costs slowly decrease. x4 does seem to be on the low side, that is true, I never argued against it, but saying that a stronger card uses more bus bandwidth is simply not true.
     
    Last edited: Dec 2, 2016
  15. ezzo

    ezzo Notebook Enthusiast

    Reputations:
    12
    Messages:
    45
    Likes Received:
    13
    Trophy Points:
    16
    I'm referring to you specifically stating DX12/Vulkan, i.e. quoting you. Nothing to smile about.
    That is not a common thing. Most games are still running on DX11.

    Yes, there is such a thing as post-processing in games that can be offloaded to the CPU.

    Oh, you said that bus usage won't increase? From what? Last-gen cards? This gen's cards?
    Maybe provide some references to back that up?
     
  16. sste

    sste Notebook Geek

    Reputations:
    9
    Messages:
    95
    Likes Received:
    32
    Trophy Points:
    26
    There is no image post-processing done on the CPU, and there won't be. Animations, skinning, etc. are still sometimes done on the CPU; this is going to end, especially with the newer APIs.
    I explained why bus usage increases slowly and what it is used for, and for the very same reasons you stated (new APIs), newer titles might actually use a little less.
     
  17. judal57

    judal57 Notebook Deity

    Reputations:
    274
    Messages:
    1,164
    Likes Received:
    650
    Trophy Points:
    131
  18. ezzo

    ezzo Notebook Enthusiast

    Reputations:
    12
    Messages:
    45
    Likes Received:
    13
    Trophy Points:
    16
    Let me quote TechPowerup on PCIe scaling: "The final rendered image never moves across the bus except in render engines that do post-processing on the CPU, which has gotten much more common since we last looked at PCIe scaling."
    There's such a thing, mate.

    The issue here is that tests have already shown there is a bottleneck using PCIe 3.0 x4 and last-gen cards.
    Yet you are claiming the same bandwidth will be fine with current-gen cards that are almost twice as powerful.
    By that logic, we should all be fine with PCIe 1.0 x16 if bandwidth weren't an issue.
    You need to show me some benchmarks with a GTX 1080 @ 2K/4K @ 120/144 Hz to convince me the bandwidth is not an issue.
     
  19. rinneh

    rinneh Notebook Prophet

    Reputations:
    854
    Messages:
    4,897
    Likes Received:
    2,191
    Trophy Points:
    231
    If you put a 980 in the AGA you see an 8% performance drop compared to a desktop with a higher-end CPU and PCIe 3.0 x16. The same with the GTX 1070, 980 Ti, Radeon RX 480, etc. It's a constant: you lose about 8% with an external screen on the AGA. It is just overhead, not the card hitting an actual bottleneck.

    The same goes if you use an older-gen CPU with a lower bus speed in a desktop. The performance is just slightly lower, but it did not become worse with newer cards. Thus it is not a problem.
     
  20. sste

    sste Notebook Geek

    Reputations:
    9
    Messages:
    95
    Likes Received:
    32
    Trophy Points:
    26
    If some review site says so then it must be true... It goes against everything we do today; stalling the pipeline to do *image post-processing* on the client actually makes sense to you? OK. I am actually a software engineer and computer scientist, and one of my research areas is rendering...
     
  21. ezzo

    ezzo Notebook Enthusiast

    Reputations:
    12
    Messages:
    45
    Likes Received:
    13
    Trophy Points:
    16
    15859 in Firestrike, OCed to 2100 MHz, really isn't that great.
    Check this out: http://www.3dmark.com/fs/9048138
    It doesn't have the latest desktop CPU, so its Physics score is lower, but its Graphics score is much higher.

    An 8% drop in performance across all cards due to overhead, with no other performance hit? Do you have some reference/evidence for that?

    We're really nitpicking here.
    I don't see any issues with the methodology of the tests conducted by TechPowerup, so I choose to believe their results over your words, for which you're providing absolutely no references.
    Again: GTX 1080 in an AGA/PCIe 3.0 x4 vs PCIe 3.0 x16.


    edit: typo
     
    Last edited: Dec 2, 2016
  22. sste

    sste Notebook Geek

    Reputations:
    9
    Messages:
    95
    Likes Received:
    32
    Trophy Points:
    26
    In the one that you linked? The one that shows a 2-4% difference between x4 and x16? No, I agree, I don't see a problem.
     
  23. judal57

    judal57 Notebook Deity

    Reputations:
    274
    Messages:
    1,164
    Likes Received:
    650
    Trophy Points:
    131
    @ezzo you may have a brain problem... I say look at the GPU score!! And the other GPU is overclocked.
    And if that were the case... 3.4 GHz vs 2.6... you have to be kidding me.
     
  24. ezzo

    ezzo Notebook Enthusiast

    Reputations:
    12
    Messages:
    45
    Likes Received:
    13
    Trophy Points:
    16
    That's for a last-gen card. A GTX 1080 is almost twice as powerful.
    Please read what I typed and the article, think about it, then reply to me.

    My brain is fine.
    Again, go read up on the reference scores for a 1080 in Firestrike, then come back and reply to me. 15800 is considered on the low side for an OCed 1080.
     
  25. sste

    sste Notebook Geek

    Reputations:
    9
    Messages:
    95
    Likes Received:
    32
    Trophy Points:
    26
    And there is nothing significant that would use the bus's bandwidth in proportion to the GPU's "power".
     
  26. ezzo

    ezzo Notebook Enthusiast

    Reputations:
    12
    Messages:
    45
    Likes Received:
    13
    Trophy Points:
    16
    Sure. Stick with PCIe 1.0 x16 then.
     
  27. sste

    sste Notebook Geek

    Reputations:
    9
    Messages:
    95
    Likes Received:
    32
    Trophy Points:
    26
    If I were developing a game, and my game had absolutely no data to upload during normal rendering, it would be more than enough. Bus usage is *highly* application-specific. A GPU doesn't use the bus to render or to run compute programs; you only use it to download/upload data.
     
  28. ezzo

    ezzo Notebook Enthusiast

    Reputations:
    12
    Messages:
    45
    Likes Received:
    13
    Trophy Points:
    16
    Yeah. You're gonna develop a Candy Crush clone and tell me your PCIe 1 x16 is more than enough for your 1080.
    Lol.
    Maybe read the whole article again.
     
  29. mertymen2010

    mertymen2010 Notebook Consultant

    Reputations:
    5
    Messages:
    213
    Likes Received:
    26
    Trophy Points:
    41
    So what are we saying here? Is my system in my sig going to be redundant soon? Is there really no point in upgrading the GPU? Will there be more options for this laptop, or should I sell the lot now while it's worth something?
     
  30. sste

    sste Notebook Geek

    Reputations:
    9
    Messages:
    95
    Likes Received:
    32
    Trophy Points:
    26
    I understand that your grasp of the technical details is lacking, but let me rephrase (yet again): you can have the absolute best graphics to date and use very, very little bus bandwidth (and with newer APIs, practically none). Once you start swapping textures, modifying vertex buffers or any kind of ring buffers (or over DMA), and especially if you're reading back data, queries, etc., you start using the bus. I assume most benchmarks, as well as in-game benchmarks, have more than just a static scene, and that is where the bus is used. "Power" doesn't equal bus utilization; the latter is application-specific.
    Seeing as most of the details just go over your head and all you can link is some review-site article, I won't bother anymore. For others: I am not claiming that x4 is good or bad; I don't know, nor care. I am stating that card X being much more powerful than card Y has nothing to do with the amount of bus bandwidth it requires; that is application-specific.
     
  31. ezzo

    ezzo Notebook Enthusiast

    Reputations:
    12
    Messages:
    45
    Likes Received:
    13
    Trophy Points:
    16
    Benchmarks have shown that using PCIe 3.0 x4 (the connection the AGA uses) costs about 3-5% on previous-gen cards.

    I would say the AGA will only go as far as the current gen of cards (1070/1080) before being severely bottlenecked.

    Others here will tell you otherwise.

    Well, you can keep the GPU, since the AGA isn't worth much anyway.

    https://www.techpowerup.com/reviews/AMD/R9_Fury_X_PCI-Express_Scaling/

    I'm quoting the article because they have tested the limits of different PCIe connections using proper methodology, while you have provided nothing.
    It's common sense that bus saturation is going to depend on the application, as well as on the game design. No one is arguing with you on that.

    Please stop your personal attacks and read what you said previously. You claimed that because of DX12/Vulkan, PCIe 3.0 x4 is going to be fine despite the (3-5%) bottleneck already shown by the article and others. Now you're claiming you don't know.
    I'm done talking to you.
     
  32. ezzo

    ezzo Notebook Enthusiast

    Reputations:
    12
    Messages:
    45
    Likes Received:
    13
    Trophy Points:
    16
    Hey, check this thread out.
    It's by the same guy whose benchmark you showed me.

    "Will there be any other driver coming out soon. My Firestrike scores with my GTX1080 on my Alienware 17 R3 with AGA are still fairly low (around 14500) versus over 20,000 reported when the GTX1080 is used in desktops with the same CPU. You mentioned a new driver late June. Is that still coming?"

    Score of 14500 when running the video back to the laptop.


    "Slight improvement when connected to external monitor directly from GTX1080 (via AGA).

    www.3dmark.com/.../9045893

    Firestrike Score at 1080p= 15,859"

    Improvement when running video out from the AGA eGPU.

    Yeah, no bottleneck, right.
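    [Editor's note: the relative losses implied by the scores quoted above (14,500 on the internal display, 15,859 on an external monitor, ~20,000 for a desktop) work out as below. Note that total Firestrike scores also include CPU/Physics contributions, so not all of the gap is bus overhead.]

```python
# Percentage loss relative to the quoted ~20,000 desktop Firestrike score.
desktop = 20000        # GTX 1080 in a desktop (figure quoted in the thread)
aga_internal = 14500   # AGA driving the laptop's internal display
aga_external = 15859   # AGA driving an external monitor

def loss_pct(score, baseline):
    """Percentage shortfall of `score` relative to `baseline`."""
    return 100 * (baseline - score) / baseline

print(f"internal display: {loss_pct(aga_internal, desktop):.1f}% loss")
print(f"external monitor: {loss_pct(aga_external, desktop):.1f}% loss")
```

    Roughly a 27.5% shortfall on the internal display versus about 20.7% on an external monitor, which is why routing the output back to the laptop panel is repeatedly flagged as the more expensive path.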
     
  33. sste

    sste Notebook Geek

    Reputations:
    9
    Messages:
    95
    Likes Received:
    32
    Trophy Points:
    26
    No, I said that newer APIs, like OpenGL 4.3+/Vulkan, facilitate a reduction in bus usage. I even touched on how.

    And yet you think that just because a card is more powerful it immediately requires more, even in the same titles. Claiming nonsense like image post-processing done on the CPU and whatnot. o_O

    Now I actually went over that test you keep singing about. Notice, please, that the major offender is something called "Civilization: Beyond Earth", which looks like a strategy game, and only at lower resolutions do you get a heavy reduction in performance with lower bus bandwidth. That is because they constantly update buffers from the CPU and probably use plenty of draw calls (clearly, DX11). Funny how the "bottleneck" doesn't increase with resolution, which again means it has nothing to do with "post-processing" or quality.
    Likewise, if 3.0 x4 actually saturated the bus, then you would expect 2.0 x4 (half the bandwidth) to greatly hamper performance, and yet it doesn't. Even 1.1 x4 (25% of the bandwidth of 3.0 x4) is usually within 10-15% of *3.0 x16* (16 times the bandwidth), except in some cases, for reasons I explained. I wonder why...
    The 2-4% difference between 3.0 x4 and x16 probably comes from the higher bus latency affecting part of the rendering pipeline. Newer cards and newer titles won't suffer much more. Stop spreading conclusions when you have no idea what you are talking about.
     
  34. judal57

    judal57 Notebook Deity

    Reputations:
    274
    Messages:
    1,164
    Likes Received:
    650
    Trophy Points:
    131
    Hahah, Firestrike is for freaky kids... in games the story is different... if you use an SSD the benchmark score rises... if I enable Samsung Rapid Mode the score rises... and I don't care about the overall result, I only look at the Graphics score... in games it doesn't matter if your CPU is running at 4 GHz or 3 GHz...
    Stop complaining about the AGA, kid... you don't have an Alienware... stop ****ing here, newbie.
     
    Cerreta28 likes this.
  35. ezzo

    ezzo Notebook Enthusiast

    Reputations:
    12
    Messages:
    45
    Likes Received:
    13
    Trophy Points:
    16
    Try harder.
    I'm done talking to you as well.
     
  36. sste

    sste Notebook Geek

    Reputations:
    9
    Messages:
    95
    Likes Received:
    32
    Trophy Points:
    26
    ElCobrito and rinneh like this.
  37. ElCobrito

    ElCobrito Notebook Geek

    Reputations:
    0
    Messages:
    77
    Likes Received:
    26
    Trophy Points:
    26
    You do realize you're talking about a 3-4 fps loss at 4K? Even if it were a 10 fps loss you would still stay over 60 fps in almost every game at 1440p on a hypothetical GTX 2080. And you could avoid part of that bottleneck by sending the signal to an external monitor instead of sending it back to the laptop.