The Notebook Review forums were hosted by TechTarget, which shut them down on January 31, 2022. This static read-only archive was pulled by NBR forum users between January 20 and January 31, 2022, to preserve the valuable technical information that had been posted on the forums. For current discussions, many NBR forum users moved over to NotebookTalk.net after the shutdown.
Problems? See this thread at archive.org.

    Making a DIY eGPU On The Cheap (~$250), Some Questions

    Discussion in 'e-GPU (External Graphics) Discussion' started by K_Wall_24, Jul 30, 2012.

  1. K_Wall_24

    K_Wall_24 Notebook Evangelist

    Reputations:
    14
    Messages:
    316
    Likes Received:
    0
    Trophy Points:
    30
    Alright, I've read the numerous posts on how to make an eGPU, but I must be stupid, because there's still a few things I don't understand.

    My ThinkPad W530 is in the mail, and I want to make an eGPU solution for it.

    I understand that it can do x1.2Opt, but I don't understand what that means, aside from knowing it can feed the graphics output back to the internal monitor. Is that correct?

    I'd like to know if anybody has benchmarks comparing x1.2Opt against a normal desktop PCIe connection, using the same graphics card (pick a cheap one, perhaps). I'd like to know what kind of performance I can expect. I understand that feeding the output back to the internal monitor lowers performance, so I'd send it to an external monitor instead, connecting through the ExpressCard slot.

    That's another thing. I understand that x1 2.0 and x2 1.0 are two different things, but I don't understand why. Does x2 mean using the ExpressCard and mPCIe slots together? And would that provide a significant performance gain over ExpressCard alone? If I wanted to use both the ExpressCard and mPCIe together, I would need the PE4H as opposed to the PE4L, right?

    Given my budget and laptop, what combination of GPU and adapter, totaling no more than $250 CAD (before shipping/taxes), would give me the best performance (assuming I'm using an external monitor)?

    I'm sorry about all the questions. I've really tried to understand everything, but for something as intricate as this, I want to make sure I know what I'm doing and how I'm doing it. I'd consider myself very tech-savvy, but with this, I'm lost.
     
  2. slugg

    slugg Newbie

    Reputations:
    0
    Messages:
    9
    Likes Received:
    0
    Trophy Points:
    5
    K_Wall_24, thanks for the post. You took the words right out of my mouth! I'm literally left wondering about the same thing.

    Not to thread-jack, but I'd also like clarification on "Setup 1.x". I see that it's a purchasable piece of software on HIT's website, but the DIY eGPU Experiences FAQ states "Harmonic Inversion Technology. A US distributor of BPlus' PE4H-EC2C/PE4L-EC2C with a cheaper base price, cheaper US$7 UPS shipping, plus are entitled to the Setup 1.x software (valued at US$25) upon your request as part of your purchase." I can't seem to find this entitlement in writing anywhere. So shouldn't the FAQ say that we need to buy some software, too?
     
  3. sgogeta4

    sgogeta4 Notebook Nobel Laureate

    Reputations:
    2,389
    Messages:
    10,552
    Likes Received:
    7
    Trophy Points:
    456
    x1.2Opt means you have a PCIe 2.0 x1 slot that can use Optimus technology, which offers much better performance (effectively double) than PCIe 1.0 x1 with Optimus (x1.1Opt). In terms of bandwidth, PCIe 2.0 offers twice what a PCIe 1.0 slot with the same lane count does, which means one PCIe 2.0 x1 lane = two PCIe 1.0 x1 lanes (in bandwidth only).

    However, unlike a desktop's PCIe 3.0 x16 slot, where the GPU itself is the limiting factor, on notebooks the PCIe 1.0 or 2.0 x1 slot is what limits your GPU's performance (hence getting an expensive GPU is not worth it). On the flip side, if you get a low-end GPU that is not bottlenecked by the PCIe slot, you will see identical performance to what a higher-bandwidth slot would give you.
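
    To put rough numbers on that, here is a back-of-the-envelope sketch. The per-lane figures are the usual theoretical PCIe rates after link-encoding overhead; real-world throughput will be lower.

```python
# Approximate usable PCIe bandwidth per lane, after link encoding overhead.
# PCIe 1.0 and 2.0 use 8b/10b encoding (80% efficient); 3.0 uses 128b/130b.

def lane_bandwidth_mb_s(gen):
    """Theoretical one-direction bandwidth of a single PCIe lane, in MB/s."""
    raw_gbit_s = {1: 2.5, 2: 5.0, 3: 8.0}[gen]       # raw signaling rate
    efficiency = {1: 0.8, 2: 0.8, 3: 128 / 130}[gen]  # encoding efficiency
    return raw_gbit_s * 1000 * efficiency / 8         # Gbit/s -> MB/s

x1_1 = lane_bandwidth_mb_s(1)          # x1.1 ExpressCard link: 250 MB/s
x1_2 = lane_bandwidth_mb_s(2)          # x1.2 ExpressCard link: 500 MB/s
desktop = 16 * lane_bandwidth_mb_s(3)  # PCIe 3.0 x16 slot: ~15,750 MB/s

print(x1_2 / x1_1)     # 2.0  -- one PCIe 2.0 lane matches two PCIe 1.0 lanes
print(desktop / x1_2)  # ~31.5 -- the desktop slot's bandwidth headroom
```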

    Using the internal monitor takes a slight performance hit due to the signaling overhead compared with using an external monitor. The PCIe version and lane count are determined by your motherboard, not by the adapter you use (ExpressCard and/or mPCIe). You can use either of these adapters, or in some cases both, to connect your external GPU solution to your motherboard. For bang for your buck, the GTX 460 1GB version is still undoubtedly the top choice.

    An analogy that might help: your motherboard's PCIe bus is like a highway. Say PCIe 2.0 is a double-width lane, while PCIe 1.0 is a single-width lane: two cars can travel side by side in one PCIe 2.0 lane, while it would take two lanes of PCIe 1.0 "width" to do the same. Your ExpressCard and/or mPCIe adapter is like a bridge, and the PCIe slot you insert your GPU into is another highway. It doesn't matter which "bridge" you choose, but the external PCIe slot for your GPU should be at least the same total "width" as your motherboard's, or it will become the limit (consider a one-lane highway crossing a bridge onto a two-lane highway). Now your GPU can be represented by the number of trucks on the road delivering packages from your GPU to your motherboard: the better the GPU, the more trucks. If your GPU is weak, say only one truck, it doesn't matter whether your highway has 1 lane or 16; it won't get stuck in traffic. If your GPU is 8 or 16 trucks, it will still get stuck on a two-lane highway. So there are rapidly diminishing returns as you increase GPU power while your bandwidth stays limited.
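
    The whole analogy boils down to a min(): effective throughput is whichever is smaller, the GPU's demand or the link's bandwidth. A toy model, with all numbers made up purely for illustration:

```python
def effective_mb_s(gpu_demand_mb_s, link_mb_s):
    """The slower of the GPU ("trucks") and the link ("bridge") sets the pace."""
    return min(gpu_demand_mb_s, link_mb_s)

X1_2 = 500            # ExpressCard PCIe 2.0 x1 link, MB/s (theoretical)
DESKTOP_X16 = 8000    # a desktop slot wide enough to never be the limit

weak_gpu = 300        # "1 truck": demands less than either link provides
strong_gpu = 4000     # "16 trucks": far more than an x1 link can carry

# The weak GPU performs identically on both links...
assert effective_mb_s(weak_gpu, X1_2) == effective_mb_s(weak_gpu, DESKTOP_X16)

# ...while the strong GPU is capped at the link speed over ExpressCard.
assert effective_mb_s(strong_gpu, X1_2) == X1_2
```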
     
  4. jluu1286

    jluu1286 Newbie

    Reputations:
    0
    Messages:
    2
    Likes Received:
    0
    Trophy Points:
    5
    @sgogeta4

    Do you recommend getting an external monitor and using the display output, then?
    I want to maximize my desk space while still getting good performance. If I opt for a stronger card, would that make up the difference?
     
  5. sgogeta4

    sgogeta4 Notebook Nobel Laureate

    Reputations:
    2,389
    Messages:
    10,552
    Likes Received:
    7
    Trophy Points:
    456
    Using the internal monitor is fine for the majority of applications; the performance hit isn't significant for daily tasks. I'd only get an external monitor if I needed more screen real estate and could spare the desk space for it. You don't really need to spend more than the price of a cheap GTX 460, since the extra power will essentially be wasted by the limited PCIe bus. If you have money to spend, go ahead, but IMO the gains will not be significant enough for the cost (you can check benchmarks in the main stickied thread).
     
  6. jonathanfv

    jonathanfv Notebook Enthusiast

    Reputations:
    7
    Messages:
    34
    Likes Received:
    24
    Trophy Points:
    16
    Hi! I know this is an old thread, but I liked the exchange, so I'm going to ask my question here if you guys don't mind.

    I also have a W530 in the mail, so I have the same ExpressCard specs as K_Wall_24 (x1.2Opt). I want to build an eGPU for two things: graphics applications (image and video editing, and I might appreciate some gaming too), and GPUPU calculations. For GPUPU calculations, Radeon HD cards are apparently better than NVIDIA, and they're also generally cheaper. So here are my two questions:

    1. Can I use a Radeon HD card with Optimus, to use it with my internal monitor?
    2. If I got a pretty decent Radeon HD card, the bandwidth of the ExpressCard would definitely bottleneck the card for gaming, and my FPS would stagnate at the same number even though the card could go much faster. But would that also be the case for GPUPU calculations? Because I'm pretty sure GPUPU calculations would use less bandwidth, since the card doesn't have to output an image. So am I right to think that a powerful graphics card would not be worth it for gaming in an ExpressCard eGPU, but could be worth it for GPUPU?
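
    Here's the back-of-the-envelope math behind my hunch for question 2 (all numbers invented, just to show the shape of the argument):

```python
def job_time_s(transfer_mb, link_mb_s, compute_s):
    """Total time = time to move data over the link + time to crunch it."""
    return transfer_mb / link_mb_s + compute_s

LINK = 500.0  # PCIe 2.0 x1 ExpressCard, MB/s (theoretical)

# Gaming: data pushed every frame, little compute per frame -> link-dominated.
game_frame = job_time_s(transfer_mb=30, link_mb_s=LINK, compute_s=0.005)

# GPU-compute batch: upload once, then compute for a long time -> compute-dominated.
batch_job = job_time_s(transfer_mb=500, link_mb_s=LINK, compute_s=60)

transfer_share_game = (30 / LINK) / game_frame   # ~92% of each frame is transfer
transfer_share_batch = (500 / LINK) / batch_job  # ~1.6% of the job is transfer
```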

    Thanks to anyone who can answer that.
     
  7. waynewonders

    waynewonders Newbie

    Reputations:
    0
    Messages:
    6
    Likes Received:
    0
    Trophy Points:
    5
    I don't know exactly what you mean by GPUPU calculations, but I'm assuming you mean doing calculations on the GPU's compute cores.
    I'm using my eGPU setup to render 3D images with Blender and its optimized Cycles renderer, which uses CUDA. With Cycles, it really is a case of the more CUDA cores, the faster it renders, but that doesn't mean the card will perform extremely well in games. For games, I think the graphics and memory clocks matter more.

    So, as sgogeta4 suggested, I went with the GTX 460 V2 1GB, because it has plenty of CUDA cores and is affordable. CUDA is NVIDIA-only, so I'm not sure what the pros are for Radeon cards.


    My setup was ~$200 for the PE4H and the GTX 460; I had a good 480W PSU scavenged from my old desktop.
     
  8. jonathanfv

    jonathanfv Notebook Enthusiast

    Reputations:
    7
    Messages:
    34
    Likes Received:
    24
    Trophy Points:
    16
    Thanks a lot for the answer. I meant GPGPU. :$ I'd like to accelerate 3D rendering, video editing renders, and possibly Pyrit (for cracking WPA). Apparently, Radeon cards are much faster for Pyrit. But I searched a bit, and here's the thing: Radeon cards generally have a lot more cores than NVIDIA cards, and they're also clocked faster. But NVIDIA did a better job of helping people use their cards for GPU calculations (CUDA). So if software were compatible with and optimized for both types of cards, the Radeon cards should be the fastest. But in real-life applications, most software is optimized for and compatible with NVIDIA. So I think I'll probably go with a GeForce GTX 660 (~$220).

    Thanks again.