
Ask anyone at Apple why they need Ivy Bridge EP vs. a conventional desktop Haswell for the Mac Pro and you'll get two responses: core count and PCIe lanes. Desktop Haswell tops out at four cores, while Ivy Bridge EP scales up to twelve. Even though each of those Haswell cores is faster than what you get with an Ivy Bridge EP, for applications that can spawn more than 4 CPU-intensive threads you're better off taking the IPC/single-threaded hit and going with an older architecture that supports more cores.

The second point is a connectivity argument. Here's what a conventional desktop Haswell platform looks like in terms of PCIe lanes: you've got a total of 16 PCIe 3.0 lanes that branch off the CPU, and then (at most) another 8 PCIe 2.0 lanes hanging off of the Platform Controller Hub (PCH). In a dual-GPU configuration those 16 PCIe 3.0 lanes are typically divided into an 8 + 8 configuration. The PCH's remaining 8 lanes are typically more than enough for networking and extra storage controllers.

Ivy Bridge E/EP on the other hand doubles the total number of PCIe lanes compared to Intel's standard desktop platform: here the CPU has a total of 40 PCIe 3.0 lanes. That's enough for each GPU in a dual-GPU setup to get a full 16 lanes, and to have another 8 left over for high-bandwidth use. The PCH also has another 8 PCIe 2.0 lanes, just like in the conventional desktop case.

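To put the connectivity gap in concrete terms, here's a back-of-the-envelope tally. The per-lane throughput figures are the usual approximations (PCIe 3.0 at 8 GT/s with 128b/130b encoding works out to roughly 985 MB/s per lane per direction; PCIe 2.0 at 5 GT/s with 8b/10b encoding to roughly 500 MB/s) and the script is just an illustrative sketch, not anything Intel or Apple publishes:

```python
# Rough PCIe lane/bandwidth comparison: desktop Haswell vs. Ivy Bridge E/EP.
# Per-lane figures are approximate usable throughput per direction.
PCIE3_MBPS_PER_LANE = 985   # 8 GT/s, 128b/130b encoding
PCIE2_MBPS_PER_LANE = 500   # 5 GT/s, 8b/10b encoding

platforms = {
    "Desktop Haswell": {"cpu_pcie3": 16, "pch_pcie2": 8},
    "Ivy Bridge E/EP": {"cpu_pcie3": 40, "pch_pcie2": 8},
}

for name, lanes in platforms.items():
    total_lanes = lanes["cpu_pcie3"] + lanes["pch_pcie2"]
    total_bw = (lanes["cpu_pcie3"] * PCIE3_MBPS_PER_LANE +
                lanes["pch_pcie2"] * PCIE2_MBPS_PER_LANE)
    print(f"{name}: {total_lanes} lanes total "
          f"({lanes['cpu_pcie3']}x PCIe 3.0 + {lanes['pch_pcie2']}x PCIe 2.0), "
          f"~{total_bw / 1000:.1f} GB/s aggregate per direction")
```

By this rough math the E/EP platform offers double the lane count and better than twice the aggregate bandwidth, which is what lets a dual-GPU machine run both cards at x16 with lanes to spare.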
I wanted to figure out how these PCIe lanes were used by the Mac Pro, so I set out to map everything out as best as I could without taking apart the system (alas, Apple tends to frown upon that sort of behavior when it comes to review samples).
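You can get a partial view of this mapping from software alone. A minimal sketch, assuming a Mac with OS X's system_profiler tool (its SPPCIDataType report lists PCI devices; the exact field names, such as "Link Width" and "Link Speed", vary across OS X versions):

```python
import subprocess

# Dump the PCI device tree as seen by OS X; no disassembly required.
report = subprocess.run(
    ["system_profiler", "SPPCIDataType"],
    capture_output=True, text=True, check=True,
).stdout

# Print device names plus any link width/speed lines the report exposes.
for line in report.splitlines():
    if any(key in line for key in ("Name:", "Type:", "Link Width", "Link Speed")):
        print(line.strip())
```

This only shows you what the OS enumerates, not how the lanes are physically routed, so it's a starting point rather than a full map.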
