What CPU can I upgrade to?

Was your 3930k overclocked? So the temps are higher with the overclocked 1680v2 than with the overclocked 3930k?
Aren't some of the 1680v2s pulled from Mac Pros? Those are OK to use in an X79 PC too, right?

Yeah, my 3930k was at 4.5GHz as well. I'm working on lower core voltages to bring the temps down, as opposed to pushing for higher frequencies. The higher temps aren't surprising; it's 33% more cores. My mobo is X79 and it works just fine: Rampage IV Gene. The 1680v2 isn't on the supported list, although its little brothers are. Not sure why, but again, it works great.
 
I am trying to understand.
I've been watching my 3930k, and during very light loads it seems to run at about 3800MHz.
During 100% loads while rendering it runs at about 3500MHz max.
According to the official specs it is a 3200MHz CPU that boosts to 3800MHz.
So will OCing it to around 4000MHz make much of a difference?
Why does it only run at about 3500MHz under full load? It's not running hot.

Now I am looking at changing to either E5-2687W v2 or 1680v2
The 2687W v2 is 3.4GHz and can boost to 4.0.
The 1680v2 is 3.0GHz and can boost to 3.9, but can be OCed to 4.0 or greater.

Trying to determine if the 2687W v2, which can boost to 4GHz, is better than OCing the 1680v2 to about 4GHz. Or is the 1680v2 better? I want stability over max clock speed because it's for work.
The 2687W v2 is 150W vs 130W for the 1680v2, but once the 1680v2 is OCed I am guessing it will run just as hot, or hotter?
 
The stock Intel settings for turbo boost (and AMD does the same thing) means that the more cores you have active, the lower the total boost. 3.8 GHz is only for 1-2 active cores and then it decreases to 3.5 GHz when all cores are in use. There is a setting in the BIOS that overrides this and you can set all cores to meet the same max turbo, but it only works on unlocked CPUs (so yes 1680v2, no 2687Wv2). It does increase power/temp of course, but well worth it.
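As a rough sketch of what those stock turbo bins look like on the 3930k: the 1-2 core (3.8GHz) and all-core (3.5GHz) values are the ones mentioned in this thread; the 3-5 core bins below are illustrative guesses, not official Intel figures.

```python
# Per-active-core turbo limits (GHz) for an i7-3930K, as a sketch.
# 1-2 core (3.8) and all-core (3.5) values are from this thread; the
# 3-5 core bins are illustrative assumptions, not Intel's published bins.
TURBO_BINS = {1: 3.8, 2: 3.8, 3: 3.7, 4: 3.6, 5: 3.5, 6: 3.5}

def max_turbo_ghz(active_cores: int) -> float:
    """Highest turbo frequency allowed with this many cores active."""
    return TURBO_BINS[active_cores]

print(max_turbo_ghz(1))  # light load -> 3.8
print(max_turbo_ghz(6))  # full render -> 3.5
```

This is exactly the behavior described above: a lightly-threaded task sees 3.8GHz, but an all-core render is capped at 3.5GHz unless the BIOS override (on unlocked parts) flattens all the bins to the same value.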

The 2687 has a max all-core speed of 3.6GHz and that can't be changed, and while the stock 1680 all-core is 3.4, as marshac said, it's super easy to just set all of the cores to a much higher speed (and yes, the power and heat will jump with that). Given sufficient cooling, you are guaranteed stability with the 1680v2 at 3.9GHz because every core is tested to run at that speed at the factory, and even going a few hundred MHz over that shouldn't be a problem at all.
 
Thank you very much for the clear explanation of all that. I see the 1680 is the way to go, as many have said.

As far as setting all cores to meet the same max turbo on unlocked CPUs, is the 3930k an unlocked CPU that can be set to boost all cores to 3.8?
 
I can't just slap a new GPU into the desktop because some Adobe CS6 features aren't supported on the newer GPUs, so am I most likely limited to using an older model GPU like the original TITAN 6GB?
Thoughts? Is changing the CPU to a Xeon with 8/10 cores and 20/25MB cache still the best way to go?

Are you sure the newer Quadros aren't supported? I found some links that say all you have to do is modify a text file to get the newer Quadros working:
https://community.adobe.com/t5/prem...o-p4000-not-compatible-with-cs6/td-p/10659245
To get it working, go into the root of CS6 (Program Files/Adobe/Adobe Premiere Pro CS6), look for the cuda_supported_cards.txt file, and add the card to the list in the same way as the other Quadro cards.
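A minimal sketch of that edit in Python: the install path and the "Quadro P4000" entry are just examples; use your actual CS6 root and the card name exactly as the driver reports it.

```python
from pathlib import Path

def allow_card(cs6_root: str, card_name: str) -> None:
    """Append card_name to Premiere CS6's CUDA whitelist if not already listed.

    cs6_root is the CS6 install directory, e.g.
    C:\\Program Files\\Adobe\\Adobe Premiere Pro CS6 (path is an assumption).
    """
    whitelist = Path(cs6_root) / "cuda_supported_cards.txt"
    lines = whitelist.read_text().splitlines()
    if card_name not in lines:
        with whitelist.open("a") as f:
            f.write(card_name + "\n")

# e.g. allow_card(r"C:\Program Files\Adobe\Adobe Premiere Pro CS6", "Quadro P4000")
```

Back the file up first; re-running the edit is harmless since it skips names already present.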

https://helpx.adobe.com/photoshop/kb/photoshop-cc-gpu-card-faq.html

Side note, I have a Quadro P4000 for sale. PM me if you're interested.
 
Thank you very much. You are correct. The txt file can be modified (or just deleted) which does allow Premiere to use newer cards for GPU rendering. But After Effects uses an old ray tracing feature that doesn't work except with old cards.
My situation is two-fold as I would like to improve performance in both Adobe CS6 and 3ds Max. According to my research it seems Adobe uses mostly CPU, and GPU only comes into play for rendering effects and exporting. Thus CPU is most important. 3ds Max may use the GPU more but I don't know for sure.

Thank you much for mentioning your Quadro P4000. My GTX 780 actually has more CUDA cores so it supposedly will have an advantage in editing in Premiere but the Quadro would most likely be better at 3ds Max.
Nvidia has structured it this way on purpose, dividing the cards so the GTX/RTX line is better at some things and Quadro better at others instead of putting both in one card.
 
The 4800/4900s are Ivy Bridge-E, not Haswell, so only a 5% IPC bump and slightly more efficient AVX.

OP: Go for either the E5-2697 v2 (12 cores, 3GHz on all cores, down to $200 on eBay), or, if you are willing to overclock, the E5-1680 v2 (unlocked 8-core, can easily set all cores to 4GHz or more, $175-200 on eBay); if not, the E5-2687W v2 (locked 8-core, 3.6GHz all-core, $182 on eBay).
I use these when I can't find one test that covers all the CPUs...

8,108 points - Intel Core i7-3930K @ 3.20GHz
11,489 points - Intel Xeon E5-1680 v2 @ 3.00GHz
11,520 points - Intel Xeon E5-2687W v2 @ 3.40GHz
12,535 points - Intel Xeon E5-2690 v2 @ 3.00GHz

I also found the Intel Xeon E5-2696 v2 @ 2.50GHz with 13,106 points at a good price of £99.45. But this benchmark is only a rough guide to actual use.
That's quite the PassMark score jump between the 3930k and the other CPUs. Is it just the extra cores that make the difference, or are those Xeons that much better chips?
 
The extra cores do have a lot to do with it, though typically more cores=slower speed, which is why the 2690v2 (10c) is close to the 2696v2 (12c). Picking a part is often about which is more important or useful to your use case.

Ivy Bridge did have a ~5% IPC gain over Sandy Bridge, and cores certainly do help, but those Xeons also have more cache per core than the 3930k (2MB/core vs. 2.5-3.125MB/core on the Ivys) which can help things. All Intel HEDT chips use Xeon silicon, even the 3930k. Parts that are released as Xeons are binned more for stability, power, and longevity than desktop parts, which are binned towards speed.

The PassMark scores can be a bit misleading as turbo comes into play and what systems they are run on will determine how much time is spent at turbo vs. not, as well as how many of the 3930k/1680v2 submissions were overclocked. (For example, the 2697v2 scores slightly higher than the 2696v2, despite the latter having equal or higher turbo speeds, but lower base frequency)
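One quick way to see it isn't only core count: divide the scores quoted earlier in the thread by each chip's core count.

```python
# PassMark points per core, using the scores quoted earlier in the thread.
# The second tuple element is each chip's official core count.
scores = {
    "i7-3930K":    (8108, 6),
    "E5-1680 v2":  (11489, 8),
    "E5-2687W v2": (11520, 8),
    "E5-2690 v2":  (12535, 10),
    "E5-2696 v2":  (13106, 12),
}

pts_per_core = {name: pts / cores for name, (pts, cores) in scores.items()}
for name, ppc in pts_per_core.items():
    print(f"{name}: {ppc:.0f} pts/core")
```

The Ivy 8-cores come out ahead of the 3930k per core (IPC plus extra cache), while the 10- and 12-core parts fall below it per core as all-core clocks drop; exactly the cores-vs-clocks trade-off described above.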
 

Don't forget that this Xeon has two disabled cores, but Intel left all of the cache intact, which is a bit unusual for Xeons.


Edit to also add: I've got an AORUS NVMe drive as the boot drive on this X79 board. It benches at 2000MB/s, so faster than the SATA SSD it replaced, although far less than what the drive is capable of (I got it in anticipation of a future upgrade).
 

Indeed, I was looking at the list of Ivy Xeons, and it seemed quite unusual that most of the lineup had most or all of the full L3 cache on the die (LCC or HCC as appropriate) enabled.

For the NVMe, are you using a native BIOS or a modded one? There are only modded ones available for my Asus X79s, but I'm really wary about trying something like that. Also, given those speeds, it sounds like you're on PCIe 2.0? Sandy CPUs do support 3.0 if you've got the drive in a CPU-provided slot; just enable it in the BIOS.
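For reference, the per-lane math behind why a ~2000MB/s ceiling looks like a PCIe 2.0 x4 link (chipset/DMI overheads ignored):

```python
# Usable bandwidth per PCIe lane after line-code overhead, then x4.
def lane_mb_s(gt_per_s: float, payload_bits: int, total_bits: int) -> float:
    """Convert a raw transfer rate (GT/s) to usable MB/s per lane."""
    return gt_per_s * 1e9 * payload_bits / total_bits / 8 / 1e6

gen2_x4 = 4 * lane_mb_s(5.0, 8, 10)     # PCIe 2.0: 5 GT/s, 8b/10b encoding
gen3_x4 = 4 * lane_mb_s(8.0, 128, 130)  # PCIe 3.0: 8 GT/s, 128b/130b encoding
print(round(gen2_x4))  # 2000 -- right at the observed cap
print(round(gen3_x4))  # 3938
```

So 2000MB/s is exactly what gen 2 x4 delivers, while gen 3 x4 should roughly double it.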
 

Someone had already modified the last BIOS version available for my mobo, so rather than recreating it, I took the chance and flashed it. Found the BIOS on https://www.win-raid.com/

Yeah, I've got 3.0 enabled; 2000MB/s is as good as it gets ;/
 
Holy cow! I would love to get 2000MB/s in my X79 Pro machine! That would be awesome, but it's limited to SATA 6Gb/s speeds of around 550MB/s without doing a risky mod, right?
 

Risky? Likely not. You can always flash back. You can install the NVMe BIOS modules yourself, or download someone else's work if you're feeling trusting.
 