Bulldozer screens/info maybe??? (pcinlife)

flexcore

Possible screens of the 'dozer and a listing of info on stock clocks and turbo clocks.
Do you think this is real? Even if it isn't, something has to come out soon.

pcinlife link
 
We will see; I doubt we will get anything substantial until very soon before launch, as is typical with AMD.
 
I'm still confused about when these are supposed to come out. May? June?
 
They come out in June; it's been posted various times in this forum and on most other sites. The give-away is that the release dates for the AM3+ 900-series boards were already announced.

But those are some crazy samples, especially the ones with the +1GHz turbo boost (samples 3020, 3120, 2820), holy shit. I doubt we will see that on an actual retail processor, but those are definitely sweet sample chips. Good signs for overclockability as well.
 
It looks reasonable to me. With 8 cores and 7 of them idle, it's probably well within TDP to boost one core by 1GHz.

A 3.2GHz base frequency at the top of the stack is very reasonable and allows "refreshes" of +0.1GHz later down the road. This is a smaller manufacturing process, which allows faster clocks, and AMD has detached the core frequency from the rest of the CPU, right? If the leak is accurate, this processor should be highly overclockable past 4GHz: if it's stable at 4GHz under AMD's own testing (modest cooling, strict TDP limits), there should be headroom with better cooling and relaxed limits.
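Rough back-of-the-envelope on why a +1GHz single-core boost fits the power budget. Every number below (TDP, per-core and uncore watts) is a made-up illustrative figure, not an AMD spec, and real turbo also raises voltage, which this ignores:

```python
# Crude single-core turbo headroom estimate. All wattages are illustrative
# assumptions, not measured Bulldozer figures.

TDP_W = 125.0             # assumed package power budget
BASE_CLOCK_GHZ = 3.2      # assumed base clock
CORES = 8
PER_CORE_ACTIVE_W = 12.0  # assumed draw of one fully loaded core at base clock
PER_CORE_IDLE_W = 1.0     # assumed draw of a power-gated idle core
UNCORE_W = 15.0           # assumed memory controller / L3 / I/O

def package_power(active_cores: int, active_clock_ghz: float) -> float:
    """Total package power with `active_cores` boosted and the rest idle."""
    # Scale active-core power linearly with clock (ignores the voltage bump).
    per_core = PER_CORE_ACTIVE_W * (active_clock_ghz / BASE_CLOCK_GHZ)
    idle = (CORES - active_cores) * PER_CORE_IDLE_W
    return active_cores * per_core + idle + UNCORE_W

print(package_power(1, 4.2))  # one core at 4.2GHz, seven idle: ~38W, lots of headroom
print(package_power(8, 3.2))  # all eight at base clock: ~111W, near the budget
```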
 
Anyone wanna buy my 2500K because Dozer is going to smoke Sandy!

Not necessarily smoke. Remember that a Phenom II needs over 4.3GHz to equal a 2500 at stock, so it depends on how much IPC improvement Bulldozer brings. Bulldozer will not equal SB in IPC; 30+% is too much to make up, and I do not see Bulldozer hitting 5.5GHz to compare with an overclocked 2500K. In heavily multithreaded applications, Bulldozer will be faster than 4-core SB processors at more than 5 threads; no argument there.
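That 4.3GHz figure is where the "30+%" comes from. A quick sketch of the implied gap, treating performance as roughly IPC × clock (a simplification) and assuming a 3.3GHz stock 2500 and a 4.5GHz 2500K overclock:

```python
# Implied per-clock gap if a Phenom II needs ~4.3GHz to match an i5-2500 at
# its 3.3GHz stock clock. Performance is modeled crudely as IPC * clock.
phenom_clock_needed = 4.3   # GHz, figure from the post above
sandy_stock_clock = 3.3     # GHz, i5-2500 base clock

ipc_ratio = phenom_clock_needed / sandy_stock_clock
print(f"Implied Sandy Bridge IPC advantage: ~{(ipc_ratio - 1) * 100:.0f}%")  # ~30%

# Clock a Phenom II-class core would need to match a 2500K overclocked to an
# assumed 4.5GHz:
print(f"Clock needed with zero IPC gain: ~{4.5 * ipc_ratio:.1f} GHz")  # ~5.9GHz
```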
 
If all of this info is true, then sign me up for one, especially the one that Turbos from 3.1 to 4.1GHz. Though, I wonder which FX models the OPN codes represent.
 
I think it will be a great processor for multi-threaded applications like encoding and rendering, but fall short for gaming.
 
I think it will be a great processor for multi-threaded applications like encoding and rendering, but fall short for gaming.

For the most part, even my 2-year-old Phenom II does just fine, as games aren't as demanding as they used to be. The only time I wish I had a faster CPU is when I transcode video (or play SC2, which is unusually CPU-bound). Threading support has also been improving in newer titles.

As long as BD comes close enough to Sandy's single-thread performance, it will be a great chip.
 
I think it will be a great processor for multi-threaded applications like encoding and rendering, but fall short for gaming.
Well, if the multi-threaded nature of AMD/Nvidia drivers improves, then BD and SB may be close if not equal in gaming.
 
How are you going to post screens but no benchmarks? If you have it running, crank it up!

BTW, I just got an i5 2500K... it absolutely smokes a Phenom II... :cool:
 
If it is true and on 32nm, I'm not liking 2.8GHz at 1.41V... :confused:


AMD has always run higher voltages than Intel: at 90nm AMD was at 1.45V while Intel was at 1.35V; at 65nm AMD was at 1.4V and Intel at 1.3V; at 45nm Intel was at 1.1-1.3V while AMD was still at 1.325V.

Yes, I know AMD has also had lower-power processors below those voltages on those processes, but I'm talking about their high-end parts.

But you also have to consider that these are engineering samples, so the voltages will vary between processors while AMD figures out what works best at which clocks. It also depends on what default voltage the board sets. If everyone remembers, when the X6s came out, all the boards had the voltages set insanely high, between 1.35V and 1.45V, but the actual stock voltage for the 1090T was 1.27-1.28V (fixed on most boards with a later BIOS revision) and the 1055T's was 1.23-1.25V (never really fixed; they just left it at the corrected 1090T voltages).


Well, if the multi-threaded nature of AMD/Nvidia drivers improves, then BD and SB may be close if not equal in gaming.

The drivers aren't the problem; it's the game engines the developers are using, because they have to cater the games they make to people still running 6-7 year old systems. If they create a truly multithreaded game, like BFBC2, which chokes on anything less than 4 cores, you get people bitching and whining about how the game runs like crap on their POS C2D or Athlon X2 processors. The game developers need to push the technology as well; it forces people to upgrade and also forces companies like AMD and Intel to push their own technology.

But think about it: in the last 4 years, how many truly multithreaded games have come out? Two? Maybe three if you count Crysis, even though it doesn't really use more than 2 cores efficiently.
Supreme Commander was multithreaded, sort of, if you used Vista/Windows 7.
BFBC2 supports an arbitrary number of physical cores thanks to the Havok physics engine (it does not support logical/hyperthreaded cores).
StarCraft 2 only uses 2 cores.
Crysis primarily uses 2; even though it loads 4 cores, the last 2 never really go over 30%.
The COD series really only uses 2 cores.
Almost all PhysX-supporting games don't use more than 2 cores because PhysX itself isn't multithreaded.

Those are only a few of the games released in the last 3-4 years, but they give you a prime example of what's really going on, and that's the true problem here, not AMD's or Nvidia's drivers. They can only do so much as far as threading goes; the game itself has to support it.
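To make the "the game itself has to support it" point concrete, here's a minimal structural sketch (my own toy example, not code from any engine or driver): a frame that updates every entity in one serial pass can only ever load one core, while one that fans independent entity updates out across workers scales with core count by design. A process pool stands in for a native job system here, since pure-Python threads can't parallelize CPU-bound work:

```python
# Toy illustration of serial vs. job-style game updates. Not real engine code;
# a ProcessPoolExecutor stands in for the native worker threads an engine
# would actually use (the Python GIL prevents CPU-bound threads from scaling).
import os
import time
from concurrent.futures import ProcessPoolExecutor

def update_entity(entity_id: int) -> float:
    # Stand-in for per-entity AI/physics work, independent of other entities.
    x = float(entity_id)
    for _ in range(20_000):
        x = (x * 1.0000001 + 1.0) % 1_000_003
    return x

ENTITIES = list(range(2_000))

def serial_frame() -> list:
    # "Two-core era" engine: one thread walks every entity each frame.
    return [update_entity(e) for e in ENTITIES]

def parallel_frame(pool: ProcessPoolExecutor) -> list:
    # Job-style engine: independent updates fan out across all cores; the
    # still-serial render/submit step would then consume the results.
    return list(pool.map(update_entity, ENTITIES, chunksize=100))

if __name__ == "__main__":
    t0 = time.perf_counter()
    serial_frame()
    t1 = time.perf_counter()
    with ProcessPoolExecutor(max_workers=os.cpu_count()) as pool:
        parallel_frame(pool)
    t2 = time.perf_counter()
    print(f"serial frame:   {t1 - t0:.2f}s (one core, no matter the GPU driver)")
    print(f"parallel frame: {t2 - t1:.2f}s (scales with cores, because the engine was written for it)")
```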
 
Let a major console get released with 4+ cores and then see what the game devs do. It's not old PCs that are holding them back; it's the crappy hardware in the consoles.
 
Let a major console get released with 4+ cores and then see what the game devs do. It's not old PCs that are holding them back; it's the crappy hardware in the consoles.

This. Every new game coming out these days is a console game first and a PC game second, so developers are catering to the lowest common denominator. Right now 720p is that denominator, and unfortunately you don't need 4 blazing cores for that.
 
Let a major console get released with 4+ cores and then see what the game devs do. It's not old PCs that are holding them back; it's the crappy hardware in the consoles.

Xbox 360... the Xbox had 3 cores, the 360 has 4. The PS3 uses Cell processing, which is the same approach as a video card's shader cores, so that's not the problem. The problem is that not all console code can be transferred directly over to PC, and most developers are just too lazy to do the work needed for it, since the games don't need the processing power.

The true change in performance will come when the Xbox and PlayStation both support DX11, or whatever DX version Microsoft has out in 2012-2013 when R&D on the new consoles is finished. The downside is that both consoles will be using 2011 technology, so I'm sure they will be limited to DX11.
 
Xbox 360... the Xbox had 3 cores, the 360 has 4. The PS3 uses Cell processing, which is the same approach as a video card's shader cores, so that's not the problem. The problem is that not all console code can be transferred directly over to PC, and most developers are just too lazy to do the work needed for it, since the games don't need the processing power.

The true change in performance will come when the Xbox and PlayStation both support DX11, or whatever DX version Microsoft has out in 2012-2013 when R&D on the new consoles is finished. The downside is that both consoles will be using 2011 technology, so I'm sure they will be limited to DX11.

Think you are a little confused. The original Xbox had a single-core Celeron in it, the 360 is a 3-core CPU, and the Wii is basically a dual-core G5 CPU. The Cell processor is barely "video card like": much like Bulldozer, where a module is made up of two arithmetic/FPU clusters that share parts of a single CPU, the Cell is one primary CPU core with 7 other FPU-like units. The only thing that makes it video-card-like is the idea of modules, where several large and small units are combined.
 
That looks like a pretty poor sample; it's using 18% of the CPU to play a small movie file... I'm sure the full version will be better. The CPU sector needs more competition to lower prices on both sides.
 
It looks reasonable to me. With 8 cores and 7 of them idle, it's probably well within TDP to boost one core by 1GHz.

A 3.2GHz base frequency at the top of the stack is very reasonable and allows "refreshes" of +0.1GHz later down the road. This is a smaller manufacturing process, which allows faster clocks, and AMD has detached the core frequency from the rest of the CPU, right? If the leak is accurate, this processor should be highly overclockable past 4GHz: if it's stable at 4GHz under AMD's own testing (modest cooling, strict TDP limits), there should be headroom with better cooling and relaxed limits.

I think they will have to change the clocks a module at a time, not a core at a time, so if it turbos one module (2 cores) to 4.1GHz, it could be very impressive.
 
I think they will have to change the clocks a module at a time, not a core at a time, so if it turbos one module (2 cores) to 4.1GHz, it could be very impressive.

If this is true and AMD manages a 10% to 15% per-core IPC improvement, I would expect that at 4.1GHz it would be ballpark 8 to 15% faster than a 2600 at stock + turbo.

I am not sure I would be happy with 3.2GHz across all 8 cores. That would take away some of the advantage AMD has in multithreading. I still think the 3-module chip (which I expect to have a base clock of 3.5GHz) will be the one to buy for desktop use.
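A rough sensitivity check on that ballpark, again treating single-thread performance as IPC × clock. Everything here is an assumption: the 4.1GHz turbo from the leak, a 3.8GHz single-core turbo for the i7-2600, and two guesses for how far Sandy Bridge's IPC really is ahead of Phenom II. The projection only comes out ahead if that gap is nearer 15% than the 30%+ mentioned earlier in the thread:

```python
# Crude sensitivity check: Bulldozer single-thread speed relative to an
# i7-2600, modeled as IPC * clock. All inputs are assumptions, not benchmarks.

def relative_perf(bd_ipc_vs_phenom: float, sb_ipc_vs_phenom: float,
                  bd_clock_ghz: float, sb_clock_ghz: float) -> float:
    """Bulldozer performance relative to Sandy Bridge (1.0 = equal)."""
    return (bd_ipc_vs_phenom * bd_clock_ghz) / (sb_ipc_vs_phenom * sb_clock_ghz)

BD_TURBO = 4.1   # GHz, from the leaked screenshots
SB_TURBO = 3.8   # GHz, assumed i7-2600 single-core turbo

for sb_gap in (1.15, 1.30):        # Sandy Bridge IPC vs. Phenom II (guesses)
    for bd_gain in (1.10, 1.15):   # the 10-15% Bulldozer IPC gain assumed above
        r = relative_perf(bd_gain, sb_gap, BD_TURBO, SB_TURBO)
        print(f"SB lead {sb_gap:.2f}x, BD gain {bd_gain:.2f}x -> {r:.2f}x of a 2600")
# With a 15% SB lead the result is ~1.03-1.08x; with a 30% lead it drops to ~0.91-0.95x.
```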
 
If it is true and on 32nm, I'm not liking 2.8GHz at 1.41V... :confused:

Different Vcc levels between the two manufacturers are not a big deal; that's why you can have newer Intel chips running at 1.2V but still drawing crazy wattage and hitting 80C, while AMD chips draw the same or less wattage from the motherboard at 1.5V and run at 55C with similar cooling.

Just different manufacturing processes and tolerances.
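To put numbers on that: dynamic power goes roughly as C·V²·f plus leakage, so voltage is only one term. The effective-capacitance and leakage figures below are invented purely to show how a 1.5V part can land at the same or lower wattage than a 1.2V part:

```python
# Dynamic CPU power is roughly P = C_eff * V^2 * f, plus static leakage.
# C_eff and leakage below are made-up values chosen only to illustrate how
# two chips at very different voltages can draw similar total power.

def power_w(c_eff_nf: float, vcore_v: float, freq_ghz: float, leakage_w: float) -> float:
    # With C in nF and f in GHz the 1e-9 and 1e9 cancel, giving watts directly.
    return c_eff_nf * vcore_v**2 * freq_ghz + leakage_w

# Lower-voltage chip, but with higher assumed switched capacitance and leakage:
print(power_w(c_eff_nf=18.0, vcore_v=1.2, freq_ghz=3.4, leakage_w=20.0))  # ~108W
# Higher-voltage chip, with lower assumed capacitance and leakage:
print(power_w(c_eff_nf=12.0, vcore_v=1.5, freq_ghz=3.2, leakage_w=10.0))  # ~96W
```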
 
If this is true and AMD manages a 10% to 15% per-core IPC improvement, I would expect that at 4.1GHz it would be ballpark 8 to 15% faster than a 2600 at stock + turbo.

I am not sure I would be happy with 3.2GHz across all 8 cores. That would take away some of the advantage AMD has in multithreading. I still think the 3-module chip (which I expect to have a base clock of 3.5GHz) will be the one to buy for desktop use.

The basic idea I have heard is that there will be a base CPU speed (let's say 3.2GHz), and pretty much out of the gate, if most of the cores are in use, it will ramp up 500MHz. If the number of cores in use drops, it ramps up even more. I'm pretty sure those upper numbers are not 8-core numbers, but probably 2- or 4-core numbers.
 
I'm hoping these screens are real. These would be good numbers for engineering samples, and hopefully the final parts will be even better.

I'm also hoping this is the beginning of more info coming out about BD!
 
The basic idea I have heard is that there will be a base CPU speed (let's say 3.2GHz), and pretty much out of the gate, if most of the cores are in use, it will ramp up 500MHz. If the number of cores in use drops, it ramps up even more. I'm pretty sure those upper numbers are not 8-core numbers, but probably 2- or 4-core numbers.

JF-AMD has said before that even when all cores are loaded they can still turbo 500MHz, and that is on the server side (16-core Interlagos). I remember comments (or slides?) stating that clock speed is only one factor in power draw and in how close the chip gets to its thermal limit; the other main factor is how much of the processor's capabilities a workload really utilizes per clock. The 3.2GHz base speed is probably based on all the logic in every core being utilized 100%, consuming wattage equal to the TDP, so the core clock won't throttle below 3.2GHz when that happens in normal use. However, that is a theoretical workload, and no real workload comes close to using all the logic on a CPU (or probably even half). So, if the CPU can detect how far below the maximum a workload's consumption is, it can clock up aggressively until it reaches the TDP limit. It's the new, much more aggressive turbo that AMD created; sounds pretty cool.
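A toy model of that idea (my own sketch, not AMD's actual algorithm): estimate how much of the TDP the current workload really uses at a given clock, then step the clock up until the estimate hits the budget. The TDP, clock range, and scaling rule are all assumptions:

```python
# Toy model of utilization-aware turbo as described above. Not AMD's real
# controller: it just shows that a workload using less of the core's logic
# leaves thermal headroom that can be spent on clock speed.

TDP_W = 125.0
BASE_GHZ = 3.2
MAX_TURBO_GHZ = 4.2
STEP_GHZ = 0.1
FULL_LOAD_W_AT_BASE = 125.0  # assumed: a 100%-utilization workload hits TDP at base clock

def estimated_power(utilization: float, clock_ghz: float) -> float:
    """Crude estimate: power scales with utilization and (linearly) with clock."""
    return FULL_LOAD_W_AT_BASE * utilization * (clock_ghz / BASE_GHZ)

def pick_turbo_clock(utilization: float) -> float:
    """Raise the clock in 100MHz steps while the estimate stays under the TDP."""
    clock = BASE_GHZ
    while clock + STEP_GHZ <= MAX_TURBO_GHZ + 1e-9 and \
          estimated_power(utilization, clock + STEP_GHZ) <= TDP_W:
        clock += STEP_GHZ
    return round(clock, 1)

print(pick_turbo_clock(1.00))  # 3.2 -- the theoretical all-logic workload gets no boost
print(pick_turbo_clock(0.85))  # 3.7 -- a heavy real workload still gets ~500MHz
print(pick_turbo_clock(0.70))  # 4.2 -- a lighter load runs at the full turbo clock
```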
 
I really hope it does. I plan to swap my current Core 2-era machine over to BD, as I'd like a machine more capable of being an all-in-one gaming rig + development server.
 