Any newer Socket AM3+ CPUs coming?

griff30

Supreme [H]ardness
Joined
Jul 15, 2000
Messages
7,212
Just curious about an upgrade, seems like this portion of the message board is looking like the Physics Processing/CueCat board :)

Where did everyone go?
 
Before Piledriver, I'm not sure. From the leaked roadmaps there were supposed to be higher-clocked FX chips, but I believe AMD has canned those in favor of working on Piledriver.
 
Piledriver should be landing soon, but I'd suspect August at best, and more likely September. It should perform maybe 5% or so faster than Trinity and hopefully provide much better efficiency compared to BD.
 
5% or so faster?
5% faster on an already abysmal CPU is going to make matters worse for AMD.
Why bother?
 
5% or so faster?
5% faster on an already abysmal CPU is going to make matters worse for AMD.
Why bother?

He said faster than Trinity, not Bulldozer. Trinity (laptops) is already using Piledriver cores, and the desktop variant should be 5% faster than those. Unless I'm totally wrong and have no idea what I'm talking about.
 
Yeah, you're on the right track bootleg.

And when talking about 5% faster, he's talking about 5% faster clock for clock, meaning IPC. There's a very good chance that Piledriver will be clocked higher than Trinity parts. Based on the leaked Trinity specs (3.8 GHz stock, 4.2 GHz turbo for the A10-5800K), I'm guessing we'll see stock clocks of at least 4 GHz for the FX-8350 part vs. 3.6 GHz for the FX-8150, not to mention the improved IPC, which brought Trinity to approximately the same level as Phenom II at the same clocks. So hopefully, at the very least, the FX-8350 will be like a Phenom II x8 with significantly higher stock clocks and overclocks.
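To put rough numbers on that, here's a back-of-the-envelope sketch using performance ≈ IPC × clock. The IPC figures and the 4 GHz stock clock are assumptions from this thread, not confirmed specs:

```python
# Back-of-the-envelope single-thread estimate: performance ~ IPC x clock.
# All figures are illustrative assumptions, not measured data.

def relative_perf(ipc: float, clock_ghz: float) -> float:
    """Crude proxy for single-threaded performance."""
    return ipc * clock_ghz

fx8150 = relative_perf(ipc=1.00, clock_ghz=3.6)  # Bulldozer baseline
fx8350 = relative_perf(ipc=1.05, clock_ghz=4.0)  # +5% IPC, rumored 4 GHz stock

gain = fx8350 / fx8150 - 1
print(f"Estimated single-thread gain: {gain:.1%}")  # clock and IPC gains combined
```

With a 5% IPC bump and a clock bump on top, the combined single-threaded gain over the FX-8150 would land closer to 17% in this rough model.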
 
The last few CPUs released were the FX-4170 and the FX-6200. We are waiting to see what is happening with Piledriver. On another forum they suggest that if Piledriver can hit 4.6 GHz at stock it will be very competitive (no, not single-threaded)...
 
I keep hoping the 95 W 3.2 GHz eight-core will arrive sporting the "FX-8140" moniker.
 
Why do you need leaked info? Just read a Trinity review and then remind yourself that Piledriver will have L3 cache, which Trinity doesn't have.

What exactly that will add to the performance... I don't know.
 
You would want to know at which speed they start if they don't have any problems with manufacturing.
 
C'mon, AMD! I'm really hoping that PD will be a far better performer than BD and really reduce power consumption. Would love to see a plethora of models with 8, 12, and 16 real cores available for the consumer market segment, as well.
 
Would love to see a plethora of models with 8, 12, and 16 real cores available for the consumer market segment, as well.

I believe you are either looking for miracles or you will be okay if these were clocked below 2 GHz. That is, except for the 8-core version.
 
You would want to know at which speed they start if they don't have any problems with manufacturing.

Well, considering the A10-5800K will be a two-module quad-core part with an IGP at 3.8 GHz, I would expect the 4-module 8-core Vishera part to have stock clocks of at least 4 GHz for the top model.
 
C'mon, AMD! I'm really hoping that PD will be a far better performer than BD and really reduce power consumption. Would love to see a plethora of models with 8, 12, and 16 real cores available for the consumer market segment, as well.
You want a 16-core CPU... for the consumer market? I bet those will sell really well...
 
Late Q3 is probably when we'll see them in retail. They're supposed to launch some time between Q3/Q4.
 
You want a 16-core CPU... for the consumer market? I bet those will sell really well...

Well why not? We will have more than 6 cores available in the consumer market eventually. It's inevitable.

People are posting the same type of responses about that many cores now that were being posted about the probability of quad cores back when dual cores were the most widely available to the consumer segment.

I suppose it doesn't matter a whole lot for now or even over the next few years, as dual and hex cores are plenty for anything and everything the majority of home users do.
 
Well why not? We will have more than 6 cores available in the consumer market eventually. It's inevitable.

It always comes at the cost of decreased overall IPC and clock speed, so you're sacrificing single-threaded and lightly-threaded performance to cater to very, very few highly-threaded workloads. It just doesn't make sense, because you can't add cores willy-nilly. If you lined up two competing architectures, one with 4 cores and another with 16, both on the same node and fab process and the same TDP, then you'd see just how much the 16-core option would suck in today's world of software.

When you design a chip you do so for the predominant workloads of today and just enough to provide incentive to move things forward as well as be able to provide for the future. If you take a massive leap forward then you'll only find that you won't fit in today nor the future.
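That trade-off can be sketched with Amdahl's law. The parallel fraction and the per-core speed penalty below are purely illustrative assumptions:

```python
# Amdahl's-law sketch of the 4-core vs. 16-core trade-off described above.
# The parallel fraction and per-core speed penalty are illustrative assumptions.

def speedup(parallel_fraction: float, cores: int) -> float:
    """Amdahl's law: speedup relative to one core of the same design."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

p = 0.60  # assume 60% of a typical desktop workload parallelizes
designs = {"4-core": (4, 1.0), "16-core": (16, 0.7)}  # assume 30% slower cores on the 16-core part

for name, (cores, core_speed) in designs.items():
    throughput = core_speed * speedup(p, cores)
    print(f"{name}: relative throughput {throughput:.2f}")
```

Under these assumptions the 4-core design comes out ahead (~1.82 vs. ~1.60); the 16-core part only wins once the parallel fraction climbs past roughly 73% in this model.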
 
Yeah, I don't understand the push for more cores either. The i5s seem to be laying the smack down pretty good with their 4 cores. I don't see why AMD would need to add another couple to their 8 cores.

I'm pretty optimistic for Piledriver. Judging by what we've seen with Trinity and the 5+% increase it was over Llano, I expect a 10% overall improvement at least. 15% was what I was hoping for at first, but I don't think that'll be realistic.
 
The GPU is fantastic on Trinity but the CPU performance is sort of a hidden treasure. It's not good enough to catch up to Intel in single-threaded workloads, but when you compare it to Llano's Husky K10 cores (K10.5?) it improved quite a bit in single-threaded performance at better perf-per-watt with higher clock speeds and carries a far bigger GPU yet they only increased the transistor count by 7%. Basically, they've increased the GPU size significantly, increased overall die size by only 7% and they managed to actually decrease the size of the cores/modules while retaining multi-threaded performance and improving upon single-threaded performance. It shows they've done more with less on 2 fronts, one being smaller core size and the other is better perf-per-watt.

They won't catch SB in single-threaded performance, but I'd expect the Vishera chips to really pull away dramatically in heavily multi-threaded workloads when compared to the 4-thread IB/SB chips, whereas the current BD 4-module chips really couldn't pull away from the Thuban or the 2500K. What can rain on this parade is the resonant clock mesh tech, as that really doesn't help as much (or at all) past 4 GHz clocks, which is what Vishera will rely on heavily for performance gains.

I guess we'll see :) I wouldn't expect any miracles, though.
 
That's fine with me. I'm not looking for an Intel killer, I'm just wanting something that can give them a little competition at least, and something good for me to drop in the beautiful new Sabertooth motherboard! ;)

It does look like from reading the reviews out on Trinity that it is a big step forward for AMD which is a very good sign.
 
Yeah, I'm looking forward to an A10-5800K build for my HTPC. Mostly because I also use the Dolphin Wii emulator on it, so a good iGPU and good single-threaded CPU performance are needed. The 5570 I have in there at the moment lags at times.
 
I believe you are either looking for miracles or you you will be okay if these were clocked at below 2 Ghz. That is except for the 8 core version.
The current top-end 16-core Bulldozer hits 2.6 GHz/105 W non-turbo. An improved process plus the new gating thingy may push these to 3 GHz-ish non-turbo.
 
The current top-end 16-core Bulldozer, the Opteron 6284 SE, runs at 2.7 GHz on each core @ 140 W. Sorry, I was looking at old info in the last post.
 
The current top-end 16-core Bulldozer, the Opteron 6284 SE, runs at 2.7 GHz on each core @ 140 W. Sorry, I was looking at old info in the last post.

Bingo! Was hoping someone would catch on to what I was saying earlier about 8, 12, and 16 core future models.

With architecture advancements and process shrinks, I think a 3.0+ GHz lower TDP part is attainable, and fairly easy, at that. AMD needs to push them into the mainstream and then mop up the cash. Would be an easy sell to average Joe or Mary since the perception of moar corze being better is the general mindset of the consumer base.

Of course, with that many cores, at those speeds, with lower TDP, it would very well be better. :)
 
Hmmm... a 16-core 3 GHz part with two cores that can turbo up to 4.2+ GHz when the others are idling would be something.
 
Bingo! Was hoping someone would catch on to what I was saying earlier about 8, 12, and 16 core future models.

With architecture advancements and process shrinks, I think a 3.0+ GHz lower TDP part is attainable, and fairly easy, at that. AMD needs to push them into the mainstream and then mop up the cash. Would be an easy sell to average Joe or Mary since the perception of moar corze being better is the general mindset of the consumer base.

Of course, with that many cores, at those speeds, with lower TDP, it would very well be better. :)

They wouldn't sell at all; they're ~$1500 each.
 
They wouldn't sell at all; they're ~$1500 each.

Yeah, right now. If you're in the market for a server.

If AMD starts cranking them out in huge quantities specifically for the consumer market segment, they won't be nearly that expensive at all.

All their CPUs would have the initial intent to be 16-core models; it's just the units that don't pass muster that get intentionally binned as lower-speed or fewer-core models. It doesn't cost anything more in manufacturing to make a 16-core vs. a 4-core this way, for example.
 
Just picked up a six-core Bulldozer, the 3.3 GHz one. It's not here yet, though. I got it for the lower 95 W power consumption, and it was cheaper than the 1100T, which is going for like $220.00 on eBay.
 
Not to mention those 16-core parts come with quad processor connection technology and quad-channel memory.

Apparently there's a 115 W model set to be released, with 2.5 GHz stock and 3.4 GHz turbo clocks.
 
Yeah, right now. If you're in the market for a server.

If AMD starts cranking them out in huge quantities specifically for the consumer market segment, they won't be nearly that expensive at all.

All their CPUs would have the initial intent to be 16-core models; it's just the units that don't pass muster that get intentionally binned as lower-speed or fewer-core models. It doesn't cost anything more in manufacturing to make a 16-core vs. a 4-core this way, for example.

Yeah, they'll still be expensive. Those chips are MCMs, which are essentially two chips fused together, and they have added features and stricter binning. You don't get many viable $1500 chips from a wafer. That's why they cost that much in the first place :p

Bigger chip with moar coars = lower clock speed, lower IPC (compared to a competing lower-core-count architecture), higher TDP, fewer chips per wafer due to size, and a higher price.

You don't make those unless they make sense and you can make your money back. At the moment, even in the server space, the 16-core Interlagos chips aren't selling all that well; how do you think they'd do on the desktop? I'd take a $200 4/6-core chip with higher clocks and higher IPC any day of the week.
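The chips-per-wafer point can be sketched with the classic Poisson yield model. The die areas and defect density below are made-up round numbers, not real fab data:

```python
import math

# Poisson yield model: die_yield = exp(-defect_density * die_area).
# Defect density and die areas below are illustrative, not real fab data.

WAFER_AREA_MM2 = math.pi * (300 / 2) ** 2  # 300 mm wafer, edge loss ignored
DEFECTS_PER_MM2 = 0.002                    # assumed defect density

def good_dies_per_wafer(die_area_mm2: float) -> float:
    candidates = WAFER_AREA_MM2 / die_area_mm2      # how many dies fit on the wafer
    die_yield = math.exp(-DEFECTS_PER_MM2 * die_area_mm2)  # chance a die has no defect
    return candidates * die_yield

for name, area in [("4-core die", 150.0), ("16-core die", 600.0)]:
    print(f"{name} ({area:.0f} mm^2): ~{good_dies_per_wafer(area):.0f} good dies per wafer")
```

Quadrupling the die area doesn't just cut the candidate count to a quarter; yield drops too, so the big die ends up with roughly a tenth as many good dies per wafer in this model, which is why per-chip prices balloon.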
 
^ You're still thinking in terms of today's process, not future ones. With architectural and process changes, a single-die 16-core will be coming along at some point... again, the same things were argued about quad cores before those arrived on a single die.
 
^ You're still thinking in terms of today's process, not future ones. With architectural and process changes, a single-die 16-core will be coming along at some point... again, the same things were argued about quad cores before those arrived on a single die.

You're still not explaining why a 16-core single die is better than an 8-core single die. There's no doubt that we'll have more and more coars, but expecting such massive transitions in both architecture and software is too far-fetched. Even BD at 8 coars suffers today because of the lack of heavily-threaded software to redeem it, and on top of that it has castrated clock speeds, massive power consumption, and a bigger die. It doesn't make sense today, nor will it make sense next week, but when we get to a point where software is pervasively threaded then I'll change my point of view ;P
 
You're still not explaining why a 16-core single die is better than an 8-core single die. There's no doubt that we'll have more and more coars, but expecting such massive transitions in both architecture and software is too far-fetched. Even BD at 8 coars suffers today because of the lack of heavily-threaded software to redeem it, and on top of that it has castrated clock speeds, massive power consumption, and a bigger die. It doesn't make sense today, nor will it make sense next week, but when we get to a point where software is pervasively threaded then I'll change my point of view ;P

Well, more cores can be taken advantage of right now:

1. Windows and other operating systems
2. Productivity software (e.g.: MS Excel)
3. Video/photo creation and editing software
4. Graphics design and rendering

I realize that it depends on how the software was coded/compiled for it to be able to take advantage of "x" amount of threads.

But it amazes me the advancement that software has made to become multithread-capable en masse in a relatively short amount of time, and the development is not letting up. We will eventually see apps and games that don't necessarily have limits on the max number of threads that can be utilized.

I don't think it's too far-fetched at all to see single-die 16-core consumer processors in the near future. I'm not talking about what we'll see in the very next release... more like within the next 2 years for the first consumer parts to arrive.

Alas, I suppose I'm just in wishful-thinking mode, as PD, like BD, will be 32 nm. It will be almost impossible to overcome the size constraints to allow for "acceptable" power consumption/heat output in a home environment without a crippled clock speed if they were to make a single-die 12- or 16-core CPU in the next release of processors. We'd have to put our PCs in fan-vented server cases just to keep things operating a bit cooler!

I'm thinking we'll be waiting for the 14-16 nm processors, or maybe even 10 nm, before we see fast-clocked, power-efficient 16+ core parts. Heck, maybe even 18 cores once the fab process gets that small and cost effective (yes, more wishful thinking :p ).
 
On the server/workstation, yes; on the desktop there's unfortunately no chance in hell. We've been at quads for years now, and mere dual cores + hyperthreading are enough for a majority of tasks. Before we see 16-core chips on desktops/laptops we'll see specialized cores and co-processors, much like both Intel and AMD have done with on-die graphics. It's essentially their way of saying "you've got enough processor power, now let's focus on the graphics/visual portion." Consider that more than 3/4 of PCs sold in the US now are laptops and not desktops. We enthusiasts aren't their bread and butter anymore. Both Intel and AMD have been focusing on pushing graphical performance with the CPU side of things as an afterthought; Haswell/Skylake and Kaveri are steps in just that direction.
 
All good points, pelo.

Before we see processors sporting more than 4-8 cores in the desktop consumer market, they'll first be sold (obviously for much higher prices) to the specialty business workstation and business desktop segments, and likely not in really huge quantities, either. Yields wouldn't exactly be stellar due to young and unrefined processes, and it would take time before more-than-8-core parts are coming out of almost every wafer.

Then again, it's hard to predict how the market will be in 2-5 years from now. Maybe this whole Ultrabook thing will take off and cripple the desktop market even more than laptops have, thus perpetuating even more R&D focus on increasing IGP performance coupled behind a limit of only 2-8 cores with each new generation CPU.

*shrug*
 
Desktop computing will change in one way or another. More than 76% of PCs sold in the US are now laptops/ultrabooks. That's an absolutely crazy statistic. Apple has sexified the laptop, and ultrabooks/ultrathins are another step in that direction. Other than some people who use their PCs as workstations, gaming is really the only intensive task that most people do on a daily basis, and as a result both Intel and AMD have been catering to that crowd. While AMD has gained laptop market share (more than 3% in a single year), they've decreased in desktop and hovered roughly equal in server (might be a slight gain?). Intel has done very well in the laptop segment with their graphical improvements, and now AMD/Intel have essentially kicked, or are planning to kick, nVidia to the curb in mobile. Bypassing discrete GPUs means the chip maker and OEMs make more profit because they're cutting out another hardware manufacturer.

As good as Kepler is in perf-per-watt, it just won't sell in mobile, and because of rebranding and yields/expense at 28 nm, nVidia will have a hard time selling low/mid discrete GPUs, which are all rebrands of their 40 nm Fermi architecture. I have a feeling they're gonna get kicked in the nuts in the next few years by AMD and Intel both.

All of this sounds like hell for the desktop and maybe it is, but I don't care :) I'm planning on ditching my desktop entirely in favor of a laptop with a 1080p monitor capable of driving 3 displays simultaneously. Gaming at lower res is fine by me if it means I save on power, cost and it all comes in at under 5 lbs. Bring on the Intel/AMD on-die graphics!

[Image: jpr_q1_graphics_01.jpg — JPR Q1 graphics market share chart]

It shows just how prevalent laptops are and how discrete GPU sales have been slipping, down over 3% year-to-year. That's overall graphics share and includes discrete, but the discrete numbers are down whereas overall PC sales are up, so it paints a gloomy picture for low-end discrete GPU sales today and mid-tier discrete cards in the future. Intel and AMD have really been focusing on on-die graphics.
 