i9-13900K benchmark leak?

Generally, you get better overclocking headroom on the KS models because of the slight differences in voltage and thermals, but I agree.
Well, dumb question, and not just addressed to you: if you have an onboard GPU, can you actually turn the PCIe GPU down to some ridiculously low power state and run off the onboard for most things, then have Windows wake up the GPU for 3D workloads?
 
Even if something is multithreaded, it is rarely perfectly parallel the way rendering or some compression is; gaming, a lot of compilers, and CAD work can have a main thread or some crucial part of the workload that is single-threaded, which keeps single-thread performance quite relevant, no?
Certainly, many things may use a single thread, or multiple threads but not efficiently, because they are either poorly designed, old, or just better that way (MS Excel is awful for being single-threaded in many of its formulas). So there could be a very specific place for it, but in general, with the number of things people are doing on their systems at any given time, a 400-point synthetic benchmark score is not something anyone will notice in the real world versus the benefits of the other processor options, or of just sticking with what they have.
 
11th gen may be trash, but OEMs sell it cheap as dirt, and you really don't need a crapload of power in the average office. The biggest reason I have to replace things right now is the lack of TPM 2.0 modules in the older stuff. It makes a huge difference when dealing with encrypted storage, and we've gradually been encrypting everything because oversharing and leaks are a thing. Better to assume the bad actors are already inside and make it hard as hell to export anything than to assume they can't get in. But in terms of day-to-day usage, stick 16 GB of RAM and a decent NVMe/SSD in an old 6th-gen box and as far as users are concerned they have a whole new, top-of-the-line, fast-as-fuck machine... until they access one of the encrypted shares, it takes like 2 minutes to open a document, and they're convinced the network is having a problem.
Yeah, the only systems coming with fast NVMe drives are the 11th Gen parts and the odd 10th Gen machine I see floating around. We are using M.2 SATA drives for almost all the machines. It's amazing that I once considered them fast! I do load all the systems here with 16 GB, and with all the encryption and company apps we have... it makes 8th Gen parts seem slow. That's inaccurate; everything seems slow in this environment.
 
Generally, you get better overclocking headroom on the KS models because of the slight differences in voltage and thermals, but I agree.
There is zero downside to an iGPU other than it costing a few more dollars. It is great to have as a backup and for troubleshooting a discrete GPU. On the KF you also lose out on Quick Sync. The KS SKUs are highly binned chips; the KF and K don't have much difference in overclocking headroom.
 
My understanding of i9-13900K performance based on what is known so far:

1. Faster than the Ryzen 7950X in general
2. The AMD 5800X3D outperforms it in some games but not others
3. High energy consumption and thermals (but undervolting can improve this without much performance sacrifice)

So, as a general do-it-all CPU, which for me would mean workstation use, video rendering and photo processing, AI applications, and some occasional gaming, it might be the best CPU available. For me the choice is i9-13900K vs. the 7950X.

This of course is an early take and before thorough reviews have been released.
 
I actually will also be looking out for the i7 variant, as I can still use it with my DDR4 Z690 board. I just do not feel like DDR5 makes any sense at this point. A drop-in replacement, I'll do that.
This is where I'm leaning as well - a cheap Z690 from warehouse deals, DDR4, and a 13700K. I'm happy to wait for the high-binned 13900KS variant if I decide to do a dumb-money Z790/DDR5 build.

My inevitable 7900X3D AM5 build will wait for the X3D refresh in Q1.

Funny how Intel is the backward-compat value proposition this round, but only this round, because it just happened to land during AMD's move to AM5. Long term we know AM5 will have the longer backward-compat tail.
 
It's not really just single-threaded; it's more about a smaller number of faster cores, and in that case it's just about everything a non-researcher/creator runs. Not that it's all that noticeable, but Intel does generally still lead in performance where a smaller number of faster cores is more relevant. I would assume most things "normal" people use a PC for fall into this category; web browsing, photo editing, etc.
[benchmark screenshots]
 
Yeah, that's a good example: a 6-core system beating the last generation's 12-16 cores by a good amount.
 
This is where I'm leaning as well - a cheap Z690 from warehouse deals, DDR4, and a 13700K. I'm happy to wait for the high-binned 13900KS variant if I decide to do a dumb-money Z790/DDR5 build.

My inevitable 7900X3D AM5 build will wait for the X3D refresh in Q1.

Funny how Intel is the backward-compat value proposition this round, but only this round, because it just happened to land during AMD's move to AM5. Long term we know AM5 will have the longer backward-compat tail.
A broken clock is right twice a day :) Let's hope AMD delivers on the longevity again.
 
This is where I'm leaning as well - a cheap Z690 from warehouse deals, DDR4, and a 13700K. I'm happy to wait for the high-binned 13900KS variant if I decide to do a dumb-money Z790/DDR5 build.

My inevitable 7900X3D AM5 build will wait for the X3D refresh in Q1.

Funny how Intel is the backward-compat value proposition this round, but only this round, because it just happened to land during AMD's move to AM5. Long term we know AM5 will have the longer backward-compat tail.
I recommend the Asus TUF GAMING B660M-PLUS WIFI D4. It's got everything you need. I recently built one with a 12600K for some kids, and it's the smoothest build I have done in the past 2.5 years. I got it from Amazon Warehouse for $125. No original box, but all accessories, and it all looked new/unused. May have been held for a minute.

I personally have a DDR5 ITX system with a 12700K. And getting that 'good' was a real pain: 3 different motherboards, long boot times, DOA video cards, a bad set of DDR5, etc.

If you really want overclocking, I suppose the Z690-equivalent TUF would also be good.

I think the 13900K and 13700K will win in gaming (at least by a small margin), except for a few titles that benefit unusually from the huge amount of cache in the 3D chips.
 
Personally, I see these as a pretty legit alternative to AMD's offerings, especially as someone who is mostly playing games and using the 2D half of Adobe Creative Suite. I'm pretty unlikely to buy another new CPU in less than 2 years, so I don't really care about platform longevity. I'm going to watch Microcenter's mobo/CPU bundle deals once PCIe 5 drives and newer PSUs start hitting shelves. Whoever has the better bundle deal is likely to get my $.
 
Personally, I see these as a pretty legit alternative to AMD's offerings, especially as someone who is mostly playing games and using the 2D half of Adobe Creative Suite. I'm pretty unlikely to buy another new CPU in less than 2 years, so I don't really care about platform longevity. I'm going to watch Microcenter's mobo/CPU bundle deals once PCIe 5 drives and newer PSUs start hitting shelves. Whoever has the better bundle deal is likely to get my $.
I couldn't care less about PCIe 5 drives until we see some games that actually take advantage of them. They've been taking their sweet time on that.
 
I couldn't care less about PCIe 5 drives until we see some games that actually take advantage of them. They've been taking their sweet time on that.

In my case it's a matter of starting a totally fresh build and attempting to future-proof to the best of my ability. If they're just around the corner, I might as well wait a few months. Ditto with PSUs with the new cabling. There's always going to be something right around the corner, but those (plus a case and cooling) are items I shouldn't need to replace for a long time.
 
Well, dumb question, and not just addressed to you: if you have an onboard GPU, can you actually turn the PCIe GPU down to some ridiculously low power state and run off the onboard for most things, then have Windows wake up the GPU for 3D workloads?
I might be out to lunch myself on this, but I thought that to use the onboard video you had to plug your monitor into the HDMI/DVI connector on the motherboard, whereas to use the PCIe GPU you had to plug into the video card. Assuming I am correct here, I am not sure there would be a "soft" way to swap between onboard and PCIe graphics based on 3D workload. You would need to manually swap the cables.

That said, I agree that it would be nice if you could do this.
 
I couldn't care less about PCIe 5 drives until we see some games that actually take advantage of them. They've been taking their sweet time on that.
The issue with "using" PCIe 5 for drives is that to really use it you need to be using the DirectStorage APIs. The parts of the DirectStorage API that really make it work are a series of commands that are optional in the PCIe 3 and 4 specifications but mandatory in the PCIe 5 specification. Only a handful of NVMe controllers contained those optional commands, and when the silicon shortages hit and NVMe controllers were downgraded across the board, those controllers were replaced with ones that don't have them, even within the same product lineup, which makes supporting it a PITA. So really, the adoption of PCIe 5 NVMe drives is what will spur developers to integrate the technology that uses them.
 
The issue with "using" PCIe 5 for drives is that to really use it you need to be using the DirectStorage APIs. The parts of the DirectStorage API that really make it work are a series of commands that are optional in the PCIe 3 and 4 specifications but mandatory in the PCIe 5 specification. Only a handful of NVMe controllers contained those optional commands, and when the silicon shortages hit and NVMe controllers were downgraded across the board, those controllers were replaced with ones that don't have them, even within the same product lineup, which makes supporting it a PITA. So really, the adoption of PCIe 5 NVMe drives is what will spur developers to integrate the technology that uses them.
Nonsense. DirectStorage is a Windows solution to a Windows problem. PCIe 5 drives are entirely usable without DirectStorage.

The latest NVMe storage devices connected by using a PCIe bus can achieve very high levels of throughput and IOPS (I/O requests per second). The overhead of Win32 APIs means that even though the available storage bandwidth can be utilized, taking advantage of it might result in an unacceptably high CPU utilization. This is especially true when the workload consists of a large number of small requests.

The DirectStorage APIs are designed to remove most of the operating system's overhead by closely interacting with the underlying NVMe hardware. This allows for achieving a higher bandwidth with lower CPU usage. The goal is to enable handling of up to 50,000 requests per second while using at most 10 percent of a single CPU core.
https://learn.microsoft.com/en-us/g...verviews/directstorage/directstorage-overview
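
For anyone wondering what "using DirectStorage" actually looks like from the app side, here is a minimal sketch of a single GPU-destination read. This assumes the DirectStorage SDK (dstorage.h, link dstorage.lib) and an already-created D3D12 device and destination buffer; the file name and size are placeholders, not anything from the docs above:

// Minimal DirectStorage sketch: read a file straight into a GPU buffer.
// Assumes the DirectStorage SDK plus an existing D3D12 device and
// destination resource; "asset.bin" and the size are placeholders.
#include <dstorage.h>
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

void LoadAssetToGpu(ID3D12Device* device, ID3D12Resource* gpuBuffer, UINT32 size)
{
    ComPtr<IDStorageFactory> factory;
    DStorageGetFactory(IID_PPV_ARGS(&factory));

    ComPtr<IDStorageFile> file;
    factory->OpenFile(L"asset.bin", IID_PPV_ARGS(&file));

    // One queue per priority/source type; giving it the D3D12 device is
    // what lets requests land directly in GPU memory.
    DSTORAGE_QUEUE_DESC queueDesc{};
    queueDesc.Capacity   = DSTORAGE_MAX_QUEUE_CAPACITY;
    queueDesc.Priority   = DSTORAGE_PRIORITY_NORMAL;
    queueDesc.SourceType = DSTORAGE_REQUEST_SOURCE_FILE;
    queueDesc.Device     = device;

    ComPtr<IDStorageQueue> queue;
    factory->CreateQueue(&queueDesc, IID_PPV_ARGS(&queue));

    // Describe one read: file -> GPU buffer, no Win32 ReadFile in sight.
    DSTORAGE_REQUEST request{};
    request.Options.SourceType          = DSTORAGE_REQUEST_SOURCE_FILE;
    request.Options.DestinationType     = DSTORAGE_REQUEST_DESTINATION_BUFFER;
    request.Source.File.Source          = file.Get();
    request.Source.File.Offset          = 0;
    request.Source.File.Size            = size;
    request.Destination.Buffer.Resource = gpuBuffer;
    request.Destination.Buffer.Offset   = 0;
    request.Destination.Buffer.Size     = size;
    request.UncompressedSize            = size;

    queue->EnqueueRequest(&request);
    queue->Submit();  // completion is normally tracked via EnqueueSignal + a fence
}

Which kind of proves the point: none of this is needed to make the drive itself fast; it exists to take the CPU and the Win32 I/O path out of the loop.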
 
12900K performance at just 65 W, if true, is really quite nice.

All the 7xxx marketing comparing against the 12xxxK, all the 13xxx marketing comparing against the 5xxx, and everything being run on older video cards, already looks strange.

From what I've seen, the 7950X runs like a 5950X at 65 W too. Both are impressive in their own way.
 
Nonsense. DirectStorage is a Windows solution to a Windows problem. PCIe 5 drives are entirely usable without DirectStorage.


https://learn.microsoft.com/en-us/g...verviews/directstorage/directstorage-overview
DirectStorage is Microsoft's solution to the industry-wide problem that a GPU or accelerator cannot talk directly to system memory or system storage without using the CPU as an intermediary. This is a problem for every OS, be it Linux, Unix, Windows, or macOS, which is why each has its own version of it.
IBM developed BaM in partnership with Nvidia; Nvidia has GPUDirect, Magnum IO, and RTX IO; Microsoft has DirectStorage; and Apple has Fast Resource Loading. All exist to bypass the huge bottleneck the CPU imposes on the GPU when it needs to access system resources.
PCIe NVMe drives are now so fast that the CPU can't keep up, creating significant bottlenecks when you are trying to have things communicate with each other. So if all you are doing is looking at storage speed benchmarks, then yes, PCIe 3, 4, and 5 look like huge steps up in performance, but if you are doing many other things, the CPU physically gets in the way and slows things down immensely.
The PCIe 3 and 4 specifications contain a number of optional commands that are usually implemented in the flagship controllers but not in those at the mid or low end of the lineup, so simply having a PCIe 3 or 4 NVMe drive isn't enough; you need to know the specifics of the controller to know if you actually support those technologies.
I know Microsoft calls that feature set BypassIO, and it makes some specific calls to the controller's firmware (on Windows 11 you can check whether a given path supports it with "fsutil bypassIo state <path>").
Example:
Phison updated the firmware for the E18 to include the necessary commands, but they didn't include it in the U17, U18, E21T, or E19T; the Phison E16 shipped with support right away.
Other controllers have a similarly confusing adoption of the IO commands needed to make GPU-to-storage communication possible, and since many storage vendors switch controllers even within the same product lineup, can most users honestly say they know what controller is on their NVMe drive, what firmware version it is running, and whether those IO commands were ever issued as an update by their storage manufacturer?
So yes, if all you are interested in is OS-to-storage speeds, then PCIe 3, 4, and 5 drives are completely usable without DirectStorage or any of the other storage technologies out there. But if you are working with any form of hardware-assisted acceleration, you need to know many specifics that aren't readily available to the average consumer, which makes using many of the excellent benefits of PCIe 3 and 4 NVMe drives an absolute nightmare to support on the consumer side, so many developers just don't.

Making those commands mandatory as part of the PCIe 5 specification goes a very long way to speeding adoption and making things better for everyone.
This same issue, but for GPU-to-RAM communication, is why AMD Smart Access Memory and Nvidia's Resizable BAR support were such big deals when they launched.
 
Yeah, the only systems coming with fast NVMe drives are the 11th Gen parts and the odd 10th Gen machine I see floating around. We are using M.2 SATA drives for almost all the machines. It's amazing that I once considered them fast! I do load all the systems here with 16 GB, and with all the encryption and company apps we have... it makes 8th Gen parts seem slow. That's inaccurate; everything seems slow in this environment.
I got lucky - IT issued me a 10700 with 16 GB and NVMe :). Of course, the network and all of the authentication & encryption are still slow :dead:.
I might be out to lunch myself on this, but I thought that to use the onboard video you had to plug your monitor into the HDMI/DVI connector on the motherboard, whereas to use the PCIe GPU you had to plug into the video card. Assuming I am correct here, I am not sure there would be a "soft" way to swap between onboard and PCIe graphics based on 3D workload. You would need to manually swap the cables.

That said, I agree that it would be nice if you could do this.
AMD could leverage this with their iGPU and RDNA cards. Will they?
 
I might be out to lunch myself on this, but I thought that to use the onboard video you had to plug your monitor into the HDMI/DVI connector on the motherboard, whereas to use the PCIe GPU you had to plug into the video card. Assuming I am correct here, I am not sure there would be a "soft" way to swap between onboard and PCIe graphics based on 3D workload. You would need to manually swap the cables.

That said, I agree that it would be nice if you could do this.

AMD could leverage this with their iGPU and RDNA cards. Will they?
They could do this; Nvidia and Intel worked out Optimus to do just that, and for the many years it has existed it has generally been a pain in the ass for everybody. So while AMD could do this, I would be very grateful if they didn't.
 
I might be out to lunch myself on this, but I thought that to use the onboard video you had to plug your monitor into the HDMI/DVI connector on the motherboard, whereas to use the PCIe GPU you had to plug into the video card. Assuming I am correct here, I am not sure there would be a "soft" way to swap between onboard and PCIe graphics based on 3D workload. You would need to manually swap the cables.

That said, I agree that it would be nice if you could do this.
I feel this is something laptops already manage (one HDMI or DP output that both the iGPU and the dGPU can drive depending on what is going on), so for a very simple rule of the sort "if it is DirectX or Vulkan and fullscreen, shift to the more powerful GPU," it could be possible (I would imagine).
 
They could do this; Nvidia and Intel worked out Optimus to do just that, and for the many years it has existed it has generally been a pain in the ass for everybody. So while AMD could do this, I would be very grateful if they didn't.
Honestly, I have never been a fan of the NV/Intel graphics switching. It has caused all manner of issues and plenty of hardware acceleration headaches. It would be nice if you could pick the video card that would be primary for everything or just disable the iGPU entirely, but... that would be asking for too much.
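
(Side note: Windows 10/11 does at least expose a per-app GPU preference under Settings > System > Display > Graphics, and under the hood that page appears to just write a string into the registry. A minimal sketch, assuming the HKCU UserGpuPreferences key that the Settings page uses; the exe path is whatever app you want to pin:)

// Sketch: pin an app to the high-performance GPU the same way the
// Windows "Graphics settings" page does. Assumes the
// HKCU\Software\Microsoft\DirectX\UserGpuPreferences mechanism.
#include <windows.h>

bool PreferHighPerformanceGpu(const wchar_t* exePath)
{
    HKEY key;
    if (RegCreateKeyExW(HKEY_CURRENT_USER,
                        L"Software\\Microsoft\\DirectX\\UserGpuPreferences",
                        0, nullptr, 0, KEY_SET_VALUE, nullptr,
                        &key, nullptr) != ERROR_SUCCESS)
        return false;

    // "GpuPreference=1;" = power saving (iGPU), "GpuPreference=2;" = high performance (dGPU)
    const wchar_t value[] = L"GpuPreference=2;";
    const LONG rc = RegSetValueExW(key, exePath, 0, REG_SZ,
                                   reinterpret_cast<const BYTE*>(value),
                                   sizeof(value));
    RegCloseKey(key);
    return rc == ERROR_SUCCESS;
}

It doesn't disable the iGPU outright, but it does let you force individual apps onto the card you actually want.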

Now, I did have an AMD laptop back in the day with a paired discrete GPU and a decent iGPU that would CrossFire. The laptop itself was sort of anemic, but the graphics capability was exceptional if the titles supported CrossFire, or you could force the 3D acceleration to use CrossFire (there was a way to designate the exe/launch file to default to the CrossFire profile).

But since multi-GPU acceleration has seemingly disappeared... I would prefer one damn GPU that works well instead of switching between multiple ones.

Question: isn't DX12 supposed to support leveraging multiple GPUs for extra acceleration? I was under the impression the cards didn't even have to be the same, and I seem to recall an Ashes of the Singularity benchmark some years ago showing it was possible... I guess no one really supports that part of DX12, and that is part of the problem.
 
Question: isn't DX12 supposed to support leveraging multiple GPUs for extra acceleration? I was under the impression the cards didn't even have to be the same, and I seem to recall an Ashes of the Singularity benchmark some years ago showing it was possible... I guess no one really supports that part of DX12, and that is part of the problem.
There's no magical switch. It requires a lot of work to get it working. Now with ray tracing and other stuff, it would probably be even harder, if not impossible, in practice.
 
There's no magical switch. It requires a lot of work to get it working. Now with ray tracing and other stuff, it would probably be even harder, if not impossible, in practice.

I still recall some kind of Space Magic in DX12 that enabled multiple GPUs to be combined together....

What games support multi-GPU DX12?

There were quite a few early DX12 games that did support it:
  • Deus Ex: Mankind Divided.
  • Rise of the Tomb Raider.
  • Sniper Elite 4.
  • Ashes of the Singularity (this one even supported mixed AMD + NV GPUs)
  • Strange Brigade.
  • Hitman
It exists (DX12 calls it explicit multi-adapter), but it's on the devs of the games to implement it.
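
To give a flavor of why it's on the devs: DX12 will happily enumerate every adapter in the box and let you create a device on each one, and everything past that point (splitting the frame, shuffling results between cards) is engine-side work. A minimal sketch of just that starting point, with nothing vendor-specific assumed:

// Sketch: the entry point of DX12 explicit multi-adapter -- enumerate
// every GPU (iGPU and dGPU alike) and create a D3D12 device on each.
// Link dxgi.lib and d3d12.lib; error handling trimmed for brevity.
#include <d3d12.h>
#include <dxgi1_6.h>
#include <wrl/client.h>
#include <vector>

using Microsoft::WRL::ComPtr;

std::vector<ComPtr<ID3D12Device>> CreateDevicesOnAllAdapters()
{
    ComPtr<IDXGIFactory6> factory;
    CreateDXGIFactory2(0, IID_PPV_ARGS(&factory));

    std::vector<ComPtr<ID3D12Device>> devices;
    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0;
         factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE)
            continue;  // skip the WARP software rasterizer

        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device))))
            devices.push_back(device);  // mixed AMD + NV cards both land here
    }
    return devices;
}

From there the game has to manage separate command queues, separate memory heaps, and cross-adapter copies by hand, which is exactly why almost nobody bothered after those early titles.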

There is tech out there that facilitates GPU combining transparently. I suspect it's expensive and limited to datacenter applications, but I recall seeing something come out about a year ago.

I wouldn't think that ray tracing would cause more issues, since it can run with or without hardware acceleration. I have seen GPUs push ray tracing, at significant penalties, without dedicated hardware.

I suspect I am derailing the thread. I will shut up now (y)
 
SOOOOOO what do we know so far?

1) Will be slightly faster than Ryzen 7000 in gaming
2) Will pull close at the high end in multithreaded work, will likely lead in the midrange
3) Will use slightly more power gaming than 7000, will melt your home's wiring in multicore
4) Last generation on the socket, so a dead-end platform
5) Will lose out to the 7800X3D in most games

what else?
 
No more Intel CPUs for me until they start releasing HEDT CPUs again... Running a 10900X. The only upgrade path is Threadripper, which is still not an actual upgrade, and way too expensive.

Bring on a 13900X! Or at least 32 PCIe lanes, anyway.
 
SOOOOOO what do we know so far?

1) Will be slightly faster than Ryzen 7000 in gaming
2) Will pull close at the high end in multithreaded work, will likely lead in the midrange
3) Will use slightly more power gaming than 7000, will melt your home's wiring in multicore
4) Last generation on the socket, so a dead-end platform
5) Will lose out to the 7800X3D in most games

what else?
IDK - the 13th Gen stuff looks quite promising as a nice efficiency uplift over the 12th Gen parts. Methinks this is going to give AMD the black eye that makes them punch out their X3D parts in Q1 next year (or sooner). Depending on how it goes, I might even be coerced to switch from Team Red to Blue. I did say I was sitting this generation out, but I'm starting to develop the upgrade itch again... Not a burning sensation, just a desire to try something new and non-AMD for my next build. I am eagerly awaiting whatever they punch out for this release. Intel will use my stupid-huge stockpile of DDR4 too, so more value for me.
 
IDK - the 13th Gen stuff looks quite promising as a nice efficiency uplift over the 12th Gen parts. Methinks this is going to give AMD the black eye that makes them punch out their X3D parts in Q1 next year (or sooner). Depending on how it goes, I might even be coerced to switch from Team Red to Blue. I did say I was sitting this generation out, but I'm starting to develop the upgrade itch again... Not a burning sensation, just a desire to try something new and non-AMD for my next build. I am eagerly awaiting whatever they punch out for this release. Intel will use my stupid-huge stockpile of DDR4 too, so more value for me.
Even the most diehard AMD fan should admit Intel did the right thing by supporting DDR4 for their big/small rollout.
 
Even the most diehard AMD fan should admit Intel did the right thing by supporting DDR4 for their big/small rollout.
Yeah, I think that was a smart move that didn't simply invalidate the shitload of RAM that I and many others have stockpiled over the years. The actual performance shift between DDR4 and 5 is like 3%...

AMD might have saved themselves a couple pennies on their memory controller, and likely a lot more with how the Fabric and such work on their chips... However, this just comes across as lazy, because when you're running 4 sticks of RAM in AMD systems it dials the DDR5 down to 3600 MHz. The Fabric is supposed to be independent of the other system buses this time, too. I might have jumped on early adoption for Team Red if they had supported DDR4, because I have a ton of it lying around...

Totally agree with your point.
 
No doubt they will sell more due to longer DDR4 support; it makes for an easy decision for those who have a lot of RAM and want the best DDR4 platform to retire it with.
 
I've not been paying attention; I'm getting ready to upgrade, so I'm starting to. If that's really it, what the heck is the point?
Eventually DDR5 will mature, and the differences will be evident. Until that happens, we're all just playing guinea pig with DDR5.
 
With people being forced to adopt DDR5 moving forward (for Zen4 and 14th Gen at least), it'll bring prices down. How much and how quickly are unknowns, but at least there's that.
 
I've not been paying attention; I'm getting ready to upgrade, so I'm starting to. If that's really it, what the heck is the point?
What DooKey said. Initially, we won't see much difference at all. The biggest gains may show up on the iGPU side of things when AMD punches out their next-gen APU parts. However, right now, the margin of performance uplift is negligible. The penalties for adopting DDR5 are pretty large with all the protections enabled for side-channel attacks. IIRC the boot times for these systems are long, and initial boot setups can take MINUTES as the boards calibrate everything. I saw a report that initial setups could take something like 5-10 minutes before systems even hit the point where they POST to BIOS. So people aren't supposed to be alarmed, thinking their systems are dead out of the gate.

When they get it all dialed in, in a year or two, it will all likely be transparent. But until then, if you have a bunch of DDR4 on hand, the Intel solution looks better to me every day.
 
With people being forced to adopt DDR5 moving forward (for Zen4 and 14th Gen at least), it'll bring prices down. How much and how quickly are unknowns, but at least there's that.
This is true enough. Right now the prices are a bit salty, even though they have come down significantly since DDR5 first came out.
 