Micron to Introduce GDDR7 Memory in 1H 2024

erek

This is cool 😎

“Cadence already has its GDDR7 verification solution, so adopters can ensure that their controllers and physical interfaces will be compliant with the GDDR7 specification eventually.”


Source: https://www.tomshardware.com/news/micron-to-introduce-gddr7-memory-in-1h-2024
 
Could be worth waiting for this to hit good yields and volume, with the early kinks fixed, before your next-generation upgrade, especially if you want to stay on a non-wide bus.
 
AMD loves this for handhelds and APUs. But seriously, if they wanted us to have the magical BANDWIDTH they would have just given us more channels already. Last time that happened, Intel got scaredy-pants over Phenom II and gave us the great X58 triple-channel RAM. How good were those days?
 
Wow! Nvidia will save so much money when they design $700 GPUs to use 3 of them on a 96-bit wide bus.
I don’t know about that. If I am understanding the papers for GDDR7's PAM3 signalling, it requires a 256-bit bus as the base to function. So I suspect a 512-bit bus for the high end, 256-bit for the mid-range, and GDDR6 for the low end.
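
For what it's worth, my reading of the Cadence material is that the PAM3 part is about bits per cycle rather than bus width: three voltage levels carry 3 bits across 2 unit intervals, where NRZ carries 1 bit per interval. A back-of-envelope sketch of that in Python (my own framing, not from the spec):

# Rough sketch: per-pin throughput gain from PAM3 vs NRZ signalling.
# NRZ (GDDR6): 2 voltage levels -> 1 bit per unit interval (UI).
# PAM3 (GDDR7): 3 voltage levels -> 3 bits per 2 UIs = 1.5 bits/UI.
nrz_bits_per_ui = 1.0
pam3_bits_per_ui = 3 / 2

gain = pam3_bits_per_ui / nrz_bits_per_ui - 1
print(f"PAM3 moves {gain:.0%} more data per cycle at the same signalling rate")
# -> 50% more per pin at the same clock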

So $500 for last-generation tech using a barely acceptable 192-bit bus with 12 GB of RAM, targeting 1080p.

Because Nvidia has learned nothing this generation, and there will be lots of it left on the shelves come 2025 when these cards hit the market.
 
Because Nvidia has learned nothing this generation
I doubt that. If they are left on the shelves and it is considered an issue, Nvidia will have learned everything there is to learn from it; Lovelace was designed during a very strange time for the GPU market. Consider how much they learned from how high AIBs and resellers were able to push GPU prices.

Look how much they got the world pumped for Ampere after the Turing letdown, and how hard Unreal's Nanite and co. will make games to run at 1080p; the latest-gen consoles have barely 50% of the power needed to run games at 1080p at high settings, and it looks like they will often run at around half that resolution, around 7xxp.

According to this:
https://wccftech.com/gddr7-memory-f...tion-stage-as-cadence-intros-first-solutions/

  • 128-bit @ 36 Gbps: 576 GB/s
  • 192-bit @ 36 Gbps: 864 GB/s
  • 256-bit @ 36 Gbps: 1152 GB/s
  • 320-bit @ 36 Gbps: 1440 GB/s
  • 384-bit @ 36 Gbps: 1728 GB/s
For comparison, the current Lovelace lineup:
  • 128-bit 4060 Ti: 288 GB/s
  • 192-bit 4070 Ti: 504 GB/s
  • 256-bit 4080: 716 GB/s
  • 384-bit 4090: ~1,000 GB/s

With GDDR7, if the tech is ready and works well, I can see Nvidia saving a lot of money and keeping direct compatibility with the laptop world by reusing Lovelace's small buses for another generation while gaining around 70% more bandwidth.

128-bit GDDR7 would be a bit faster than today's 192-bit, 192-bit faster than today's 256-bit, and so on.
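
Quick sanity check on those numbers, peak bandwidth being bus width / 8 × per-pin rate (the Lovelace per-pin rates are my assumption from public spec sheets):

# Peak theoretical bandwidth in GB/s: bus width (bits) / 8 * per-pin rate (Gbps)
def bandwidth_gbs(bus_bits, gbps_per_pin):
    return bus_bits / 8 * gbps_per_pin

# GDDR7 at 36 Gbps per pin
for bus in (128, 192, 256, 320, 384):
    print(f"{bus}-bit GDDR7: {bandwidth_gbs(bus, 36):.0f} GB/s")

# Lovelace cards (per-pin rates assumed from public specs)
cards = [
    ("4060 Ti", 128, 18.0),   # GDDR6
    ("4070 Ti", 192, 21.0),   # GDDR6X
    ("4080",    256, 22.4),   # GDDR6X
    ("4090",    384, 21.0),   # GDDR6X
]
for name, bus, rate in cards:
    print(f"{bus}-bit {name}: {bandwidth_gbs(bus, rate):.0f} GB/s")

# 128-bit GDDR7 (576 GB/s) already beats the 192-bit 4070 Ti (504 GB/s),
# and a same-width swap on the GDDR6X cards lands around +60-70%.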

If they do put out an unattractive, left-on-the-shelf product, it would be because they do not mind moving little silicon in that market, using all they learned.
 
I doubt that. If they are left on the shelves and it is considered an issue, Nvidia will have learned everything there is to learn from it…
Maybe, but what I think they learned was how not to step on their own toes. Sell high, put the proceeds into a slush fund so that if they need to lower prices on those existing parts, they can offset that from the slush fund that has been collecting interest for a year or two and write it down as a loss to offset taxes. Then their now-low-end parts aren't left competing with their new low-end parts.
That way they can issue huge rebates on, say, 4060s to run the stock down before launching a 5050 that would be in its general performance space.
 
I doubt that. If they are left on the shelves and it is considered an issue, Nvidia will have learned everything there is to learn from it…
They will just choke it with 64-bit, 32-bit, and 16-bit buses at the mid to low end to keep up their artificial segmentation.
The baseline could be capable of 10 TB/s and they would choke it down to 12.8 GB/s if they could get away with it.

We all know who 'they' are.
 
if they could get away with it.
Then it would be because they learned from Lovelace that they can. The "if it is considered an issue" part being key.

I feel like they will not be able to get away with a second generation like this in a row, 5060-wise: hard-to-run Unreal Engine 5 games will be common, and the peak of the AI training craze will probably have passed by the time the 5060 launches.

Maybe cache will be good enough and large enough for this.
 