Will be interesting to see how long this RAM insanity will last. If it doesn't calm down before Zen 6 releases, people like me on older platforms might just have to skip Zen 6 entirely and wait for the AM6 platform.
Can they double the memory channels without switching sockets? If not, I feel like the PC is going to fall even further behind Apple's chips. Having RAM on-package sucks for repairability, but 500 GB/s of main-RAM bandwidth is insane.
They stumbled in the right direction with Strix Halo, but I have a feeling they won't recognize the win and follow up on it.
The socket I/O locks in the number of memory channels. Some pins could be repurposed, but that's pretty much a new socket anyway.
They could in theory do on-package DRAM as a faster first level of memory, but I doubt we'll see that anytime soon on desktop, and it probably wouldn't fit under the heat spreader.
Higher DRAM prices might mean less demand from new system builders, which in turn means depressed CPU prices, so it might be more tempting to upgrade your existing AM5 CPU to Zen 6.
But they are still gonna fab the Zen 6 chips. So for people who already have AM5 motherboards populated with RAM but are rocking a Zen 4 CPU, this could be a good time to upgrade that CPU within the existing setup. You passing on this generation just means less competition for those CPUs, which should make them even cheaper.
My understanding is that they're using the same process (and fab capacity) for CPUs and GPUs, so they may just be able to reallocate that capacity to datacenter GPUs. Sure, they're behind, but some of the AI companies have already made deals with them because they just want compute, any compute. So I think the effect might be less than some hope for.
I am a hypocrite, but there is really not much need to upgrade CPUs anymore. Even a ten-year-old chip seems completely adequate for day-to-day use. I played with an N100 recently and those things are incredibly capable.
(Ignore my AM5 workstation with 192GB RAM in the corner)
I rocked my Haswell i5 until last year, when I built a brand new machine around the 9800X3D. Along the way I upgraded it from 8 GB of RAM to 32 GB, got a first-generation PCIe 3.0 NVMe drive, and went through successive hand-me-down GPUs, starting from a GeForce GTX 770 up to the RTX 2070 it has now.
In fact, my wife is still rocking that machine, although her gaming needs are much less equipment-intensive than mine. After the small refurb I gave it (new case, new air cooler, new PSU), I expect it to last another 5 years for her.
I rode out an i7-4790K until this year... replaced solely because of Windows 10 support ending. But it's a solid chip.
My new one is a 9700X. I didn't feel the need to spring for a higher power budget just for a marginal gaming performance bump. But I suppose that also means it's much more practical for me to jump to a newer CPU later.
Heh. It was a luxury purchase at the start of the year, back when I was only worried about tariffs. I wanted to lock in a new build good for years. Every once in a while I have a machine learning project that needs over 100 GB, so it's nice not to have to overthink things. Honestly, I'm kicking myself that I did not go all the way to 256 GB.
You say that, but DDR6 will double memory bandwidth over DDR5. That means modern systems will go beyond 200 GB/s of memory bandwidth for the CPU alone.
Considering PC desktops: DDR4 tops out at 3200 MT/s per JEDEC. DDR5 has been available on AMD for about three years and officially runs at 5600. The DDR6 specification is almost finished. It looks like DDR5 will only reach its doubling of performance right before the first DDR6 DIMMs appear, so I'd expect DDR6's own doubling to land just as late into that standard's life.
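Rough math behind the doubling talk, assuming the usual 64-bit channel per DIMM and a dual-channel desktop; the DDR6 transfer rate is my own guess, not a final JEDEC figure:

    # Peak theoretical bandwidth: MT/s * 8 bytes per transfer (64-bit channel) * 2 channels
    def dual_channel_gb_s(mt_per_s):
        return mt_per_s * 8 * 2 / 1000

    print(dual_channel_gb_s(3200))    # DDR4-3200 (JEDEC max):  51.2 GB/s
    print(dual_channel_gb_s(5600))    # DDR5-5600:              89.6 GB/s
    print(dual_channel_gb_s(6400))    # DDR5-6400:             102.4 GB/s
    print(dual_channel_gb_s(12800))   # hypothetical DDR6:     204.8 GB/s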
I'm sure there are a plethora of technical reasons it's impractical, but my dream is a big, unified L3 cache across their CCD chiplets. Maybe 256 MB in size for the x950X3D chips.
There are challenges with really big monolithic caches. IBM does something sort of like your idea in their Power and Telum chips, with different approaches. Power has a non-uniform cache within each die, Telum has a way to stitch together cache even across sockets (!).
The PCI-Express bus is actually rather slow: only ~63 GB/s, even with PCIe 5.0 x16!
PCIe is simply not a bottleneck for gaming. All the textures and models are loaded into the GPU once, when the game loads, then re-used from VRAM for every frame. Otherwise, a scene with a lowly 2 GB of assets would cap out at only ~30 fps.
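Both numbers check out as back-of-the-envelope arithmetic (my own math, not anything from the parent):

    # PCIe 5.0: 32 GT/s per lane with 128b/130b encoding -> ~3.9 GB/s per lane, per direction
    lane_gb_s = 32 * 128 / 130 / 8
    x16_gb_s = 16 * lane_gb_s          # ~63 GB/s for a full x16 slot

    # If a 2 GB scene had to be re-streamed over PCIe every single frame:
    fps_cap = x16_gb_s / 2             # ~31 fps
    print(round(x16_gb_s), round(fps_cap))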
Which is funny to think about historically. I remember when AGP first came out, and it was advertised as making it so GPUs wouldn't need tons of memory, only enough for the frame buffers, and that they would stream texture data across AGP. Well, the demands for bandwidth couldn't keep up. And now, even if the port itself was fast enough, the system RAM wouldn't be. DDR5-6400 running in dual-channel mode is only ~102 GB/s. On the flip side the RTX 5050, a current-gen budget card, has over 3x that at 320 GB/s, and on the top end, the RTX 5090 is 1.8 TB/s.
Faster M.2 drives are great, but you know what would be even greater? More M.2 drives.
I wish it were possible to put several M.2 drives in a system and RAID them all up, like you can with SATA drives on any above-average motherboard. Even a single lane of PCIe 5.0 would be more than enough for each of those drives, because each drive wouldn't need to work as hard (rough numbers below). Less overheating, more redundancy, and cheaper than buying a small number of super-fast, high-capacity drives. Alas, most mobos only seem to hand out lanes in multiples of 4.
Maybe one day we'll have so many PCIe lanes that we can hand them out like candy to a dozen storage devices and have some left to power a decent GPU. Still, it feels wasteful.
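For the "one lane each would be plenty" point, the arithmetic is roughly this (the sustained-throughput figure is an assumption for illustration, not a spec):

    # One PCIe 5.0 lane gives ~3.9 GB/s each way, more than a typical consumer SSD
    # sustains outside of short bursts, so x1 per drive leaves headroom.
    lane_gb_s = 32 * 128 / 130 / 8     # PCIe 5.0, per lane, per direction
    drives = 4
    sustained_per_drive = 2.0          # GB/s, assumed typical sustained write rate
    array_gb_s = drives * min(lane_gb_s, sustained_per_drive)
    print(f"{array_gb_s:.1f} GB/s from {drives} drives on {drives} lanes total")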
The M.2 form factor isn't that conducive to having lots of them, since they're on the board and need large connectors and physical standoffs. They're also a pain in the ass to install because they lie flat, close to the board, so you're likely to have to remove a bunch of shit to get to them. This is why I've never cared about and mostly hated every "tool-less" M.2 latching mechanism cooked up by the motherboard manufacturers: I already have a screwdriver because I needed to remove my GPU and my ethernet card and the stupid motherboard "armor" to even get at the damn slots.
SATA was a cabling nightmare, sure, but cables let you relocate bulk somewhere else in the case, so you can bunch all the connectors up on the board.
Frankly, given that most advertised M.2 speeds are not sustained or even hit most of the time, I could deal with some slower speeds due to cable length if it meant I could mount my SSDs anywhere but underneath my triple slot GPU.
And even when AMD does move their mainstream desktop processors to a new socket, there's very little reason to expect them to be trying to accommodate multi-GPU setups. SLI and Crossfire are dead, multi-GPU gaming isn't coming back for the foreseeable future, so multi-GPU is more or less a purely workstation/server feature at this point. They're not going to increase the cost of their mainstream platform for the sole purpose of cannibalizing Threadripper sales.
https://overclock3d.net/news/cpu_mainboard/amd-extends-am5-l...
I'd love to build a new desktop soon, but I couldn't justify the cost and am instead building out a used desktop that's still on DDR4/LGA1151.
I just checked how much the 64 GB of DDR4 in my desktop would cost now... it starts at 2.5 times what I paid in 2022.
Sorry AMD, I would maybe like a new desktop but not now.
I'm a gamer, often playing games that need a BEEFY CPU, like MS Flight Simulator. My upgrade from an i9-9900K to a Ryzen 9800X3D was noticeable.
Only if they overestimate demand and overproduce CPUs. Otherwise it will lead to higher prices because there's less economy of scale.
Something like a 5900X on 2 nm or 4 nm.
https://chipsandcheese.com/p/telum-ii-at-hot-chips-2024-main...
https://www.eecg.utoronto.ca/~moshovos/ACA07/projectsuggesti...
(if you do ML things you might recognize Doug Burger's name on the authors line of the second one)
There are some exceptions, but I haven't seen one with, for example, four x16 slots that support PCIe 5.0 x4 lanes with bifurcation.
E.g. https://www.ebay.co.uk/itm/126656188922
Most motherboards don't go beyond x8/x8 across two physical x16 slots, because there is little actual use for it and it costs quite a bit of money.
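A quick lane-budget sketch shows why. A mainstream AM5 CPU only exposes on the order of two dozen usable PCIe 5.0 lanes plus a chipset link, so the split below (illustrative, not any specific board) leaves nothing for extra full-bandwidth slots without adding expensive switches:

    # Illustrative AM5-style lane budget (counts approximate)
    cpu_lanes = 24
    layout = {"GPU slot (x16)": 16, "M.2 slot #1": 4, "M.2 slot #2": 4}
    print(sum(layout.values()), "of", cpu_lanes, "lanes used")   # 24 of 24
    # Splitting the GPU slot into x8/x8, or bifurcating x16 into 4x x4,
    # just re-slices the same budget; it doesn't create new lanes.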
Look at server mainboards: many of them have PCIe 5.0 connectors for cabled PCIe SSDs, and the connectors look similar to SATA ones.
When did the GHz race start again?
Leaks = the author just made something up, but now it ranks extra highly when someone searches for "[upcoming thing] leaks"
Now, it's either a fancy term for "announcement", or people use it synonymously with "rumor".