| ▲ | jonhohle 5 days ago |
| Why did something like this take so long to exist? I’ve always wanted swap or tmpfs available on old RAM I have lying around. |
|
| ▲ | gertrunde 5 days ago | parent | next [-] |
| Such things have existed for quite a long time... For example: https://en.wikipedia.org/wiki/I-RAM (not a unique thing, merely the first one I found). And then there are the more exotic options, like the stuff these folks used to make: https://en.wikipedia.org/wiki/Texas_Memory_Systems - IIRC, Eve Online used the RamSan product line (apparently starting in 2005: https://www.eveonline.com/news/view/a-history-of-eve-databas... ) |
| |
| ▲ | jonbiggums22 3 days ago | parent [-] | | Are these the same thing, though? I have a fondness for novel hardware, and the idea of using old RAM standards to create a fast RAM disk is interesting to me. But the CXL card seems to require motherboard/platform support, whereas these older RAM drives just showed up as a disk. |
|
|
| ▲ | numpad0 4 days ago | parent | prev | next [-] |
| Yeah. I can't count how many times I've seen descriptions of northbridge links that read as if the author knows it's PCIe under the hood. I've also seen someone explain that it can't be done on most CPUs unless all the caches are turned off, because the (IO?)MMU doesn't allow caching of MMIO addresses outside the DRAM range. The technical explanations for why you simply can't have extra DRAM controllers hanging off PCIe increasingly sound like market-segmentation reasons rather than purely technical ones. x86 is a memory-mapped I/O platform; why can't we just have RAM sticks at RAM addresses? The reverse of this works, by the way: NVMe drives can use the Host Memory Buffer to cache reads and writes in system RAM, the feature that jammed and caught fire in the recently rumored bad ntfs.sys incident in Windows 11. |
|
| ▲ | kvemkon 5 days ago | parent | prev | next [-] |
| I'd rather ask a different question: why did we have single-core (or already dual-core) CPUs with dual-channel memory controllers, while now we have 16-core CPUs still with only dual-channel RAM? |
| |
| ▲ | Dylan16807 5 days ago | parent | next [-] | | DDR1 and DDR2 ran at roughly 1/20th and 1/10th the transfer rate of DDR5. The CPU cores we have now are faster, but not that much faster, and with the typical user having 8 or fewer performance cores, 128 bits of memory width has stayed a good balance. If you need a lot of memory bandwidth, workstation boards offer DDR5 interfaces 256-512 bits wide. Apple Silicon covers that range on Pro and Max, and Ultra is 1024 bits. (I'm using bits instead of channels because channels/subchannels can be 16, 32, or 64 bits wide.) | |
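| A rough back-of-the-envelope of how width and transfer rate turn into peak bandwidth (a minimal Python sketch; the transfer rates are assumed typical values, not numbers from this thread):
|
|     # Peak DRAM bandwidth ~= bus width (bytes) * transfer rate (MT/s).
|     # The transfer rates here are assumed typical parts, not from the comment above.
|     def peak_gb_per_s(bus_bits, mt_per_s):
|         return bus_bits / 8 * mt_per_s / 1000
|
|     configs = [
|         ("DDR2-800,  128-bit (old dual channel)", 128, 800),
|         ("DDR5-6400, 128-bit (dual channel)",     128, 6400),
|         ("DDR5-6400, 512-bit (workstation)",      512, 6400),
|     ]
|     for name, bits, rate in configs:
|         print(f"{name}: {peak_gb_per_s(bits, rate):6.1f} GB/s")  # 12.8 / 102.4 / 409.6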
| ▲ | justincormack 4 days ago | parent | prev | next [-] | | AMD EPYC has 12 channels, 24 on a dual socket. AMD sells platforms with 2 (consumer), 4 (Threadripper), 6 (dense edge), 8 (Threadripper Pro), and 12 memory channels (high-end EPYC). Next-generation EPYC will have 16 channels. Roughly, if you look at the AMD options, they give you 2 memory channels per 16 cores. CPUs tend to be somewhat limited in how much bandwidth they can use; e.g. on Apple Silicon you can't actually consume all the memory bandwidth of the wider options from the CPUs alone, it's mainly useful for the GPU. DDR5 was double the speed of DDR4, and transfer rates have been ramping up too, so there have been improvements there. | |
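| For a sense of scale, a minimal sketch of how peak bandwidth scales with channel count, assuming 64-bit channels and an assumed, server-typical DDR5-4800 rate (not a figure from the thread):
|
|     # Peak bandwidth vs. channel count, 64-bit channels at DDR5-4800
|     # (the 4800 MT/s rate is an assumption, not from the comment above).
|     CHANNEL_BITS = 64
|     MT_PER_S = 4800
|
|     for channels in (2, 4, 6, 8, 12, 16, 24):
|         gb_per_s = channels * CHANNEL_BITS / 8 * MT_PER_S / 1000
|         print(f"{channels:2d} channels: {gb_per_s:6.1f} GB/s peak")  # 2ch ~76.8, 12ch ~460.8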
| ▲ | bobmcnamara 5 days ago | parent | prev | next [-] | | Intel and AMD I'd reckon. Apple went wide with their busses. | | |
| ▲ | to11mtm 5 days ago | parent [-] | | Well, each channel needs a lot of pins. I don't think all 288/262 DIMM pins need to go to the CPU, but a large number of them do, I'd wager; the old LGA 1366 (tri-channel) and LGA 1151 (dual-channel) are probably as close as we can get to a simple reference point [0]. Apple, FBOW, based on a quick and sloppy count of a reballing jig [1], has something on the order of 2500-2700 balls on an M2 CPU. I think AMD's FP11 'socket' (it's really just a standard ball grid array) pinout is something on the order of 2000-2100 balls, and that gets you four 64-bit DDR channels (I think Apple works a bit differently and uses 16-bit channels, thus the higher 'channel count' for an M2). Which is a roundabout way of saying: AMD and Intel probably can match the bandwidth, but doing so would likely require moving to soldered CPUs, which would be a huge paradigm shift for all the existing board makers/etc. [0] - They do have other tradeoffs; namely, 1151 has built-in PCIe, but on the other hand the link to the PCH is AFAIR a good bit thinner than the QPI link on 1366. [1] - https://www.masterliuonline.com/products/a2179-a1932-cpu-reb... . I counted ~55 rows along the top and ~48 rows on the side... | | |
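| For what it's worth, the sloppy count works out like this (a quick sketch; the row counts are the comment's own estimates, not datasheet figures):
|
|     # Back-of-the-envelope check of the reballing-jig ball count.
|     rows_along_top = 55   # estimate from the comment, not a datasheet value
|     rows_along_side = 48  # likewise an estimate
|     print(f"~{rows_along_top * rows_along_side} balls")  # ~2640, inside the 2500-2700 guess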
| ▲ | bobmcnamara 4 days ago | parent [-] | | Completely agree, and this is a bit of a ramble... I think part of it might be that Apple recognized that integrated GPUs require a lot of bulk memory bandwidth. I noticed this with their tablet-derivative chips having memory bandwidth that tended to scale with screen size, while Samsung and Qualcomm didn't bother for ages, and it sucked doing high-speed vision systems on their chips because of it. For years Intel had been slowly beefing up the L2/L3/L4. The M1 Max is somewhere between an Nvidia 1080 and 1080 Ti in bulk bandwidth. The lowest-end M chips aren't competitive, but nearly everything above that overlaps even the current-gen Nvidia 4050+ offerings. | | |
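| A rough comparison backing that up, using approximate published specs recalled from memory (treat the widths and rates as assumptions):
|
|     # Approximate peak memory bandwidth = bus width (bytes) * transfer rate (GT/s).
|     # Figures are approximate published specs from memory, not measurements.
|     parts = [
|         ("GTX 1080    (256-bit GDDR5X @ 10 GT/s)", 256, 10.0),
|         ("M1 Max      (512-bit LPDDR5 @ 6.4 GT/s)", 512, 6.4),
|         ("GTX 1080 Ti (352-bit GDDR5X @ 11 GT/s)", 352, 11.0),
|     ]
|     for name, bits, gt_per_s in parts:
|         print(f"{name}: ~{bits / 8 * gt_per_s:.0f} GB/s")  # ~320, ~410, ~484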
| ▲ | to11mtm 4 days ago | parent [-] | | Good ramble though :) Yeah, Apple definitely realized they should do something, and as much as I don't care for their ecosystem, I think they were very smart in how they handled the need for memory bandwidth. E.g. having more 16-bit channels instead of fewer 64-bit channels probably allows better power management, as far as being able to relocate data on 'sleep'/'wake' and thus leave more of the RAM powered off. That, plus the good UMA implementation, has left the rest of the industry 'not playing catchup': Intel failing to capitalize on the opportunity of a 'VRAM heavy' low-end card to gain market share; AMD failing to bite the bullet and meaningfully try to fight Nvidia on memory/bandwidth margin; Nvidia just raking that margin in; Qualcomm, which by this point you'd think would do an 'AI accelerator' reference platform just to try; and whatever efforts are happening in China, where I'm guessing they are too busy filling internal needs to bother boasting and tipping their hat; better to let outside companies continue to overspend on the current paradigm. |
|
|
| |
| ▲ | christkv 5 days ago | parent | prev | next [-] | | Check out the Strix Halo 395+: it’s got 8 memory channels, up to 128 GB, and 16 cores. | |
| ▲ | Dylan16807 5 days ago | parent [-] | | That's a true but misleading number. It's the equivalent of "quad channel" in normal terms. |
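| A quick sketch of that conversion, assuming the 8 channels are 32-bit LPDDR5X channels at an assumed 8000 MT/s (conventional DDR channels are 64-bit):
|
|     # "8 channels" of 32-bit LPDDR5X vs. conventional 64-bit DDR channels.
|     channels, channel_bits, mt_per_s = 8, 32, 8000  # assumed platform figures
|     total_bits = channels * channel_bits            # 256-bit bus
|     ddr_equivalent = total_bits // 64               # i.e. "quad channel"
|     peak_gb_per_s = total_bits / 8 * mt_per_s / 1000
|     print(f"{total_bits}-bit = {ddr_equivalent}x 64-bit channels, ~{peak_gb_per_s:.0f} GB/s peak")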
| |
| ▲ | kmeisthax 4 days ago | parent | prev [-] | | [dead] |
|
|
| ▲ | aidenn0 5 days ago | parent | prev | next [-] |
| (S)ATA- or PCI-to-DRAM adapters were widely available until NAND became cheaper per bit than DRAM, at which point the use for them kind of went away. IIRC Intel even made a DRAM card that was drum-memory compatible. |
|
| ▲ | 5 days ago | parent | prev | next [-] |
| [deleted] |
|
| ▲ | Dylan16807 5 days ago | parent | prev [-] |
| RAM controllers are expensive enough that it's rarely worth pairing them with old RAM lying around. |