basilikum 13 hours ago
KolibriOS would fit in there, even with its data in memory. You cannot load it into the cache directly, but when the cache capacity is larger than all the data you touch, there should be no cache eviction, and the OS and all its data should end up in the cache more or less entirely. In other words it should be really, really fast, which KolibriOS already is to begin with.
vlovich123 13 hours ago | parent
Unless you lay everything out contiguously in memory, you'll still get cache eviction due to associativity, depending on the CPU's eviction policy. But certainly DOS or even early Windows 95 could conceivably run entirely out of the cache.
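The associativity point can be made concrete with a toy model. In a set-associative cache, an address's line maps to exactly one set, and each set holds only a few ways; so a handful of lines spaced at the wrong stride can evict each other even though the total data is far smaller than the cache. A minimal sketch, assuming a hypothetical 32 KiB, 8-way cache with 64-byte lines (parameters chosen for illustration, not any specific CPU):

```python
# Toy set-indexed cache placement model.
# Hypothetical parameters: 32 KiB capacity, 64-byte lines, 8-way
# associative, which gives 32768 / (64 * 8) = 64 sets.
LINE_SIZE = 64
WAYS = 8
NUM_SETS = (32 * 1024) // (LINE_SIZE * WAYS)  # 64 sets

def cache_set(addr: int) -> int:
    """Map an address to its cache set by simple modulo indexing."""
    return (addr // LINE_SIZE) % NUM_SETS

# Nine addresses spaced exactly one "set stride" apart
# (NUM_SETS * LINE_SIZE = 4096 bytes) all map to the same set.
stride = NUM_SETS * LINE_SIZE
addrs = [i * stride for i in range(WAYS + 1)]
sets_hit = {cache_set(a) for a in addrs}

# Nine lines need nine ways, but the set only has eight: conflict
# misses occur even though the data is a tiny fraction of capacity.
print(sets_hit)                     # all addresses land in set 0
print(len(addrs), "lines compete for", WAYS, "ways")
print(len(addrs) * LINE_SIZE, "bytes total, vs 32768 bytes of cache")
```

This is why "data smaller than cache" does not by itself guarantee "no eviction": layout matters, not just size.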
hrmtst93837 7 hours ago | parent
That assumes KolibriOS, or any major component, is pinned to one core and one cache slice instead of getting dragged between CCDs or losing memory affinity. Throw actual users, IO, and interrupts at it and you get traffic across chiplets, or at least across L3 groups, so the nice "everything lives in cache" story falls apart fast. Nice demo, bad model. The funny part is that an entire OS can fit in cache now; the hard part is making the rest of the system act like that matters.