zoogeny 6 days ago

I think trial-and-error hand-waving isn't all that far from experimentation.

As an aside, I was working in the games industry when multi-core was brand new. Maybe the Xbox 360 and PS3? I'm hazy on the exact consoles, but there was one generation where the major platforms all went multi-core.

No one knew how to best use the multi-core systems for gaming. I attended numerous tech talks by teams that had tried different approaches and gave similar advice: "maybe do this and maybe see x% improvement?" There was a lot of experimentation. It took a few years before things settled and best practices became even somewhat standardized.

Some people found that era frustrating and didn't like to work in that way. Others loved the fact it was a wide open field of study where they could discover things.

jorvi 6 days ago | parent | next [-]

Yes, it was the generation of the X360 and PS3. The X360 was 3-core, and the PS3 was 1+7-core (one general-purpose PPE plus seven SPE coprocessors, sort of a big.LITTLE setup).

Although it took many, many more years until games started to actually use multi-core properly. With rendering on a 16.67ms / 8.33ms budget and tied to world state, it was just really hard not to tie everything into each other.
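
As a minimal sketch (not anything from the thread, and updateWorld/renderFrame are placeholder names): one pattern that eventually emerged is pipelining, where one thread simulates frame N+1 into a back buffer while another renders an immutable snapshot of frame N, which is exactly the decoupling of rendering from world state that was so hard at first.

    #include <thread>
    #include <utility>

    struct WorldState { /* positions, animation poses, ... */ };

    // Placeholders standing in for real engine code.
    void updateWorld(WorldState&) { /* advance game logic one tick */ }
    void renderFrame(const WorldState&) { /* submit draw calls */ }

    int main() {
        WorldState front{}, back{};  // double-buffered snapshots of the world
        for (;;) {
            // Simulate frame N+1 into `back` while the renderer draws the
            // immutable frame-N snapshot in `front`; neither blocks the other.
            std::thread sim([&] { updateWorld(back); });
            renderFrame(front);
            sim.join();              // both legs must land within the frame budget
            std::swap(front, back);  // publish the freshly simulated state
        }
    }

(A real engine would keep persistent worker threads or a job system rather than spawning a thread per frame; this just shows the buffering idea.)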

Even today you'll usually only see 2-4 cores actually getting significant load.

Nullabillity 5 days ago | parent | prev | next [-]

Performance optimization is different, because there's still some kind of baseline truth. Everyone knows what an FPS is, and +5% FPS is +5% FPS. Even the tricky cases have some kind of boundary (+5% FPS on this hardware but -10% on this other hardware, +2% on scenes meeting these conditions but -3% otherwise, etc.).
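
To illustrate why the metric is unambiguous, a tiny sketch (renderOneFrame is a hypothetical stand-in for the workload under test): time a fixed number of frames and divide, and any two measurements are directly comparable.

    #include <chrono>
    #include <cstdio>

    // Stand-in for the workload being optimized.
    void renderOneFrame() { /* ... */ }

    int main() {
        using clock = std::chrono::steady_clock;
        constexpr int kFrames = 1000;
        auto start = clock::now();
        for (int i = 0; i < kFrames; ++i) {
            renderOneFrame();
        }
        std::chrono::duration<double> elapsed = clock::now() - start;
        std::printf("avg FPS: %.1f\n", kFrames / elapsed.count());
    }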

Meanwhile, nobody can agree on what a "good" LLM is, let alone how to measure it.

hackernewds 6 days ago | parent | prev [-]

There probably was still a structured way to test this through cross-hatching, but yeah, blind guessing might take longer and arrive at the same solution.
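
A rough sketch of what that structured version might look like, systematically sweeping a grid of thread-split configurations instead of guessing (benchmark is a hypothetical stub, not anything from the thread):

    #include <cstdio>

    // Hypothetical stub: run the game for a while under a given thread split
    // and return the average FPS observed.
    double benchmark(int simThreads, int renderThreads) {
        (void)simThreads; (void)renderThreads;
        return 0.0;  // a real harness would measure this
    }

    int main() {
        double bestFps = -1.0;
        int bestSim = 0, bestRender = 0;
        // Cover the configuration grid systematically instead of guessing.
        for (int sim = 1; sim <= 4; ++sim) {
            for (int render = 1; render <= 4; ++render) {
                double fps = benchmark(sim, render);
                std::printf("sim=%d render=%d -> %.1f FPS\n", sim, render, fps);
                if (fps > bestFps) { bestFps = fps; bestSim = sim; bestRender = render; }
            }
        }
        std::printf("best split: sim=%d render=%d (%.1f FPS)\n", bestSim, bestRender, bestFps);
    }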