ssl-3 | 9 hours ago
I learned a lot about underclocking, undervolting, and computational power efficiency during my brief time in the ethereum mining[1] shenanigans. The best ROI came from the most-numerous stable computations at the lowest energy expense. I'd tweak individual GPUs' various clocks and voltages to optimize this. I'd even go so far as to tweak the fan speed ramps on the cards themselves (those fans don't power themselves! There are whole Watts to save there!). I worked to optimize the efficiency of even the power from the wall. (A rough sketch of the power-limit part of that tuning is at the end of this comment.)

But that was a system that ran, balls-out, 24/7/365. Or at least it ran that way until it got warmer outside, and warmer inside, and I started to think about how to balance mining eth in the basement against cooling the living space of the house to optimize returns. (And I never quite got that sorted before they pulled the rug on mining.)

That story is about power efficiency, but: power efficiency isn't always the most sensible goal. Sometimes maximum performance is a better goal. We aren't always mining Ethereum.

Jeff's (quite lovely) video and associated article are a story about just one man using a stack of consumer-oriented-ish hardware in amusing -- to him -- ways, with local LLM bots. That stack of gear is a personal computer. (A mighty expensive one on any inflation-adjusted timeline, but what was constructed was definitely used as a personal computer.) Like most of our personal computers (almost certainly including the one you're reading this on), it doesn't need to be optimized for a 24/7, 100% workload. It spends a huge portion of its time waiting for the next human input. And unlike mining Eth in the winter in Ohio: its compute cycles are bursty, not constant, and are ultimately limited by the input of one human.

So sure: I, like Jeff, would also like to see how it works when running with the balls[2] further out. For as long as he gets to keep it, the whole rig is going to spend most of its time either idling or off anyway. So it might as well get some work done when a human is in front of it, even if each token costs more in that configuration than it does OOTB. It can theoretically even clock up when being actively used (and suck all the power), and clock back down when idle (and resume being all sleepy and stuff). That's a well-established concept that [e.g.] Intel has shipped under names like SpeedStep (clocking down at idle) and Turbo Boost (clocking up under load) -- mechanisms built for exactly this kind of bursty workload, and they've worked that way for a very long time now.

[1]: Y'all can hate me for being a small part of that problem. It's allowed.
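For anyone curious what that knob-twiddling looks like in code, here's a rough sketch of just the power-limit part, using NVIDIA's NVML bindings for Python (nvidia-ml-py / pynvml). This is only an illustration: the 75% figure is an arbitrary starting point rather than a recommendation, it needs root to actually apply, and the clock/voltage offsets and fan curves I mentioned are separate exercises entirely. The real work was re-measuring stable hashrate per watt after every change, which isn't shown here.

    # Sketch: cap each GPU's board power at a fraction of its default limit,
    # then go measure whether hashrate-per-watt actually improved.
    import pynvml

    TARGET_FRACTION = 0.75  # arbitrary illustrative starting point

    pynvml.nvmlInit()
    try:
        for i in range(pynvml.nvmlDeviceGetCount()):
            handle = pynvml.nvmlDeviceGetHandleByIndex(i)
            raw_name = pynvml.nvmlDeviceGetName(handle)
            name = raw_name.decode() if isinstance(raw_name, bytes) else raw_name

            draw_w = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000.0  # mW -> W
            default_mw = pynvml.nvmlDeviceGetPowerManagementDefaultLimit(handle)
            min_mw, max_mw = pynvml.nvmlDeviceGetPowerManagementLimitConstraints(handle)

            # Clamp the desired limit to what the board actually allows.
            target_mw = int(default_mw * TARGET_FRACTION)
            target_mw = max(min_mw, min(max_mw, target_mw))

            print(f"GPU {i} ({name}): drawing {draw_w:.0f} W now, "
                  f"default limit {default_mw / 1000:.0f} W, setting {target_mw / 1000:.0f} W")

            # Requires root; the limit holds until it's changed again or the driver resets.
            pynvml.nvmlDeviceSetPowerManagementLimit(handle, target_mw)
    finally:
        pynvml.nvmlShutdown()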
jermaustin1 | 2 hours ago
I did crypto mining as an alternative to heating. In my centrally cooled apartment, my office was the den, which had the air return. So my mining rig ran RIGHT in front of it; the return sucked up the heat and pushed it all over the house. Then summer came, and in Texas the AC can barely keep up to begin with -- so my GPUs became part of a render farm instead.