| ▲ | bunderbunder 4 days ago |
| The cost effective way to do it is in the cloud. Because there's a very good chance you'll learn everything you intended to learn and then get bored with it long before your cloud compute bill reaches the price of a desktop with even fairly modest specs for this purpose. |
|
| ▲ | dukeyukey 3 days ago | parent | next [-] |
| It's good for the soul to have your cluster running in your home somewhere. |
| |
| ▲ | NordSteve 3 days ago | parent | next [-] | | Bad for your power bill though. | | |
| ▲ | platybubsy 3 days ago | parent | next [-] | | I'm sure 5 rpis will devastate the power grid | |
| ▲ | duxup 3 days ago | parent | prev | next [-] | | I need to heat my house too so maybe it helps a little there. | |
| ▲ | 11101010001100 3 days ago | parent | prev | next [-] | | You still pay for power for the cloud. | |
| ▲ | trenchpilgrim 3 days ago | parent | prev | next [-] | | Still less than renting the same amount of compute. Somewhere between several months and a couple years you pull ahead on costs. Unless you only run your lab a few hours a day. | |
| ▲ | Damogran6 3 days ago | parent | prev | next [-] | | I got past that back when I was paying for ISDN and had 5 Surplus Desktop PCs...write it off as 'Professional development' | |
| ▲ | throwaway894345 3 days ago | parent | prev [-] | | What does a few rpis cost on a monthly basis? | | |
| ▲ | theodric 3 days ago | parent [-] | | Depends. At full load? At Irish power prices? Just the Pi, no peripherals, no NVMe? 5 units? €13/mo. Handy: https://700c.dk/?powercalc My Pi CM4 NAS with a PCIe switch, SATA and USB3 controllers, 6 SATA SSDs, 2 VMs, 2 LXC containers, and a Nextcloud snap pretty much sits at 17 watts most of the time, hitting 20 when a lot is being asked of it, and 26-27W at absolute max with all I/O and CPU cores pegged. €3.85/mo if I pay ESB, but I like to think that it runs fully off the solar and batteries :) | | |
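For anyone sanity-checking those numbers: a constant load's monthly cost is just watts × ~730 hours ÷ 1000 × the per-kWh rate. A minimal sketch in Python, where the ~€0.31/kWh rate and the ~11.5W-per-Pi full-load draw are assumptions backed out of the figures above rather than measured values:

    # Back-of-the-envelope: steady wattage -> monthly electricity cost.
    # The rate and per-Pi wattage are assumptions, not measurements.
    HOURS_PER_MONTH = 730  # ~24 * 365 / 12

    def monthly_cost(watts: float, price_per_kwh: float) -> float:
        """Cost of a constant load running for one month."""
        return watts / 1000 * HOURS_PER_MONTH * price_per_kwh

    print(f"{monthly_cost(17, 0.31):.2f} EUR/mo")        # ~3.85: the 17W CM4 NAS
    print(f"{monthly_cost(5 * 11.5, 0.31):.2f} EUR/mo")  # ~13: five Pis at full load

Plugging in $0.10-$0.14/kWh instead shows why idle draw bothers the US commenters much less.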
| ▲ | throwaway894345 3 days ago | parent [-] | | > Depends. At full load? At Irish power prices? Just the Pi, no peripherals, no NVMe? 5 units? €13/mo. Pretty sure most of us aren't running anywhere close to full load 24/7, but whoa, Irish power is expensive. In the central US I pay $0.14/kWh. | |
| ▲ | theodric 3 days ago | parent | next [-] | | Yeah, it's brutal. Was €0.39 right after Mad Vlad kicked off his vanity conflict. | | |
| ▲ | throwaway894345 5 hours ago | parent [-] | | That’s rough. What’s your progress on renewables? Wind has made electricity really cheap in my state and I would think Ireland would be pretty windy (esp offshore)? |
| |
| ▲ | fragmede 2 days ago | parent | prev [-] | | cries in west coast peak $0.71/kWh rate |
|
|
|
| |
| ▲ | ofrzeta 3 days ago | parent | prev [-] | | Maybe so, but even then a second-hand blade server is more cost-effective than a Raspi Cluster. | | |
| ▲ | geerlingguy 3 days ago | parent [-] | | Not if you run it idle a lot; most commercial blade servers suck down a lot of power. I think a niche where Pi blades can work is for a learning cluster, like in schools for HPC learning, network automation, etc. It's definitely not suited for production, but there, you won't find old blade servers either (because of the same power-to-performance issue). |
|
|
|
| ▲ | Almondsetat 4 days ago | parent | prev | next [-] |
| I can get a Xeon E5-2690V4 with 28 threads and 64GB of RAM for about $150. If you need cores and memory to make a lot of VMs you can do it extremely cheaply |
| |
| ▲ | Aurornis 3 days ago | parent | next [-] | | > I can get a Xeon E5-2690V4 with 28 threads and 64GB of RAM for about $150. If the goal is a lot of RAM and you don’t care about noise, power, or heat then these can be an okay deal. Don’t underestimate how far CPUs have come, though. That machine will be slower than AMD’s slowest entry-level CPU. Even an AMD 5800X will double its single-core performance and walk away from it on multithreaded tasks despite having only 8 cores. It will use less electricity and be quiet, too. More expensive, but if this is something you plan to leave running 24/7, the electricity costs over a few years might make the power-hungry server more expensive over time. | |
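The purchase-price-versus-power trade-off above is easy to put rough numbers on. A break-even sketch, where the $550 price gap, the 65W difference in idle draw, and both electricity rates are illustrative assumptions rather than figures taken from the thread:

    # Years until a power-hungry used server's electricity bill eats up
    # its purchase-price advantage. All inputs are illustrative assumptions.
    HOURS_PER_YEAR = 8766  # 24 * 365.25

    def breakeven_years(price_gap_usd: float, extra_watts: float,
                        usd_per_kwh: float) -> float:
        extra_cost_per_year = extra_watts / 1000 * HOURS_PER_YEAR * usd_per_kwh
        return price_gap_usd / extra_cost_per_year

    # Assumed: ~$150 used Xeon vs. ~$700 modern desktop, ~100W vs. ~35W idle, on 24/7.
    print(breakeven_years(550, 65, 0.17))  # ~5.7 years at a US-average-ish rate
    print(breakeven_years(550, 65, 0.40))  # ~2.4 years at a California-style rate

With cheap electricity the old server stays ahead for years; at coastal rates the newer machine pays for itself much sooner, which is the crossover being described.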
| ▲ | semi-extrinsic 3 days ago | parent | prev | next [-] | | For $3000 you can get 3x used Epyc servers with a total of 144 cores and 384 GB memory, with dual-port 25GbE networking so you can run them in a fully connected cluster without a switch. It will have >20x better perf/$ and ~3x better perf/W. That combo gives you the better part of a gigabyte of L3 cache and an aggregate memory bandwidth of 600 GB/s, while still staying below 1000W total running at full speed. Plus your NICs are the fancy kind that let you play around with RoCEv2 and such nifty stuff. It would also be a chance to learn how to do things properly with SLURM, Warewulf, etc. instead of a poor man's solution with Ansible playbooks like in these blog posts. | |
| ▲ | p12tic 3 days ago | parent | next [-] | | Better to build a single workstation - less noise, less power usage, and the form factor is way more convenient. A budget of $3000 can buy 128 cores with 512GB of RAM on a single regular EATX motherboard, plus a case, a power supply and other accessories. Power usage is ~550W at maximum utilization, which is not much more than a gaming rig with a powerful GPU. | |
| ▲ | Almondsetat 3 days ago | parent | prev [-] | | You are taking my reply completely out of context. If you want to learn clustering, you need a lot of cores and RAM to run many VMs. You don't need them to be individually very powerful. |
| |
| ▲ | mattbillenstein 3 days ago | parent | prev | next [-] | | Power and noise - old server hardware is not something you want in your home. Commodity desktop CPUs with 32 or 64GB of RAM can do all of this in a low-power, quiet way without a lot more expense. | |
| ▲ | p12tic 3 days ago | parent [-] | | The problem is with the form factor, not the server hardware per se. If one buys a regular ATX motherboard that accepts server CPUs and fits it in a regular ATX case, then there's lots of space for a relatively silent CPU air cooler. The 2690 v4 idles at less than 40W, which is not much more than a regular gaming desktop with a powerful GPU. The only problem in practice is that server CPUs don't support S3 suspend, so putting the whole thing to sleep after finishing with it doesn't work. |
| |
| ▲ | nine_k 4 days ago | parent | prev | next [-] | | It will probably consume $150 worth of electricity in less than a month, even sitting idle :-\ | | |
| ▲ | blobbers 4 days ago | parent | next [-] | | The internet says 100W idle, so maybe more like $40-50 of electricity; depending on where you live it could be cheaper or more expensive. Makes me wonder if I should unplug more stuff when on vacation. | |
| ▲ | nine_k 4 days ago | parent | next [-] | | I was surprised to find out that my apartment pulls 80-100W when everything is seemingly down during the night. A tiny light here and there, several displays in sleep mode, a desktop idling (a mere 15W, but still), a laptop charging, several phones charging, etc., and the fridge switching on for a short moment. The many small amounts add up to something considerable. | |
| ▲ | ToucanLoucan 3 days ago | parent | next [-] | | I got out of the homelab game as I finished my transition from DevOps to Engineering Lead; my lab was simply massively overbuilt for what I actually needed. I replaced an ancient Dell R700 series, an R500 series, and a couple of Supermicros with 3 old desktop PCs in rack enclosures and cut my electric bill by nearly $90/month. Fuckin nutty how much juice those things tear through. | |
| ▲ | amatecha 3 days ago | parent | prev [-] | | Yeah it kinda puts it all into perspective when you think of how every home used to use 60-watt light bulbs all throughout. Most people just leave lights on all over their home all day, probably using hundreds of watts of electricity. Makes me realize my 35-65W laptop is pretty damn efficient haha |
| |
| ▲ | rogerrogerr 3 days ago | parent | prev | next [-] | | 100W over a month (rule of thumb: 730 hours) is 73 kWh, which is $7.30 at my $0.10/kWh rate, or less than $25 at (what Google told me is) Cali’s average $0.30/kWh. | |
| ▲ | mercutio2 3 days ago | parent [-] | | Your googling gave results that were likely accurate for California 4-5 years ago. My average cost per kWh is about 60 cents. Rates have gone up enormously because the cost of wildfires is falling on ratepayers, not the utility owners. Regulated monopolies are pretty great, aren’t they? Heads I win, tails you lose. | |
| ▲ | lukevp 3 days ago | parent | next [-] | | 60 cents per kWh? That’s shocking. Here in Oregon people complain about energy prices, and my fully loaded cost (not just the per-kWh rate, but including everything) is 19c. And I go over the limit for single-family residential, where I end up in a higher-priced bracket. Thanks for making me feel better about my electricity rate. I’m sorry you have to deal with that. The utility companies should have to pay to cover those costs. | |
| ▲ | cogman10 3 days ago | parent | prev | next [-] | | Depends entirely on the utilities board doing the regulation. That said, I'm of the opinion that power/water/internet should all be state/county/city run. I don't want my utility companies to have profit motives. My water company just got bought up by a huge water conglomerate and, you guessed it, immediate rate increases. | |
| ▲ | SoftTalker 3 days ago | parent [-] | | Most utilities, even if ostensibly privately owned, are profit-limited, and rates must be approved by a regulatory board. Some are organized as non-profits (rural water and electric co-ops, etc.). This is in exchange for the local monopoly. If your local regulators approved the merger and higher rates, your complaint is with them as much as with the utility company. That's not to say some regulators aren't basically rubber stamps, or even corrupt. | |
| ▲ | cogman10 3 days ago | parent [-] | | I agree. The issue really is that they are 3 layers removed from where I can make a change. They are all appointed and not elected which means I (and my neighbors) don't have any recourse beyond the general election. IIRC, they are appointed by the governor which makes it even harder to fix (might be the county commissioner, not 100% on how they got their position, just know it was an appointment). I did (as did others), in fact, write in comments and complaints about the rate increases and buyout. That went unheard. |
|
| |
| ▲ | Damogran6 3 days ago | parent | prev | next [-] | | CORE energy in Colorado is charging $0.10819 per kWh _today_ https://core.coop/my-cooperative/rates-and-regulations/rate-... | |
| ▲ | LTL_FTC 3 days ago | parent | prev [-] | | They have definitely increased but not all of California is like this. In the heart of Silicon Valley, Santa Clara, it's about $0.15/kWh. Having Data Centers nearby helps, I suppose. | | |
| ▲ | chermi 3 days ago | parent | next [-] | | I'm guessing the parent is talking about the total bill (transmission, demand charges...). $0.15/kWh is probably just the usage, and I am very skeptical that's accurate for residential. | |
| ▲ | LTL_FTC 24 minutes ago | parent [-] | | Correct. $0.15/kWh is usage. There are a few small fees but that’s likely the case in most places. This is residential use. If skeptical, a quick online search is all it takes… |
| |
| ▲ | favorited 3 days ago | parent | prev [-] | | Santa Clara's energy rates are an outlier among neighboring municipalities, and should not be used as an example of energy cost in the Bay Area. Santa Clara residents are served by city-owned Silicon Valley Power, which has lower rates than PG&E or SVCE, which service almost all of the South Bay. | | |
| ▲ | LTL_FTC 9 minutes ago | parent [-] | | Well, the discussion was California as a whole and averages, so I decided to share. As with averages, data falls above and below the mean, so when a commenter above said $0.30/kWh was much too low for California, I decided to weigh in, as I’m in California paying below the average. It’s just a data point, a counterexample to the claim made by the parent. Maybe it helps fellow nerds pick a spot in the bay if they want to run their homelabs. |
|
|
|
| |
| ▲ | titanomachy 3 days ago | parent | prev | next [-] | | 100W continuous at 12¢/kWh (US average) is only ~$9 / month. Is your electricity 5x more expensive than the US average? | | |
| ▲ | RussianCow 3 days ago | parent | next [-] | | The US average hasn't been that low in a few years; according to [0] it's 17.47¢/kWh, and significantly higher in some parts of the country (40+ in Hawaii). And the US has low energy costs relative to most of the rest of the world, so a 3-5x multiplier over that for other countries isn't unreasonable. Plus, energy prices are currently rising and will likely continue to do so over the next few years. $50/month for 100W continuous usage isn't totally mad, and that could climb even higher over the rest of the decade. | |
| ▲ | mercutio2 3 days ago | parent | prev [-] | | Not OP, but my California TOU rates are between 40 and 70 cents per kWh. Still only $50/month, not $150, but I very much care about 100W loads doing no work. | |
| ▲ | cjbgkagh 3 days ago | parent [-] | | Those kWh prices are insane, that’ll make industry move out of there. | | |
| ▲ | selkin 3 days ago | parent [-] | | Industrial customers pay different rates than homes do. That said, I am not sure those numbers are accurate. I am in California (PG&E with East Bay community generation), and my TOU rates are much lower than those. | |
| ▲ | mercutio2 3 days ago | parent | next [-] | | There are 3 different components of PG&E electricity bills, which makes the bill difficult to read. I am also in PG&E East Bay community generation, and when I look at all the components, it’s:
Minimum delivery charge (what’s paid monthly, which is largely irrelevant, before the annual true-up of NEM charges): $11.69/month
Actual charges, billed annually, per kWh: peak NEM charge $0.62277; off-peak NEM charge $0.31026
Plus 3-20% extra (depending on the month) in “non-bypassable charges” (I haven’t figured out where these numbers come from), then a 7.5% local utility tax.
Those rates do get a little lower in the winter ($0.30 to $0.48), and of course the very high rates benefit me when I generate more energy than I consume (which only happens when I’m on vacation). But the marginal all-in costs are just very high. That’s NEM2 + TOU-EV2A, specifically. | |
| ▲ | nullc 2 days ago | parent [-] | | Are you actually able to compute that? With PG&E + MCE, because of the way they back out the PG&E generation charges, the actual per-time-period rates are not disclosed. I can solve for them with three equations in three unknowns... but since they change the rates quarterly, by the time I know what my exact rates were, they have already changed. |
| |
| ▲ | mrkstu 3 days ago | parent | prev [-] | | If he’s only paying $50, most of it is connection fees, with low usage distorting his per-kWh price way up. |
|
|
|
| |
| ▲ | yjftsjthsd-h 4 days ago | parent | prev | next [-] | | > Makes me wonder if I should unplug more stuff when on vacation. What's the margin on unplugging vs just powering off? | | |
| ▲ | Symbiote 3 days ago | parent | next [-] | | That also depends on the country you live in. The EU (and maybe China?) has been regulating standby power consumption, so most of my appliances either have a physical off switch (usually as the only switch) or should have very low standby power draw. I don't have the equipment to measure this myself. | |
| ▲ | dijit 4 days ago | parent | prev [-] | | By "off" do you mean functionally disabled, but with whatever auto-update system running in the background and all the radios on for "smart home" reasons - or actually "off"? |
| |
| ▲ | p12tic 3 days ago | parent | prev [-] | | Depends on the server. This test got 79W idle for a _two socket_ E5-2690 v4 server: https://www.servethehome.com/lenovo-system-x3650-m5-workhors... |
| |
| ▲ | swiftcoder 3 days ago | parent | prev | next [-] | | Obviously the solution is to pick up another hobby, and enter the DIY solar game at the same time as your home lab obsession :D | |
| ▲ | bokohut 9 hours ago | parent [-] | | Interestingly enough, it is oftentimes a foundational change in one's 'normal' that inspires something 'new'. In this case that 'new' is energy-efficient software, down to the individual lines of code and what their energy cost is on certain hardware. Academics are publishing about it in niche corners of the web and some entrepreneurs are doing it, but of course none of this is cool right now, so we remain a mockery for our objectives. In time this too will become a real thing, as many are only now beginning to feel the ever-rising cost of energy, which is just starting to increase because of decisions made years ago. The worst is yet to come, as seen and heard directly from every expert who has testified in recent years before the Energy and Commerce Committee; however, only the outside-the-boxers among us watch such educational content to better prepare for tomorrow. Electricity powers our world and nearly everyone takes it for granted; time will change this thinking too. :D |
| |
| ▲ | Almondsetat 3 days ago | parent | prev | next [-] | | Isn't your home lab supposed to make you learn stuff? Why would you leave it idle? | | |
| ▲ | cjbgkagh 3 days ago | parent [-] | | You wouldn’t; it’s given as a lower bound. It costs more than that when it's not idling. | |
| ▲ | dijit 3 days ago | parent [-] | | but then you’d turn it off; if you don’t, then cloud is much more expensive too. Also, $150 for 100W is crazy, that's like $1.70 per kWh; it would cost about $150 a year at the (high) rates of southern Sweden. | |
| ▲ | cjbgkagh 3 days ago | parent [-] | | I'm not the OP; I don’t know how they arrived at that cost. Personally, it’s cheaper for me to buy hardware that does spend most of its time idling, fast turnaround on very large private datasets being key. |
|
|
| |
| ▲ | kjkjadksj 3 days ago | parent | prev [-] | | So shut it off when you don’t need it. |
| |
| ▲ | sebastiansm 4 days ago | parent | prev | next [-] | | On AliExpress those Xeon+mobo+RAM kits are really cheap. | |
| ▲ | datadrivenangel 3 days ago | parent [-] | | 1. Not in the US with tariffs now.
2. I would not trust complicated electronics from AliExpress from a safety and security perspective. |
| |
| ▲ | 4 days ago | parent | prev | next [-] | | [deleted] | |
| ▲ | kbenson 3 days ago | parent | prev [-] | | Source? That seems like something I would want to take advantage of at the moment... | |
| ▲ | kllrnohj 3 days ago | parent [-] | | Note the E5-2690V4 is a 10-year-old CPU; they are talking about used servers. You can find those on eBay or whatever, as well as at stores specializing in that. Depending on where you live, you might even find them free, as they are often considered literal e-waste by the companies decommissioning them. It also means it performs like a 10-year-old server CPU, so those 28 threads are not exactly worth a lot. The Geekbench results, for whatever value those are worth, are very mediocre in the context of anything remotely modern: https://browser.geekbench.com/processors/intel-xeon-e5-2690-... A modern 12-thread 9600X runs absolute circles around it: https://browser.geekbench.com/processors/amd-ryzen-5-9600x | |
| ▲ | flas9sd 3 days ago | parent | next [-] | | I tend to use quite old hardware that is powered off when not in use for its intended purpose, and I coined "capability is its own quality". For dedicated build boxes that crunch through lots of sources (whole distributions, AOSP) but run only seldom, getting your hands on lots of cores and RAM very cheaply can still trump buying newer CPUs with better perf/watt but higher cost. | |
| ▲ | mattbillenstein 3 days ago | parent | prev [-] | | This is the correct analysis - there's a reason you see this stuff cheap or free. The homelab group on Reddit is full of people who don't understand any of this - they have full racks in their house that could be replaced with one high-end desktop. | | |
| ▲ | kllrnohj 3 days ago | parent | next [-] | | > The homelab group on Reddit is full of people who don't understand any of this - they have full racks in their house that could be replaced with one high-end desktop. A lot of that group is making use of the IO capabilities of these systems to run lots of PCI-E devices & hard drives. There's not exactly a cost-effective modern equivalent for that. If there were cost-effective ways to do something like take a PCI-E 5.0 x2 and turn it into a PCI-E 3.0 x8 that'd be incredible, but there isn't really. So raw PCI-E lane count is significant if you want cheap networking gear or HBAs or whatever, and raw PCI-E lane count is $$$$ if you're buying new. Also these old systems mean cheap RAM in large, large capacities. Like 128GB RAM to make ZFS or VMs purr is much cheaper to do on these used systems than anything modern. | | |
| ▲ | mattbillenstein 3 days ago | parent [-] | | Perhaps, but I don't really get the dozens of TB of storage in the home use case a lot of the time either. Like if you have a large media library, you need to push maybe 10MB/s, you don't need 128GB of RAM to do that... It's mostly just hardware porn - perhaps there are a few legit use cases for the old hardware, but they are exceedingly rare in my estimate. | | |
| ▲ | kllrnohj 3 days ago | parent [-] | | > Like if you have a large media library, you need to push maybe 10MB/s, For just streaming a 4K Blu-ray you need more than 10MB/s; Ultra HD Blu-ray tops out at 144 Mbit/s (18 MB/s). Not to mention if that system is being hit by something else at the same time (backup jobs, etc.). Is the 128GB of RAM just hardware porn? Eh, maybe, probably. But if you want 8+ bays for a decent-sized NAS then you're quickly into price points at which these used servers are significantly cheaper, and 128GB of RAM adds very little to the cost, so why not. | |
| ▲ | Kubuxu 3 days ago | parent [-] | | For 8+ bays you just need a SAS HBA card and one free PCI-E slot. Not to mention that many motherboards will have 6+ SATA ports already. If anything, 2nd hand AMD gaming rigs make more sense than old servers.
I say that as someone with an always-off R720xd at home due to noise and heat. It was fun when I bought it during winter years ago, until summer came. | |
| ▲ | ThatPlayer 3 days ago | parent | next [-] | | I've been turning off my home server, even though it's a modern PC rather than old server hardware, because it idles at 100W, which is too much. Put a Ryzen 7900X in it. Not sure if it's not properly doing lower power states, or if it's the 10 HDDs spinning. Or even the GPU. But I also don't really have anything important running on it, so I can just turn it off. | |
| ▲ | kllrnohj 3 days ago | parent | prev [-] | | > For 8+ bays you just need a SAS HBA card and one free PCI-E slot. Not to mention that many motherboards will have 6+ SATA ports already. And what case are you putting them into? What if you want it rack mounted? What about >1gig networking? What if I want a GPU in there to do Whisper for Home Assistant? Used gaming rigs are great. But used servers still have loads of value, too. Compute just isn't one of them. | |
| ▲ | ssl-3 3 days ago | parent [-] | | > And what case are you putting them into? Maybe one of the Fractal Design cases with a bunch of drive bays? > What if you want it rack mounted? Companies like Rosewill sell ATX cases that can scratch that itch. > What about >1gig networking? What about a PCI Express card? Regular ATX computers are expandable. > What if I want a GPU in there to do Whisper for Home Assistant? I mean... We started with a gaming rig, right? Isn't a GPU already implicit? | |
| ▲ | kllrnohj 3 days ago | parent [-] | | > Companies like Rosewill sell ATX cases that can scratch that itch. Have you looked at what they cost? Those cases alone cost as much as a used server, which comes with a case. > What about a PCI Express card? Regular ATX computers are expandable. As mentioned higher up, they run out of lane count in a hurry, especially when you're using things like used ConnectX cards. | |
| ▲ | ssl-3 2 days ago | parent [-] | | A rackmount case from Rosewill costs a couple of hundred bucks or so, new. And they'll remain useful for as long as things like ATX boards and 3.5" hard drives are useful. I mean: An ATX case can be paid for once, and then be used for decades. (I'm writing this using a modern desktop computer with an ATX case that I bought in 2008.) PCI Express lanes can be multiplied. There should frankly be more of this going on than there is, but it's still a thing that can be done. Consumer boards built on the AMD X670E chipset, for instance, have some switching magic built in. There's enough direct CPU-connected lanes for an x16 GPU and a couple of x4 NVMe drives, and the NIC(s) and/or HBA(s) can go downstream of the chipset. (Yeah, sure: It's limited to an aggregate 64 Gbps at the tail end, but that's not a problem for the things I do at home where my sights are set on 10Gbps networking and an HBA with a bunch of spinny disks. Your needs may differ.) |
|
|
|
|
|
|
| |
| ▲ | zer00eyz 3 days ago | parent | prev [-] | | Most of the workloads that people with homelabs run could be run on a 5-year-old i5. A lot of businesses are paying obscene money to cloud providers when they could have a pair of racks and the staff to support it. Unless you're paying attention to the bleeding edge of the server market, to its costs (better yet, its features and affordability), this sort of mistake is easy to make. The article is by someone who does this sort of thing for fun, and for views/attention, and I'm glad for it... it's fun to watch. But it's sad when this same sort of misunderstanding happens in professional settings, and it happens a lot. |
|
|
|
|
|
| ▲ | montebicyclelo 4 days ago | parent | prev | next [-] |
| Yeah... Looks like you can get about $1/hr for 10 small VMs ($0.10 per VM). So for $3000, that's 3000 hours, or 125 days (and that's if you just wastefully leave them on all the time, instead of turning them on when needed). Say you wanted to play around for a couple of hours, that's like... $3. (That's assuming there's no bonus for joining / free tier, too.) |
| |
| ▲ | wongarsu 3 days ago | parent | next [-] | | The VMs quickly get expensive if you leave them running, though. The desktop equivalent of your 10 t3.micro instances is about $600 if you buy new. For example, a Lenovo ThinkCentre M75q Gen 2 Tiny 11JN009QGE has an 8-core 3.2GHz processor with hyperthreading. That's 16 virtual cores compared to the 20 vCPUs of the t3 instances, but with much faster cores. And 16GB of RAM allows you to match the 1GB per instance. If you don't have anything and feel generous, throw in another $200 for a good monitor and keyboard plus mouse. But you can get a used crap monitor for $20; I'd give you one for free just to be rid of it. That's a total of $800, or 33 days of forgetting to shut down the 10 VMs. Maybe half that if you buy used. Granted, not everyone has $800 or even $400 to drop on hobby projects, so renting VMs often does make sense. | |
| ▲ | verdverm 4 days ago | parent | prev | next [-] | | You can rent a beefy vm with an H100 for $1.50 / hr I regularly rent this for a few hours at a time for learning and prototyping | | | |
| ▲ | pinkgolem 3 days ago | parent | prev [-] | | Are you comparing 10 VMs with 1 shared core each to a 144-core solution? |
|
|
| ▲ | aprdm 4 days ago | parent | prev | next [-] |
| That really depends on what you want to learn and how deep you want to go. If you're automating things before the hypervisor comes online or before there's an OS running (e.g. working on datacenter automation, bare metal as a service), you will have many gaps. |
| |
| ▲ | leoc 3 days ago | parent [-] | | If you want to run something like GNS3 network simulation on a hosting service's hardware, you'll have to either hire a bare-metal server or deal with nested virtualisation on other people's VM setups. Network simulation absolutely drinks RAM, too, so just filling an old Xeon with RAM starts to look very attractive in comparison to cloud providers who treat it as an expensive upsell. |
|
|
| ▲ | sam1r 3 days ago | parent | prev | next [-] |
| A great way to do this is with a brand-new AWS account, which gives you a year of free-tier usage across many services, with reasonable limits. |
| |
|
| ▲ | bakugo 4 days ago | parent | prev | next [-] |
| It heavily depends on the use case. For these AI setups, you're completely correct, because the people who talk about how amazing it is to run a <100B model at home almost never actually end up using it for anything real (mostly because these small models aren't actually very good) and are doing it purely for the novelty. But if you're someone like me who intends to actively use the hardware for real-world purposes, the cloud often simply can't compete on price. At home, I have a mini PC with a 5600G, 32GB of RAM, and a few TBs of NVMe storage. The entire thing cost less than $600 a few years ago, and consumes around 20W of power on average. Even on the cheapest cloud providers available, an equivalent setup would exceed that price in less than half a year. SSD storage in particular is disproportionately expensive on the cloud. For small VMs that don't need much storage, it does make sense, but as soon as you scale up, cloud prices quickly start ballooning. |
| |
| ▲ | swiftcoder 3 days ago | parent [-] | | Plus you still have access to the whole lot when your ISP goes down (maybe less of a problem than it used to be, but not unheard of) |
|
|
| ▲ | nsxwolf 4 days ago | parent | prev | next [-] |
| That isn’t fun. I have a TI-99/4A in my office hooked up to a Raspberry Pi so it can use the internet. Why? Because it’s fun. I like to touch and see the things even though it’s all so silly. |
|
| ▲ | motorest 3 days ago | parent | prev | next [-] |
| > The cost effective way to do it is in the cloud. This. Some cloud providers offer VMs with 4GB RAM and 2 virtual cores for less than $4/month. If your goal is to learn how to work with clusters, nothing beats firing up a dozen VMs when it suits your fancy, and shut them down when playtime is over. This is something you can pull off in a couple of minutes with something like an Ansible script. |
|
| ▲ | pinkgolem 3 days ago | parent | prev | next [-] |
| For learning I feel much safer setting everything up locally: worst case, I have to reinstall my system. In the cloud, the worst case is a 5-6 digit bill. And knowing my ADD, the second scenario is not super unlikely. |
|
| ▲ | cramcgrab 3 days ago | parent | prev | next [-] |
| I don’t know, I keyed this into Google Gemini and got pretty far: “Simulate an AWS AI cluster, command line interface. For each command supply the appropriate AWS AI cluster response” |
|
| ▲ | mattbillenstein 3 days ago | parent | prev | next [-] |
| LOL, no |
|
| ▲ | newsclues 3 days ago | parent | prev [-] |
| Textbooks and reference books are free at the library. You don’t need hardware to learn. Sure, it helps, but you can learn from a book and pen-and-paper exercises. |
| |
| ▲ | trenchpilgrim 3 days ago | parent [-] | | I disagree. Most of what I've learned about systems comes from debugging the weird issues that only happen on real systems, especially real hardware. The book knowledge is like, 20-30% of it. | | |
| ▲ | titanomachy 3 days ago | parent [-] | | Agreed, I don't think I'd hire a datacenter engineer whose experience consisted of reading books and doing "pen and paper exercises". |
|
|