Aurornis 4 days ago

I thought the conclusion should have been obvious: A cluster of Raspberry Pi units is an expensive nerd indulgence for fun, not an actual pathway to high performance compute. I don’t know if anyone building a Pi cluster actually goes into it thinking it’s going to be a cost effective endeavor, do they? Maybe this is just YouTube-style headline writing spilling over to the blog for the clicks.

If your goal is to play with or learn on a cluster of Linux machines, the cost effective way to do it is to buy a desktop consumer CPU, install a hypervisor, and create a lot of VMs. It’s not as satisfying as plugging cables into different Raspberry Pi units and connecting them all together if that’s your thing, but once you’re in the terminal the desktop CPU, RAM, and flexibility of the system will be appreciated.

bunderbunder 4 days ago | parent | next [-]

The cost effective way to do it is in the cloud. Because there's a very good chance you'll learn everything you intended to learn and then get bored with it long before your cloud compute bill reaches the price of a desktop with even fairly modest specs for this purpose.

dukeyukey 3 days ago | parent | next [-]

It's good for the soul to have your cluster running in your home somewhere.

NordSteve 3 days ago | parent | next [-]

Bad for your power bill though.

platybubsy 3 days ago | parent | next [-]

I'm sure 5 rpis will devastate the power grid

duxup 3 days ago | parent | prev | next [-]

I need to heat my house too so maybe it helps a little there.

11101010001100 3 days ago | parent | prev | next [-]

You still pay for power for the cloud.

trenchpilgrim 3 days ago | parent | prev | next [-]

Still less than renting the same amount of compute. Somewhere between several months and a couple years you pull ahead on costs. Unless you only run your lab a few hours a day.

Damogran6 3 days ago | parent | prev | next [-]

I got past that back when I was paying for ISDN and had 5 Surplus Desktop PCs...write it off as 'Professional development'

throwaway894345 3 days ago | parent | prev [-]

What does a few rpis cost on a monthly basis?

theodric 3 days ago | parent [-]

Depends. At full load? At Irish power prices? Just the Pi, no peripherals, no NVMe? 5 units? €13/mo.

Handy: https://700c.dk/?powercalc

My Pi CM4 NAS with a PCIe switch, SATA and USB3 controllers, 6 SATA SSDs, 2 VMs, 2 LXC containers, and a Nextcloud snap pretty much sits at 17 watts most of the time, hitting 20 when a lot is being asked of it, and 26-27W at absolute max with all I/O and CPU cores pegged. €3.85/mo if I pay ESB, but I like to think that it runs fully off the solar and batteries :)
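
A quick sketch of that arithmetic (a minimal example; the ~€0.31/kWh ESB rate is inferred from the €3.85/month figure above, not stated directly):

  # Rough monthly electricity cost for a constant load.
  def monthly_cost_eur(watts, eur_per_kwh, hours_per_month=730):
      kwh = watts / 1000 * hours_per_month  # energy drawn over the month
      return kwh * eur_per_kwh

  print(monthly_cost_eur(17, 0.31))  # ~17W NAS -> ~€3.85/month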

throwaway894345 3 days ago | parent [-]

> Depends. At full load? At Irish power prices? Just the Pi, no peripherals, no NVMe? 5 units? €13/mo.

Pretty sure most of us aren't running anywhere close to full load 24/7, but whoa, Irish power is expensive. In the central US I pay $0.14/kWh.

theodric 3 days ago | parent | next [-]

Yeah, it's brutal. Was €0.39 right after Mad Vlad kicked off his vanity conflict.

throwaway894345 5 hours ago | parent [-]

That’s rough. What’s your progress on renewables? Wind has made electricity really cheap in my state and I would think Ireland would be pretty windy (esp offshore)?

fragmede 2 days ago | parent | prev [-]

cries in west coast peak $0.71/kWh rate

ofrzeta 3 days ago | parent | prev [-]

Maybe so, but even then a second-hand blade server is more cost-effective than a Raspi Cluster.

geerlingguy 3 days ago | parent [-]

Not if you run it idle a lot; most commercial blade servers suck down a lot of power. I think a niche where Pi blades can work is for a learning cluster, like in schools for HPC learning, network automation, etc.

It's definitely not suited for production, but you won't find old blade servers there either (due to the same power-to-performance issue).

Almondsetat 4 days ago | parent | prev | next [-]

I can get a Xeon E5-2690V4 with 28 threads and 64GB of RAM for about $150. If you need cores and memory to make a lot of VMs, you can do it extremely cheaply.

Aurornis 3 days ago | parent | next [-]

> I can get a Xeon E5-2690V4 with 28 threads and 64GB of RAM for about $150.

If the goal is a lot of RAM and you don’t care about noise, power, or heat then these can be an okay deal.

Don’t underestimate how far CPUs have come, though. That machine will be slower than AMD’s slowest entry-level CPU. Even an AMD 5800X will double its single-core performance and walk away from it on multithreaded tasks despite having only 8 cores. It will use less electricity and be quiet, too. It's more expensive, but if this is something you plan to leave running 24/7, the electricity costs over a few years might make the power-hungry server more expensive over time.

semi-extrinsic 3 days ago | parent | prev | next [-]

For $3000 you can get 3x used Epyc servers with a total of 144 cores and 384 GB memory, with dual-port 25GbE networking so you can run them in a fully connected cluster without a switch. It will have >20x better perf/$ and ~3x better perf/W.

That combo gives you the better part of a gigabyte of L3 cache and an aggregate memory bandwidth of 600 GB/s, while still below 1000W total running at full speed. Plus your NICs are the fancy kind that let you play around with RoCEv2 and such nifty stuff.
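
The aggregate bandwidth figure is consistent with 8 channels of DDR4-3200 per server (an assumption; the comment doesn't name the exact Epyc generation):

  # Aggregate memory bandwidth: 3 servers x 8 channels x DDR4-3200 (~25.6 GB/s each).
  servers, channels, gbps_per_channel = 3, 8, 25.6
  print(servers * channels * gbps_per_channel)  # ~614 GB/s, matching the ~600 GB/s above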

It would also be relevant to learn how to do things properly with SLURM, Warewulf, etc., instead of a poor man's solution with Ansible playbooks like in these blog posts.

p12tic 3 days ago | parent | next [-]

Better to build a single workstation: less noise, less power usage, and the form factor is way more convenient. A budget of $3000 can buy 128 cores with 512GB of RAM on a single regular EATX motherboard, a case, a power supply and other accessories. Power usage is ~550W at maximum utilization, which is not much more than a gaming rig with a powerful GPU.

Almondsetat 3 days ago | parent | prev [-]

You are taking my reply completely out of context. If you want to learn clustering, you need a lot of cores and ram to run many VMs. You don't need them to be individually very powerful.

mattbillenstein 3 days ago | parent | prev | next [-]

Power and noise - old server hardware is not something you want in your home.

Commodity desktop cpus with 32 or 64GB RAM can do all of this in a low-power and quiet way without a lot more expense.

p12tic 3 days ago | parent [-]

The problem is with the form factor, not the server hardware per se. If one buys a regular ATX motherboard that accepts server CPUs and fits it in a regular ATX case, then there's lots of space for a relatively silent CPU air cooler. The 2690 v4 idles at less than 40W, which is not much more than a regular gaming desktop with a powerful GPU.

The only problem in practice is that server CPUs don't support S3 suspend, so putting the whole thing to sleep after finishing with it doesn't work.

nine_k 4 days ago | parent | prev | next [-]

It will probably consume $150 worth of electricity in less than a month, even sitting idle :-\

blobbers 4 days ago | parent | next [-]

The internet says 100W idle, so maybe more like $40-50 of electricity; depending on where you live it could be cheaper or more expensive.

Makes me wonder if I should unplug more stuff when on vacation.

nine_k 4 days ago | parent | next [-]

I was surprised to find out that my apartment pulls 80-100W when everything is seemingly down during the night. A tiny light here and there, several displays in sleep mode, a desktop idling (a mere 15W, but still), a laptop charging, several phones charging, etc., and the fridge switching on for a short moment. The many small amounts add up to something considerable.

ToucanLoucan 3 days ago | parent | next [-]

I got out of the homelab game as I finished my transition from DevOps to Engineering Lead, and it was simply massively overbuilt for what I actually needed. I replaced an ancient Dell R700-series, an R500-series, and a couple of Supermicros with 3 old desktop PCs in rack enclosures and cut my electric bill by nearly $90/month.

Fuckin nutty how much juice those things tear through.

amatecha 3 days ago | parent | prev [-]

Yeah it kinda puts it all into perspective when you think of how every home used to use 60-watt light bulbs all throughout. Most people just leave lights on all over their home all day, probably using hundreds of watts of electricity. Makes me realize my 35-65w laptop is pretty damn efficient haha

rogerrogerr 4 days ago | parent | prev | next [-]

100W over a month (rule of thumb 730 hours) is 73kWh. Which is $7.30 at my $0.10/kWh rate, or less than $25 at (what Google told me is) Cali’s average $0.30/kWh.

mercutio2 3 days ago | parent [-]

Your googling gave results that were likely accurate for California 4-5 years ago. My average cost per kWh is about 60 cents.

Rates have gone up enormously because the cost of wildfires is falling on ratepayers, not the utility owners.

Regulated monopolies are pretty great, aren’t they? Heads I win, tails you lose.

lukevp 3 days ago | parent | next [-]

60 cents per kWh? That’s shocking. Here in Oregon people complain about energy prices and my fully loaded cost (not the per kWh but including everything) is 19c. And I go over the limit for single family residential where I end up in a higher priced bracket. Thanks for making me feel better about my electricity rate. I’m sorry you have to deal with that. The utility companies should have to pay to cover those costs.

cogman10 3 days ago | parent | prev | next [-]

Depends entirely on the utilities board doing the regulation.

That said, I'm of the opinion that power/water/internet should all be state/county/city run. I don't want my utility companies to have profit motives.

My water company just got bought up by a huge water company conglomerate and, you guessed it, immediate rate increases.

SoftTalker 3 days ago | parent [-]

Most utilities, even if ostensibly privately-owned, are profit-limited and rates must be approved by a regulatory board. Some are organized as non-profits (rural water and electric co-ops, etc.) This is in exchange for the local monopoly.

If your local regulators approved the merger and higher rates, your complaint is with them as much as the utility company.

Not saying that some regulators are not basically rubber stamps or even corrupt.

cogman10 3 days ago | parent [-]

I agree. The issue really is that they are 3 layers removed from where I can make a change. They are all appointed and not elected which means I (and my neighbors) don't have any recourse beyond the general election. IIRC, they are appointed by the governor which makes it even harder to fix (might be the county commissioner, not 100% on how they got their position, just know it was an appointment).

I did (as did others), in fact, write in comments and complaints about the rate increases and buyout. That went unheard.

Damogran6 3 days ago | parent | prev | next [-]

CORE energy in Colorado is charging $0.10819 per kWh _today_

https://core.coop/my-cooperative/rates-and-regulations/rate-...

LTL_FTC 3 days ago | parent | prev [-]

They have definitely increased, but not all of California is like this. In the heart of Silicon Valley, Santa Clara, it's about $0.15/kWh. Having data centers nearby helps, I suppose.

chermi 3 days ago | parent | next [-]

I'm guessing the parent is talking about the total bill (transmission, demand charges, etc.). $0.15/kWh is probably just the usage, and I am very skeptical that's accurate for residential.

LTL_FTC 32 minutes ago | parent [-]

Correct. $0.15/kWh is usage. There are a few small fees but that’s likely the case in most places. This is residential use. If skeptical, a quick online search is all it takes…

favorited 3 days ago | parent | prev [-]

Santa Clara's energy rates are an outlier among neighboring municipalities, and should not be used as an example of energy cost in the Bay Area. Santa Clara residents are served by city-owned Silicon Valley Power, which has lower rates than PG&E or SVCE, which service almost all of the South Bay.

LTL_FTC 17 minutes ago | parent [-]

Well, the discussion was California as a whole and averages, so I decided to share. As with averages, data is above and below the mean, so when a commenter above said $0.30/kWh was much too low for California, I decided to lend some support to the argument, as I’m in California paying below the average. It’s just a data point, a counterexample to the claim made by the parent. Maybe it helps fellow nerds pick a spot in the Bay if they want to run their homelabs.

titanomachy 4 days ago | parent | prev | next [-]

100W continuous at 12¢/kWh (US average) is only ~$9 / month. Is your electricity 5x more expensive than the US average?

RussianCow 3 days ago | parent | next [-]

The US average hasn't been that low in a few years; according to [0] it's 17.47¢/kWh, and significantly higher in some parts of the country (40+ in Hawaii). And the US has low energy costs relative to most of the rest of the world, so a 3-5x multiplier over that for other countries isn't unreasonable. Plus, energy prices are currently rising and will likely continue to do so over the next few years.

$50/month for 100W continuous usage isn't totally mad, and that could climb even higher over the rest of the decade.

mercutio2 3 days ago | parent | prev [-]

Not OP, but my California TOU rates are between 40 and 70 cents per kWh.

Still only $50/month, not $150, but I very much care about 100W loads doing no work.

cjbgkagh 3 days ago | parent [-]

Those kWh prices are insane, that’ll make industry move out of there.

selkin 3 days ago | parent [-]

Industrial pays different rates than homes.

That said, I am not sure those numbers are true. I am in California (PG&E with East Bay community generation), and my TOU rates are much lower than those.

mercutio2 3 days ago | parent | next [-]

There are 3 different components of PG&E electricity bills, which makes the bill difficult to read. I am also in PG&E East Bay community generation, and when I look at all components, it’s:

Minimum Delivery Charge (what’s paid monthly, which is largely irrelevant, before annual true-up of NEM charges): $11.69/month

Actual charges, billed annually, per kWh:

  Peak NEM charge: $.62277
  Off-Peak NEM charges: $.31026
Plus 3-20% extra (depending on the month) in “non-bypassable charges” (I haven’t figured out where these numbers come from), then a 7.5% local utility tax.

Those rates do get a little lower in the winter (.30 to .48), and of course the very high rates benefit me when I generate more energy than I consume (which only happens when I’m on vacation). But the marginal all-in costs are just very high.

That’s NEM2 + TOU-EV2A, specifically.

nullc 2 days ago | parent [-]

Are you actually able to compute that? With PG&E + MCE, because of the way they back out the PG&E generation charges, the actual per-time-period rates are not disclosed.

I can solve for them with three equations in three unknowns... but since they change the rates quarterly, by the time I know what my exact rates were, they have already changed.

mrkstu 3 days ago | parent | prev [-]

If he’s only paying $50, most of it is connection fees, and low usage is distorting his per-kWh price way up.

yjftsjthsd-h 4 days ago | parent | prev | next [-]

> Makes me wonder if I should unplug more stuff when on vacation.

What's the margin on unplugging vs just powering off?

Symbiote 3 days ago | parent | next [-]

That also depends on the country you live in.

The EU (and maybe China?) have been regulating standby power consumption, so most of my appliances either have a physical off switch (usually as the only switch) or should have very low standby power draw.

I don't have the equipment to measure this myself.

dijit 4 days ago | parent | prev [-]

By "off" you mean, functionally disabled but with whatever auto-update system in the background with all the radios on for "smart home" reasons - or, "off"?

p12tic 3 days ago | parent | prev [-]

Depends on the server. This test got 79W idle for a _two socket_ E5-2690 V4 server.

https://www.servethehome.com/lenovo-system-x3650-m5-workhors...

swiftcoder 3 days ago | parent | prev | next [-]

Obviously the solution is to pick up another hobby, and enter the DIY solar game at the same time as your home lab obsession :D

bokohut 9 hours ago | parent [-]

Interestingly enough, it is oftentimes a foundational change in one's 'normal' that inspires something 'new'.

In this case that 'new' is energy-efficient software, down to the individual lines of code and what their energy cost is on certain hardware. Academics are publishing about it in niche corners of the web, and some entrepreneurs are doing it, but of course none of this is cool right now, so we remain a mockery for our objectives. In time this too will become a real thing, as many are only now beginning to feel the ever-rising costs of energy, which is just starting to increase from decisions made years ago. The worst is yet to come, as heard directly from every expert who has testified in recent years before the Energy and Commerce Committee; however, only the outside-the-boxers among us watch such educational content to better prepare for tomorrow.

Electricity powers our world, and nearly all take it for granted; time will change this thinking too.

:D

Almondsetat 3 days ago | parent | prev | next [-]

Isn't your home lab supposed to make you learn stuff? Why would you leave it idle?

cjbgkagh 3 days ago | parent [-]

You wouldn’t; it’s given as a lower bound, since it costs more than that when not idling.

dijit 3 days ago | parent [-]

But then you’d turn it off; if you don’t, then cloud is much more expensive too.

Also, $150 for 100W is crazy; that’s like $1.70 per kWh. It would cost about $150 a year at the (high) rates of southern Sweden.

cjbgkagh 3 days ago | parent [-]

I’m not the OP; I don’t know how they arrived at that cost.

Personally, it’s cheaper for me to buy the hardware, even though it spends most of its time idling; fast turnaround on very large private datasets is key.

kjkjadksj 3 days ago | parent | prev [-]

So shut it off when you don’t need it.

sebastiansm 4 days ago | parent | prev | next [-]

On Aliexpress those Xeon+mobo+ram kits are really cheap.

datadrivenangel 3 days ago | parent [-]

1. Not in the US with tariffs now. 2. I would not trust complicated electronics from Aliexpress from a safety and security perspective.

4 days ago | parent | prev | next [-]
[deleted]
kbenson 3 days ago | parent | prev [-]

Source? That seems like something I would want to take advantage of at the moment...

kllrnohj 3 days ago | parent [-]

Note the E5-2690V4 is a 10 year old CPU, they are talking about used servers. You can find those on ebay or whatever as well as stores specializing in that. Depending on where you live, you might even find them free as they are often considered literal ewaste by the companies decommissioning them.

It also means it performs like a 10 year old server CPU, so those 28 threads are not exactly worth a lot. The geekbench results, for whatever value those are worth, are very mediocre in the context of anything remotely modern: https://browser.geekbench.com/processors/intel-xeon-e5-2690-...

Like a modern 12-thread 9600x runs absolute circles around it https://browser.geekbench.com/processors/amd-ryzen-5-9600x

flas9sd 3 days ago | parent | next [-]

I tend to use quite old hardware that is powered off when not in use for its intended purpose, and I coined the phrase "capability is its own quality".

For dedicated build boxes that crunch through lots of sources (whole distributions, AOSP) but run only seldom, getting your hands on lots of cores and RAM very cheaply can still trump buying newer CPUs with better perf/watt but higher cost.

mattbillenstein 3 days ago | parent | prev [-]

This is the correct analysis - there's a reason you see this stuff cheap or free.

The homelab group on Reddit is full of people who don't understand any of this - they have full racks in their house that could be replaced with one high-end desktop.

kllrnohj 3 days ago | parent | next [-]

> The homelab group on Reddit is full of people who don't understand any of this - they have full racks in their house that could be replaced with one high-end desktop.

A lot of that group is making use of the IO capabilities of these systems to run lots of PCI-E devices & hard drives. There's not exactly a cost-effective modern equivalent for that. If there were cost-effective ways to do something like take a PCI-E 5.0 x2 and turn it into a PCI-E 3.0 x8, that'd be incredible, but there aren't really. So raw PCI-E lane count is significant if you want cheap networking gear or HBAs or whatever, and raw PCI-E lane count is $$$$ if you're buying new.

Also these old systems mean cheap RAM in large, large capacities. Like 128GB RAM to make ZFS or VMs purr is much cheaper to do on these used systems than anything modern.

mattbillenstein 3 days ago | parent [-]

Perhaps, but I don't really get the dozens of TB of storage in the home use case a lot of the time either.

Like if you have a large media library, you need to push maybe 10MB/s, you don't need 128GB of RAM to do that...

It's mostly just hardware porn - perhaps there are a few legit use cases for the old hardware, but they are exceedingly rare in my estimate.

kllrnohj 3 days ago | parent [-]

> Like if you have a large media library, you need to push maybe 10MB/s,

For just streaming a 4K Blu-ray you need more than 10MB/s; Ultra HD Blu-ray tops out at 144 Mbit/s (18 MB/s). Not to mention if that system is being hit by something else at the same time (backup jobs, etc...).

Is the 128GB of RAM just hardware porn? Eh, maybe, probably. But if you want 8+ bays for a decent-sized NAS then you're quickly into price points where these used servers are significantly cheaper, and 128GB of RAM adds very little to the cost, so why not.

Kubuxu 3 days ago | parent [-]

For 8+ bays you just need a SAS HBA card and one free PCI-E slot. Not to mention that many motherboards will have 6+ SATA ports already.

If anything, 2nd-hand AMD gaming rigs make more sense than old servers. I say that as someone with an always-off R720xd at home due to noise and heat. It was fun when I bought it during winter years ago, until summer came.

ThatPlayer 3 days ago | parent | next [-]

I've been turning off my home server, even though it's a modern PC rather than old server hardware, because it idles at 100W, which is too much. I put a Ryzen 7900X in it.

Not sure if it's not properly reaching lower power states, or if it's the 10 HDDs spinning. Or even the GPU. But I also don't really have anything important running on it, so I can just turn it off.

kllrnohj 3 days ago | parent | prev [-]

> For 8+ bays you just need a SAS HBA card and one free PCI-E slot. Not to mention that many motherboards will have 6+ SATA ports already.

And what case are you putting them into? What if you want it rack mounted? What about >1gig networking? What if I want a GPU in there to do whisper for home assistant?

Used gaming rigs are great. But used servers still have loads of value too; compute just isn't one of their strengths.

ssl-3 3 days ago | parent [-]

> And what case are you putting them into?

Maybe one of the Fractal Design cases with a bunch of drive bays?

> What if you want it rack mounted?

Companies like Rosewill sell ATX cases that can scratch that itch.

> What about >1gig networking?

What about PCI Express card? Regular ATX computers are expandable.

> What if I want a GPU in there to do whisper for home assistant?

I mean... We started with a gaming rig, right? Isn't a GPU already implicit?

kllrnohj 3 days ago | parent [-]

> Companies like Rosewill sell ATX cases that can scratch that itch.

Have you looked at what they cost? Those cases alone cost as much as a used server. Which comes with a case.

> What about PCI Express card? Regular ATX computers are expandable.

As mentioned higher up, they run out of lane count in a hurry, especially when you're using things like used ConnectX cards.

ssl-3 2 days ago | parent [-]

A rackmount case from Rosewill costs a couple of hundred bucks or so, new. And they'll remain useful for as long as things like ATX boards and 3.5" hard drives are useful.

I mean: An ATX case can be paid for once, and then be used for decades. (I'm writing this using a modern desktop computer with an ATX case that I bought in 2008.)

PCI Express lanes can be multiplied. There should frankly be more of this going on than there is, but it's still a thing that can be done.

Consumer boards built on the AMD X670E chipset, for instance, have some switching magic built in. There's enough direct CPU-connected lanes for an x16 GPU and a couple of x4 NVMe drives, and the NIC(s) and/or HBA(s) can go downstream of the chipset.

(Yeah, sure: It's limited to an aggregate 64 Gbps at the tail end, but that's not a problem for the things I do at home where my sights are set on 10Gbps networking and an HBA with a bunch of spinny disks. Your needs may differ.)

zer00eyz 3 days ago | parent | prev [-]

Most of the workloads that people with homelabs run, could be run on a 5 year old i5.

A lot of businesses are paying obscene money to cloud providers when they could have a pair of racks and the staff to support it.

Unless you're paying attention to the bleeding edge of the server market and its costs (better yet, its features and affordability), this sort of mistake is easy to make.

The article is by someone who does this sort of thing for fun, and for views/attention, and I'm glad for it... it's fun to watch. But it's sad when this same sort of misunderstanding happens in professional settings, and it happens a lot.

montebicyclelo 4 days ago | parent | prev | next [-]

Yeah... Looks like you can get about $1/hr for 10 small VMs ($0.10 per VM).

So for $3000, that's 3000 hours, or 125 days (if you just wastefully leave them on all the time, instead of turning them on when needed).

Say you wanted to play around for a couple of hours; that's like... $3.

(That's assuming there's no bonus for joining / free tier, too.)

wongarsu 3 days ago | parent | next [-]

The VMs quickly get expensive if you leave them running though.

The desktop equivalent of your 10 T3 Micro instances is about $600 if you buy new. For example, a Lenovo ThinkCentre M75q Gen 2 Tiny 11JN009QGE has an 8x3.2GHz processor with hyperthreading. That's 16 virtual cores compared to the 20 vCPUs of the T3 instances, but with much faster cores. And 16GB RAM allows you to match the 1GB per instance.

If you don't have anything and feel generous throw in another $200 for a good monitor and keyboard plus mouse. But you can get a used crap monitor for $20. I'd give you one for free just to be rid of it.

That's a total of $800, or 33 days of forgetting to shut down the 10 VMs. Maybe half that if you buy used.

Granted, not everyone has $800 or even $400 to drop on hobby projects, so renting VMs often does make sense.
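
The break-even arithmetic, as a minimal sketch (the hardware prices and the $1/hr cloud figure are the ones quoted in this subthread):

  # Days of always-on cloud rental after which buying the hardware is cheaper.
  def breakeven_days(hardware_cost_usd, cloud_usd_per_hour):
      return hardware_cost_usd / (cloud_usd_per_hour * 24)

  print(breakeven_days(800, 1.0))   # ~33 days for the $800 desktop setup
  print(breakeven_days(3000, 1.0))  # ~125 days for the $3000 budget upthread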

verdverm 4 days ago | parent | prev | next [-]

You can rent a beefy VM with an H100 for $1.50/hr.

I regularly rent this for a few hours at a time for learning and prototyping

Y_Y 4 days ago | parent [-]

[flagged]

verdverm 3 days ago | parent [-]

I'll take the H1/200s over a vehicle any day of the week

pinkgolem 3 days ago | parent | prev [-]

Are you comparing 10 VMs with 1 shared core each to a 144-core solution?

aprdm 4 days ago | parent | prev | next [-]

That really depends on what you want to learn and how deep. If you're automating things before the hypervisor comes online or there's an OS running (e.g. working on datacenter automation, or bare metal as a service), you will have many gaps.

leoc 3 days ago | parent [-]

If you want to run something like GNS3 network simulation on a hosting service's hardware you'll either have to deal with hiring a bare-metal server or deal with nested virtualisation on other people's VM setups. Network simulation absolutely drinks RAM, too, so just filling an old Xeon with RAM starts to look very attractive in comparison to cloud providers who treat it as an expensive upsell.

sam1r 3 days ago | parent | prev | next [-]

A great way to do this is with a brand new AWS account, which will give you 1 year free across all services, with reasonable limits.

jahsome 3 days ago | parent [-]

Oracle's free tier is pretty generous too.

bakugo 4 days ago | parent | prev | next [-]

It heavily depends on the use case. For these AI setups, you're completely correct, because the people who talk about how amazing it is to run a <100B model at home almost never actually end up using it for anything real (mostly because these small models aren't actually very good) and are doing it purely for the novelty.

But if you're someone like me who intends to actively use the hardware for real-world purposes, the cloud often simply can't compete on price. At home, I have a mini PC with a 5600G, 32GB of RAM, and a few TBs of NVME storage. The entire thing cost less than $600 a few years ago, and consumes around 20W of power on average.

Even on the cheapest cloud providers available, an equivalent setup would exceed that price in less than half a year. SSD storage in particular is disproportionately expensive on the cloud. For small VMs that don't need much storage, it does make sense, but as soon as you scale up, cloud prices quickly start ballooning.

swiftcoder 3 days ago | parent [-]

Plus you still have access to the whole lot when your ISP goes down (maybe less of a problem than it used to be, but not unheard of)

nsxwolf 4 days ago | parent | prev | next [-]

That isn’t fun. I have a TI-99/4A in my office hooked up to a raspberry pi so it can use the internet. Why? Because it’s fun. I like to touch and see the things even though it’s all so silly.

motorest 3 days ago | parent | prev | next [-]

> The cost effective way to do it is in the cloud.

This. Some cloud providers offer VMs with 4GB RAM and 2 virtual cores for less than $4/month. If your goal is to learn how to work with clusters, nothing beats firing up a dozen VMs when it suits your fancy, and shut them down when playtime is over. This is something you can pull off in a couple of minutes with something like an Ansible script.

pinkgolem 3 days ago | parent | prev | next [-]

For learning, I feel much safer setting everything up locally; worst case, I have to reinstall my system.

In the cloud, the worst case is a bill running to 5-6 digits.

And I know my ADD; case 2 is not super unlikely.

cramcgrab 3 days ago | parent | prev | next [-]

I don’t know, i keyed this into google Gemini and got pretty far: “ Simulate an AWS AI cluster, command line interface. For each command supply the appropriate AWS AI cluster response”

mattbillenstein 3 days ago | parent | prev | next [-]

LOL, no

newsclues 4 days ago | parent | prev [-]

Textbooks and reference books are free at the library.

You don’t need hardware to learn. Sure it helps but you can learn from a book and pen and paper exercises.

trenchpilgrim 4 days ago | parent [-]

I disagree. Most of what I've learned about systems comes from debugging the weird issues that only happen on real systems, especially real hardware. The book knowledge is like, 20-30% of it.

titanomachy 3 days ago | parent [-]

Agreed, I don't think I'd hire a datacenter engineer whose experience consisted of reading books and doing "pen and paper exercises".

TZubiri 3 days ago | parent | prev | next [-]

Fun fact: a Raspberry Pi does not have a built-in real-time clock with its own battery, so it relies on network clocks to keep the time.

Another fun fact: the network module of the Pi (up through the 3B+) is actually connected to the USB bus, so there's some overhead as well as a throughput limitation.

Fun fact: the Pi does not have a power button, relying on software to shut down cleanly. If you lose access to the machine, there's no way to avoid corrupted state on the disk.

Despite all of this, if you want to self-host some website, the Raspberry Pi is still an amazingly cost-effective choice: for anywhere between 2 and 20,000 monthly users, one Pi will be overprovisioned. And you can even get an absolutely overkill redundant Pi as a failover, but even a single Pi can reach 365 days of uptime with no problem, and as long as you don't reboot or lose power or lose internet, you can achieve more than a couple of nines of reliability.
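
A rough sketch of what "a couple of nines" works out to (assuming unplanned downtime is the only availability loss):

  # Availability for a given amount of downtime per year.
  def availability(downtime_hours_per_year):
      return 1 - downtime_hours_per_year / (365 * 24)

  print(availability(24))  # one day down per year -> ~0.9973 (two nines)
  print(availability(1))   # one hour down per year -> ~0.99989 (three nines)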

But if you are thinking of a third, much less a tenth, Raspberry Pi, you are probably scaling the wrong way. Well before you reach the point where quantity matters (a third machine), it becomes cost-effective to upgrade the quality of your one or two machines.

On the embedded side it's the same story: these are great for prototyping, but you are not going to order 10k and sell them in production. Maybe a small 100-unit test batch? But you will optimize and make your own PCB before a mass batch.

alias_neo 3 days ago | parent | next [-]

> the Raspberry Pi is still an amazingly cost-effective choice

It's really not though. I've been a Pi user and fan since it was first announced, and I have dozens of them, so I'm not hating on RPi here; we did the maths some time back here on HN when something else Pi related came up.

If you go for a Pi5 with say 8GB RAM, by the time you factor in an SSD + HAT + PSU + Case + Cooler (+ maybe a uSD), you're actually already in mini-PC price territory and you can get something much more capable and feature complete for about the same price, or for a few £ more, something significantly more capable, better CPU, iGPU, you'll get an RTC, proper networking, faster storage, more RAM, better cooling, etc, etc, and you won't be using much more electricity either.

I went this route myself and have figuratively and literally shelved a bunch of Pis by replacing them with a MiniPC.

My conclusion, for my own use, after a decade of RPi use, is that a cheap mini PC is the better option these days for hosting/services/server duty and Pis are better for making/tinkering/GPIO related stuff, even size isn't a winner for the Pi any more with the size of some of the mini-PCs on the market.

mrguyorama 3 days ago | parent | next [-]

>SSD + HAT + PSU + Case + Cooler

Zero of any of that is needed. The new Pi "works best" with a cooler, sure, but at standard room temps it will be fine for serving web apps and custom projects and things. You do not need an SSD. You do not need a HAT for anything.

Apparently the Pi 5 8GB is $120 though. WTF.

What personal web site or web app or project can't run just fine on a Pi Zero 2, though? It's a little RAM-starved, but performance-wise it should be sufficient.

Other than second-hand mini PCs, old laptops also make great home servers. They have a built-in UPS!

barnas2 3 days ago | parent | prev | next [-]

> SSD + HAT + PSU + Case + Cooler (+ maybe a uSD)

The only 100% required thing on there is some sort of power supply, and an SD card, and I suspect a lot of people have a spare USB-C cable and brick lying around. A cooler is only recommended if you're going to be putting it under sustained CPU load, and they're like $10 on Amazon.

sjsdaiuasgdia 3 days ago | parent [-]

> a spare USB-C cable and brick lying around

Particularly with Pi 5, any old brick that might be hanging around has a fair chance at not being able to supply sufficient power.

TZubiri 3 days ago | parent | prev [-]

What do you mean by Cooler? Raspberry pi doesn't need a fan.

Also the other peripherals you consider are irrelevant, since you would need them (or not), in other setups. You can use a pi without a PSU for example. And if you use an SSD, you have to consider that cost in whatever you compare it to.

>I went this route myself and have figuratively and literally shelved a bunch of Pis

>and I have dozens of them,

Reread my post? I meant specifically that Pis are great for the 1 to 2 range. With 3 Pis you should change to something else. So I'm saying they are good at the $100-$200 budget, but bad anywhere above that.

J_McQuade 3 days ago | parent | next [-]

> What do you mean by Cooler? Raspberry pi doesn't need a fan.

From the official website:

> Does Raspberry Pi 5 need active cooling?

> Raspberry Pi 5 is faster and more powerful than prior-generation Raspberry Pis, and like most general-purpose computers, it will perform best with active cooling.

TZubiri 3 days ago | parent [-]

Oh. I haven't used the 5; I did the 3 and 4.

Sohcahtoa82 3 days ago | parent | prev | next [-]

> What do you mean by Cooler? Raspberry pi doesn't need a fan.

Starting with the Pi 4, they started saying that a cooler isn't required, but that it may thermal throttle without one if you keep the CPU pegged.

alias_neo 3 days ago | parent | prev [-]

> What do you mean by Cooler? Raspberry pi doesn't need a fan

It's recommended for Pi 5, and if you're running a Pi 4, you should at least use a little heat sink, the 4 and 5 run pretty warm, and under any load they can throttle quite easily. I run mine in a rack, in the UK where it's not very warm compared to other parts of the world, and they get pretty warm even with cooling.

> Also the other peripherals you consider are irrelevant, since you would need them (or not), in other setups

No, they're not irrelevant, because if you buy a Mini-PC you get SSD, RAM, cooling, case, PSU included in the price.

> You can use a pi without a PSU for example

You can wing it with some odd USB charger you have lying around, but my experience over a decade, killing tens of high-quality microSDs in Pis to power throttling and brownouts, is that you should stick to the Pi-spec (5.1V) PSUs. The current rating can typically be lower than spec if you're not connecting peripherals, but a proper USB-spec plug will be 5V, not the 5.1V the Pi wants.

> Reread my post? I meant specifically that Pis are great for the 1 to 2 range

I think you need to re-read mine. I'm not suggesting replacing all of the Pis with a mini-PC; I'm suggesting that replacing ONE is cost-effective NOW, when compared to a Pi 5.

> So I'm saying they are good at the $100-$200 budget

Disagree (at least as things stand here in the UK with our current pricing).

Mini-PC with N100, 16GB RAM, 512GB SSD, case, cooling, PSU, better IO, much better performance, etc: £128[0]

Pi 5, bare board, nothing else: £114[1]

These aren't some obtuse websites; they're places I shop all the time. The Pi Hut is an official distributor in the UK, and the Amazon result is the second result for "mini pc".

The thing about the performance gap here is that you _can_ replace 2-3+ Raspberry Pis with a single Mini-PC for the same price as a single Raspberry Pi 5. I've occasionally seen mini PC models on Amazon go on sale for £99 and less.

I'm not talking theoretical or napkin maths, I've literally done it, I replaced a bunch of Pis with a mini PC and now the Pis sit idle because there's still LOTS of headroom on the mini PC to add more, before I need to even consider firing up the Pis again for other stuff.

The Pi, _to me_, in 2025, is a great tool for learning, and building upon, using the GPIO and the excellent resources, but for self-hosting services, it no longer adds up.

By services I mean software tools, services, things actively "doing work", not a personal blog or project that could run on a vape[2].

[0] https://www.amazon.co.uk/BOSGAME-Computers-Windows-Desktop-G... [1] https://thepihut.com/products/raspberry-pi-5?src=raspberrypi... [2] https://news.ycombinator.com/item?id=45252817

TZubiri 2 days ago | parent [-]

Interesting. I may be outdated

1) Raspberry Pi's competitors have gotten better; that NUC is very cheap.

2) The Pi has gone in a different direction, increasing specs and price; the 3B+ or 4A had much lower specs, price, power consumption, etc.

In conclusion, if you can get an ARM SoC board with specs similar to the 3B+ or 4A (500MB to 2GB RAM), then you can host a blog on Linux for cheap. It should run you in the $50 area. But Raspberry no longer makes these; you might look into the thousands of competitors.

Additionally, if you want something more serious, NUCs become reasonable, though it's hard to tell whether two $50 Pis or one $200 Intel NUC would be better. It depends on the tradeoffs.

alias_neo 11 hours ago | parent [-]

Absolutely. I wouldn't suggest one shouldn't use a Pi if it fits their use case and budget, simply that once we get to a higher end Pi, it can be cost effective to simply buy a mini PC which will be more capable for not a lot more money.

The issue with competing ARM SBCs is the software support; Radxa makes some boards that are more powerful than Pis, but if you read the forums, they've had hardware flaws in the designs, they run old kernels and don't get updated, and of course there isn't the same community behind them.

An x86 mini pc is a different beast to a Pi, but then I think a lot of people who were hosting software on a Pi weren't specifically looking for ARM architecture anyway, unless they were, in which case stick with a Pi.

stuxnet79 3 days ago | parent | prev | next [-]

> Fun fact: a Raspberry Pi does not have a built-in real-time clock with its own battery, so it relies on network clocks to keep the time.

> Another fun fact: the network module of the Pi (up through the 3B+) is actually connected to the USB bus, so there's some overhead as well as a throughput limitation.

> Fun fact: the Pi does not have a power button, relying on software to shut down cleanly. If you lose access to the machine, there's no way to avoid corrupted state on the disk.

With all these caveats in mind, a raspberry pi seems to be an incredibly poor choice for distributed computing

CamperBob2 3 days ago | parent [-]

> With all these caveats in mind, a raspberry pi seems to be an incredibly poor choice for distributed computing

Exactly. This build sounds like the proverbial "1024 chickens" in Seymour Cray's famous analogy. If nothing else, the communications overhead will eat you alive.

geerlingguy 3 days ago | parent | prev [-]

The Pi 5 / CM5 / Pi 500 series does have a built-in RTC now, though most models require you to buy a separate RTC battery to plug into the RTC battery jack.

llm_nerd 4 days ago | parent | prev | next [-]

If you assume that the author did this to have content for his blog and his YouTube channel, it makes much more sense. Going back to the well with an "I regret" entry allows for extra exploitation of a pretty dubious venture.

YouTube is absolutely jam-packed with people pitching home "lab" sort of AI buildouts that are just catastrophically ill-advised, but it yields content that seems to be a big draw. For instance, Alex Ziskind's content. I worry that people are actually dumping thousands to have poor-performing, ultra-quantized local AIs that will have zero comparative value.

philipwhiuk 4 days ago | parent [-]

I doubt anyone does this seriously.

nerdsniper 4 days ago | parent [-]

I sure hope no one does this seriously expecting to save some money. I enjoy the videos on "catastrophically ill-advised" build-outs. The primary curiosities they satisfy for me are:

1) How much worse / more expensive are they than a conventional solution?

2) What kinds of weird esoteric issues pop up and how they get solved (e.g. the resizable BAR issue for GPUs attached to the RPi's PCIe slot)

glitchc 4 days ago | parent | prev | next [-]

I did some calculations on this. Procuring a Mac Studio with the latest Mx Ultra processor and maxing out the memory seems to be the most cost-effective way to break into the 100B+ parameter model space.

eesmith 4 days ago | parent | next [-]

Geerling links to last month's essay on a Frameboard cluster, at https://www.jeffgeerling.com/blog/2025/i-clustered-four-fram... . In it he writes 'An M3 Ultra Mac Studio with 512 gigs of RAM will set you back just under $10,000, and it's way faster, at 16 tokens per second.' for 671B parameters, that is, that M3 is at least 3x the performance of the other three systems.

teleforce 3 days ago | parent | prev | next [-]

Not quite; as it stands now, the most cost-effective way is most likely a Framework Desktop or a similar system, for example the HP G1a laptop/PC [1], [2].

[1] The Framework Desktop is a beast:

https://news.ycombinator.com/item?id=44841262

[2] HP ZBook Ultra:

https://www.hp.com/us-en/workstations/zbook-ultra.html

GeekyBear 4 days ago | parent | prev | next [-]

Now that we know that Apple has added tensor units to the GPU cores the M5 series of chips will be using, I might be asking myself whether I couldn't wait a bit.

t1amat 3 days ago | parent [-]

This is the right take. You might be able to get decent (2-3x less than a GPU rig) token generation, which is adequate, but your prompt processing speeds are more like 50-100x slower. A hardware solution is needed to make long context actually usable on a Mac.

llm_nerd 3 days ago | parent | prev | next [-]

The next generation M5 should bring the matmul functionality seen on the A19 Pro to the desktop SoC's GPU -- "tensor" cores, in essence -- and will dramatically improve the running of most AI models on those machines.

Right now the Macs are viable purely because you can get massive amounts of unified memory. Be pretty great when they have the massive matrix FMA performance to complement it.

randomgermanguy 4 days ago | parent | prev | next [-]

Depends on how heavy one wants to go with the quants (for Q6-Q4, the AMD Ryzen AI MAX chips seem a better/cheaper way to get started).

Also, the Mac Studio is a bit hampered by its low compute power, meaning you really can't use a 100B+ dense model; only MoE is feasible without getting multi-minute prompt-processing times (assuming 500+ token prompts, etc.).

GeekyBear 3 days ago | parent | next [-]

Given the RAM limitations of the first gen Ryzen AI MAX, you have no choice but to go heavy on the quantization of the larger LLMs on that hardware.

mercutio2 3 days ago | parent | prev [-]

Huh? My maxed out Mac Studio gets 60-100 tokens per second on 120B models, with latency on the order of 2 seconds.

It was expensive, but slow it is not for small queries.

Now, if I want to bump the context window to something huge, it does take 10-20 seconds to respond for agent tasks, but it’s only 2-3x slower than paid cloud models, in my experience.

Still a little annoying, and the models aren’t as good, but the gap isn’t nearly as big as you imply, at least for me.

zargon 3 days ago | parent | next [-]

GPT OSS 120B only has 5B active parameters. GP specifically said dense models, not MoE.

EnPissant 3 days ago | parent | prev | next [-]

I think the Mac Studio is a poor fit for gpt-oss-120b.

On my 96 GB DDR5-6000 + RTX 5090 box, I see ~20s prefill latency for a 65k prompt and ~40 tok/s decode, even with most experts on the CPU.

A Mac Studio will decode faster than that, but prefill will be tens of times slower due to much lower raw compute vs a high-end GPU. For long prompts that can make it effectively unusable. That’s what the parent was getting at. You will hit this long before 65k context.

If you have time, could you share numbers for something like:

llama-bench -m <path-to-gpt-oss-120b.gguf> -ngl 999 -fa 1 --mmap 0 -p 65536 -b 4096 -ub 4096

Edit: The only Mac Studio pp65536 datapoint I’ve found is this Reddit thread:

https://old.reddit.com/r/LocalLLaMA/comments/1jq13ik/mac_stu ...

They report ~43.2 minutes prefill latency for a 65k prompt on a 2-bit DeepSeek quant. Gpt-oss-120b should be faster than that, but still very slow.

int_19h 3 days ago | parent [-]

This is Mac Studio M1 Ultra with 128Gb of RAM.

  > llama-bench -m ./gpt-oss-120b-MXFP4-00001-of-00002.gguf -ngl 999 -fa 1 --mmap 0 -p 65536 -b 4096 -ub 4096       
                                                                                             
  | model                          |       size |     params | backend    | threads | n_batch | n_ubatch | fa | mmap |            test |                  t/s |
  | ------------------------------ | ---------: | ---------: | ---------- | ------: | ------: | -------: | -: | ---: | --------------: | -------------------: |
  | gpt-oss 120B MXFP4 MoE         |  59.02 GiB |   116.83 B | Metal,BLAS |      16 |    4096 |     4096 |  1 |    0 |         pp65536 |       392.37 ± 43.91 |
  | gpt-oss 120B MXFP4 MoE         |  59.02 GiB |   116.83 B | Metal,BLAS |      16 |    4096 |     4096 |  1 |    0 |           tg128 |         65.47 ± 0.08 |
  
  build: a0e13dcb (6470)
EnPissant 2 days ago | parent [-]

Thanks. That’s better than I expected. It's only 8.3x worse than a 5090 + CPU: 167s latency.
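
For reference, those latency figures fall straight out of the benchmark throughput (a sketch using the pp65536 numbers above):

  # Prefill latency = prompt tokens / prefill throughput (tokens per second).
  prompt_tokens = 65536
  mac_prefill_tps = 392.37  # M1 Ultra pp65536 result from the table above
  mac_latency = prompt_tokens / mac_prefill_tps

  print(mac_latency)       # ~167 s on the Mac Studio
  print(mac_latency / 20)  # ~8.3x the ~20 s 5090 + CPU prefill figure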

3 days ago | parent | prev [-]
[deleted]
the8472 4 days ago | parent | prev | next [-]

You could try getting a DGX Thor devkit with 128GB unified memory. Cheaper than the 96GB mac studio and more FLOPs.

glitchc 3 days ago | parent [-]

Yeah, but slower memory compared to the M3 Ultra. There's a big difference in memory bandwidth, which seems to be a driving factor for inferencing. Training, on the other hand, is probably a lot faster.

Palomides 4 days ago | parent | prev | next [-]

even a single new mac mini will beat this cluster on any metric, including cost

encom 3 days ago | parent | prev [-]

>Mac

>cost effective

lmao

vlovich123 4 days ago | parent | prev | next [-]

I’d say it’s inconclusive. For traditional compute it wins on power and cost (it’ll always lose on space). For inference, it’s noted that it can’t use the GPU due to llama.cpp’s Vulkan backend, AND that the clustering software in llama.cpp is bad. It’s probably still going to be worse for AI, but it’s inconclusive because that could be due to software immaturity (i.e. not worth it today, but it could be with better software).

tracker1 3 days ago | parent [-]

But will there be a CM6 while you're waiting for the software to improve?

randomNumber7 3 days ago | parent | prev | next [-]

What I think is strange with stuff like this is that you should be able to come to that conclusion without technical knowledge. Just the fact that everyone runs AI on GPUs and NVIDIA's stock has skyrocketed since the AI boom should tell you something.

Did OP really think his fellow humans are so moronic that they just didn't figure out you can plug together a couple of Raspberry Pis?

rustyminnow 3 days ago | parent [-]

Nobody thought an RPi cluster would ever be competitive, and Geerling never expected anybody would. But it's fun to play "what if" and then make the thing just to see how it stacks up, and that's his job. Any implication or suggestion of this being a good idea is just part of the storytelling.

Asraelite 3 days ago | parent | prev | next [-]

> I don’t know if anyone building a Pi cluster actually goes into it thinking it’s going to be a cost effective endeavor, do they?

Some Raspberry Pi products are sold at a loss, so I could see how it's in the realm of possibility.

moduspol 4 days ago | parent | prev | next [-]

Also cost-effective: buying used rack-mount servers from Amazon. They may be out of warranty, but you get a lot more horsepower for your buck, and now your VMs don’t have to be small.

Aurornis 4 days ago | parent | next [-]

Putting a retired datacenter rack mount server in your house is a great way to learn how unbearably loud a real rack mount datacenter server is.

Tsiklon 4 days ago | parent | next [-]

To quote @swiftonsecurity - https://x.com/swiftonsecurity/status/1650223598903382016 ;

> DO NOT TAKE HOME THE FREE 1U SERVER YOU DO NOT WANT THAT ANYWHERE A CLOSET DOOR WILL NOT STOP ITS BANSHEE WAIL TO THE DARK LORD AN UNHOLY CONDUIT TO THE DEPTHS OF INSOMNIA BINDING DARKNESS TO EVEN THE DAY

buildbot 3 days ago | parent [-]

This, 1000%; and some 1Us are extra 666. I had a SPARC T2000 at one point; it was so much louder than a 1U Supermicro. Or whatever was in the Microsoft HW labs; those you could hear from multiple hallways over… There were non-optional earplugs at the doors.

tempest_ 4 days ago | parent | prev | next [-]

Haha, and pricey power-wise.

Currently the cloud providers are dumping second-gen Xeon Scalables, and those things are pigs when it comes to power use.

Sound-wise it's like someone running a hair dryer at full speed all the time, and it can be louder under load.

J_Shelby_J 3 days ago | parent | prev | next [-]

Buy a 3/4u case for $100 and put whatever board you want in it with standard PC fans and a decent cpu cooler. Dead silent.

moduspol 3 days ago | parent | prev | next [-]

True! They aren't quiet. I keep mine in a well-ventilated room that doesn't typically have people in it.

_boffin_ 3 days ago | parent | prev | next [-]

Not true. Have one running in the closet and never hear it.

ComputerGuru 3 days ago | parent | prev [-]

Only if it’s a 1U. 2U units are nearly silent at idle.

Y_Y 4 days ago | parent | prev | next [-]

If you're following this path, make sure to use the finest traditional server rack that money can buy: https://www.ikea.com/ie/en/p/lack-side-table-white-30449908/

allanrbo 4 days ago | parent | prev [-]

No, again, just run VMs on your desktop/laptop. The software doesn't know or care if it's a rack mounted machine.

3 days ago | parent [-]
[deleted]
wccrawford 3 days ago | parent | prev | next [-]

Geerling's titles have been increasingly click-bait for a while now. It's pretty sad, because I like his content, but hate the click-bait BS.

mrguyorama 3 days ago | parent | next [-]

Blame YouTube. They are the ones that run a purposely zero-sum and adversarial system for directing attention to your videos. If he doesn't have a high enough click rate on his videos, YouTube will literally stop showing them to people, even subscribers.

YouTube demonstrably wants clickbait titles and thumbnails. They built tooling to automatically A/B test titles and thumbnails for you.

YouTube could fix this and stop it if they wanted, but that might lose them 1% of business, so they never will.

They love that you blame creators for this market dynamic instead of the people who literally create the market dynamic.

jonathanlydall 3 days ago | parent | prev [-]

If it makes an appreciable difference to how much money he makes on YouTube then I can’t begrudge him for doing it.

Don’t hate the player, hate the game.

geerlingguy 3 days ago | parent [-]

Just to add context — I've been experimenting on my 2nd channel (Level 2 Jeff) with titles that are straight/barebones exactly describing the content of the video, vs a slight bit of clickbait (never untrue, but certainly more intriguing and not describing the exact topic of the video).

The ones that are dead straight with no clickbait are 10/10 (the worst performers), and usually by a massive margin. Even with the same thumbnail.

The sad fact is, if you want your work seen on YouTube, you can't just say "I built a 10 node Raspberry Pi blade cluster and ran HPL and LLMs on it".

Some people are fine with a limited audience. And that's fine too! I don't have to write on my blog at all—I earn negative income from that, since I pay for hosting and a domain, but I hope some people enjoy the content in text form like I do.

asalahli 2 days ago | parent | next [-]

FWIW I like Level 2 Jeff more, and I would watch the videos with or without the clickbait-y titles. As you've said, I've never found your titles deceptive, so if they bring you more money, then more power to you.

pmw 3 days ago | parent | prev [-]

Thanks for the transparency. Much respect to you.

kolbe 3 days ago | parent | prev | next [-]

The author, Jeff Geerling, is a very intelligent person. He has more experience with using niche hardware than almost anyone on earth. If he does something, there's usually a good a priori rationale for it.

buildbot 3 days ago | parent | next [-]

Jeff is a good person/blogger and does interesting projects but more experience with niche hardware than literally anyone is a stretch.

Like what about the people who maintain the Alpha/SPARC/PA-RISC Linux kernels? Or the designers behind, idk, Tilera or Tenstorrent hardware.

geerlingguy 3 days ago | parent | next [-]

I was just at VCF Midwest this past weekend, and I can assure you I am on some of the lower echelons of people who know about niche hardware.

I do get to see and play with a lot of interesting systems, but for most of them, I only get to go just under surface-level. It's a lot different seeing someone who's reverse engineered every aspect of an IBM PC110, or someone who's restored an entire old mainframe that was in storage for years... or the group of people who built an entire functional telephone exchange with equipment spread over 50 years (including a cell network, a billing system, etc.).

phatfish 3 days ago | parent | prev | next [-]

YouTubers have armies of sycophants (check their video comments if you dare). I'm not saying they even court them; it's something to do with video building a stronger parasocial relationship than a text blog, I think.

kolbe 3 days ago | parent | prev [-]

> more experience with niche hardware than literally anyone is a stretch.

This is why I said "almost anyone." If I changed your words, I could disagree with you as well.

AceJohnny2 3 days ago | parent | prev | next [-]

> If he does something, there's usually a good a priori rationale for it.

I greatly respect Jeff's work, but he's a professional YouTuber, so his projects will necessarily lean towards clickbait and riding trends (Jeff, I don't mean this as criticism!) He's been a great advocate for doing interesting things with RasPis, but "interesting" != "rational"

amelius 3 days ago | parent | prev [-]

Is a Pi still considered "niche" hardware?

ww520 4 days ago | parent | prev [-]

Now, imagine a Beowulf cluster of these...