RulerOf 9 days ago

> I've never heard of anyone else having done anything like this. This surprises me! But, surely, if someone else did it, someone would have told me about it? If you know of another, please let me know!

I never had the tenacity to consider my build "finished," and definitely didn't have your budget, but I built a 5-player room[1] for DotA 2 back in 2013.

I got really lucky with hardware selection and ended up fighting with various bugs over the years... diagnosing a broken video card was an exercise in frustration because the virtualization layer made BSODs impossible to see.

I went with local disk-per-VM because latency matters more than throughput, and I'd been doing iSCSI boot for such a long time that I was intimately familiar with the downsides.

I love your setup (thanks for taking the time to share this BTW) and would love to know if you ever get the local CoW working.

My only tech-related comment is that I can also confirm those 10G cards are indeed trash, and would humbly suggest an Intel-based eBay special. You could still load iPXE (I assume you're using it) from the onboard NIC, keep using it for WoL, but shift the netboot over to the add-in card via a script, and probably get better stability and performance.

[1]: https://imgur.com/a/4x4-four-desktops-one-system-kWyH4
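As an aside, the WoL half of a setup like that is easy to drive from anywhere on the LAN. A minimal Python sketch of the magic-packet format (the MAC address and broadcast target here are placeholders):

```python
import socket

def magic_packet(mac: str) -> bytes:
    """A Wake-on-LAN magic packet is 6 bytes of 0xFF followed by the
    target MAC address repeated 16 times (102 bytes total)."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("expected a 6-byte MAC address")
    return b"\xff" * 6 + mac_bytes * 16

def send_wol(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Broadcast the packet over UDP; port 9 (discard) is the usual choice."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(magic_packet(mac), (broadcast, port))
```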

kentonv 9 days ago

Hah, you really did the VM thing? A lot of people have suggested that to me but I didn't think it'd actually work. Pretty cool!

Yeah I'm pretty sure my onboard 10G Marvell AQtion ethernet is the source of most of my stability woes. About half the time any of these machines boot up, Windows bluescreens within the first couple minutes, and I think it has something to do with the iSCSI service crashing. Never had trouble in the old house where the machines had 1G network -- but load times were painful.

Luckily if the machines don't crash in the first couple minutes, then they settle down and work fine...

Yeah I could get higher-quality 10G cards and put them in all the machines but they seem expensive...

ThatPlayer 8 days ago

I've done a multi-seat gaming VM back in the day too. I don't think I'd want to do it again. Assigning hotplug USB devices was a pain: I mostly wanted unique USB devices per seat so I could easily tell which device was which. Nowadays, though, I would probably use a thin-client Raspberry Pi running Moonlight to do it cheaply.

I think another issue is the limited number of PCI-E lanes now that HEDT is dead. I picked up a 5930K for my build at the time for its 40 PCI-E lanes, but now consumer CPUs basically max out at 20-24 lanes.

Also, with the best gaming CPUs nowadays being AMD's X3D series because of the additional L3 cache, I wonder about the performance hit from two different VMs fighting over the cache. Maybe the rumored 9950X3D will have two 3D caches and you'd be able to pin each VM to its own cores/cache. The 7950X3D had 3D cache on only half of its cores, so games generally performed better pinned to just those cores.
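For what it's worth, that kind of pinning is easy to script on a Linux host; a sketch using `os.sched_setaffinity` (which core IDs sit on the V-Cache die varies by part, so the range in the comment is illustrative; check `lscpu -e` on your machine):

```python
import os

def pin_to_cores(pid: int, cores: set[int]) -> set[int]:
    """Restrict `pid` (0 = the calling process) to the given CPU cores,
    returning the affinity mask the kernel actually applied."""
    os.sched_setaffinity(pid, cores)
    return os.sched_getaffinity(pid)

# Hypothetical usage: if the V-Cache CCD were cores 0-7, pinning a VM's
# QEMU process there would keep it on the big L3:
#   pin_to_cores(qemu_pid, set(range(8)))
```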

So with only 2-3 VMs per PC, and a GPU still needed for each VM (which is the most expensive part anyway), I'd pay a bit more to do it without VMs. The only way I'd be interested in multi-seat VM gaming again would be if I could use GPU virtualization: splitting a single GPU across many VMs. But like you say in the article, that's usually been limited to enterprise hardware. And even then it'd be interesting only for the flexibility of being able to run one high-end GPU for when I'm not having a party.

amluto 8 days ago

If you’re on an Intel chip that supports “Resource Director,” you can assign most of your cache to a VM. I have no idea whether AMD can do this. I’ve also never done it, and I don’t know how well KVM supports it.

amluto 8 days ago

Just buy used 10G hardware from an HFT firm :). Seriously, though, 10G gear is cheap these days.

I bet one could put an unreasonable amount of effort into convincing an Nvidia Bluefield card to pretend to be a disk well enough to get Windows to mount it. I imagine that AWS is doing something along those lines too, but with more cheap chips and less Nvidia markup…

There has got to be a way to convince Windows to do an overlay block device that involves magic words like “thin provisioning”. But two seconds of searching didn't find it. Every self-respecting OS (Linux, FreeBSD, etc.) has had this capability for decades, of course. Amusingly, AFAICT, major clouds also mostly lack this capability: the obvious solution in AWS (booting everything off an AMI) performs notoriously poorly.
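To make the Linux-side semantics concrete: a qcow2 overlay or dm-thin target gives each machine a private copy-on-write view over a shared read-only base image. A toy in-memory model of that behavior (illustrative only; real implementations work at the block layer and persist the dirty blocks):

```python
class CowOverlay:
    """Toy copy-on-write overlay: reads fall through to a shared
    read-only base image unless the block has been written locally."""

    def __init__(self, base: bytes, block_size: int = 4096):
        self.base = base
        self.block_size = block_size
        self.dirty: dict[int, bytes] = {}  # block index -> local copy

    def read_block(self, idx: int) -> bytes:
        if idx in self.dirty:
            return self.dirty[idx]
        start = idx * self.block_size
        return self.base[start:start + self.block_size]

    def write_block(self, idx: int, data: bytes) -> None:
        if len(data) != self.block_size:
            raise ValueError("whole blocks only")
        self.dirty[idx] = data  # the base image is never modified
```

Every VM sharing one golden base image and keeping only its own dirty blocks is exactly what qcow2 backing files and dm-snapshot provide.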

tinco 9 days ago

It's been a couple of years, but when I built the in-office render farm for my previous company I also went with motherboards with built-in 10G, because they needed four GPUs and there were simply no PCIe slots left. There were so many connectivity issues, but they were eventually solved when we replaced the switches. When I first built the farm only one brand sold cheap 10Gbit Ethernet switches; a couple of years later Ubiquiti finally started making them as well, and I think all of the semi-pro brands sell 10Gbit switches now. Since we swapped to Ubiquiti switches we've had no more connectivity issues, not even with the cheap 10G interfaces.

The good Intel 10G cards were not expensive at all, by the way. I bought them for later additions, and they were cheaper than the premium we paid for the gamer motherboards that included the 10G cards you were unhappy about too.

toast0 9 days ago

> Yeah I could get higher-quality 10G cards and put them in all the machines but they seem expensive...

Bulk buying is probably hard, but ex-enterprise Intel 10G cards on eBay tend to be pretty inexpensive. Dual SFP+ X520 cards are regularly available for $10. Dual 10GBASE-T X540 cards run a bit more, with more variance, $15-$25. No 2.5/5Gb support, but my 10G network equipment can't do those speeds either, so no big deal. These are almost all x8 cards, so you need a slot that can accommodate them, but x4 electrical should be fine. (I've seen reports of some enterprise gear having trouble in x1/x4 slots beyond the obvious bandwidth restriction, which shouldn't be a problem here; if a dual-port card wants x8 and you only have x4 and use a single port, that should be fine.)

I think all of mine can PXE boot, but sometimes you have to fiddle with the EEPROM tools, and they might be legacy-only (no UEFI PXE), but that's fine for me.

And you usually have to be OK with running them with no bracket, because they usually come with low-profile brackets only.

vueko 9 days ago

+1 ebay x520 cards. My entire 10g sfp+ home network runs on a bunch of x520s, fs.com DACs/AOCs, Mikrotik switches, and an old desktop running FreeBSD with a few x520s in it as the core router. Very very cheap to assemble and has been absolutely bulletproof. IME at this point in time the ixgbe driver is extremely stable.

x520s with full-height brackets do exist (I have a box full of them), but you may pay like $3-5/ea more than the more common lo-pro bracket ones. If you're willing to pop the bracket off, you can also find full-height brackets standalone and install your own.

Also, in general: in my experience avoiding 10gbe rj45 is very worthwhile. More expensive, more power consumption, more heat generation. If you can stick a sfp+ card in something, do it. IMO 10gbe rj45 is only worthwhile when you've got a device that supports it but can't easily take a pcie nic, like some intel NUCs.

toast0 9 days ago

sfp+ is clearly cheaper, and less heat/power, but I've got cat5e in the walls and between my house and detached garage, so I've got to use 10g-baseT to get between the garage and the house, and up to my office from the basement. At my two network closet areas, I use sfp+ for servers.

I think my muni fiber install happening this week might have a 10G-baseT handoff, and I've got a port for that open on my switch in the garage. If that works out, that will be neat, but I'll need to upgrade some more stuff to make full use of that.

vueko 9 days ago

Oh true, good point: being wired for Ethernet is another valid use case. I'm lucky in that my ONT is just a commodity Nokia switch, so I can slap any SFP+ transceiver I want into the appropriate port for the connection to the router; in my case 10GbE over RJ45 is truly banishable to the devices I can't get a PCIe card into. I'm still in the phase of masking-taping cables to the ceiling instead of doing real wall pulls, but when I do get around to that I feel like I'll pick up an AliExpress fiber splicer and pull single-mode fiber, to future-proof it, make sure I never have to deal with pulls again, and not be stuck on an old Ethernet standard in the magical future where I can get a 100Gbit WAN link.

tarasglek 8 days ago

I am not a gamer, but I found that https://moonlight-stream.org/ streaming latency from my server to my MBP is lower than that of my projector directly connected to said server. It might be easier to just get one beefy server with GPU passthrough than fight 10GbE drivers on 10 machines. AMD cards seem to work amazingly well for passthrough.

kridsdale3 9 days ago

I'm building out a 10G LAN in my house (8k VR video files are ludicrously enormous) and while it's mostly Mac, where I use Thunderbolt to SFP fiber adapters, for my Windows PC I'm looking around at what PCI options to get, and haven't pulled the trigger.

If you make a decision on a 10G card (SFP or ethernet) I'd like to hear what you picked.

murderfs 9 days ago

You can get pretty cheap 10GBASE-T NICs on ebay. I've had pretty good success with this abomination, a server-pull NIC with an HP proprietary physical interface plugged into an adapter to PCI-E: https://www.ebay.com/itm/144151881516

toast0 7 days ago

That's a pretty weird contraption. If you want weird, I'd suggest one of these https://www.ebay.com/itm/166884585625

It's a Silicom PE210G2BPI40-T-SD-BC7, an Intel X540-based bypass NIC, in case you want the two ports to connect together when the computer is off or something. Setup time is a bit more, but you can also configure them to act like normal NICs.

Usually show up around $15-25 like other x540 dual rj45 cards, but sometimes a bit less, cause they're weird.

timc3 8 days ago

If it's SFP+, go Intel; they seem to be good about going into power-saving states. My Mellanox cards don't.

10GBASE-T Ethernet is harder to pick: a lot of those cards run incredibly hot, particularly the ones that expect server-style cooling. I've heard bad things about all of them.

Also heard that Windows has a hard time reaching 10G anyway.

toast0 7 days ago

> Also heard that Windows has a hard time reaching 10G anyway.

It really shouldn't. Microsoft invented or popularized Receive Side Scaling [1], which helps get things lined up for high throughput; but applications probably need to do a bit of work too.

[1] https://learn.microsoft.com/en-us/windows-hardware/drivers/n...
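On the application side, that "bit of work" usually means spreading flows across several sockets or threads. This isn't Windows RSS itself, but a sketch of the analogous userspace pattern on Linux, where `SO_REUSEPORT` lets multiple worker sockets share one port and the kernel hashes incoming flows across them:

```python
import socket

def make_worker_socket(port: int) -> socket.socket:
    """Create a UDP socket that shares `port` with other workers; the
    kernel load-balances incoming flows across all sockets bound this
    way (Linux >= 3.9)."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
    s.bind(("127.0.0.1", port))
    return s
```

The first worker can bind port 0 to let the kernel pick a free port; later workers then pass that same port number.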

justmarc 8 days ago

You can get used ones super cheap on ebay. The same applies to RAM, CPUs and other parts.

No need to buy new for most computing equipment unless you're looking for the absolute latest and greatest.

murderfs 9 days ago

Yeah, gaming in a VM is fairly easy and reliable nowadays (the keyword to google for is VFIO). The cost savings from consolidating multiple machines into one bigger machine are pretty substantial. Unfortunately, an increasing number of games ship anticheat that looks for signs of running inside a VM.

> onboard 10G Marvell AQtion ethernet

I had similar problems with an Aquantia 10GbE NIC (which AQtion appears to be the rebranded name for, post-acquisition by Marvell), and it turned out to be the network chip overheating because it was poorly thermally bonded to a VRM heatsink that defaulted to turning on at something like 90C. Adding a thicker thermal pad and setting the VRM fan to always be on at 30% solved my problems.

kentonv 9 days ago

Interesting! I sure hope that's not my problem because I uhhh really don't want to open up 20 machines to try to fix that.

I think it probably isn't the same problem, though, because I only have stability issues at initial startup. If it boots and doesn't BSOD in the first five minutes then it's fine... even through heavy network and disk use (like installing updates).

jmb99 9 days ago

> Hah, you really did the VM thing? A lot of people have suggested that to me but I didn't think it'd actually work. Pretty cool!

Another data point that it is indeed possible. I had a dual Xeon E5-2690 v2 setup with two RX 580 8GB cards passed through to separate VMs, and with memory and CPU pinning it was a surprisingly resilient setup. 150+ FPS in CSGO with decent 1% lows (like 120 if I remember correctly?) which was fine since I only had 60Hz monitors. I have a Threadripper workstation now, I should test out to see what kind of performance I can get out of that for VM gaming...

> Yeah I could get higher-quality 10G cards and put them in all the machines but they seem expensive...

I have had very good luck with Intel X540 cards. $20-40 on eBay, and there are hundreds (if not thousands) available. They're plug-and-play on any modern Linux, but need an Intel driver on Windows, if I remember correctly. I've never had one die, and I've never experienced a crash or network dropout in the 9 years I've been running them. The Marvell chipset just seems terrible, unfortunately - I've had problems with it on multiple different cards and motherboards on every OS under the sun.