| ▲ | twampss 5 hours ago |
| Is this just llmfit but a web version of it? https://github.com/AlexsJones/llmfit |
|
| ▲ | deanc 5 hours ago | parent | next [-] |
| Yes. But llmfit is far more useful as it detects your system resources. |
| |
| ▲ | Someone1234 an hour ago | parent | next [-] |
| I feel like they both solve different issues well: |
| - If you already HAVE a computer and are looking for models: LLMFit |
| - If you are looking to BUY a computer/hardware and want to compare/contrast for local LLM usage: this site |
| You cannot exactly run LLMFit on hardware you don't have. | |
| ▲ | dgrin91 4 hours ago | parent | prev [-] | | Honestly, I was surprised by this. It accurately got my GPU and specs without asking for any permissions. I didn't realize I was exposing this info. | | |
| ▲ | johnisgood 3 hours ago | parent | next [-] |
| Why were you surprised? You can see how it does that here: https://github.com/AlexsJones/llmfit/blob/main/llmfit-core/s... |
| To detect NVIDIA GPUs, for example: https://github.com/AlexsJones/llmfit/blob/main/llmfit-core/s... |
| In this case it just runs the "nvidia-smi" command. |
| Note: llmfit is not web-based. | |
| ▲ | dekhn 4 hours ago | parent | prev | next [-] | | How could it not? That information is always available to userspace. | | |
| ▲ | bityard 3 hours ago | parent [-] | | "Available to userspace" is a very different thing from "available to every website that wants it, even in private mode". I too was a little surprised by this. My browser (Vivaldi) makes a big deal about how privacy-conscious they are, but apparently browser fingerprinting is not on their radar. | | |
| ▲ | dekhn 3 hours ago | parent | next [-] | | We switched to talking about llmfit in this subthread, it runs as native code. | |
| ▲ | swiftcoder 3 hours ago | parent | prev [-] | | It's pretty hard to avoid GPU fingerprinting if you have WebGL/WebGPU enabled |
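[For reference, this is the classic way a page reads the GPU model through WebGL with no permission prompt: the `WEBGL_debug_renderer_info` extension. A browser-only sketch; it returns `null` outside a browser or when WebGL is unavailable.]

```javascript
// Query the unmasked GPU renderer string via WebGL, if the browser exposes it.
function getGpuRenderer() {
  if (typeof document === "undefined") return null; // not running in a browser
  const gl = document.createElement("canvas").getContext("webgl");
  if (!gl) return null; // WebGL disabled or blocked
  const ext = gl.getExtension("WEBGL_debug_renderer_info");
  return ext ? gl.getParameter(ext.UNMASKED_RENDERER_WEBGL) : null;
}
```

This is why WebGL-permission prompts (as in LibreWolf, mentioned below in the thread) are an effective fingerprinting defense.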
|
| |
| ▲ | spudlyo 3 hours ago | parent | prev | next [-] | | I run LibreWolf, which is configured to ask me before a site can use WebGL, which is commonly used for fingerprinting. I got the popup on this site, so I assume that's how they're doing it. | |
| ▲ | rithdmc 4 hours ago | parent | prev [-] | | Do you mean the OP's website? Mine's way off. > Estimates based on browser APIs. Actual specs may vary |
|
|
|
| ▲ | rootusrootus 3 hours ago | parent | prev [-] |
| That's super handy, thanks for sharing the link. Way more useful than the website this post is about, to be honest. It looks like I can run more local LLMs than I thought, so I'll have to give some of those a try. I have decent memory (96GB), but my M2 Max MBP is a few years old now and I figured it would be getting inadequate for the latest models. But llmfit thinks it's a really good fit for the vast majority of them. Interesting! |
| |
| ▲ | hrmtst93837 an hour ago | parent [-] | | Your hardware can run a good range of local models, but keep an eye on quantization since 4-bit models trade off some accuracy, especially with longer context or tougher tasks. Thermal throttling is also an issue, since even Apple silicon can slow down when all cores are pushed for a while, so sustained performance might not match benchmark numbers. |
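[A back-of-envelope check of why 96 GB handles most quantized models. The 10% overhead factor is a rough assumption for KV cache and activations, not a measured figure.]

```javascript
// Rough VRAM/unified-memory estimate for a quantized model:
// bytes ≈ params × (bits / 8), plus ~10% overhead (assumed) for cache/activations.
function estimateVramGiB(paramsBillions, bits) {
  const weightBytes = paramsBillions * 1e9 * (bits / 8);
  return (weightBytes * 1.1) / 2 ** 30;
}

// A 70B model at 4-bit lands around 36 GiB, comfortably inside 96 GB.
```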
|