| ▲ | justrunitlocal 8 hours ago |
We've been running our 10-dev org on 8 H100s with open models (with some tweaks). Sure, they aren't as good as the big providers, but they 1. don't go down, 2. have pretty damn high tok/s. It pays for itself. Posting with a fresh account because I'm not supposed to share these details, for obvious reasons. If you want help setting this up, just reply with a way to reach you.
|
| ▲ | kgeist 5 hours ago | parent | next [-] |
We're planning to do the same thing: buy something like 8x H100 and run all coding there. The CTO has almost agreed to find the budget for it, but I need to make sure there are no risks before we buy (i.e. that it's a viable/usable setup for professional AI-assisted coding). Can you share which models you run and find best-performing for this setup? That would help a lot. I already run a smaller AI server in the office, but only 32B models fit there. I already have experience optimizing inference; I'm just interested in which models you think are great on 8x H100 for coding, and I'll figure out the details of how to fit them :)
| |
| ▲ | dools 19 minutes ago | parent | next [-] | | Check out Verda; you can rent whatever super-powerful GPU clusters you need in 10-minute increments. Deploy any open-weight model using SGLang and away you go. | |
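For reference, serving an open-weight model with SGLang across a multi-GPU node is roughly a one-liner. This is a hedged sketch, not a definitive launch config: the model name is an arbitrary example, and flag names can vary by SGLang version, so check the docs for yours.

```shell
# Hypothetical launch: serve an open-weight coder model over 8 GPUs
# with tensor parallelism. Model path and flags are illustrative.
python -m sglang.launch_server \
  --model-path Qwen/Qwen2.5-Coder-32B-Instruct \
  --tp 8 \
  --port 30000
```

Once up, the server exposes an OpenAI-compatible endpoint, so most coding tools can point at it by swapping the base URL.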
| ▲ | htrp 38 minutes ago | parent | prev | next [-] | | 8x H100 80GB doesn't give you enough memory to run the latest 1T+ parameter models (especially at the context window lengths needed to be competitive with the frontier models). | | |
| ▲ | dools 16 minutes ago | parent [-] | | Verda has B300 clusters, 8 for USD $55/hour in 10 minute billing blocks |
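The memory math behind the 1T+ claim is easy to sketch. A back-of-envelope check, assuming ~1 byte/parameter at fp8 and ~0.5 at 4-bit (rough figures, not vendor specs; real serving also needs headroom for KV cache, activations, and runtime overhead):

```python
# Back-of-envelope VRAM check for serving very large models on 8x H100 80GB.
# Bytes-per-parameter values are rough assumptions for fp8 / 4-bit quant.

TOTAL_VRAM_GB = 8 * 80  # 640 GB across the node

def weight_memory_gb(params_billions: float, bytes_per_param: float) -> float:
    """GB needed just to hold the weights (1 GB = 1e9 bytes)."""
    return params_billions * bytes_per_param

for name, params_b, bytes_pp in [
    ("1T model @ fp8 ", 1000, 1.0),
    ("1T model @ 4bit", 1000, 0.5),
    ("70B model @ fp8", 70, 1.0),
]:
    need = weight_memory_gb(params_b, bytes_pp)
    headroom = TOTAL_VRAM_GB - need
    print(f"{name}: {need:>5.0f} GB weights, {headroom:+.0f} GB left for KV cache")
```

At fp8 a 1T model is 360 GB over budget; at 4-bit the weights squeeze in but leave only ~140 GB for KV cache across all concurrent requests, which is exactly where long context windows hurt.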
| |
| ▲ | Havoc 2 hours ago | parent | prev [-] | | Deepseek, GLM, Minimax or Kimi are the most likely contenders. | | |
| ▲ | dools 14 minutes ago | parent [-] | | I’ve been using kimi 2.5/2.6 for the past 2 weeks and it’s really not far off the OpenAI and Claude models. I am a coder, so it’s not all vibes, but I am definitely more in “spec to code” mode than “edit this file for me” mode, and it copes just fine. It needs a bit more supervision than the frontier models, but it’s also significantly cheaper. If I were Anthropic I’d be shitting myself; their prices are going to 10x over the next 2 years.
|
|
|
| ▲ | ok_dad 7 hours ago | parent | prev | next [-] |
yea, just buy $300k worth of hardware and Bob's your uncle
| |
| ▲ | justrunitlocal 7 hours ago | parent | next [-] | | It was pretty hard to justify the purchase to the board, but we got a decent deal from a nearby data center (~15% discount). Thankfully it's a fixed cost, it's an asset we can use for our taxes, and it will survive for years to come. The only thing we have to work on is maintenance, as well as looking into some renewable energy options. We're also looking into how to do some secure cost sharing with this, so that all people need to pay for is what it costs for us to run everything! We're just planning on reserving at least 51% of the capacity for us and offering the rest to everyone else. | | |
| ▲ | ok_dad 7 hours ago | parent [-] | | Sorry, didn't mean to be dismissive, I was just being a dickhead needlessly. I actually respect this a ton, good work. | | |
| ▲ | justrunitlocal 7 hours ago | parent [-] | | It's fine! There's no world where individuals can buy this kind of stuff. Our company is too small to do it, but I'd love for there to be a public utility of sorts for being able to use LLMs. It is absurd that only these >$1T companies are allowed to run this. I also find it dangerous for society to have so much power and wealth concentrated there too. Anyway, this is the internet and skepticism is warranted :D. | | |
| ▲ | ok_dad 7 hours ago | parent [-] | | Yea, I actually looked into a similar thing myself recently. I was looking at how we could replace Cursor, and I found that for ~10 people we'd need a half-dozen H100s or something on that scale, which would cost ~$1500 per developer or so to build and maintain on cloud infra, and buying it outright would cost roughly 3 developers' yearly salaries (this aligns with your numbers). We don't use that much inference, so we decided paying Cursor ~$200-300 per dev per month is better for now, but in the future we might regret that when prices normalize. However, we also don't use cloud agents or independent agents; we basically use AI as a pair programmer, so if we had to drop AI coding assistants completely, our process wouldn't break too badly. I wish I could task my 3080 gaming card with some inference, but I can only fit ~10B models on there, so it's kinda worthless unless it's something a small model can do. | | |
| ▲ | zozbot234 6 hours ago | parent [-] | | The best deal is arguably to buy as much on-prem inference capacity as you can reasonably expect to use by running the hardware around the clock, even at slower throughput, and use 3rd-party inference for things that are genuinely latency-sensitive. I just don't see how this resolves to needing a half-dozen H100s; surely you're not using that much compute? You don't need to place your entire model on GPU; engines for on-prem inference generally support CPU/RAM-based offload. |
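The buy-vs-subscribe trade-off in this subthread reduces to a one-line break-even formula. All the inputs below are hypothetical placeholders loosely drawn from the thread's ballparks (~$250/dev/month seats, a ~$300k cluster), not real quotes:

```python
# Rough break-even sketch for owning hardware vs paying per-seat
# subscriptions. Every number here is an illustrative assumption.

def breakeven_months(hardware_cost: float, monthly_opex: float,
                     per_dev_monthly: float, devs: int) -> float:
    """Months until owned hardware beats per-seat subscription spend."""
    monthly_subscription = per_dev_monthly * devs
    saved_per_month = monthly_subscription - monthly_opex
    if saved_per_month <= 0:
        return float("inf")  # subscriptions stay cheaper indefinitely
    return hardware_cost / saved_per_month

# 10 devs at $250/month vs a $300k cluster with $1k/month power+maintenance
months = breakeven_months(300_000, 1_000, 250, 10)
print(f"break-even after ~{months:.0f} months")
```

With light usage and flat seat pricing the break-even lands over a decade out, which matches the "just pay Cursor" conclusion above; metered API billing for heavy agentic workloads shifts the math sharply toward owning.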
|
|
|
| |
| ▲ | mumbisChungo 6 hours ago | parent | prev [-] | | One dev's salary to give a 10-person team unlimited, approximately free agentic coding for the foreseeable future, plus privacy. | |
|
|
| ▲ | johndough 6 hours ago | parent | prev | next [-] |
> Sure they aren't as good as the big providers If you haven't done so already, finetune the model on all of your company's code that you can get your hands on. This is one of the great advantages you get when running local models. I like the style of the generated code much better now, I have to rewrite much less, and my prompts can be shorter too. But maybe these are already the "tweaks" you mentioned.
| |
| ▲ | GenerWork 6 hours ago | parent [-] | | How would they do that? Would it be as easy as telling a model, "Hey, review all this code, identify patterns, and then write in this style going forward"? Sorry if this is a stupid question; I've never finetuned or trained an LLM. | | |
| ▲ | Havoc 2 hours ago | parent [-] | | Unsloth has consumer-accessible tooling for fine-tuning models
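Before any Unsloth-style fine-tune, you need training examples built from your own codebase. A naive stdlib-only sketch of that step (the file extensions, the "complete the rest of the file" prompt/completion split, and the size cutoff are all arbitrary choices; real pipelines tend to use fill-in-the-middle or commit-diff pairs):

```python
# Minimal sketch: turn a repo's source files into a JSONL dataset
# suitable as a starting point for fine-tuning on company code.
import json
from pathlib import Path

def repo_to_jsonl(repo_dir: str, out_path: str,
                  exts=(".py", ".ts", ".go"), split_ratio=0.5) -> int:
    """Write one prompt/completion example per source file; return count."""
    count = 0
    with open(out_path, "w", encoding="utf-8") as out:
        for path in Path(repo_dir).rglob("*"):
            if path.suffix not in exts or not path.is_file():
                continue
            text = path.read_text(encoding="utf-8", errors="ignore")
            if len(text) < 200:  # skip trivial files
                continue
            cut = int(len(text) * split_ratio)  # naive halfway split
            out.write(json.dumps({
                "prompt": text[:cut],
                "completion": text[cut:],
            }) + "\n")
            count += 1
    return count
```

The resulting JSONL can then be loaded by most fine-tuning frameworks; the interesting work is in curating which files and diffs actually represent the style you want the model to learn.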
|
|
|
| ▲ | 2ndorderthought 7 hours ago | parent | prev [-] |
This is the actual answer. Man, I hope to find a company like yours sometime soon. I'm sick of all the issues that come with 3rd-party IP generation.