| ▲ | duggan 3 days ago |
Search “centre a div” in Google. Wade through ads. Skim a treatise on the history of centering content. Skim past the “this question is off topic / duplicate” noise on Stack Overflow. Find some code on the page. Try to map how that code will work in the context of your other layout. Realize it’s plain CSS and you’re looking for Tailwind. Keep searching. Try some stuff until it works.

Or… ask an LLM. Wait 20-30 seconds. Move on to the next thing.
|
| ▲ | duskdozer 12 hours ago | parent | next [-] |
Half the reason search engines are so miserable to use these days is that they've been laden with so much low-quality LLM-generated content.
|
| ▲ | SchemaLoad 3 days ago | parent | prev | next [-] |
| The middle step is asking an LLM how it's done and making the change yourself. You skip the web junk and learn how it's done for next time. |
| |
| ▲ | duggan 3 days ago | parent [-] | | Yep, that’s not a bad approach either. I did that a lot initially; it’s really only with the advent of Claude Code integrated into VS Code that I’m learning the way I would from a code review. It also depends on the project. Work code gets a lot more scrutiny than side projects, for example. |
|
|
| ▲ | Izkata 3 days ago | parent | prev | next [-] |
> Search “centre a div” in Google

Aaand done. Very first result was a blog post showing all the different ways to do it, old and new, without any preamble.
|
| ▲ | stephenr 3 days ago | parent | prev | next [-] |
Or, given that OP is presumably a developer who just doesn't focus fully on front-end code, they could skip straight to checking MDN for "center div" and get a How To article (https://developer.mozilla.org/en-US/docs/Web/CSS/How_to/Layo...) as the first result, without relying on spicy autocomplete.

Given how often people acknowledge that AI slop needs to be verified, it seems like a shitty way to achieve something like this vs just checking it yourself against well-known, good reference material.
| |
| ▲ | duggan 3 days ago | parent [-] | | LLMs work very well for a variety of software tasks — we have lots of experience around the industry now. If you haven’t been convinced by pure argument in 2026 then you probably won’t be. But the great thing is you don’t have to take anyone’s word for it. This isn’t crypto, where everyone using it has a stake in its success.
You can just try it, or not. | | |
| ▲ | stephenr 3 days ago | parent [-] | | That's a lot of words to say "trust me bruh" which is kind of poetic given that's the entire model (no pun intended) that LLMs work on. | | |
| ▲ | duggan 3 days ago | parent [-] | | Hardly. Just pointing out that water is wet, from my perspective. But there is an interesting looking-glass effect at play, where the truth seems obvious and opposite on either side. |
|
|
|
|
| ▲ | bitwize 3 days ago | parent | prev [-] |
Wait till the VC tap gets shut off.

You: Hey ChatGPT, help me center a div.

ChatGPT: Certainly, I'd be glad to help! But first you must drink a verification can to proceed.

Or:

ChatGPT: I'm sorry, you appear to be asking a development-related question, which your current plan does not support. Would you like me to enable "Dev Mode" for an additional $200/month? Drink a verification can to accept charges.
| |
| ▲ | lenkite 3 days ago | parent | next [-] | | Seriously, they have got their HOOKS into these Vibe Coders and AI Artists who will pony up $1000/month for their fix. | | |
| ▲ | bonesss 3 days ago | parent [-] | | A little hypothesis: a lot of .NET and Java stuff is mainlined from a giant megacorp straight to developers through a curated apparatus of certifications, MVPs, blogging, and conference circuits designed to create unquestioned, corporate-friendly, highly profitable dogma. You say ‘website’ and by the letter ‘b’ they’re having a Pavlovian response (“Azure-hosted SharePoint, data lake, MSSQL, user directory, analytics, Power BI, and…”).

Microsoft’s dedication to infusing OpenAI tech into everything seems like a play to cut even those tepid brains out of the loop and capture the vehicles of planning and production. Training your workforce to be dependent on third-party thinking, planning, and advice is an interesting strategy. |
| |
| ▲ | llmslave2 3 days ago | parent | prev | next [-] | | Calling it now: AI withdrawal will become a documented disorder. | | |
| ▲ | duskdozer 12 hours ago | parent | next [-] | | https://en.wikipedia.org/wiki/Chatbot_psychosis | |
| ▲ | LinXitoW 3 days ago | parent | prev | next [-] | | We already had that happen. When GPT-5 was released, it was much less sycophantic. All the sad people with AI girlfriends/boyfriends threw a giant fit because OpenAI "murdered" the "soul" of their "partner". That's why 4o is still available as a legacy model. | |
| ▲ | freedomben 3 days ago | parent | prev [-] | | I can absolutely see that happening. It's already kind of happened to me a couple of times when I found myself offline and was still trying to work on my local app. Like any addiction, I expect it to cost me some money in the future. |
| |
| ▲ | duskdozer 12 hours ago | parent | prev | next [-] | | Definitely. Right now I can access and use them for free without significant annoyance. I'm a canary for enshittification; I'm curious what it's going to look like. | |
| ▲ | jckahn 3 days ago | parent | prev | next [-] | | Alternatively, just use a local model with zero restrictions. | | |
| ▲ | alwillis 3 days ago | parent | next [-] | | The next best thing is to use the leading open-source / open-weights models for free, or for pennies, on OpenRouter [1] or Hugging Face [2]. See [3] for an article about the best open-weights models, including Qwen and Kimi K2.

[1]: https://openrouter.ai/models
[2]: https://huggingface.co
[3]: https://simonwillison.net/2025/Jul/30/ | |
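OpenRouter exposes an OpenAI-compatible endpoint, so trying one of these open-weights models from Python is only a few lines. A minimal sketch; the model slug and the environment variable holding the key are placeholders, not recommendations:

    import os
    from openai import OpenAI

    # Point the standard OpenAI client at OpenRouter's OpenAI-compatible API.
    client = OpenAI(
        base_url="https://openrouter.ai/api/v1",
        api_key=os.environ["OPENROUTER_API_KEY"],  # assumed env var holding your key
    )

    response = client.chat.completions.create(
        model="qwen/qwen3-coder",  # illustrative slug; see openrouter.ai/models for current IDs
        messages=[{"role": "user", "content": "How do I center a div with CSS grid?"}],
    )
    print(response.choices[0].message.content)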
| ▲ | baq 3 days ago | parent | prev | next [-] | | This is currently negative expected value over the lifetime of any hardware you can buy today at a reasonable price (basically a monster Mac, or several), at least until Apple folds and raises prices due to RAM shortages. | |
| ▲ | master_crab 3 days ago | parent | prev [-] | | This requires hardware in the tens of thousands of dollars (if we want the tokens spit out at a reasonable pace). Maybe in 3-5 years this will work on consumer hardware at speed, but not in the immediate term. | | |
| ▲ | vntok 3 days ago | parent [-] | | $2000 will get you 30~50 tokens/s at perfectly usable quantization levels (Q4-Q5) from any of the top 5 open-weights MoE models. That's not half bad, and it will only get better! | |
| ▲ | master_crab 3 days ago | parent | next [-] | | That's true if you're running lightweight models like DeepSeek 32B, but anything bigger and it'll drop. Also, costs for RAM and AI-adjacent hardware have risen a lot in the last month. It's definitely not $2k for the rig needed for 50 tokens a second. | |
| ▲ | threeducks 3 days ago | parent | prev | next [-] | | Could you explain how? I can't seem to figure it out.

DeepSeek-V3.2-Exp has 37B active parameters; GLM-4.7 and Kimi K2 have 32B active parameters. Let's say we're dealing with Q4_K_S quantization at roughly half the size: we still need to move 16 GB 30 times per second, which requires a memory bandwidth of 480 GB/s, or maybe half that if speculative decoding works really well.

Anything GPU-based won't work at that speed, because PCIe 5 provides only 64 GB/s and $2000 cannot buy enough VRAM (~256 GB) for a full model. That leaves CPU-based systems with high memory bandwidth. DDR5 would work (somewhere around 300 GB/s with 8x 4800 MHz modules), but that would cost about twice as much for just the RAM alone, disregarding the rest of the system. Can you get enough memory bandwidth out of DDR4 somehow? | |
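A quick sanity check of that arithmetic, using the comment's own assumptions (~32B active parameters, roughly half a byte per weight at Q4-level quantization, 30 tokens/s):

    # Back-of-the-envelope bandwidth estimate; all figures are the assumptions above.
    active_params = 32e9     # active parameters read per generated token (MoE)
    bytes_per_param = 0.5    # ~Q4_K_S quantization, roughly half a byte per weight
    target_tps = 30          # desired tokens per second

    bytes_per_token = active_params * bytes_per_param   # ~16 GB of weights per token
    bandwidth_needed = bytes_per_token * target_tps     # bytes per second

    print(f"{bytes_per_token / 1e9:.0f} GB per token")            # 16 GB
    print(f"{bandwidth_needed / 1e9:.0f} GB/s bandwidth needed")  # 480 GB/s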
| ▲ | int_19h 3 days ago | parent | prev [-] | | That doesn't sound realistic to me. What is your breakdown on the hardware and the "top 5 best models" for this calculation? | | |
| ▲ | vntok 4 hours ago | parent [-] | | Look up AMD Strix Halo mini-PCs such as GMKtec's EVO-X2. I got the one with 128GB of unified RAM (~100GB VRAM) last year for 1900€ excl. VAT; it runs like a beast, especially for SOTA/near-SOTA MoE models. |
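For anyone who wants to try this kind of setup, llama.cpp's Python bindings are enough for a first test. A rough sketch, assuming you already have a Q4/Q5 GGUF that fits in the unified memory; the file path below is a placeholder:

    from llama_cpp import Llama

    # Load a quantized GGUF and offload all layers; on a unified-memory box
    # (Strix Halo, Apple Silicon) the whole model sits in shared RAM.
    llm = Llama(
        model_path="models/some-moe-model-Q4_K_M.gguf",  # placeholder path
        n_gpu_layers=-1,   # offload every layer
        n_ctx=8192,        # context window; raise it if memory allows
    )

    out = llm.create_chat_completion(
        messages=[{"role": "user", "content": "How do I center a div with Tailwind?"}],
        max_tokens=256,
    )
    print(out["choices"][0]["message"]["content"])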
|
|
|
| |
| ▲ | fragmede 3 days ago | parent | prev | next [-] | | Just you wait until the powers that be take cars away from us! What absolute FOOLS you all are to shape your lives around something that could be taken away from us at any time! How are you going to get to work when gas stations magically disappear off the face of the planet? I ride a horse to work, and y'all are idiots for developing a dependency on cars. Next thing you're gonna tell me is we're going to go to war for oil to protect your way of life. Come on! | | |
| ▲ | stephenr 3 days ago | parent | next [-] | | The reliance on SaaS LLMs makes the comparison more like owning a horse vs using a car on a monthly subscription plan. | |
| ▲ | prathamtharwani 2 days ago | parent | prev | next [-] | | This is a poor analogy. Cars (mostly) don't require a subscription. | |
| ▲ | llmslave2 3 days ago | parent | prev | next [-] | | Can't believe this car bubble has lasted so long. It's gonna pop any decade now! | |
| ▲ | LinXitoW 3 days ago | parent | prev [-] | | I mean, they're taking away parts of cars at the moment. You gotta pay monthly to unlock features your car already has. | | |
| ▲ | stephenr 3 days ago | parent [-] | | Just like the comment you replied to, this is an argument against subscription-based "thing as a service" business models, not against cars. |
|
| |
| ▲ | duggan 3 days ago | parent | prev [-] | | I mean sure, that could happen. Either it's worth $200/month to you, or you get back to writing code by hand. |
|