| ▲ | yomismoaqui 3 days ago |
| It's a little shameful, but I still struggle when centering divs on a page. Yes, I've known about flexbox for more than a decade, but I always have to search to remember how it's done. So instead of refreshing that rarely used knowledge, I just ask the AI to do it for me. The implications of this vs. searching MDN docs are another conversation to have. |
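(For reference, the incantation in question, as a minimal sketch; .parent is a made-up class name for the container.)

    /* center the child inside .parent on both axes */
    .parent {
      display: flex;
      justify-content: center; /* main axis: horizontal by default */
      align-items: center;     /* cross axis: vertical by default */
      min-height: 100vh;       /* the container needs height to center within */
    }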
|
| ▲ | jfengel 3 days ago | parent | next [-] |
| No shame in that. I keep struggling to figure out the point of view of the CSS designers. They don't think like graphic designers, or like programmers. It's not easy for beginners. It's not aimed at ease of implementation. It's not amenable to automated validation. It's not meant to be generated. If there is some person for whom CSS layout comes naturally, I have not met them. As far as I can tell their design goal was to confuse everyone, at which they succeeded magnificently. |
| |
| ▲ | alwillis 3 days ago | parent [-] | |
> I keep struggling to figure out the point of view of the CSS designers.
Before 2017, when CSS Grid finally shipped in every major browser, the web had no real page-layout system. Think about it: before the advent of Flexbox and CSS Grid, certain layouts were simply impossible. All we had were floats, absolute positioning, negative-margin hacks, and the table element pressed into service for layout.
> They don't think like graphic designers or like programmers. It's not easy for beginners.
CSS is dramatically easier if you write it in order of specificity: styles that affect large parts of the DOM go at the top; more specific styles come later. This approach, known as Inverted Triangle CSS (ITCSS), has been around for a long time [1]; there's a sketch of it right after this comment.
> It's not aimed at ease of implementation. It's not amenable to automated validation.
If you mean linting or adhering to coding guidelines, there are several tools; Stylelint is popular [2]. Any editor that supports the Language Server Protocol (LSP), like VS Code and Neovim (among others), can use the CSS and CSS Variables language servers [3], [4] for code completion, diagnostics, formatting, etc.
> It's not meant to be generated.
Says who? There have been CSS generators and preprocessors since 2006, not to mention all the tools that turn mockups into CSS. LLMs have no problem generating CSS.
Lots of developers need to relearn CSS; the book Every Layout is a good start [5].
[1]: https://css-tricks.com/dont-fight-the-cascade-control-it/
[2]: https://stylelint.io
[3]: https://github.com/microsoft/vscode-css-languageservice
[4]: https://github.com/vunguyentuan/vscode-css-variables
[5]: https://every-layout.dev | | |
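The sketch mentioned above: ITCSS-style ordering with made-up selectors, lowest specificity first.

    /* 1. generic: rules that touch everything */
    * { box-sizing: border-box; }
    /* 2. elements: bare tag selectors */
    body { font-family: system-ui, sans-serif; }
    /* 3. components: single classes */
    .card { padding: 1rem; }
    /* 4. overrides: the most specific selectors, declared last */
    .card.is-featured { border: 2px solid gold; }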
| ▲ | naasking 2 days ago | parent [-] | | Developers can learn a new programming language in a few weeks to months of just using it. If they can't learn to reliably and predictably use CSS in the same way, then I'd say that makes CSS flawed. | | |
| ▲ | alwillis 2 days ago | parent [-] | | > If they can't learn to reliably and predictably use CSS in the same way, then I'd say that makes CSS flawed. It's not the fault of CSS that most developers don't learn to use it correctly. That's like blaming the bicycle because you never learned to ride it. Frankly, it's not a priority for most of them to learn CSS; they don't see it as a "real" programming language, so it's not worth their time. | | |
| ▲ | naasking 2 days ago | parent [-] | | > It's not the fault of CSS that most developers don't learn to use it correctly. That's like blaming the bicycle because you never learned to ride it. It's not like blaming the bicycle; that's the whole point of my analogy to programming languages. Like I said, learning a new programming language in a few weeks of regular use is a common experience. The same goes for bikes: you can try a few things, lose balance, make a few intuitive adjustments, and iterate easily. That just doesn't work with CSS. There are so many pitfalls and corner cases, and reasoning about it is non-compositional and highly contextual. That's the complete opposite of learning to ride a bike or learning a new programming language. You practically need to read the formal CSS specification to really understand it, and even then you'll regularly get tripped up. People just learn to stick to a small subset of CSS for which they've managed to build a predictable model, which is why we got toolkits like Bootstrap. Edit: this also explains why things like Tailwind are popular: they add a certain amount of predictability and composability to CSS. Using CSS was way worse in the past, when browser compatibility was worse, but it's still not a great experience.
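(One concrete example of that contextuality, as a minimal sketch with a made-up class name:)

    /* the same declaration behaves differently depending on the parent */
    .child { height: 50%; }
    /* this resolves to half the parent's height only if the parent has a
       definite height; inside an auto-height block parent it computes to
       auto and does nothing (outside flex and grid layout) */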
|
|
|
|
|
| ▲ | simonw 3 days ago | parent | prev | next [-] |
| Hah, centering divs with flexbox is one of my uses for this too! I can never remember the syntax off the top of my head, but if I say "center it with flexbox" it spits out exactly the right code every time. If I do this a few more times it might even stick in my head. |
|
| ▲ | robofanatic 3 days ago | parent | prev | next [-] |
| > Yes, I've known about flexbox for more than a decade, but I always have to search to remember how it's done. These days I use display: flex so much that I wish the initial value of the display property were flex instead of inline. |
|
| ▲ | barrkel 3 days ago | parent | prev | next [-] |
| Try Tailwind. It's very amenable to LLM generation, since it's effectively a micro-language, and because it's colocated with the document elements, the model doesn't need a big context to stitch things together. |
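For example, the centering from upthread expressed as utility classes (a sketch using standard Tailwind utilities):

    <!-- flex + items-center + justify-center center the child;
         h-screen gives the container height to center within -->
    <div class="flex h-screen items-center justify-center">
      <div>centered</div>
    </div>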
|
| ▲ | llmslave2 3 days ago | parent | prev [-] |
| Surely searching "centre a div" takes less time than prompting and waiting for a response... |
| |
| ▲ | duggan 3 days ago | parent | next [-] | |
- Search “centre a div” in Google
- Wade through ads
- Skim a treatise on the history of centering content
- Skim past the “this question is off topic / duplicate” noise on Stack Overflow
- Find some code on the page
- Try to map how that code will work in the context of your other layout
- Realize it’s plain CSS and you’re looking for Tailwind
- Keep searching
- Try some stuff until it works
Or…
- Ask an LLM
- Wait 20-30 seconds
- Move on to the next thing
| | |
| ▲ | duskdozer 12 hours ago | parent | next [-] | | Half the reason search engines are so miserable to use these days is that they've been weighed down with so much low-quality LLM-generated content. | |
| ▲ | SchemaLoad 3 days ago | parent | prev | next [-] | | The middle ground is asking an LLM how it's done and then making the change yourself. You skip the web junk and learn how it's done for next time. | | |
| ▲ | duggan 3 days ago | parent [-] | | Yep, that's not a bad approach either. I did that a lot initially; it's really only with the advent of Claude Code integrated with VS Code that I'm learning more, the way I would from a code review. It also depends on the project. Work code gets a lot more scrutiny than side projects, for example. |
| |
| ▲ | Izkata 3 days ago | parent | prev | next [-] | | > Search “centre a div” in Google Aaand done. Very first result was a blog post showing all the different ways to do it, old and new, without any preamble. | |
| ▲ | stephenr 3 days ago | parent | prev | next [-] | | Or, given that OP is presumably a developer who just doesn't focus fully on front-end code, they could skip straight to checking MDN for "center div" and get a How To article (https://developer.mozilla.org/en-US/docs/Web/CSS/How_to/Layo...) as the first result, without relying on spicy autocomplete. Given how often people acknowledge that AI slop needs to be verified, it seems like a shitty way to achieve something like this vs. just checking it yourself against well-known, good reference material. | |
| ▲ | duggan 3 days ago | parent [-] | | LLMs work very well for a variety of software tasks — we have lots of experience around the industry now. If you haven’t been convinced by pure argument in 2026 then you probably won’t be. But the great thing is you don’t have to take anyone’s word for it. This isn’t crypto, where everyone using it has a stake in its success.
You can just try it, or not. | | |
| ▲ | stephenr 3 days ago | parent [-] | | That's a lot of words to say "trust me bruh" which is kind of poetic given that's the entire model (no pun intended) that LLMs work on. | | |
| ▲ | duggan 3 days ago | parent [-] | | Hardly. Just pointing out that water is wet, from my perspective. But there is an interesting looking-glass effect at play, where the truth seems obvious and opposite on either side. |
|
|
| |
| ▲ | bitwize 3 days ago | parent | prev [-] | | Wait till the VC tap gets shut off.
You: Hey ChatGPT, help me center a div.
ChatGPT: Certainly, I'd be glad to help! But first you must drink a verification can to proceed.
Or:
ChatGPT: I'm sorry, you appear to be asking a development-related question, which your current plan does not support. Would you like me to enable "Dev Mode" for an additional $200/month? Drink a verification can to accept charges.
| |
| ▲ | lenkite 3 days ago | parent | next [-] | | Seriously, they have got their HOOKS into these Vibe Coders and AI Artists who will pony up $1000/month for their fix. | | |
| ▲ | bonesss 3 days ago | parent [-] | | A little hypothesis: a lot of .NET and Java stuff is mainlined from a giant megacorp straight to developers through a curated certification, MVP, blogging, and conference-circuit apparatus designed to create unquestioned, corporate-friendly, highly profitable dogma. You say ‘website’ and from the letter ‘b’ they’re having a Pavlovian response (“Azure-hosted SharePoint, data lake, MSSQL, user directory, analytics, PowerBI, and…”). Microsoft’s dedication to infusing OpenAI tech into everything seems like a play to cut even those tepid brains out of the loop and capture the vehicles of planning and production. Training your workforce to be dependent on third-party thinking, planning, and advice is an interesting strategy. |
| |
| ▲ | llmslave2 3 days ago | parent | prev | next [-] | | Calling it now: AI withdrawal will become a documented disorder. | | |
| ▲ | duskdozer 12 hours ago | parent | next [-] | | https://en.wikipedia.org/wiki/Chatbot_psychosis | |
| ▲ | LinXitoW 3 days ago | parent | prev | next [-] | | We already had that happen. When GPT-5 was released, it was much less sycophantic. All the sad people with AI girl/boyfriends threw a giant fit because OpenAI "murdered" the "soul" of their "partner". That's why 4o is still available as a legacy model. | |
| ▲ | freedomben 3 days ago | parent | prev [-] | | I can absolutely see that happening. It's already kind of happened to me a couple of times when I found myself offline and was still trying to work on my local app. Like any addiction, I expect it to cost me some money in the future |
| |
| ▲ | duskdozer 12 hours ago | parent | prev | next [-] | | Definitely. Right now I can access and use them for free without significant annoyance. I'm a canary for enshittification; I'm curious what it's going to look like. | |
| ▲ | jckahn 3 days ago | parent | prev | next [-] | | Alternatively, just use a local model with zero restrictions. | | |
| ▲ | alwillis 3 days ago | parent | next [-] | | The next best thing is to use the leading open-source/open-weights models for free, or for pennies, on OpenRouter [1] or Hugging Face [2]. For an overview of the best open-weights models, including Qwen and Kimi K2, see [3]. [1]: https://openrouter.ai/models [2]: https://huggingface.co [3]: https://simonwillison.net/2025/Jul/30/ | |
| ▲ | baq 3 days ago | parent | prev | next [-] | | This is currently negative expected value over the lifetime of any hardware you can buy today at a reasonable price - which is basically a monster Mac, or several - at least until Apple folds and raises prices due to RAM shortages. | |
| ▲ | master_crab 3 days ago | parent | prev [-] | | This requires hardware in the tens of thousands of dollars (if we want the tokens spit out at a reasonable pace). Maybe in 3-5 years this will work on consumer hardware at speed, but not in the immediate term. | | |
| ▲ | vntok 3 days ago | parent [-] | | $2000 will get you 30-50 tokens/s at perfectly usable quantization levels (Q4-Q5) from any of the top five open-weights MoE models. That's not half bad, and it will only get better! | |
| ▲ | master_crab 3 days ago | parent | next [-] | | Only if you are running lightweight models like DeepSeek 32B; anything bigger and it'll drop. Also, costs for RAM and AI-adjacent hardware have risen a lot in the last month. It's definitely not $2k for a rig that does 50 tokens a second. | |
| ▲ | threeducks 3 days ago | parent | prev | next [-] | | Could you explain how? I can't seem to figure it out. DeepSeek-V3.2-Exp has 37B active parameters; GLM-4.7 and Kimi K2 have 32B active parameters. Let's say we are dealing with Q4_K_S quantization for roughly half the size: we still need to move 16 GB 30 times per second, which requires a memory bandwidth of 480 GB/s, or maybe half that if speculative decoding works really well. Anything GPU-based won't work at that speed, because PCIe 5 provides only 64 GB/s and $2000 cannot buy enough VRAM (~256 GB) for a full model. That leaves CPU-based systems with high memory bandwidth. DDR5 would work (somewhere around 300 GB/s with 8x 4800 MHz modules), but that would cost about twice as much for just the RAM alone, disregarding the rest of the system. Can you get enough memory bandwidth out of DDR4 somehow? |
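Spelling out the arithmetic (same assumptions as above: 32B active parameters, ~0.5 bytes/param at Q4):

    32e9 active params x 0.5 bytes/param (Q4)  ~= 16 GB read per token
    16 GB/token x 30 tokens/s                  ~= 480 GB/s of memory bandwidth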
| ▲ | int_19h 3 days ago | parent | prev [-] | | That doesn't sound realistic to me. What is your breakdown on the hardware and the "top 5 best models" for this calculation? | | |
| ▲ | vntok 4 hours ago | parent [-] | | Look up AMD Strix Halo mini-PCs such as GMKtec's EVO-X2. I got the one with 128GB of unified RAM (~100GB usable as VRAM) last year for 1900€ excl. VAT; it runs like a beast, especially for SOTA/near-SOTA MoE models. |
|
|
|
| |
| ▲ | fragmede 3 days ago | parent | prev | next [-] | | Just you wait until the powers that be take cars away from us! What absolute FOOLS you all are to shape your lives around something that could be taken away from us at any time! How are you going to get to work when gas stations magically disappear off the face of the planet? I ride a horse to work, and y'all are idiots for developing a dependency on cars. Next thing you're gonna tell me is we're going to go to war for oil to protect your way of life. Come on! | | |
| ▲ | stephenr 3 days ago | parent | next [-] | | Reliance on SaaS LLMs makes this more like comparing owning a horse with using a car on a monthly subscription plan. | |
| ▲ | prathamtharwani 2 days ago | parent | prev | next [-] | | This is a poor analogy. Cars (mostly) don't require a subscription. | |
| ▲ | llmslave2 3 days ago | parent | prev | next [-] | | Can't believe this car bubble has lasted so long. It's gonna pop any decade now! | |
| ▲ | LinXitoW 3 days ago | parent | prev [-] | | I mean, they're taking away parts of cars at the moment. You gotta pay monthly to unlock features your car already has. | | |
| ▲ | stephenr 3 days ago | parent [-] | | Just like the comment you replied to, this is an argument against subscription-based "thing as a service" business models, not against cars. |
|
| |
| ▲ | duggan 3 days ago | parent | prev [-] | | I mean sure, that could happen. Either it's worth $200/month to you, or you get back to writing code by hand. |
|
| |
| ▲ | freedomben 3 days ago | parent | prev | next [-] | | If only it were that easy. I got really good at centering and aligning stuff, but only when the application is constructed the way I expect. That's usually not a problem, since I'm usually working on something I built myself, but if I need to tweak something I didn't build, I frequently find myself frustrated and irritated, especially when some rule higher or lower in the cascade overrides the setting I just added (a sketch of that below). As a bonus, I pay attention to what the AI did and its results, and I have actually learned quite a bit about how to do this myself, even without AI assistance. |
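(A minimal sketch of the kind of override that bites here; the selectors are made up.)

    /* specificity beats source order: the id-scoped rule wins even though
       the class rule appears later in the stylesheet */
    #app .box { margin: 0; }     /* specificity (1,1,0) */
    .box { margin: 0 auto; }     /* specificity (0,1,0): loses for margin */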
| ▲ | 3 days ago | parent | prev [-] | | [deleted] |
|