| ▲ | YesBox 2 days ago |
| I've noticed a huge drop in negative comments on HN when discussing LLMs in the last 1-2 months. All the LLM coded projects I've seen shared so far[1] have been tech toys though. I've watched things pop up on my twitter feed (usually games related), then quietly go off air before reaching a gold release (I manually keep up to date with what I've found, so it's not the algorithm). I find this all very interesting: LLMs don't change the fundamental drives needed to build successful products. I feel like I'm observing the TikTokification of software development. I don't know why people aren't finishing. Maybe they stop when the "real work" kicks in. Or maybe they hit the limits of what LLMs can do (so far). Maybe they jump to the next idea to keep chasing the rush. Acquiring context requires real work, and I don't see a way forward to automating that away. And to be clear, context is human needs; i.e. the reasons why someone will use your product. In the game development world, it's very difficult to overstate how much work needs to be done to create a smooth, enjoyable experience for the player. While anyone may be able to create a suite of apps in a weekend, I think very few of them will have the patience and time to maintain them (just like software development before LLMs! i.e. Linux, open source software, etc.). [1] yes, selection bias. There are A LOT of AI devs just marketing their LLMs. Also it's DEFINITELY too early to be certain. Take everything I'm saying with a one pound grain of salt. |
|
| ▲ | blibble 2 days ago | parent | next [-] |
| > I've noticed a huge drop in negative comments on HN when discussing LLMs in the last 1-2 months. Real people get fed up with debating the same tired "omg new model 1000x better now" posts/comments from the astroturfers, the shills and their bots each time OpenAI shits out a new model (article author is a Microslop employee) |
| |
| ▲ | hollowturtle 2 days ago | parent | next [-] | | Simply this ^ I'm tired of debating bots and people paid to grow the hype, so I won't anymore. I'll just work and watch the hype pass by from a distance. Meanwhile I'll keep waiting for people to make actual products with LLMs that will kill old-generation products like Windows, Excel, Teams, Gmail, etc., replace slop with great UI/UX, and ship really performant apps | |
| ▲ | g947o 2 days ago | parent | prev | next [-] | | Especially when 90% of these articles are based on personal, anecdotal evidence and keep repeating the same points without offering anything new. If these articles actually provided quantitative results from a study done across an organization and offered concrete suggestions, like what Google did a while ago, that would be refreshing and useful. (Yes, this very article has strong "shill" vibes and fits the patterns above) | |
| ▲ | dudeinhawaii a day ago | parent | prev | next [-] | | This is a cringe comment from the era when "Micro$oft" was hip, and it reads like you're an Anthropic/Google fanboi foaming at the mouth. It would be far more useful if you provided actual verifiable information and dropped the cringe memes. Can't take someone using "Microslop" in a sentence seriously. | |
| ▲ | simonw 2 days ago | parent | prev [-] | | You're only hurting yourself if you decide there's some wild conspiracy afoot here to pay shills to tell people that coding agents are useful... as opposed to people finding them useful enough to want to tell other people about it. | | |
| ▲ | blibble 2 days ago | parent | next [-] | | [flagged] | | |
| ▲ | simonw 2 days ago | parent [-] | | If I worked for Microsoft as a software engineer and believed that LLMs were going to end software engineering I would not expect the value increase in my stock options to overcome my loss of income when Microsoft inevitably laid me off. (I do not think LLMs will obsolete software engineering as a career.) | | |
| ▲ | g947o 2 days ago | parent | next [-] | | I don't assume everyone can think of that next step. If I were that smart, I would not be writing a blog article just talking about using LLMs to create new projects/tools outside a production environment, because the same thing has been written 1000 times at least, and this article would not offer anything new, which would be a waste of time. Which unfortunately is what's happening here. (I came to the HN comments of this article to look for new perspectives. I found exactly nothing.) | |
| ▲ | blibble 2 days ago | parent | prev [-] | | [flagged] |
|
| |
| ▲ | llmslave2 2 days ago | parent | prev [-] | | [flagged] | | |
| ▲ | jsnell 2 days ago | parent [-] | | Why is it the people posting positive comments who are "responding to incentives" by posting more, while it's the people posting negative comments who do so by stopping posting? Like, your exact points work equally well with the polarity reversed: the anti-AI influencer/grifter ecosystem is well-developed at this point, and many people desperately want AIs to be useless. I don't know if the original claim about sentiment is true, but if it is, I don't think yours or blibble's (conflicting) claims about the reason are very believable. | | |
| ▲ | Capricorn2481 2 days ago | parent | next [-] | | > Like, your exact points work equally well with the polarity reversed: the anti-AI influencer/grifter ecosystem is well-developed at this point, and many people desperately want AIs to be useless Maybe it's equal for non-tech people. But I don't think a lot of tech people are desperate for AI to be useless, I think they're desperate for it to be useful. If you're someone who is smart enough to work with or without AI and you just find the tools not that helpful, I doubt you're all that worried about being replaced. But when we see companies increasingly bullish on something we know doesn't work that well, it's a bit worrying. | |
| ▲ | blibble 2 days ago | parent | prev [-] | | because there's no sweet tech-oligarch job, early access to the latest model, OpenAI speaking engagement invite, or larger bonus to be awarded by being aiphobic? seems patently obvious | | |
| ▲ | simonw 2 days ago | parent [-] | | There are a few people making decent enough money on the paid newsletter/speaking gig circuit for AI phobia these days. It's a tougher gig though, because teaching people how NOT to use AI won't provide those customers as much value as teaching them how to use it. (Because it works.) | | |
| ▲ | blibble 2 days ago | parent [-] | | [flagged] | | |
| ▲ | llmslave2 2 days ago | parent [-] | | Tech companies will be seen the same way as cigarette companies, and their apologists seen like the doctors and scientists who were paid to lie. |
|
|
|
|
|
|
|
|
| ▲ | simonw 2 days ago | parent | prev | next [-] |
| It could be that the people who are focused on building monetizable products with LLMs don't feel the need to share what they are doing - they're too busy quietly getting on with building and marketing their products. Sharing how you're using these tools is quite a lot of work! |
| |
| ▲ | yeasku 2 days ago | parent | next [-] | | Which is more likely: that people making startups are too busy working to share it on HN, or that AI is useless in real projects? | | |
| ▲ | pigpop 17 hours ago | parent | next [-] | | If you haven't noticed, people come here to kill time. If you're killing time then you're not being productive, therefore the people who are heads down trying to launch their startup before they run out of runway are not going to be here until they're ready to market their product. | |
| ▲ | atonse 2 days ago | parent | prev [-] | | The former. | | |
| ▲ | simonw 2 days ago | parent [-] | | Totally. | | |
| ▲ | yeasku 2 days ago | parent [-] | | I see people sharing stuff here every day. What makes LLM makers so different that they don't have time to share it like everybody else does? |
|
|
| |
| ▲ | YesBox 2 days ago | parent | prev | next [-] | | Agreed! LLMs are a force multiplier for real products too. They're going to augment people who are willing to do the real work. But I'm also wondering if LLMs are going to create a new generation of software dev "brain rot" (to use the colloquial term), similar to short form videos. I should mention that in the gamedev world it's quite common to share, because sharing is marketing, hence my perspective. | | |
| ▲ | lillutoo 2 days ago | parent [-] | | I feel weird when I read comments that have words like "force multiplier". This sounds like an LLM comment. But you probably are a real person. So are you just becoming more like an LLM because you interact with it so much, or did you always talk like this and LLMs are just replicating that behavior? |
| |
| ▲ | enraged_camel 2 days ago | parent | prev [-] | | I admit I'm in this boat. I get immense value from LLMs, easily 5x if not more, and the codebases I work in are large, mature and complex. But providing "receipts" as the kids call it these days would be a huge undertaking, with not a lot of upside. In fact, the downsides are considerable. Aside from the time investment, I have no interest in arguing with people about whether what I work on is just CRUD (it's not) or that the problems I work on are not novel (who cares, your product either provides value for your users or it does not). |
|
|
| ▲ | bombdailer 2 days ago | parent | prev | next [-] |
| The type of people to use AI are necessarily the people who will struggle most when it comes time to do the last essential 20% of the work that AI can't do. Once thinking is required to bring all the parts into a whole, the person who gives over their thinking skills to AI will not be equipped to do the work, either because they never had the capacity to begin with or because AI has smoothed out the ripples of their brain. I say this from experience. |
| |
| ▲ | Krei-se 2 days ago | parent | next [-] | | I think you can tell from some answers here that people talk to these models a lot and adapt their language structure :( It means they stop asking themselves whether what they ask the model for makes any sense. It doesn't turn middle management into developers; it turns developers into middle managers who just shout louder, or who replace a critical mind with another yesman or the next super best model that finally brings their genius ideas to life. Then they hit the same wall of having to learn for themselves to reach gold, and of course that's an insult to any manager. Whoever cannot do the insane job has to be wrong, never the one asking for insanity. Sad I had to scroll so far down to get a fitting description of why those projects all die. Maybe it's not just me leaving all social networks, even HN, because while you may not be talking to 100% bots, you're surely talking to 90% people who talk to models a lot instead of using them as a tool. | |
| ▲ | simonw 2 days ago | parent | prev [-] | | Using AI tools makes me think harder. | | |
| ▲ | acosmism 2 days ago | parent [-] | | harder != better | | |
| ▲ | sothatsit 2 days ago | parent [-] | | My thinking is definitely better. I spend more time worrying about the specific architecture, memory layout, GPU features, etc. to come up with ideas for optimisations, and I think less about specific implementation details. I’ve gotten a better mental model of our code faster because of this. I have also found substantial speed ups by thinking about the problem at a higher level, while iterating on implementation details quickly using Opus. |
|
|
|
|
| ▲ | TheAceOfHearts 2 days ago | parent | prev | next [-] |
Deploying and maintaining something in a production-ready environment is a huge amount of work. It's not surprising that most people give up once they have a tech demo, especially if they're not interested in spending a ton of time maintaining these projects. Last year Karpathy posted about a similar experience, where he quickly vibe coded some tools only to realize that deploying them would take far more effort than he originally anticipated. I think it's also rewarding to just be able to build something for yourself, and one benefit of scratching your own itch is that you don't have to go through the full effort of making something "production ready". You can just build something that's tailored specifically to the problem you're trying to solve without worrying about edge cases. Which is to say, you're absolutely right :). |
|
| ▲ | Havoc 2 days ago | parent | prev | next [-] |
| > huge drop in negative comments on HN when discussing LLMs I interpret it more as spooked silence |
|
| ▲ | bcrosby95 2 days ago | parent | prev | next [-] |
| Yeah, I do a lot of hobby game making and the 80/20 rule definitely applies. Your game will be "done" in 20% of the time it takes to create a polished product ready for mass consumption. Stopping there is just fine if you're doing it as a hobby. I love to do this to test out isolated ideas. I have dozens of RPGs in this state, just to play around with different design concepts from technical to gameplay. |
|
| ▲ | elzbardico 2 days ago | parent | prev [-] |
| Sometimes I feel like a lot of those posts are instances of Kent Brockman:
"I for one, welcome our new insect overlords." Given the enthusiasm of our ruling class for automating software development work, it may make sense for a software engineer to publicly signal how on board with it they are as professionals. But, I've seen stranger stuff throughout my professional life: I still remember people enthusiastically defending EJB 2.1 and xdoclet as perfectly fine ways of writing software. |