cess11 18 hours ago

Not to me. I have also not seen any signs that this technology has had macroeconomic effects, and I don't know of any developers in meatspace that are impressed.

To me it seems like a bunch of religious freaks and psychopaths rolled out a weird cult, in part to plaster over layoffs for tax reasons.

oytis 17 hours ago | parent | next

> I don't know of any developers in meatspace that are impressed

I have a theory that there is some anomaly around the Bay Area that makes LLMs much better there. Unfortunately the effects don't seem to be observable from the outside; it doesn't seem to work on anything open source.

bcrosby95 18 hours ago | parent | prev | next

My boss was puzzled that despite LLMs writing ~30% of our code, he's not seeing a 30% increase in efficiency. Strange, that is.

johnb231 18 hours ago | parent

Devs finish the work 30% faster and take the rest of the day off. That's what I would do. Working remotely.

cess11 17 hours ago | parent

People generally aren't able to keep up the discipline of timing when to pass on tickets so as to hide changes in their ability, unless constant anxiety forces it.

Developers are also not very good at estimating how long something is supposed to take. If there were even a 10% jump in profitability in the software department, it would be obvious to bean counters and managers. You'd also see a massive recruitment spree, because large organisations ramp up activities that make money in the short term.

wilson090 18 hours ago | parent | prev

The anti-LLM crowd on HN is far more cultish. I don't know why some developers insist on putting their heads in the sand on this.

Jensson 17 hours ago | parent | next

If LLMs make your coworkers slower, why should you worry?

zer00eyz 16 hours ago | parent | prev | next

The pro-LLM crowd on HN is just as cultish. The divide is as diverse as the work we do:

There is work that I do that is creative, dynamic and "new". The LLM isn't very helpful at that kind of work; in fact it's pretty bad at getting that sort of thing "right" at all. There is also plenty of work that I do that is just transformational, or boilerplate, or gluing this to that. Here the LLM shines and makes my job easy by doing lots of the boring work.

Personal and professional context are going to drive that LLM experience, and that context matters more than the model ever will. I would bet that there is a strong correlation between what you do day to day and how you feel about the quality of LLM output.

skydhash 14 hours ago | parent

What is this thing about glue code that people keep going on about? I've never seen glue code that is tedious to write. What I have seen are code examples that I copy-pasted, code generators that I've used, and snippets that I've inserted. I strongly suspect that the tediousness was about making these work (aka understanding), not actually typing the code.

zer00eyz 8 hours ago | parent

> I've never seen glue code that is tedious to write.

It's a fair point; it's not the writing per se that's tedious:

Fetch data from API 9522, write storage/transformation/validation code, write display code. Test, tweak/fix, deploy.

Do you know how many badly designed and poorly documented APIs I have had to go through in 25+ years? Do you know how many times I have written the same name/first_name/FirstName/First_name mapping between what comes in and what already exists? Today it's an old personal project, tomorrow a client app, the day after Home Assistant (and templated YAML).

Why should I spend any time figuring out whether the API doc is poorly or well written? Why should I learn whatever esoteric scheme of tokens you have chosen to put up as a facade of security? Is mapping code fun to write? It's like the boilerplate around handling an error or writing a log message (things you let autocomplete do if you can). Do you really want to invest in the bizarre choices of systems you USE, but not often enough to make it worth your time to commit their goofy choices to memory (I'm looking at you, templated YAML)?
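
To make the mapping concrete, here is a minimal Python sketch of that kind of field-name normalisation; the alias set and target schema are invented for illustration:

    # Map whatever an upstream API calls its fields onto our own schema.
    # The alias list is hypothetical; real ones grow with every integration.
    FIRST_NAME_ALIASES = {"name", "first_name", "FirstName", "First_name"}

    def normalise_record(raw: dict) -> dict:
        out = {}
        for key, value in raw.items():
            if key in FIRST_NAME_ALIASES:
                out["first_name"] = value
            else:
                out[key.lower()] = value
        return out

    print(normalise_record({"FirstName": "Ada", "Email": "ada@example.com"}))
    # {'first_name': 'Ada', 'email': 'ada@example.com'}

None of it is hard, which is the point: the tedium is that every API needs its own variant.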

You are right that the "code is easy". It's the whole process, and the expenditure of brain power on things that are useless in the long run, that makes it tedious. The study where people did not retain what they wrote/did with the LLM is a selling point, not a downside. Tomorrow I have to do the same with API 9523 and 9524, and I'm going to be happy if it gets done and I retain none of it.

cess11 4 hours ago | parent

I quite enjoy inventing parsers for docs and generating clients. You should try that approach instead of writing everything by hand.
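
As a rough illustration of that approach, a toy Python sketch; the endpoint listing stands in for real parsed docs, and the generated stubs assume some pre-configured HTTP session object named session (hypothetical):

    import re

    # Invented endpoint listing standing in for real API docs.
    DOC = """
    GET /users/{id}
    POST /users
    GET /orders/{id}
    """

    def generate_client(doc: str) -> str:
        # Emit one client function stub per "VERB /path" line in the docs.
        out = []
        for verb, path in re.findall(r"(GET|POST|PUT|DELETE)\s+(\S+)", doc):
            params = re.findall(r"\{(\w+)\}", path)
            name = verb.lower() + "_" + "_".join(
                p for p in path.strip("/").split("/") if not p.startswith("{")
            )
            out.append(f"def {name}({', '.join(params)}):")
            out.append(f'    return session.request("{verb}", f"{path}")')
            out.append("")
        return "\n".join(out)

    print(generate_client(DOC))  # prints get_users(id), post_users(), ...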

cess11 16 hours ago | parent | prev | next

On what, exactly? Where are the measurable gains?

I've tried out a lot of angles on LLMs, and besides first-pass translations and audio transcriptions I have a hard time finding any use for them that is a good fit for me. In coding I've already generated scaffolding and CRUD stuff, and I typically write my code in a way that makes certain errors impossible, which is where I actually put my engineering, yet the assistant insists on adding checks for those errors anyway.
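
A small Python illustration of that style, with an invented type for the example; once a value is constructed, downstream checks become unnecessary:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class NonEmptyName:
        # Construction is the only place emptiness can occur, so it is
        # checked exactly once, here.
        value: str

        def __post_init__(self):
            if not self.value.strip():
                raise ValueError("name must be non-empty")

    def greet(name: NonEmptyName) -> str:
        # No emptiness check needed: the type guarantees it. This is the
        # kind of redundant check an assistant tends to re-add anyway.
        return "Hello, " + name.value

    print(greet(NonEmptyName("Ada")))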

That's why I gave up on Aider and on pushing contexts into LLMs in Zed. As far as I can tell this is an unsolvable problem currently; the assistant would need a separate logic engine working on the AST, basically acting as a slow type checker.
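
To make "logic engine on the AST" concrete, a toy Python sketch that flags names read but never assigned; a real checker would need scopes, imports, and much more:

    import ast
    import builtins

    def undefined_names(source: str) -> set:
        # One AST pass: collect stored and loaded names, then report
        # loads with no matching store, ignoring builtins.
        stored, loaded = set(), set()
        for node in ast.walk(ast.parse(source)):
            if isinstance(node, ast.Name):
                if isinstance(node.ctx, ast.Store):
                    stored.add(node.id)
                elif isinstance(node.ctx, ast.Load):
                    loaded.add(node.id)
        return loaded - stored - set(dir(builtins))

    print(undefined_names("x = 1\nprint(x + y)"))  # {'y'}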

Fancy autocomplete commonly insists on using variables that are as yet unused, or makes overly complicated suggestions. This goes for both local models and whatever JetBrains pushed out in IDEA Ultimate. One could argue that I'm doing it wrong, but I like declaring my data first and then writing the logic, which means there might be three to ten data points lingering unused at the beginning of a function while I'm writing my initial implementation. I've tried to wriggle around this by writing explicit comments and so on, but it doesn't seem to work. To me it's also often important to have simple, rather verbose code that is trivial to step or log into, and fancy autocomplete typically just doesn't produce this.

I've also found that it takes more words to force models into outputting the kind of code I want, e.g. slurping an entire file that is absolutely sure to exist (and if it doesn't, we need to nuke anyway) instead of a five-step read with configured old-school C-like file handles. This problem seems worse in PHP than in Python, but I don't like Python, and if I use it I'll be doing it inside Elixir anyway, so I need to manually make sure quotations don't break the Elixir string.
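
In Python terms the contrast looks something like this (the filename is made up, and the slurp assumes the file exists, crashing being the desired behaviour otherwise):

    from pathlib import Path

    # The one-liner I want:
    config = Path("app.conf").read_text()

    # The five-step, old-school file-handle version models tend to produce:
    handle = open("app.conf", "r")
    try:
        chunks = []
        while True:
            chunk = handle.read(4096)
            if not chunk:
                break
            chunks.append(chunk)
        config = "".join(chunks)
    finally:
        handle.close()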

Personally I also don't have the time to wait for LLMs. I'm in a hurry when I write my code; it's like I'm jogging through it, because I've likely done the thinking and planning ahead of writing, so I just want to push out the code and execute it often, in a tight cycle. Shutting down for twenty to three hundred seconds while the silly oracle draws power over and over again is really annoying. I commonly put a watch -n on the test runner in a side terminal, usually at 3-10 seconds depending on how slow it feels at the moment, and that's a cadence LLMs don't seem to be able to keep up with.

Maybe the SaaS ones are faster, but for one thing I don't use them for legal reasons, and for another, every video of one that I watch is either excruciatingly slow or has the 'thinking' portions snipped or sped up. Some people seem to use them as a substitute for people, chatting with their LLMs like I would with a coworker or an expert in some subject. I'm not interested in that, in part because I fiercely dislike the 'personality' LLMs usually emulate. They are also not knowledgeable in my main problem domains and can't learn, unlike a person, whom I could walk through the context and constraints before we get to the part where I'm unsure or not good enough.

To me these products are reminiscent of Wordpress. They might enable people like https://xcancel.com/leojr94_ to create plugins or prototypes, and some people seem able to maintain small, non-commercial software tools with them, but they don't seem to be very good leverage for people who work on big software: enterprise, critical, original systems, that kind of thing.

Edit: Related to that, I sometimes do a one-shot HTML file generation, because I suck at stuff like Tailwind and post-HTML4 practices, and then paste in the actual information and move things around. LLMs seem fine for that, but I could just script it, and then I'd learn more.

leptons 17 hours ago | parent | prev

> I don't know why some developers insist on putting their heads in the sand on this.

You think we're not using "AI" too? We are using these tools, and we can see pretty clearly that they aren't really the boon they're being hyped up to be.

The LLM is kind of like a dog. I was trying to get my dog to do a sequence of things: pick up the toy we were playing with and bring it over to me. He did it a couple of times, but then, after I explained what I wanted yet again, he went and picked up a different toy and brought it over. That's almost what I wanted.

Then I realized that matches the experience I've had with various "AI" coding tools.

I have to spend so much time reading and correcting the "AI"-generated code when I could have just coded the same thing myself, correctly, the first time. And this never stops with the "AI". At least my dog is very food-motivated and learns tricks like his life depends on it. The LLM, not so much.