actsasbuffoon 3 days ago

This mirrors a weird thought I’ve had recently. It’s not a thing I necessarily agree with, but just an idea.

I hear people say things like, “AI isn’t coming for my job because LLMs suck at [language or tech stack]!”

And I wonder, does that just mean that other stacks have an advantage? If a senior engineer with Claude Code can solve the problem in Python/TypeScript in significantly less time than you can solve it in [tech stack] then are you really safe? Maybe you still stack up well against your coworkers, but how well does your company stack up against the competition?

And then the even more distressing thought accompanies it: I don’t like the code that LLMs produce because it looks nothing like the code I write by hand. But how relevant is my handwritten code becoming in a world where I can move 5x faster with coding agents? Is this… shitty style of LLM-generated code actually easier for coding agents to understand?

Like I said, I don’t endorse either of these ideas. They’re just questions that make me uncomfortable because I can’t definitively answer them right now.

majormajor 3 days ago | parent | next [-]

All the disadvantages of those stacks still exist.

So if you need to avoid GC issues, or need robust type safety, or whatever it is that gives you an edge in a certain industry or scenario, you can't just switch to the vibe tool of choice without (best case) giving up $$$ to make up for the inefficiency or (worst case) shipping more failures than your customers will tolerate.

But this means the gap between the "hard" work and the "easy" work may widen, compensation included. That's probably most notable at FAANG companies, where people are hired on the expectation that they can do "hard" work and then frequently given relatively easy CRUD work on low-ROI ancillary projects, but at higher $$$$ than that work would command anywhere else.

And the places currently happy to hire disaffected ex-FAANG engineers who realized they were being wasted on polishing widgets may start having more hiring difficulty as the pipeline dries up. Like trying to hire for assembly or COBOL today.

dgunay 3 days ago | parent | prev | next [-]

Letting go of the particulars of the generated code is proving difficult for me. I hand-edit most of the code my agents produce for taste, even when it's correct, but I suspect that in the long term that's not the optimal use of my time in agent-driven programming. Maybe the models will just get so good that they know how I would have written it myself.

bilekas 3 days ago | parent [-]

I would argue this approach will help you in the long term with code maintainability, which I feel will be one of the biggest issues down the line as AI-generated codebases get larger.

monkpit 3 days ago | parent [-]

The solution is to codify these sorts of things in prompts, tool use, and gateways like linters. You have to let go…

dgunay 5 hours ago | parent | next [-]

I have been doing this, and it does sort of work, but the problem is that for anything that can't easily be turned into a deterministic lint, prompting isn't 100% reliable. The further you go against the LLM's training data, the more likely it is to forget your instruction.
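
For what it's worth, the deterministic half doesn't always need a custom rule. ESLint's built-in no-restricted-syntax can turn any AST-level taste preference into a hard error. A sketch (the enum ban is just an example preference, and the TSEnumDeclaration selector assumes @typescript-eslint/parser is configured for your .ts files):

    // eslint.config.js (flat config)
    export default [
      {
        rules: {
          "no-restricted-syntax": [
            "error",
            {
              // TSEnumDeclaration is a typescript-eslint AST node type
              selector: "TSEnumDeclaration",
              message: "Prefer a union of string literals over an enum.",
            },
          ],
        },
      },
    ];

Anything you can phrase as an AST selector becomes deterministic; it's the fuzzier preferences that stay stuck in prompts.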

bilekas 3 days ago | parent | prev [-]

What do you mean, "you have to let go"?

I use some AI tools and sometimes they're fine, but I won't hand over everything to an AI in my lifetime. It's not out of fear or anything; even purely as a hobby, I like creating things from scratch and working out problems. Why would I need to let that go?

jaggederest 3 days ago | parent [-]

Well, the point is that if it's not a hobby, you have to encode your preferences in lint rules and formatters rather than hold onto manually massaging the output.

It's really freeing to say "Well, if the linter and the formatter don't catch it, it doesn't matter". I always update lint settings (writing new rules if needed) based on nit PR feedback, so the codebase becomes easier to review over time.

It's the same principle as any other kind of development - let the machine do what the machine does well.
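
As a concrete sketch of what "writing new rules if needed" can look like (the rule name and message here are hypothetical examples): a recurring PR nit like "use the injected logger, not console.log" becomes a small local rule in flat config:

    // eslint.config.js
    const noConsoleLog = {
      meta: {
        type: "suggestion",
        docs: { description: "use the injected logger instead of console.log" },
        schema: [],
      },
      create(context) {
        return {
          // esquery selector matching console.log(...) calls
          'CallExpression[callee.object.name="console"][callee.property.name="log"]'(node) {
            context.report({ node, message: "Use the injected logger instead of console.log." });
          },
        };
      },
    };

    export default [
      {
        // register the rule under a local plugin namespace
        plugins: { local: { rules: { "no-console-log": noConsoleLog } } },
        rules: { "local/no-console-log": "error" },
      },
    ];

Once the nit is a rule, the agent gets the same feedback a reviewer would have left, except it's instant and never skipped.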

hoyo1s 3 days ago | parent | prev | next [-]

Sometimes one just needs [language or tech stack] to do something, especially for performance or security reasons.

For now, LLMs still suffer from hallucinations and a lack of generalizability. A large amount of generated code is sometimes not a benefit but a technical debt.

LLMs are good for fast, open-ended prototyping of web applications, but if we need a stable, consistent, maintainable, secure framework, or scientific computing, pure LLMs are not enough. One can't vibe-code everything without checking the details.

fragmede 3 days ago | parent | prev | next [-]

LLMs write Python and TypeScript well because of all the examples in their training data. But what if we made a new programming language whose goal was to be optimal for an LLM to generate? Would it be closer to assembly? If we project that the future is vibe-coded, and we scarcely look at the generated code, testing instead that the output matches the intent, what would that language look like?

alankarmisra 3 days ago | parent | next [-]

They’d presumably do worse. LLMs have no intrinsic sense of programming logic; they pattern-match against a large training set. A new language that is syntactically very different from existing ones and lacks training examples across a variety of coding tasks would leave LLMs with too little to go on, and they would do very badly.

majormajor 3 days ago | parent | prev | next [-]

What is it that you think would make a certain non-Python language "more optimal" for an LLM? Is there something inherently LLM-friendly about certain language patterns, or are "huge sets of training examples" and "a robust standard library" (the latter to conserve tokens/attention versus spitting out super-verbose, 20x-longer assembly all day) all that "optimality" means?

metrix 3 days ago | parent | prev | next [-]

I have thought the same thing. How would such a language be created? Would it be an LLM's idea to make the language, or a dev designing a language for an LLM?

And how do we get the LLM to gain knowledge of a new language that has no example usage anywhere?

hoyo1s 3 days ago | parent | prev [-]

Strict type-checking, ideally with at least some dependent types and inductive types.
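
For example, a minimal Lean sketch of the idea, using a length-indexed vector so that a whole class of plausible-looking generated code is rejected at compile time:

    -- A length-indexed vector: the length is part of the type.
    inductive Vec (α : Type) : Nat → Type where
      | nil  : Vec α 0
      | cons : α → Vec α n → Vec α (n + 1)

    -- head only accepts non-empty vectors, so taking the head of an
    -- empty vector is a type error, not a runtime exception.
    def Vec.head : Vec α (n + 1) → α
      | .cons x _ => x

    #eval Vec.head (Vec.cons 1 Vec.nil)  -- 1
    -- Vec.head Vec.nil  -- rejected by the compiler: Vec α 0 is not Vec α (n + 1)

The stricter the types, the more of the generated code the compiler checks for you; the catch, as others note above, is that such languages have far less training data.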
