helloplanets 10 hours ago

And when programming with agentic tools, you need to actively push to keep the idea from regressing to the most obvious/average version. The effort you have to expend pushing an idea that deviates from the 'norm' (because it's novel) is actually comparable to the effort it takes to type something out by hand. Just two completely different types of effort.

There's an upside to this sort of effort too, though. You actually need to make it crystal clear what your idea is and what it is not, because of the continuous pushback from the agentic programming tool. The moment you stop pushing back is the moment the LLM rolls over your project and more than likely destroys what was unique about your thing in the first place.

fallous 9 hours ago | parent | next [-]

You just described the burden of outsourcing programming.

onion2k 5 hours ago | parent | next [-]

Outsourcing development and vibe coding are incredibly similar processes.

If you just chuck ideas at the external coding team/tool you often get rubbish back.

If you're good at managing the requirements and defining things well you can achieve very good things with much less cost.

darkwater 7 hours ago | parent | prev | next [-]

With the basic and enormous difference that the feedback loop is 100x or even 1000x faster. That changes the game completely, although other issues will probably arise as we try this new path.

Terr_ 6 hours ago | parent [-]

That embeds an assumption that the outsourced human workers are incapable of thought, and experience/create zero feedback loops of their own.

Frustrated rants about deliverables aside, I don't think that's the case.

darkwater 4 hours ago | parent | next [-]

No. It just reflects the harsh reality: what's really soul-crushing in outsourced work is having endless meetings to pass down / get back information, and having to wait days/weeks/months to get some "deliverable" back to iterate on. Yes, outsourced human workers are totally capable of creative thinking that makes sense, but their incentive will always be throughput over quality, since their bosses usually quote fixed prices (at least in what I lived through personally).

If you are outsourcing to an LLM, in this case YOU are still in charge of the creative thought. You can just judge the output and tune the prompts, or go deeper into the technical details and tradeoffs. You're "just" not writing the actual code anymore, because another layer of abstraction has been added.

Jagerbizzle an hour ago | parent | next [-]

Also, with an LLM you can tell it to throw away everything and start over whenever you want.

When you do this with an outsourced team, it can happen at most once per sprint, and with significant pushback, because there's a desire for them to get paid for their deliverable even if it's not what you wanted or suffers some other fundamental flaw.

raw_anon_1111 an hour ago | parent [-]

Yep, just these past two weeks: I tried to reuse an implementation from another project, it took me a day to modify it (with Codex), and when I tried it out it worked fine with a few hundred documents.

Then I tried to push 50,000 documents through it, and it crashed and burned like I suspected. It took one day to go from my second, more complicated but more scalable spec, where I didn't depend on an AWS managed service, to working scalable code.

It would have taken me at least a week to do it myself.

dimitrios1 an hour ago | parent | prev [-]

It doesn't have to be soul crushing.

Just like people more, and have better meetings.

Life is what you make it.

Enjoy yourself while you can.

ambicapter 44 minutes ago | parent | prev [-]

Not really, it's just obviously true that the communication cycle with your terminal/LLM is faster than with a human over Slack/email.

tomrod 8 hours ago | parent | prev | next [-]

100%! There is significant analogy between the two!

salawat 8 hours ago | parent [-]

There is a reason management types are drawn to it like flies to shit.

theshrike79 6 hours ago | parent [-]

Working with and communicating with offshored teams is a specific skill too.

There are tips and tricks for managing them, and not knowing them will bite you later on. Like the basic rule of never asking yes-or-no questions, because in some cultures saying "no" isn't a thing. They'll just default to yes, and effectively lie rather than admit failure.

agumonkey 7 hours ago | parent | prev [-]

We need a new word for on-premise offshoring.

On-shoring ;

aleph_minus_one 7 hours ago | parent | next [-]

> On-shoring

I thought "on-shoring" was already commonly used for the process that undoes off-shoring.

saghm 6 hours ago | parent | next [-]

How about "in-shoring"? We already have "insuring" and "ensuring", so we might as well add another confusingly similar sounding term to our vocabulary.

weebull 2 hours ago | parent | next [-]

How about we leave "...shoring" alone?

boring-human 40 minutes ago | parent | prev [-]

En-shoring?

tmtvl 2 hours ago | parent | prev | next [-]

Rubber-duckying... although a rubber ducky can't write code... infinite-monkeying?

biofox 3 minutes ago | parent [-]

In silico duckying

pferde 4 hours ago | parent | prev | next [-]

Corporate has been using the term "best-shoring" for a couple of years now. My best guess is that it means "off-shoring or on-shoring, whichever of the two is cheaper".

intended 6 hours ago | parent | prev | next [-]

Ai-shoring.

Tech-shoring.

johnisgood 5 hours ago | parent | next [-]

Would work, but with "snoring". :D

dzdt 4 hours ago | parent | prev [-]

vibe-shoring

heliumtera 3 hours ago | parent | prev [-]

We already have a perfect one

Slop;

dkdbejwi383 6 hours ago | parent | prev | next [-]

Fair enough but I am a programmer because I like programming. If I wanted to be a product manager I could have made that transition with or without LLMs.

sgarland 44 minutes ago | parent | next [-]

Agreed. The higher-ups at my company, like most places, are breathlessly talking about how AI has changed the profession: how we no longer need to code, but merely describe the desired outcome. They say this as though it's a good thing.

They’re destroying the only thing I like about my job - figuring problems out. I have a fundamental impedance mismatch with my company’s desires, because if someone hands me a weird problem, I will happily spend all day or longer on that problem. Think, hypothesize, test, iterate. When I’m done, I write it up in great detail so others can learn. Generally, this is well-received by the engineer who handed the problem to me, but I suspect it’s mostly because I solved their problem, not because they enjoyed reading the accompanying document.

raw_anon_1111 an hour ago | parent | prev [-]

I'm a programmer (well, half my job) because I was a short (still short), fat (I got better) kid with a computer in the 80s.

Now, the only reason I code, and have been coding since the week I graduated from college, is to support my insatiable addictions to food and shelter.

While I like seeing my ideas come to fruition, over the last decade my ideas were a lot larger than I could reasonably do in 40 hours without other people working on the projects I led. Until the last year and a half, when I could do it myself using LLMs.

Seeing my carefully designed spec, including all of the cloud architecture, get done in a couple of days with my hands on the wheel, when it would have taken at least a week with me doing some of the work while juggling a couple of other people, is life-changing.

GCUMstlyHarmls 9 hours ago | parent | prev | next [-]

I can't help but imagine training horses vs training cats. One of them is rewarding, a pleasure, beautiful to see, the other is frustrating, leaves you with a lot of scratches and ultimately both of you "agreeing" on a marginal compromise.

lambdaone 4 hours ago | parent | next [-]

Right now vibe coding is more like training cats. You are constantly pushing against the model's tendency to produce its default outputs regardless of your directions. When those default outputs are what you want - which they are in many simple cases of effectively English-to-code translation with memorized lookup - it's great. When they are not, you might as well write the code yourself and at least be able to understand the code you've generated.

kimixa 3 hours ago | parent [-]

Yup - I've likened it to working with juniors: often smart, with good understanding and "book knowledge" of many of the languages and tools involved, but you regularly have to step in and correct things, usually around local details and project specifics. But then the "junior" you work with changes every day, so you have to start again from scratch.

I think the current LLM tech needs a sea change to make that no longer the case: either massively increased context sizes, so a context can hold close to a career's worth of learning (without the tendency to start ignoring it that today's still-way-too-small-for-this windows already show at their larger end), or continuous training passes that fold those "learnings" directly into the weights themselves, which might be theoretically possible today but requires many orders of magnitude more compute than is available, even ignoring cost.

throwthrowuknow an hour ago | parent [-]

Try writing more documentation. If your project is bigger than a one-person team you need it anyway, and with LLM coding you effectively have an infinite team.

KptMarchewa 3 hours ago | parent | prev [-]

I've never seen a horse that scratches you.

rixed 3 hours ago | parent | prev | next [-]

To me it feels a bit like literate programming: it forces you to form a much more accurate idea of your project before you start. Not a bad thing, but it can also be wasteful when you eventually realise, after the fact, that the idea wasn't that good :)

fflluuxx 4 hours ago | parent | prev | next [-]

This is why people think less of artists like Damien Hirst and Jeff Koons: their hands have never once touched the art. They have no connection to the effort. To the process. To the trial and error. To the suffering. They've outsourced it, monetized it, and made it as efficient as possible. It's also soulless.

jiveturkey 9 hours ago | parent | prev | next [-]

> need to make it crystal clear

That's not an upside unique to LLMs versus human-written code. When writing it yourself, you also need to make it crystal clear; you just do that in the language of implementation.

balamatom 5 hours ago | parent | next [-]

And programming languages are designed for clarifying the implementation details of abstract processes; while human language is this undocumented, half grandfathered in, half adversarially designed instrument for making apes get along (as in, move in the same general direction) without excessive stench.

The humane and the machinic need to meet halfway - any computing endeavor involves not only specifying something clearly enough for a computer to execute it, but also communicating to humans how to benefit from the process thus specified. And that's the proper domain not only of software engineering, but the set of related disciplines (such as the various non-coding roles you'd have in a project team - if you have any luck, that is).

But considering the incentive misalignments which easily come to dominate in this space even when multiple supposedly conscious humans are ostensibly keeping their eyes on the ball, no matter how good the language machines get at doing the job of any of those roles, I will still intuitively mistrust them exactly as I mistrust any human or organization with responsibly wielding the kind of pre-LLM power required for coordinating humans well enough to produce industrial-scale LLMs in the first place.

What's said upthread about the wordbox continually trying to revert you to the mean as you're trying to prod it with the cowtool of English into outputting something novel, rings very true to me. It's not an LLM-specific selection pressure, but one that LLMs are very likely to have 10x-1000xed as the culmination of a multigenerational gambit of sorts; one whose outset I'd place with the ever-improving immersive simulations that got the GPU supply chain going.

Der_Einzige 9 hours ago | parent | prev [-]

Yet another example of "comments that are only sort of true because high-temperature sampling isn't allowed".

If you use LLMs at very high temperature with samplers that correctly keep your writing coherent (i.e. min-p, or better ones like top-h, p-less decoding, etc.), then "regression to the mean" literally DOES NOT HAPPEN!!!!
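For the curious, here is a toy, pure-Python sketch of what temperature plus min-p sampling does to a token distribution. `sample_min_p` and its defaults are made up for this illustration, not any library's API:

```python
import math
import random

def sample_min_p(logits, temperature=1.0, min_p=0.1):
    """Toy temperature + min-p sampler over a list of raw logits."""
    # Temperature scaling: higher temperature flattens the distribution.
    scaled = [l / temperature for l in logits]
    # Softmax, shifted by the max for numerical stability.
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Min-p filter: keep only tokens whose probability is at least
    # min_p times the top token's probability. This is the part that
    # keeps high-temperature output coherent: junk tokens in the long
    # tail are cut even when the distribution is nearly flat.
    threshold = min_p * max(probs)
    kept = [(i, p) for i, p in enumerate(probs) if p >= threshold]
    # Renormalize over the survivors and draw one token index.
    z = sum(p for _, p in kept)
    r = random.random() * z
    for i, p in kept:
        r -= p
        if r <= 0:
            return i
    return kept[-1][0]
```

At temperature 3.0 the softmax is nearly uniform, but a strict `min_p` still rules out the tail relative to the top token, which is the coherence-preserving effect being claimed above.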

hnlmorg 7 hours ago | parent | next [-]

Have you actually tried high temperature values for coding? Because I don’t think it’s going to do what you claim it will.

LLMs don't "reason" the way humans do; they follow text predictions based on statistical relevance. So raising the temperature is more likely to produce unexecutable pseudocode than a valid but more esoteric implementation of a problem.

Terr_ 6 hours ago | parent | next [-]

To put it another way, a high-temperature mad-libs machine will write a very unusual story, but that isn't necessarily the same as a clever story.

balamatom 4 hours ago | parent [-]

So why is this "temperature" not on, like, a rotary encoder?

So you can just, like, tweak it when it's working against your intent in either direction?

bob1029 5 hours ago | parent | prev [-]

High temperature seems fine for my coding uses on GPT5.2.

Code that fails to execute or compile is the default expectation for me. That's why we feed compile and runtime errors back into the model each time it proposes something.

I'd much rather the code sometimes not work than to get stuck in infinite tool calling loops.
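That error-feedback loop can be sketched in a few lines. Here `generate` is a hypothetical stand-in for whatever model call you use (not a real API), and the check is only Python's own syntax compile, not a full build-and-test cycle:

```python
def fix_until_compiles(generate, prompt, max_rounds=5):
    """Repeatedly ask for code, feeding compile errors back into the prompt.

    `generate(prompt) -> str` is a hypothetical hook for an LLM call.
    A real loop would also run the code and feed runtime errors back
    the same way; only syntax is checked here.
    """
    for _ in range(max_rounds):
        code = generate(prompt)
        try:
            compile(code, "<llm-output>", "exec")  # syntax check only
            return code
        except SyntaxError as err:
            # Append the failure so the next attempt can correct it.
            prompt += f"\nPrevious attempt failed to compile: {err}"
    return None  # give up rather than loop forever
```

The bounded `max_rounds` is what avoids the infinite tool-calling loops mentioned above: a model that never produces valid code eventually returns `None` instead of spinning.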

adevilinyc 9 hours ago | parent | prev [-]

How do you configure LLM temperature in coding agents, e.g. opencode?

kabr 8 hours ago | parent | next [-]

https://opencode.ai/docs/agents/#temperature

set it in your opencode.json
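Per the linked docs it's a per-agent setting; roughly like this (field names are from the docs at the time of writing and may have changed, so check the current schema before relying on them):

```json
{
  "$schema": "https://opencode.ai/config.json",
  "agent": {
    "build": {
      "temperature": 0.3
    }
  }
}
```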

Der_Einzige 8 hours ago | parent | prev [-]

You can't without hacking it! That's my point! The only places where you easily can are via the API directly, or "coomer" frontends like SillyTavern, Oobabooga, etc.

Same problem with image generation (lack of support for different SDE solvers, the image-generation analogue of LLM sampling), but it has its own "coomer" tools, i.e. ComfyUI or Automatic1111.

yoyohello13 8 hours ago | parent [-]

Once again, porn is where the innovation is…

dizhn 6 hours ago | parent [-]

Please.. "Creative Writing"