grey-area 3 days ago

Or maybe it's just not as good as it's been sold to be. I haven't seen any small teams doing very big things with it. Which ones are you thinking of?

sim7c00 3 days ago | parent | next [-]

You're not wrong. The only 'sane' approach I've seen with vibe coding is making a PoC to see if some concept works, then rewriting it entirely to make sure it's sound.

Besides just weird or broken code, anything exposed to user input is usually severely lacking in sanity checks, etc.

LLMs are not useless for coding, but IMHO letting LLMs do the coding will not yield production-grade code.

bbarnett 3 days ago | parent | next [-]

Koko the gorilla understood language, but most others of her ilk simply make signs because a thing will happen.

Move hand this way and a human will give a banana.

LLMs have no understanding of the underlying language at all; they've just seen, a billion times, that a task looks like such and such and has these tokens after it.
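
(To make that concrete: a toy bigram sketch in Python. This is nothing like a real transformer, but it shows the "seen these tokens after those" mechanic in its purest form. The corpus and names here are invented for illustration.)

    from collections import Counter, defaultdict
    import random

    # "Train": count which token followed which in the corpus.
    corpus = "move hand this way and a human will give a banana".split()
    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    # "Predict": sample the next token in proportion to how often it
    # was seen after the previous one. No meaning involved anywhere.
    def next_token(prev):
        counts = follows[prev]
        return random.choices(list(counts), weights=list(counts.values()))[0]

    print(next_token("a"))  # "human" or "banana", by frequency alone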

SirHumphrey 3 days ago | parent | next [-]

What does it matter whether they have an understanding of the underlying language or not? Heck, do humans even have an "understanding of the underlying language"? What does that even mean?

It's a model. It either predicts usefully or not. How it works is mostly irrelevant.

shagmin 3 days ago | parent | next [-]

Defining exactly what that means is one endeavor. But it matters for the how, because whatever "understanding" may mean, lacking it implies a drastically limited set of capabilities, a ceiling, etc., compared to having it.

sim7c00 3 days ago | parent | prev | next [-]

Interesting take. I don't know a lot about grammar, yet in my own language I can speak fairly OK...

All I know about these LLMs is that even if they understand language, or can create it, they know nothing of the subjects they speak of.

Copilot told me to cast an int to str to get rid of an error.

Thanks, Copilot. This was in kernel code.

Glad I didn't do it :/. I just closed the browser and opened the man pages. I get nowhere with these things. It feels like you need to understand so much that it's probably less typing to just write the code yourself. Code is concise and clear, after all, and mostly unambiguous. Natural language, on the other hand...

I do like it as a bit of a glorified Google, but looking at the code it outputs, my confidence in its findings lessens with every prompt.

cstoner 3 days ago | parent [-]

> All I know about these LLMs is that even if they understand language, or can create it, they know nothing of the subjects they speak of.

As a recent example of this, I was curious about how oxygen-depleted blood gets back to the heart. Pumping blood out made sense to me, but the return path was less obvious.

So I asked ChatGPT whether the heart sucks in blood from the veins.

It told me that the heart does not suck in the blood; it creates a negative pressure zone that causes the blood to flow into it... :facepalm:

Sure, my language was non-technical/imprecise, but I bet if I asked a cardiologist about this they would have said something like "That's not the language I would have used, but basically."

I don't know why, but lately I've been getting a lot of cases where these models contradict themselves even within the same response. I'm working out a lot (debating a triathlon), and it told me to swim and do upper-body weightlifting on the same day to "avoid working out the same muscle group on the same day". Similarly, it told me to run and do leg workouts on the same day.

> I do like it as a bit of a glorified Google, but looking at the code it outputs, my confidence in its findings lessens with every prompt.

I'm having the exact same reaction. I'm finding they are still more useful than Google, even with an error rate close to 70%, but I am quickly learning that you can't trust anything they output and should double-check everything.

sim7c00 8 hours ago | parent | next [-]

It seems to be looping through possible answers, just going back to the same bad answers, hoping you forgot it already gave them. We're training incredibly expensive and eloquent goldfish.

Maybe this is the effect of LLMs interacting with each other, the dumbing down. GPT-6 will be a Markov chain again, and GPT-7 will know that f!sh go m00!

hattmall a day ago | parent | prev | next [-]

AI is impressive on a subject you know nothing about. If you ask it about something you already know, it becomes far less impressive.

ryandrake 3 days ago | parent | prev | next [-]

I think that, more often than we'd like to admit, we humans are also not really thinking about or understanding what we're communicating; we're just outputting the statistically most likely next word, over and over.

grey-area a day ago | parent | prev | next [-]

Without understanding you can't be creative or fix mistakes. It matters a lot.

anuramat 3 days ago | parent | prev | next [-]

Nobody knows what intelligence is, yet somehow everyone has a strong opinion on what it isn't; after all, how could piecewise affine transformations/Markov chains/differential equations EVER do X?

Piskvorrr 3 days ago | parent | prev [-]

In which case...what good is a model that predicts semi-randomly? Oh.

("But it works - when it works" is a tautology, not a useful model)

anuramat 3 days ago | parent [-]

What does "semi-random" even mean? Are humans not "semi-random" in the same sense?

jedwards1211 3 days ago | parent | prev [-]

There’s been a lot of criticism that Koko’s language abilities were overblown and her expressions were overinterpreted as well.

A4ET8a8uTh0_v2 3 days ago | parent | prev | next [-]

The PoC approach seems to work for me lately. It still takes effort to convince my manager that it makes sense to devote time to polishing things afterwards, but some of the initial reticence is mitigated.

Edit: Not a programmer. Just a guy who needs some stuff done for the things I work on.

actinium226 3 days ago | parent | prev [-]

For a second I thought PoC meant Piece of Crap XD

michaeldoron 3 days ago | parent | prev | next [-]

A team of 9 people made Base44, a product for vibe-coding apps, and sold it for $80M within 6 months.

https://techcrunch.com/2025/06/18/6-month-old-solo-owned-vib...

piva00 3 days ago | parent | next [-]

That's just an example of surfing the incestuous hype: they created a vibe-coding tool that was bought by Wix to help vibe-code other stuff.

Is there any example of a successful company created mostly or entirely by "vibe coding" that isn't itself part of the AI hype? I haven't seen any; all the examples so far are similar to yours.

wredcoll 3 days ago | parent [-]

Yeah, it's like those crypto/NFT "products" that just let you move crypto/NFTs around.

ponector 3 days ago | parent | prev [-]

Did the team of 9 create the product, or vibe-code it?

bubblyworld 3 days ago | parent | prev | next [-]

As always, two things can be true. Ignore both the hucksters and the people loudly denigrating everything LLM-related, and somewhere in between you find the reality.

I'm in a tiny team of 3 writing B2B software in the energy space, and Claude Code is a godsend for the fiddly-but-brain-dead parts of the job (config stuff, managing cloud infra, one-and-done scripts, little single-page dashboards, etc.).

We've had much less success with the more complex things, like maintaining the various linear programming/neural net models we've written. It's really good at breaking stuff in subtle ways, like removing L2 regularisation from a VAE while it still visually looks implemented (a sketch of that failure mode is below). But personally I still think the juice is worth the squeeze, mainly because it saves me mental energy I can use elsewhere.
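
To make the "looks implemented but isn't" breakage concrete, here's a hypothetical minimal sketch (PyTorch, invented names, not our actual model code): the penalty term is still computed, so a quick visual review sees "L2 regularisation" in the function, but the term never reaches the loss the optimiser minimises.

    import torch

    def regularised_loss(model, recon, x, weight_decay=1e-4):
        # Reconstruction term: how well the output matches the input.
        recon_loss = torch.nn.functional.mse_loss(recon, x)

        # L2 penalty over all parameters: still present in the source,
        # so a quick read suggests the model is regularised.
        l2 = sum(p.pow(2).sum() for p in model.parameters())

        # The subtle break: l2 is computed but never added, so the
        # optimiser never sees it.
        return recon_loss
        # Correct version: return recon_loss + weight_decay * l2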

datadrivenangel 3 days ago | parent | prev [-]

I've seen small teams of a few people use AI to write non-trivial software services that are useful enough to get users and potentially viable as businesses.

We'll see how well they scale.