thewebguyd 2 days ago

> An example of a cycle looks like "now add an Options tab that lets me customize the global hotkey" where I'm only an end-user

Which is a prompt that someone with experience would write. Your average, non-technical person isn't going to prompt something like that; they're going to say "make it so I can change the settings" or something else super vague and struggle. We all know how difficult it is to define software requirements.

Just because an LLM wrote the actual code doesn't mean your prompts weren't more effective because of your experience and expertise in building software.

Sit someone down in front of an LLM with zero development or UI experience at all and they will get very different results. Chances are they won't even specify "macOS menu bar app" in the prompt and the LLM will end up trying to make them a webapp.

Your vibe coding experience just proves my initial point, that these tools are useful for those who already have experience and can lean on that to craft effective prompts. Someone non-technical isn't going to make effective use of an LLM to make software.

hombre_fatal 2 days ago

Counterpoint: https://news.ycombinator.com/item?id=46234943

Your original claim:

> The hype is all about "this tech will enable non-experts to do things they couldn't do before"

Are you saying that a prompt like "make a macOS weather app for me" and "make an options menu that lets me set my location" are only something an expert can do?

I need to know what you think their expertise is in.

ModernMech 2 days ago

Here's how I look at it as a roboticist:

The LLM prompt space is an n-dimensional space where you can start at any point; the LLM then carves a path through that space for some number of tokens, following the instructions you provided, until it stops and asks for another direction. This frames LLM prompt coding as a sort of navigation task.

The problem is difficult because at every decision point, there's an infinite number of things you could say that could lead to better or worse results in the future.

Think of a robot going down the sidewalk. It controls itself autonomously, but it stops at every intersection and asks "where to next, boss?" You can tell it to cross the street, or to drive directly into traffic, or to do any number of other things that could bring it closer to its destination, take it further away, or even obliterate it.

In the concrete world, it's easy to direct this robot, and to direct it such that it avoids bad outcomes, and to see that it's achieving good outcomes -- it's physically getting closer to the destination.

But when prompting in an abstract sense, it's hard to see where the robot is going unless you're an expert in that abstract field. As an expert, you know the right way to go is across the street. As a novice, you might tell the LLM to just drive into traffic, and it will happily oblige.

The other problem is feedback. When you direct the physical robot to drive into traffic, you witness its demise; its fate is catastrophic, and if you didn't realize the danger before, you see it then. The robot is also incapacitated, so it can't report falsely about continued progress.

But in the abstract case, the LLM isn't obliterated; it continues to report on progress that isn't real, and as a non-expert, you can't tell it's been flattened into a pancake. The whole output chain is now completely and thoroughly off the rails, but you can't see the smoldering ruins of your navigation instructions because it's told you "Exactly, you're absolutely right!"