| ▲ | danielvaughn 6 days ago |
| The approach I've taken to "vibe coding" is to just write pseudo-code and then ask the LLM to translate. It's a very nice experience because I remain the driver, instead of sitting back and acting like the director of a movie. And I also don't have to worry about trivial language details. Here's a prompt I'd make for fizz buzz, for instance. Notice the mixing of english, python, and rust. I just write what makes sense to me, and I have a very high degree of confidence that the LLM will produce what I want. fn fizz_buzz(count):
loop count and match i:
% 3 => "fizz"
% 5 => "buzz"
both => "fizz buzz"
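For reference, one plausible translation an LLM might produce from that mixed pseudocode (a Python sketch; the prompt leaves the non-matching case and 1-vs-0 indexing unspecified, so those details are assumptions):

```python
def fizz_buzz(count):
    """Hypothetical translation of the mixed English/Python/Rust pseudocode."""
    out = []
    for i in range(1, count + 1):
        if i % 3 == 0 and i % 5 == 0:   # "both" arm
            out.append("fizz buzz")
        elif i % 3 == 0:
            out.append("fizz")
        elif i % 5 == 0:
            out.append("buzz")
        else:
            out.append(str(i))          # fall-through case, assumed
    return out
```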
|
|
| ▲ | jerf 6 days ago | parent | next [-] |
| That's a really powerful approach because LLMs are very, very strong at what is basically "style transfer". Much better than they are at writing code from scratch. One of my most recent big AI wins was going the other way; I had to read some Mulesoft code in its native storage format, which is a fairly nasty XML encoding scheme, mixed with code, mixed with other weird things, but asking the AI to just "turn this into pseudocode" was quite successful. It's also very good at language-to-language transfer. Not perfect, but much better than doing it by hand. It's still important to validate the transfer, since it does get a thing or two wrong every few dozen lines, but it's still way faster than doing it from scratch and good enough to work with if you've got testing. |
| |
| ▲ | hardwaregeek 5 days ago | parent | next [-] | | My mental model for LLMs is that they’re a fuzzy compiler of sorts. Any kind of specification, whether that’s BNF or a carefully written prompt, will get “translated”. But if you don’t have anything to translate, it won’t output anything good. | | |
| ▲ | gyomu 5 days ago | parent | next [-] | | > if you don’t have anything to translate it won’t output anything good. One of the greatest quotes in the history of computer science: “On two occasions I have been asked, ‘Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?’ ... I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.” | |
| ▲ | danielvaughn 5 days ago | parent | prev [-] | | Yep, exactly. "Garbage in, garbage out" still applies. |
| |
| ▲ | hangonhn 5 days ago | parent | prev [-] | | I agree with that assessment but that makes me wonder if a T5 style LLM would work better than a decoder only style LLM like GPT or Claude. Has anyone tried that? |
|
|
| ▲ | globular-toast 5 days ago | parent | prev | next [-] |
| Is this seriously quicker than just writing in a language that you know? I mean, you're not benefitting from syntax highlighting, autocompletion, indentation, snippets, etc. This looks like more work than what I do now, at a higher cost and with insane latency. |
| |
| ▲ | CJefferson 5 days ago | parent | next [-] | | I find it particularly useful when I would need to look up lots of library functions I don't remember. For example, in python I recently did something (just looked it up: for ever my file in directory 'd' ending '.capture':
Read file
Split every line into A=B:C
Make a dictionary send A to [B,C]
Return a list of pairs [filename, dict from filename]
I don't python enough to remember reading all files in a directory, or splitting strings. I didn't even bother proof reading the English (as you can see) | | |
| ▲ | stahorn 5 days ago | parent [-] | | Same for when you need to write some short bash script a few times per year. It's really nice to not have to relearn how it all works each time! |
| |
| ▲ | danielvaughn 5 days ago | parent | prev | next [-] | | Those are just features waiting to be developed. I'm currently experimenting with building LLM-powered editor services (all the stuff you mentioned). It's not there yet, but as local models become faster and more powerful, it'll unlock. This particular example isn't very useful, but anecdotally it feels very nice to not need perfect syntax. How many programmer hours have been wasted because of trivial coding errors? | | |
| ▲ | globular-toast 5 days ago | parent | next [-] | | > How many programmer hours have been wasted because of trivial coding errors? Historically probably quite a lot, but with a decent editor and tools like gofmt that became popular in the past 10 years I'd say syntax is just not a problem any more. I can definitely recall the frustration of a missing closing bracket in HTML in the 90s, but nowadays people can turn out perfectly syntactically correct code on day 1 of a new language. | | |
| ▲ | danielvaughn 5 days ago | parent [-] | | That’s fair. Not to shift the goal post but my intuition has shifted recently as to what I’d consider a “trivial” problem. API details, off-by-one errors, and other issues like that are what I’d lump into that category. Easy way to say it is that source code requires perfection, whereas pseudo-code takes the pressure off of that last 10%, and IMO that could have significant benefits for cognitive load if not latency. Still all hypothetical, and something I’m actively experimenting with. Not a hill I’m gonna die on, but it’s super fun to play and imagine what might be possible. | | |
| |
| ▲ | 5 days ago | parent | prev [-] | | [deleted] |
| |
| ▲ | motorest 5 days ago | parent | prev [-] | | > Is this seriously quicker than just writing in a language that you know? Yes. Well, it depends. Most of the prompts specifying requirements and constraints can be reused, so you don't need to reinvent the wheel each time you prompt a LLM to do something. The same goes for test suites: you do not need to recreate a whole test suite whenever you touch a feature. You can even put together prompt files for specific types of tasks, such as extending test coverage (as in, don't touch project code and only append unit tests to the existing set) or refactoring work (as in, don't touch tests and only change project code). Also, you do not need to go for miracle single-shot sessions or purist all-or-nothing prompts. A single prompt can fill in most of the code you require to implement a feature, and nothing prevents you from tweaking the output. It is seriously quicker because people like you and me use LLMs to speed up how the boring stuff is implemented. Guides like this are important to share some lessons on how to get LLMs to work and minimize drudge work. |
|
|
| ▲ | unshavedyak 5 days ago | parent | prev | next [-] |
| I do something similar, merely writing out the function signatures i want in code. The more concrete the idea i have in my head, the more i outline: signatures, tests, etc. However this is far less vibe coding and more actual work with an LLM imo. Overall i'm not finding much value in vibe coding. The LLM will "create value" that quickly starts to become an albatross of edge cases and unreviewed code. The bugs will work their way in and prevent the LLM from making progress, and then i have to dig in to find the sanity - which is especially difficult when the LLM dug that far. |
| |
| ▲ | danielvaughn 5 days ago | parent [-] | | Yeah I'm nowhere near ready to loosen the leash. Show me a long-running agent that can get within 90% of its goal, then I'll be convinced. But right now we barely even have the tools to properly evaluate such agents. |
|
|
| ▲ | animal531 5 days ago | parent | prev | next [-] |
| I've had great success with this with pseudo-code from research papers. I don't always understand the syntax but the LLM has no such problems. |
|
| ▲ | kristoff200512 5 days ago | parent | prev | next [-] |
| I initially used natural language as prompts, but the code output wasn’t ideal. When I listed the steps it should follow, I found that it executed them very well. |
|
| ▲ | j45 5 days ago | parent | prev | next [-] |
| Pseudo code is a great idea, similar to explaining how something should run |
|
| ▲ | serf 5 days ago | parent | prev [-] |
| I do something like that when I get down to the function level and there's an algorithm that is either struggling in its role or poorly optimized, but the models that excel at codebase architecture have their hands tied behind their backs by that level of micromanaging. The results are good because, as another replier mentioned, LLMs are good at style transfer when given a rigid ruleset -- but this technique sometimes just means extra work at the operator level to needlessly define something the model is already very aware of. "write a fizzbuzz fn" will create a function with the same output; "write a fizzbuzz function using modulo" will get you closer to verbatim -- but my point here is that, in the grand scheme of "will this get me closer to alleviating typing-caused-RSI pain", the pseudocode usually only needs to get whipped out when the LLM does something braindead at the function level. |
| |
| ▲ | nine_k 5 days ago | parent [-] | | But "write a fizzbuzz fn" has one important assumption / limitation: the LLM should have seen a ton of fizzbuzz implementations already to be able to respond. Hence, LLMs can be helpful for producing boilerplate / glue code, the kind that has already been written in many variations but cannot be directly reused. Anything novel you should outline at a more detailed level. |
|