globular-toast 5 days ago
Is this seriously quicker than just writing in a language that you know? I mean, you're not benefiting from syntax highlighting, autocompletion, indentation, snippets etc. This looks like more work than what I do now, at a higher cost and with insane latency.
CJefferson 5 days ago
I find it particularly useful when I would need to look up lots of library functions I don't remember. For example, in Python I recently used it for exactly that kind of task (I just looked up the prompt I used):
I don't use Python enough to remember how to read all the files in a directory, or how to split strings. I didn't even bother proofreading the English in the prompt (as you can see).
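For what it's worth, the kind of snippet you get back for that task looks roughly like this; a minimal sketch assuming the goal is to read every file in a directory and split each line into fields (the directory name and the delimiter are placeholders):

    from pathlib import Path

    # Read every regular file in a directory and split each line into fields.
    # "data" and the comma delimiter are illustrative, not from the original prompt.
    for path in Path("data").iterdir():
        if path.is_file():
            for line in path.read_text().splitlines():
                fields = line.split(",")
                print(path.name, fields)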
danielvaughn 5 days ago
Those are just features waiting to be developed. I'm currently experimenting with building LLM-powered editor services (all the stuff you mentioned). It's not there yet, but as local models become faster and more powerful, it'll unlock. This particular example isn't very useful, but anecdotally it feels very nice to not need perfect syntax. How many programmer hours have been wasted because of trivial coding errors?
motorest 5 days ago
> Is this seriously quicker than just writing in a language that you know?

Yes. Well, it depends. Most of the prompts specifying requirements and constraints can be reused, so you don't need to reinvent the wheel each time you prompt an LLM to do something. The same goes for test suites: you do not need to recreate a whole test suite whenever you touch a feature. You can even put together prompt files for specific types of task, such as extending test coverage (as in, don't touch project code and only append unit tests to the existing set) or refactoring work (as in, don't touch tests and only change project code); see the sketch after this comment.

Also, you do not need to go for miracle single-shot sessions, or purist all-or-nothing prompts. A single prompt can fill in most of the code you need to implement a feature, and nothing prevents you from tweaking the output.

It is seriously quicker because people like you and me use LLMs to speed up how the boring stuff gets implemented. Guides like this are important for sharing lessons on how to get LLMs to work and minimize drudge work.
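As a rough illustration, a reusable prompt file for the test-coverage case could look something like this (the file name, paths, and wording are purely illustrative, not from any particular tool or from the parent comment):

    # extend-tests.prompt (hypothetical)
    You are extending test coverage for this repository.
    Constraints:
    - Do not modify any file under src/.
    - Only append new unit tests to the existing files under tests/.
    - Follow the naming and assertion style already used in tests/.
    - If a behaviour is ambiguous, add a test documenting the current behaviour
      and flag it in a comment instead of changing project code.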