| |
| ▲ | forty 4 days ago | parent | next [-] | | I assume they have an opinion on the topic, but that doesn't mean they are right (or wrong). Think of driving a car. If the shortest path (in terms of travel time) goes through a traffic jam, and there is a longer path where you can drive much faster, it's very likely that most people will have the feeling of being more efficient on the longer path. Also, the slowdown from using LLMs might be more subtle and harder to measure. It might show up at code review time, in handling more bugs and incidents, in harder maintenance, in recovering your deleted DB ;)... | | |
| ▲ | epolanski 4 days ago | parent | next [-] | | Apologies, but from antirez[1] to many other brilliant 1000x developers, plenty advocate for LLMs speeding up the process. I can see the impact on my own output in both quantity and quality (LLMs can come up with ideas I would not have come up with, and are very useful for tinkering and quickly testing different solutions). As with any tool, it is up to the user to make the best of it and understand its limits. At this point it is clear that naysayers: 1) either don't understand our job 2) or haven't given AI tools proper stress testing in different conditions 3) or are luddites being defensive about the "old" world [1] https://www.antirez.com/news/154 | | |
| ▲ | forty 4 days ago | parent | next [-] | | From your source ``` The fundamental requirement for the LLM to be used is: don’t use agents or things like editor with integrated coding agents. You want to: * Always show things to the most able model, the frontier LLM itself. * Avoid any RAG that will show only part of the code / context to the LLM. This destroys LLMs performance. You must be in control of what the LLM can see when providing a reply. * Always be part of the loop by moving code by hand from your terminal to the LLM web interface: this guarantees that you follow every process. You are still the coder, but augmented. ``` Not sure about you, but I think this process, which your source seems to present as a prerequisite for using LLMs efficiently (and which seems like good advice to me too, and is actually very similar to how I use LLMs myself), must be followed by less than 1% of LLM users. | |
| ▲ | forty 4 days ago | parent | prev | next [-] | | I wish I had only antirezes working on my projects; if that were the case, I would be much more confident that some significant time might be saved with LLMs. | |
| ▲ | zelphirkalt 4 days ago | parent | prev [-] | | 1000x developers, ahahaha! Come on now, this is too comical. Even 10x is extremely rare. The deciding factor is not speed. It is knowledge. Will I be able to dish out a great compiler in a week? Probably not. But an especially knowledgeable compiler engineer might just do it, for a simple language. Situations like this are the only 10x we have in our profession, if we don't count completely incapable people. The use of AI doesn't make you 1000x. It might let you output infinitely more AI slop, but then you are just pushing the maintenance burden to a later point in time. In total it might make your output completely useless in the long run, making you a 0x dev in the worst case. |
| |
| ▲ | eichin 4 days ago | parent | prev [-] | | We've known for decades that self-reported time perception in computer interactions is drastically off (Jef Raskin, The Humane Interface in particular) so unless they have some specifically designed external observations, they are more likely to be wrong. (There have been more recent studies - discussed here on HN - about perception wrt chat interfaces for code specifically - that confirm the effect on modern tools.) |
| |
| ▲ | hvb2 4 days ago | parent | prev [-] | | There are two problems there:
1. 'faster' is subjective, since you cannot do the same task twice without the second pass being biased by the learnings from the first.
2. While speed might be one measure, I've rarely found speed to be the end goal. Unless you're writing something throwaway, you'll be reading what was written many times over. Long-term maintainability is at odds with speed, in most cases. | | |
| ▲ | epolanski 4 days ago | parent [-] | | You're implying that LLMs make maintainability worse, when the opposite could happen if you know how to use the tools. | | |
| ▲ | zelphirkalt 4 days ago | parent [-] | | But the tools are trained on tons and tons of mediocre work and will have a strong tendency to output the same. Please share the prompts you use to prevent mediocre code from entering the code bases you work on. So far almost no code I got from LLMs was acceptable to keep as suggested. I found it useful in cases when I myself didn't know the typical (!) way to do things with some framework, but even then I often opted for another approach, depending on my project's goals and design. It is sometimes useful for getting unstuck, but oh boy, I wouldn't let it code for me. Then I would have to review so much bad code, it would be very frustrating. |
|
|
|