pulse7 5 days ago
The difference between an LLM and a very junior programmer: the junior programmer will learn and change; the LLM won't! The more instructions you put in the prompt, the more get forgotten and the more it bounces back to the "general world-wide average". And on the next prompt you must start all over again... Not so with junior programmers.
irb 5 days ago
This is the only thing that makes junior programmers worthwhile. Any task will take longer and probably be more work for me if I give it to a junior programmer than if I just do it myself. The reason I give tasks to junior programmers is so that they eventually become less junior and can actually be useful. Having a junior programmer assistant who never gets better sounds like hell.
buserror 5 days ago
Ahaha, you likely haven't seen as many junior programmers as I have then! </jk> But I agree completely: some juniors are a pleasure to watch bloom. It's nice when one day you see their eyes shine with a "wow, this is so cool, I never realized you made that like THAT for THAT reason" :-)
n4r9 5 days ago
The other big difference is that you can spin up an LLM instantly. You can scale up your use of LLMs far more quickly and conveniently than you can hire junior devs. What used to be an occasional annoyance risks becoming a widespread rot. | ||||||||
relistan 5 days ago
My guess is that you're letting the context get polluted with all the stuff it's reading in your repo. Try using subagents to keep the top-level context clean. It mostly starts forgetting rules when the context is so full of other stuff that the share taken up by the rules becomes small.
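For illustration, here is a minimal sketch of that idea in plain Python, not any particular tool's API; run_llm() is a hypothetical helper standing in for whatever chat-completion call you actually use. The bulky repo reading happens in throwaway contexts and only short summaries flow back up, so the rules keep a large share of the parent context:

    # Minimal sketch, assuming a hypothetical run_llm() helper that sends a
    # message list to whatever chat-completion API you use and returns the
    # reply text. The point is the shape: bulky file reading happens in a
    # disposable context, and only short summaries re-enter the parent
    # context, so the rules keep a large share of it.

    RULES = "Follow the project style guide. Never edit generated files."

    def run_llm(messages: list[dict]) -> str:
        """Placeholder for your actual chat API call."""
        raise NotImplementedError

    def summarize_file(path: str) -> str:
        """Subagent: read one file in a fresh context, return a short summary."""
        with open(path, encoding="utf-8") as f:
            source = f.read()
        # The full file contents never touch the parent conversation.
        return run_llm([
            {"role": "system", "content": "Summarize this file in at most 5 bullets."},
            {"role": "user", "content": source},
        ])

    def plan_change(task: str, paths: list[str]) -> str:
        """Parent: sees only the rules, the task, and the compact summaries."""
        notes = "\n\n".join(f"{p}:\n{summarize_file(p)}" for p in paths)
        return run_llm([
            {"role": "system", "content": RULES},
            {"role": "user", "content": f"{task}\n\nRelevant files:\n{notes}"},
        ])

Whether a given agent framework implements its subagents this way is an assumption; the point is only that the rule text stays proportionally large in the context that does the planning.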
muzani 5 days ago
They're automations. You have to program them like every other script. | ||||||||
freilanzer 5 days ago
The learning is in the model versions.