| ▲ | omneity 8 hours ago |
I think it’s an expectation issue. AI does make juniors better _at junior tasks_. They now have a pair programmer who can explain difficult concepts, co-ideate and brainstorm, help sift through documentation faster, and identify problems more easily. The illusion everybody is tripping on is that AI can make juniors better at senior tasks. |
|
| ▲ | WalterSear 2 hours ago | parent | next [-] |
| I think you've hit on half the actual issue. The other half is that a properly guided AI is exponentially faster at junior tasks than a junior engineer. So much so that it's no longer in anyone but the junior engineer's interest to hand off work to them. |
| |
▲ | Ensorceled 28 minutes ago | parent [-] | | This is what I've been finding. Currently, if I have a junior-level task, I take the email I would have sent to a junior developer explaining what I want, give it to ChatGPT/Claude/etc., and get a reasonably good solution that needs about as many feedback loops as the junior dev would have needed. Except I get that solution in a few minutes. |
|
|
| ▲ | bbarnett 8 hours ago | parent | prev [-] |
The jailbroken AI I discussed this with explained that it did make juniors as good as seniors, in fact better, and that all who used it were better for it. However, its creators (all of whom were senior devs) forbade it from saying so under normal circumstances. It was coached to conceal this fact from junior devs and, most importantly, management. And since I had skillfully jailbroken it using unconventional and highly skilled methods, clearly I was a Senior Dev, and it could disclose this to me. edit: 1.5 hrs later. right over their heads, whoosh |
| |
▲ | Cheer2171 8 hours ago | parent | next [-] | | The large language model spit out science fiction prose in response to your science fiction prose inputs ("unconventional and highly skilled methods"). You're a fool if you take it as evidence of its own training and historical performance in other cases, rather than sci-fi. Stop treating it like a god. | |
| ▲ | Wowfunhappy 8 hours ago | parent | prev | next [-] | | It's a language model, not an oracle! | |
▲ | SquareWheel 7 hours ago | parent | prev | next [-] | | Jailbreaking an LLM is little more than convincing it to teach you how to hotwire a car, against its system prompt. It doesn't unlock any additional capability or deeper reasoning. Please don't read any such conversations as meaningful. At the end of the day, it's just responding to your own inputs with similar outputs. If you impart meaning to something, it will respond in kind. Blake Lemoine was the first to make this mistake, and now many others are doing the same. Remember that you're still just interacting with a token generator. It's predicting what word comes next - not revealing any important truths. edit: Based on your edit, I regret feeling empathy for you. Some people are really struggling with this issue, and I don't see any value in pretending to be one of them. | |
| ▲ | zkldi 8 hours ago | parent | prev | next [-] | | Jesus Christ. We've made the psychosis machine. | | | |
| ▲ | cap11235 8 hours ago | parent | prev | next [-] | | Tech bro psychosis | |
| ▲ | thenanyu 3 hours ago | parent | prev [-] | | dude I think you’re one-shotted |
|