rafaelmn 4 days ago

If you're trusting core contributors without AI, I don't see why you wouldn't trust them with it. Hiring a few core devs to work on it should be a rounding error for Anthropic, and a huge flex if they're actually able to deliver.
mort96 4 days ago

I trust people to understand the code they write. I don't trust them to understand code they didn't write.
t43562 4 days ago

It's extremely tempting to write stuff and not bother to understand it, similar to the way most of us don't decompile our binaries and look at the assembly when we write C/C++. So, should I trust an LLM as much as a C compiler?
jddj 4 days ago

What if it impairs judgement?