allenu 14 hours ago
> I think the real divide is over quality and standards.

I think there are multiple dimensions people fall on regarding this issue, and the divide comes from where everyone lands on those dimensions. Quality and standards are probably among them, but I think risk tolerance/aversion could be behind some of how you look at quality and standards. If you're high on risk-taking, you might be more willing to forego verifying all LLM-generated code, whereas if you're very risk-averse, you're going to want to go over every line of code to make sure it works just right, for fear of anything blowing up. Desire for control is probably related, too. If you desire more control over how something is achieved, you probably aren't going to like a machine doing a lot of the thinking for you.
bandrami 7 hours ago
This. My aversion to LLMs is much more that I have low risk tolerance and the tails of the distribution are not well known at this point. I'm more than happy to let others step on the land mines for me and see if there's better understanding in a year or two.
aleph_minus_one an hour ago
I think it's a little bit more complicated. I, for example, would claim to be rather risk-tolerant, but I (typically) don't like AI-generated code. If one takes the model in your post, the resolution of this apparent paradox is simple:

- I deeply love highly elegant code, which the AI models do not generate.
- I cannot stand people (and AIs) bullshitting me; it makes me furious. I thus have an insanely low tolerance for conmen (and conwomen and conAIs).