wizzwizz4 | 2 hours ago
Humans also have the ability to introspect. Ultimately, (nearly) every software project is intended to provide a service to humans, and most humans are similar in most ways: "what would I want it to do?" is a surprisingly reliable heuristic for dealing with ambiguity, especially if you know where you should and shouldn't expect it to be valid. The best LLMs can manage is "what's statistically plausible behaviour for descriptions of humans in the corpus", which is not the same thing at all. Sometimes, I imagine, that might be more useful; but for programming (where, assuming you're not reinventing wheels or scrimping on your research, you're often encountering situations that nobody has encountered before), an alien mind's extrapolation from statistically plausible observations of human behaviour is not useful. (I'm using "alien mind" metaphorically, since LLMs do not appear particularly mind-like to me.)
bluGill | 2 hours ago
Most companies I've worked for have held "know the customer" events so that developers learn what the customers really do; even if we are not in their domain, we come away with a good idea of what they care about.