dasil003 a day ago
> We've all experienced that stare when talking to someone who does not have sufficient depth of understanding in a topic.

I think you're really putting your finger on something here. LLMs have blown us away because they can interact with language in a way very similar to humans, and in fact it approximates how humans operate in many contexts when they lack depth of understanding. Computers never could do this before, so it's impressive and novel. But despite how impressive it is, humans who operated this way were never actually generating significant value. We may have pretended they were for social reasons, and there may even have been some real value in the human camaraderie and connections they were a part of, but it is certainly not of value when automated.

Prior to LLMs, just being able to read and write code at a pretty basic level was deemed an employable skill, but because it was not a natural skill for lots of humans, it was also a market for lemons, and basic coding was overvalued by those who did not actually understand it. But of course the real value of coding has always been to create systems that serve human outcomes, and the outcomes that are desired are always driven by human concerns that are probably inscrutable to something without the same wetware as us. Hell, it's hard enough for humans to understand each other half the time, but even when we don't fully understand each other, the information conveyed through non-verbal cues, and the familiarity with personalities and connotations that we only learn through extended interaction, provide a robust baseline which text alone can never capture.

When I think about strategic technology decisions I've been involved with at large tech companies, things are often shaped by high-level choices coming from 5 or 6 different teams, each of which cannot be effectively distilled without deep domain expertise, and which ultimately can only be translated into a working system by expert engineers and analysts who communicate in an extremely high-bandwidth fashion, relying on mutual trust and applying a robust theory of mind every step of the way. Such collaborators can not only understand distilled expert statements about domains where they lack direct detailed knowledge, but can also make such statements themselves and confirm sufficient understanding from a cross-domain peer.

I still think there's a ton of utility to be squeezed out of LLMs as we learn how to harness them and feed them context most effectively, and they are likely to revolutionize the way programming is done day-to-day, but I don't believe we are anywhere near AGI or anything else that will replace the value a solid senior engineer brings to the table.
rhetocj23 19 hours ago
Nice post. I am dumbfounded as to how this doesn't seem to resonate widely on HN.
robomartin a day ago
I don't like the term "AGI". Intelligence and understanding are very different things, and both are required to build a useful tool that we can trust.

To use an image that might be familiar to lots of people reading this: the Sheldon character in The Big Bang Theory is very intelligent about lots of fields of study, yet lacks a great deal of understanding about many things, particularly social interaction, the human impact of decisions, etc.

Intelligence alone (AGI) isn't the solution we should be after. Nice buzzword, but not the solution we need. It should not be the objective at the top of the hill.