ohyes 17 hours ago
Had someone put up a project plan for something, and it was not disclosed as LLM-assisted output. While technically correct, it came to the wrong conclusions about the best path forward and ultimately hamstrung the project. I only discovered this later, when I tried to fix the mess, had my own chat with an LLM, and got mysteriously similar responses. The problem was that the assumptions made when asking the LLM were incorrect. LLMs do not think independently and do not have the ability to challenge your assumptions or think laterally (yet, and possibly ever; one that does may be a different thing). Unfortunately, this still makes them as good as or better than a very large portion of the population.

I get pissed off not because of the new technology or the use of the LLM, but because of the lack of understanding of the technology and the laziness with which many choose to deliver the results of these services. I am more often mad at the person for not doing their job than at the use of a model; the model merely makes it easier to hide the lack of competence.
justfix17 16 hours ago
> LLMs do not think

Yep.

More seriously, you described a great example of one of the challenges we haven't addressed: LLM output masquerades as a thoughtful work product and wastes people's time (or worse, tanks a project, hurts people, etc.). Now my job reviewing work is even harder, because bad work has fewer warning signs to pick up on. Ugh.

I hope your workplace developed a policy around LLM use that addresses the incident you described. Unfortunately, I think most places just ignore stuff like this in the faux scramble to "not be left behind".
131012 15 hours ago
> LLMs do not think independently and do not have the ability to challenge your assumptions

It IS possible for an LLM to challenge your assumptions, as its training material may include critical thinking on many subjects. The helpful assistant, being almost by definition a sycophant, cannot.
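As a rough sketch of what I mean (using the OpenAI Python SDK; the model name and prompt wording are just illustrative, not a recipe), you can swap the helpful-assistant framing for a system prompt that asks for pushback:

    # Rough sketch: ask the model to critique assumptions instead of agreeing.
    # Assumes the OpenAI Python SDK and OPENAI_API_KEY in the environment;
    # the model name is illustrative.
    from openai import OpenAI

    client = OpenAI()

    system = (
        "You are a critical reviewer, not a helpful assistant. "
        "Before answering, list the assumptions in the request, flag any that "
        "look wrong or unsupported, and make the strongest case against the "
        "proposed approach before suggesting an alternative."
    )

    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative; use whatever model you have access to
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": "Here is my project plan: ..."},
        ],
    )
    print(response.choices[0].message.content)

This doesn't make the model think, of course; it only changes which persona it is asked to play.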
thewebguyd 16 hours ago
> do not have the ability to challenge your assumptions or think laterally.

The challenging-your-assumptions part in particular is where I think LLMs currently fail, though I won't pretend to know enough to say how to resolve it. Right now I can put whatever nonsense I want into ChatGPT and it will happily go along, telling me what a great idea it is. Even on the remote chance it does hint that I'm wrong, you can just prompt it into submission.

None of the for-profit AI companies are going to start letting their models tell users they're wrong, out of fear of losing users (people generally don't like to be held accountable), but ironically I think it's critically important that LLMs start doing exactly that. But like you said, the LLM can't think, so how can it determine what's incorrect, let alone whether something is a bad idea?

Interesting problem space, for sure, but I think unleashing these tools on the masses with their current capabilities has done, and will continue to do, more harm than good.