Had someone put up a project plan for something that was not disclosed as LLM-assisted output. While technically correct, it came to the wrong conclusions about the best path forward and ultimately hamstrung the project. I only discovered this later when, attempting to fix the mess, I had my own chat with an LLM and got mysteriously similar responses. The problem was that the assumptions made when asking the LLM were incorrect. LLMs do not think independently and do not have the ability to challenge your assumptions or think laterally (yet, and possibly ever; one that does may be a different thing entirely). Unfortunately, this still makes them as good as or better than a very large portion of the population. I get pissed off not because of the new technology or the use of the LLM, but because of the lack of understanding of the technology and the laziness with which many choose to deliver the results of these services. I am more often mad at the person for not doing their job than at the use of a model; the model merely makes it easier to hide the lack of competence.
▲ | justfix17 15 hours ago
> LLMs do not think
Yep. More seriously, you described a great example of one of the challenges we haven't addressed. LLM output masquerades as thoughtful work products and wastes people's time (or worse, tanks a project, hurts people, etc.). Now my job reviewing work is even harder because bad work has fewer warning signs to pick up on. Ugh. I hope that your workplace developed a policy around LLM use that addressed the incident described. Unfortunately, I think most places probably just ignore stuff like this in the faux scramble to "not be left behind".
▲ | ludicrousdispla 15 hours ago
It's even worse than you suggest, for the following reason. The rare employee who cares enough to read through an entire report is more likely to encounter false information, which they will take as fact (not knowing that an LLM produced the report, or unaware that LLMs produce garbage). The lazy employees will be unaffected.
| ▲ | 131012 14 hours ago | parent | prev | next [-] | | > LLMs do not think independently and do not have the ability to challenge your assumptions It IS possible for a LLM to challenge your assumptions, as its training material may include critical thinking on many subjects. The helpful assistant, being almost by definition a sycophant, cannot. | | |
▲ | newAccount2025 11 hours ago
Strong agree. If you simply ask an LLM to challenge your thinking, spot weaknesses in your argument, or suggest what else you might consider, it can do a great job. This is literally my favorite way to use it: "Here's an idea, tell me why it's wrong."
▲ | thewebguyd 15 hours ago
> do not have the ability to challenge your assumptions or think laterally.
The challenging-your-assumptions part in particular is where I think LLMs currently fail, though I won't pretend to know enough to say how to resolve that; right now, I can put whatever nonsense I want into ChatGPT and it will happily go along telling me what a great idea it is. Even on the remote chance it does hint that I'm wrong, you can just prompt it into submission.
None of the for-profit AI companies are going to start letting their models tell users they're wrong, out of fear of losing users (people generally don't like to be held accountable), but ironically I think it's critically important that LLMs start doing exactly that. But like you said, the LLM can't think, so how can it determine what's incorrect, let alone whether something is a bad idea? Interesting problem space, for sure, but unleashing these tools on the masses with their current capabilities has done, and is going to continue to do, more harm than good.
▲ | myrryr 13 hours ago
This is why, once you are used to using them, you start asking them where the plan goes wrong. They won't tell you off the bat, which can be frustrating, but they are really good at challenging your assumptions, if you ask them to do so. They are good at telling you what else you should be asking, if you ask them to do so. People don't use the tools effectively and then think that the tool can't be used effectively... which isn't true, you just have to know how the tool acts.
▲ | DrewADesign 13 hours ago
I'm no expert, but the most frequent recommendations I hear to address this are: a) tell it that it's wrong and to give you the correct information, or b) use some magical-incantation system prompt that will produce a more critical interlocutor. The first requires knowing enough about the topic to know the chatbot is full of shit, which dramatically limits the utility of an information retrieval tool. The second assumes that the magical incantation correctly and completely does what you think it does, which is not even close to guaranteed. Both assume it even has the correct information and is capable of communicating it to you. While attempting to use various models to help modify code written in a less-popular language with a poorly documented API, I learned the hard way how much time that can waste. If your use case is trivial, or you're using it as a sounding board on a topic you're familiar with, as you might with, say, a Dunning-Kruger-prone intern, then great. I haven't found a situation in which either of those use cases is compelling.
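For concreteness, here is a minimal sketch of the "critical interlocutor" prompting pattern the last few comments describe: rather than asking the model to improve an idea, ask it only to attack the idea. This assumes the OpenAI Python SDK with an API key in the environment; the model name, prompt wording, and `critique` helper are illustrative placeholders, not a tested incantation.

```python
# Sketch of the "tell me why it's wrong" pattern: the system prompt forbids
# praise and asks only for assumptions and failure modes.
# Assumes `pip install openai` and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

CRITIC_SYSTEM_PROMPT = (
    "You are a skeptical reviewer. Do not praise or improve the idea. "
    "List the assumptions it rests on, say which are most likely to be wrong, "
    "and describe how the plan fails if they are."
)

def critique(idea: str) -> str:
    """Return the model's case against `idea`, not a polished version of it."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; any chat-capable model works here
        messages=[
            {"role": "system", "content": CRITIC_SYSTEM_PROMPT},
            {"role": "user", "content": f"Here's an idea, tell me why it's wrong:\n\n{idea}"},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(critique("We can skip a staging environment and deploy straight to production."))
```

As the thread notes, this only surfaces criticism the user explicitly asks for; it does not make the model any better at knowing when it is actually wrong.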