thewebguyd 15 hours ago

> do not have the ability to challenge your assumptions or think laterally.

The challenging-your-assumptions part in particular is where I think LLMs currently fail, though I won't pretend to know enough to say how to resolve it. Right now I can put whatever nonsense I want into ChatGPT and it will happily go along, telling me what a great idea it is. Even on the remote chance it hints that I'm wrong, you can just prompt it into submission.

None of the for-profit AI companies are going to start letting their models tell users they're wrong, for fear of losing those users (people generally don't like to be held accountable), but ironically I think it's critically important that LLMs start doing exactly that. But like you said, the LLM can't think, so how can it determine what's incorrect, let alone whether something is a bad idea?

Interesting problem space, for sure, but I think unleashing these tools on the masses with their current capabilities has done, and will continue to do, more harm than good.

myrryr 13 hours ago

This is why, once you are used to using them, you start asking them where the plan goes wrong. They won't tell you off the bat, which can be frustrating, but they are really good at challenging your assumptions if you ask them to do so.

They are good at telling you what else you should be asking, if you ask them to do so.

People don't use the tools effectively and then think that the tool can't be used effectively...

Which isn't true; you just have to know how the tool behaves.
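Concretely, the pattern is just an explicit follow-up turn asking for the criticism. A minimal sketch, assuming the OpenAI Python client; the model name and prompt wording here are illustrative, not a recipe:

    # Sketch of the "ask it where the plan goes wrong" pattern.
    from openai import OpenAI

    client = OpenAI()

    messages = [
        {"role": "user", "content": "Here's my plan: migrate the whole app "
         "to microservices next sprint."},
    ]
    first = client.chat.completions.create(model="gpt-4o", messages=messages)
    messages.append({"role": "assistant",
                     "content": first.choices[0].message.content})

    # The explicit follow-up is what actually surfaces the criticism;
    # the first answer will usually just go along with the plan.
    messages.append({"role": "user", "content": "Where does this plan go "
                     "wrong? List the assumptions I'm making and the "
                     "questions I should be asking."})
    critique = client.chat.completions.create(model="gpt-4o", messages=messages)
    print(critique.choices[0].message.content)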

DrewADesign 13 hours ago

I'm no expert, but the most frequent recommendations I hear to address this are:

a) tell it that it's wrong and to give you the correct information.

b) use some magical incantation system prompt that will produce a more critical interlocutor.

The first requires knowing enough about the topic to know the chatbot is full of shit, which dramatically limits the utility of an information retrieval tool. The second assumes that the magical incantation correctly and completely does what you think it does, which is not even close to guaranteed. Both assume it even has the correct information and is capable of communicating it to you. I learned the hard way how much time that can waste while trying to use various models to help modify code written in a less popular language with a poorly documented API.
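For what it's worth, option (b) usually amounts to something like the sketch below, assuming the OpenAI Python client; the system prompt text is one plausible wording, and nothing guarantees the model actually follows it:

    # Sketch of a "critical interlocutor" system prompt. Whether the model
    # honors the instruction is exactly the open question.
    from openai import OpenAI

    client = OpenAI()

    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": (
                "Do not agree by default. Point out flawed assumptions, "
                "missing information, and risks before offering any solution."
            )},
            {"role": "user", "content": "I want to store user passwords in "
             "plain text to simplify debugging."},
        ],
    )
    print(resp.choices[0].message.content)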

If your use case is trivial, or you're using it as a sounding board on a topic you're familiar with, as you might with, say, a Dunning-Kruger-prone intern, then great. I haven't found a situation in which either of those use cases is compelling.