groby_b a day ago
Simple fact: AI is extremely powerful in the hands of experts who have invested time in deeply understanding it and how to actually use it well, and who are then willing to commit more time to building an actually sustainable solution. Alas, many members of the C suite do not exactly fit that description. They have just typed in a prompt or three, marveled that a computer can reply, and fantasized that it's basically a human replacement.

There are going to be a lot of (figurative, incorporated) dead bodies on the floor. But there will also be a few winners who actually understood what they were doing, and the wins will be massive. Same as it was post dot-com.
SchemaLoad a day ago | parent | next
Something I've noticed is that LLMs seem to be able to answer questions on everything, in quite a lot of detail. But I can't seem to get them to actually do anything useful; you basically have to hand-hold them the entire way, to the point that they don't really add value. I'm sure there is plenty of research into this, but there does seem to be a big difference between being able to answer questions and actual intelligence.

For example, I have some product ideas in my head for things to 3D print, but I don't know enough about design to come up with the exact mechanisms and hinges for them. I've tried the chatbots, but none of them can really tell me anything useful. Yet once I already know the answer, they can list all kinds of details and know all about the specific mechanisms. They are completely unable to suggest them to me when I don't mention them by name in the prompt.
stretchwithme a day ago | parent | prev
AI is useful to people who read and understand the answers and who would have eventually come up with a similar result on their own. They have judgment. They can improve what was generated. They can fix a result when it falls short of the objective. And they know when to give up on trying to get AI to understand: when rephrasing won't improve next-word prediction, which happens when the situation is complex.