▲ tekne 4 hours ago
Wait... why? Making an unreliable, nondeterministic system give reliable results for a bounded task with well-understood parameters is... like half of engineering, no? There's a huge difference between "generate this code, here's a vague feature description" and "here's a list of criteria, assign this input to one of these buckets" -- the latter is obviously subject to prompt engineering, hallucination, etc., but so is a human pipeline!
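The "assign this input to one of these buckets" case is bounded precisely because the set of valid answers is known, so you can wrap the unreliable step in deterministic validation. A minimal sketch of that idea -- the bucket names and the `flaky_classify` stand-in (simulating an LLM that sometimes returns junk) are mine, purely for illustration:

```python
import random

BUCKETS = {"billing", "bug_report", "feature_request", "other"}

def flaky_classify(text: str, rng: random.Random) -> str:
    """Hypothetical stand-in for an LLM call: usually returns a valid
    bucket, but occasionally returns an out-of-vocabulary label."""
    if rng.random() < 0.2:
        return "completely made-up label"
    if "invoice" in text:
        return "billing"
    if "crash" in text:
        return "bug_report"
    return "other"

def classify_with_validation(text: str, retries: int = 5, seed: int = 0) -> str:
    """Deterministic wrapper: reject any answer outside the allowed
    buckets, retry a few times, and fall back to 'other'."""
    rng = random.Random(seed)
    for _ in range(retries):
        label = flaky_classify(text, rng)
        if label in BUCKETS:  # the bounded, well-understood check
            return label
    return "other"
```

The nondeterminism never escapes the wrapper: whatever the classifier emits, the caller only ever sees one of the four known buckets.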
▲ JCTheDenthog 3 hours ago
> the latter is obviously subject to prompt engineering, hallucination, etc -- but so can a human pipeline!

...which is why we write deterministic code to take the human out of the pipeline. One of the early uses of computers was calculating firing tables for artillery, replacing teams of humans who did the calculations by hand (usually with multiple humans performing each calculation to catch errors).

If early computers had a 99% chance of hallucinating the wrong answer to an artillery firing table, the response from the governments and militaries that used them would not have been to keep using computers to calculate them. It would have been to go back to having humans do it, with lots of manual verification steps and duplicated work to be sure of the results.

If you're trying to make LLMs -- a vague simulacrum of humans, with their inherent and unsolvable[1] hallucination problems -- replace deterministic systems, people are eventually going to decide to return to the tried-and-true deterministic systems.
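The "multiple humans performing each calculation" safeguard is essentially majority voting over redundant computations. A minimal sketch, with function names of my own choosing:

```python
from collections import Counter

def vote(computations):
    """Run the same calculation several times (by different 'computers',
    human or otherwise) and accept a result only if a strict majority
    agrees; otherwise demand a recomputation."""
    counts = Counter(computations)
    result, n = counts.most_common(1)[0]
    if n * 2 > len(computations):
        return result
    raise ValueError("no majority: recompute")

# Three independent calculations of the same table entry;
# one contains a transcription error, and the vote catches it.
vote([1042, 1042, 1024])  # returns 1042
```

Note the trade-off the comment is pointing at: this catches errors only by multiplying the work, which is exactly what deterministic computation was adopted to avoid.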
▲ Neywiny 3 hours ago
Because it's not possible. There is nothing you can say to an LLM that will guarantee that something happens. That's not how it works. Your instruction will maybe be taken into consideration, if you're lucky. But if you're trying to tell me that every time you list criteria you get them all perfectly matched, you're clearly gifted.