JohnMakin · 4 hours ago
> Imagine a programming language where statements are suggestions and functions return "Success" while hallucinating. Reasoning becomes impossible; reliability collapses as complexity grows.

This is essentially declarative programming. Most traditional programming is imperative, which is what most developers are used to: I give an exact set of instructions and expect them to be executed as written. Agents are far more declarative than imperative: you give them a desired result, and they work toward that result. The problem, of course, is that in something declarative like SQL the result is consistent and well-defined, but you're still trusting the underlying engine on how to get there. Thinking about agents declaratively has helped me a lot more than trying to design these Rube Goldberg "control" systems around them. Didn't get it right? OK, I validated that it's not correct; let's try again or approach it differently. If you really need something imperative, then write something imperative! Or have the agent do so. This stuff reads like using the wrong tool for the job.
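The "validate, retry, or approach differently" loop described above can be sketched as an ordinary control loop wrapped around an agent call. This is just an illustrative sketch: `agent_attempt` and `is_valid` are hypothetical stand-ins for a real agent invocation and a real result check.

```python
# Declarative use of an agent: state the goal, validate the result,
# and retry, rather than trying to script every intermediate step.
def solve(goal, agent_attempt, is_valid, max_tries=3):
    for attempt in range(max_tries):
        result = agent_attempt(goal, attempt)
        if is_valid(result):  # we only check the *result*, not the steps taken
            return result
    raise RuntimeError(f"no valid result for {goal!r} after {max_tries} tries")

# Toy stand-in "agent" that only succeeds on the third try.
answers = iter(["wrong", "also wrong", "42"])
result = solve("the answer", lambda goal, i: next(answers), lambda r: r == "42")
print(result)  # -> 42
```

The imperative scaffolding here is deliberately tiny: all the "control" is a result check, not a script of the agent's internal steps.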
repelsteeltje · 3 hours ago
I was thinking of something declarative too, but Prolog rather than SQL, so with actual control flow and reasoning capabilities. And then you run into issues similar to the LLM's: silent failures, loops, and contradictions unless you're very careful. The essence might be the same closed world assumption problem. In the LLM's case it manifests as hallucinating rather than admitting it does not know.
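The closed world assumption can be shown with a toy fact base in Python (a sketch of the Prolog-style behavior, not real Prolog): anything not provable from the stored facts is treated as false, so "unknown" silently becomes "no".

```python
# Closed world assumption: absence of a fact is treated as falsity.
facts = {("parent", "alice", "bob"), ("parent", "bob", "carol")}

def holds(fact):
    # Under the CWA there is no "unknown": not-in-the-database means False,
    # even when the fact is merely missing rather than actually false.
    return fact in facts

print(holds(("parent", "alice", "bob")))   # -> True
print(holds(("parent", "carol", "dave")))  # -> False: confidently "no",
                                           # even if the data is just incomplete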
miltonlost · 2 hours ago
SQL's declarativeness is also grounded in the mathematics of relational algebra, so a query returns the same result every time. Will it return it in the same amount of time on every run? No, that depends on indexing and database size. But the query itself won't be reinterpreted the way an LLM's instructions would be.
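This split between result and execution can be demonstrated with Python's built-in sqlite3 module: the same declarative query yields the same rows whether or not an index exists; only the execution plan (and hence the timing) changes.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (id INTEGER, name TEXT)")
con.executemany("INSERT INTO users VALUES (?, ?)",
                [(1, "ann"), (2, "bob"), (3, "cy")])

query = "SELECT name FROM users WHERE id >= 2 ORDER BY name"
before = con.execute(query).fetchall()

# Adding an index changes how the engine answers the query, not what it answers.
con.execute("CREATE INDEX idx_users_id ON users (id)")
after = con.execute(query).fetchall()

print(before == after)  # -> True: same relational result either way
```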