JohnnyRebel 2 days ago:
I totally agree; the liability is real, which is why we don’t let the LLM “invent” numbers. We use the model as the interface, but all financial data comes from a structured database. In practice, it works like RAG: the LLM interprets the user’s question, retrieves the right data, and explains the result in plain English. That way the math is deterministic, the answers are grounded, and the AI layer just makes it accessible.
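For what it's worth, here's a rough sketch of what that pattern can look like, with the LLM call stubbed out. Everything here is made up for illustration (`interpret_question`, `QUERIES`, the table schema): the model's only job is to pick a whitelisted, predefined query; the database computes the numbers; and in production a second model call would phrase the rows in plain English.

```python
# Minimal sketch of "LLM as interface, database as source of truth".
# All names and the schema are hypothetical, not anyone's actual product.
import sqlite3

# Whitelisted queries -- the LLM can only select one of these by name,
# never write SQL itself, so the math stays deterministic.
QUERIES = {
    "revenue_by_month": (
        "SELECT strftime('%Y-%m', date) AS month, SUM(amount) "
        "FROM invoices GROUP BY month ORDER BY month"
    ),
    "total_expenses": "SELECT SUM(amount) FROM expenses",
}

def interpret_question(question: str) -> str:
    """Stand-in for the LLM call: map the user's question to one of the
    whitelisted query names. In production this would be a constrained
    LLM call (e.g. an enum-valued tool choice), not keyword matching."""
    if "expense" in question.lower():
        return "total_expenses"
    return "revenue_by_month"

def answer(question: str, conn: sqlite3.Connection) -> str:
    query_name = interpret_question(question)            # LLM picks the query
    rows = conn.execute(QUERIES[query_name]).fetchall()  # DB does the math
    # A second LLM call would normally narrate `rows` in plain English;
    # plain formatting keeps this example self-contained.
    return f"{query_name}: {rows}"

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE invoices (date TEXT, amount REAL)")
    conn.executemany("INSERT INTO invoices VALUES (?, ?)",
                     [("2024-01-15", 1200.0), ("2024-02-03", 800.0)])
    conn.execute("CREATE TABLE expenses (date TEXT, amount REAL)")
    conn.execute("INSERT INTO expenses VALUES ('2024-01-20', 300.0)")
    print(answer("What was our revenue by month?", conn))
    print(answer("How much did we spend?", conn))
```

The key design choice is that the model never touches arithmetic or free-form SQL; if it misclassifies the question, the user gets the wrong report, but never a wrong number.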
wrs 2 days ago:
I can see that this is potentially a good sweet spot for the current state of AI. More complex and custom enterprise BI queries can get totally bollixed up in interpretation; even humans can’t agree on the definitions, so there’s no way to know whether a query is “correct”. Perhaps in small-business accounting SaaS you have the luxury of saying “this is the model, no substitutions please” and producing clearly interpretable answers.