▲ | wrs 2 days ago |
This and other data analysis front ends could be a fantastic application for LLMs + tool use. It’s also a market where getting the wrong answer could result in huge liability, so at this point you’re really rolling the dice that you’re a great LLM whisperer. (There’s no such thing as an LLM engineer, at least not yet.)
▲ | presentation 2 days ago | parent | next [-]
Yeah, I’m biased since my startup is a very non-AI payroll app, but trusting my finances to an LLM sounds frightening, and the savings aren’t significant: hiring an accountant whose neck is on the line to get it right just isn’t that expensive.
▲ | JohnnyRebel 2 days ago | parent | prev | next [-]
I totally agree; the liability is real, which is why we don’t let the LLM “invent” numbers. We use the model as the interface, but all financial data comes from a structured database. In practice, it works like RAG: the LLM interprets the user’s question, retrieves the right data, and explains the result in plain English. That way the math is deterministic, the answers are grounded, and the AI layer just makes it accessible.
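A minimal sketch of that grounding pattern: the model layer only selects from a whitelist of vetted queries and phrases the result, while all arithmetic happens deterministically over structured data. All names here (the table, the intent map, the helper functions) are hypothetical, not the poster’s actual system.

```python
# Sketch of "LLM as interface, database as source of truth".
# The model never produces a number; it only picks an intent and
# wraps a deterministically computed value in plain English.
import sqlite3

def build_db():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE payments (employee TEXT, amount REAL)")
    conn.executemany("INSERT INTO payments VALUES (?, ?)",
                     [("ana", 1200.0), ("ana", 800.0), ("bo", 950.0)])
    return conn

# Stand-in for the LLM step: in a real system the model would map the
# user's free-text question to one of these vetted, parameterized queries.
QUERIES = {
    "total_paid": "SELECT SUM(amount) FROM payments WHERE employee = ?",
}

def answer(conn, intent, employee):
    sql = QUERIES[intent]  # only whitelisted SQL ever runs
    (value,) = conn.execute(sql, (employee,)).fetchone()
    # The model would do the phrasing; the figure itself comes from SQL.
    return f"{employee} was paid {value:.2f} in total."

conn = build_db()
print(answer(conn, "total_paid", "ana"))  # ana was paid 2000.00 in total.
```

The key design choice is that a hallucination can only mis-pick an intent, never fabricate a figure, so the worst failure mode is a wrong-but-auditable query rather than an invented number.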
▲ | FredPret 2 days ago | parent | prev [-]
LLM engineer -> silicon psychologist who can sometimes coax the beast into making the year-end postings pass all tests?