order-matters 3 hours ago

The details are key here. There is plenty of automatable financial work, sure, but when it comes to reporting finances/costs (formally or informally) and having a real human being be accountable for them, you REALLY need to trust that nothing is hallucinated.

Any idea how they ensure this doesn't happen? As in, how can a user verify that the model did not touch any of the numbers and that it only built pipelines for them?

What I've been telling my CFO, who wants to get AI involved in things, is that for a lot of accounting and finance work "trust but verify" doesn't work, because verifying is often the same process as doing the work.

infecto 3 hours ago | parent | next [-]

To be honest, I am having a hard time remembering the last time an LLM hallucinated in our pipelines. They make mistakes, sure, but they don't make things up. For a daily recon process this is a solved problem, imo.

fnordpiglet 12 minutes ago | parent | next [-]

I see it hallucinate quite often in development, but mostly in getting small details wrong that are automatically corrected by lint processes. Large-scale hallucination seems better guarded against, but I suspect that’s because latitude is constrained by context and by harnesses like linters, type systems, and fine-tuned tool flows in coding models that control for divergence. I would still classify mistakes like wrong variable names, package names, or signatures as hallucinations.

KellyCriterion an hour ago | parent | prev [-]

Curious! Could you elaborate a little on your pipeline? We are currently looking to solve this for our internal processes, in which we have to deal with lots of financial information from outside, containing masses of numbers: annual reports, bank statements, balance sheets, etc.

tyre 29 minutes ago | parent [-]

Not who you’re replying to, but I can give some thoughts.

For anything involving math, it’s much more reliable to give agents tools. So if you want to verify that your real estate offer is in the 90–95th percentile of offers from the past three months, don’t give Claude the data and ask it to calculate. Offload to a tool that can query Postgres.
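
A minimal sketch of what such a tool can look like, not anyone’s actual setup; the offers table, its columns, and psycopg2 are all assumptions:

    # Hypothetical tool: the percentile math runs deterministically in
    # Postgres; the model only decides to call it, never computes the number.
    import psycopg2

    def offer_percentile(conn, offer_price: float) -> float:
        """Percentile of offer_price among offers listed in the past three months."""
        with conn.cursor() as cur:
            cur.execute(
                """
                SELECT 100.0 * count(*) FILTER (WHERE price <= %s)
                       / nullif(count(*), 0)
                FROM offers
                WHERE listed_at >= now() - interval '3 months'
                """,
                (offer_price,),
            )
            value = cur.fetchone()[0]
            if value is None:  # no offers in the window
                raise LookupError("no offers in the past three months")
            return float(value)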

It’s similar for things needing data from an external source of truth. For example, what payers (insurance companies) reimburse for a specific CPT code (medical procedure) can change at any time and may be different between today and when the service was provided two months ago. Have a tool that farms out the calculation, which itself uses a database or whatever to pull the rate data.
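
The important part of that lookup is keying on the date of service rather than today. Again a sketch, with an invented payer_rates table:

    # Hypothetical tool: rate in effect on the date of service,
    # pulled from a source-of-truth table instead of model memory.
    import datetime

    def reimbursement_rate(conn, payer_id: str, cpt_code: str,
                           service_date: datetime.date):
        with conn.cursor() as cur:
            cur.execute(
                """
                SELECT rate
                FROM payer_rates
                WHERE payer_id = %s AND cpt_code = %s
                  AND effective_from <= %s
                ORDER BY effective_from DESC
                LIMIT 1
                """,
                (payer_id, cpt_code, service_date),
            )
            row = cur.fetchone()
            if row is None:
                raise LookupError("no rate on file")  # fail loudly, never guess
            return row[0]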

The LLM can orchestrate and figure out what needs to be done, like a human would, but anything else is either scary (math) or expensive (using context to constantly pull documentation).
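
Wiring that up so the model orchestrates but never touches the arithmetic looks roughly like this (Anthropic Messages API; the model name is a placeholder and the schema is trimmed):

    import anthropic

    client = anthropic.Anthropic()

    # The model sees only the tool contract; the math runs in Postgres.
    tools = [{
        "name": "offer_percentile",
        "description": "Percentile of an offer price among offers "
                       "listed in the past three months.",
        "input_schema": {
            "type": "object",
            "properties": {"offer_price": {"type": "number"}},
            "required": ["offer_price"],
        },
    }]

    response = client.messages.create(
        model="claude-sonnet-4-5",  # placeholder model name
        max_tokens=1024,
        tools=tools,
        messages=[{"role": "user",
                   "content": "Where does a $450k offer rank against the market?"}],
    )
    # When the model returns a tool_use block, run offer_percentile()
    # yourself and pass the result back as a tool_result message; the
    # model never invents the number.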

tomrod 3 hours ago | parent | prev [-]

> Any idea how they ensure this doesnt happen?

Build a deterministic query set and automate it for monthly or daily reporting reconciliation.

Leave AI out of it.
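
For a daily run, that can be as simple as the following (hypothetical ledger and bank_txns tables, psycopg2-style cursor assumed):

    # Deterministic daily recon: same inputs, same answer, no model involved.
    def daily_recon(conn, day) -> bool:
        with conn.cursor() as cur:
            cur.execute(
                "SELECT coalesce(sum(amount), 0) FROM ledger WHERE posted_on = %s",
                (day,),
            )
            ledger_total = cur.fetchone()[0]
            cur.execute(
                "SELECT coalesce(sum(amount), 0) FROM bank_txns WHERE booked_on = %s",
                (day,),
            )
            bank_total = cur.fetchone()[0]
            return ledger_total == bank_total  # mismatch -> a human investigates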