| ▲ | simmerup 8 hours ago |
| Terrifying that people are creating financial models with AI when they don’t have the skills to verify the model does what they expect |
|
| ▲ | nebula8804 8 hours ago | parent | next [-] |
| All we need is one major crash caused by AI to scare the capital owners. Then maybe we white-collar workers can breathe a bit for at least a few more years (maybe a decade+). |
| |
| ▲ | onion2k 4 hours ago | parent | next [-] | | > All we need is one major crash caused by AI to scare the capital owners. | | All the previous human-driven crashes didn't change anything about capital owners' approach to money, so why would an AI-driven crash change things? | | |
| ▲ | ktzar 2 hours ago | parent | next [-] | | Because human-built models are an alternative that we humans can fix. The problem with AI is that it creates without leaving a trace of understanding. | |
| ▲ | leptons 2 hours ago | parent | prev [-] | | The scapegoating is different. Using an LLM makes them more culpable for the failure, because they should have known better than to use a tech that is well known to systematically lie. |
| |
| ▲ | danielbln 5 hours ago | parent | prev [-] | | A decade+ is wishful copium. |
|
|
| ▲ | martinald 8 hours ago | parent | prev | next [-] |
| They have an Excel sheet next to it - they can test the model against that. Plus they can ask questions if something seems off and have it explain the code. |
| |
| ▲ | AlotOfReading 8 hours ago | parent | next [-] | | I'm not sure being able to verify that it's vaguely correct really solves the issue. Consider how many edge cases inhabit a "30 sheet, mind-numbingly complicated" Excel document. Verifying equivalence sounds nontrivial, to put it mildly. | | |
| ▲ | Dylan16807 4 hours ago | parent [-] | | Consider how many edge cases it misses. Equivalence probably shouldn't be the top priority here. | | |
| ▲ | Nevermark 3 hours ago | parent [-] | | Equivalence here would definitely be the worst test, except for all the alternatives. |
|
| |
| ▲ | lmm 8 hours ago | parent | prev | next [-] | | > They have an excel sheet next to it - they can test it against that. It used to be that we'd fix the copy-paste bugs in the excel sheet when we converted it to a proper model, good to know that we'll now preserve them forever. | |
| ▲ | karlgkk 8 hours ago | parent | prev [-] | | [flagged] | | |
| ▲ | yomismoaqui 8 hours ago | parent [-] | | You would be surprised at the volume of money made by businesses supported by Excel. | | |
| ▲ | martinald 8 hours ago | parent [-] | | Yes. I suspect there are thousands of Excel files that "process" >$1bn/yr out there. | | |
|
|
|
|
| ▲ | myfakebadcode 8 hours ago | parent | prev | next [-] |
| I’m trying to learn Rust coming from Python (for fun). I use various LLMs for Python and see them stumble. It is a beautiful experience to realize wtf you don’t know, and how far over their skis so many people will get trusting AI. The idea of deploying a Rust project at my level of ability with an AI at the helm is terrifying. |
|
| ▲ | taneq 6 hours ago | parent | prev | next [-] |
| If they have the skills to verify the Excel model, then they can apply the same approach to the numbers produced by the AI-generated model, even if they can’t inspect it directly. In my experience a lot of Excel models aren’t really tested, just checked a bit and then deemed correct. |
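The "test it against the sheet" approach above can be sketched roughly as follows. This is a hypothetical illustration, assuming the spreadsheet logic has been rewritten as a Python function and that representative input/output rows have been exported from the Excel sheet to CSV; the `npv` function and the column names (`rate`, `cashflows`, `expected`) are made up for the example, not anyone's real model.

```python
# Hypothetical sketch: spot-check a Python reimplementation of a
# spreadsheet model against rows exported from the original Excel sheet.
import csv
import math

def npv(rate, cashflows):
    # Toy stand-in for the reimplemented model logic: net present value
    # of a cashflow series at a fixed discount rate.
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def check_against_export(path, tolerance=1e-6):
    """Compare model output to the 'expected' column exported from Excel.

    Returns a list of (row, got, expected) tuples for every mismatch,
    using a relative tolerance to absorb floating-point differences
    between Excel and Python arithmetic.
    """
    mismatches = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            rate = float(row["rate"])
            cashflows = [float(x) for x in row["cashflows"].split(";")]
            expected = float(row["expected"])
            got = npv(rate, cashflows)
            if not math.isclose(got, expected, rel_tol=tolerance):
                mismatches.append((row, got, expected))
    return mismatches
```

Note this only checks the exported rows; it says nothing about edge cases the sheet never exercised, which is exactly the concern raised elsewhere in the thread.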
|
| ▲ | fatheranton 8 hours ago | parent | prev | next [-] |
| [dead] |
|
| ▲ | mkoubaa 8 hours ago | parent | prev | next [-] |
| It's not terrifying at all. Some shops will fail and some will succeed, and in the aggregate it'll be no different for the rest of us. |
|
| ▲ | derrida 7 hours ago | parent | prev [-] |
| Business as usual. |