| ▲ | defrost 8 hours ago |
| The "upside" description: On the other you have a non-technical executive who's got his head round Claude Code and can run e.g. Python locally.
I helped one recently almost one-shot converting a 30 sheet mind numbingly complicated Excel financial model to Python with Claude Code.
Once the model is in Python, you effectively have a data science team in your pocket with Claude Code. You can easily run Monte Carlo simulations, pull external data sources as inputs, build web dashboards and have Claude Code work with you to really integrate weaknesses in your model (or business). It's a pretty magical experience watching someone realise they have so much power at their fingertips, without having to grind away for hours/days in Excel.
almost makes me physically sick.I've a reasonably intense math background corrupted by application to geophysics and implementing real world numerical applications. To be fair, this statement alone: * 30 sheet mind numbingly complicated Excel financial model makes my skin crawl and invokes a flight reflex. Still, I'll concede that a Claude Code conversion to Python of a 30 sheet Excel financial model is unlikely to be significantly worse than the original. |
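And the Monte Carlo part of the pitch is real enough: once the model is plain Python functions, a crude simulation is a handful of lines. A minimal sketch, assuming a made-up two-input profit model standing in for the actual 30 sheets:

    import random

    def profit(price, units, unit_cost):
        # Stand-in for the real spreadsheet logic.
        return (price - unit_cost) * units

    # Draw uncertain inputs from assumed distributions, collect outcomes.
    outcomes = sorted(
        profit(
            random.gauss(10.0, 1.5),   # assumed price distribution
            random.gauss(5000, 800),   # assumed demand distribution
            random.uniform(4.0, 6.0),  # assumed unit-cost range
        )
        for _ in range(100_000)
    )

    print("median profit:", outcomes[len(outcomes) // 2])
    print("5th percentile:", outcomes[len(outcomes) // 20])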
|
| ▲ | majormajor 8 hours ago | parent | next [-] |
| One of the dirty secrets of a lot of these "code adjacent" areas is that they have very little testing. If a data science team modeled something incorrectly in their simulation, who's gonna catch it? Usually nobody. At least not until it's too late. Will you say "this doesn't look plausible" about the output? Or maybe you'll be too worried about getting chided for "not being data driven" enough.
If an exec tells an intern or temp to vibecode that thing instead, then you definitely won't have any checkpoints in the process to make sure the human-language prompt describing the process was properly turned into the right simulation. And unlike in coding, you don't have a user-facing product that someone can click around in, or send requests to, and verify. Is there a test suite for the giant excel doc? I'm assuming no, maybe I'm wrong.
It feels like it's going to be very hard for anyone working in areas with less black-and-white verifiability or correctness, like that sort of financial modeling. |
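Even a thin pinning layer would help: export a few scenarios from the workbook and assert the port reproduces them. A minimal sketch of that idea in pytest; the module, function, and expected numbers are all hypothetical:

    import pytest

    from financial_model import quarterly_profit  # hypothetical ported function

    # Scenarios and expected outputs copied by hand from the original
    # workbook; the numbers below are illustrative placeholders.
    GOLDEN_CASES = [
        # (price, units, unit_cost, expected_profit)
        (10.0, 5000, 5.0, 25000.0),
        (10.0, 0, 5.0, 0.0),        # zero-volume edge case
        (4.0, 5000, 5.0, -5000.0),  # selling below cost
    ]

    @pytest.mark.parametrize("price,units,unit_cost,expected", GOLDEN_CASES)
    def test_port_matches_workbook(price, units, unit_cost, expected):
        # Compare with a tolerance: Excel and Python can round differently.
        assert quarterly_profit(price, units, unit_cost) == pytest.approx(expected)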
| |
| ▲ | Hammershaft 3 hours ago | parent | next [-] | | This has had tremendous real-world consequences. The European austerity wave of the early 2010s was largely downstream of an Excel spreadsheet error that changed the result of a major study on the impact of debt/GDP. https://www.newscientist.com/article/dn23448-how-to-stop-exc... | |
| ▲ | tharkun__ 8 hours ago | parent | prev | next [-] | | This is a pet peeve of mine at work. Any, and I mean any, statistic someone throws at me I will try to dig into. And if I'm able to, I will usually find that something is very wrong somewhere. As in, either the underlying data is just wrong, invalidating the whole thing, or the data is reasonably sound but the person doing the analysis is making incorrect assumptions about parts of it and then drawing incorrect conclusions. | |
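Most of what I find would fall out of a few crude checks on the raw table, run before anyone quotes a number off it. The kind of thing I mean, as a pandas sketch with invented file and column names:

    import pandas as pd

    df = pd.read_csv("underlying_data.csv")  # hypothetical source extract

    # Structural checks that catch a surprising share of broken analyses.
    assert not df["id"].duplicated().any(), "duplicate rows inflate every aggregate"
    assert df["amount"].notna().all(), "silent NaNs skew means and sums"
    assert (df["amount"] >= 0).all(), "negative amounts usually mean sign bugs"

    # And the assumption check: one row per id per period, or the grain
    # is not what the analyst thinks it is.
    assert (df.groupby(["id", "period"]).size() == 1).all(), "unexpected grain"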
| ▲ | aschla 7 hours ago | parent | next [-] | | It seems to be an ever-present trait of modern business. There is no rigor, probably partly because most business professionals have never learned how to properly approach and analyze data. Can't tell you how many times I've seen product managers making decisions based on a few hundred analytics events, trying to glean insight where there is none. | | |
| ▲ | p_v_doom 2 hours ago | parent | next [-] | | Also rigor is slow. Looks like a waste of time. What are you optimizing all that code for, it works, doesn't it? Don't let perfect be the enemy of good. If it works 80% that's enough, just push it. What is technical debt? | |
| ▲ | gyomu 4 hours ago | parent | prev [-] | | If what you're saying 1) is true and 2) does matter to the success of a business, then wouldn't anyone be able to displace an incumbent trivially by applying a bit of rigor? I think 1) holds (as my experience matches your cynicism :), but I have a feeling that data-minded people tend to overestimate the importance of 2)... | |
| ▲ | mettamage 2 hours ago | parent | next [-] | | Rigor helps you get better insights out of data, and that can help in entrepreneurship. What also helps in entrepreneurship is a bias for action. So even if your insights are wrong, if you act and keep acting you will partially shape reality to your will (and be bent to its will in turn). So there are forces that can compensate for a lack of rigor. The best companies have both on their side. | |
| ▲ | laserlight 2 hours ago | parent | prev [-] | | > does matter in the success of a business
In my experience, many of the statistics these people use don't matter to the success of a business --- they are vanity metrics. But people use statistics, and especially the wrong statistics, to push their agenda. Regardless, it's important to fix the statistics. |
|
| |
| ▲ | defrost 7 hours ago | parent | prev [-] | | I've frequently found, over a few decades, that numerical systems are cyclically 'corrected' until results and performance match prior expectations. There are often more errors remaining. Sometimes the actual results are wildly different in reality from what a model expects .. but the data treatment has been bug-hunted until it does what was expected .. and then attention fades away. | |
| ▲ | pprotas 3 hours ago | parent [-] | | Or the company just changes the definition of success, so that the metrics (that used to be bad last quarter) are suddenly good |
|
| |
| ▲ | p_v_doom 2 hours ago | parent | prev | next [-] | | > If a data science team modeled something incorrectly in their simulation, who's gonna catch it? Usually nobody. At least not until it's too late.
Back in my data scientist days I used to push for testing and verification of models. Got told off for reducing the team's speed. If the model works well enough to get money in, and the managers that make the final calls do not understand the implications of being wrong, nobody catches it; and that would be the majority of cases. |
| ▲ | obscurette 5 hours ago | parent | prev | next [-] | | > If a data science team modeled something incorrectly in their simulation, who's gonna catch it? Usually nobody. At least not until it's too late. Will you say "this doesn't look plausible" about the output?
The local statistics office here recently presented salary statistics claiming that teachers' salaries had unexpectedly increased by 50%. All the press releases went out, and it was only questions raised by the public that forced the statistics office to review and correct the data. |
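A plausibility gate as dumb as "flag anything that moved more than 15% since the last release" would have caught it before publication. A toy sketch; the figures are invented:

    # Compare this release's aggregates against the previous release and
    # flag implausible jumps for human review before publication.
    previous = {"teachers": 1580, "nurses": 1710}  # last release (invented)
    current = {"teachers": 2370, "nurses": 1745}   # this release (invented)

    THRESHOLD = 0.15  # assumed: >15% movement in one period needs a look

    for group, value in current.items():
        change = (value - previous[group]) / previous[group]
        if abs(change) > THRESHOLD:
            print(f"REVIEW {group}: {change:+.0%} vs last release")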
| ▲ | singingbard 5 hours ago | parent | prev [-] | | I did a fair amount of data analysis, and deciding when or if my report was correct was a huge adrenaline rush. A huge test for me was to have people review my analyses and poke holes. You feel good when your last 50 reports didn't have a single thing anyone could point out. I've been seeing a lot of people try to build analyses with AI who haven't been burned by the "just because it sounds correct doesn't mean it's right" dilemma, who haven't realized what it takes before you can stamp your name on an analysis. |
|
|
| ▲ | decimalenough 8 hours ago | parent | prev | next [-] |
| I'm almost certain it will be significantly worse. The Excel sheet will have been tuned over the years by people who knew exactly what it was doing and fixed countless bugs along the way. The Claude Code copy will be a simulacrum that may behave the same way with some inputs, but is likely to get many of the edge cases wrong, and, when you're talking about 30 sheets of Excel, there will be many, many of these sharp edges. |
| |
| ▲ | defrost 8 hours ago | parent | next [-] | | I won't disagree - I suffered from insufficient damning praise in my last sentence above. IMHO, earned through years of bleeding eyeballs, the first will be riddled with subtle edge cases curiously patched and fettled such that it'll limp through to the desired goal .. mostly. The automated AI assisted transcoding will be ... interesting. | |
| ▲ | holoduke 3 hours ago | parent | prev [-] | | My assumption is that with the right approach you can create a much, much better and more reliable program using only Claude Code. You are referring to yolo-coding results. |
|
|
| ▲ | PunchyHamster an hour ago | parent | prev | next [-] |
| We're going from "bad excel sheet caused recession" to "bad vibe-coded financial thing caused recession". |
|
| ▲ | bitwize 4 hours ago | parent | prev | next [-] |
| The thing is, when you use AI, you're not really doing things, you're having things done. AI isn't a tool, it's a service.
Now, back in the day, IBM designed and built an "executive data terminal". It wasn't really a computer terminal in the sense that you and I understand it. Rather, it was a video and two-way-audio feed to a room with a team of underlings, whom an executive could ask for business data and analyses; the results could be called up on a computer display (also routed to the executive's office). This let the executive ask questions so he (it was the 1960s, it was almost invariably a he) could make informed decisions, while the team of underlings called up data or crunched numbers on the computer and showed the results on the display.
Because executives are used to having things done for them, I can totally see AI being used by executives to replace the "team of underlings" in this setup, at least in principle. The fact is that were I in that CEO's chair, I'd be thinking twice before trusting anything an LLM tells me, and double-checking those results, perhaps with my team of underlings.
Discussed on Hacker News: https://news.ycombinator.com/item?id=42405462
IEEE article: https://spectrum.ieee.org/ibm-demo |
|
| ▲ | ChrisMarshallNY 8 hours ago | parent | prev [-] |
| Obligatory xkcd: https://xkcd.com/1667/ |