| ▲ | IanCal 3 days ago |
| You absolutely don’t need this. We know this to be true because we use humans, and they are none of these things (at 100%), and we use other ML systems that don’t hit all of those either. Directionally those things are beneficial, but you just need the benefits to outweigh the costs. |
|
| ▲ | aprilthird2021 3 days ago | parent | next [-] |
| > 100% auditable, explainable and deterministic workflow. Not 100% deterministic workers, but workflow. The auditability and explainability of your system become difficult with AI and LLMs in between, because you don't know at what point in the reasoning things went wrong. For a lot of things, you need to know, at every step of the way, who is culpable, what part of the work they were doing, why it went wrong, and how. |
|
| ▲ | kakacik 3 days ago | parent | prev | next [-] |
| Depends on the industry; clearly you've never worked in one like that. Regulated industries (medical, transport, municipal, state, military, and so on), or just anywhere with decently enforced regulations, like the whole of finance, and bam! You have serious regulatory issues that every single sane business tries desperately to stay away from. |
| |
| ▲ | IanCal 3 days ago | parent | prev | next [-] | | “There are business problems” and “most business problems” are not the same thing. | |
| ▲ | foobarian 3 days ago | parent | prev [-] | | > you have serious regulatory issues ... until people decide they are OK with things being less than 100% and relax the regulations. Helped along by the purveyors of the AI tools, no doubt. |
|
|
| ▲ | gizajob 3 days ago | parent | prev | next [-] |
| The difference is that although humans aren’t 100% accurate, they are responsible for their work. |
| |
| ▲ | dwohnitmok 3 days ago | parent [-] | | That responsibility has been eroding over time. A lot of the software industry has been moving away from assigning humans individual responsibility for failure (e.g. blameless post-mortems). | | |
| ▲ | Yoric 2 days ago | parent [-] | | I suspect that it's only a small corner of the software industry, which is itself only a small corner of industry. I further suspect that most actors will still want someone responsible to take the blame when an incident takes place. Even if they have to make one up. |
|
|
|
| ▲ | bandrami 3 days ago | parent | prev [-] |
| Yeah, no. I make software used in actual flight simulators, and we literally need it to be deterministic, to the extent of needing the same help query to always return the exact same results for all users at all times. |
| |
| ▲ | IanCal 3 days ago | parent | next [-] | | Some business problems need that. That’s not the same as asserting that most do, and it’s certainly not the same as asserting that all business problems do. Some things need to be deterministic. Many don’t. Even your business will have many problems that don’t need all those properties at 100%: every task performed by a human, for example. You as a developer are not all of these things at 100%! And your help query may need to be deterministic, but does it need to be explainable? Many ML solutions aren’t really explainable, certainly not to 100% (whatever that may mean), but can easily be deterministic. | |
| ▲ | charcircuit 3 days ago | parent | prev [-] | | If you were on a real flight and asked a human for help, they wouldn't give a deterministic answer. This doesn't seem like an actual requirement, but rather something rationalized post hoc because it was cheaper to build that way. While terms like consistency may come up when deterministic output is stated as a requirement, the true reason could actually just be cost. | | |
| ▲ | throwup238 3 days ago | parent | next [-] | | > If you were on a real flight and asked a human for help, they wouldn't give a deterministic answer. If you were on a real flight, asking a qualified human - like a trained pilot - would result in a very deterministic checklist. Deterministic responses to emergencies are at least half of the training from the time we get a PPL. | |
| ▲ | hi_hi 3 days ago | parent | prev | next [-] | | Regulated industries (amongst many) need to be deterministic. Imagine your bank being non-deterministic. | | |
| ▲ | charcircuit 3 days ago | parent [-] | | >Imagine your bank being non-deterministic. That's already the case. Payments are not deterministic. It can take multiple days for things to settle. The real world is messy. When I make a payment I have no clue if the money is actually going to make it to a merchant or if some fraud system will block it. | | |
| ▲ | hi_hi 3 days ago | parent | next [-] | | The bank can very much determine if the payment has been made or not (although not immediately, as you mentioned). As a rule, banks like to keep track of money. | |
| ▲ | soco 3 days ago | parent | prev | next [-] | | Yes, it settles deterministically. With AI, it claims to be settled and moves on, and it's up to you to figure out how deterministic the whole transaction actually was. | |
| ▲ | Yoric 2 days ago | parent | prev [-] | | Is it the main issue? Payments suffer from race conditions, but the processes themselves are deterministic, auditable and may be rolled back. Not sure how many of these important attributes would remain with a neural network at the helm. |
|
| |
| ▲ | IanCal 3 days ago | parent | prev [-] | | Even then, it can be deterministic but not explainable. TF-IDF is fairly explainable, but it's about the limit, IMO, for explanations that fully make sense: ones you can fully reason about and use to predict outcomes and issues accurately. Embeddings could give better, fully deterministic results, but I wouldn't say they're 100% explainable. |
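A toy sketch of the explainability point (corpus and helper names are hypothetical): a TF-IDF score decomposes into per-term contributions you can read off directly, so you can point at exactly which term drove a match; an embedding similarity, deterministic or not, gives you only a single opaque number.

```python
import math

# Toy corpus (hypothetical). A TF-IDF score for a query is just the sum
# of per-term contributions, each of which is individually inspectable.
docs = {
    "d1": "flight simulator help menu",
    "d2": "simulator settings and help",
    "d3": "flight checklist",
}

def idf(term: str) -> float:
    # Inverse document frequency: rarer terms weigh more.
    n = sum(term in text.split() for text in docs.values())
    return math.log(len(docs) / n) if n else 0.0

def explain(doc_id: str, query: str) -> dict[str, float]:
    # Contribution of each query term = term frequency * idf.
    words = docs[doc_id].split()
    return {t: words.count(t) * idf(t) for t in query.split()}

contrib = explain("d1", "simulator help")
print(contrib)                 # each term's contribution, readable on its own
print(sum(contrib.values()))   # the document's total score is just their sum
```

That per-term breakdown is the "full explanation" TF-IDF affords; with dense embeddings there is no comparable decomposition to hand a regulator or an auditor.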
|
|