| ▲ | mhovd 2 hours ago |
| The risk-to-benefit ratio of introducing a language model to interpret such clear signals is nowhere near justified. Monitoring and analytics are important, but they are a solved problem. A language model will only be able to hallucinate about the relationship between meals and glycemic response. At best it does no harm; at worst it can directly misinform. |
|
| ▲ | pimeys 2 hours ago | parent | next [-] |
| Yep. The oref1 algorithm is amazing and proven to make diabetics' quality of life better, AND SAFE. I don't understand why you would need to add AI to that mix. But I will check this algo out. Maybe it has some interesting bits. |
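Part of what makes oref1-style loops auditable is that the dosing logic is plain, deterministic arithmetic. A minimal sketch of that flavor of logic (illustrative only; the function names, the `max_correction` cap, and the simplified formula are assumptions for illustration, not the actual oref1 implementation, which is considerably more involved):

```python
# Simplified, illustrative sketch of oref1-style deterministic dosing logic.
# NOT the real oref1 algorithm -- just the general shape of such a loop.

def eventual_bg(current_bg, iob, isf):
    """Predict eventual blood glucose (mg/dL): current BG minus the drop
    expected from insulin on board (IOB, units) at the insulin
    sensitivity factor (ISF, mg/dL per unit)."""
    return current_bg - iob * isf

def correction_units(current_bg, iob, isf, target_bg, max_correction=2.0):
    """Correction insulin (units) to bring eventual BG down to target,
    capped at max_correction. Returns 0 if eventual BG is already at or
    below target -- never doses toward a low."""
    excess = eventual_bg(current_bg, iob, isf) - target_bg
    if excess <= 0:
        return 0.0
    return min(excess / isf, max_correction)

# Example: BG 180, 0.5 U on board, ISF 40, target 100
print(correction_units(current_bg=180, iob=0.5, isf=40, target_bg=100))  # 1.5
```

Every number here can be checked by hand against the published parameters, which is exactly the transparency that a language model in the loop would give up.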
|
| ▲ | wg0 44 minutes ago | parent | prev | next [-] |
| Thanks for calling this out! We're still debating and trying to understand what impact AI has on software engineering and quality, let alone putting AI into something that's directly linked to a human's well-being. |
|
| ▲ | AnthonBerg an hour ago | parent | prev [-] |
| My experience, using LLMs to pattern match and cast diagnostic nets, is completely the opposite. Is your perspective based on, say, opinionated principle, or on experience? The benefits are enormous. The risks? What risks? No diabetic with baseline adult competence is going to drive their insulin-delivery vehicle off a cliff because some app said so. |
| |
| ▲ | consp 10 minutes ago | parent | next [-] |
| > The risks? What risks? No diabetic with baseline adult competence is going to drive their insulin-delivery vehicle off a cliff because some app said so.
| My local physician says otherwise, with respect to Facebook posts about dosages. I'm convinced the same applies to LLM-generated content, with people blindly following the computer. |
| ▲ | pferde an hour ago | parent | prev | next [-] |
| I think you're being too optimistic about your fellow humans' judgement. "Death by GPS" is quite a common occurrence: https://www.sciencedirect.com/science/article/abs/pii/S13550... |
| ▲ | andersonpico 14 minutes ago | parent | prev | next [-] |
| > No diabetic with baseline adult competence is going to drive their insulin-delivery vehicle off a cliff because some app said so.
| If you can't trust this thing, then what is it doing? The implication that people who trust this software lack adult competence is also confusing.
| > Is your perspective based on, say, opinionated principle, or experience?
| Your perspective is solely based on recent trauma, so I don't know if it is any more reliable. |
| ▲ | pu_pe 37 minutes ago | parent | prev | next [-] |
| Risks:
| - Changing parameters on the insulin pump because the LLM said so
| - Neglecting to seek actual medical advice, believing an LLM replaces it
| - Misunderstanding medical complexity (e.g. a prescription due to medical history not available to the LLM) |
| ▲ | mexicocitinluez 8 minutes ago | parent | prev [-] |
| > No diabetic with baseline adult competence is going to drive their insulin-delivery vehicle off a cliff because some app said so.
| You 1000% don't work with the general public in a tech way. |
|