akkad33 7 hours ago
Couldn't this backfire if they put LLMs on safety-critical data? Or even if someone asks LLMs for medical advice and dies?
bigstrat2003 an hour ago
You already shouldn't be using LLMs for either of those things. Doing so is tremendously foolish with how stupid and unreliable the models are. | ||
nxpnsv 7 hours ago
I guess the point is that doing so already isn't safe?
awkward 7 hours ago
There are several humans who would need to make decisions between the bad training data and any life-or-death decision actually coming out of an LLM.