defen 7 hours ago
Let's take a second to think about the threat vectors here. The two obvious ones I can think of are: "AI hallucinates and tells you to put non-food into the food" and "AI hallucinates and gives you unsafe prep instructions" (e.g. "heat the chicken to an internal temperature of 110 degrees"). For both of those, it's not clear why "random recipe from an internet blog" is safer than something the AI generates. At some level, if someone is preparing your food, you need to trust that they know how to prepare food, no matter where they're getting their instructions from.
kube-system 4 hours ago
People who do not understand or even use AI are not in a position to even begin "thinking about threat vectors". That isn't how they've come to their worldview, at all.
daveguy 5 hours ago
Yeah, but I would trust a human writing a blog not to suggest heating chicken to 110F, because the human writing it understands that they are taking responsibility for that recipe... The LLM doesn't have a clue about responsibility, except to regurgitate feel-good snippets about responsibility.