defen 7 hours ago

Let's take a second to think about the threat vectors here. The two obvious ones I can think of are: "AI hallucinates and tells you to put non-food into the food" and "AI hallucinates and gives you unsafe prep instructions" (e.g. "heat the chicken to an internal temperature of 110 degrees"). For both of those, it's not clear why "random recipe from an internet blog" is safer than something the AI generates. At some level, if someone is preparing your food, you need to trust that they know how to prepare food, no matter where they're getting their instructions from.
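To make the second vector concrete: a hallucinated temperature is exactly the kind of thing you could sanity-check mechanically, regardless of where the recipe came from. Here's a rough Python sketch; the function name and the crude regex parsing are made up for illustration, but the USDA safe minimum internal temperatures (165°F for poultry, 160°F for ground beef, 145°F for pork and fish) are real published figures.

    import re

    # Rough sanity check for generated recipes: flag any stated cook
    # temperature that falls below the USDA safe minimum internal
    # temperature (degrees Fahrenheit) for an ingredient named in the
    # step. The minimums below are real USDA guidance; the parsing
    # heuristic and function name are illustrative only.
    SAFE_MINIMUMS_F = {
        "chicken": 165,      # all poultry
        "turkey": 165,
        "ground beef": 160,
        "pork": 145,
        "fish": 145,
    }

    # Matches "110F", "110 F", or "110°F".
    TEMP_PATTERN = re.compile(r"(\d{2,3})\s*(?:°\s*)?F\b", re.IGNORECASE)

    def flag_unsafe_temps(step: str) -> list[str]:
        """Return a warning for each ingredient whose stated
        temperature is below the USDA safe minimum."""
        temps = [int(m.group(1)) for m in TEMP_PATTERN.finditer(step)]
        lowered = step.lower()
        return [
            f"{ingredient}: {t}F is below the USDA minimum of {minimum}F"
            for ingredient, minimum in SAFE_MINIMUMS_F.items()
            if ingredient in lowered
            for t in temps
            if t < minimum
        ]

    # The hallucinated instruction from the example above:
    print(flag_unsafe_temps("Heat the chicken to an internal temperature of 110F"))
    # -> ['chicken: 110F is below the USDA minimum of 165F']

Running it on the example flags the 110°F instruction immediately, and note that the check doesn't care whether the instruction came from a blog or a model.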

kube-system 4 hours ago | parent | next

People who do not understand AI, or even use it, are not in a position to begin "thinking about threat vectors". That isn't how they came to their worldview at all.

satvikpendem 18 minutes ago | parent

Yeah, it's ideological, like a religion, as someone else mentioned, with the justifications supplied ex post facto.

daveguy 5 hours ago | parent | prev

Yeah, but I would trust a human writing a blog not to suggest heating chicken to 110°F, because the human writing the blog understands that they are taking responsibility for that recipe... An LLM doesn't have a clue about responsibility, except to regurgitate feel-good snippets about it.

tokioyoyo 3 hours ago | parent | next

Wild takes in this thread. The copy- and blog-writing industry is largely random Fiverr gigs or hires from cheap-labour countries pumping up SEO rankings.

Everyone grew up understanding that you should never fully trust random internet content, yet now we're insisting that AI has to be 100% reliable.

daveguy 2 hours ago | parent

Okay, captain pedantic. Clearly I'm assuming a known food blogger with a reputation at stake, employed by Bon Appétit / Food Network / etc., in this scenario. Not some random SEO spam.

newZWhoDis 5 hours ago | parent | prev

>because the human writing the blog understands

Bold assumption