| ▲ | lbarrow 8 hours ago |
| Spitting your food out because the AI generated the recipe is so clearly irrational that I chuckled a bit on reading that.
|
| ▲ | dirkc 8 hours ago | parent | next [-] |
| People talk about AI getting things wrong all the time, why is it "so clearly irrational" to be doubtful of a recipe that might include ingredients that can make you sick? |
| |
| ▲ | VectorLock 8 hours ago | parent | next [-] | | Because I hope that someone whose hands were required to assemble the recipe didn't blindly add ingredients like "bleach" if the AI happened to hallucinate them. | | |
| ▲ | stvltvs 7 hours ago | parent [-] | | A naive hope perhaps, but this ignores the risk of LLMs just creating a bad recipe based on the blind combination of various recipes in their training data. | | |
| ▲ | VectorLock 7 hours ago | parent [-] | | As the parent comment said, the people seemed to be enjoying the food otherwise, so the LLM didn't create an unpalatable combination, and I can't think of any combination of edible, unharmful ingredients that would combine into something harmful (when consumed in a reasonable amount). | | |
| ▲ | xmprt 5 hours ago | parent | next [-] | | This is exactly what makes it dangerous. Food can taste ok but still make you sick; not all bacteria are going to taste off. I'm assuming you're not a chef, because if you were you'd know how absurd your statement is. For a super simple example, if you don't properly handle or cook raw meat, you risk getting sick even though the food might not immediately taste bad. Maybe that's obvious to you, but it might not be to the person preparing the food. Another example: Rhubarb pie is supposed to be made with the leaves and not the stalk, because the stalk is poisonous and can cause illness. Just kidding, it's actually the other way around, but if you were just reading a ChatGPT recipe that made that mistake, maybe you wouldn't have caught it. | |
| ▲ | psvv 5 hours ago | parent | prev [-] | | If meat was involved, the cooking time may have been unsafe if other precautions weren't taken by the cook (like checking the internal temperature). |
|
|
| |
| ▲ | defen 7 hours ago | parent | prev | next [-] | | Let's take a second to think about the threat vectors here. The two obvious ones I can think of are: "AI hallucinates and tells you to put non-food into the food" and "AI hallucinates and gives you unsafe prep instructions" (e.g. "heat the chicken to an internal temperature of 110 degrees"). For both of those, it's not clear why "random recipe from an internet blog" is safer than something the AI generates. At some level, if someone is preparing your food, you need to trust that they know how to prepare food, no matter where they're getting their instructions from. | | |
| ▲ | kube-system 4 hours ago | parent | next [-] | | People who do not understand or even use AI are not in a position to even begin "thinking about threat vectors". That isn't how they've come to their worldview, at all. | | |
| ▲ | satvikpendem 14 minutes ago | parent [-] | | Yeah, it's ideological, like a religion as someone else mentioned, and then justified ex post facto. |
| |
| ▲ | daveguy 5 hours ago | parent | prev [-] | | Yeah, but I would trust a human writing a blog not to suggest heating chicken to 110F, because the human writing the blog understands that they are taking responsibility for that recipe... The LLM doesn't have a clue about responsibility, except to regurgitate feel-good snippets about responsibility. | | |
| ▲ | tokioyoyo 3 hours ago | parent | next [-] | | Wild takes in this thread. The copy and blog writing industry is just random Fiverr workers or hires from countries with cheap labour, pumping up SEO rankings. Everyone grew up understanding "never trust random internet content 100%", and now we're trying to say that AI has to be 100% reliable. | | |
| ▲ | daveguy 2 hours ago | parent [-] | | Okay, captain pedantic. Clearly I'm assuming a known food blogger with a reputation at stake, employed by Bon Appétit / Food Network / etc. in this scenario. Not some random SEO spam. |
| |
| ▲ | newZWhoDis 5 hours ago | parent | prev [-] | | >because the human writing the blog understands Bold assumption |
|
| |
| ▲ | strongpigeon 8 hours ago | parent | prev | next [-] | | Because it assumes the person actually making the food has no common sense? | | |
| ▲ | therouwboat 7 hours ago | parent | next [-] | | We had a billion-dollar AI company install a vending machine that was giving stuff away for free, so maybe AI users don't have common sense. | | | |
| ▲ | wpm 7 hours ago | parent | prev [-] | | If they're asking an LLM for a recipe, they don't. | | |
| ▲ | pixel_popping 7 hours ago | parent | next [-] | | My wife does it all the time, and it's actually decent. | |
| ▲ | bloody-crow 6 hours ago | parent | prev | next [-] | | That's just pure nonsense. My partner is a very competent cook, and she invents new recipes and experiments all the time. I don't see why she can't use LLM output as inspiration, combined with her own expertise, sense of taste, and preferences, to come up with an excellent dish. | |
| ▲ | baggy_trough 4 hours ago | parent | prev [-] | | That's quite an assertion. |
|
| |
| ▲ | steve1977 7 hours ago | parent | prev | next [-] | | People get things wrong all the time as well, so I wouldn't trust them either. | | |
| ▲ | happytoexplain 7 hours ago | parent [-] | | People get things wrong in a different, more observable/predictable way. Sure, we are easily tricked dummies and we can't know if a human is right or wrong, but our human-trust heuristics are highly developed. Our AI-trust heuristics don't exist. | | |
| ▲ | steve1977 7 hours ago | parent [-] | | I mean I had people serve me expired food and chicken that was half raw. The latter I could observe, the former I couldn't so easily. Both were things that could have made me sick. | | |
| ▲ | happytoexplain 7 hours ago | parent [-] | | For sure. I'm not defending human perfection, I'm defending human caution (Disclaimer: The format of the preceding sentence was chosen without AI assistance). |
|
|
| |
| ▲ | s1artibartfast 4 hours ago | parent | prev | next [-] | | Someone once tried to feed me dinner from a recipe they found on the internet. I punched their lights out and then called the cops. | |
| ▲ | 8 hours ago | parent | prev | next [-] | | [deleted] | |
| ▲ | mikestew 7 hours ago | parent | prev [-] | | Dunno about you, but I like the increased viscosity in my sauces when I use glue: https://www.bbc.com/news/articles/cd11gzejgz4o |
|
|
| ▲ | ikkun 8 hours ago | parent | prev | next [-] |
| I could see being concerned about food safety; I wouldn't trust an AI recipe to tell me how long/what temperature to cook chicken, and I might not trust someone who uses AI to generate recipes to know either. |
| |
| ▲ | kbelder 4 hours ago | parent | next [-] | | An appropriate response might be asking "Hey, I don't trust AI... what's the recipe?" The described action seems performative and emotional, as if they were ideologically opposed to AI. Like spitting out food because it was prepared by a caste you found unclean. | |
| ▲ | ctoth 7 hours ago | parent | prev | next [-] | | Hi! I love to cook! I also use AI to brainstorm recipes sometimes! Wanna try asking Claude, ChatGPT, Gemini, or even Grok what temperature chicken needs to be cooked to? I just asked Claude: 165°F (74°C) internal temperature. Where does this come from? | | |
| ▲ | ikkun 7 hours ago | parent | next [-] | | if you ask that question alone, AI is most likely to get it right, but the usual pitfalls of AI apply; they sometimes randomly get things wrong, people are more likely to miss wrong information when it's surrounded with correct information, and LLMs are specifically good at making text that seems correct on the surface. and in my experience, people often use AI specifically because they don't have a lot of knowledge in an area. if you do already know plenty about cooking, I'm sure using AI is probably fine, I just see it as a red flag. cooking is also a form of art, with a strong social aspect. using AI for it has a similar ick factor to using generative AI for pictures. I'm not saying I immediately distrust anyone using it, but I do think it's a sign that maybe the person cares a bit less about what they're doing. | |
| ▲ | miloignis 7 hours ago | parent | prev | next [-] | | Arguably, that's wrong - not because it's unsafe, but because it's not the best temperature for any part of the chicken I know of. I'm a big J. Kenji López-Alt and Serious Eats fan, and 165 is too hot for good chicken breast and too cool for good dark meat: https://www.seriouseats.com/chicken-thigh-temperature-techni... | |
| ▲ | happytoexplain 7 hours ago | parent | prev | next [-] | | I can't tell if you're criticizing the parent or are innocently asking how Claude knows the temperature for chicken. To be clear in the case of the former: Harm data points have approximately one trillion times the weight of no-harm data points, as a rule of thumb. | |
| ▲ | stvltvs 7 hours ago | parent | prev | next [-] | | Even if it can give the right answer when asked, will it necessarily account for that in a recipe it generates? A beginning cook may not know enough to ask. | |
| ▲ | ahahahahah 4 hours ago | parent | prev [-] | | That's such pointless evidence. Let's see what gemini says in response to a more realistic prompt: https://gemini.google.com/share/f0bcbe46c337 Well, look at that. 1.5 lbs of chicken breast in the oven @425 for 10 minutes, and a minute or two of broiling should do the trick. Unlike all human-written recipes I found, it doesn't give the temperature to cook it to. |
| |
| ▲ | s1artibartfast 4 hours ago | parent | prev | next [-] | | A cook not paying attention or messing up an accurate recipe is overwhelmingly more likely. If someone is to the point of worrying about AI recipe risk for chicken, they should have already rejected any food made by amateur or professional cooks due to excessive risk. | |
| ▲ | lbarrow 7 hours ago | parent | prev [-] | | Yea, I suppose that is fair regarding cook timings. |
|
|
| ▲ | pixel_popping 8 hours ago | parent | prev | next [-] |
| but was it done with GPT-5.4 xhigh with an adversarial loop? |
|
| ▲ | racl101 4 hours ago | parent | prev | next [-] |
| First thanksgiving dinner? |
|
| ▲ | layer8 7 hours ago | parent | prev | next [-] |
| I interpret it as an expression of disgust. Similar to how people will stop reading and throw away a good book when they learn the author is a morally reprehensible person. |
| |
| ▲ | wak90 7 hours ago | parent [-] | | Like, I wouldn't spit the food out. But I would be disgusted. Someone told me they planned their vacation with an LLM, and I couldn't help but express disdain for this friend of mine. Why are we outsourcing creativity and research and interest in discovery to an LLM? | | |
| ▲ | ericd 2 hours ago | parent | next [-] | | Would you have disdain for someone who used a human travel agent to plan out an itinerary? | |
| ▲ | thevinter 6 hours ago | parent | prev | next [-] | | Probably because the person wasn't interested in planning their vacation and wanted just to enjoy the end result? Let's not assume different people find the same parts of the process enjoyable. | |
| ▲ | s1artibartfast 4 hours ago | parent | prev | next [-] | | AI planned a European honeymoon for my wife and me, and it was fantastic, one of our best vacations. I hate internet travel research. We told it our interests and gave it feedback. I also discovered the best way to go to an art museum is to walk through with AI, taking pictures of each piece of art. It will tell you the historical context of its creation and give a one-page summary of the most fascinating facts. It is like having a team of 100 art history professors in your pocket. | |
| ▲ | bloody-crow 6 hours ago | parent | prev | next [-] | | Really don't get this take. I really hate vacation planning and would outsource this part in a heartbeat. My partner does this for me currently, and she seems to enjoy it quite a bit, but if she didn't, the LLM-generated plans I've tried out of curiosity were just as good. | |
| ▲ | lostmsu 6 hours ago | parent | prev [-] | | > Why are we outsourcing creativity and research and interest in discovery to an LLM? This is also weird. I hate planning vacations, but I like going on them. |
|
|
|
| ▲ | dvfjsdhgfv 6 hours ago | parent | prev | next [-] |
| Really? I can think of a few reasons I wouldn't trust AI-generated recipes. |
|
| ▲ | misiti3780 7 hours ago | parent | prev | next [-] |
| lol, if you're against AI recipes, you have bigger problems. |
|
| ▲ | ajross 7 hours ago | parent | prev | next [-] |
| The very fact that your takeaway from that story was "look at how dumb my enemies are" is why this is a conflict worth worrying about. Are you right? Yeah, basically. Are you going to laugh at your stupid neighbors until they burn your house down in rage? Maybe? You don't treat fear with malice. |
|
| ▲ | throwaway613746 4 hours ago | parent | prev | next [-] |
| [dead] |
|
| ▲ | happytoexplain 7 hours ago | parent | prev [-] |
| I mostly agree that it's an overreaction. However, "irrational" is a really bad choice of word. Every non-technical person understands that sometimes AI says wrong things - like, random, crazy wrong things, not just a little off. It's just a general rule kept in the back of the mind. Food is easily in that realm of "be careful". Did the AI produce a recipe that would be harmful to you and the cook didn't notice? Almost certainly not. So, sure, they were being over-cautious. But "irrational"? No, no, no. It's definitely rational. Look at what you're writing. "Doing X is so clearly irrational that I chuckled a bit." Please don't perpetuate the image of the elitist techie. That is what was just firebombed. |
| |
| ▲ | s1artibartfast 4 hours ago | parent [-] | | There is almost nothing seriously dangerous about food, particularly everyday food. There are a handful of niche things that are seriously dangerous, like fugu or poisonous mushrooms that require special preparation. I think this says more about how neurotic and paranoid people are. |
|